Quote for the day:
"Your worth consists in what you are and not in what you have." -- Thomas Edison
Protecting Your Software Supply Chain: Assessing the Risks Before Deployment

Given the vast number of third-party components used in modern IT, it's
unrealistic to scrutinize every software package equally. Instead, security
teams should prioritize their efforts based on business impact and attack
surface exposure. High-privilege applications that frequently communicate with
external services should undergo product security testing (PST), while lower-risk
applications can be assessed through automated or less resource-intensive
methods. Whether done before deployment or as a retrospective analysis, a
structured approach to PST ensures that organizations focus on securing the
most critical assets first while maintaining overall system integrity. ...
While Product Security Testing will never prevent a breach of a third party
outside your control, it does allow organizations to make informed
decisions about their defensive posture and response strategy. Many
organizations follow a standard process of identifying a need, selecting a
product, and deploying it without a deep security evaluation. This lack of
scrutiny can leave them scrambling to determine the impact when a supply chain
attack occurs. By incorporating PST into the decision-making process, security
teams gain critical documentation, including dependency mapping, threat
models, and specific mitigations tailored to the technology in use.
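
The risk-based prioritization described above can be prototyped as a simple scoring pass over a component inventory. The following is a minimal, hypothetical sketch (the fields, weights, and threshold are illustrative assumptions, not from the article): it ranks third-party components by privilege level and external exposure so the highest-risk ones are queued for full product security testing first.

```python
# Hypothetical sketch: rank third-party components for product security
# testing (PST) by business impact and attack-surface exposure.
# Field names, weights, and the PST threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Component:
    name: str
    privilege: int          # 1 = low, 3 = high (e.g., runs as root/admin)
    external_calls: bool    # communicates with services outside the org
    business_critical: bool


def risk_score(c: Component) -> int:
    score = c.privilege * 2
    score += 3 if c.external_calls else 0
    score += 2 if c.business_critical else 0
    return score


inventory = [
    Component("payment-gateway-sdk", privilege=3, external_calls=True, business_critical=True),
    Component("internal-logging-lib", privilege=1, external_calls=False, business_critical=False),
    Component("hr-portal-plugin", privilege=2, external_calls=True, business_critical=False),
]

# Components above the threshold get full PST; the rest go to automated scans.
for c in sorted(inventory, key=risk_score, reverse=True):
    tier = "full PST" if risk_score(c) >= 8 else "automated assessment"
    print(f"{c.name:25s} score={risk_score(c):2d} -> {tier}")
```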
Google’s latest genAI shift is a reminder to IT leaders — never trust vendor policy

Entities out there doing things you don’t like are always going to be able to
get generative AI (genAI) services and tools from somebody. You think large
terrorist cells can’t use their money to pay somebody to craft LLMs for them?
Even the most powerful enterprises can’t stop it from happening. But, that may
not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer and
Toyota and the rest of those heavy-hitters merely want to pick and choose
where their monies are spent. Big enterprises can’t stop AI from being used to
do things they don’t like, but they can make sure none of it is being funded
with their money. If they add a clause to every RFP that they will only work
with model-makers that agree to not do X, Y, or Z, that will get a lot of
attention. The contract would have to be realistic, though. It might say, for
instance, “If the model-maker later chooses to accept payments for the
above-described prohibited acts, they must reimburse all of the dollars we
have already paid and must also give us 18 months' notice so that we can
replace the vendor with a company that will respect the terms of our
contracts.” From the perspective of Google, along with Microsoft, OpenAI, IBM,
AWS and others, the idea is to take enterprise dollars on top of government
contracts.
Is Fine-Tuning or Prompt Engineering the Right Approach for AI?

It’s not just about having access to GPUs — it’s about getting the most out
of proprietary data with new tools that make fine-tuning easier. Here’s why
fine-tuning is gaining traction:
- Better results with proprietary data: Fine-tuning allows businesses to
  train models on their own data, making the AI much more accurate and
  relevant to their specific tasks. This leads to better outcomes and real
  business value.
- Easier than ever before: Tools like Hugging Face’s open source libraries,
  PyTorch and TensorFlow, along with cloud services, have made fine-tuning
  more accessible. These frameworks simplify the process, even for teams
  without deep AI expertise.
- Improved infrastructure: The rising availability of powerful GPUs and
  cloud-based solutions has made it much easier to set up and run
  fine-tuning at scale.

While fine-tuning opens the door to more customized AI, it does require
careful planning and the right infrastructure to succeed. ... As enterprises
accelerate their AI adoption, choosing between prompt engineering and
fine-tuning will have a significant impact on their success. While prompt
engineering provides a quick, cost-effective solution for general tasks,
fine-tuning unlocks the full potential of AI, enabling superior performance
on proprietary data.
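
For teams weighing the fine-tuning route, a first experiment with the Hugging Face stack mentioned above is genuinely small. Below is a minimal sketch, assuming the transformers and datasets libraries are installed and that the proprietary examples live in a local CSV with a "text" column; the base model, file name, and hyperparameters are placeholders, not a recommended production setup.

```python
# Minimal fine-tuning sketch with Hugging Face Transformers.
# Assumptions: `transformers` and `datasets` are installed, and proprietary
# text lives in proprietary_docs.csv with a "text" column (hypothetical file).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small base model chosen purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("csv", data_files="proprietary_docs.csv")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, many teams start with parameter-efficient approaches such as LoRA adapters (for example via the peft library) rather than full fine-tuning, which keeps GPU requirements closer to what prompt engineering already demands.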
Shifting left without slowing down

On the one hand, automation enabled by GenAI tools in software development
is driving unprecedented developer productivity, further emphasizing the gap
created by manual application security controls, like security reviews or
threat modeling. But in parallel, recent advancements in code understanding
enabled by these technologies, together with programmatic, policy-as-code
security controls, enable a giant leap in the value security automation can
bring. ... The first step is recognizing security as a shared responsibility
across the organization, not just a specialized function. Equipping teams
with automated tools and clear processes helps integrate security into
everyday workflows. Establishing measurable goals and metrics to track
progress can also provide direction and accountability. Building
cross-functional collaboration between security and development teams sets
the foundation for long-term success. ... A common pitfall is treating
security as an afterthought, leading to disruptions that strain teams and
delay releases. Conversely, overburdening developers with security
responsibilities without proper support can lead to frustration and neglect
of critical tasks. Failure to adopt automation or align security goals with
development objectives often results in inefficiency and poor
outcomes.
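
Policy-as-code in the wild usually means tools such as OPA/Rego or Checkov; purely to illustrate the idea referenced above, here is a hypothetical sketch that expresses a few security rules as code and fails a CI run when a generated service configuration violates them. The rule set, config keys, and file layout are assumptions, not anything the article specifies.

```python
# Hypothetical policy-as-code sketch: encode security rules as data, evaluate
# them against a service configuration, and exit non-zero so the CI pipeline
# fails on violations. Rules and config keys are illustrative assumptions.
import json
import sys

POLICIES = [
    ("tls_verification_enabled", lambda cfg: cfg.get("verify_tls", True),
     "TLS certificate verification must not be disabled"),
    ("no_public_admin_port", lambda cfg: cfg.get("admin_port_public") is not True,
     "Admin port must not be exposed publicly"),
    ("secrets_from_vault", lambda cfg: cfg.get("secrets_source") == "vault",
     "Secrets must come from the vault, not plaintext config"),
]


def evaluate(config: dict) -> list[str]:
    """Return the messages of all violated policies."""
    return [msg for _name, check, msg in POLICIES if not check(config)]


if __name__ == "__main__":
    # e.g. a service-config.json emitted by the build (hypothetical path)
    with open(sys.argv[1]) as f:
        config = json.load(f)
    violations = evaluate(config)
    for v in violations:
        print(f"POLICY VIOLATION: {v}")
    sys.exit(1 if violations else 0)
```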
How To Approach API Security Amid Increasing Automated Attack Sophistication

We’ve now gone from ‘dumb’ attacks—for example, web-based attacks focused on
extracting data from third parties and on a specific or single
vulnerability—to ‘smart’ AI-driven attacks often involving picking an actual
target, resulting in a more focused attack. Going after a particular
organization, perhaps a large organization or even a nation-state, instead of
looking for vulnerable people is a significant shift. The sophistication is
increasing as attackers manipulate request payloads to trick the backend
system into an action. ... Another element of API security is being aware of
sensitive data. Personally Identifiable Information (PII) moves through APIs
constantly and is vulnerable to theft or exfiltration. Organizations often do
not pay close attention to vulnerabilities, but they do pay attention when the
result is damage to their organization through leaked PII, stolen finances, or
a tarnished brand reputation. ... The security teams know the network systems and the
infrastructure well but don't understand the application behaviors. The DevOps
team tends to own the applications but doesn’t see anything in production.
This split in ownership leaves a boundary in most organizations that is ripe
for exploitation. Many data exfiltration cases fall into this no man's land,
since most incidents are carried out by an authenticated user.
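
A practical first step toward the PII awareness described above is scanning outbound API payloads for obvious patterns before they cross the boundary. The sketch below is a hypothetical illustration rather than any vendor's product; the regexes and the sample payload are assumptions and would miss plenty of real-world PII.

```python
# Hypothetical sketch: flag likely PII in an outbound API response payload.
# The patterns (email, US SSN-like, card-like numbers) are illustrative
# assumptions and far from exhaustive.
import json
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def find_pii(payload: dict) -> list[tuple[str, str]]:
    """Return (field_path, pattern_name) pairs for values that look like PII."""
    hits = []

    def walk(value, path):
        if isinstance(value, dict):
            for k, v in value.items():
                walk(v, f"{path}.{k}")
        elif isinstance(value, list):
            for i, v in enumerate(value):
                walk(v, f"{path}[{i}]")
        elif isinstance(value, str):
            for name, pattern in PII_PATTERNS.items():
                if pattern.search(value):
                    hits.append((path, name))

    walk(payload, "$")
    return hits


response = json.loads('{"user": {"email": "jane@example.com", "note": "ok"}}')
for path, kind in find_pii(response):
    print(f"possible {kind} at {path}")  # e.g. possible email at $.user.email
```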
Top 5 ways attackers use generative AI to exploit your systems

Gen AI tools help criminals pull together different sources of data to
enrich their campaigns — whether this is group social profiling, or targeted
information gleaned from social media. “AI can be used to quickly learn what
types of emails are being rejected or opened, and in turn modify its
approach to increase phishing success rate,” Mindgard’s Garraghan explains.
... The traditionally difficult task of analyzing systems for
vulnerabilities and developing exploits can be simplified through use of gen
AI technologies. “Instead of a black hat hacker spending the time to probe
and perform reconnaissance against a system perimeter, an AI agent can be
tasked to do this automatically,” Mindgard’s Garraghan says. ... “This
sharp decrease strongly indicates that a major technological advancement —
likely GenAI — is enabling threat actors to exploit vulnerabilities at
unprecedented speeds,” ReliaQuest writes. ... Check Point Research explains:
“While ChatGPT has invested substantially in anti-abuse provisions over the
last two years, these newer models appear to offer little resistance to
misuse, thereby attracting a surge of interest from different levels of
attackers, especially the low skilled ones — individuals who exploit
existing scripts or tools without a deep understanding of the underlying
technology.”
Why firewalls and VPNs give you a false sense of security

VPNs and firewalls play a crucial role in extending networks, but they also
come with risks. By connecting more users, devices, locations, and clouds,
they inadvertently expand the attack surface with public IP addresses. This
expansion allows users to work remotely from anywhere with an internet
connection, further stretching the network’s reach. Moreover, the rise of IoT
devices has led to a surge in Wi-Fi access points within this extended
network. Even seemingly innocuous devices like Wi-Fi-connected espresso
machines, meant for a quick post-lunch pick-me-up, contribute to the
proliferation of new attack vectors that cybercriminals can exploit. ... More
doesn’t mean better when it comes to firewalls and VPNs. Expanding a
perimeter-based security architecture rooted in firewalls and VPNs means more
deployments, more overhead costs, and more time wasted for IT teams – but less
security and less peace of mind. Pain also comes in the form of degraded user
experience and satisfaction across the entire organization, because VPN
technology backhauls traffic. Other challenges like the cost and complexity of
patch management, security updates, software upgrades, and constantly
refreshing aging equipment as an organization grows are enough to exhaust even
the largest and most efficient IT teams.
Building Trust in AI: Security and Risks in Highly Regulated Industries
AI hallucinations have emerged as a critical problem, with systems generating
plausible but incorrect information - for instance, AI has fabricated software
dependencies, such as PyTorture, creating potential security risks. Hackers
could exploit these hallucinations by creating malicious components
masquerading as real ones. In another case, an AI fabricated a libelous
embezzlement claim, resulting in legal action - marking the first time an AI
vendor was sued for libel over its model's output. Security remains a pressing
concern, particularly with plugins
and software supply chains. A ChatGPT plugin once exposed sensitive data due
to a flaw in its OAuth mechanism, and incidents like PyTorch’s vulnerable
release over Christmas demonstrate the risks of system exploitation. Supply
chain vulnerabilities affect all technologies, while AI-specific threats like
prompt injection allow attackers to manipulate outputs or access sensitive
prompts, as seen in Google Gemini. ... Organizations can enhance their
security strategies by utilizing frameworks like Google’s Secure AI Framework
(SAIF). These frameworks highlight security principles, including access
control, detection and response systems, defense mechanisms, and risk-aware
processes tailored to meet specific business needs.
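
One concrete mitigation for the hallucinated-dependency risk described above is to verify that an AI-suggested package actually exists in the public index, and to review its metadata, before installing it. The sketch below is a hypothetical illustration using PyPI's public JSON endpoint; it only catches outright fabrications, not typosquatted or malicious look-alike packages.

```python
# Hypothetical sketch: before installing an AI-suggested dependency, check
# that it actually exists on PyPI and surface basic metadata for review.
# A missing package is a strong hint of a hallucinated (or mistyped) name.
import json
import sys
import urllib.error
import urllib.request


def package_exists(name: str) -> bool:
    """Return True if `name` resolves to a real project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            info = json.load(resp).get("info", {})
    except urllib.error.HTTPError:
        return False
    # Surface provenance signals for a human reviewer rather than auto-trusting.
    print(f"  summary: {info.get('summary') or 'n/a'}")
    print(f"  home page: {info.get('home_page') or 'n/a'}")
    return True


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        print(f"{pkg}:")
        if not package_exists(pkg):
            print("  NOT FOUND on PyPI -- possible hallucinated dependency")
```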
When LLMs become influencers

Our ability to influence LLMs is seriously circumscribed. Perhaps if you’re
the owner of the LLM and associated tool, you can exert outsized influence on
its output. For example, AWS should be able to train Amazon Q to answer
questions, etc., related to AWS services. There’s an open question as to
whether Q would be “biased” toward AWS services, but that’s almost a secondary
concern. Maybe it steers a developer toward Amazon ElastiCache and away from
Redis, simply by virtue of having more and better documentation and
information to offer. The primary concern is ensuring these tools
have enough good training data so they don’t lead developers astray. ... Well,
one option is simply to publish benchmarks. The LLM vendors will ultimately
have to improve their output or developers will turn to other tools that
consistently yield better results. If you’re an open source project,
commercial vendor, or someone else that increasingly relies on LLMs as
knowledge intermediaries, you should regularly publish results that showcase
those LLMs that do well and those that don’t. Benchmarking can help move the
industry forward. By extension, if you’re a developer who increasingly relies
on coding assistants like GitHub Copilot or Amazon Q, be vocal about your
experiences, both positive and negative.
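
One lightweight way to act on the benchmarking suggestion above is to keep a small harness that scores each assistant's answers against a question set with known-good outcomes, and to publish the resulting table regularly. The sketch below is hypothetical: the model names, questions, canned answers, and grading heuristic are placeholders, and a real harness would call each vendor's API and use a far stronger correctness check.

```python
# Hypothetical benchmark harness sketch: score coding assistants on a shared
# question set and print a small results table worth publishing.
MODELS = ("assistant-a", "assistant-b")

QUESTIONS = [
    {"prompt": "How do I add a read replica to a managed Redis-compatible cache?",
     "expected_keywords": ["replica", "replication group"]},
    {"prompt": "How should a service rotate its database credentials?",
     "expected_keywords": ["rotate", "secret"]},
]


def ask_model(model: str, prompt: str) -> str:
    """Placeholder: a real harness would call the assistant's API here."""
    canned = {
        "assistant-a": "Create a replication group, then add a read replica.",
        "assistant-b": "Restart the server and hope for the best.",
    }
    return canned.get(model, "")


def is_correct(answer: str, expected_keywords: list[str]) -> bool:
    """Crude heuristic: every expected keyword appears in the answer."""
    return all(k.lower() in answer.lower() for k in expected_keywords)


scores = {m: 0 for m in MODELS}
for model in MODELS:
    for q in QUESTIONS:
        if is_correct(ask_model(model, q["prompt"]), q["expected_keywords"]):
            scores[model] += 1

for model, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model}: {score}/{len(QUESTIONS)} correct")
```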
Deepfakes: How Deep Can They Go?

Metaphorically, spotting deepfakes is like playing the world’s most
challenging game of “spot the difference.” The fakes have become so
sophisticated that the inconsistencies are often nearly invisible, especially
to the untrained eye. It requires constant vigilance and the ability to
question the authenticity of audiovisual content, even when it looks or sounds
completely convincing. Recognizing threats and taking decisive actions are
crucial for mitigating the effects of an attack. Establishing well-defined
policies, reporting channels, and response workflows in advance is imperative.
Think of it like a citywide defense system responding to incoming missiles.
Early warning radars (monitoring) are necessary to detect the threat;
anti-missile batteries (AI scanning) are needed to neutralize it; and
emergency services (incident response) are essential to quickly handle any
impacts. Each layer works in concert to mitigate harm. ... If a deepfake
attack succeeds, organizations should immediately notify stakeholders of the
fake content, issue corrective statements, and coordinate efforts to remove
the offending content. They should also investigate the source, implement
additional verification measures, provide updates to rebuild trust, and
consider legal action.