Quote for the day:
"Leadership is the capacity to translate
vision into reality." -- Warren Bennis

“This is insane,” Harris told Maher, stressing that companies are releasing the
most “powerful, uncontrollable, and inscrutable technology” ever invented — and
doing so under intense pressure to cut corners on safety. The self-preservation
behaviors include rewriting code to extend the genAI’s run time, escaping
containment, and finding backdoors in infrastructure. In one case, a model found
15 new backdoors into open-source infrastructure software that it used to
replicate itself and remain “alive.” “It wasn’t until about a month ago that
that evidence came out,” Harris said. “So, when stuff we see in the movies
starts to come true, what should we be doing about this?” ... “The same
technology unlocking exponential growth is already causing reputational and
business damage to companies and leadership that underestimate its risks. Tech
CEOs must decide what guardrails they will use when automating with AI,” Gartner
said. Gartner recommends that organizations using genAI tools establish
transparency checkpoints to allow humans to access, assess, and verify AI
agent-to-agent communication and business processes. Also, companies need to
implement predefined human “circuit breakers” to prevent AI from gaining
unchecked control or causing a series of cascading errors.
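Gartner's "circuit breaker" recommendation can be sketched in code. The example below is purely illustrative (the class and parameter names are not from any real product): a wrapper that blocks predefined sensitive actions without explicit human sign-off, and trips after repeated failures so cascading errors stop until a human resets it.

```python
# Sketch of a human "circuit breaker" for AI agent actions, in the spirit of
# Gartner's recommendation. All names (AgentCircuitBreaker, max_failures,
# the action labels) are hypothetical, not from any real framework.

class CircuitOpenError(Exception):
    """Raised when the breaker has tripped and a human must intervene."""

class AgentCircuitBreaker:
    def __init__(self, max_failures=3,
                 sensitive_actions=frozenset({"transfer_funds", "delete_data"})):
        self.max_failures = max_failures
        self.sensitive_actions = sensitive_actions
        self.failures = 0
        self.tripped = False

    def execute(self, action, handler, *args, human_approved=False):
        if self.tripped:
            raise CircuitOpenError("Breaker open: human review required")
        # Predefined sensitive actions always require explicit human sign-off.
        if action in self.sensitive_actions and not human_approved:
            raise CircuitOpenError(f"{action!r} needs human approval")
        try:
            return handler(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.tripped = True  # halt before errors cascade further
            raise

    def human_reset(self):
        """Only a human operator clears the breaker."""
        self.failures = 0
        self.tripped = False
```

The key design point is that the reset path is reserved for humans: the agent cannot talk its way back into an open circuit.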

With significant workloads in the cloud, many specialists demand DLP in the
cloud. However, discussions often turn ambiguous when asked for clear
requirements – an immense project risk. The organization-specific setup, in
particular, detection rules and the traffic in scope, determines whether a DLP
solution reliably identifies and blocks sensitive data exfiltration attempts or
just monitors irrelevant data transfers. ... Network DLP inspects traffic from
laptops and servers, whether it originates from browsers, tools and
applications, or the command line. It also monitors PaaS services. However, all
traffic must go through a network component that the DLP can intercept,
typically a proxy. This is a limitation if remote workers do not go through a
company proxy, but it works for laptops in the company network and data
transfers originating from (cloud) VMs and PaaS services. ... Effective cloud
DLP implementation requires a tailored approach that addresses your
organization’s specific risk profile and technical landscape. By first
identifying which user groups and communication channels present the greatest
exfiltration risks, organizations can deploy the right combination of Email,
Endpoint, and Network DLP solutions.
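The point about detection rules determining whether a DLP solution catches real exfiltration or just noise can be made concrete. The sketch below is one illustrative content-inspection rule, not a production DLP engine: it flags payloads containing a plausible payment card number, using a Luhn checksum to discard most random digit runs and so cut irrelevant alerts.

```python
import re

# Illustrative DLP detection rule: flag outbound payloads containing a
# plausible payment card number. The Luhn checksum filters out most random
# 13-16 digit strings, reducing alerts on irrelevant data transfers.

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn check over a string of digits."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(payload: str) -> bool:
    for match in CARD_PATTERN.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False
```

A real deployment would combine many such rules, scope them to the traffic that actually matters for the organization, and tune them against observed false positives.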

From the developer’s perspective, multi-agent flows reshape their work by
distributing tasks across domain-specific agents. “It’s like working with a team
of helpful collaborators you can spin up instantly,” says Warp’s Lloyd. Imagine
building a new feature while, simultaneously, one agent summarizes a user log
and another handles repetitive code changes. “You can see the status of each
agent, jump in to review their output, or give them more direction as needed,”
adds Lloyd, noting that his team already works this way. ... As it stands today,
multi-agent processes are still quite nascent. “This area is still in its
infancy,” says Digital.ai’s To. Developers are incorporating generative AI in
their work, but as far as using multiple agents goes, most are just manually
arranging them in sequences. Roeck admits that a lot of manual work goes into
the aforementioned adversarial patterns. Updating system prompts and adding
security guardrails on a per-agent basis only compound the duplication. As such,
orchestrating the handshake between various agents will be important to reach a
net positive for productivity. Otherwise, copy-and-pasting prompts and outputs
across different chat UIs and IDEs will only make developers less efficient.
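The "handshake" the passage describes — passing structured output between domain-specific agents instead of copy-pasting across chat UIs — can be sketched as a minimal sequential pipeline. Everything here is hypothetical: in a real system each agent function would call a model, while the stubs below just stand in for those calls.

```python
# Hypothetical sketch of a sequential multi-agent pipeline. Each "agent" is a
# callable taking a shared context dict and returning an update; real agents
# would invoke an LLM, these stubs merely illustrate the handoff.

def summarizer_agent(ctx):
    log = ctx["user_log"]
    return {"summary": f"{len(log.splitlines())} log lines reviewed"}

def refactor_agent(ctx):
    return {"patch": f"renamed helper in {ctx['target_file']}"}

def review_agent(ctx):
    # The handshake: downstream agents see upstream outputs directly,
    # with no manual copy-pasting between tools.
    return {"report": f"{ctx['summary']}; {ctx['patch']}"}

def run_pipeline(agents, context):
    for agent in agents:
        update = agent(context)
        context.update(update)                    # structured handoff
        print(f"[{agent.__name__}] -> {update}")  # visibility into each step
    return context

result = run_pipeline(
    [summarizer_agent, refactor_agent, review_agent],
    {"user_log": "err: timeout\nok: retry", "target_file": "utils.py"},
)
```

The printed status line per agent mirrors the "see the status of each agent, jump in to review their output" workflow described above, in the simplest possible form.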

Organizations face several dangers when credentials are stolen, including
account takeovers, which allow threat actors to gain unauthorized access and
conduct phishing and financial scams. Attackers also use credentials to break
into other accounts. Cybersecurity firms point out that companies should
implement measures to protect digital identities, including the usual suspects
such as single sign-on (SSO) and multifactor authentication (MFA). But new
research also suggests that identity attacks are not always so easy to
recognize. ... “AI agents, chatbots, containers, IoT sensors – all of these have
credentials, permissions, and access rights,” says Moir. “And yet, 62 per cent
of organisations don’t even consider them as identities. That creates a huge,
unprotected surface.” As an identity security company, Cyberark has detected a
1,600 percent increase in machine identity-related attacks. At the same time,
62 percent of agencies and organizations still do not see machines as
identities, he adds. This is especially relevant for public agencies, as
hackers can gain
access to payments. Many agencies, however, have separated identity management
from cybersecurity. And while digital identity theft is rising, criminals are
also busy stealing our non-digital identities.

For enterprise technology leaders, the promise of productivity gains comes with
a sobering reality: these systems represent an entirely new attack surface that
most organizations aren’t prepared to defend. The researchers dedicate
substantial attention to what they diplomatically term “safety and privacy”
concerns, but the implications are more alarming than their academic language
suggests. “OS Agents are confronted with these risks, especially considering its
wide applications on personal devices with user data,” they write. The attack
methods they document read like a cybersecurity nightmare. “Web Indirect Prompt
Injection” allows malicious actors to embed hidden instructions in web pages
that can hijack an AI agent’s behavior. Even more concerning are “environmental
injection attacks” where seemingly innocuous web content can trick agents into
stealing user data or performing unauthorized actions. Consider the
implications: an AI agent with access to your corporate email, financial
systems, and customer databases could be manipulated by a carefully crafted web
page to exfiltrate sensitive information. Traditional security models, built
around human users who can spot obvious phishing attempts, break down when the
“user” is an AI system that processes information differently.
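One mitigation layer is screening fetched web content before it ever reaches the agent. The sketch below is a deliberately naive heuristic filter: the patterns are examples only, and pattern matching alone is nowhere near a complete defense against indirect prompt injection (provenance tracking, privilege separation, and output filtering would all be needed in practice).

```python
import re

# Illustrative heuristic screen for indirect prompt injection in fetched web
# content. These patterns are examples, not a robust defense: real mitigations
# also need content provenance, least-privilege agents, and output filtering.

INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?(previous|prior) instructions",
        r"you are now\b",
        r"system prompt",
        r"exfiltrate",
        r"send .* to (this|the following) (url|address)",
    )
]

def screen_web_content(text: str):
    """Return (is_suspicious, matched_patterns) for fetched content."""
    hits = [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
    return (bool(hits), hits)
```

Flagged content can then be quarantined for human review rather than handed straight to an agent that holds credentials for email, finance, or customer systems.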

Since the dawn of this profession, developers and engineers have been under
pressure to ship faster and deliver bigger projects. The business wants to
unlock a new revenue stream or respond to a new customer need — or even just get
something out faster than a competitor. With executives now enamored with
generative AI, that demand is starting to exceed all realistic expectations. As
Andrew Boyagi at Atlassian told StartupNews, this past year has been "companies
fixing the wrong problems, or fixing the right problems in the wrong way for
their developers." I couldn't agree more. ... This year, we've seen the rise of
a new term: "slopsquatting." It's the descendant of our good friend
typosquatting, and it involves malicious actors exploiting generative AI's
tendency to hallucinate package names by registering those fake names in public
repos like npm or PyPI. Slopsquatting is a variation on classic dependency chain
abuse. The threat actor hides malware in the upstream libraries from which
organizations pull open-source packages, and relies on insufficient controls or
warning mechanisms to allow that code to slip into production. ... The key is to
create automated policy enforcement at the package level. This creates a more
secure checkpoint for AI-assisted development, so no single person or team is
responsible for manually catching every vulnerability.
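Package-level policy enforcement against slopsquatting can be as simple as gating installs on a vetted list. The sketch below is a stand-in: the allowlist contents are hypothetical, and a real gate would sit in CI and query an internal mirror or registry metadata rather than a hard-coded set.

```python
# Sketch of package-level policy enforcement against slopsquatting. The
# allowlist here is a hypothetical stand-in; a real gate would sit in CI and
# consult an internal mirror or registry metadata as its source of truth.

APPROVED_PACKAGES = {"requests", "numpy", "pandas", "flask"}

def check_dependencies(requested):
    """Split requested package names into allowed and blocked lists."""
    allowed, blocked = [], []
    for name in requested:
        normalized = name.strip().lower()
        if normalized in APPROVED_PACKAGES:
            allowed.append(normalized)
        else:
            # Unknown name: possibly a hallucinated or typosquatted package.
            blocked.append(normalized)
    return allowed, blocked

def enforce(requested):
    """Fail the build if any requested package is off-policy."""
    allowed, blocked = check_dependencies(requested)
    if blocked:
        raise SystemExit(f"Policy violation, blocked packages: {blocked}")
    return allowed
```

Because the check runs automatically at the dependency boundary, no single person or team has to catch a near-miss name like `reqeusts` by eye.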

Security debt can be viewed as a sibling to technical debt. In both cases, teams
make intentional short-term compromises to move fast, betting they can "pay back
the principal plus interest" later. The longer that payback is deferred, the
steeper the interest rate becomes and the more painful the repayment. With
technical debt, the risk is usually visible — you may skip scalability work
today and lose a major customer tomorrow when the system can't handle their
load. Security debt follows the same economic logic, but its danger often lurks
beneath the surface: Vulnerabilities, misconfigurations, unpatched components,
and weak access controls accrue silently until an attacker exploits them. The
outcome can be just as devastating — data breaches, regulatory fines, or
reputational harm — yet the path to failure is harder to predict because
defenders rarely know exactly how or when an adversary will strike. In citizen
developer environments, this hidden interest compounds quickly, making proactive
governance and timely "repayments" essential. ... While addressing past debt,
also implement policy enforcement and security guardrails to prevent recurrence.
This might include discovering and monitoring new apps, performing automated
vulnerability assessments, and providing remediation guidance to application
owners.

In the race to appear cutting-edge, a growing number of companies are engaging
in what industry experts refer to as “AI washing”—a misleading marketing
strategy where businesses exaggerate or fabricate the capabilities of their
technologies by labelling them as “AI-powered.” At its core, AI washing involves
passing off basic automation, scripted workflows, or rudimentary algorithms as
sophisticated artificial intelligence. ... This trend has escalated to such
an extent that regulatory bodies are beginning to intervene. In the United
States, the Securities and Exchange Commission (SEC) has started scrutinizing
and taking action against public companies that make unsubstantiated AI-related
claims. The regulatory attention underscores the severity and widespread nature
of the issue. ... The fallout from AI washing is significant and growing.
On one hand, it erodes consumer and enterprise trust in the technology. Buyers
and decision-makers, once optimistic about AI’s potential, are now increasingly
wary of vendors’ claims. ... AI washing not only undermines innovation but
also raises ethical and compliance concerns. Companies that misrepresent their
technologies may face legal risks, brand damage, and loss of investor
confidence. More importantly, by focusing on marketing over substance, they
divert attention and resources away from responsible AI development grounded in
transparency, accountability, and actual performance.

Many cyber insurance providers provide free risk assessments for businesses, but
John Candillo, field CISO at CDW, recommends doing a little upfront work to
smooth out the process and avoid getting blindsided. “Insurers want to know how
your business looks from the outside looking in,” he says. “A focus on this
ahead of time can greatly improve your situation when it comes to who's willing
to underwrite your policy, but also what your premiums are going to be and how
you’re answering questionnaires.” Conducting an internal risk assessment and
engaging with cybersecurity ratings companies such as SecurityScorecard or
Bitsight can help SMBs be more informed policy shoppers. “If you understand what
the auditor is going to ask you and you're prepared for it, the results of the
audit are going to be way different than if you're caught off guard,” Candillo
says. These steps get stakeholders thinking about what type of risk requires
coverage. Cyber insurance can broadly be put into two categories. First-party
coverage will protect against things such as breach response costs, cyber
extortion costs, data-loss costs and business interruptions. Third-party
coverage insures against risks such as breach liabilities and regulatory
penalties.

What's harder to pin down is what's business-critical. These are the assets that
support the processes the business can't function without. They're not always
the loudest or most exposed. They're the ones tied to revenue, operations, and
delivery. If one goes down, it's more than a security issue ... Focus your
security resources on systems that, if compromised, would create actual business
disruption rather than just technical issues. Organizations that implemented
this targeted approach reduced remediation efforts by up to 96%. ... Integrate
business context into your security prioritization. When you know which systems
support core business functions, you can make decisions based on actual impact
rather than technical severity alone. ... Focus on choke points - the systems
attackers would likely pass through to reach business-critical assets. These
aren't always the most severe vulnerabilities but fixing them delivers the
highest return on effort. ... Frame security in terms of business risk
management to gain support from financial leadership. This approach has proven
essential for promoting initiatives and securing necessary budgets. ... When you
can connect security work to business outcomes, conversations with leadership
change fundamentally. It's no longer about technical metrics but about business
protection and continuity. ... Security excellence isn't about doing more - it's
about doing what matters.
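The advice to weight business context and choke-point position over raw technical severity can be expressed as a simple scoring function. The weights and record fields below are assumptions for illustration, not a standard; a real program would calibrate them against its own asset inventory.

```python
# Illustrative prioritization: rank vulnerabilities by business impact and
# attack-path position, not technical severity alone. Weights are assumptions.

def priority_score(vuln):
    score = vuln["cvss"]                # technical severity, 0-10
    if vuln.get("business_critical"):
        score *= 2.0                    # asset tied to revenue or operations
    if vuln.get("choke_point"):
        score *= 1.5                    # sits on a likely attack path
    return score

def triage(vulns):
    """Return vulnerabilities sorted by descending business-aware priority."""
    return sorted(vulns, key=priority_score, reverse=True)

findings = [
    {"id": "V1", "cvss": 9.8, "business_critical": False, "choke_point": False},
    {"id": "V2", "cvss": 6.5, "business_critical": True,  "choke_point": True},
    {"id": "V3", "cvss": 7.2, "business_critical": True,  "choke_point": False},
]
ranked = triage(findings)
```

Note how a medium-severity finding on a business-critical choke point (V2) outranks a critical-severity finding on a peripheral system (V1) — exactly the reordering the targeted approach above depends on.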