Quote for the day:
"In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it." -- Jane Smiley
Crooks are hijacking and reselling AI infrastructure: Report
In a report released Wednesday, researchers at Pillar Security say they have
discovered campaigns at scale going after exposed large language model (LLM)
and Model Context Protocol (MCP) endpoints – for example, an AI-powered
support chatbot on a website. “I
think it’s alarming,” said report co-author Ariel Fogel. “What we’ve discovered
is an actual criminal network where people are trying to steal your credentials,
steal your ability to use LLMs and your computations, and then resell it.” ...
How big are these campaigns? In the past couple of weeks alone, the researchers’
honeypots captured 35,000 attack sessions hunting for exposed AI infrastructure.
“This isn’t a one-off attack,” Fogel added. “It’s a business.” He doubts a
nation-state is behind it; the campaigns appear to be run by a small group. ...
Defenders need to treat AI services with the same rigor as APIs or databases, he
said, starting with authentication, telemetry, and threat modelling early in the
development cycle. “As MCP becomes foundational to modern AI integrations,
securing those protocol interfaces, not just model access, must be a priority,”
he said. ... Despite the number of news stories in the past year about AI
vulnerabilities, Meghu said the answer is not to give up on AI, but to keep
strict controls on its usage. “Do not just ban it, bring it into the light and
help your users understand the risk, as well as work on ways for them to use
AI/LLM in a safe way that benefits the business,” he advised.
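As a concrete illustration of that advice, here is a minimal sketch of
bearer-token authentication plus basic request logging in front of an HTTP
endpoint fronting an LLM or MCP service, using Flask. The route, the
MCP_API_TOKEN variable, and the handler are illustrative assumptions, not
details from the report.

import hmac
import logging
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

# Assumption: the shared token is provisioned out of band, never hard-coded.
API_TOKEN = os.environ["MCP_API_TOKEN"]

@app.before_request
def require_token():
    # Reject unauthenticated traffic before it reaches the model endpoint,
    # and log the attempt so scanning campaigns show up in telemetry.
    supplied = request.headers.get("Authorization", "")
    expected = f"Bearer {API_TOKEN}"
    if not hmac.compare_digest(supplied.encode(), expected.encode()):
        logging.warning("rejected %s %s from %s",
                        request.method, request.path, request.remote_addr)
        abort(401)

@app.route("/mcp/tools", methods=["POST"])
def tools():
    # Placeholder for a real MCP tool-call handler.
    return jsonify({"status": "ok"})
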
Here's the uncomfortable truth: AI is both causing and solving the same
problem. A Snyk survey from early 2024 found that 77% of technology leaders
believe AI gives them a competitive advantage in development speed. That's
great for quarterly demos and investor decks. It's less great when you realize
that faster code production means exponentially more code to secure, and most
organizations haven't figured out how to scale their security practice at the
same rate. ... Don't try to AI-ify your entire security stack at once. Pick
one high-pain problem — maybe it's the backlog of static analysis findings
nobody has time to triage, or maybe it's spotting secrets accidentally
committed to repos — and deploy a focused tool that solves just that problem.
Learn how it behaves. Understand its failure modes. Then expand. ... This is
non-negotiable, at least for now. AI should flag, suggest, and prioritize. It
should not auto-merge security fixes or automatically block deployments
without human confirmation. I've seen two different incidents in the past year
where an overzealous ML system blocked a critical hotfix because it
misclassified a legitimate code pattern as suspicious. Both cases were
resolved within hours, but both caused real business impact. The right mental
model is "AI as junior analyst." ... You need clear policies around which AI
tools are approved for use, who owns their output, and how to handle
disagreements between human judgment and AI recommendations.
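To make the "pick one high-pain problem" advice concrete, here is a minimal
sketch of the second example mentioned above: a focused git pre-commit hook
that scans staged files for likely secrets. The patterns and exit-code
convention are illustrative assumptions, not tooling from the article.

#!/usr/bin/env python3
# Illustrative pre-commit hook: scan staged files for likely secrets before
# they reach the repo. Patterns and thresholds are examples, not a product.
import re
import subprocess
import sys

# A few common high-signal patterns (AWS key IDs, private key headers,
# generic "api_key = ..." assignments). Real tools ship far larger sets.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
]

def staged_files() -> list[str]:
    # Only files that are added, copied, or modified in this commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f]

def main() -> int:
    hits = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for pat in PATTERNS:
            if pat.search(text):
                hits.append((path, pat.pattern))
    for path, pattern in hits:
        print(f"possible secret in {path}: /{pattern}/", file=sys.stderr)
    return 1 if hits else 0   # non-zero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())

Note this tool only flags and blocks at the developer's own machine; consistent
with the "AI as junior analyst" framing, a human still decides what ships.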
AI & the Death of Accuracy: What It Means for Zero-Trust
The basic idea is that as the signal quality degrades over time through junk
training data, models can remain fluent and fully interact with the user while
becoming less reliable. From a security standpoint, this can be dangerous, as
AI models are positioned to generate confident-yet-plausible errors when it
comes to code reviews, patch recommendations, app coding, security triaging,
and other tasks. More critically, model degradation can erode and misalign
system guardrails, giving attackers the opportunity to exploit the opening
through things like prompt injection. ... "Most enterprises are not training
frontier LLMs from scratch, but they are increasingly building workflows that
can create self-reinforcing data stores, like internal knowledge bases, that
accumulate AI-generated text, summaries, and tickets over time," she tells
Dark Reading. ... Gartner said that to combat the potentially impending issue
of model degradation, organizations will need a way to identify and tag
AI-generated data. This could be addressed through active metadata practices
(such as establishing real-time alerts for when data may require
recertification) and potentially appointing a governance leader who knows how
to responsibly work with AI-generated content. ... Kelley argues that there
are pragmatic ways to "save the signal," namely through prioritizing
continuous model behavior evaluation and governing training data.
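A minimal sketch of what such tagging could look like in practice, assuming a
simple internal knowledge base that ingests LLM output: each record is stamped
with provenance metadata and a content hash so AI-generated text can be
identified, audited, or excluded from future training later. Field names are
illustrative assumptions.

# Illustrative only: tag knowledge-base entries with provenance metadata so
# AI-generated content can be found, audited, or filtered out downstream.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class KBEntry:
    text: str
    source: str                      # "human" or "ai-generated"
    model: str | None = None         # which model produced it, if any
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def record(self) -> dict:
        # A content hash lets you detect recirculated AI text verbatim.
        meta = asdict(self)
        meta["sha256"] = hashlib.sha256(self.text.encode()).hexdigest()
        return meta

# Usage: everything written back by an LLM workflow is tagged at ingest time.
entry = KBEntry(text="Summary of ticket #4821 ...",
                source="ai-generated", model="gpt-4o")
print(json.dumps(entry.record(), indent=2))
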
The Friction Fix: Change What Matters
From devops to CTO: 8 things to start doing now
Devops leaders have the opportunity to make a difference in their organization
and for their careers. Lead a successful AI initiative, deploy to production,
deliver business value, and share best practices for other teams to follow.
Successful devops leaders don’t jump on the easy opportunities; they look for
the ones that can have a significant business impact. ... Another area where
devops engineers can demonstrate leadership skills is by establishing standards
for applying genAI tools throughout the software development lifecycle (SDLC).
Advanced tools and capabilities require effective strategies to extend best
practices beyond early adopters and ensure that multiple teams succeed. ... If
you want to be recognized for promotions and greater responsibilities, a place
to start is in your areas of expertise and with your team, peers, and technology
leaders. However, shift your focus from getting things done to a
practice-leadership mindset. Develop a practice or platform your team and
colleagues want
to use and demonstrate its benefits to the organization. Devops engineers can
position themselves for a leadership role by focusing on initiatives that
deliver business value. ... One of the hardest mindset transitions for CTOs is
shifting from being the technology expert and go-to problem-solver to becoming a
leader facilitating the conversation about possible technology implementations.
If you want to be a CTO, learn to take a step back to see the big picture and
engage the team in recommending technology solutions.
The stakes rise for the CIO role in 2026
The CIO's days as back-office custodian of IT are long gone, to be sure, but
that doesn't mean the role is settled. Indeed, Seewald and others see plenty of
changes still underway. In 2026, the CIO's role in shaping how the business
operates and performs is still expanding. It reflects a nuanced change in
expectations, according to longtime CIOs, analysts and IT advisors -- and one
that is showing up in many ways as CIOs become more directly involved in nailing
down competitive advantage and strategic success across their organizations. ...
"While these core responsibilities remain the same, the environment in which
CIOs operate has become far more complex," Tanowitz added. Conal Gallagher, CIO
and CISO at Flexera, said the CIO in 2026 is now "accountable for outcomes:
trusted data, controlled spend, managed risk and measurable productivity." "The
deliverable isn't a project plan," Gallagher said. "It's proof that the business
runs faster, safer and more cost-disciplined because of the operating model IT
enables." ... In 2026, the CIO role is less about being the technology owner and
more about being a business integrator, Hoang said. At Commvault, that shift
places greater emphasis on governance and orchestration across ecosystems.
"We're operating in a multicloud, multivendor, AI-infused environment," she
said. "A big part of my job is building guardrails and partnerships that enable
others to move fast -- safely," she said.
Inside the Shift to High-Density, AI-Ready Data Centres
As density increases, design philosophy must evolve. Power infrastructure,
backup systems, and cooling can no longer be treated as independent layers;
they have to be tightly integrated. Our facilities use modular and scalable
power and cooling architectures that allow us to expand capacity without
disrupting live environments. Rated-4 resilience is non-negotiable, even under
continuous, high-density AI workloads. The real focus is flexibility. Customers
shouldn’t be forced into an all-or-nothing transition. Our approach allows them
to move gradually to higher densities while preserving uptime, efficiency, and
performance. High-density AI infrastructure is less about brute force and more
about disciplined engineering that sustains reliability at scale. ... The most
common misconception is that AI data centres are fundamentally different
entities. While AI workloads do increase density, power, and cooling demands,
the core principles of reliability, uptime, and efficiency remain unchanged. AI
readiness is not about branding; it’s about engineering and operations.
Supporting AI workloads requires scalable and resilient power delivery,
precision cooling, and flexible designs that can handle GPUs and accelerators
efficiently over sustained periods. Simply adding more compute without
addressing these fundamentals leads to inefficiency and risk. The focus must
remain on mission-critical resilience, cost-effective energy management, and
sustainability.
Software Supply Chain Threats Are on the OWASP Top Ten—Yet Nothing Will Change Unless We Do
As organizations deepen their reliance on open-source components and embrace
AI-enabled development, software supply chain risks will become more prevalent.
In the OWASP survey, 50% of respondents ranked software supply chain failures
number one. The awareness is there. Now the pressure is on for software
manufacturers to enhance software transparency, making supply chain attacks far
less likely and less damaging. ... Attackers only need one forgotten open-source
component from 2014 that still lives quietly inside software to execute a
widespread attack. The ability to cause widespread damage by targeting the
software supply chain makes these vulnerabilities alluring for attackers. Why
break into a hardened product when one outdated dependency—often buried several
layers down—opens the door with far less effort? The SolarWinds software supply
chain attack that took place in 2020 demonstrated the access adversaries gain
when they hijack the build process itself. ... “Stable” legacy components often
go uninspected for years. These aging libraries, firmware blocks, and
third-party binaries frequently contain memory-unsafe constructs and unpatched
vulnerabilities that could be exploited. Be sure to review legacy code and not
give it the benefit of the doubt. ... With an SBOM in hand, generated at every
build, you can scan software for vulnerabilities and remediate issues before
they are exploited.
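As a sketch of that last step, the snippet below enumerates components from a
CycloneDX-format SBOM (the JSON format tools such as Syft can emit at build
time) so they can be fed to a vulnerability scanner. The file name is a
placeholder, and the fields shown assume a typical CycloneDX document.

# Illustrative: enumerate components from a CycloneDX SBOM produced at build
# time, as a starting point for vulnerability scanning and remediation.
import json

def components(sbom_path: str):
    with open(sbom_path, encoding="utf-8") as fh:
        sbom = json.load(fh)
    # CycloneDX JSON keeps dependencies in a top-level "components" array.
    for comp in sbom.get("components", []):
        yield (comp.get("name", "?"),
               comp.get("version", "?"),
               comp.get("purl", ""))

if __name__ == "__main__":
    # "sbom.json" is a placeholder for the SBOM emitted by your build.
    for name, version, purl in components("sbom.json"):
        print(f"{name}=={version}  {purl}")
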
What the first 24 hours of a cyber incident should look like
When a security advisory is published, the first question is whether any assets
are potentially exposed. In the past, a vendor’s claim of exploitation may have
sufficed. Given the precedent set over the past year, it is unwise to rely
solely on a vendor advisory for exploited-in-the-wild status. Too often,
advisories or exploitation confirmations reach teams too late or without the
context needed to prioritise the response. CISA’s Known Exploited
Vulnerabilities (KEV) catalog, trusted third-party
publications, and vulnerability researchers should form the foundation of any
remediation programme. ... Many organisations will leverage their incident
response (IR) retainers to assess the extent of the compromise or, at a minimum,
perform a rudimentary threat hunt for indicators of compromise (IoCs) before
involving the IR team. As with the first step, accurate, high-fidelity
intelligence is critical. Simply downloading IoC lists filled with dual-use
tools from social media will generate noise and likely lead to inaccurate
conclusions. Arguably, the cornerstone of the initial assessment is ensuring
that intelligence incorporates decay scoring to validate command-and-control
(C2) infrastructure. For many, the term ‘threat hunt’ translates to little more
than a log search on external gateways. ... The approach at this stage will be
dependent on the results of the previous assessments. There is no default
playbook here; however, an established decision framework that dictates how a
company reacts is key.
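A minimal sketch of the decay-scoring idea mentioned above: confidence in an
indicator decays with time since it was last observed, so stale C2
infrastructure falls below the hunt threshold instead of generating noise. The
half-life and threshold values are illustrative assumptions, not a standard.

# Illustrative decay scoring for IoCs: confidence halves every HALF_LIFE_DAYS
# since the indicator was last seen, so aging C2 infrastructure drops out of
# the hunt rather than producing false positives.
import math
from datetime import datetime, timedelta, timezone

HALF_LIFE_DAYS = 14      # assumption: tune to your intel source
THRESHOLD = 0.4          # assumption: minimum score worth hunting on

def decayed_score(base_confidence: float, last_seen: datetime) -> float:
    age_days = (datetime.now(timezone.utc) - last_seen).total_seconds() / 86400
    # Exponential decay: 0.5 ** (age / half_life), scaled by base confidence.
    return base_confidence * math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)

# Usage: an IoC reported with 0.9 confidence but last seen 45 days ago scores
# roughly 0.1 and would be excluded from the hunt.
ioc_last_seen = datetime.now(timezone.utc) - timedelta(days=45)
score = decayed_score(0.9, ioc_last_seen)
print(f"score={score:.2f}, hunt={'yes' if score >= THRESHOLD else 'no'}")
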
NIST’s AI guidance pushes cybersecurity boundaries
For CISOs, what should matter is that NIST is shifting from a broad,
principle-based AI risk management framework toward more operationally grounded
expectations, especially for systems that act without constant human oversight.
What is emerging across NIST’s AI-related cybersecurity work is a recognition
that AI is no longer a distant or abstract governance issue, but a near-term
security problem that the nation’s standards-setting body is trying to tackle in
a multifaceted way. ... NIST’s instinct to frame AI as an extension of
traditional software allows organizations to reuse familiar concepts — risk
assessment, access control, logging, defense in depth — rather than starting
from zero. Workshop participants repeatedly emphasized that many controls
do transfer, at least in principle. But some experts argue that the analogy
breaks down quickly in practice. AI systems behave probabilistically, not
deterministically, they say. Their outputs depend on data that may change
continuously after deployment. And in the case of agents, they may take actions
that were not explicitly scripted in advance. ... “If you were a consumer of all
of these documents, it was very difficult for you to look at them and understand
how they relate to what you are doing and also understand how to identify where
two documents may be talking about the same thing and where they overlap.”