Quote for the day:
“Remember, teamwork begins by building
trust. And the only way to do that is to overcome our need for
invulnerability.” -- Patrick Lencioni

The U.S. Environmental Protection Agency (EPA) identified 97 drinking water
systems serving approximately 26.6 million users as having either critical or
high-risk cybersecurity vulnerabilities. Water utility leaders are especially
worried about ransomware, malware, and phishing attacks. American Water, the
largest water and wastewater utility company in the US, experienced a
cybersecurity incident that forced the company to shut down some of its systems.
That came shortly after a similar incident forced Arkansas City’s water
treatment facility to temporarily switch to manual operations. These attacks are
not limited to the US. Recently, UK-based Southern Water admitted that criminals
had breached its IT systems. In Denmark, hackers targeted the consumer data
services of water provider Fanø Vand, resulting in data theft and hijacked operations. These incidents show that this is a global risk, and authorities believe
they may be the work of foreign actors. ... The EU is taking a serious approach
to cybersecurity, with stricter enforcement and long-term investment in
essential services. Through the NIS2 Directive, member states are required to
follow security standards, report incidents, and coordinate national oversight.
These steps are designed to help utilities strengthen their defenses and improve
resilience.

Cheap, off-the-shelf language models are erasing the technical hurdles. FraudGPT
and WormGPT subscriptions start at roughly $200 per month, promising
‘undetectable’ malware, flawless spear-phishing prose, and step-by-step exploit
guidance. An aspiring criminal no longer needs the technical knowledge to tweak
GitHub proof-of-concepts. They paste a prompt such as ‘Write a PowerShell loader
that evades EDR’ and receive usable code in seconds. ... Researchers pushed the
envelope further with ReaperAI and AutoAttacker, proof-of-concept ‘agentic’
systems that chain LLM reasoning with vulnerability scanners and exploit
libraries. In controlled tests, they breached outdated Web servers, deployed
ransomware, and negotiated payment over Tor, without human input once launched.
Fully automated cyberattacks are just around the corner. ... Core defensive
practice now revolves around four themes. First, reducing the attack surface through relentless automated patching. Second, assuming breach, with Zero-Trust segmentation and immutable offline backups that neuter double-extortion leverage. Third, hardening identity with universal, phishing-resistant multi-factor authentication (MFA). Finally, exercising incident-response plans with tabletop and red-team drills that mirror AI-assisted adversaries.
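
A minimal sketch of the second theme, assuming backups land in an Amazon S3 bucket with Object Lock enabled (the bucket name and 30-day window are illustrative):

    import boto3

    s3 = boto3.client("s3")

    # Object Lock must be enabled when the bucket is created.
    # (Region configuration is omitted for brevity.)
    s3.create_bucket(
        Bucket="backups-immutable",  # hypothetical bucket name
        ObjectLockEnabledForBucket=True,
    )

    # COMPLIANCE mode means no user, including root, can shorten or
    # remove the retention window, which is what neuters a
    # double-extortion attempt to purge the backups.
    s3.put_object_lock_configuration(
        Bucket="backups-immutable",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
        },
    )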

NVIDIA Omniverse on Azure lets developers build advanced simulation and generative AI capabilities and integrate them seamlessly into existing 3D workflows. This cloud-based platform includes APIs and services that make it easy to integrate OpenUSD, as well as sensor and rendering applications. OpenUSD’s capabilities accelerate the workflows of teams and projects creating 3D assets and environments for large-scale, AI-enabled virtual worlds. The Omniverse
Development Workstation on Azure accelerates the process of building Omniverse
apps and tools, removing the time and complexity of configuring individual
software packages and GPU drivers. With NVIDIA Omniverse on Azure and OpenUSD,
marketing teams can create ultra-realistic 3D product previews and environments
so that customers can explore a retailer’s products in an engaging and
informative way. The platform can also deliver immersive augmented and virtual
reality experiences for customers, such as virtually test-driving a car or
seeing how new furniture pieces would look in an existing space. For retailers,
NVIDIA Omniverse can help create digital twins of stores or in-store displays to
simulate and evaluate different layouts to optimize how customers interact with
them.
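
For a concrete taste of the OpenUSD layer the platform builds on, here is a minimal sketch using the open-source pxr Python bindings; the file path and prim names are illustrative, and a real Omniverse scene would be far richer:

    from pxr import Usd, UsdGeom, Gf

    # Create a USD stage: the shared scene description that Omniverse
    # tools and renderers can read and edit collaboratively.
    stage = Usd.Stage.CreateNew("product_preview.usda")  # illustrative path

    # A root transform plus one placeholder in-store display prop.
    root = UsdGeom.Xform.Define(stage, "/Store")
    shelf = UsdGeom.Cube.Define(stage, "/Store/DisplayShelf")
    shelf.GetSizeAttr().Set(2.0)
    shelf.AddTranslateOp().Set(Gf.Vec3d(0.0, 1.0, 0.0))

    stage.SetDefaultPrim(root.GetPrim())
    stage.GetRootLayer().Save()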

Emerging data privacy regulations, coupled with escalating cybersecurity risks,
are flipping the script. Organisations can no longer afford to treat deletion as
an afterthought. From compliance violations to breach fallout, retaining data
beyond its lifecycle has a real downside. Many organisations still don’t have a
reliable, scalable way to delete data. Policies may exist on paper, but
consistent execution across environments, from cloud storage to aging legacy
systems, is rare. That gap is no longer sustainable. In fact, failing to delete
data when legally required is quickly becoming a regulatory, security, and
reputational risk. ... From a cybersecurity perspective, every byte of retained
data is a potential breach exposure. In many recent cases, post-incident
investigations have uncovered massive amounts of sensitive data that should have
been deleted, turning routine breaches into high-stakes regulatory events. But
beyond the legal risks, excess data carries hidden operational costs. ... Most
CISOs, privacy officers, and IT leaders understand the risks. But deletion is
difficult to operationalise. Data lives across multiple systems, formats, and
departments. Some repositories are outdated or no longer supported. Others are
siloed or partially controlled by third parties. And in many cases, existing
tools lack the integration or governance controls needed to automate deletion at
scale.
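
To make the execution gap concrete, here is a minimal sketch of one slice of automated deletion, assuming records sit in an S3 bucket under a single org-wide retention window; a real program would need per-dataset policies, legal-hold checks, and an audit trail:

    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "customer-records"          # hypothetical bucket
    RETENTION = timedelta(days=365 * 7)  # illustrative 7-year policy
    cutoff = datetime.now(timezone.utc) - RETENTION

    # Walk the bucket and delete anything past its retention window,
    # logging each deletion so compliance can prove it happened.
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
                print(f"deleted {obj['Key']} at {datetime.now(timezone.utc)}")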

IT teams need to look for flexible, agnostic workspace management solutions that can respond to whether endpoints are running Windows 11, macOS, ChromeOS, virtual desktops, or cloud PCs. They want to future-proof their endpoint investments, knowing that their workspace management must be highly adaptable as business requirements change. To support this disparate endpoint estate, DEX
solutions have come to the forefront as they have evolved from a one-off tool
for monitoring employee experience to an integrated platform by which
administrators can manage endpoints, security tools, and performance
remediation. ... In this composite environment, IT has the challenge of securing
workflows across the endpoint estate, regardless of delivery platform, and doing
so without interfering with the employee experience. As the number of both
installed and SaaS applications grows, IT teams can leverage automation to
streamline patching and other security updates and to monitor SaaS credentials
effectively. Automation becomes invaluable to operational efficiency across an increasingly complex application landscape. Another security challenge is ‘Shadow SaaS’, in which, as with shadow IT and shadow AI, employees use unsanctioned tools they believe will help productivity.

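As a minimal sketch of that kind of patching automation (the endpoint inventory, app catalog, and versions are illustrative, not any particular DEX product’s API):

    # Hypothetical inventory reported by endpoint agents.
    INSTALLED = {
        "laptop-4411": {"browser": "124.0", "vpn-client": "2.3"},
        "desktop-0071": {"browser": "125.0", "vpn-client": "2.1"},
    }

    # Latest approved versions from the software catalog.
    CATALOG = {"browser": "125.0", "vpn-client": "2.3"}

    def stale_apps(inventory, catalog):
        """Yield (endpoint, app, installed, latest) for out-of-date software."""
        for endpoint, apps in inventory.items():
            for app, version in apps.items():
                latest = catalog.get(app)
                if latest and version != latest:
                    yield endpoint, app, version, latest

    # Queue patches instead of waiting for a manual review cycle.
    for endpoint, app, version, latest in stale_apps(INSTALLED, CATALOG):
        print(f"queue update: {endpoint} {app} {version} -> {latest}")
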
Effective identity investigations start with asking the right questions and not
merely responding to alerts. Security teams need to look deeper: Is this login
location normal for the user? Is the device consistent with their normal
configuration? Is the action standard for their role? Are there anomalies
between systems? These questions create necessary context, enabling defenders to
differentiate between routine deviations and hostile activity. Without that investigative mindset, security teams might chase false positives or overlook
actual threats. By structuring identity events with focused, behavior-based
questions, analysts can get to the heart of the activity and react with accuracy
and confidence. ... Identity theft often hides in plain sight, flourishing in
the ordinary gaps between expected and actual behavior. Its deception lies in
normalcy, where activity at the surface appears authentic but deviates quietly
from established patterns. That’s why a multi-source approach to truth is essential. Connecting insights from network traffic, authentication logs,
application access, email interactions, and external integrations can help teams
build a context-aware, layered picture of every user. This blended view helps
uncover subtle discrepancies, confirm anomalies, and shed light on threats that
routine detection would otherwise overlook, minimizing false positives and
revealing actual risks.
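
A minimal sketch of those behavior-based questions in code, assuming per-user baselines have already been distilled from historical logs (all names and thresholds are illustrative):

    from dataclasses import dataclass

    @dataclass
    class LoginEvent:
        user: str
        country: str    # geo-resolved login location
        device_id: str
        action: str     # e.g. "export_report"

    # Hypothetical baselines built from authentication logs, network
    # traffic, and application access history.
    BASELINES = {
        "jsmith": {
            "countries": {"US"},
            "devices": {"laptop-4411"},
            "role_actions": {"read_payroll"},
        },
    }

    def risk_score(event: LoginEvent) -> int:
        """Score one event against the user's established patterns."""
        base = BASELINES.get(event.user)
        if base is None:
            return 100  # unknown identity: escalate immediately
        score = 0
        if event.country not in base["countries"]:
            score += 40  # abnormal login location
        if event.device_id not in base["devices"]:
            score += 30  # device inconsistent with normal configuration
        if event.action not in base["role_actions"]:
            score += 30  # action outside the user's role
        return score

    print(risk_score(LoginEvent("jsmith", "RO", "laptop-9999", "export_report")))
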
Addressing AI data quality requires more human involvement, not less.
Organizations need data stewardship frameworks that include subject matter
experts who understand not just technical data structures, but business context
and implications. These data stewards can identify subtle but crucial
distinctions that pure technical analysis might miss. In educational technology,
for example, combining parents, teachers, and students into a single “users”
category for analysis would produce meaningless insights. Someone with domain
expertise knows these groups serve fundamentally different roles and should be
analyzed separately. ... Despite the industry’s excitement about new AI model
releases, a more disciplined approach focused on clearly defined use cases
rather than maximum data exposure proves more effective. Instead of opting for
more data to be shared with AI, sticking to the basics and thinking about
product concepts produces better results. You don’t want to just throw a lot of
good stuff in a can and assume that something good will happen. ... Future AI
systems will need “data entitlement” capabilities that automatically understand
and respect access controls and privacy requirements. This goes beyond current
approaches that require manual configuration of data permissions for each AI
application.
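
As a minimal sketch of what such a data entitlement layer could look like, with a hypothetical filter that enforces classifications automatically before records reach an AI application (field names and labels are illustrative):

    # Hypothetical record classifications and caller entitlements.
    RECORDS = [
        {"id": 1, "classification": "public",      "text": "course catalog"},
        {"id": 2, "classification": "student_pii", "text": "grade history"},
        {"id": 3, "classification": "internal",    "text": "staff roster"},
    ]

    def entitled_records(records, caller_entitlements):
        """Return only the records the calling AI application may see.

        The check runs automatically at retrieval time, instead of
        relying on per-application manual permission configuration.
        """
        return [r for r in records if r["classification"] in caller_entitlements]

    # A tutoring assistant entitled only to public data never sees PII.
    print(entitled_records(RECORDS, {"public"}))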

With agentic AI, APIs evolve from passive endpoints into active dialogue
partners. They need to handle more than single, fixed transactions. Instead,
APIs must support iterative engagement, where agents adjust their calls based on
prior results and current context. This leads to more flexible communication
models. For instance, an agent might begin by querying one API to gather user
data, process it internally, and then call another endpoint to trigger a
workflow. APIs in such environments must be reliable, context-aware, and able to handle higher levels of interaction, including unexpected sequences of
calls. One of the most powerful capabilities of agentic AI is its ability to
coordinate complex workflows across multiple APIs. Agents can manage chains of
requests, evaluate priorities, handle exceptions, and optimise processes in real
time. ... Agentic AI is already setting the stage for more responsive,
autonomous API ecosystems. Get ready for systems that can foresee workload
shifts, self-tune performance, and coordinate across services without waiting
for any command from a human. Soon, agentic AI will enable seamless
collaboration between multiple AI systems, each managing its own workflow yet
contributing to larger, unified business goals. To support this evolution, APIs
themselves must transform.
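
A minimal sketch of the iterative pattern described above: an agent queries one hypothetical endpoint, reasons over the result, and conditionally triggers a workflow on another (the URLs, fields, and threshold are all illustrative):

    import requests

    API = "https://api.example.com"  # hypothetical service

    def retention_agent(user_id: str) -> None:
        # Step 1: query one API to gather user data.
        resp = requests.get(f"{API}/users/{user_id}", timeout=10)
        resp.raise_for_status()
        profile = resp.json()

        # Step 2: internal reasoning over the prior result.
        if profile.get("churn_risk", 0.0) < 0.8:
            return  # context says no follow-up action is needed

        # Step 3: call another endpoint to trigger a workflow,
        # shaping the payload from what step 1 returned.
        requests.post(
            f"{API}/workflows/retention",
            json={"user": user_id, "plan": profile.get("plan", "unknown")},
            timeout=10,
        ).raise_for_status()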

Technical debt is a business’s running tally of aging or defunct software and
systems. While workarounds can keep the lights on, they come with risks. For
instance, there are operational challenges and expenses associated with managing
older systems. Additionally, necessary expenses can accumulate if technical debt
is allowed to get out of control, ballooning the costs of a proper fix. While
eliminating technical debt is challenging, it’s fundamentally an investment in a
business’s future security. Excess technical debt doesn’t just lead to
operational inefficiencies. It also creates cybersecurity weaknesses that
inhibit threat detection and response. ... “As threats evolve, technical debt
becomes a roadblock,” says Jeff Olson, director of software-defined WAN product
and technical marketing at Aruba, a Hewlett Packard Enterprise company.
“Security protocols and standards have advanced to address common threats, but
if you have older technology, you’re at risk until you can upgrade your
devices.” Upgrades can prove challenging, however. ... The first step to
reducing technical debt is to act now, Olson says. “Sweating it out” for another
two or three years will only make things worse. Waiting also stymies innovation,
as reducing technical debt can help SMBs take advantage of advanced technologies
such as artificial intelligence.

The best CISOs now operate less like technical gatekeepers and more like
orchestral conductors, aligning procurement, legal, finance, and operations
around a shared expectation of risk awareness. ... The responsibility for
managing third-party risk no longer rests solely on IT security teams. CISOs
must transform their roles from technical protectors to strategic leaders who
influence enterprise risk management at every level. This evolution
involves:
- Embracing enterprise-wide collaboration: Effective management of third-party risk requires cooperation among diverse departments such as procurement, legal, finance, and operations. By collaborating across the organization, CISOs ensure that third-party risk management is comprehensive and proactive rather than reactive.
- Integrating risk management into governance frameworks: Third-party risk should be a top agenda item in board meetings and strategic planning sessions. CISOs need to work with senior leadership to embed vendor risk management into the organization’s overall risk landscape.
- Fostering transparency and accountability: Establishing clear reporting lines and protocols ensures that issues related to third-party risk are promptly escalated and addressed. Accountability should span every level of the organization to ensure effective risk management.