Quote for the day:
"Great leaders start by leading themselves and to do that you need to know who you are" -- @GordonTredgold
Digital sovereignty feels good, but is it really?
There are no European equivalents of the American hyperscalers, let alone
national ones. Although OVHcloud, Intermax, and BIT can be put forward as
managed alternatives to Azure, AWS, or Google Cloud, they are not
comparable to those services. They lack the same huge ecosystem of partners, are
less scalable, and are simply less user-friendly, especially when adopting new
services. The reality is that, for many software packages, the move to the
cloud also meant a departure from on-premises versions. ... It is as much a
‘start’ of a
digital migration as it is an end. Good luck transferring a system with deep AWS
integrations to another location (even another public cloud). Although
cloud-native principles would allow the same containerized workloads to run
elsewhere, that has no bearing on the licenses purchased, compatibility and
availability of applications, scalability, or ease of use. A self-built variant
inside one’s own data center requires new expertise and almost assuredly a
larger IT team. ... In some areas, European alternatives will be perfectly
capable of replacing American software. However, there is no guarantee that a
secure, consistent, and mature offering will be available in every area, from
networking to AI inferencing and from CRM solutions to server hardware. The
reality is not only that IT players from the US are prominent, but that the
software ecosystem is globally integrated. Those who limit their choices must be
prepared to encounter problems.
Operational data: Giving AI agents the senses to succeed
Agents need continuous streams of telemetry, logs, events, and metrics across
the entire technology stack. This isn't batch processing; it is live data
flowing from applications, infrastructure, security tools, and cloud platforms.
When a security agent detects anomalous behavior, it needs to see what is
happening right now, not what happened an hour ago ... Raw data streams aren't
enough. Agents need the ability to correlate information across domains
instantly. A spike in failed login attempts means nothing in isolation. But
correlate it with a recent infrastructure change and unusual network traffic,
and suddenly you have a confirmed security incident. This context separates
signal from noise. ... The data infrastructure required for successful agentic
AI has been on the "we should do that someday" list for years. In traditional
analytics, poor data quality results in slower insights. Frustrating, but not
catastrophic. ... Sophisticated organizations are moving beyond raw data
collection to delivering data that arrives enriched with context. Relationships
between systems, dependencies across services, and the business impact of
technical components must be embedded in the data workflow. This ensures agents
spend less time discovering context and more time acting on it. ... "Can our
agents sense what is actually happening in our environment accurately,
continuously, and with full context?" If the answer is no, get ready for agentic
chaos. The good news is that this infrastructure isn't just valuable for AI
agents.
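A minimal sketch of that correlation step, with hypothetical event shapes and a 15-minute window (none of these names come from the article): a failed-login spike alone stays noise, but when an infrastructure change and a network anomaly land on the same host inside one window, the host is flagged.

    from datetime import datetime, timedelta

    def correlate(events, window=timedelta(minutes=15)):
        """Flag hosts where independent signals overlap within one time window."""
        by_host = {}
        for e in events:                      # e: {"host", "kind", "ts"}
            by_host.setdefault(e["host"], []).append(e)
        incidents = []
        for host, evs in by_host.items():
            latest = max(e["ts"] for e in evs)
            kinds = {e["kind"] for e in evs if latest - e["ts"] <= window}
            if {"failed_login_spike", "infra_change", "net_anomaly"} <= kinds:
                incidents.append(host)        # correlated: probable incident
        return incidents

    events = [
        {"host": "web-1", "kind": "failed_login_spike", "ts": datetime(2025, 1, 1, 10, 0)},
        {"host": "web-1", "kind": "infra_change",       "ts": datetime(2025, 1, 1, 9, 50)},
        {"host": "web-1", "kind": "net_anomaly",        "ts": datetime(2025, 1, 1, 10, 5)},
    ]
    print(correlate(events))  # ['web-1']

In the enriched pipeline the article describes, each event would already carry its service dependencies and business impact, so an agent acts on that context instead of rediscovering it.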
Identity, Data Security Converging Into Trouble for Security Teams: Report
Adversaries are shifting their focus from individual credentials to identity
orchestration, federation trust, and misconfigured automation, the report
continued.
Since access to critical data stores starts with identity, unified visibility
across identity and data security is required to detect misconfigurations,
reduce blind spots, and respond faster. That shift, experts warned, dramatically
increases the potential impact of identity failures. ... AI automation is often
a chain of agents, Schrader explained. “Each agent is a non-human identity that
needs lifecycle governance, and each step accesses, transforms, or hands off
data,” he said. “That means a mistake in identity governance — over-permissioned
agent, weak token control, missing attestation — immediately becomes a data
security incident — at machine speed and at scale — because the workflow keeps
executing and propagating access and data downstream.” “As AI automation runs
continuously, authorization becomes a live control system, not a quarterly
review,” he continued. “Agent chains amplify failures. One over-permissioned
non-human identity can propagate access and data downstream like workflow-shaped
lateral movement. Non-human identities sprawl fast via APIs and OAuth. Data risk
also shifts dynamically as agents transform and enrich outputs.” ... “Risk
multiplies with automation,” he told TechNewsWorld. “A compromised service
identity can cause automated data exfiltration, model poisoning, or large-scale
misconfiguration in seconds, which is far faster than manual attacks.”
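As a rough illustration of “authorization as a live control system”, with an invented policy table and identity names: each hand-off in an agent chain is re-checked at execution time, so the workflow fails closed instead of propagating access downstream.

    POLICY = {
        # non-human identity -> actions it may perform right now
        "agent:extractor": {"read:crm"},
        "agent:enricher":  {"read:crm", "write:warehouse"},
    }

    def handoff(identity: str, action: str, payload):
        """Authorize every hop at machine speed, not at quarterly-review speed."""
        if action not in POLICY.get(identity, set()):
            # Fail closed: the chain stops rather than handing data downstream.
            raise PermissionError(f"{identity} lacks {action}")
        return payload

    data = handoff("agent:extractor", "read:crm", {"record": 42})
    data = handoff("agent:enricher", "write:warehouse", data)
    # handoff("agent:extractor", "write:warehouse", data)  -> PermissionError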
Why your AI agents need a trust layer before it’s too late
While traditional ML pipelines require human oversight at every step — data
validation, model training, deployment and monitoring — modern agentic AI
systems enable autonomous orchestration of complex workflows involving
multiple specialized agents. But with this autonomy comes a critical question:
How do we trust these agents? ... DNS transformed the internet by mapping
human-readable names to IP addresses. ANS does something similar for AI
agents, but with a crucial addition: it maps agent names to their
cryptographic identity, their capabilities and their trust level. Here’s how
it works in practice. Instead of agents communicating through hardcoded
endpoints like “http://10.0.1.45:8080,” they use self-describing names like
“a2a://concept-drift-detector.drift-detection.research-lab.v2.prod.” This
naming convention immediately tells you the protocol (agent-to-agent), the
function (drift detection), the provider (research-lab), the version (v2) and
the environment (production).
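A small sketch of how such a self-describing name could be unpacked. The field order follows the example above; the exact grammar is an assumption, not the published ANS specification.

    def parse_ans(name: str) -> dict:
        scheme, rest = name.split("://", 1)
        agent, function, provider, version, environment = rest.split(".")
        return {
            "protocol": scheme,          # a2a = agent-to-agent
            "agent": agent,              # concept-drift-detector
            "function": function,        # drift-detection
            "provider": provider,        # research-lab
            "version": version,          # v2
            "environment": environment,  # prod
        }

    print(parse_ans("a2a://concept-drift-detector.drift-detection.research-lab.v2.prod"))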
But the real innovation lies beneath this naming layer. ... The technical
implementation leverages what’s called a zero-trust
architecture. Every agent interaction requires mutual authentication using
mTLS with agent-specific certificates. Unlike traditional service mesh mTLS,
which only proves service identity, ANS mTLS includes capability attestation
in the certificate extensions. An agent doesn’t just prove “I am agent X” — it
proves “I am agent X and I have the verified capability to retrain models.”
... The broader implications extend beyond just ML operations.
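To make capability attestation concrete, here is a hedged sketch using Python's cryptography package: a capability list rides in a custom X.509 extension (the OID and JSON encoding are invented for illustration), and the verifier rejects any peer whose certificate lacks the required capability. A real deployment would use CA-issued rather than self-signed certificates.

    import datetime
    import json

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    CAP_OID = x509.ObjectIdentifier("1.3.6.1.4.1.99999.1")  # hypothetical private OID

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "concept-drift-detector")])
    caps = json.dumps(["retrain-models", "read-metrics"]).encode()
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed for the sketch only
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .add_extension(x509.UnrecognizedExtension(CAP_OID, caps), critical=False)
        .sign(key, hashes.SHA256())
    )

    # Verifier side: the peer must prove the capability, not just its identity.
    ext = cert.extensions.get_extension_for_oid(CAP_OID)
    assert "retrain-models" in json.loads(ext.value.value)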
3 things cost-optimized CIOs should focus on to achieve maximum value
For Lenovo CIO Art Hu, optimization involves managing a funnel of
business-focused ideas. His company’s portfolio-based approach to AI includes
over 1,000 registered projects across all business areas. Hu has established a
policy for AI exploration and optimization that allows thousands of flowers to
bloom before focusing on value. “It’s important I don’t over-prioritize on
quality initially, because we have so many projects,” he says. ... “There’s a
technology thing, where you probably need multiple types of models and tools
to work together,” he says. “So Microsoft or OpenAI on their own probably
won’t do very well. However, when you combine Databricks, Microsoft, and your
agents, then you get a solution.” ... But another key area is revenue growth
management. Schildhouse’s team has developed an in-house diagnostic and
predictive tool to help employees make pricing decisions more quickly. They
tracked
usage to ensure the technology was effective, and the tool was scaled
globally. This success has sponsored AI-powered developments in related areas,
such as promotion and calendar optimization technology. “Scale is important at
a company the size and breadth of Colgate-Palmolive, because one-off solutions
in individual markets aren’t going to drive that value we need,” she says. “I
travel around to our key markets, and it’s nice to be in India or Brazil and
have the teams show how they’re using these tools, and how it’s making a
difference on the ground.”
Gauging the real impact of AI agents
Enterprises aren’t totally sold on AI, but they’re increasingly buying into AI
agents. Not the cloud-hosted models we hear so much about, but smaller,
distributed models that fit into IT as it has been used by enterprises for
decades. Given this, you surely wonder how it’s going. Are agents paying back?
Yes. How do they impact hosting, networking, operations? That’s complicated.
... There’s a singular important difference between an AI agent component and
an ordinary software component. Software is explicit in its use of data. The
programming includes data identification. AI is implicit in its data use; the
model was trained on data, and there may well be some API linkage to databases
that aren’t obvious to the user of the model. It’s also often the case that
using an agentic component reveals a need for additional data resources. Are
all these resources in the same place? Probably not.
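The contrast is easy to see in code; all names below are invented for illustration.

    # Conventional component: every data dependency is named in the source,
    # so its data footprint can be read straight off the code.
    def monthly_total(orders_db):
        return sum(r.total for r in orders_db.query("SELECT total FROM orders"))

    # Agentic component: the caller sees only a prompt. Which stores the agent
    # touches (training data, vector indexes, tool APIs behind `agent`) is
    # decided inside the model and its tooling, not in this code.
    def ask(agent, question: str):
        return agent.run(question)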
... As agents evolve into real-time applications, they must also be proximate
to the real-time system they support (a factory or warehouse), so
the data center, the users, and any real-time process pieces all pull at the
source of hosting to optimize latency. Obviously, they can’t all be moved into
one place, so the network has to make a broad and efficient set of
connections. That efficiency demands QoS guarantees on latency as well as on
availability. It’s in the area of availability, with a secondary focus on QoS
attributes like latency, that the most agent-experienced enterprises see
potential new service opportunities.
OT–IT Cybersecurity: Navigating The New Frontier Of Risk
IT systems, which manage data and corporate services, and OT systems, which
manage physical operations in energy, manufacturing, transportation, and
utilities, were formerly distinct worlds, but they are now intricately linked. ...
Organizations can no longer treat IT and OT as distinct security areas as long
as this interconnection persists. Instead, they must embrace comprehensive
strategies that integrate protection, visibility, and risk management in both
domains. ... It is evident to attackers that OT systems are valuable targets.
Data, electricity grids, pipelines, industrial facilities, and public safety
are all at risk as breaches that once affected only traditional IT
environments increasingly spread to physical process networks. Recent incident
statistics show a growing number of firms reporting breaches that affect both
IT and OT systems, a sign of adversaries taking advantage of legacy
vulnerabilities and interconnected routes. ... The dynamic threat environment
created by contemporary OT–IT convergence is incompatible with traditional
perimeter defenses and flat network trust models. In order to prevent threats from
moving laterally both within and between IT/OT ecosystems, zero trust designs
place a strong emphasis on segmentation, stringent access control, and
continuous authentication. ... OT cybersecurity is an organizational issue
rather than just a technological one. IT security leaders and OT teams have
always worked in distinct silos with different goals and cultures.
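A toy sketch of the zero-trust posture described above, with made-up zone and service names: flows across the IT/OT boundary are denied by default, and identity is proven per request rather than inferred from network position.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        src_zone: str  # e.g. "it-corp", "ot-dmz", "ot-plant-a"
        dst_zone: str
        service: str

    # Default-deny segmentation: only enumerated flows may cross a boundary,
    # and IT traffic reaches the plant network only via the OT DMZ.
    ALLOWED = {
        Flow("it-corp", "ot-dmz", "historian-read"),
        Flow("ot-dmz", "ot-plant-a", "historian-read"),
    }

    def authorize(flow: Flow, identity_verified: bool) -> bool:
        # Continuous authentication: checked on every request.
        return identity_verified and flow in ALLOWED

    assert authorize(Flow("it-corp", "ot-dmz", "historian-read"), True)
    assert not authorize(Flow("it-corp", "ot-plant-a", "historian-read"), True)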
SolarWinds, again: Critical RCE bugs reopen old wounds for enterprise security teams
SolarWinds is yet again disclosing security vulnerabilities in one of its
widely-used products. The company has released updates to patch six
vulnerabilities, including four critical authentication bypass and remote
command execution flaws, in its Web Help Desk (WHD) IT software. ... The four
critical bugs are typically very
reliable to exploit due to their deserialization and authentication logic
flaws, noted Ryan Emmons, security researcher at Rapid7. “For attackers,
that’s good news, because it means avoiding lots of bespoke exploit
development work like you’d see with other less reliable bug classes.”
Instead, attackers can use a standardized malicious payload across many
vulnerable targets, Emmons noted. “If exploitation is successful, the
attackers gain full control of the software and all the information stored by
it, along with the potential ability to move laterally into other systems.”
Meanwhile, the high-severity vulnerability CVE-2025-40536 would allow threat
actors to bypass security controls and gain access to certain functionalities
that should be restricted only to authenticated users. ... While this incident
is bad news, the good news is it’s not the same error, he noted.
... Vendors must get down past the symptom layer and address the root
cause of vulnerabilities in programming logic, he said, pointing out, “they
plug the hole, but don’t figure out why they keep having holes.”
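The sketch below is not SolarWinds' code (WHD is a Java product); it uses Python's pickle to show why this bug class is so repeatable and what a root-cause fix looks like. An unrestricted loader instantiates whatever type a standardized payload names, so the durable remedy is an allow-list of expected types rather than a patch for one known-bad payload.

    import io
    import pickle

    class SafeUnpickler(pickle.Unpickler):
        ALLOWED = {("builtins", "dict"), ("builtins", "list"), ("builtins", "str")}

        def find_class(self, module, name):
            # Root-cause fix: refuse every type that isn't explicitly expected.
            if (module, name) not in self.ALLOWED:
                raise pickle.UnpicklingError(f"blocked type: {module}.{name}")
            return super().find_class(module, name)

    def safe_loads(data: bytes):
        return SafeUnpickler(io.BytesIO(data)).load()

    # pickle.loads(untrusted) would run attacker-chosen constructors;
    # safe_loads(untrusted) raises UnpicklingError for anything unexpected.
    print(safe_loads(pickle.dumps({"ticket": "WHD-1234"})))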
Policy to purpose: How HR can design sustainable scale in DPI
“In DPI, the human impact is immediate and profound: our systems touch
citizens, markets, and national platforms every single day,” Anand says. The
proximity to public outcomes, he notes, heightens expectations across the
organisation. Employees are no longer insulated from the downstream effects of
their work. “Employees increasingly recognise that their choices—technical,
operational, and ethical—directly influence outcomes for millions,” he
says. ... “The opportunity is to reframe governance as an enabler of
meaningful, durable impact rather than a constraint,” he says. Systems that
millions rely on require deep technical excellence and responsible design—work
that appeals to professionals who value longevity over novelty. ... As DPI
platforms scale and regulatory attention intensifies, Anand believes HR must
rethink what agility really means. “As scale and scrutiny intensify, HR must
design organisations where agility is achieved through clarity and
discipline,” he says. Flexibility, in this framing, is not ad hoc. It must be
institutionalised—across workforce models, talent mobility and capability
development—within clearly articulated guardrails. ... “The role of HR will
evolve from custodians of policy to architects of sustainable scale,” Anand
says. In DPI contexts, that means ensuring growth, governance and human
potential advance together, rather than pulling against one another.