Quote for the day:
“Appreciate the people who give you expensive things like time, loyalty and honesty.” -- Vala Afshar
Making sense of 6G: what will the ‘agentic telco’ look like?
6G will be the fundamental network for physical AI, promises Nvidia. Think of
self-driving cars, robots in warehouses, or even AI-driven surgery. It’s all
very futuristic; to actually deliver on these promises, a wide range of industry
players will be needed, each developing the functionality of 6G. ... The
ultimate goal for network operators is full automation, or “Level 5” automation.
However, this seems too ambitious for now in the pre-6G era. Google refers to
the twilight zone between Levels 4 and 5, with Level 4 meaning fully autonomous
operation in certain circumstances. Currently, the obvious example of this type
of automation is a partially self-driving car. As a user, you must always be
ready to intervene, but ideally, the vehicle will travel without corrections. A
Waymo car, which regularly drives around without a driver, is officially Level
4. ... Strikingly, most users hardly need this ongoing telco innovation. Only
exceptionally extensive use of 4K streams, multiple simultaneous downloads,
and/or location tracking can exceed the maximum bandwidth of most forms of 5G.
Switch to 4G and, for most mobile network traffic, you won’t notice
the difference. You will notice a malfunction, regardless of the generation of
network technology. However, the idea behind the latest 5G and future 6G
networks is that these interruptions will decrease. Predictions for 6G assume a
hundredfold increase in speed compared to 5G, with a similar improvement in
bandwidth.

FinOps for agents: Loop limits, tool-call caps and the new unit economics of agentic SaaS
FinOps practitioners are increasingly treating AI as its own cost domain. The
FinOps Foundation highlights token-based pricing, cost-per-token and
cost-per-API-call tracking and anomaly detection as core practices for managing
AI spend. Seat count still matters, yet I have watched two customers with the
same licenses generate a 10X difference in inference and tool costs because one
had standardized workflows and the other lived in exceptions. If you ship agents
without a cost model, your cloud invoice quickly becomes the lesson plan ... In
early pilots, teams obsess over token counts. However, for a scaled agentic SaaS
running in production, we need one number that maps directly to value:
Cost-per-Accepted-Outcome (CAPO). CAPO is the fully loaded cost to deliver one
accepted outcome for a specific workflow. ... We calculate CAPO per workflow and
per segment, then watch the distribution, not just the average. Median tells us
where the product feels efficient. P95 and P99 tell us where loops, retries and
tool storms are hiding. Note that failed runs belong in CAPO automatically since we
treat the numerator as total fully loaded spend for that workflow (accepted +
failed + abandoned + retried) and the denominator as accepted outcomes only, so
every failure is “paid for” by the successes. Tagging each run with an outcome
state and attributing its cost to a failure bucket lets us track Failure Cost
Share alongside CAPO and see whether the problem is acceptance rate, expensive
failures, or retry storms.
AI went from assistant to autonomous actor and security never caught up
The first is the agent challenge. AI systems have moved past assistants that
respond to queries and into autonomous agents that execute multi-step tasks,
call external tools, and make decisions without per-action human approval.
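One common containment pattern is a policy layer between the agent and its tools. The sketch below is a hypothetical guard — the class, limits, and tool names are invented, not any specific framework's API — that enforces a tool allowlist, caps total calls per task, and requires explicit approval for destructive actions.

```python
class ToolCallDenied(Exception):
    pass

class ToolGuard:
    """Hypothetical containment wrapper sitting between an agent and its tools."""

    def __init__(self, allowlist, max_calls=20, needs_approval=()):
        self.allowlist = set(allowlist)            # least privilege: named tools only
        self.max_calls = max_calls                 # loop / tool-storm circuit breaker
        self.needs_approval = set(needs_approval)  # destructive actions gate on a human
        self.calls = 0

    def invoke(self, tool_name, fn, *args, approved=False, **kwargs):
        self.calls += 1
        if self.calls > self.max_calls:
            raise ToolCallDenied(f"call budget of {self.max_calls} exhausted")
        if tool_name not in self.allowlist:
            raise ToolCallDenied(f"{tool_name} is not allowlisted for this agent")
        if tool_name in self.needs_approval and not approved:
            raise ToolCallDenied(f"{tool_name} requires human approval")
        return fn(*args, **kwargs)

guard = ToolGuard(allowlist={"search", "delete_record"},
                  max_calls=3, needs_approval={"delete_record"})
guard.invoke("search", lambda q: f"results for {q}", "quarterly report")  # allowed
try:
    guard.invoke("delete_record", lambda rid: rid, "cust-42")  # blocked: no approval
except ToolCallDenied as e:
    print("blocked:", e)
```

The point is that the cap and the approval gate live outside the model, so a misbehaving agent hits a hard boundary rather than relying on its own judgment.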
Such autonomy creates failure conditions that exist without any external attacker. An
agent with overprivileged access and poor containment boundaries can cause
damage through ordinary operation. ... The second category is the visibility
challenge. Sixty-three percent of employees who used AI tools in 2025 pasted
sensitive company data, including source code and customer records, into
personal chatbot accounts. The average enterprise has an estimated 1,200
unofficial AI applications in use, with 86% of organizations reporting no
visibility into their AI data flows. ... The third is the trust challenge.
Prompt injection moved from academic research into recurring production
incidents in 2025. OWASP’s 2025 LLM Top 10 list ranked prompt injection at the
top. The vulnerability exists because LLMs cannot reliably separate
instructions from data input. ... Wang recommended tiering agents by risk
level. Agents with access to sensitive data or production systems warrant
continuous adversarial testing and stronger review gates. Lower-risk agents
can rely on standardized controls and periodic sampling. “The goal is to make
continuous validation part of the engineering lifecycle,” she said.

A scorecard for cyber and risk culture
Cybersecurity and risk culture isn’t a vibe. It’s a set of actions, behaviors
and attitudes you can point to without raising your voice. ... You can’t train
people into that. You have to build an environment where that behavior makes
sense: an environment based on trust and performance, not one or the other ...
Ownership is a design outcome. Treat it like product design. Remove friction.
Clarify choices. Make it hard to do the wrong thing by accident and easy to
make the best possible decision. ... If you can’t measure the behavior, you
can’t claim the culture. You can claim a feeling. Feelings don’t survive
audits, incidents or Board scrutiny. We’ve seen teams measure what’s easy and
then call the numbers “maturity.” Training completion. Controls “done.” Zero
incidents. Nice charts. Clean dashboards. Meanwhile, the real culture runs
beneath the surface, making exceptions, working around friction and staying
quiet when speaking up feels risky. ... One of the most dangerous culture
metrics is silence dressed up as success. “Zero incidents reported” can mean
you’re safe. It can also mean people don’t trust the system enough to speak
up. The difference matters. The wrong interpretation is how organizations walk
into breaches with a smile. Measure culture as you would safety in a factory.
... Metrics without governance create cynical employees. They see numbers.
They never see action. Then they stop caring. Be careful not to make
compliance ‘the culture’ as it’s what people do when no one is looking that
counts.

Why encrypted backups may fail in an AI-driven ransomware era
For 20 years, I've talked up the benefits of the tech industry's best-practice
3-2-1 backup strategy. This strategy is just how it's done, and it works. Or
does it? What if I told you that everything you know and everything you do to
ensure quality backups is no longer viable? In fact, what if I told you that
in an era of generative AI, when it comes to backups, we're all pretty much
screwed? ... The easy-peasy assumption is that your data is good before it's
backed up. Therefore, if something happens and you need to restore, the data
you're bringing back from the backup is also good. Even without malware, AI,
and bad actors, that's not always the way things turn out. Backups can get
corrupted, and they might not have been written right in the first place,
yada, yada, yada. But for this article, let's assume that your backup and
restore process is solid, reliable, and functional. ... Even if the thieves
are willing to return the data, their AI-generated vibe-coded software might
be so crappy that they're unable to keep up their end of the bargain. Do you
seriously think that threat actors who use vibe coding test their threat
engines? ... Some truly nasty attacks specifically target immutable storage by
seeking out misconfigurations. Here, they attack the management
infrastructure, screwing with network data before it ever reaches the backup
system. The net result is that before encryption of off-site backups begins,
and before the backups even take place, the malware has suitably corrupted and
infected the data.

How Deepfakes and Injection Attacks Are Breaking Identity Verification
Unlike social media deception, these attacks can enable persistent access
inside trusted environments. The downstream impact is durable: account
persistence, privilege-escalation pathways, and lateral movement opportunities
that start with a single false verification decision. ... One practical
problem for deepfake defense is generalization: detectors that test well in
controlled settings often degrade in “in-the-wild” conditions. Researchers at
Purdue University evaluated deepfake detection systems using their real-world
benchmark based on the Political Deepfakes Incident Database (PDID). PDID
contains real incident media distributed on platforms such as X, YouTube,
TikTok, and Instagram, meaning the inputs are compressed, re-encoded, and
post-processed in the same ways defenders often see in production. ... It’s
important to be precise: PDID measures robustness of media detection on real
incident content. It does not model injection, device compromise, or
full-session attacks. In real identity workflows, attackers do not choose one
technique at a time; they stack them. A high-quality deepfake can be replayed.
A replay can be injected. An injected stream can be automated at scale. The
best media detectors still can be bypassed if the capture path is untrusted.
That’s why Deepsight goes even deeper than asking “Is this video a
deepfake?”

Virtual twins and AI companions target enterprise war rooms
Organisations invest millions digitising processes and implementing enterprise
systems. Yet when business leaders ask questions spanning multiple domains,
those systems don’t communicate effectively. Teams assemble to manually
cross-reference data, spending days producing approximations rather than
definitive answers. Manufacturing experts at the conference framed this as
decades of incomplete digitisation. ... Addressing this requires fundamentally
changing how enterprise data is structured and accessed. Rather than systems
operating independently with occasional data exchanges, the approach involves
projecting information from multiple sources onto unified representations that
preserve relationships and context. Zimmerman used a map analogy to explain the
concept. “If you take an Excel spreadsheet with location of restaurants and
another Excel spreadsheet with location of flower shops, and you try to find a
restaurant nearby a flower shop, that’s difficult,” he said. “If it’s on the
map, it is simple because the data are correlated by nature.” ... Having unified
data representations solves part of the problem. Accessing them requires
interfaces that don’t force users to understand complex data structures or
navigate multiple applications. The conversational AI approach – increasingly
common across enterprise software – aims to let users ask questions naturally
rather than construct database queries or click through application menus.
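Zimmerman's map analogy can be made concrete: once both "spreadsheets" are projected into the same coordinate space, "nearby" becomes a trivial computation instead of a manual cross-referencing exercise. The data below are invented purely for illustration.

```python
from math import dist

# Two separate "spreadsheets", projected onto a shared coordinate system (x, y in km).
restaurants  = {"Trattoria Roma": (1.0, 2.0), "Noodle Bar": (5.5, 0.5)}
flower_shops = {"Petal & Stem": (1.2, 2.3), "Bloom Co": (9.0, 9.0)}

# On the shared "map", the cross-domain question ("a restaurant near a
# flower shop") reduces to a small nearest-pair search.
pairs = [
    (r_name, f_name, dist(r_xy, f_xy))
    for r_name, r_xy in restaurants.items()
    for f_name, f_xy in flower_shops.items()
]
best = min(pairs, key=lambda p: p[2])
print(f"{best[0]} is {best[2]:.2f} km from {best[1]}")
```

The design point is that the join logic lives in the shared representation, not in either source system — which is what the unified-projection approach described above buys you.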
The rise of the outcome-orchestrating CIO
Delivering technology isn’t enough. Boards and business leaders want results —
revenue, measurable efficiency, competitive advantage — and they’re increasingly
impatient with IT organizations that can’t connect their work to those outcomes.
... Funding models change, too. Traditional IT budgets fund teams to deliver
features. When the business pivots, that becomes a change request — creating
friction even when it’s not an adversarial situation. “Instead, fund a value
stream,” Sample says. “Then, whatever the business needs, you absorb the change
and work toward shared goals. It doesn’t matter what’s on the bill because
you’re all working toward the same outcome.” It’s a fundamental reframing of
IT’s role. “Stop talking about shared services,” says Ijam of the Federal
Reserve. “Talk about being a co-owner of value realization.” That means evolving
from service provider to strategic partner — not waiting for requirements but
actively shaping how technology creates business results. ... When outcome
orchestration is working, the boardroom conversation changes. “CIOs are
presenting business results enabled by technology — not just technology updates
— and discussing where to invest next for maximum impact,” says Cox Automotive’s
Johnson. “The CFO begins to see technology as an investment that generates
returns, not just a cost to be managed.” ... When outcome orchestration takes
hold, the impact shows up across multiple dimensions — not just in business
metrics, but in how IT is perceived and how its people experience their work.
The future of banking: When AI becomes the interface
Experiences must now adapt to people—not the other way around. As generative
capabilities mature, customers will increasingly expect banking interactions to
be intuitive, conversational, and personalized by default, setting a much higher
bar for digital experience design. ... Leadership teams must now ask harder
questions. What proprietary data, intelligence, or trust signals can only our
bank provide? How do we shape AI-driven payment decisions rather than merely
fulfill them? And how do we ensure that when an AI decides how money moves, our
institution is not just compliant, but preferred? ... AI disruption presents
both significant risk and transformative opportunity for banks. To remain
relevant, institutions must decide where AI should directly handle customer
interactions, how seamlessly their services integrate into AI-driven ecosystems,
and how their products and content are surfaced and selected by AI-led discovery
and search. This requires reimagining the bank’s digital assistant across seven
critical dimensions: being front and centre at the point of intent, contextual
in understanding customer needs, multi-modal across voice, text, and interfaces,
agentic in taking action on the customer’s behalf, revenue-generating through
intelligent recommendations, open and connected to broader ecosystems, and
capable of providing targeted, proactive support.