Quote for the day:
"Do not follow where the path may lead. Go instead where there is no path and leave a trail." -- Muriel Strode
The hard part of purple teaming starts after detection
Imagine you’re driving, and you see the car ahead braking suddenly. Awareness
helps, but it’s your immediate reaction that avoids the collision. Insurance
plans don’t matter at that moment. Nor do compliance reports or
dashboards. Only vigilance and rehearsal matter. Cyber resilience
works the same way. You can’t build the instinct required to act by running one
simulation a year. You build it through repetition. Through testing how specific
scenarios unfold. Through examining not only how adversaries get in, but also
how they move, escalate, evade, and exfiltrate. This is the heart of
real purple teaming. ... AI can accelerate analysis, but it can’t replace
intuition, design, or the judgment required to act. If the organization hasn’t
rehearsed what to do when the signal appears, AI only accelerates the moment
when everyone realises they don’t know what happens next. This is why so
much testing today only addresses opportunistic attacks. It cleans up the
low-hanging fruit. ... The standard testing model traps everyone involved:
One-off tests create false confidence; Scopes limit imagination. Time
pressure eliminates depth; Commercial structures discourage
collaboration; Tooling gives the illusion of capability;
and Compliance encourages the appearance of rigour instead of the reality
of it. This is why purple teaming often becomes “jump out, stabilize, pull
the chute, roll on landing.” But what about the hard scenarios? What about
partial deployments? What about complex failures? That’s where resilience is
built.
State AI regulations could leave CIOs with unusable systems
Numerous states are considering AI regulations for systems used in medical care,
insurance, human resources, finance and other critical areas. ... Despite the
growing regulatory risk, businesses appear unwilling to slow AI deployments.
"Moving away from AI with the regulation is not going to be an option for us,"
Juttiyavar said. He said AI is already deeply embedded in how organizations
operate and is essential for speed and competitiveness. ... If CIOs establish
strong internal frameworks for AI deployment, "that helps you react better to
legislative change" and anticipate new requirements, Kourinian said. Still,
regulatory shifts can leave companies with systems that are technically sound
but legally unusable, said Peter Cassat, a partner at CM Law. To manage that
risk, Cassat advises CIOs to negotiate "change of law" provisions in vendor
contracts that provide termination rights if regulations make continued use of a
system impossible or impractical. But such provisions do not eliminate the risk
of sunk costs. "If it's a SaaS provider and you've signed a three-year term,
they don't want to necessarily let you walk for free either," Cassat said.
Beyond legal exposure, CIOs must also anticipate public and political reaction
to AI and biometric tools. "The CIO absolutely has the responsibility to
understand how this technology could be perceived -- not just internally, but by
the public and lawmakers," said Mark Moccia, an analyst at Forrester
Research.
Your dev team isn’t a cost center — it’s about to become a multiplier
If you treat AI as a pathway to eliminate developer headcount, sure, you’ll
capture some cost savings in the short term. But you’ll miss the bigger
opportunity entirely. You’ll be the bank executive in 1975 who saw ATMs and
thought, “Great, we can close branches and fire tellers.” Meanwhile, your
competitors have automated the mundane teller tasks and are opening new branches
to sell higher-end services to more people. The 1.4-1.6x productivity
improvement that GDPval documented isn’t about doing the same work with fewer
people. It’s about doing vastly more work with the same people. That new product
idea you had that was 10x too expensive to develop? It’s now possible. That
customer experience improvement that could drive loyalty that you didn’t have
the headcount for? It’s on the table. The technical debt you’ve been
accumulating? You can start to pay it down. ... What struck me about Werner’s
final keynote wasn’t the content; it was the intent. This was Werner’s last time
at that podium. He could have done a victory lap through AWS’s greatest hits.
Instead, he spent his time outlining a framework of success for the next
generation of developers. For those of us leading technology organizations, the
framework is both validating and challenging. Validating because these traits
aren’t new. They have always separated good developers from great ones.
Challenging because AI amplifies everything, including the gaps in our
capabilities.
Cloud teams are hitting maturity walls in governance, security, and AI use
Migration activity remains heavy across enterprises, especially for data
platforms. At the same time, downtime tolerance is limited. Nearly half of
respondents said their organizations can accept only one to six hours of
downtime for cutover during migration. That combination creates pressure to
migrate at speed while keeping data integrity intact. In regulated environments,
that pressure extends to audit evidence and compliance validation, which often
needs to be produced in parallel with migration execution. ... Cloud-native
managed database adoption is also high. More than half of respondents reported
using managed cloud databases, and a third reported using SaaS-based database
services. Only 10% reported operating self-hosted databases. This shift toward
managed services reduces operational burden on infrastructure teams, but it
increases reliance on identity governance, network segmentation, and
application-layer security controls. It also creates stronger dependency on
cloud provider logging and access models. ... Development stacks also reflect
this shift. Python was reported as a primary language, with Java close behind.
These languages remain central to AI workflows, data engineering, and enterprise
application back ends. Machine learning adoption is also widespread, with
organizations reporting that they actively train ML models. Many of these pipelines are
now part of production environments, making operational continuity a
priority.
MIT's new fine-tuning method lets LLMs learn new skills without losing old ones
To build truly adaptive AI, the industry needs to solve "continual learning,"
allowing systems to accumulate knowledge much like humans do throughout their
careers. The most effective way for models to learn is through "on-policy
learning." In this approach, the model learns from data it generates itself,
allowing it to correct its own errors and reasoning processes. This stands in
contrast to learning by simply mimicking static datasets. ... The standard
alternative is supervised fine-tuning (SFT), where the model is trained on a
fixed dataset of expert demonstrations. While SFT provides clear ground truth,
it is inherently "off-policy." Because the model is just mimicking data rather
than learning from its own attempts, it often fails to generalize to
out-of-distribution examples and suffers heavily from catastrophic forgetting.
SDFT seeks to bridge this gap: enabling the benefits of on-policy learning using
only prerecorded demonstrations, without needing a reward function. ... For
teams considering SDFT, the practical tradeoffs come down to model size and
compute. The technique requires models with strong enough in-context learning to
act as their own teachers — currently around 4 billion parameters with newer
architectures like Qwen 3, though Shenfeld expects 1 billion-parameter models to
work soon. It demands roughly 2.5 times the compute of standard fine-tuning, but
is best suited for organizations that need a single model to accumulate multiple
skills over time, particularly in domains where defining a reward function for
reinforcement learning is difficult or impossible.
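To make the on-policy-versus-SFT distinction concrete, here is a rough, hypothetical Python sketch of the general idea: the model re-generates an answer while the expert demonstration sits in its context, then is fine-tuned on its own on-policy output instead of on the static demonstration. This illustrates the concept only, not the MIT implementation; the model name, prompt format, and hyperparameters are placeholder assumptions.

    # Conceptual sketch of learning on-policy from demonstrations (not the paper's code).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen3-4B"   # assumption: any small instruction-tuned causal LM
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    demos = [("Summarize the incident report: ...", "Expert-written summary ...")]  # placeholders

    for prompt, demo in demos:
        # Step 1: the model acts as its own teacher, answering the prompt
        # while the expert demonstration is visible in context.
        teacher_input = tok(f"Example answer:\n{demo}\n\nNow answer yourself:\n{prompt}\n",
                            return_tensors="pt")
        with torch.no_grad():
            out = model.generate(**teacher_input, max_new_tokens=128, do_sample=True)
        self_answer = tok.decode(out[0, teacher_input["input_ids"].shape[1]:],
                                 skip_special_tokens=True)

        # Step 2: standard next-token fine-tuning, but on the model's own answer,
        # keeping the training signal on-policy rather than mimicking the static demo.
        batch = tok(prompt + "\n" + self_answer, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()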
The Illusion of Zero Trust in Modern Data Architectures
Modern data stacks stretch far beyond a single system. Data flows from SaaS
tools into ingestion pipelines, through transformation layers, into
warehouses, lakes, feature stores, and analytics tools. Each hop introduces a
new identity, a new permission model, and a new surface area for implicit
trust. Not to mention, niches like healthcare data storage are a completely
different beast. Whatever the system, teams may enforce strict access at the
perimeter while internal services freely exchange data with long-lived
credentials and broad scopes. This is where the illusion forms. Zero Trust is
declared because no user gets blanket access, yet services trust other
services almost entirely. Tokens are reused, roles are overprovisioned, and
data products inherit permissions they were never meant to have. The
architecture technically verifies everything, but conceptually trusts too
much. ... Data rarely stays where Zero Trust policies are strongest.
Warehouses enforce row-level security, masking, and role-based access, but
data doesn’t live exclusively in warehouses. Extracts are generated, snapshots
are shared, and datasets are copied into downstream systems for performance or
convenience. Each copy weakens the original trust guarantees, and the
resulting problems go well beyond rising cloud costs. Once data leaves its
source, context is often stripped away.
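One way teams try to close that gap is to replace long-lived service credentials with short-lived, narrowly scoped ones issued per hop. The sketch below assumes an AWS-style setup purely for illustration (the article names no cloud); the role ARN, session policy, and bucket path are hypothetical.

    # Minimal sketch: a pipeline stage requests a short-lived, narrowly scoped
    # credential instead of reusing a long-lived service key.
    import json
    import boto3

    sts = boto3.client("sts")

    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/feature-store-reader",  # hypothetical
        RoleSessionName="transform-job-42",
        DurationSeconds=900,  # 15-minute credential rather than a static key
        Policy=json.dumps({   # inline session policy narrows, never broadens, access
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": ["arn:aws:s3:::analytics-extracts/daily/*"],
            }],
        }),
    )
    creds = resp["Credentials"]
    # Downstream access uses only the scoped, expiring credential; nothing
    # inherited, nothing broader than the one dataset this hop needs.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )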
Top Cyber Industry Defenses Spike CO2 Emissions
Though rarely discussed, like any other technology, cybersecurity
protections carry their own costs to the planet. Programs run on electricity.
Servers demand water. Devices are built from natural resources and eventually
get thrown out. ... "CISOs can help or make the situation worse [when it comes
to] sustainability, depending on the way they write security rules," he says.
"And that's why we started a study: to enable the CISO to be part of the
sustainability process of his or her company, and to find actionable ways to
reduce CO2 consumption while at the same time not adding more risks." ... "We
collect a lot of logs, not exactly always knowing why, and the retention
period is a huge cost in terms of infrastructure, and also CO2," Billois says.
"So at some point, you can revisit your log collection, and log retention, and
if there are no legal issues, you can think about compressing them to reduce
their volume. It's something that is, I would say, quite easy to do." ... All
of that said, unfortunately, the biggest cyber polluter, by far, is also the
most difficult to scale back without incurring risk. Some companies can swap
underutilized physical infrastructure for virtualized backups, which eat less
power, if they're not already doing that; but there are few other great ways
to make cyber resilience more efficient. "You can reduce CO2 [from backups]
very easily: you stop buying two servers, or you stop having a duplicate of
all your data," Billois says.Five ways quantum technology could shape everyday life
Five ways quantum technology could shape everyday life
There is growing promise of quantum technology’s ability to solve problems
that today’s systems struggle to overcome, or cannot even begin to tackle,
with implications for industry, national security and everyday life. ... In
healthcare, faster drug discovery could bring quicker response to outbreaks
and epidemics, personalised medicine and insight into previously inscrutable
biological interactions. Quantum simulation of how materials behave could
lead to new high efficiency energy materials, catalysts, alloys and polymers.
... In medicine, quantum sensors could improve diagnostic capabilities via
more sensitive, quicker and noninvasive imaging modes. In environmental
monitoring, these sensors could track delicate shifts beneath the Earth’s
surface, offer early warnings of seismic activity, or detect trace pollutants
in air and water with exceptional accuracy. ... Airlines and rail networks
could automatically reconfigure to avoid cascading delays, while energy
providers might balance renewable generation, storage and consumption with
far greater precision. Banks could use quantum computers to evaluate numerous
market scenarios in parallel, informing the management of investment
portfolios. ... While still at an early stage of development, quantum
algorithms might accelerate a subset of AI called machine learning (where
algorithms improve with experience), help simulate complex systems, or
optimise AI architectures more efficiently.
Nokia predicts huge WAN traffic growth, but experts question assumptions
“Consumer- and enterprise-generated AI traffic imposes a substantial impact on
the wide-area network (WAN) by adding AI workloads processed by data centers
across the WAN. AI traffic does not stay inside one data center; it moves
across edge, metro, core, and cloud infrastructure, driving dense lateral
flows and new capacity demands,” the report says. An explosion in agentic AI
applications further fuels growth “by inducing extra machine-to-machine (M2M)
traffic in the background,” Nokia predicts. “AI traffic isn’t just creating
more demand inside data centers; it’s driving a sustained surge of traffic
between them. AI inferencing traffic—both user-initiated and
agentic-AI-induced M2M—moving over inter-data-center links grows at a 20.3%
CAGR through 2034.” ... Global enterprise and industrial traffic, including
fixed wireless access, will also steadily rise over the next decade, “as more
operations, machines, and workers become digitally connected,” Nokia predicts.
“Pervasive automation, high-resolution video, AI-driven analytics, and remote
access to industrial systems,” will drive traffic growth. “Factory lines are
streaming machine vision data to the cloud. AI copilots are assisting
personnel in real time. Field teams are using AR instead of manuals. Robots
are coordinating across sites,” the Nokia report says. “Industrial systems are
continuously sending telemetry over the WAN instead of keeping it on-site.
This shift makes wide-area connectivity part of the core production
workflow.”The death of reactive IT: How predictive engineering will redefine cloud performance in 10 years
The death of reactive IT: How predictive engineering will redefine cloud performance in 10 years
Reactive monitoring fails not because tools are inadequate, but because the
underlying assumption that failures are detectable after they occur no longer
holds true. Modern distributed systems have reached a level of interdependence
that produces non-linear failure propagation. A minor slowdown in a storage
subsystem can exponentially increase tail latencies across an API gateway. ...
Predictive engineering is not marketing jargon. It is a sophisticated
engineering discipline that combines statistical forecasting, machine
learning, causal inference, simulation modeling and autonomous control
systems. ... Predictive engineering will usher in a new operational era where
outages become statistical anomalies rather than weekly realities. Systems
will no longer wait for degradation; they will preempt it. War rooms will
disappear, replaced by continuous optimization loops. Cloud platforms will
behave like self-regulating ecosystems, balancing resources, traffic and
workloads with anticipatory intelligence. ... In distributed networks, routing
will adapt in real time to avoid predicted congestion. Databases will adjust
indexing strategies before query slowdowns accumulate. The long-term
trajectory is unmistakable: autonomous cloud operations. Predictive
engineering is not merely the next chapter in observability, it is the
foundation of fully self-healing, self-optimizing digital
infrastructure.
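As a toy illustration of the reactive-to-predictive shift (not from the article), the sketch below fits a short-term trend on tail latency and acts when the forecast, rather than the current value, crosses the budget. The sample values, threshold, and horizon are made up for the example.

    # Toy predictive check: act on forecast p99 latency, not on the breach itself.
    import numpy as np

    p99_ms = np.array([210, 214, 221, 230, 243, 259])  # recent samples, illustrative
    threshold_ms = 300
    horizon_min = 10

    # Least-squares linear trend over the recent window.
    t = np.arange(len(p99_ms))
    slope, intercept = np.polyfit(t, p99_ms, 1)
    forecast = slope * (t[-1] + horizon_min) + intercept

    if forecast > threshold_ms:
        print(f"Forecast p99 in {horizon_min} min: {forecast:.0f} ms -> scale out now")
    else:
        print("Within predicted budget; no action")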