Quote for the day:
"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward
AQ Is The New EQ: Why Adaptability Now Defines Success
AQ describes the ability to adjust thinking, behaviour, and strategy in response
to rapid change and uncertainty. Unlike IQ, which measures cognitive capacity,
or EQ, which captures emotional regulation, AQ predicts how quickly someone
can learn, unlearn, and recalibrate when conditions change. ... One key reason
AQ is eclipsing other forms of intelligence is that it is dynamic rather than
static. IQ remains stable across adulthood for the most part. Adaptability,
however, varies with experience, exposure to stress, and environmental demands.
Research on psychological flexibility shows that people who can manage ambiguity
and shift perspectives under pressure are more likely to adapt effectively to
uncertainty. ... At the end of the day, AQ is neither fixed nor innate. When it
comes to learning and organizational development, adaptability can be
strengthened deliberately through structured challenges, supportive feedback
loops, and reflective practices. ... Adaptable people seek feedback, revise
strategies quickly when presented with new evidence, don’t get stuck, and remain
effective even when the rules of the game are shifting under their feet. This
high degree of cognitive flexibility - the ability to shift between
problem-solving approaches rather than defaulting to the “but we’ve always done
it this way” approach - best predicts effective decision-making under stress.
Why AI Governance Risk Is Really a Data Governance Problem
Modern enterprise AI systems now use retrieval-augmented generation, which has
further exacerbated these weaknesses. Trained AI models retrieve context from
internal repositories during inference, pulling from file shares, collaboration
platforms, CRM systems and knowledge bases. That retrieval layer must extract
meaning from complex documents, preserve structure, generate AI embeddings and
retrieve relevant fragments - while enforcing the same access controls as the
source systems. This is where governance assumptions begin to break down. ...
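To make that access-control requirement concrete, here is a minimal sketch of an
ACL-aware retrieval step, assuming a hypothetical chunk index whose entries carry
the group permissions copied from the source system at indexing time (the Chunk
class, retrieve function, and group model are illustrative, not any vendor's API):

    # Minimal sketch: ACL-aware retrieval for a RAG pipeline. All names here
    # (Chunk, retrieve, allowed_groups) are illustrative, not a real library API.
    from dataclasses import dataclass, field

    @dataclass
    class Chunk:
        text: str
        source: str                # e.g. a SharePoint site or CRM record
        allowed_groups: frozenset  # ACL copied from the source system at index time
        embedding: list = field(default_factory=list)

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query_embedding, index, user_groups, top_k=5):
        """Return the top_k most similar chunks the caller is entitled to see.
        Filtering happens before ranking, so denied content can never reach
        the model's context window."""
        visible = [c for c in index if c.allowed_groups & user_groups]
        ranked = sorted(visible,
                        key=lambda c: cosine_similarity(query_embedding, c.embedding),
                        reverse=True)
        return ranked[:top_k]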
"We have to accept two things: Data will never be fully governed. Second,
attempting to fully govern data before delivering AI is just not realistic. We
need a more practical solution like trust models," Zaidi said. AI-first
organizations are, therefore, exposing curated proprietary data as reusable
"data products" that can be consumed by both humans and AI agents. The
alternative is growing risk. As AI systems integrate more deeply with enterprise
applications, APIs have become a critical but often under-governed data pathway.
... Regulators are converging on the same conclusion: AI accountability depends
on data governance. Data protection regimes such as GDPR already require
accuracy, purpose limitation and security. Emerging AI regulations, including
the EU AI Act, explicitly tie AI risk to data sourcing, preparation and
governance practices.
Is AI killing open source?
It takes a developer 60 seconds to prompt an agent to fix typos and optimize
loops across a dozen files. But it takes a maintainer an hour to carefully
review those changes, verify they do not break obscure edge cases, and ensure
they align with the project’s long-term vision. When you multiply that by a
hundred contributors all using their personal LLM assistants to help, you
don’t get a better project. You get a maintainer who just walks away. ...
On one side, we’ll have massive, enterprise-backed projects like Linux or
Kubernetes. These are the cathedrals, the bourgeoisie, and they’re
increasingly guarded by sophisticated gates. They have the resources to build
their own AI-filtering tools and the organizational weight to ignore the
noise. On the other side, we have more “provincial” open source projects—the
proletariat, if you will. These are projects run by individuals or small cores
who have simply stopped accepting contributions from the outside. The irony is
that AI was supposed to make open source more accessible, and it has. Sort of.
... Open source isn’t dying, but the “open” part is being redefined. We’re
moving away from the era of radical transparency, of “anyone can contribute,”
and heading toward an era of radical curation. The future of open source, in
short, may belong to the few, not the many. ... In this new world, the most
successful open source projects will be the ones that are the most difficult
to contribute to. They will demand a high level of human effort, human
context, and human relationship.
Designing Effective Multi-Agent Architectures
Some coordination patterns stabilize systems. Others amplify failure. There is
no universal best pattern, only patterns that fit the task and the way
information needs to flow. ... Neural scaling is continuous and works well for
models. As shown by classic scaling laws, increasing parameter count, data, and
compute tends to result in predictable improvements in capability. This logic
holds for single models. Collaborative scaling, as you need in agentic systems,
is different. It’s conditional. It grows, plateaus, and sometimes collapses
depending on communication costs, memory constraints, and how much context each
agent actually sees. Adding agents doesn’t behave like adding parameters. This
is why topology matters. Chains, trees, and other coordination structures
behave very differently under load. Some topologies stabilize reasoning as
systems grow. Others amplify noise, latency, and error. ... If your multi-agent
system is failing, thinking like a model practitioner is no longer enough. Stop
reaching for the prompt. The surge in agentic research has made one truth
undeniable: The field is moving from prompt engineering to organizational
systems. The next time you design your agentic system, ask yourself: How do I
organize the team? (patterns); Who do I put in those slots?
(hiring/architecture); and Why could this fail at scale? (scaling laws). That
said, the winners in the agentic era won’t be those with the smartest
instructions but the ones who build the most resilient collaboration
structures.
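As a rough illustration of why topology matters, here is a minimal sketch
contrasting two coordination structures; the Agent type and both runner
functions are invented for the example and do not come from any agent framework:

    # Illustrative sketch of two coordination topologies. An "agent" here is
    # just a callable that maps text to text; no real framework is implied.
    from typing import Callable, List

    Agent = Callable[[str], str]

    def run_chain(agents: List[Agent], task: str) -> str:
        """Sequential topology: each agent refines its predecessor's output.
        Noise or errors introduced early compound as the chain grows."""
        result = task
        for agent in agents:
            result = agent(result)
        return result

    def run_star(coordinator: Agent, workers: List[Agent], task: str) -> str:
        """Hub-and-spoke topology: workers act on the task independently and a
        coordinator merges the drafts. Communication cost grows with fan-out,
        but one bad worker is easier to out-vote or discard."""
        drafts = [worker(task) for worker in workers]
        return coordinator("\n---\n".join(drafts))

    if __name__ == "__main__":
        shout = lambda s: s.upper()
        tag = lambda s: f"[reviewed] {s}"
        pick_first = lambda merged: merged.split("\n---\n")[0]
        print(run_chain([shout, tag], "draft the release notes"))
        print(run_star(pick_first, [shout, tag], "draft the release notes"))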
Never settle: How CISOs can go beyond compliance standards to better protect their organizations
A CISO running a compliant program may only review a vendor once a year or after
significant system changes. Compliance standards haven’t caught up to the best
practice of continuously monitoring vendors to stay on top of third-party risk.
This highlights one of the most unfortunate incentives any CISO who manages a
compliance program knows: It is often easier to set a less stringent standard
and exceed it than to set a better target and risk missing it. ... One of the
most common shortfalls of compliance-driven risk assessments is simplistic math
around likelihood and impact. Many of the emergent risks mentioned above have a
lower likelihood but an extremely high impact and even a fair amount of
uncertainty around timeframes. Under this simplistic math, these tail risks
rarely bubble up organically; instead, they have to be pulled out of the pool of
lower frequency-times-impact scores. Defining that impact in dollars and
cents cuts through the noise. ... If your budget has already been approved
without these focus areas in mind, now is the time to start weaving a risk-first
approach into discussions with your board. You should be talking about this
year-round, not only during budget season when it’s time to present your plan.
It will position security as a way to protect revenue, improve capital
efficiency, preserve treasury integrity and optimize costs, rather than as a
cost center.
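The piece's point about simplistic likelihood-times-impact scoring is easy to
see with a back-of-the-envelope calculation; the figures below are invented for
illustration, not drawn from the article:

    # Hypothetical illustration: ranking risks by annualized expected loss.
    risks = {
        "phishing-led fraud":        {"annual_likelihood": 0.60, "impact_usd": 150_000},
        "ransomware on ERP":         {"annual_likelihood": 0.05, "impact_usd": 20_000_000},
        "vendor breach (tail risk)": {"annual_likelihood": 0.02, "impact_usd": 50_000_000},
    }

    ranked = sorted(risks.items(),
                    key=lambda kv: kv[1]["annual_likelihood"] * kv[1]["impact_usd"],
                    reverse=True)
    for name, r in ranked:
        expected_loss = r["annual_likelihood"] * r["impact_usd"]
        print(f"{name:28s} expected annual loss ~ ${expected_loss:,.0f}")
    # Both low-likelihood tail risks now rank above the frequent, low-impact one,
    # which a coarse 1-to-5 likelihood/impact grid would tend to bury.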
India Reveals National Plan for Quantum-Safe Security
India is building a foundation to address the national security risks posed by
quantum computing through the implementation of a Quantum Safe Ecosystem. As
quantum computing rapidly advances, the Task Force, formed under the National
Quantum Mission (NQM), has outlined critical steps for India to safeguard its
digital infrastructure and maintain economic resilience. ... Critical
Information Infrastructure sectors — including defense, power,
telecommunications, space and core government systems — are identified as the
highest priority for early adoption. According to the report, these sectors
should begin formal implementation of post-quantum cryptography by 2027, with
accelerated migration schedules reflecting the long operational lifetimes and
high-risk profiles of their systems. The task force notes that these
environments often support sensitive communications and control functions that
must remain confidential for decades, making them especially vulnerable to
“harvest now, decrypt later” attacks. ... To support large-scale adoption of
post-quantum cryptography, the task force recommends the creation of a national
testing and certification framework designed to bring consistency, credibility
and risk-based assurance to quantum-safe deployments. Rather than mandating a
single technical standard across all use cases, the proposed framework aligns
levels of evaluation with the operational criticality of the system being
secured.
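The "harvest now, decrypt later" concern is often summarized by Mosca's
inequality: data is at risk whenever its confidentiality shelf life plus the
migration time exceeds the years until a cryptographically relevant quantum
computer exists. A minimal sketch of that arithmetic (all numbers are
placeholders, not figures from the task force report):

    # Mosca's inequality: exposure exists if shelf_life + migration > years_to_crqc.
    # Every number below is an illustrative placeholder, not an NQM estimate.
    def exposure_years(shelf_life, migration, years_to_crqc):
        """Years of still-sensitive data that could be decrypted retroactively."""
        return max(0, shelf_life + migration - years_to_crqc)

    YEARS_TO_CRQC = 12  # placeholder guess for a cryptographically relevant quantum computer

    systems = [
        ("defense communications", 30, 7),  # secrets must hold for decades; slow migration
        ("telecom signalling",     10, 5),
        ("public web content",      1, 2),
    ]

    for name, shelf_life, migration in systems:
        gap = exposure_years(shelf_life, migration, YEARS_TO_CRQC)
        verdict = f"~{gap} years of data exposed" if gap else "within margin"
        print(f"{name:24s} {verdict}")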
TeamPCP Worm Exploits Cloud Infrastructure to Build Criminal Infrastructure
TeamPCP is said to function as a cloud-native cybercrime platform, leveraging
misconfigured Docker APIs, Kubernetes APIs, Ray dashboards, Redis servers, and
vulnerable React/Next.js applications as its main infection pathways, breaching
modern cloud infrastructure to facilitate data theft and extortion. In addition,
the compromised infrastructure is misused for a wide range of other purposes,
ranging from cryptocurrency mining and data hosting to proxy and
command-and-control (C2) relays. Rather than employing any novel tradecraft,
TeamPCP leans on tried-and-tested attack techniques, such as existing tools,
known vulnerabilities, and prevalent misconfigurations, to build an exploitation
platform that automates and industrializes the whole process. This, in turn,
transforms the exposed infrastructure into a "self-propagating criminal
ecosystem," Flare noted. Successful exploitation paves the way for the
deployment of next-stage payloads from external servers, including shell- and
Python-based scripts that seek out new targets for further expansion. ...
"The PCPcat campaign demonstrates a full lifecycle of scanning, exploitation,
persistence, tunneling, data theft, and monetization built specifically for
modern cloud infrastructure," Morag said. "What makes TeamPCP dangerous is not
technical novelty, but their operational integration and scale. Deeper analysis
shows that most of their exploits and malware are based on well-known
vulnerabilities and lightly modified open-source tools."
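Because the campaign relies on ordinary misconfigurations rather than zero-days,
simple exposure checks on your own estate go a long way. Below is a minimal,
defensive sketch that flags hosts answering on Docker's unauthenticated
plaintext API port (2375), one of the pathways named above; the host inventory
is a placeholder, and you should only probe infrastructure you own:

    # Defensive sketch: flag hosts exposing the Docker Engine API without auth.
    import json
    from urllib.request import urlopen

    HOSTS = ["10.0.0.5", "10.0.0.6"]  # placeholder inventory of your own hosts
    PORT = 2375                       # Docker's plaintext, unauthenticated API port

    def docker_api_exposed(host, port=PORT, timeout=3):
        """Return the reported Docker version if the API answers without auth."""
        try:
            with urlopen(f"http://{host}:{port}/version", timeout=timeout) as resp:
                return json.load(resp).get("Version")
        except (OSError, ValueError):
            return None

    for host in HOSTS:
        version = docker_api_exposed(host)
        if version:
            print(f"[!] {host}:{PORT} exposes Docker {version} with no authentication")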
The evolving AI data center: Options multiply, constraints grow, and infrastructure planning is even more critical
AI use has moved from experiment to habit. Usage keeps growing in both consumer
and enterprise settings. Model design has also diversified. Some workloads are
dominated by large training runs. Others are dominated by inference at scale.
Agentic systems add a different pattern again (e.g., long-lived sessions, many
tool calls). From an infrastructure standpoint, that tends to increase sustained
utilisation of accelerators and networks. ... AI also increases the importance
of connectivity between data centers. Training data must move. Checkpoints and
replicas must be protected. Inference often runs across regions for latency,
resilience, and capacity balancing. As a result, data center interconnect (DCI)
is scaling, with operators planning for multi-terabit campus capacity and
wide-area links that support both throughput and operational resilience. This
reinforces a simple point: the AI infrastructure story is not confined to a
single room or building. The ‘shape’ of the network increasingly includes
campus, metro, and regional connectivity. ... Connectivity has to match that
reality. The winners will be connectivity systems that are: Dense but
serviceable – designed for access, not just packing factor; Repeatable –
standard blocks that can be deployed many times; Proven – inspection
discipline, and documentation that survives handoffs; Compatible with factory
workflows – pre-terminated assemblies and predictable integration steps;
and Designed for change – expansion paths that do not degrade order and
legibility.
Living off the AI: The Next Evolution of Attacker Tradecraft
Organizations are rapidly adopting AI assistants, agents, and the emerging Model
Context Protocol (MCP) ecosystem to stay competitive. Attackers have noticed.
Let’s look at how different MCPs and AI agents can be targeted and how, in
practice, enterprise AI becomes part of the attacker’s playbook. ... With access
to AI tools, someone with minimal expertise can assemble credible offensive
capabilities. That democratization changes the risk calculus. When the same AI
stack that accelerates your workforce also wields things like code execution,
file system access, search across internal knowledge bases, ticketing, or
payments, then any lapse in control turns into real business impact. ... Unlike
smash‑and‑grab malware, these campaigns piggyback on your sanctioned AI
workflows, identities, and connectors. ... Poorly permissioned tools let an
agent read more data than it needs. An attacker nudges the agent to chain tools
in ways the designer didn’t anticipate. ... If an agent learns from prior chats
or a shared vector store, an attacker can seed malicious “facts” that reshape
future actions—altering decisions, suppressing warnings, or inserting endpoints
used for data exfiltration that look routine. ... Teams that succeed make AI
security boring: agents have crisp scopes; high‑risk actions need explicit
consent; every tool call is observable; and detections catch weird behavior
quickly. In that world, an attacker can still try to live off your AI, sure, but
they’ll find themselves fenced in, logged, rate‑limited, and ultimately blocked.
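What "boring" AI security looks like in code is roughly a policy gate in front
of every tool call; the tool names, scopes, and limits below are invented for
illustration:

    # Illustrative policy gate for agent tool calls; all names and limits are made up.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("agent-gate")

    ALLOWED_TOOLS = {"search_kb", "read_ticket", "create_ticket", "issue_refund"}
    HIGH_RISK_TOOLS = {"issue_refund"}  # these require explicit human consent
    RATE_LIMIT_PER_RUN = 20

    class ToolCallDenied(Exception):
        pass

    def gated_call(tool_name, args, tools, calls_made, consent_granted=False):
        """Check scope, consent, and rate limit; log the call; then execute it."""
        if tool_name not in ALLOWED_TOOLS:
            raise ToolCallDenied(f"{tool_name} is outside this agent's scope")
        if tool_name in HIGH_RISK_TOOLS and not consent_granted:
            raise ToolCallDenied(f"{tool_name} requires explicit human consent")
        if calls_made >= RATE_LIMIT_PER_RUN:
            raise ToolCallDenied("rate limit reached for this run")
        log.info("tool=%s args=%s consent=%s", tool_name, args, consent_granted)
        return tools[tool_name](**args)

    # Example wiring with a stub tool:
    tools = {"search_kb": lambda query: f"results for {query!r}"}
    print(gated_call("search_kb", {"query": "VPN outage"}, tools, calls_made=0))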