Quote for the day:
"Worry less, smile more. Don't regret, just learn and grow." -- @Pilotspeaker
The battle for agent connectivity: Can MCP survive the enterprise?
"MCP is the UI for agents. The future of asking ChatGPT to book an Uber and have
a pizza available when you arrive at the hotel only works if we have the
connectivity," said Dag Calafell III, director of Technology Innovation at MCA
Connect, an IT consultancy for manufacturers. But while seamless connectivity
might be the Holy Grail for consumer apps, critics argue that it is irrelevant
-- or even dangerous -- for the enterprise. ... Notably, MCP has significant
backing from prominent companies, including Google, OpenAI, Microsoft and its
creator, Anthropic. Indeed, Calafell argued that while there are competitors out
there, "MCP is winning" precisely because it has seen significant adoption by
large software providers. Still, MCP clearly has significant issues -- mostly
because it's in its infancy. MCP's rapidly evolving specification, uneven
tooling, unclear security and governance controls, and lack of standardized
memory, debugging, and orchestration make it better for experimentation than
reliable enterprise use today. ... "There is room to innovate with a
security-first 'MCP-like' standard that is resource aware, with trusted
catalogues, privileges, scopes, etc. These would either be built on top of MCP,
a sort of MCP v2, or introduced as part of a new protocol," said Liav Caspi,
co-founder and CTO at Legit Security. And, of course, there remains the possibility that the AI industry will take an entirely different direction.
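For readers new to the protocol's mechanics, MCP is JSON-RPC 2.0 under the hood: an agent invokes a tool that a server exposes by sending a "tools/call" request. The Python sketch below shows that message shape only; the endpoint URL and the book_ride tool are hypothetical, and a real session would first perform the protocol's "initialize" handshake and discover tools via "tools/list".

```python
import json
import urllib.request

# Hypothetical endpoint; real MCP servers also commonly use a stdio transport.
MCP_ENDPOINT = "https://mcp.example.com/rpc"

def call_tool(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Send a JSON-RPC 2.0 'tools/call' request -- the message shape MCP
    uses when an agent invokes a tool exposed by a server."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    req = urllib.request.Request(
        MCP_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The article's "book an Uber" scenario, with a made-up tool name:
# call_tool("book_ride", {"pickup": "airport", "dropoff": "hotel"})
```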
Digital Twin in Railways: A Practical Solution to Managing Complex Rail Systems
In the context of railways, digital twins are being deployed to improve asset
lifecycle management, predictive maintenance, and infrastructure planning. By
integrating inputs from IoT devices and advanced analytics platforms, these
models help engineers monitor structural health, detect anomalies, and plan
maintenance before failures occur. ... As the scale and complexity of rail
networks continue to grow, the use of digital twins offers a unified,
comprehensive view of interconnected assets, which empowers rail operators with
faster decision-making and better coordination across departments. This
technology is gradually becoming a core component of smart railway ecosystems.
... The architecture of a digital twin in railway systems is built upon the
integration of multiple digital technologies, including Building Information
Modelling (BIM), the Internet of Things (IoT), Geographic Information Systems
(GIS), and data analytics platforms. Together, these technologies create a
unified framework that connects the physical and digital environments of railway
infrastructure and operations. ... The integration of operational data,
including train movements, energy consumption, and passenger flows, allows
operators to simulate different scenarios and optimise timetables, headways, and
energy use. In dense networks such as urban metro systems, this contributes to
improved punctuality and efficient energy utilisation.
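As a concrete illustration of the predictive-maintenance loop described above, here is a minimal Python sketch of a per-asset twin that ingests IoT sensor readings and flags statistical outliers for inspection. The window size, z-score threshold, and sensor semantics are illustrative assumptions, not details from the article.

```python
from collections import deque
from dataclasses import dataclass
import statistics

@dataclass
class SensorReading:
    asset_id: str   # e.g. a track segment or an axle-bearing sensor (assumed)
    value: float    # e.g. vibration amplitude or temperature (assumed)

class AssetTwin:
    """Minimal digital-twin state for one physical asset: a window of
    recent IoT readings plus a simple statistical anomaly check."""

    def __init__(self, asset_id: str, window: int = 100, threshold: float = 3.0):
        self.asset_id = asset_id
        self.readings: deque[float] = deque(maxlen=window)
        self.threshold = threshold  # z-score beyond which we raise an alert

    def update(self, reading: SensorReading) -> bool:
        """Ingest a reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= 10:  # need some history before scoring
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            if stdev > 0 and abs(reading.value - mean) / stdev > self.threshold:
                anomalous = True  # schedule an inspection before failure
        self.readings.append(reading.value)
        return anomalous

# Usage: feed the twin a stream of readings and act on the flags.
twin = AssetTwin("bridge-07-strain-gauge")
if twin.update(SensorReading("bridge-07-strain-gauge", 12.4)):
    print("anomaly detected: plan maintenance")
```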
Stop mimicking and start anchoring
It’s a fundamental truth that most CIOs are ignoring in their rush to emulate
Big Tech playbooks. The result is a systematic misallocation of resources based
on a fundamental misunderstanding of how value creation works across industries.
... the strategic value of IT should be measured by how effectively it addresses
industry-specific value creation. Different industries have vastly different
technology intensity and value-creation dynamics. In our view, CIOs must therefore resist trend-driven decisions and view IT investment through the lens of their industry’s value creation to sharpen their competitive edge. To understand why IT
strategies diverge across industries shaped by sectoral realities and maturity
differences, we need to examine how business models shape the role of
technology. ... funding business outcomes rather than chasing technology fads is
easier said than done. It’s difficult to unravel the maze created by the
relentless march of technological hype versus the grounded reality of business.
But the role of IT is not universal; its business relevance changes from one
industry to another. ... Long-term value from emerging technologies comes from
grounded application, not blind adoption. In the race to transform, the wisest
CIOs will be those who understand that the best technology decisions are often
the ones that honour, rather than abandon, the fundamental nature of their
business. The future belongs not to those who adopt the most tech, but to those
who adopt the right tech for the right reasons.
Build vs buy is dead — AI just killed it
Something fundamental has changed: AI has made building accessible to everyone. What used to take weeks now takes hours, and what used to require fluency in a programming language now requires fluency in plain English. When the cost and
complexity of building collapse this dramatically, the old framework goes down
with them. It’s not build versus buy anymore. It’s something stranger that we
haven't quite found the right words for. ... And it's not some future state.
This is already happening. Right now, somewhere, a customer rep is using AI to
fix a product issue they spotted minutes ago. Somewhere else, a finance team is
prototyping their own analytical tools because they've realized they can iterate
faster than they can write up requirements for engineering. Somewhere, a team is
realizing that the boundary between technical and non-technical was always more
cultural than fundamental. The companies that embrace this shift will move
faster and spend smarter. They’ll know their operations more deeply than any
vendor ever could. They'll make fewer expensive mistakes, and buy better tools
because they actually understand what makes tools good. The companies that stick
to the old playbook will keep sitting through vendor pitches, nodding along at
budget-friendly proposals. They’ll debate timelines, and keep mistaking
professional decks for actual solutions. Until someone on their own team pops open their laptop and says, “I built a version of this last night. Want to check it out?”
Quantum Tech Hits Its “Transistor Moment,” Scientists Say
“This transformative moment in quantum technology is reminiscent of the
transistor’s earliest days,” said lead author David Awschalom, the Liew Family
Professor of molecular engineering and physics at the University of Chicago, and
director of the Chicago Quantum Exchange and the Chicago Quantum Institute. “The
foundational physics concepts are established, functional systems exist, and now
we must nurture the partnerships and coordinated efforts necessary to achieve
the technology’s full, utility-scale potential. How will we meet the challenges
of scaling and modular quantum architectures?” ... Although advanced prototypes
have demonstrated system operation and public cloud access, their raw
performance remains early in development. For example, many meaningful
applications, including large-scale quantum chemistry simulations, could require
millions of physical qubits with error performance far beyond what is
technologically viable today. ... “While semiconductor chips in the 1970s were
TRL 9 for that time, they could do very little compared with today’s advanced
integrated circuits,” he said. “Similarly, a high TRL for quantum technologies
today does not indicate that the end goal has been achieved, nor does it
indicate that the science is done and only engineering remains. Rather, it reflects that a significant, yet relatively modest, system-level demonstration has
been achieved—one that still must be substantially improved and scaled to
realize the full promise.”
Before you build your first enterprise AI app
Model weights are becoming undifferentiated heavy lifting, the boring
infrastructure that everyone needs but no one wants to manage. Whether you use
Anthropic, OpenAI, or an open weights model like Llama, you are getting a level
of intelligence that is good enough for 90% of enterprise tasks. The differences
are marginal for a first version. The “best” model is usually just the one you
can actually access securely and reliably. ... We used to obsess over the
massive cost of training models. But for the enterprise, that is largely
irrelevant. AI is all about inference now, or the application of knowledge to
power applications. In other words, AI will become truly useful within the
enterprise as we apply models to governed enterprise data. The best place to
build up your AI muscle isn’t with some moonshot agentic system. It’s a simple
retrieval-augmented generation (RAG) pipeline. What does this mean in practice?
Find a corpus of boring, messy documents, such as HR policies, technical
documentation, or customer support logs, and build a system that allows a user
to ask a question and get an answer based only on that data. This forces you to
solve the hard problems that actually build a moat for your company. ... When
you build your first application, design it to keep the human in the loop. Don’t
try to automate the entire process. Use the AI to generate the first draft of a
report or the first pass at a SQL query, and then force a human to review and
execute it.
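To make that starting point concrete, here is a minimal sketch of such a RAG pipeline with a human review gate. TF-IDF retrieval from scikit-learn stands in for a production vector store, call_llm is a placeholder for whichever model API you can access securely, and the sample documents and prompt wording are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative stand-ins for the "boring, messy documents" the article suggests.
documents = [
    "Employees accrue 20 days of paid leave per calendar year.",
    "Parental leave is 16 weeks, to be taken within 12 months of birth.",
    "Expense reports must be filed within 30 days with itemised receipts.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def call_llm(prompt: str) -> str:
    """Placeholder for whichever hosted or open-weights model you can
    access securely (the article argues the choice is marginal for v1)."""
    raise NotImplementedError

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def draft_answer(question: str) -> str:
    """Ground the model in retrieved text only -- no outside knowledge."""
    context = "\n".join(retrieve(question))
    return call_llm(
        f"Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# Human in the loop: the model produces a draft; a person approves it.
if __name__ == "__main__":
    draft = draft_answer("How long is parental leave?")
    if input(f"Draft:\n{draft}\n\nSend to employee? [y/N] ").lower() == "y":
        print("approved")
```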
Cloudflare reveals AI surge & Internet ‘bot wars’ in 2025
Cloudflare reported that use of AI models and AI crawling activity increased
sharply. It said crawling for model training accounted for the majority of AI
crawler traffic during the year. Training-related crawlers generated traffic
that reached as much as seven to eight times the level of retrieval-augmented
generation and search crawlers at peak. Traffic from training crawlers was also as much as 25 times higher than that of AI crawlers tied to direct user actions. The
company said Meta’s llama-3-8b-instruct model was the most widely used on its
network. It was used by more than three times as many accounts as the next most
popular models from providers such as OpenAI and Stability AI. Cloudflare added
that Google’s crawling bot remained the dominant automated actor on the
Internet. It said Googlebot’s crawl volume exceeded that of all other leading AI
bots by a wide margin and was the largest single source of automated traffic it
observed. ... Cloudflare reported a notable shift in the sectors that face the
highest volume of cyber attacks. Civil society and non-profit organisations
became the most attacked group for the first time. The company linked this trend
to the sensitivity and financial value of the data held by such organisations.
This includes personal information about donors, volunteers and beneficiaries.
Cloudflare’s data also showed changes in the causes of major Internet
outages.
Who Owns AI Risk? Why Governance Begins with Architecture
But as AI systems grow more complex, so do their risks. Bias, opacity, data
misuse, model drift, or even overreliance on AI outputs can all cause serious
business, ethical, and reputational damage. This raises an uncomfortable
question: who actually owns the risk of AI? ... AI doesn’t live in isolation. It
consumes enterprise data, depends on cloud services, interacts with APIs, and
influences real business processes. Governance, therefore, can’t rely on policies alone; it must be designed, structured, and embedded into the architecture
itself. For instance, companies like Microsoft and Google have embedded AI
governance directly into their architectural blueprints, creating internal AI
Ethics and Risk Committees that review model design before deployment. This
proactive structure ensures compliance and builds trust long before a model
reaches production. ... In other words, AI Governance is not a department; it’s an ecosystem of shared responsibility. Enterprise Architects connect the dots,
Business Owners set the direction, Data Scientists implement, and Governance
Boards oversee. But the real maturity comes when everyone in the organization,
from the C-suite to the operational level, understands that AI is a shared asset
and a shared risk. ... Modern enterprise architecture is no longer only about
connecting systems. It’s about connecting responsibility. The moment artificial
intelligence becomes part of the business fabric, architecture must evolve to
ensure that governance isn’t something external or reactive; it’s embedded in
the very design of every AI-enabled solution.
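As one small example of what "embedded in the very design" can mean in practice, here is an illustrative Python sketch of an approval gate at the inference boundary; the registry, model IDs, and owner names are hypothetical, not drawn from the article.

```python
import functools

# Hypothetical registry of models cleared by an internal AI Ethics and
# Risk Committee -- the approval list lives in architecture, not policy.
APPROVED_MODELS = {"credit-scoring-v3", "support-summarizer-v1"}

def governed(model_id: str, owner: str):
    """Refuse to serve predictions from a model that has not passed review,
    and tie every call to an accountable owner for auditing."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if model_id not in APPROVED_MODELS:
                raise PermissionError(f"{model_id} not cleared for production")
            result = fn(*args, **kwargs)
            # Stand-in for a real audit log sink.
            print(f"audit: model={model_id} owner={owner} fn={fn.__name__}")
            return result
        return wrapper
    return decorator

@governed(model_id="support-summarizer-v1", owner="cx-platform-team")
def summarize_ticket(text: str) -> str:
    ...  # model inference happens here
```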
The 5 power skills every CISO needs to master in the AI era
According to the World Economic Forum’s Future of Jobs Report, nearly 40% of
core job skills will change by 2030, driven primarily by AI, data and
automation. For security professionals, this means that expertise in network
defense, forensics and patching — while still essential — is no longer enough to
create value. The real impact comes from how we interpret, communicate and apply
what AI enables. ... The biggest myth in security is that technical mastery
equals longevity. In truth, the more we automate, the more we value human
differentiation. Success in the next decade won’t depend on how much code you
can write — but on how effectively you can connect, translate and lead across
systems and silos. When I look at the most resilient organizations today, they
share one trait: They see cybersecurity not as a control function, but as a
strategic enabler. And their leaders? They’re fluent in both algorithms and
empathy. The future of cybersecurity belongs to those who build bridges — not
just firewalls. Cybersecurity is no longer a war between humans and machines —
it’s a collaboration between both. The organizations that succeed will be the
ones that combine AI’s precision with human empathy and creative foresight. As
AI handles scale, leaders must handle meaning. And that’s the true essence of
power skills. The future of cybersecurity belongs to those who can blend AI’s
precision with human expertise — and lead with both.
Manufacturing is becoming a test bed for ransomware shifts
“Manufacturing depends on interconnected systems where even brief downtime can
stop production and ripple across supply chains,” said Alexandra Rose, Director
of Threat Research, Sophos Counter Threat Unit. “Attackers exploit this
pressure: despite encryption rates falling to 40%, the median ransom paid still
reached $1 million. While half of manufacturers stopped attacks before
encryption, recovery costs average $1.3 million and leadership stress remains
high. Layered defenses, continuous visibility, and well-rehearsed response plans
are essential to reduce both operational impact and financial risk,” Rose
continued. Teams were able to stop attacks before encryption in a larger share
of cases, which likely contributed to the decline. Early detection helped reduce
disruption, although strong detection did not guarantee a smooth recovery. ...
IT and security leaders in manufacturing see progress in some areas but ongoing
gaps in others. Detection appears to be improving. Recovery is becoming
steadier. Payment rates are declining. But operational weaknesses persist.
Skills shortages, aging protections, and limited visibility into vulnerabilities
continue to contribute to compromises. These factors shape outcomes as much as
attacker capability. The findings also show a need for stronger internal
support. Security teams are absorbing organizational and emotional strain that
can affect long term performance. Manufacturing operations depend on stable
systems, and teams cannot maintain stability without workloads they can manage.