Quote for the day:
“If you're not prepared to be wrong, you'll never come up with anything original.” -- Ken Robinson
Strategy is dying from learning lag, not market change
At first, you might think this is about being more agile, more innovative, or
more aggressive. However, those are reactions, not solutions. The real shift
is deeper: strategy no longer scales when the underlying assumptions expire
too quickly. The advantage erodes because the environment moves faster than
the organization’s ability to sense, understand and adapt to it. ... Strategic
failure today is less about being wrong and more about staying wrong for too
long. ... One way, and perhaps the only one, out of uncertainty is to learn
faster and closer to where the actual signals appear. Learning to me is the
disciplined updating of beliefs when new evidence arrives. Every decision is a
prediction about how things will work. When reality proves you wrong, learning
is how you fix that prediction. In a stable environment, you can afford to
learn slowly. However, in unstable ones, like today’s, slow learning becomes
existential. ... Organizations don’t fall behind all at once. They fall behind
step by step: first in what they notice, then in how they interpret it, then
in how long it takes to decide what to do and finally in how slowly they act.
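The "disciplined updating of beliefs when new evidence arrives" that the author describes has a precise form: Bayesian updating. A minimal sketch, with illustrative numbers (none of these probabilities come from the article):

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: revise P(assumption holds) after one piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# A strategy is a prediction; each market signal should revise our confidence.
belief = 0.70  # prior confidence that the strategy's core assumption holds
for p_if_true, p_if_false in [(0.4, 0.8), (0.3, 0.9)]:  # two contrary signals
    belief = update(belief, p_if_true, p_if_false)
print(round(belief, 2))  # → 0.28: after two contrary signals, confidence falls
```

Slow learning, in this framing, is simply failing to run the update when the evidence arrives.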
... Strategy stalls not because people refuse to change, but because they
can’t agree on the story beneath the change. They chased precision in
interpretation when the real advantage would have come from running small
tests to find out faster which interpretation is correct.
The new tech job doesn't require a degree. It starts in a data center
The answer won't be found in Silicon Valley or Data Center Alley. It's closer
to home. Veterans, trade workers, and high school graduates not headed to
college don't come through traditional pipelines, but they bring the right
aptitude and mindset to the data center. Veterans have discipline and
process-driven thinking that fits naturally into our operations — and for
many, these roles offer a transition into a stable career. Someone who kept an
aircraft carrier running knows what it means to manage infrastructure that
can't fail. Many arrive with experience in related systems and are comfortable
with shift work and high stakes. ... Young adults without college plans are
often overlooked, but some excel in hands-on settings and just need an
opportunity to prove it. Once they learn about a data center career and where
it can take them, it becomes a chance to build a middle-class lifestyle close
to home. ... Hiring nontraditional candidates is only the first step. What
keeps them is a promotion track that works. After four weeks of hands-on and
self-guided onboarding, techs can pursue certifications in battery backup
systems, tower clearance, generator safety, and more. When qualified, they
show it in the field and move up. This kind of investment has a ripple effect.
A paycheck can lead to a mortgage and financial stability. And as techs move
up or out, someone else steps in — maybe through a local program that appeared
once your jobs did.
Automated data poisoning proposed as a solution for AI theft threat
The technique, created by researchers from universities in China and
Singapore, is to inject plausible but false data into what’s known as a
knowledge graph (KG) created by an AI operator. A knowledge graph holds the
proprietary data used by the LLM. Injecting poisoned or adulterated data into
a data system for protection against theft isn’t new. What’s new in this tool
– dubbed AURA (Active Utility Reduction via Adulteration) – is that authorized
users have a secret key that filters out the fake data so the LLM’s answer to
a query is usable. If the knowledge graph is stolen, however, it’s unusable by
the attacker unless they know the key, because the adulterants will be
retrieved as context, causing deterioration in the LLM’s reasoning and leading
to factually incorrect responses. The researchers say AURA degrades the
performance of unauthorized systems to an accuracy of just 5.3%, while
maintaining 100% fidelity for authorized users, with “negligible overhead,”
defined as a maximum query latency increase of under 14%. ... As the use of AI
spreads, CSOs have to remember that artificial intelligence and everything
needed to make it work also make it much harder to recover from bad data being
put into a system, Steinberg noted. ... “For now, many AI systems are being
protected in similar manners to the ways we protected non-AI systems. That
doesn’t yield the same level of protection, because if something goes wrong,
it’s much harder to know if something bad has happened, and it’s harder to get
rid of the implications of an attack.”
From Zero Trust to Cyber Resilience: Why Architecture Alone Will Not Protect Enterprises in 2026
The core challenge facing CISOs is not whether Zero Trust is implemented, but
whether the organization can continue to operate when, inevitably, controls
fail. Modern threat actors no longer focus exclusively on breaching defenses;
they aim to disrupt operations, degrade trust, and extend business impact over
time. In this context, architecture alone is insufficient. What enterprises
require is cyber resilience: the ability to anticipate, withstand, recover
from, and adapt to cyber disruption. ... Zero Trust answers the question “Who
can access what?” Cyber resilience answers a more consequential one: “How
quickly can the business recover when access controls are no longer the
primary failure point?” ... Resilience engineering reframes cybersecurity as a
property of complex socio-technical systems. In this model, failure is not an
anomaly; it is an expected condition. The objective shifts from breach
avoidance to disruption management. In practice, this means evolving from an
assume breach mindset to an assume disruption operating model, one where
systems, teams, and leadership are prepared to function under degraded
conditions. ... To prepare for 2026, CISOs should: Treat cyber resilience as a
continuous operating capability, not a project; Integrate cybersecurity with
business continuity and crisis management; Train executives and board
members through realistic disruption scenarios; and Invest in recovery
validation, not just control deployment.
Generative AI and the future of databases
The data is at the heart of your line of business application, but it is also
changing all the time, and if you keep extracting the data into some other
corpus it gets stale. You can view it as two approaches: replication or
federation. Am I going to replicate out of the database to some other thing or
am I going to federate into the database? ... engineers know how to write good
SQL queries. Whether they know how to write good English language description
of the SQL queries is a completely different matter, but let’s assume for a
second we can or we can have AI do it for us. Then the AI can figure out which
tool to call for the user request and then generate the parameters. There are
some things to worry about in terms of security. How can you set the right
secure parameters? What parameters are the LLM allowed to set versus not
allowed to set? ... When you combine structured and unstructured data, the
next step is that it’s not just about exact results but about the most
relevant results. In this sense databases start to have some of the
capabilities of search engines, which is about relevance and ranking, and what
becomes important is almost like precision versus recall for information
retrieval systems. But how do you make all of this happen? One key piece is
vector indexing. ... AI search is a key attribute of an AI-native database.
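A brute-force sketch of what vector indexing buys you: scoring by similarity rather than exact match. The three-dimensional vectors and document names below are toy illustrations; production systems use high-dimensional embeddings and an approximate index (e.g. HNSW or IVF) instead of a full scan:

```python
import math

def cosine(a, b):
    """Cosine similarity: the usual relevance score between embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, docs, k=2):
    """Exact nearest-neighbour scan; a vector index approximates this cheaply."""
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return ranked[:k]

docs = {"invoice": [0.9, 0.1, 0.0], "contract": [0.8, 0.2, 0.1], "memo": [0.0, 0.1, 0.9]}
print(top_k([1.0, 0.0, 0.0], docs))  # → ['invoice', 'contract']
```

This is also where the precision-versus-recall trade-off mentioned above enters: an approximate index gives up a little recall for a large drop in query cost.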
And the other key attribute is AI functions.
Cyber Risk Trends for 2026: Building Resilience, Not Just Defenses
On the defensive side, AI can accelerate detection and response, but tooling
without guardrails will create fresh exposures. Your questions as a board should
be: Where have we embedded AI in critical workflows? How do we assure the
provenance and integrity of the data those models touch? Are we red-teaming our
AI-enabled processes, not just our perimeter? ... Second, third-party ecosystems
present an attack surface. The risk isn’t abstract: it’s a payroll provider outage
that stops salaries, a logistics partner breach that stalls distribution, or a
SaaS compromise that leaks your crown jewels. ... Third is quantum computing.
Some will say it’s too early; some will say it’s too late. The pragmatic
position is this: crypto agility is a business requirement now. Inventory where
and how you use cryptography—applications, devices, certificates, key
management, data at rest and in transit. Prioritize crown-jewel systems and
long-lived data that must remain confidential for years. ... Fourth is the risk
posed by geopolitics. We live in a more unstable world, and digital risk doesn’t
respect borders. Conflicts spill into cyberspace, data sovereignty rules
tighten, and critical components can become chokepoints overnight. ... We won’t
repel every attack in 2026. But we can decide to bend rather than break.
Resilience comes of age when it stops being a slogan and becomes a practiced
capability—where governance, operations, technology, and people move as one.
Will there be a technology policy epiphany in 2026?
The UK government still seems implacably opposed to bringing forward any
cross-sector, comprehensive AI legislation. Its one-liner in the 2024 King’s
Speech said the government “will seek to establish the appropriate legislation
to place requirements on those working to develop the most powerful artificial
intelligence models.” That seemed sparing at the time, and now seems
extraordinarily overblown. ... Turning to crypto-asset regulation, 2026 will
continue the journey from draft legislation being published on 15 December last
year through to 25 October 2027 – yes, that’s meant to say 2027 – for the current
“go live” date. Already we have seen some definitional clarification and the
arrival of new provisions related to market abuse, public offers and
disclosures. ... A critical thread to all of this is cyber. The Cyber Security
Bill receives its second reading in the Commons today, 6 January. I’m very much
looking forward to the bill arriving in the Lords later in the Spring and would
welcome your thoughts on what’s in and what currently is not. If that wasn’t
enough for week one of 2026, we have the committee stage of the Crime and
Policing Bill in the Lords tomorrow, Wednesday 7 January. ... By contrast, there
is much chat on digital ID. A consultation is said to be coming this month with
a draft bill in May’s speech. This has hardly been helped by the government last
year hanging its digital ID coat all around illegal immigration - a more than
unfortunate decision.
The Big Shift: Five Trends Show Why 2026 is About Getting to Value
The conversation shifts from “What can this AI do?” to “What problem does it
solve, and how much value does it unlock?”—and the technology that wins won’t be
the most sophisticated, but the one that directly accelerates revenue, reduces
friction in customer-facing workflows, or demonstrably improves employee
productivity within a 12-month payback window. Crawford says this is “getting
back to brass tacks.” “Organizations will carefully define their business
objectives, whether customer engagement, revenue growth, employee productivity,
or whatever it needs to be, before selecting a technology,” he says. ... In
2026, if your digital transformation project can’t demonstrate meaningful return
within twelve months, it competes for oxygen with projects that can, and many
won’t survive that fight, Batista says. This compression of payback expectations
reflects a fundamental shift in how CFOs and boards view technology investments.
Initiatives based on regulatory or compliance requirements—things mandated by
law, for example—still justify longer timelines, but discretionary projects face
much stricter scrutiny, Batista says. ... When it comes to limiting factors in
scaling successful AI deployments, Crawford says the top issue will be failures
in AI governance. “AI governance will be the bottleneck that constrains an
enterprise’s ability to scale AI, not AI capability itself. And enterprises
rushing to deploy autonomous agents without governance infrastructure will face
either painful reworks or serious operational issues.”
Why CES 2026 Signals The End Of ‘AI As A Tool’
The idea of AI as a coordinating layer or “ambient background” across entire
ecosystems of tools and devices was also prominent this year. Samsung outlined
its vision of AI companions for everyday life, demonstrating how smart
appliances will form an intelligent background fabric to our day-to-day
activities. As well as in the home, Samsung is a key player in industrial
technology, where the same principle will see AI coordinating and optimizing
operations across smart, connected enterprise systems. ... First, it’s clear
that today’s leading manufacturers and developers believe that the future of AI
lies in agentic, always-on systems, rather than free-standing, isolated tools
and applications. Just as consumer AI now coordinates home and entertainment
technology, enterprise AI will orchestrate workflows, schedules, documents, data
and codebases, anticipating business needs and proactively solving problems
before they occur. Another thing that can’t be overlooked is that consumer
technology clearly shapes our expectations and tolerances of enterprise
technology. Workplace AI that doesn’t live up to the seamless, friction-free
experiences provided by consumer AI will quickly cause frustration, limiting
adoption and buy-in. ... As this AI infrastructure becomes more capable, the
role of employees will shift, too, from executing routine tasks to supervising
automated processes, as well as applying uniquely human skills to challenges
that machines still can’t tackle.
Build Resilient cloudops That Shrug Off 99.95% Outages
If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration.
We encode risk controls in Terraform so they’re enforced before a resource even
exists. Tagging, encryption, backup retention, network egress—these are all
policy. We don’t rely on code reviews to catch missing encryption on a bucket;
the pipeline fails the plan. That’s how cloudops scales across teams without nag
threads. ... If you’re starting from scratch, standardize on OpenTelemetry
libraries for services and send everything through a collector so you can change
backends without code churn. Sampling should be responsive to pain—raise trace
sampling when p95 latency jumps or error rates spike. Reducing cardinality in
labels (looking at you, per-user IDs) will keep storage and costs sane. Most
teams benefit from a small set of “stop asking, here it is” dashboards: request
volume and latency by endpoint, error rate by version, resource saturation by
service, and database health with connection pools and slow query counts. ... We
don’t win medals for shipping fast; we win trust for shipping safely.
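The pain-responsive sampling described earlier reduces to a small policy function. The thresholds and rates below are illustrative assumptions, not OpenTelemetry defaults:

```python
def percentile(values, p):
    """Nearest-rank percentile over a window of latency samples (ms)."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]

def sampling_rate(latencies_ms, errors, requests, p95_slo_ms=250, error_slo=0.01):
    """Raise trace sampling while the service is in pain; stay cheap otherwise."""
    in_pain = (percentile(latencies_ms, 95) > p95_slo_ms
               or errors / max(requests, 1) > error_slo)
    return 0.5 if in_pain else 0.01  # 50% under pain, 1% baseline

print(sampling_rate([120] * 99 + [900], errors=0, requests=100))  # → 0.01
```

In an OpenTelemetry setup a rate like this would typically be applied at the collector, which keeps the policy out of service code, in line with the backend-swapping point above.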
Progressive delivery lets us test the actual change, in production, on a small
slice before we blast everyone. We like canaries and feature flags together:
canary catches systemic issues; flags let us disable risky code paths within a
version. ... Reliability with no cost controls is just a nicer way to miss your
margin. We give cost the same respect as latency: we define a monthly budget per
product and a change budget per release.
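The canary-plus-flag pairing above can be sketched in a few lines. The bucketing scheme, flag name, and percentages are illustrative assumptions:

```python
import hashlib

FLAGS = {"new_billing_path": True}  # a flag disables a risky path, no redeploy

def in_canary(user_id, percent):
    """Deterministic bucketing: a given user always lands in the same slice."""
    return int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100 < percent

def handle_request(user_id):
    if FLAGS["new_billing_path"] and in_canary(user_id, percent=5):
        return "v2"  # small canary slice exercises the new code path
    return "v1"      # everyone else stays on the proven path
```

The canary surfaces systemic issues on a 5% slice; if the new path misbehaves, setting the flag to False routes that slice back to v1 on the next request, without a rollback deploy.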