Quote for the day:
“When opportunity comes, it’s too late to prepare.” -- John Wooden
All in the Data: The State of Data Governance in 2026
For years, Non-Invasive Data Governance was treated as the “nice” approach —
the softer way to apply discipline without disruption. But 2026 has rewritten
that narrative. Now, NIDG is increasingly seen as the only sustainable way to
govern data in a world of continuous transformation. Traditional “assign
people to be stewards” approaches simply cannot keep up with agentic AI, edge
analytics, real-time data products, and the modern demand for organizational
agility. ... Governance becomes the spark that ignites faster value, safer AI,
more confident decision-making, and a culture that welcomes transformation
instead of bracing for it. This catalytic effect is why organizations that
embrace “The Data Catalyst³” in 2026 are not merely improving — they are
accelerating, compounding their gains, and outpacing peers who still treat
governance as a slow, procedural necessity rather than the engine of modern
data excellence. ... This year, metadata is no longer an afterthought. It is
the bloodstream of governance. Organizations are finally acknowledging that
without shared understanding, consistent definitions, and a reliable inventory
of where data comes from and who touches it, AI will hallucinate confidently
while leaders make decisions blindly. ... Perhaps the greatest evolution in
2026 is the rise of governance that keeps pace with AI. Organizations can no
longer review policies once a year or update data inventories only during
budget cycles. Decision cycles are compressing. Change windows are
shrinking.
The Next Two Years of Software Engineering
AI unlocks massive demand for developers across every industry, not just tech.
Healthcare, agriculture, manufacturing, and finance all start embedding
software and automation. Rather than replacing developers, AI becomes a force
multiplier that spreads development work into domains that never employed
coders. We’d see more entry-level roles, just different ones: “AI-native”
developers who quickly build automations and integrations for specific niches.
... Position yourself as the guardian of quality and complexity. Sharpen your
core expertise: architecture, security, scaling, domain knowledge. Practice
modeling systems with AI components and think through failure modes. Stay
current on vulnerabilities in AI-generated code. Embrace your role as mentor
and reviewer: define where AI use is acceptable and where manual review is
mandatory. Lean into creative and strategic work; let the junior+AI combo
handle routine API hookups while you decide which APIs to build. ... Lean into
leadership and architectural responsibilities. Shape the standards and
frameworks that AI and junior team members follow. Define code quality
checklists and ethical AI usage policies. Stay current on compliance and
security topics for AI-produced software. Focus on system design and
integration expertise; volunteer to map data flows across services and
identify failure points. Get comfortable with orchestration platforms. Double
down on your role as technical mentor: more code reviews, design discussions,
technical guidelines.
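To make the "define where AI use is acceptable and where manual review is mandatory" advice concrete, here is a minimal sketch of such a policy expressed as a gate check in Python. The paths, flags, and structure are hypothetical illustrations, not a standard; a real policy would reflect a team's own repositories and risk tolerance.

from dataclasses import dataclass

# Hypothetical policy: AI-generated changes in sensitive areas require human sign-off.
MANUAL_REVIEW_PATHS = ("auth/", "billing/", "payments/", "infra/")

@dataclass
class ChangedFile:
    path: str
    ai_generated: bool  # declared by the author or flagged by tooling (assumption)

def requires_manual_review(files: list[ChangedFile]) -> list[str]:
    """Return paths in a change set that need mandatory human review."""
    return [
        f.path
        for f in files
        if f.ai_generated and f.path.startswith(MANUAL_REVIEW_PATHS)
    ]

if __name__ == "__main__":
    change_set = [
        ChangedFile("auth/token_refresh.py", ai_generated=True),
        ChangedFile("docs/README.md", ai_generated=True),
        ChangedFile("billing/invoice.py", ai_generated=False),
    ]
    for path in requires_manual_review(change_set):
        print(f"Manual review required: {path}")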
What will IT transformation look like in 2026, and how do you know if you're on the right track?
The IT organization will become the keeper of the journal in terms of business
value, and a lot of organizations haven't developed those muscles yet. ...
Technical complexity remains a huge challenge. Back-end systems are becoming
more complicated, requiring stronger architecture frameworks, faster design
cycles and reliable data access to support emerging agentic AI frameworks. ...
"Many IT organizations have taken the easy way," said de la Fe, referring to
cloud and application service providers. As a result, their data is spread
across different environments. Organizations may technically own their data,
he said, but "it isn't with them -- or architected in a manner where they can
access and use it as they may need to." ... "They believe it's a period of
architectural redux because applications are becoming more heterogeneous,"
Vohra said. "Their architecture must be more modular and open, but they can't
simply say no to core applications, because the business will demand them.
They must be more responsive to the business than ever before." ... Without
business-IT alignment, IT cannot deliver the business impact the organization
now expects. CIOs are under increasing pressure from senior leadership and
boards to improve efficiency and deliver business value, as measured in
business KPIs rather than traditional IT KPIs. On the technology side, CIOs
also need to ensure they are architecting for the future.
Why CISOs Must Adopt the Chief Risk Officer Playbook
As the threat landscape becomes increasingly complex due to AI acceleration,
shifting regulations, and geopolitical volatility, the role of the security
leader is evolving. For CISOs and their teams, the McKinsey research provides
a blueprint for transforming from technical gatekeepers into strategic risk
leaders. ... A common question in the industry is whether a company needs both
a Chief Risk Officer and a Chief Information Security Officer (CISO). ...
Understanding the difference in what these two leaders look for is key to
collaboration.
Primary goal for CRO: Protect the organization's financial health and long-term viability. Primary goal for the CISO: Protect the confidentiality, integrity, and availability of digital assets.
Key metric for CRO: Risk-adjusted return on capital and insurance premium outcomes. Key metric for CISO: Mean time to detect (MTTD), threat actor activity, and control effectiveness.
Focus area for CRO: Market shifts, credit risk, geopolitical crises, and supply chain fragility. Focus area for CISO: Vulnerabilities, phishing campaigns, ransomware, and insider threats.
Outcome for CRO: Ensuring the business can survive any "bad day," financial or otherwise. Outcome for CISO: Ensuring the digital infrastructure is resilient against constant attack.
... The next generation of cybersecurity leaders will
not just be the ones who can write the best code or configure the tightest
firewall. They will be the ones who can walk into a boardroom, speak the
language of the CRO, and explain how a specific technical risk impacts the
organization's bottom line.
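To make the CISO metric above concrete, here is a minimal Python sketch of how mean time to detect (MTTD) could be computed from incident records; the field names, timestamps, and incident data are hypothetical.

from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the intrusion began vs. when it was detected.
incidents = [
    {"occurred": datetime(2026, 1, 3, 9, 0),   "detected": datetime(2026, 1, 3, 14, 30)},
    {"occurred": datetime(2026, 1, 9, 22, 0),  "detected": datetime(2026, 1, 10, 1, 15)},
    {"occurred": datetime(2026, 1, 20, 6, 45), "detected": datetime(2026, 1, 20, 7, 5)},
]

# MTTD = average of (detection time - occurrence time) across incidents.
detection_delays_hours = [
    (i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incidents
]
print(f"MTTD: {mean(detection_delays_hours):.1f} hours")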
Passwords are where PCI DSS compliance often breaks down
CISOs often ask where password managers fit within the PCI DSS language. The
standard does not mandate specific technologies, but it defines outcomes that
password managers help achieve. Requirement 8 focuses on identifying users and
authenticating access. Unique credentials and protection of authentication
factors are core expectations. Requirement 12.6 addresses security awareness.
Training must reflect real risks and employee responsibilities. Demonstrating
that employees are trained to use approved credential management tools
strengthens assessment evidence. Self-assessment questionnaires reinforce this
operational focus. They ask how credentials are handled, how access is reviewed,
and how training is documented, pushing organizations to demonstrate process
rather than policy. ... “Security leaders want to know who accessed what and
when. That visibility turns password management from a convenience feature into
a control.” ... Culture shows up in small choices. Whether employees ask before
sharing access. Whether they trust approved tools. Whether security feels like
support or friction. PCI DSS 4.x pushes organizations to take those signals
seriously. Passwords sit at the center of that shift because they touch every
system and every user. Training alone does not change behavior. Tools alone do
not create understanding.
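As an illustration of the "who accessed what and when" visibility described above, here is a minimal Python sketch of an access-review report built from credential vault audit events. The event format and data are hypothetical; real password managers expose their own audit log schemas.

from collections import defaultdict

# Hypothetical audit events exported from a password manager's log.
audit_events = [
    {"user": "alice", "credential": "payments-db-admin", "time": "2026-01-05T10:12Z"},
    {"user": "bob",   "credential": "payments-db-admin", "time": "2026-01-06T08:40Z"},
    {"user": "alice", "credential": "cde-firewall",      "time": "2026-01-07T16:03Z"},
]

# Group accesses by credential so reviewers can see who touched what, and when.
access_by_credential = defaultdict(list)
for event in audit_events:
    access_by_credential[event["credential"]].append((event["user"], event["time"]))

for credential, accesses in access_by_credential.items():
    print(credential)
    for user, time in accesses:
        print(f"  {user} at {time}")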
AI Demand and Policy Shifts Redraw Europe’s Data Center Map for 2026
Rising demand for AI, particularly large language models (LLMs) and generative
AI, is driving the need for large-scale GPU clusters and advanced
infrastructure. The EU's forthcoming Cloud and AI Development Act aims to triple
the region's data center processing capacity within five to seven years, with
streamlined approvals and public funding for energy-efficient facilities
expected to stimulate growth. ... “We expect to see a strategic bifurcation,”
Lamb said, with FLAP-D metros continuing to attract latency-sensitive enterprise
and inference workloads that require proximity to end users, while large-scale
AI training deployments gravitate toward regions with abundant, cost-effective
renewable energy. ... Despite abundant renewables and favorable cool conditions,
the Nordics have not scaled as quickly as anticipated. Thorpe reported steady
but slower growth, citing municipal moratoriums – particularly in Sweden – and
lower fiber density. Even so, AI training workloads are renewing interest in
Norway and Finland. “The northern part of Norway is a good example,” Thorpe
said, noting OpenAI’s planned Stargate facility powered entirely by
hydroelectric energy. “They are able to achieve much lower PUE [power usage
effectiveness] because of the cooler climate.” ... Meanwhile, stricter
energy-efficiency requirements are complicating the planning process.
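For readers unfamiliar with the PUE figure Thorpe mentions, here is a minimal worked example in Python; the energy numbers are invented for illustration and do not describe any real facility.

# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# A cooler climate cuts cooling energy, pushing the ratio toward the ideal of 1.0.
it_equipment_kwh = 10_000_000         # energy consumed by servers, storage, network
cooling_and_overhead_kwh = 1_500_000  # cooling, power distribution losses, lighting

pue = (it_equipment_kwh + cooling_and_overhead_kwh) / it_equipment_kwh
print(f"PUE: {pue:.2f}")  # 1.15 -- lower overhead means a lower (better) PUE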
Top cyber threats to your AI systems and infrastructure
Multiple types of attacks against AI systems are emerging. Some attacks, such as data
poisoning, occur during training. Others, such as adversarial inputs, happen
during inference. Still others, such as model theft, occur during deployment.
... Here, the attack goes after the model itself, seeking to produce inaccurate
results by tampering with the model’s architecture or parameters. Some
definitions of model poisoning also include attacks where the model’s
training data has been corrupted through data poisoning. ... “With prompt
injection, you can change what the AI agent is supposed to do,” says Fabien
Cros ... Model owners and operators use perturbed data to test models for
resiliency, but hackers use it to disrupt. In an adversarial input attack,
malicious actors feed deceptive data to a model with the goal of making the
model output incorrect. ... Like other software systems, AI systems are built
with a combination of components that can include open-source code, open-source
models, third-party models, and various sources of data. Any security
vulnerability in the components can show up in the AI systems. This makes AI
systems vulnerable to supply chain attacks, where hackers can exploit
vulnerabilities within the components to launch an attack. ... Also called model
jailbreaking, this attack aims to get AI systems — primarily through
engaging with LLMs — to disregard the guardrails that confine their actions and
behavior, such as safeguards to prevent harmful, offensive, or unethical
outputs.
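To ground the adversarial input attack described above, here is a minimal Python sketch of the idea against a toy logistic-regression model, using a gradient-sign perturbation in the spirit of FGSM. The weights, input, and epsilon are invented for illustration; real attacks target far larger models and typically use much smaller perturbations in high-dimensional inputs.

import numpy as np

# Toy logistic-regression "model": weights are fixed and known to the attacker.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability that the input belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input the model classifies confidently (score ~0.94).
x = np.array([1.0, -0.5, 0.3])
print(f"original score: {predict(x):.3f}")

# For this model the gradient of the score w.r.t. the input is proportional to w,
# so stepping each feature against sign(w) pushes the score down.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)   # targeted perturbation of the input
print(f"adversarial score: {predict(x_adv):.3f}")  # drops below 0.5 (~0.39)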
The future of authentication in 2026: Insights from Yubico’s experts
As we look ahead to the future of authentication and identity, 2026 will be a pivotal year as the industry intensifies its focus on the standardization work required to make post-quantum cryptography (PQC) viable at scale as we near a post-quantum future. ... The proven, most effective solution to combat stolen and fake identities is the use of verifiable credentials – specifically, strong authentication combined with digital identity verification. The good news is countries around the world are taking action, with the EU moving forward with a bold plan over the next year: By late December 2026, each Member State must make at least one EUDI wallet available. ... AI's usefulness has rapidly improved over the years, and I anticipate that it will eventually help the general public in a meaningful way. In 2026, the cybersecurity industry should focus more efforts globally on accelerating the adoption of digital content transparency and authenticity standards to help everyone discern fact from fiction and continue the phishing-resistant MFA journey to minimize some of the impact of scams. ... In 2026, there will be a pivotal shift in the digital identity landscape as the industry moves beyond a narrow, consumer-centric focus to one focused on the enterprise. While the public conversation around digital identities has historically centered on consumer-facing scenarios like age verification, the coming year will bring a realisation that robust digital identity truly belongs in the heart of businesses.
7 changes to the CIO role in 2026
As AI transforms how people do their jobs, CIOs will be expected to step up and
help lead the effort. “A lot of the conversations are about implementing AI solutions, how to make solutions work, and how they add value,” says Ryan Downing. “But the reality is with the transformation AI is bringing into the workplace right now, there’s a fundamental change in how everyone will be working.” ... This year, the build or buy decisions for AI will have dramatically bigger impacts than they did before. In many cases, vendors can build AI systems better, quicker, and cheaper than a company can do it themselves. And if a better option comes along, switching is a lot easier than when you’ve built something internally from scratch. ... The key is to pick platforms that have the ability to scale, but are decoupled, he says, so enterprises can pivot quickly, but still get business value. “Right now, I’m prioritizing flexibility,” he says. Bret Greenstein, chief AI officer at management consulting firm West Monroe Partners, recommends CIOs identify aspects of AI that are stable, and those that change rapidly, and make their platform selections accordingly. ... “In the past, IT was one level away from the customer,” he says. “They enabled the technology to help business functions sell products and services. Now with AI, CIOs and IT build the products, because everything is enabled by technology. They go from the notion of being services-oriented to product-oriented.”
Agentic AI scaling requires new memory architecture
To avoid recomputing an entire conversation history for every new word
generated, models store previous states in the KV cache. In agentic workflows,
this cache acts as persistent memory across tools and sessions, growing linearly
with sequence length. This creates a distinct data class. Unlike financial
records or customer logs, KV cache is derived data; it is essential for
immediate performance but does not require the heavy durability guarantees of
enterprise file systems. General-purpose storage stacks, running on standard
CPUs, expend energy on metadata management and replication that agentic
workloads do not require. The current hierarchy, spanning from GPU HBM (G1) to
shared storage (G4), is becoming inefficient ... The industry response involves
inserting a purpose-built layer into this hierarchy. The ICMS platform
establishes a “G3.5” tier—an Ethernet-attached flash layer designed explicitly
for gigascale inference. This approach integrates storage directly into the
compute pod. By utilising the NVIDIA BlueField-4 data processor, the platform
offloads the management of this context data from the host CPU. The system
provides petabytes of shared capacity per pod, boosting the scaling of agentic
AI by allowing agents to retain massive amounts of history without occupying
expensive HBM. The operational benefit is quantifiable in throughput and energy.