Quote for the day:
"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone
How CIOs can get a better handle on budgets as AI spend soars
Everyone wants to become AI-centric or AI-native, says West Monroe’s Greenstein.
“But nobody has extra buckets of money to do this unless it’s existential to
their company,” he says. So moving money from legacy projects to AI is a popular
strategy. “It’s a shift of priorities within companies,” he says. “They look at
their investments and ask how many are no longer needed because of AI, or how
many can be done with AI. Plus, they’re putting pressure on vendors to drive
down costs. They’re definitely squeezing existing suppliers.” Even large,
tech-forward companies might have to do this kind of juggling. ... “AI is in a
self-funding model at the moment,” he says. “We’re shifting investment from
legacy technologies to AI.” ... Another challenge to budgeting is the demands
that AI places on people, systems, and data. One of the most significant
challenges to managing AI costs is talent, says Principal’s Arora. “Skill gaps
and cross-team dependencies can slow deliveries and drive up costs,” he says.
Then there’s the problem of evolving regulations, and the need to continuously
adapt governance frameworks to stay resilient in the face of these changes.
Organizations also often underestimate how much money will be needed to train
employees, and to bring data and other foundational systems in line with what’s
needed for AI. “Legacy environments add complexity and expense,” he adds. “These
one-time costs are heavy but essential to avoid long-term inefficiencies.”

AI agent evaluation replaces data labeling as the critical path to production deployment
It's a fundamental shift in what enterprises need validated: not whether their
model correctly classified an image, but whether their AI agent made good
decisions across a complex, multi-step task involving reasoning, tool usage and
code generation. If evaluation is just data labeling for AI outputs, then the
shift from models to agents represents a step change in what needs to be
labeled. Where traditional data labeling might involve marking images or
categorizing text, agent evaluation requires judging multi-step reasoning
chains, tool selection decisions and multi-modal outputs — all within a single
interaction. "There is this very strong need for not just human in the loop
anymore, but expert in the loop," Malyuk said. He pointed to high-stakes
applications like healthcare and legal advice as examples where the cost of
errors remains prohibitively high. ... The challenge with evaluating agents
isn't just the volume of data; it's the complexity of what needs to be assessed.
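That multi-dimensional assessment can be made concrete with a small sketch. This is an illustration only, not a description of any vendor's product: the trace format, the dimension names, and the `score_trace` helper are all made up for the example, which scores a recorded agent trace on more than one axis (tool choice and stated reasoning) rather than grading a single final output.

```python
from dataclasses import dataclass

@dataclass
class AgentStep:
    """One step of a recorded agent trace (field names are illustrative)."""
    reasoning: str   # the agent's stated rationale for this step
    tool: str        # which tool the agent chose to invoke
    output: str      # what the step produced

def score_trace(trace, allowed_tools):
    """Score a whole trace on several dimensions, not just the final answer."""
    scores = {"tool_selection": 0.0, "reasoning_present": 0.0}
    if not trace:
        return scores
    # Dimension 1: did every step use a permitted tool?
    valid = sum(1 for s in trace if s.tool in allowed_tools)
    scores["tool_selection"] = valid / len(trace)
    # Dimension 2: did the agent state a non-empty rationale at each step?
    reasoned = sum(1 for s in trace if s.reasoning.strip())
    scores["reasoning_present"] = reasoned / len(trace)
    return scores

trace = [
    AgentStep("Need the current rate, so query the API", "rates_api", "4.1%"),
    AgentStep("", "calculator", "payment = 1822.53"),
]
print(score_trace(trace, allowed_tools={"rates_api", "calculator"}))
# tool_selection = 1.0 (both tools permitted), reasoning_present = 0.5
```

In practice the interesting dimensions (was the reasoning *sound*, was the *right* tool chosen) need a human or expert judge rather than a mechanical check, which is exactly the "expert in the loop" point made below.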
Agents don't produce simple text outputs; they generate reasoning chains, make
tool selections, and produce artifacts across multiple modalities. ... While
monitoring what AI systems do remains important, observability tools measure
activity, not quality. Enterprises require dedicated evaluation infrastructure
to assess outputs and drive improvement. These are distinct problems requiring
different capabilities.

How IT leaders can build successful AI strategies — the VC view
It’s clear now that AI is transforming existing business structures, operational
layers, organizational charts, and processes. “As a CIO, if you look at long
term, you get better visibility of the outcomes of AI,” said Sandhya
Venkatachalam, founder and partner at Axiom Partners. “Today, a lot of these net
new capabilities are taking the form of AI performing the work or producing the
outcomes that humans do, versus emulating or automating software tools,”
Venkatachalam said. The shift will inevitably displace legacy systems and
processes. She cited customer support as an early area ripe for upheaval. ...
VCs typically don’t look at what buyers need right now; they look ahead.
Similarly, IT leaders should look at how AI can transform their industry in the
future. The real value of AI is in displacing legacy stacks and processes, and
short-term wins or scattered AI initiatives mean nothing, Venkatachalam said. Adding
AI to existing workflows — like building an internal large language model (LLM)
— is often a waste. Enterprises are also wasting time building proprietary tools
and infrastructures, which duplicates work already commoditized by big research
labs, Venkatachalam said. ... AI strategies link IT directly to core products,
which dictates market survival. IT decision-makers should align AI strategies to
their vertical markets. Physical AI is considered the next big AI technology
after agents in some areas.

Could AI transparency backfire for businesses?
Work is underway to devise common ways to disclose the use of AI in content
creation. The British Standards Institute’s (BSI) common standard (BS ISO/IEC
42001:2023) provides a framework for organisations to establish, implement,
maintain, and continually improve an AI management system (AIMS), ensuring AI
applications are developed and operated ethically, transparently, and in
alignment with regulatory standards. It helps manage AI-specific risks such as
bias and lack of transparency. Mark Thirwell, the BSI’s global digital director,
says that such standards are critical for building trust in AI. For his part,
Thirwell is mainly focused on improving the transparency of underlying training
data over whether content is disclosed as AI-generated. “You wouldn’t buy a
toaster if someone hadn’t checked it to make sure it wasn’t going to set the
kitchen on fire,” he argues. Thirwell posits that common standards can, and
must, interrogate the trustworthiness of AI. Does it do what it says it’s going
to do? Does it do that every time? Does it not do anything else, as
hallucination and misinformation become increasingly problematic? Does it keep
your data secure? Does it have integrity? And, unique to AI, is it ethical? “If
it’s detecting cancers or sifting through CVs,” he says, “is there going to be a
bias based on the data it holds?” This is where transparency of the underlying
data becomes key.

The Importance of Having and Maintaining a Data Asset List and how to create one
The explosive growth of structured and unstructured data has made it
increasingly difficult for organizations to track what information they hold
across networks, devices, SaaS applications, and cloud platforms. Without clear
visibility, businesses face higher risks, including security gaps, audit
failures, regulatory penalties, and rising storage costs. ... Before we get into
how to build a data asset inventory, it’s important to understand why regulators
now expect organizations to maintain one. The compliance landscape in 2025 is
more demanding than ever, and nearly every major framework explicitly or
implicitly requires data mapping and data inventory management. ... A data asset
inventory is a structured, centralized record of all the data types and systems
that power your organization. The goal is to gain full visibility into what data
exists, where it’s stored, who manages it, and how it flows, while also
capturing any compliance obligations tied to that data. ... Many organizations
rely on third-party providers to manage or process sensitive data, which can
improve efficiency but also introduce new risks. External partnerships expand
your organization’s digital footprint, increase the potential attack surface,
and add complexity to data governance. ... A data asset inventory isn’t a
one-time task; it’s a living, evolving document. As your organization adopts new
tools, expands into new markets, or grows its teams, your inventory should
evolve to reflect these changes.
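The inventory described above is, at bottom, a structured record per data asset. As a minimal sketch (the field names, the `DataAsset` type, and the example entries are all assumptions for illustration, not a prescribed schema), each entry captures what the data is, where it lives, who owns it, where it flows, and which compliance obligations attach to it:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One entry of a data asset inventory (field names are illustrative)."""
    name: str                 # what the data is, e.g. "customer_emails"
    location: str             # where it is stored (system, region)
    owner: str                # who manages it
    flows_to: list = field(default_factory=list)     # downstream systems/vendors
    obligations: list = field(default_factory=list)  # e.g. ["GDPR", "PCI DSS"]
    third_party_processor: bool = False              # handled by an external provider?

def assets_with_obligation(inventory, framework):
    """Filter the inventory to assets covered by a given compliance framework."""
    return [a.name for a in inventory if framework in a.obligations]

inventory = [
    DataAsset("customer_emails", "crm-saas/eu-west", "marketing",
              flows_to=["billing"], obligations=["GDPR"],
              third_party_processor=True),
    DataAsset("build_logs", "ci-server", "platform"),
]
print(assets_with_obligation(inventory, "GDPR"))  # ['customer_emails']
```

Keeping the record in a structured form like this, rather than in prose, is what makes the "living document" practical: adding a new SaaS tool or market is one new entry or one updated field, and compliance queries become simple filters.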
Building and Implementing Cyber Resilience Strategies
Currently, there is no unified standard for managing cyber resilience. Although
many vendors offer their own solutions and some general standardization efforts
are underway, a clear and consistent framework has yet to be established. As a
result, organizations are forced to develop their own methods based on internal
priorities and interpretations. The main challenge is that cyberattacks have
become unavoidable and frequent. Traditional protective measures alone are no
longer sufficient to fight modern threats. Another problem is the lack of
coordination between IT, information security, and business units. ... In
practice, however, its implementation largely depends on the organization’s
maturity, scale, and specific infrastructure characteristics. The main
difference lies in the level of detail: as a company grows, its infrastructure
becomes more complex, the number of stakeholders increases, and each stage of
analysis requires greater depth. In small organizations, identifying critical
services is relatively quick, while in large enterprises, the process may
involve analyzing hundreds of interconnected operations. Likewise, the scope of
security measures varies—from basic hardening of key systems to multi-layered
protection across distributed environments. At the same time, core principles
such as threat analysis, incident response planning, and regular audits remain
largely unchanged across all organizations.
Security researchers develop first-ever functional defense against cyberattacks on AI models
Researchers now warn that the most advanced of these attacks, called
cryptanalytic extraction, can rebuild a model by asking it thousands of
carefully chosen questions. Each answer helps reveal tiny clues about the
model’s internal structure. Over time, those clues form a detailed map that
exposes the model’s weights and biases. These attacks work surprisingly well
when used on neural networks that rely on ReLU activation functions. Because
these networks behave like piecewise linear systems, attackers can hunt for
points where a neuron’s output flips between active and inactive and use those
moments to uncover the neuron’s signature. ... Early methods could only recover
partial information, but newer techniques can figure out both the size and the
direction of the weights. Some even work using nothing more than the model’s
predicted labels. All rely on the same core assumption. Neurons in a given layer
behave differently enough that their signals can be separated. When that is
true, the attack can cluster each neuron’s critical points and rebuild the
entire network with surprising accuracy. ... The team tested this defense on
neural networks that previous studies had broken in just a few hours. One of the
clearest results comes from a model trained on the MNIST digit dataset with two
small hidden layers.
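The critical-point search at the heart of these attacks can be shown in miniature. The sketch below is a toy illustration, not the researchers' method: it uses a single hypothetical neuron with made-up weights and bisects along one input direction to locate the point where the ReLU's pre-activation crosses zero, i.e. where the neuron flips between active and inactive.

```python
def preact(w, b, x):
    """Pre-activation of one neuron: w . x + b (ReLU flips sign here)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def critical_point(w, b, x0, d, lo=0.0, hi=1.0, iters=60):
    """Bisect along x(t) = x0 + t*d for the t where w . x(t) + b = 0.
    Assumes the pre-activation has opposite signs at t=lo and t=hi."""
    point = lambda t: [xi + t * di for xi, di in zip(x0, d)]
    f_lo = preact(w, b, point(lo))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (preact(w, b, point(mid)) > 0) == (f_lo > 0):
            lo, f_lo = mid, preact(w, b, point(mid))  # same side as lo: move lo up
        else:
            hi = mid                                  # sign flipped: move hi down
    return (lo + hi) / 2

# Hypothetical neuron with weights (2, -1) and bias -1: it flips where
# 2*x1 - x2 - 1 = 0. Walking from (0, 0) toward (1, 0) crosses at x1 = 0.5.
t = critical_point(w=[2.0, -1.0], b=-1.0, x0=[0.0, 0.0], d=[1.0, 0.0])
print(round(t, 6))  # 0.5
```

Each such crossing leaks one linear constraint on the neuron's weights; the attacks described above repeat this probing over many directions and cluster the crossings per neuron, which is why a network whose neurons are cleanly separable can be rebuilt with high accuracy.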
Draft Trump executive order signals new battle ahead over state AI powers
By eliminating that federal framework, the Trump White House positions itself
not simply as preempting state authority, but also as reversing its immediate
federal predecessor’s regulatory approach. The draft EO further states that the
U.S. must sustain AI leadership through a “balanced, minimal regulatory
environment,” language that signals a clear ideological orientation against
safety-first or rights-protective models of AI governance. The administration
wants the Department of Justice to challenge state AI laws it views as
obstructive; the Department of Commerce to catalogue and publicly criticize
state statutes deemed “burdensome”; and agencies like the Federal Communications
Commission (FCC) and Federal Trade Commission (FTC) to establish national
standards that would override state requirements. ... The move immediately
raises questions not only about the future of AI governance but also about the
structure of American federalism. For years, states have been the primary actors
experimenting with AI regulation. They have advanced bills aimed at biometric
privacy, algorithmic fairness, deepfake disclosure, automated decision-making
transparency, and even restrictions on government use of facial recognition.
These experiments, often more aggressive than anything contemplated in Congress,
have become the country’s de facto laboratories of AI oversight.

Engineering the Perfect Product Launch: Lessons from Prototype to Production
Rushing a product to market without a strong quality framework is a gamble most
companies regret. Recalls, warranty claims and reputational damage cost far more
than investing in quality upfront. The smarter approach is to build quality into
the process from the start rather than bolting it on at the end. ... During the
product rollout I supported, we built proactive quality checkpoints into every
stage of assembly. This meant small defects were caught early, long before they
reached final testing. In one instance, a supplier batch with a minor material
inconsistency was identified at the first inspection step, preventing what could
have been a costly recall. Conversely, I’ve also seen how skipping just one
validation step resulted in weeks of rework. ... When all three elements
(development, quality, and ERP) work in harmony, product launches move faster
and run smoothly. Costs are kept in check because inefficiencies are addressed
early. Time-to-market accelerates because bottlenecks are anticipated.
Manufacturing excellence becomes the standard from the first unit shipped, not
something achieved after painful trial and error. ... Engineering a product
launch is about orchestrating dozens of small, interconnected decisions across
design, quality and enterprise systems. The companies that consistently succeed
treat the launch as an engineering challenge, not just a marketing deadline.

Organisations struggle with non-human identity risks & AI demands
Growth in digital identities, both human and non-human, continues to strain legacy
identity and access management practices. This identity sprawl raises the risk
of credential-based threats and increases the attack surface for cybercriminals.
"With organizations struggling to govern an expanding mesh of digital identities
across human, machine, and AI entities, over-permissioned roles, shadow
identities, and disconnected IAM systems will continue to expose organizations
to credential-based attacks and lateral movement. AI will also reshape
traditional social engineering: synthetic voices, deepfakes, and adaptive
phishing will erode the reliability of static authentication, forcing
organizations to adopt continuous and context-aware verification as the new
baseline," said Benoit Grange ... "The NIS2 directive has ushered in stricter
cybersecurity measures and reporting for a wider range of critical
infrastructure and essential services across the European Union. For industries
newly brought under this directive, including manufacturing, logistics and
certain digital services, 2026 will bring new growing pains. These sectors, many
long accustomed to minimal compliance oversight, now face strict governance and
reporting requirements. In contrast, mature sectors like finance and healthcare
will adapt more smoothly. The disparity will expose structural weaknesses in
organizations unfamiliar with continuous compliance, making them attractive
targets for attackers exploiting regulatory confusion," said Niels Fenger.