Quote for the day:
"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer
AI cost overruns are adding up — with major implications for CIOs
Many organizations appear to be “flying blind” while deploying AI, adds John
Pettit, CTO at Google Workspace professional services firm Promevo. If a CIO-led
AI project misses budget by a huge margin, it reflects on the CIO’s credibility,
he adds. “Trust is your most important currency when leading projects and
organizations,” he says. “If your AI initiative costs 50% more than forecast,
the CFO and board will hesitate before approving the next one.” ... Beyond
creating distrust in IT leadership, missed cost estimates also hurt the
company’s bottom line, notes Farai Alleyne, SVP of IT operations at accounts
payable software vendor Billtrust. “It is not just an IT spending issue, but it
could materialize into an overall business financials issue,” he says. ...
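One reason forecasts miss is that usage-based AI costs compound quietly with traffic. A back-of-envelope sketch of monthly token spend (every price and volume below is an illustrative assumption, not a vendor quote) shows how sensitive the bill is to each input:

```python
# Back-of-envelope monthly LLM cost estimate. Every figure below is a
# hypothetical assumption for illustration -- substitute your own contract
# prices and measured traffic.
def monthly_token_cost(requests_per_day, avg_input_tokens, avg_output_tokens,
                       price_in_per_1k, price_out_per_1k, days=30):
    """Return estimated monthly spend in dollars for LLM API calls."""
    daily = (requests_per_day * avg_input_tokens / 1000 * price_in_per_1k
             + requests_per_day * avg_output_tokens / 1000 * price_out_per_1k)
    return daily * days

# Example: 50k requests/day, 1,500 input + 500 output tokens each,
# at assumed prices of $0.003 / $0.015 per 1k tokens.
estimate = monthly_token_cost(50_000, 1_500, 500, 0.003, 0.015)
print(f"${estimate:,.0f} per month")  # prints $18,000 per month
```

Underestimating any single input (traffic, tokens per call, or unit price) by 50% flows straight through to the monthly bill, which is how forecasts miss by the margins described above.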
enterprise leaders often assume AI coding assistants or no-code/low-code tools
can take care of most of the software development needed to roll out a new AI
tool. These tools can be used to create small prototypes, but for
enterprise-grade integrations or multi-agent systems, the complexity creates
additional costs, he says. ... In addition, organizations often underestimate
the cost of operating an AI project, he says. Token usage for vectorization and
LLM calls can cost tens of thousands of dollars per month, but hosting your own
models isn’t cheap, either, with on-premises infrastructure costs potentially
running into the thousands of dollars per month.

AI-Powered Digital Transformation: A C-Suite Blueprint For The Future Of Business
At its core, digital transformation is a strategic endeavor, not a technological
one. To succeed, it should be at the forefront of the organizational strategy.
This means moving beyond simply automating existing processes and instead asking
how AI enables new ways of creating value. The shift is from operational
efficiency to business model innovation. ... True digital leaders possess a
visionary mindset and the critical competencies to guide their teams through
change. They must be more than tech-savvy; they must be emotionally intelligent
and capable of inspiring trust. This demands an intentional effort to develop
leaders who can bridge the gap between deep business acumen and digital fluency.
... With the strategic, cultural and data foundations in place, organizations
can focus on building a scalable and secure digital infrastructure. This may
involve adopting cloud computing to provide flexible resources needed for big
data processing and AI model deployment. It can also mean investing in a range
of complementary technologies that, when integrated, create a cohesive and
intelligent ecosystem. ... Digital transformation is a complex, continuous
journey, not a single destination. This framework provides a blueprint, but its
success requires leadership. The challenge is not technological; it's a test of
leadership, culture and strategic foresight.

Why Automation Fails Without the Right QA Mindset
Automation alone doesn’t guarantee quality — it is only as effective as the
tests it is scripted to run. If the requirements are misunderstood, automated
tests may pass while critical issues remain undetected. I have seen failures
where teams relied solely on automation without involving proper QA practices,
leading to tests that validated incorrect behavior. Automation frequently fails
to detect new or unexpected issues introduced by system upgrades. It often
misses critical problems such as faulty data mapping, incomplete user interface
(UI) testing and gaps in test coverage due to outdated scripts. Lack of
adaptability is another common obstacle that I’ve repeatedly seen undermine
automation testing efforts. When test scripts are tightly coupled to UI
elements, even minor changes can disrupt test cases. With the right QA
mindset, this challenge is
anticipated — promoting modular, maintainable automation strategies capable of
adapting to frequent UI and logic changes. Automation lacks the critical
analysis required to validate business logic and perform true end-to-end
testing. From my experience, the human QA mindset proved essential during the
testing of a mortgage loan calculation system. While automation handled standard
calculations and data validation, it could not assess whether the logic aligned
with real-world lending rules.

Stop Feeding AI Junk: A Systematic Approach to Unstructured Data Ingestion
Worse, bad data reduces accuracy. Poor quality data not only adds noise, but it
also leads to incorrect outputs that can erode trust in AI systems. The result
is a double penalty: wasted money and poor performance. Enterprises must
therefore treat data ingestion as a discipline in its own right, especially for
unstructured data. Many current ingestion methods are blunt instruments. They
connect to a data source and pull in everything, or they rely on copy-and-sync
pipelines that treat all data as equal. These methods may be convenient, but
they lack the intelligence to separate useful information from irrelevant
clutter. Such approaches create bloated AI pipelines that are expensive to
maintain and impossible to fine-tune. ... Once data is classified, the next step
is to curate it. Not all data is equal. Some information may be outdated,
irrelevant, or contradictory. Curating data means deliberately filtering for
quality and relevance before ingestion. This ensures that only useful content is
fed to AI systems, saving compute cycles and improving accuracy. It also
ensures that RAG and LLM solutions spend their context-window tokens on
relevant data rather than on irrelevant junk. ... Generic
ingestion pipelines often lump all data into a central bucket. A better approach
is to segment data based on specific AI use cases.

Five critical API security flaws developers must avoid
Developers might assume that if an API endpoint isn’t publicly advertised,
it’s inherently secure, a dangerous myth known as “security by obscurity.”
This mistake manifests in a few critical ways: developers may use easily
guessable API keys or leave critical endpoints entirely unprotected, allowing
anyone to access them without proving their identity. ... You must treat all
incoming data as untrusted, meaning all input must be validated on the
server-side. Your developers should implement comprehensive server-side checks
for data types, formats, lengths, and expected values. Instead of trying to
block everything that is bad, it is more secure to define precisely what is
allowed. Finally, before displaying or using any data that comes back from the
API, ensure it is properly sanitized and escaped to prevent injection attacks
from reaching end-users. ... Your teams must adhere to the “only what’s
necessary” principle by designing API responses to return only the absolute
minimum data required by the consuming application. For production
environments, configure systems to suppress detailed error messages and stack
traces, replacing them with generic errors while logging the specifics
internally for your team. ... Your security strategy must incorporate rate
limiting to apply strict controls on the number of requests a client can make
within a given timeframe, whether tracked by IP address, authenticated user,
or API key.

Disaster recovery and business continuity: How to create an effective plan
If your disaster recovery and business continuity plan has been gathering dust
on the shelf, it’s time for a full rebuild from the ground up. Key components
include strategies such as minimum viable business (MVB); emerging
technologies such as AI and generative AI; and tactical processes and
approaches such as integrated threat hunting, automated data discovery and
classification, continuous backups, immutable data, and gamified tabletop
testing exercises. Backup-as-a-service (BaaS) and disaster
recovery-as-a-service (DRaaS) are also becoming more popular, as enterprises
look to take advantage of the scalability, cloud storage options, and
ease-of-use associated with the “as-a-service” model. ... Accenture’s Whelan
says that rather than try to restore the entire business in the event of a
disaster, a better approach might be to create a skeletal replica of the
business, an MVB, that can be spun up immediately to keep mission-critical
processes going while traditional backup and recovery efforts are under
way. ... The two additional elements are: one offline, immutable, or
air-gapped backup that will enable organizations to get back on their feet in
the event of a ransomware attack, and a goal of zero errors. Immutable data is
“the gold standard,” Whelan says, but there are complexities associated with
proper implementation.

Building Intelligence into the Database Layer
At the core of this evolution is the simple architectural idea of the database
as an active intelligence engine. Rather than simply recording and serving
historical data, an intelligent database interprets incoming signals,
transforms them in real time, and triggers meaningful actions directly from
within the database layer. From a developer’s perspective, it still looks like
a database, but under the hood, it’s something more: a programmable,
event-driven system designed to act on high-velocity data streams with
precision in real time. ... Built-in processing engines unlock features like
anomaly detection, forecasting, downsampling, and alerting in true real time.
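As a toy illustration of running logic where the data lives, the sketch below (all names hypothetical; real time-series databases expose this as built-in tasks or continuous queries) evaluates an alert rule at write time instead of exporting data to an external analyzer:

```python
from collections import deque

class IntelligentStore:
    """Toy store that runs a threshold alert rule at ingest time,
    instead of exporting data to an external system for analysis."""

    def __init__(self, window=5, threshold=3.0):
        self.points = []                      # "historical" data
        self.window = deque(maxlen=window)    # recent values for the rule
        self.threshold = threshold
        self.alerts = []

    def write(self, value: float):
        self.points.append(value)
        self.window.append(value)
        self._evaluate()                      # embedded rule runs on every write

    def _evaluate(self):
        # Simple anomaly rule: latest point deviates from the rolling mean
        # by more than `threshold`. Real engines support far richer rules.
        mean = sum(self.window) / len(self.window)
        latest = self.window[-1]
        if abs(latest - mean) > self.threshold:
            self.alerts.append((len(self.points) - 1, latest))

store = IntelligentStore()
for v in [10.0, 10.2, 9.9, 10.1, 25.0]:      # last point is anomalous
    store.write(v)
print(store.alerts)                           # [(4, 25.0)]
```

The design point is placement: the rule fires inside the write path, so no data leaves the store before the anomaly is caught.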
These embedded engines enable real-time computation directly inside the
database. Instead of moving data to external systems for analysis or
automation, developers can run logic where the data already lives. ... Active
intelligence doesn’t just enable faster reactions; it opens the door to
proactive strategies. By continuously analyzing streaming data and comparing
it to historical trends, systems can anticipate issues before they escalate.
For example, gradual changes in sensor behavior can signal the early stages of
a failure, giving teams time to intervene. ... Developers need more than just
storage and query; they need tools that think. Embedding intelligence into the
database layer represents a shift toward active infrastructure: systems that
monitor, analyze, and respond at the edge, in the cloud, and across
distributed environments.

AI Cybersecurity Arms Race: Are Companies Ready?
Security operations centers were already overwhelmed before AI became
mainstream. Human analysts, drowning in alerts, can’t possibly match the
velocity of machine-generated threats. Detection tools, built on static
signatures and rules, simply can’t keep up with attacks that mutate
continuously. The vendor landscape isn’t much more reassuring. Every security
company now claims its product is “AI-powered,” but too many of these features
are black boxes, immature, or little more than marketing gloss. ... That
doesn’t mean defenders are standing still. AI is beginning to reshape
cybersecurity on the defensive side, too, and the potential is enormous.
Anomaly detection, fueled by machine learning, is allowing organizations to
spot unusual behavior across networks, endpoints, and cloud environments far
faster than humans ever could. In security operations centers, agentic AI
assistants are beginning to triage alerts, summarize incidents, and even kick
off automated remediation workflows. ... The AI arms race isn’t something the
CISO can handle alone; it belongs squarely in the boardroom. The challenge
isn’t just technical — it’s strategic. Budgets must be allocated in ways that
balance proven defenses with emerging AI tools that may not be perfect but are
rapidly becoming necessary. Security teams must be retrained and upskilled to
govern, tune, and trust AI systems. Policies need to evolve to address new
risks such as AI model poisoning or unintended bias.

Agentic AI needs stronger digital certificates
The consensus among practitioners is that existing technologies can handle
agentic AI – if, that is, organisations apply them correctly from the start.
“Agentic AI fits into well-understood security best practices and paradigms,
like zero trust,” Wetmore emphasises. “We have the technology available to us –
the protocols and interfaces and infrastructure – to do this well, to automate
provisioning of strong identities, to enforce policy, to validate least
privilege access.” The key is approaching AI agents with security-by-design
principles rather than bolting on protection as an afterthought. Sebastian Weir,
executive partner and AI Practice Leader at IBM UK&I, sees this shift
happening in his client conversations. ... Perhaps the most critical insight
from security practitioners is that managing agentic AI isn’t primarily about
new technology – it’s about governance and orchestration. The same platforms and
protocols that enable modern DevOps and microservices can support AI agents, but
only with proper oversight. “Your ability to scale is about how you create
repeatable, controllable patterns in delivery,” Weir explains. “That’s where
capabilities like orchestration frameworks come in – to create that common plane
of provisioning agents anywhere in any platform and then governance layers to
provide auditability and control.”

Learning from the Inevitable
Currently, too many organizations follow a “nuke and pave” approach to
incident response (IR),
opting to just reimage computers because they don’t have the people to properly
extract the wisdom from an incident. In the short term, this is faster and
cheaper but has a detrimental impact on protecting against future threats. When
you refuse to learn from past mistakes, you are more prone to repeating them.
Conversely, organizations may turn to outsourcing. Experts in managed security
services and IR have realized consulting gives them a broader reach and impact
on the problem — but none of these are long-term solutions. This kind of
short-sighted IR creates a false sense of security. Organizations are solving
the problem for the time being, but what about the future? Data breaches are
going to happen, and reliance on reactive problem-solving creates a flimsy IR
program that leaves an organization vulnerable to threats. ... Knowledge-sharing
is the best way to go about this. Sharing key learnings from previous attacks is
how these teams can grow and prevent future disasters. The problem is that while
plenty of engineers agree they learn the most when something “breaks” and that
incidents are a treasure trove of knowledge for security teams, these
conversations are often restricted to need-to-know channels. Openness about
incidents is the only way to really teach teams how to address them.