Quote for the day:
“The only real mistake is the one from
which we learn nothing.” -- Henry Ford

“AI promptly evaluates product documentation, reviews, and market reports,
cutting the time it takes to evaluate vendors from weeks to days and unearthing
compatibility problems that go unnoticed by human reviewers,” he says. Like 8×8,
Thrive uses a “trust but verify” approach that treats AI outputs as inputs to
its decision-making processes, not as final answers, Whittaker says. “AI is great
for comparing technical specs, but it can’t help you much with assessing
non-technical aspects such as quality of support, cultural fit, etc.” Thrive
plans to enhance its future AI models to predict defects in products, foresee
deployment challenges, and monitor vendor performance, Whittaker says. ... “When
you are negotiating a contract, let’s say you received an order form, or you
received a large legal contract, and it’s all unstructured data,” he says. “AI
is really good at guiding you on what kind of commercial terms you should be
careful with. It can look at your existing contracts and compare them with this
new one and say, ‘This one has some anomalies.’” The company’s use of AI is
giving the IT team time to work on other priorities instead of spending extra
time researching potential products, Johar says. “If you look at how an IT
organization works, we are buying software all the time, and sometimes it leaves
you very little time to focus on real evaluation and piloting the software,
because you just end up spending so much time on all these RFP processes, legal
processes, and research,” he says.
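
To make the contract-review idea concrete, here is a minimal sketch of how such an anomaly check might work, assuming clause-level text and the sentence-transformers library; the model name, similarity threshold, and clause strings are illustrative assumptions, not details from Johar's description.

```python
# Sketch: flag clauses in a new contract that have no close precedent in
# previously signed contracts. Model choice and threshold are illustrative.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def flag_anomalous_clauses(new_clauses, historical_clauses, threshold=0.6):
    """Return (clause, best_similarity) pairs whose best match is weak."""
    new_vecs = model.encode(new_clauses, normalize_embeddings=True)
    hist_vecs = model.encode(historical_clauses, normalize_embeddings=True)
    sims = new_vecs @ hist_vecs.T          # cosine similarity matrix
    best = sims.max(axis=1)                # closest precedent per new clause
    return [(c, float(s)) for c, s in zip(new_clauses, best) if s < threshold]

anomalies = flag_anomalous_clauses(
    ["Vendor may change pricing at any time without notice."],
    ["Pricing is fixed for the initial 12-month term.",
     "Either party may terminate with 30 days' written notice."],
)
for clause, score in anomalies:
    print(f"review: {clause!r} (closest precedent similarity {score:.2f})")
```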

"The deepfake threat landscape looks, above all else, dynamic," he says. "While
email threats and static imagery are still the most commonly encountered
vectors, there is a wide diversity of other forms of deepfakes that are quickly
growing in prevalence. In fact, we're seeing more and more of every kind of
deepfake in the wild." ... Attackers are using a variety of AI techniques to
enhance their attack pipeline. Human digital twins can be trained on public
information about a person to help create more realistic phishing attacks,
which, combined with voice samples, could create convincing audio deepfakes.
Concerns over misuse of AI led Microsoft to largely scuttle a voice-cloning
feature that it could have integrated into apps such as Teams, a capability
that would have allowed a user — or an attacker — to hijack someone's voice for
all kinds of fraud attempts. ... "The challenge now is that AI can be used to reduce
the skill barrier to entry and speed up production to a higher quality," she
says. "Since increasingly sophisticated deepfakes are getting harder to detect, it is
imperative to turn to AI-augmented tools for detection, as people alone cannot
be the last line of defense." Companies should continue to train their employees
and create good policies that reduce the impact that one person — even a top
executive — can have on the company, says Ironscales' Benishti. "Develop
policies that make it impossible for a single employee's bad decision to result
in compromise," he says.
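
Benishti's policy advice lends itself to a small sketch of dual control, in which no single employee's decision, even an executive's, can release a high-risk action on its own. The threshold, approver count, and names below are illustrative assumptions, not anything the article prescribes.

```python
# Sketch: require a second, independent approver before a high-value wire
# transfer can be released. All names and limits are illustrative.
from dataclasses import dataclass, field

HIGH_RISK_THRESHOLD = 10_000.0   # above this, two distinct approvers required

@dataclass
class WireRequest:
    amount: float
    requester: str
    approvals: set = field(default_factory=set)

def approve(req: WireRequest, approver: str) -> None:
    if approver == req.requester:
        raise ValueError("requester cannot approve their own transfer")
    req.approvals.add(approver)

def can_release(req: WireRequest) -> bool:
    needed = 2 if req.amount >= HIGH_RISK_THRESHOLD else 1
    return len(req.approvals) >= needed

req = WireRequest(amount=250_000, requester="ceo@example.com")
approve(req, "cfo@example.com")
print(can_release(req))   # False: one approval is not enough at this amount
approve(req, "controller@example.com")
print(can_release(req))   # True: two independent approvers
```

Under a rule like this, even a convincing deepfake of the CEO can at most produce one bad approval, not a completed transfer.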

“Demand for electricity around the world from data centres is on course to
double over the next five years, as information technology becomes more
pervasive in our lives,” Birol said in a statement released with the IEA’s 2025
Energy and AI report. “The impact will be especially strong in some countries —
in the United States, data centres are projected to account for nearly half of
the growth in electricity demand; in Japan, over half; and in Malaysia,
one-fifth.” ... Unlike older mainframe workloads that spiked and dropped with
changing demand, modern AI systems operate close to full capacity for days or
even weeks at a time. ... Performance per watt is not a benchmark like FLOPS, but
it now influences nearly every design decision. Chipmakers promote it as their
most important competitive edge, because speed doesn’t matter if the grid can’t
handle it. ... That dynamic is also reshaping the economics of AI. Cloud
providers are starting to charge for workloads based not just on runtime but on
the power they draw, forcing developers to optimize for energy throughput rather
than latency. Data center architects now design around megawatt budgets instead
of square footage, while governments from the U.S. to Japan are issuing new
rules for energy-efficient AI systems.
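
As a back-of-the-envelope illustration of the two metrics in play here, performance per watt and energy-based billing, consider the sketch below; every hardware and pricing number in it is invented for the example, not taken from any vendor or from the article.

```python
# Sketch: compare accelerators by performance per watt, and price a workload
# by the energy it draws rather than by runtime alone. Figures are invented.
def perf_per_watt(flops: float, watts: float) -> float:
    return flops / watts

def energy_cost(avg_power_kw: float, hours: float, usd_per_kwh: float) -> float:
    # Energy (kWh) = average power (kW) * time (h); the bill tracks energy,
    # so a faster but hungrier run can still cost more.
    return avg_power_kw * hours * usd_per_kwh

chip_a = perf_per_watt(flops=2.0e15, watts=700)   # hypothetical accelerator A
chip_b = perf_per_watt(flops=1.5e15, watts=400)   # hypothetical accelerator B
print(f"A: {chip_a:.2e} FLOPS/W, B: {chip_b:.2e} FLOPS/W")  # B leads per watt

# A 72-hour job drawing a sustained 350 kW on average:
print(f"energy bill: ${energy_cost(350, 72, 0.12):,.0f}")   # $3,024
```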

Understanding the journey of data—where it originates, how it transforms, and
who accesses it—is critical for both governance and compliance. Generative AI
excels at mapping data lineage by automatically tracing data flows across
systems, applications, and processes. Consider a scenario where an organisation
needs to demonstrate how customer information moves from collection to storage
and reporting. AI-powered lineage tools can generate visual maps showing every
touchpoint, transformation, and user interaction. This automation not only
accelerates audits and compliance reporting but also provides actionable
insights to improve data handling practices. ... Organisations often grapple
with choosing between centralised and autonomous (decentralised) data management
models. Centralised approaches offer uniformity and control, while autonomous
models empower individual teams with flexibility. Generative AI supports both
paradigms. In centralised settings, AI enforces global policies, ensures
consistency, and manages data assets from a single point of control. In
autonomous environments, AI agents can be embedded within business units,
tailoring governance and security measures to local needs while maintaining
alignment with overarching standards. This hybrid capability ensures
organisations remain agile without compromising data integrity or compliance.
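
A minimal sketch of the lineage-mapping idea, assuming the networkx graph library: represent each system and transformation as a node in a directed graph, then answer audit questions by traversal. The node names are illustrative, not drawn from any particular tool.

```python
# Sketch: a data-lineage graph from collection to storage and reporting.
import networkx as nx

lineage = nx.DiGraph()
lineage.add_edges_from([
    ("web_signup_form", "customers_raw"),         # collection -> landing zone
    ("customers_raw", "pii_masking_job"),         # transformation
    ("pii_masking_job", "customers_curated"),     # governed storage
    ("customers_curated", "monthly_kpi_report"),  # reporting touchpoint
    ("customers_curated", "crm_sync"),            # another consumer
])

# Compliance question: every downstream touchpoint of the raw customer data.
print(sorted(nx.descendants(lineage, "customers_raw")))

# Audit question: the path customer data takes from collection to a report.
print(nx.shortest_path(lineage, "web_signup_form", "monthly_kpi_report"))
```

The same graph can back the visual maps the passage describes; generative AI's contribution is populating and updating it from code, SQL, and pipeline metadata rather than from manual documentation.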

Concentration risk from a cloud customer can be a challenge for hyperscalers.
This is especially true when key customers concentrate their load in a single
region; they can saturate the shared physical resources faster than the
hyperscaler’s auto-scaling can respond. ... At hyperscale, observability
requires keeping vast telemetry data like logs, metrics and traces usable and
cost-efficient. Storing it under one roof in an accessible, scalable and
performant fashion lets organizations run AI and analytics directly from their
telemetry data, spotting anomalies, problem areas and threats while
future-proofing their infrastructure for data-intensive workloads. ... The
complexity of managing microservices doesn’t scale linearly with the number of
microservices—it scales exponentially. Mitigation requires a multipronged
strategy: limit the number of microservices and use traditional approaches where
they suffice; keep the observability strategy robust yet lightweight;
democratize observability-based ops, tools, and skills across the organization;
and exploit AI for heavy lifting and ops automation. ... One challenge is
ephemeral dependency drift. At hyperscale, microservices vanish fast, breaking
dependency maps and hiding failure roots. It’s like chasing ghosts in a storm.
Fix it with real-time dependency snapshots and AI to predict drift patterns.
Teams see the true service web, catch issues early and keep apps humming, no
matter how wild the cloud gets.
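
One way to picture the snapshot-and-diff fix for ephemeral dependency drift: periodically derive the service-to-service call edges from trace data, then compare consecutive snapshots so vanished or newly appeared dependencies surface immediately. The sketch below assumes the edge sets have already been extracted from traces; the service names are illustrative.

```python
# Sketch: diff two dependency snapshots to expose drift between them.
Edge = tuple[str, str]   # (caller, callee)

def diff_snapshots(prev: set[Edge], curr: set[Edge]) -> dict[str, set[Edge]]:
    return {
        "vanished": prev - curr,   # edges that disappeared; maps break here
        "appeared": curr - prev,   # new edges worth validating early
    }

t0 = {("checkout", "payments"), ("checkout", "inventory"),
      ("payments", "fraud")}
t1 = {("checkout", "payments"), ("checkout", "inventory"),
      ("payments", "fraud-v2")}

drift = diff_snapshots(t0, t1)
print(drift["vanished"])   # {('payments', 'fraud')}
print(drift["appeared"])   # {('payments', 'fraud-v2')}
```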

The stakes couldn't be higher. The World Economic Forum surveyed over 1,000
global employers and found that nearly half of them said they’ll reduce their
workforce in the next five years and replace those jobs with AI. However,
paradoxically, the same technologies could create 2.73 million jobs by 2028 in
India alone. Which outcome prevails depends entirely on how well organisations
manage the transition. It's not just about having the right technology; it's about having
the right human strategy to deploy it. Consider the emergence of "cobots",
which are collaborative robots designed to work alongside humans rather than
replace them. ... Perhaps the most insidious challenge is AI bias, which can
perpetuate discrimination based on race, gender, age, and other attributes, and erode the trust
that is essential for successful human-machine collaboration. When AI systems
reflect historical prejudices or systemic inequalities, they undermine the
very foundations of inclusive workplaces that Industry 5.0 promises to create.
HR leaders must become guardians of algorithmic fairness, ensuring that AI
systems used in recruitment, performance evaluation, and career development
are transparent, equitable, and regularly audited. This requires building
diverse AI development teams, implementing robust data governance frameworks,
and maintaining human oversight in critical decision-making processes.
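
As one concrete example of what such a recurring audit could check, the sketch below compares a hiring model's selection rates across groups using the common four-fifths rule of thumb. The data, group labels, and threshold are illustrative, and a real audit would go well beyond a single metric.

```python
# Sketch: flag groups whose selection rate falls below 80% of the best rate.
from collections import defaultdict

def selection_rates(decisions):   # decisions: iterable of (group, selected)
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparate_impact(audit))   # {'B': 0.25}: under 0.8 of A's 0.67 rate
```
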
The substitution myth refers to the flawed assumption that automation can
simply replace human functions in a system without fundamentally altering how
the system or human work operates. This misconception is built on assumptions
like HABA-MABA ("Humans Are Better At / Machines Are Better At"), which assume
that human and machine strengths are fixed, and system design is merely a
matter of allocating tasks accordingly ... When an automated system fails, the
amount of knowledge required to make things right again is likely greater than
that required during normal operations. This creates immediate, new, and
numerous items of work. Because the designers of automation can’t fully
automate the human "parts", the human is left to cope with whatever remains
when the automated parts don’t behave as expected, and that remainder brings
more complexity in its wake. ... In highly interdependent tasks like software operations, we can only
plan our actions effectively when we can accurately anticipate the actions of
others. Skilled teams achieve this predictability through shared knowledge and
their own coordination mechanisms that are developed over time through
extensive collaboration. Despite the common refrain of "human error" in
incidents, in general, humans are quite predictable in their work, and we have
established means for checking if something seems unpredictable.

To understand why observability is taking off, it is important to see how it
differs from traditional monitoring. Whereas traditional monitoring has
been limited for years to servers, networks, and memory statistics,
observability goes a step further. Monitoring mainly records what is
happening, while observability shows why it is happening. It establishes
connections between systems, shows how components interact with each other,
and provides insight into the impact on the end user. ... The complexity of
modern IT environments requires knowledge and capacity that is not available
everywhere. Jean-Bastien outlines the dilemma. “If a customer had to employ
someone full-time to manage everything, a capacity and knowledge problem would
quickly arise. Many organizations therefore call on us to ensure continuity.”
With a team of dozens of engineers, Cegeka can easily scale up, even during
peak loads or holidays. In this way, they take care of the operational side of
things, while customers retain the insight and reporting they need to bring
their IT and business together. ... Nevertheless, there are limits to what
observability can achieve. Legacy systems, such as monolithic applications in
C++ or COBOL on mainframes, are difficult to instrument with modern agents.
This poses a challenge in some sectors, particularly for banks that still rely
heavily on older core systems.
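
The "what versus why" distinction earlier in this piece can be sketched with distributed tracing, here using the OpenTelemetry Python SDK: a metric alone would report that a request was slow, while the trace ties the latency to the component and attribute that explain it. The span names and attribute are illustrative.

```python
# Sketch: one user-facing request traced through its downstream calls.
import time
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-demo")

with tracer.start_as_current_span("GET /checkout"):        # end-user impact
    with tracer.start_as_current_span("inventory.lookup"):
        time.sleep(0.01)
    with tracer.start_as_current_span("payments.authorize") as span:
        span.set_attribute("retry.count", 3)               # the "why" behind latency
        time.sleep(0.05)
```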

“The skills shortage creates a paradox that limits AI’s potential in
cybersecurity,” asserted Tim Freestone, chief strategy officer for Kiteworks,
a provider of a secure platform for exchanging private data, in San Mateo,
Calif. “Organizations lack personnel with the expertise needed to properly
deploy, manage, and optimize AI-powered security tools, meaning the very
solution designed to alleviate staffing pressures remains underutilized,” he
told TechNewsWorld. “This gap is particularly acute because effective AI
implementation requires dual competencies — both operating AI systems and
defending against AI-powered attacks — skills that are in even shorter supply
than traditional cybersecurity expertise. Without trained professionals who
can configure AI tools appropriately, interpret their outputs accurately, and
integrate them effectively into security operations, organizations risk
deploying AI systems that fail to reach their defensive potential or, worse,
introduce new vulnerabilities through improper management,” he said. ...
“Certifications can provide reassurance that candidates meet a certain
standard and help organizations demonstrate credibility to clients and
regulators,” she told TechNewsWorld, “but the reliance on credentials also has
drawbacks.”

The CIA triad is both too broad and too narrow. It lacks the vocabulary and
context to handle today’s realities. In trying to retrofit authenticity,
accountability, privacy, and safety into its rigid structure, we leave gaps that
attackers exploit. ... Treating ransomware as a simple “availability” failure
misses the point. Being “up” or “down” is irrelevant when your systems are
locked and business halted. What matters is resilience: the engineered ability
to absorb damage, fail gracefully, and restore from immutable backups.
Availability is binary; resilience is survival. Without it, you’re unprepared.
... A fraudulent deepfake of your CEO authorizing a wire transfer may have
perfect technical integrity — checksums intact, file unaltered. But its
authenticity is destroyed. The CIA triad has no language to capture this
breakdown, leaving organizations exposed to fraud and reputational chaos. ... A
successful model must explicitly encompass the principles that the triad
overlooked — such as authenticity, accountability, and resilience. Those
principles must be added as foundational pillars. Furthermore, the model should
have the capability to help CISOs and their teams navigate the veritable forest
of frameworks, harmonize regulatory demands, and eliminate duplicate work, while
also giving them a way to speak to their boards in terms of resilience,
accountability, and trust, rather than just uptime and firewalls.