Quote for the day:
“People are not lazy. They simply have
impotent goals – that is, goals that do not inspire them.” --
Tony Robbins

Despite its promise, AI introduces new challenges, including security risks and
trust deficits. Threat actors leverage the same AI advancements, targeting
systems with more precision and, in some cases, undermining AI-driven defenses.
In the Zerto–IDC survey mentioned earlier, for instance, only 41% of respondents
felt that AI is “very” or “somewhat” trustworthy; 59% felt that it is “not very”
or “not at all” trustworthy. To mitigate these risks, organizations must adopt
AI responsibly. For example, combining AI-driven monitoring with robust
encryption and frequent model validation ensures that AI systems deliver
consistent and secure performance. Furthermore, organizations should emphasize
transparency in AI operations to maintain trust among stakeholders. Successful
AI deployment in DR/CR requires cross-functional alignment between ITOps and
management. Misaligned priorities can delay response times during crises,
exacerbating data loss and downtime. Additionally, the IT skills shortage shows
no sign of easing, with a separate recent IDC study
predicting that 9 out of 10 organizations will feel an impact by 2026, at a cost
of $5.5 trillion in potential delays, quality issues, and revenue loss across
the economy. Integrating AI-driven automation can partially mitigate these
impacts by optimizing resource allocation and reducing dependency on manual
intervention.
Whether it’s APIs, middleware, firmware, embedded devices, or operational
technology, they’re all built on the same outdated encryption and systems of
trust. One of the biggest threats from quantum computing will be to all this
unseen machinery that powers global digital trade. These systems handle the
backend of everything from routing cargo to scheduling deliveries and
clearing large shipments, but they were never designed to withstand the threat
of quantum. Attackers will be able to break in quietly: injecting malicious
code into control software and ERP systems, or impersonating suppliers to
communicate malicious information and hijack digital workflows. Quantum
computing won’t necessarily affect these industries on its own, but it will
corrupt the systems that power the global economy. ... Some of the most
dangerous attacks are being staged today, with many nation-states and bad actors
storing encrypted data, from procurement orders to shipping records. When
quantum computers are finally able to break those encryption schemes, attackers
will be able to decrypt them in what’s known as a Harvest Now, Decrypt Later (HNDL)
attack. These attacks, although retroactive in nature, represent one of the
biggest threats to the integrity of cross-border commerce. Global trade depends
on digital provenance: handling goods and proving where they came
from.

Aside from susceptibility to advanced tactics, techniques, and procedures (TTPs)
such as thermal manipulation and magnetic fields, more common vulnerabilities
associated with air-gapped environments include factors such as unpatched
systems going unnoticed, lack of visibility into network traffic, potentially
malicious devices coming on the network undetected, and removable media being
physically connected within the network. Once an attack is inside OT systems,
the consequences can be disastrous regardless of whether there is an air gap or
not. However, it is worth considering how the existence of the air gap can
affect the time-to-triage and remediation in the case of an incident. ... This
incident reveals that even if a sensitive OT system has complete digital
isolation, this robust air gap still cannot fully eliminate one of the greatest
vulnerabilities of any system: human error. Human error would remain in play
even if an organization went to the extreme of building a Faraday cage to
eliminate electromagnetic radiation. Air-gapped systems are still vulnerable to social
engineering, which exploits human vulnerabilities, as seen in the tactics that
Dragonfly and Energetic Bear used to trick suppliers, who then walked the
infection right through the front door. Ideally, a technology would be able to
identify an attack regardless of whether it is caused by a compromised supplier,
radio signal, or electromagnetic emission.
A core feature of no-code development, third-party connectors allow applications
to interact with cloud services, databases, and enterprise software. While these
integrations boost efficiency, they also create new entry points for
adversaries. ... Another emerging threat involves dependency confusion attacks,
where adversaries exploit naming collisions between internal and public software
packages. By publishing malicious packages to public repositories with the same
names as internally used components, attackers could trick the platform into
downloading and executing unauthorized code during automated workflow
executions. This technique allows adversaries to silently insert malicious
payloads into enterprise automation pipelines, often bypassing traditional
security reviews. ... One of the most challenging elements of securing no-code
environments is visibility. Security teams struggle with asset discovery and
dependency tracking, particularly in environments where business users can
create applications independently without IT oversight. Applications and
automations built outside of IT governance may use unapproved connectors and
expose sensitive data, since they often integrate with critical business
workflows.
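The naming-collision mechanism behind dependency confusion can be made concrete
with a small audit script. The article does not name a platform or package
ecosystem, so the sketch below assumes a Python/PyPI setup, a hypothetical
internal naming prefix ("acme-"), and a plain requirements.txt; it simply flags
internal-looking names that also resolve on the public index, which is exactly
where a confusion attack would slip in.

```python
"""Rough sketch: flag internally named packages that also exist on public PyPI.

Assumptions (not from the article): internal packages share a naming prefix
("acme-" here is hypothetical), and dependencies live in requirements.txt.
A collision does not prove compromise; it only marks a package that should be
pinned to the private index (or have its name reserved publicly).
"""
import re
import urllib.error
import urllib.request

INTERNAL_PREFIX = "acme-"          # hypothetical internal naming convention
REQUIREMENTS_FILE = "requirements.txt"

def exists_on_public_pypi(name: str) -> bool:
    """Return True if the public PyPI index serves a project with this name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False               # 404 -> no public package with that name

def internal_names(path: str) -> list[str]:
    """Pull package names with the internal prefix out of a requirements file."""
    names = []
    with open(path) as fh:
        for line in fh:
            match = re.match(r"^([A-Za-z0-9._-]+)", line.strip())
            if match and match.group(1).lower().startswith(INTERNAL_PREFIX):
                names.append(match.group(1))
    return names

if __name__ == "__main__":
    for pkg in internal_names(REQUIREMENTS_FILE):
        if exists_on_public_pypi(pkg):
            print(f"[!] possible dependency confusion: '{pkg}' also exists on public PyPI")
        else:
            print(f"[ok] '{pkg}' not found on the public index")
```

The practical mitigation is the same regardless of ecosystem: pin internal names
to the private registry so automated workflows can never fall back to a
public copy with the same name.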

Supply-chain Levels for Software Artifacts (SLSA) is a comprehensive
framework designed to protect the integrity of software artifacts, including
AI models. SLSA provides a set of standards and practices to secure the
software supply chain from source to deployment. By implementing SLSA,
organizations can ensure that their AI models are built and maintained with
the highest levels of security, reducing the risk of tampering and ensuring
the authenticity of their outputs. ... Sigstore is an open-source project
that aims to improve the security and integrity of software supply chains by
providing a transparent and secure way to sign and verify software
artifacts. Using cryptographic signatures, Sigstore ensures that AI models
and other software components are authentic and have not been tampered with.
This system allows developers and organizations to trace the provenance of
their AI models, ensuring that they originate from trusted sources. ... The
most valuable takeaway for ensuring model authenticity is the implementation
of robust verification mechanisms. By utilizing frameworks like SLSA and
tools like Sigstore, organizations can create a transparent and secure
supply chain that guarantees the integrity of their AI models. This approach
helps build trust with stakeholders and ensures that the models deployed in
production are reliable and free from malicious alterations.
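The "verify before you load" pattern these tools enable can be sketched in a few
lines. This is not the Sigstore or SLSA API: cosign adds keyless certificates
and a transparency log, and SLSA adds build provenance, but the underlying
check is the same digest-and-signature test shown here with an assumed Ed25519
key pair and a hypothetical model.onnx artifact, using Python's cryptography
package.

```python
"""Minimal sketch of sign-then-verify for a model artifact.

This is NOT the Sigstore API; cosign/Sigstore layer keyless certificates and a
transparency log on top of the same idea. The key pair and file name below are
illustrative assumptions.
"""
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def digest(path: str) -> bytes:
    """SHA-256 digest of the artifact, streamed so large models fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

def sign_model(path: str, key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the digest at build/release time."""
    return key.sign(digest(path))

def verify_model(path: str, signature: bytes, pub: Ed25519PublicKey) -> bool:
    """Consumer side: refuse to load the model unless the signature checks out."""
    try:
        pub.verify(signature, digest(path))
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()     # stand-in for a managed signing key
    model_path = "model.onnx"              # hypothetical artifact name
    with open(model_path, "wb") as fh:     # fake artifact so the sketch runs
        fh.write(b"\x00" * 1024)
    sig = sign_model(model_path, key)
    print("verified:", verify_model(model_path, sig, key.public_key()))
```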
AI accelerators are highly sensitive to power quality. Sub-cycle power
fluctuations can cause bit errors, data corruption, or system instability.
Older uninterruptible power supply (UPS) systems may struggle to handle the
dynamic loads AI can produce, which often involve sub-cycle swings of 3 MW or
more. Updating the electrical distribution system (EDS) is an opportunity
that includes replacing dated UPS technology, which often cannot handle the
dynamic AI load profile, redesigning power distribution for redundancy, and
ensuring that power supply configurations meet the demands of high-density
computing. ... With the high cost of AI downtime, risk mitigation becomes
paramount. Energy and power management systems (EPMS) are capable of
high-resolution waveform capture, which allows operators to trace and
address electrical anomalies quickly. These systems are essential for
identifying the root cause of power quality issues and coordinating fast
response mechanisms. ... No two mission-critical facilities are the same
regarding space, power, and cooling. Add the variables of each AI
deployment, and what works for one facility may not be the best fit for
another. That said, there are some universal truths about retrofitting for
AI. You will need engineers who are well-versed in various equipment
configurations, including cooling and electrical systems connected to the
network.
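To make the waveform-capture point concrete, here is a minimal sketch of the
kind of sub-cycle analysis an EPMS performs; the nominal voltage, sampling
rate, and the ±10% sag/swell thresholds are illustrative assumptions, not
values from the article.

```python
"""Sketch of a sub-cycle power-quality check over captured waveform samples.

Assumptions (not from the article): 480 V nominal, 60 Hz, 15.36 kHz sampling,
and ±10% half-cycle RMS thresholds as stand-ins for real sag/swell limits.
"""
import numpy as np

NOMINAL_RMS = 480.0            # volts, assumed
FREQ_HZ = 60.0
SAMPLE_RATE = 15_360           # samples per second, assumed capture rate
SAMPLES_PER_HALF_CYCLE = int(SAMPLE_RATE / (2 * FREQ_HZ))   # 128 samples

def half_cycle_rms(waveform: np.ndarray) -> np.ndarray:
    """RMS over consecutive half-cycle windows, the usual sag/swell resolution."""
    n = len(waveform) // SAMPLES_PER_HALF_CYCLE
    windows = waveform[: n * SAMPLES_PER_HALF_CYCLE].reshape(n, SAMPLES_PER_HALF_CYCLE)
    return np.sqrt((windows ** 2).mean(axis=1))

def flag_events(rms: np.ndarray, low=0.9, high=1.1) -> list[tuple[int, str, float]]:
    """Flag half-cycles whose RMS strays outside ±10% of nominal."""
    events = []
    for i, value in enumerate(rms):
        ratio = value / NOMINAL_RMS
        if ratio < low:
            events.append((i, "sag", value))
        elif ratio > high:
            events.append((i, "swell", value))
    return events

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE                 # one second of samples
    wave = NOMINAL_RMS * np.sqrt(2) * np.sin(2 * np.pi * FREQ_HZ * t)
    wave[3000:3200] *= 0.7                                   # inject a brief sag
    for idx, kind, value in flag_events(half_cycle_rms(wave)):
        print(f"half-cycle {idx}: {kind}, rms={value:.1f} V")
```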

Enterprises often still have some kind of a cloud-first policy, he outlined,
but they have realized they need some form of private cloud too, typically
because public cloud does not meet the needs of certain workloads, mainly
around cost, complexity and compliance. However, the problem is that because
public cloud has taken priority, infrastructure has not grown in the right way, so
increasingly, Broadcom’s conversations are now with customers realizing they
need to focus on both public and private cloud, and some on-prem, Baguley
says, as they're realizing, “we need to make sure we do it right, we're
doing it in a cost-effective way, and we do it in a way that's actually
going to be strategically sensible for us going forward.” “In essence,
they’ve realised they need to build something on-prem that can not only
compete with public cloud, but actually be better in various categories,
including cost, compliance and complexity.” ... To help with these
concerns, Broadcom has released VMware Cloud Foundation (VCF) 9.0, the
latest edition of its platform to help customers get the most out of private
cloud. Described by Baguley as “the culmination of 25 years’ work at
VMware”, VCF 9.0 offers users a single platform with one SKU, giving them
improved visibility while supporting all applications with a consistent
experience across the private cloud environment.
This is an issue impacting many multinational organizations, driving growth in
regional and even industry-specific clouds. These offer tailored compliance,
security, and performance options. As organizations try to
architect infrastructure that supports their future states, with a blend of
cloud and on-prem, data sovereignty is an increasingly large issue. I hear a
lot from IT leaders about how they must consider local and regional
regulations, which adds a consideration to the simple concept of migration
to the cloud. ... Sustainability was always the hidden cost of
connected computing. Hosting data in the cloud consumes a lot of energy.
Financial cost is most top of mind when IT leaders talk about driving
efficiency through the cloud right now. It’s also at the root of a lot of
talk about moving to the edge and using AI-infused end user devices. But
expect sustainability to become an increasingly important factor in cloud:
geopolitical instability, the cost of energy, and the increasing demands of
AI will see to that. ... The AI PC pitch from hardware vendors is that
organizations will be able to build small ‘clouds’ of end user devices.
Specific functions and roles will work on AI PCs and do their computing at
the edge. The argument is compelling: better security and efficient modular
scalability. Not every user or function needs all capabilities and access to
all data.

When platform teams focus exclusively on technical excellence while
neglecting a communication strategy, they create an invisible barrier
between the platform’s capability and its business impact. Users can’t adopt
what they don’t understand, and leadership won’t invest in what they can’t
measure. ... To overcome engineers’ skepticism of new tools that may
introduce complexity, your communication should clearly articulate how the
platform simplifies their work. Highlight its ability to reduce cognitive
load, minimize context switching, enhance access to documentation and
accelerate development cycles. Present these advantages as concrete
improvements to daily workflows, rather than abstract concepts. ... Tap into
the influence of respected technical colleagues who have contributed to the
platform’s development or were early adopters. Their endorsements are more
impactful than any official messaging. Facilitate opportunities for these
champions to demonstrate the platform’s capabilities through lightning
talks, recorded demos or pair programming sessions. These peer-to-peer
interactions allow potential users to observe practical applications
firsthand and ask candid questions in a low-pressure environment.
Data sovereignty has far-reaching implications, with potential impact on
many areas of a business extending beyond the IT department. One of the most
obvious examples is for the legal and finance departments, where GDPR and
similar legislation require granular control over how data is stored and
handled. The harsh reality is that any gaps in compliance could result in
legal action, substantial fines and subsequent damage to longer term
reputation. Alongside this, providing clarity on data governance
increasingly factors into trust and competitive advantage, with customers
and partners keen to eliminate grey areas around data sovereignty. ... One
way that many companies are seeking to gain more control and visibility of
their data is by repatriating specific data sets from public cloud
environments over to on-premise storage or private clouds. This is not about
reversing cloud technology; instead, repatriation is a sound way of
achieving compliance with local legislation and ensuring there is no scope
for questions over exactly where data resides. In some instances,
repatriating data can improve performance, reduce cloud costs, and provide
assurance that data is protected from foreign government
access. Additionally, on-premise or private cloud setups can offer the
highest levels of security from third-party risks for the most sensitive or
proprietary data.