Quote for the day:
“People are not lazy. They simply have impotent goals – that is, goals that do not inspire them.” -- Tony Robbins
AI in Disaster Recovery: Mapping Technical Capabilities to Real Business Value
The Quantum Supply Chain Risk: How Quantum Computing Will Disrupt Global Commerce
Whether it’s APIs, middleware, firmware, embedded devices or operational technology, they’re all built on the same outdated encryption and systems of trust. One of the biggest threats from quantum computing will be to all this unseen machinery that powers global digital trade. These systems handle the backend of everything from routing cargo to scheduling deliveries and clearing large shipments, but they were never designed to withstand the threat of quantum. Attackers will be able to break in quietly — injecting malicious code into control software or ERP systems, or impersonating suppliers to feed in malicious information and hijack digital workflows. Quantum computing won’t necessarily attack these industries on its own, but it will corrupt the systems that power the global economy. ... Some of the most dangerous attacks are being staged today, with many nation-states and bad actors already storing encrypted data, from procurement orders to shipping records. When quantum computers are finally able to break those encryption schemes, attackers will be able to decrypt that data in what has been coined a Harvest Now, Decrypt Later (HNDL) attack. These attacks, although retroactive in nature, represent one of the biggest threats to the integrity of cross-border commerce. Global trade depends on digital provenance for handling goods and proving where they came from.
Securing OT Systems: The Limits of the Air Gap Approach
Aside from susceptibility to advanced tactics, techniques, and procedures (TTPs)
such as thermal manipulation and magnetic fields, more common vulnerabilities
associated with air-gapped environments include factors such as unpatched
systems going unnoticed, lack of visibility into network traffic, potentially
malicious devices coming on the network undetected, and removable media being
physically connected within the network. Once an attack is inside OT systems,
the consequences can be disastrous regardless of whether there is an air gap or
not. However, it is worth considering how the existence of the air gap can
affect the time-to-triage and remediation in the case of an incident. ... This
incident reveals that even if a sensitive OT system has complete digital
isolation, this robust air gap still cannot fully eliminate one of the greatest
vulnerabilities of any system—human error. Human error would remain a risk even if an organization went to the extreme of building a Faraday cage to eliminate electromagnetic radiation. Air-gapped systems are still vulnerable to social
engineering, which exploits human vulnerabilities, as seen in the tactics that
Dragonfly and Energetic Bear used to trick suppliers, who then walked the
infection right through the front door. Ideally, a technology would be able to
identify an attack regardless of whether it is caused by a compromised supplier,
radio signal, or electromagnetic emission.
How to Lock Down the No-Code Supply Chain Attack Surface
A core feature of no-code development, third-party connectors allow applications
to interact with cloud services, databases, and enterprise software. While these
integrations boost efficiency, they also create new entry points for
adversaries. ... Another emerging threat involves dependency confusion attacks,
where adversaries exploit naming collisions between internal and public software
packages. By publishing malicious packages to public repositories with the same
names as internally used components, attackers could trick the platform into
downloading and executing unauthorized code during automated workflow
executions. This technique allows adversaries to silently insert malicious
payloads into enterprise automation pipelines, often bypassing traditional security reviews (a defensive audit of this scenario is sketched after this section). ... One of the most challenging elements of securing no-code
environments is visibility. Security teams struggle with asset discovery and
dependency tracking, particularly in environments where business users can
create applications independently without IT oversight. Applications and automations built outside of IT governance may use unapproved connectors and, because they often integrate with critical business workflows, expose sensitive data.
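To make the dependency confusion risk described above concrete, here is a minimal sketch of a defensive audit: it checks a lockfile for packages whose names look internal but that were resolved from somewhere other than the private index, or whose recorded artifact digest no longer matches a pinned value. The lockfile layout, package name, prefix, and index URL are illustrative assumptions, not any particular no-code platform's format.

```python
import json
from pathlib import Path

# Illustrative policy: packages whose names look internal must come from the
# private index and match a pinned artifact digest. Prefix, names, URLs, and
# the lockfile layout are all hypothetical.
INTERNAL_PREFIX = "acme-"
APPROVED = {
    "acme-workflow-connector": {
        "index": "https://pypi.internal.example/simple",
        "sha256": "<pinned digest>",
    },
}

def audit_lockfile(lockfile: Path) -> list[str]:
    """Flag internal-looking packages that resolve outside the private index
    or whose recorded artifact digest differs from the pinned value."""
    findings = []
    for entry in json.loads(lockfile.read_text()):
        name, index, digest = entry["name"], entry["index"], entry["sha256"]
        if not name.startswith(INTERNAL_PREFIX):
            continue  # only audit internally named packages
        approved = APPROVED.get(name)
        if approved is None:
            findings.append(f"{name}: internal-looking name with no approved entry")
        elif index != approved["index"]:
            findings.append(f"{name}: resolved from {index}, expected the private index")
        elif digest != approved["sha256"]:
            findings.append(f"{name}: artifact digest does not match the pinned value")
    return findings

if __name__ == "__main__":
    for finding in audit_lockfile(Path("workflow.lock.json")):
        print("DEPENDENCY CONFUSION RISK:", finding)
```

Running a check like this in the same pipeline that executes automated workflows is one way to surface naming collisions before they reach production.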
Securing Your AI Model Supply Chain
Supply-chain Levels for Software Artifacts (SLSA) is a comprehensive
framework designed to protect the integrity of software artifacts, including
AI models. SLSA provides a set of standards and practices to secure the
software supply chain from source to deployment. By implementing SLSA,
organizations can ensure that their AI models are built and maintained with
the highest levels of security, reducing the risk of tampering and ensuring
the authenticity of their outputs. ... Sigstore is an open-source project
that aims to improve the security and integrity of software supply chains by
providing a transparent and secure way to sign and verify software
artifacts. Using cryptographic signatures, Sigstore ensures that AI models
and other software components are authentic and have not been tampered with.
This system allows developers and organizations to trace the provenance of
their AI models, ensuring that they originate from trusted sources. ... The
most valuable takeaway for ensuring model authenticity is the implementation
of robust verification mechanisms. By utilizing frameworks like SLSA and
tools like Sigstore, organizations can create a transparent and secure
supply chain that guarantees the integrity of their AI models. This approach
helps build trust with stakeholders and ensures that the models deployed in
production are reliable and free from malicious alterations.
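As a minimal sketch of the verification idea (not the full SLSA or Sigstore flow), the snippet below refuses to use a model artifact whose SHA-256 digest does not match a value published by the build pipeline. The file name and digest are placeholders; in a real deployment the digest and signer identity would be checked against a Sigstore bundle (for example with cosign verify-blob) or a SLSA provenance attestation rather than a hard-coded constant.

```python
import hashlib
from pathlib import Path

# Placeholder: in practice this digest would come from signed provenance
# (e.g., a SLSA attestation or Sigstore bundle), not a hard-coded constant.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model_digest(model_path: Path, expected_sha256: str) -> None:
    """Refuse to use a model artifact whose SHA-256 digest does not match the pinned value."""
    digest = hashlib.sha256()
    with model_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
    actual = digest.hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(
            f"Model integrity check failed for {model_path}: "
            f"expected {expected_sha256}, got {actual}"
        )

if __name__ == "__main__":
    # Hypothetical file name; verify before loading the model into serving code.
    verify_model_digest(Path("model.onnx"), EXPECTED_SHA256)
    print("Digest matches the pinned value.")
```

Full Sigstore verification goes further than a digest comparison: it also ties the signature to the expected signer identity, which is what lets consumers trace a model back to a trusted source.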
Data center retrofit strategies for AI workloads
AI accelerators are highly sensitive to power quality. Sub-cycle power fluctuations can cause bit errors, data corruption, or system instability. Older uninterruptible power supply (UPS) systems may struggle to handle the dynamic loads AI can produce, which often involve sub-cycle swings of three MW or more. Updating the electrical distribution system (EDS) is an opportunity that includes replacing dated UPS technology, which often cannot handle the dynamic AI load profile, redesigning power distribution for redundancy, and ensuring that power supply configurations meet the demands of high-density computing. ... With the high cost of AI downtime, risk mitigation becomes paramount. Energy and power management systems (EPMS) are capable of high-resolution waveform capture, which allows operators to trace and address electrical anomalies quickly. These systems are essential for identifying the root cause of power quality issues and coordinating fast response mechanisms (a simplified illustration follows this section). ... No two mission-critical facilities are the same regarding space, power, and cooling. Add the variables of each AI deployment, and what works for one facility may not be the best fit for another. That said, there are some universal truths about retrofitting for AI. You will need engineers who are well-versed in various equipment configurations, including cooling and electrical systems connected to the network.
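As a simplified illustration of what EPMS waveform analysis involves, the sketch below computes RMS voltage over consecutive half-cycle windows of sampled data and flags sub-cycle sags. The nominal voltage, line frequency, sample rate, and threshold are assumed values for illustration; real monitoring systems work from far higher-resolution, timestamped captures across many circuits.

```python
import math

# Assumed parameters (illustrative): 480 V nominal RMS, 60 Hz line frequency,
# 128 samples per cycle. Real EPMS hardware captures higher-resolution,
# timestamped waveforms across many channels.
NOMINAL_RMS_V = 480.0
LINE_FREQ_HZ = 60.0
SAMPLES_PER_CYCLE = 128

def half_cycle_rms(samples: list[float]) -> list[float]:
    """RMS voltage over consecutive half-cycle windows of a sampled waveform."""
    window = SAMPLES_PER_CYCLE // 2
    return [
        math.sqrt(sum(v * v for v in samples[i:i + window]) / window)
        for i in range(0, len(samples) - window + 1, window)
    ]

def find_sags(samples: list[float], threshold: float = 0.9) -> list[int]:
    """Indices of half-cycle windows where RMS drops below threshold * nominal."""
    return [
        i for i, rms in enumerate(half_cycle_rms(samples))
        if rms < threshold * NOMINAL_RMS_V
    ]

if __name__ == "__main__":
    # Synthetic waveform: normal sine for 10 cycles, then a 30% sag for 2 cycles.
    peak = NOMINAL_RMS_V * math.sqrt(2)
    wave = [
        (0.7 if 10 * SAMPLES_PER_CYCLE <= n < 12 * SAMPLES_PER_CYCLE else 1.0)
        * peak * math.sin(2 * math.pi * n / SAMPLES_PER_CYCLE)
        for n in range(15 * SAMPLES_PER_CYCLE)
    ]
    print("Sagging half-cycle windows:", find_sags(wave))
```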
Is it time for a 'cloud reset'? New study claims public and private cloud balance is now a major consideration for companies across the world
Enterprises often still have some kind of cloud-first policy, he outlined, but they have realized they need some form of private cloud too, typically because public cloud does not meet the needs of some workloads, mainly around cost, complexity and compliance. However, the problem is that because public cloud has taken priority, infrastructure has not grown in the right way - so
increasingly, Broadcom’s conversations are now with customers realizing they
need to focus on both public and private cloud, and some on-prem, Baguley
says, as they're realizing, “we need to make sure we do it right, we're
doing it in a cost-effective way, and we do it in a way that's actually
going to be strategically sensible for us going forward.” "In essence -
they've realised they need to build something on-prem that can not only
compete with public cloud, but actually be better in various categories,
including cost, compliance and complexity.” ... In order to help with these
concerns, Broadcom has released VMware Cloud Foundation (VCF) 9.0, the
latest edition of its platform to help customers get the most out of private
cloud. Described by Baguley as “the culmination of 25 years’ work at VMware”, VCF 9.0 offers users a single platform with one SKU - giving them
improved visibility while supporting all applications with a consistent
experience across the private cloud environment.
Cloud in the age of AI: Six things to consider
This is an issue impacting many multinational organizations, driving the growth of regional and even industry-specific clouds. These offer tailored compliance, security, and performance options. As organizations try to architect infrastructure that supports their future states, with a blend of cloud and on-prem, data sovereignty is an increasingly large issue. I hear a lot from IT leaders about how they must consider local and regional regulations, which adds a consideration to the simple concept of migration to the cloud. ... Sustainability was always the hidden cost of connected computing. Hosting data in the cloud consumes a lot of energy. Financial cost is top of mind when IT leaders talk about driving efficiency through the cloud right now. It’s also at the root of a lot of talk about moving to the edge and using AI-infused end user devices. But expect sustainability to become an increasingly important factor in cloud: geopolitical instability, the cost of energy, and the increasing demands of AI will see to that. ... The AI PC pitch from hardware vendors is that organizations will be able to build small ‘clouds’ of end user devices. Specific functions and roles will work on AI PCs and do their computing at the edge. The argument is compelling: better security and efficient modular scalability. Not every user or function needs all capabilities and access to all data.
Creating a Communications Framework for Platform Engineering
When platform teams focus exclusively on technical excellence while
neglecting a communication strategy, they create an invisible barrier
between the platform’s capability and its business impact. Users can’t adopt
what they don’t understand, and leadership won’t invest in what they can’t
measure. ... To overcome engineers’ skepticism of new tools that may
introduce complexity, your communication should clearly articulate how the
platform simplifies their work. Highlight its ability to reduce cognitive
load, minimize context switching, enhance access to documentation and
accelerate development cycles. Present these advantages as concrete
improvements to daily workflows, rather than abstract concepts. ... Tap into
the influence of respected technical colleagues who have contributed to the
platform’s development or were early adopters. Their endorsements are more
impactful than any official messaging. Facilitate opportunities for these
champions to demonstrate the platform’s capabilities through lightning
talks, recorded demos or pair programming sessions. These peer-to-peer
interactions allow potential users to observe practical applications
firsthand and ask candid questions in a low-pressure environment.