Quote for the day:
"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France
Outsmarting Data Center Outage Risks in 2026
Even the most advanced and well-managed facilities are not immune to
disruptions. Recent incidents, such as outages at AWS, Cloudflare, and Microsoft
Azure, serve as reminders that no data center can guarantee 100% uptime. This
highlights the critical importance of taking proactive steps to mitigate data
center outage risks, regardless of how reliable your facility appears to be. ...
Overheating events can cause servers to shut down, leading to outages. To
prevent an outage, you must detect and address excess heat issues proactively,
before they become severe enough to trigger failures. A key consideration in
this regard is to monitor data center temperatures granularly – meaning that
instead of just deploying sensors that track the overall temperature of the
server room, you monitor the temperatures of individual racks and servers. This
is important because heat can accumulate in small areas, even if it remains
normal across the data center. ... But from the perspective of data center
uptime, physical security, which protects against physical attacks, is arguably
a more important consideration. Whereas cybersecurity attacks typically target
only a handful of servers or workloads, physical attacks can easily disable an
entire data center. To this end, it’s critical to invest in multi-layered
physical security controls – from the data center perimeter through to locks on
individual server cabinets – to protect against intrusion. ... To mitigate
outage risks, data center operators must take proactive steps to prevent fires from starting in the first place.
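As a rough sketch of the granular monitoring described above, the Python snippet below checks per-rack sensor readings against a threshold instead of relying on a single room-level average; the rack names and the 27 C threshold are assumptions for illustration, not values from the article.

# Minimal sketch: flag racks that run hot even when the room average looks normal.
# Rack names and the 27 C threshold are illustrative assumptions.
from statistics import mean

RACK_THRESHOLD_C = 27.0  # assumed per-rack alert threshold

def hot_racks(readings: dict[str, list[float]]) -> list[str]:
    """Return racks whose hottest sensor exceeds the threshold."""
    return [rack for rack, sensors in readings.items() if max(sensors) > RACK_THRESHOLD_C]

readings = {
    "rack-a1": [24.1, 25.0, 24.8],
    "rack-a2": [25.5, 29.3, 26.0],  # a hot spot hidden inside a "normal" room
    "rack-b1": [23.9, 24.2, 24.0],
}

room_average = mean(t for sensors in readings.values() for t in sensors)
print(f"Room average: {room_average:.1f} C")  # ~25.2 C, looks fine
print("Hot racks:", hot_racks(readings))      # still flags rack-a2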
Deploying AI agents is not your typical software launch - 7 lessons from the trenches
Across the industry, there is agreement that agents require new considerations
beyond what we've become accustomed to in traditional software development. In
the process, new lessons are being learned. Industry leaders shared some of
their own lessons with ZDNET as they moved forward into an agentic AI future.
... Kale urges AI agent proponents to "grant autonomy in proportion to
reversibility, not model confidence. Irreversible actions across multiple
domains should always have human oversight, regardless of how confident the
system appears." Observability is also key, said Kale. "Being able to see how
a decision was reached matters as much as the decision itself." ... "AI works
well when it has quality data underneath," said Oleg Danyliuk, CEO at Duanex,
a marketing agency that built an agent to automate the validation of leads from
visitors to its site. "In our example, in order to understand if the lead is
interesting for us, we need to get as much data as we can, and the most
complex is to get the social network's data, as it is mostly not accessible to
scrape. That's why we had to implement several workarounds and get only the
public part of the data." ... "AI agents do not succeed on model capability
alone," said Martin Bufi, a principal research director at Info-Tech Research
Group. His team designed and developed AI agent systems for enterprise-level
functions, including financial analysis, compliance validation, and document
processing. What helped these projects succeed was the employment of
"AgentOps" (agent operations), which focuses on managing the entire agent
lifecycle.
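A minimal sketch of Kale's point about granting autonomy in proportion to reversibility: the gate below routes irreversible actions to a human reviewer regardless of the model's reported confidence, and logs how each decision was reached. The action names and approval hook are hypothetical, not taken from any particular framework.

# Sketch: gate agent actions on reversibility, not on model confidence.
# Action names and the approval hook are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    name: str
    reversible: bool   # can this action be cleanly undone?
    confidence: float  # the model's self-reported confidence (ignored by the gate)

def execute_with_oversight(action: AgentAction,
                           run: Callable[[AgentAction], None],
                           request_human_approval: Callable[[AgentAction], bool]) -> None:
    """Run reversible actions autonomously; escalate irreversible ones for review."""
    if action.reversible:
        print(f"[audit] auto-approved (reversible): {action.name}")
        run(action)
    elif request_human_approval(action):  # human oversight, regardless of confidence
        print(f"[audit] human-approved (irreversible): {action.name}")
        run(action)
    else:
        print(f"[audit] blocked irreversible action: {action.name}")

# Usage with stubbed hooks: high confidence does not bypass review.
execute_with_oversight(AgentAction("update_crm_draft", True, 0.62),
                       run=lambda a: print(f"executed {a.name}"),
                       request_human_approval=lambda a: False)
execute_with_oversight(AgentAction("wire_payment", False, 0.99),
                       run=lambda a: print(f"executed {a.name}"),
                       request_human_approval=lambda a: False)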
What enterprises think about quantum computing
Quantum computers’ qubits are incredibly fragile, so even setting or reading
qubits has to be incredibly precise or it messes everything up. Environmental
conditions can also mess things up, because qubits can get entangled with the
things around them. Qubits can even leak away in the middle of something. So,
here we have a technology that most people don’t understand and that is
incredibly finicky, and we’re supposed to bet the business on it? How many
enterprises would? None, according to the 352 who commented on the topic. How
many think their companies will use it eventually? All of them—but they don’t
know where or when, as an old song goes. And by the way, quantum theory is
older than that song, and we still don’t have a handle on it. ... First and
foremost, this isn’t the technology for general business applications. The
quantum geeks emphasize that good quantum applications are where you have some
incredibly complex algorithm, some math problem, that is simply not solvable
using digital computers. Some suggest that it’s best to think of a quantum
computer as a kind of analog computer. ... Even where quantum computing can
augment digital, you’ll have to watch ROI according to the second point. The
cost of quantum computing is currently prohibitive for most applications, even
the stuff it’s good for, so you need to find applications that have massive
benefits, or think of some “quantum as a service” for solving an occasional
complex problem.
Beyond the hype: 4 critical misconceptions derailing enterprise AI adoption
Leaders frequently assume AI adoption is purely technological when it
represents a fundamental transformation that requires comprehensive change
management, governance redesign and cultural evolution. The readiness illusion
obscures human and organizational barriers that determine success. ... Leaders
frequently assume AI can address every business challenge and guarantee
immediate ROI, when empirical evidence demonstrates that AI delivers
measurable value only in targeted, well-defined and precise use cases. This
expectation reality gap contributes to pilot paralysis, in which companies
undertake numerous AI experiments but struggle to scale any to production. ...
Executives frequently claim their enterprise data is already clean or assume
that collecting more data will ensure AI success — fundamentally
misunderstanding that quality, stewardship and relevance matter exponentially
more than raw quantity — and misunderstanding that the definition of clean
data changes when AI is introduced. ... AI systems are probabilistic and
require continuous lifecycle management. MIT research demonstrates that
manufacturing firms adopting AI frequently experience J-curve trajectories,
where productivity initially declines before longer-term gains follow.
This is because AI deployment triggers organizational disruption requiring
adjustment periods. Companies failing to anticipate this pattern abandon
initiatives prematurely. The fallacy manifests in inadequate deployment
management, including planning for model monitoring, retraining, governance
and adaptation.
Inside the Growing Problem of Identity Sprawl
For years, identity governance relied on a set of assumptions tied closely to
human behavior. Employees joined organizations, moved roles and eventually
left. Even when access reviews lagged or controls were imperfect, identities
persisted long enough to be corrected. That model no longer reflects reality.
The difference between human and machine identities isn't just scale. "With
human identities, if people are coming into your organizations as employees,
you onboard them. They work, and by the time they leave, you can deprovision
them," said Haider Iqbal ... "Organizations are using AI today, whether they
know it or not, and most organizations don't even know that it's deployed in
their environment," said Morey Haber, chief security advisor at BeyondTrust.
That lack of awareness is not limited to AI. Many security teams struggle to
maintain a reliable inventory of non-human identities, especially when those
identities are created dynamically by automation or cloud services. Visibility
gaps don't stop access from being granted, but they do prevent teams from
confidently enforcing policy. "Without integration … I don't know what it's
doing, and then I got to go figure it out. When you unify together, then you
have all the AI visibility," Haber said, describing the operational impact of
fragmented tooling. ... Modern enterprise environments rely on elevated access
for cloud orchestration, application integration and automated workflows.
Service accounts and application programming interfaces often require broad
permissions to function reliably.
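To make the lifecycle gap concrete, here is a small sketch (with made-up identity records, not tied to any particular IAM product) that flags non-human identities with no recorded owner or no recent activity, the kind of inventory check the visibility gaps above make hard to run.

# Sketch: flag machine identities that lack an owner or look abandoned.
# The records and the 90-day threshold are illustrative assumptions.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)

identities = [
    {"name": "ci-deploy-token", "human": False, "owner": "platform-team",
     "last_used": datetime.now(timezone.utc) - timedelta(days=3)},
    {"name": "legacy-etl-svc", "human": False, "owner": None,
     "last_used": datetime.now(timezone.utc) - timedelta(days=400)},
]

def review_candidates(records):
    """Return non-human identities with no owner or no recent use."""
    now = datetime.now(timezone.utc)
    return [r["name"] for r in records
            if not r["human"] and (r["owner"] is None or now - r["last_used"] > STALE_AFTER)]

print(review_candidates(identities))  # ['legacy-etl-svc']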
The Timeless Architecture: Enterprise Integration Patterns That Exceed Technology Trends
A strange reality is often encountered by enterprise technology leaders:
everything seems to change, yet many things remain the same. New technologies
emerge — from COBOL to Java to Python, from mainframes to the cloud — but the
fundamental problems persist. Organizations still need to connect incompatible
systems, convert data between different formats, maintain reliability when
components fail, and scale to meet increasing demand. ... Synchronous
request-response communication creates tight coupling and can lead to
cascading failures. Asynchronous messaging has appeared across all eras — on
mainframes via MQ, in SOA through ESB platforms, in cloud environments via
managed messaging services such as SQS and Service Bus, and in modern
event-streaming platforms like Kafka. ... A key architectural question is
how to coordinate complex processes that span multiple systems. Two primary
approaches exist. Orchestration relies on a centralized coordinator to control
the workflow, while choreography allows systems to react to events in a
decentralized manner. Both approaches existed during the mainframe era and
remain relevant in microservices architectures today. Each has advantages:
orchestration provides control and visibility, while choreography offers
resilience and loose coupling. ... Organizations that treat security as a mere
technical afterthought often accumulate significant technical debt. In
contrast, enterprises that embed security patterns as foundational
architectural elements are better equipped to adapt as technologies evolve.
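As a toy illustration of the orchestration-versus-choreography distinction, the sketch below runs the same three-step process both ways using an in-memory event bus; the step and event names are invented, and a real system would use a broker or managed messaging service rather than this simplification.

# Toy contrast: a central orchestrator vs. services reacting to events.
# Step names and the in-memory bus are illustrative assumptions.
from collections import defaultdict

def reserve_stock(order_id):  print(f"stock reserved for {order_id}")
def charge_payment(order_id): print(f"payment charged for {order_id}")
def ship_order(order_id):     print(f"{order_id} shipped")

# Orchestration: one coordinator controls and sees the whole workflow.
def orchestrate_order(order_id):
    reserve_stock(order_id)
    charge_payment(order_id)
    ship_order(order_id)

# Choreography: each service subscribes to events and reacts on its own.
subscribers = defaultdict(list)
def subscribe(event, handler): subscribers[event].append(handler)
def publish(event, order_id):
    for handler in subscribers[event]:
        handler(order_id)

subscribe("order_placed",    lambda oid: (reserve_stock(oid), publish("stock_reserved", oid)))
subscribe("stock_reserved",  lambda oid: (charge_payment(oid), publish("payment_charged", oid)))
subscribe("payment_charged", ship_order)

orchestrate_order("order-42")        # explicit, centrally visible sequence
publish("order_placed", "order-43")  # decentralized chain of reactions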
From distributed monolith to composable architecture on AWS: A modern approach to scalable software
A distributed monolith is a system composed of multiple services or components,
deployed independently but tightly coupled through synchronous dependencies such
as direct API calls or shared databases. Unlike a true microservices
architecture, where services are autonomous and loosely coupled, distributed
monoliths share many pitfalls of monoliths ... Composable architecture embraces
modularity and loose coupling by treating every component as an independent
building block. The focus lies in business alignment and agility rather than
just code decomposition. ... Start by analyzing the existing application to find
natural business or functional boundaries. Use Domain-Driven Design to define
bounded contexts that encapsulate specific business capabilities. ... Refactor
the code into separate repositories or modules, each representing a bounded
context or microservice. This clear separation supports independent deployment
pipelines and ownership. ... Replace direct code or database calls with API
calls or events. For example: Use REST or GraphQL APIs via API Gateway. Emit
business events via EventBridge or SNS for asynchronous processing. Use SQS for
message queuing to handle transient workloads. ... Assign each microservice its
own DynamoDB table or data store. Avoid cross-service database joins or queries.
Adopt a single-table design in DynamoDB to optimize data retrieval patterns
within each service boundary. This approach improves scalability and performance
at the data layer.
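To ground the event-emission step, here is a minimal boto3 sketch of publishing a business event to EventBridge instead of calling another service's code or database directly; the bus name, event source, and detail fields are assumptions for illustration, not values from the article.

# Sketch: emit an OrderPlaced event for other bounded contexts to consume.
# Bus name, source, and detail-type are illustrative assumptions.
import json
import boto3

events = boto3.client("events")

def publish_order_placed(order_id: str, total: float) -> None:
    """Publish a business event instead of reaching into another service's data store."""
    events.put_events(
        Entries=[{
            "EventBusName": "orders-bus",   # assumed custom event bus
            "Source": "orders.service",     # assumed source name
            "DetailType": "OrderPlaced",
            "Detail": json.dumps({"orderId": order_id, "total": total}),
        }]
    )

# Downstream contexts (billing, shipping) subscribe with EventBridge rules and
# consume through their own SQS queues or Lambda targets, so each service keeps
# its own DynamoDB table and no cross-service joins are needed.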
Firmware scanning time, cost, and where teams run EMBA
Security teams that deal with connected devices often end up running long
firmware scans overnight, checking progress in the morning, and trying to
explain to colleagues why a single image consumed a workday of compute time.
That routine sets the context for a new research paper that examines how the
EMBA firmware analysis tool behaves when it runs in different environments. ...
Firmware scans often stretch into many hours, especially for medium and large
images. The researchers tracked scan durations down to the second and repeated
runs to measure consistency. Repeated executions on the same platform produced
nearly identical run times and findings. That behavior matters for teams that
depend on repeatable results during testing, validation, or research work. It
also supports the use of EMBA in environments where scans need to be rerun with
the same settings over time. The data also shows that firmware size alone does
not explain scan duration. Internal structure, compression, and embedded
components influenced how long individual modules ran. Some smaller images
triggered lengthy analysis steps, especially during deep inspection stages. ...
Nuray said cloud-based EMBA deployments fit well into large-scale scanning
activity. He described cloud execution as a practical option for parallel
analysis across many firmware images. Local systems, he added, support detailed
investigation where teams need tight control over execution conditions and
repeatability.
'Most Severe AI Vulnerability to Date' Hits ServiceNow
Authentication issues in ServiceNow potentially opened the door for arbitrary
attackers to gain full control over the entire platform and access to the
various systems connected to it. ... Costello's first major discovery was that
ServiceNow shipped the same credential to every third-party service that
authenticated to the Virtual Agent application programming interface (API). It
was a simple, obvious string — "servicenowexternalagent" — and it allowed him to
connect to ServiceNow as legitimate third-party chat apps do. To do anything of
significance with the Virtual Agent, though, he had to impersonate a specific
user. Costello's second discovery, then, was quite convenient. He found that as
far as ServiceNow was concerned, all a user needed to prove their identity was
their email address — no password, let alone multifactor authentication (MFA),
was required. ... An attacker could use this information to create tickets and
manage workflows, but the stakes are now higher, because ServiceNow decided to
upgrade its virtual agent: it can now also engage the platform's shiny new "Now
Assist" agentic AI technology. ... "It's not just a compromise of the platform
and what's in the platform — there may be data from other systems being put onto
that platform," he notes, adding, "If you're any reasonably-sized organization,
you are absolutely going to have ServiceNow hooked up to all kinds of other
systems. So with this exploit, you can also then ... pivot around to Salesforce,
or jump to Microsoft, or wherever."
Cybercrime Inc.: When hackers are better organized than IT
Cybercrime has transformed from isolated incidents into an organized industry.
The large groups operate according to the same principles as international
corporations. They have departments, processes, management levels, and KPIs.
They develop software, maintain customer databases, and evaluate their success
rates. ... Cybercrime now functions like a service chain. Anyone planning an
attack today can purchase all the necessary components — from initial access
credentials to leak management. Access brokers sell access to corporate
networks. Botnet operators provide computing power for attacks. Developers
deliver turnkey exploits tailored to known vulnerabilities. Communication
specialists handle contact with the victims. ... What makes cybercrime so
dangerous today is not just the technology itself, but the efficiency of its
use. Attackers are flexible, networked, and eager to experiment. They test,
discard, and improve — in cycles that are almost unimaginable in a corporate
setting. Recruitment is handled much as it is in startups. Job offers for developers,
social engineers, or language specialists circulate in darknet forums. There are
performance bonuses, training, and career paths. The work methods are agile,
communication is decentralized, and financial motivation is clearly defined. ...
Given this development, absolute security is unattainable. The crucial factor is
the ability to quickly regain operational capability after an attack. Cyber
resilience describes this competence — not only to survive crises but also to
learn from them.