Quote for the day:
"The most powerful leadership tool you have is your own personal example." -- John Wooden
The day the cloud went dark
This week, the impossible happened—again. Amazon Web Services, the backbone of
the digital economy and the world’s largest cloud provider, suffered a
large-scale outage. If you work in IT or depend on cloud services, you didn’t
need a news alert to know something was wrong. Productivity ground to a halt,
websites failed to load, business systems stalled, and the hum of global
commerce was silenced, if only for a few hours. The impact was immediate and
severe, affecting everything from e-commerce giants to startups, including my
own consulting business. ... Some businesses hoped for immediate remedies from
AWS’s legendary service-level agreements. Here’s the reality: SLA credits are
cold comfort when your revenue pipeline is in freefall. The truth that every
CIO has faced at least once is that even industry-leading SLAs rarely
compensate for the true cost of downtime. They don’t make up for lost
opportunities, damaged reputations, or the stress on your teams. ... This
outage is a wake-up call. Headlines will fade, and AWS (and its competitors)
will keep promising ever-improving reliability. Just don’t forget the lesson:
No matter how many “nines” your provider promises, true business resilience
starts inside your own walls. Enterprises must take matters into their own
hands to avoid existential risk the next time lightning strikes.
Application Modernization Pitfalls: Don't Let Your Transformation Fail
Modernizing legacy applications is no longer a luxury — it’s a strategic
imperative. Whether driven by cloud adoption, agility goals, or technical
debt, organizations are investing heavily in transformation. Yet, for all its
potential, many modernization projects stall, exceed budgets, or fail to
deliver the expected business value. Why? The transition from a monolithic
legacy system to a flexible, cloud-native architecture is a complex
undertaking that involves far more than just technology. It's a strategic,
organizational, and cultural shift. And that’s where the pitfalls lie. ...
Application modernization is not just a technical endeavor — it’s a strategic
transformation that touches every layer of the organization. From legacy code
to customer experience, from cloud architecture to compliance posture, the
ripple effects are profound. Yet, the most overlooked ingredient in successful
modernization isn’t technology — it’s leadership: Leadership that frames
modernization as a business enabler, not a cost center; Leadership that
navigates complexity with clarity, acknowledging legacy constraints while
championing innovation; Leadership that communicates with empathy, recognizing
that change is hard and adoption is earned, not assumed. Modernization efforts
fail not because teams lack skill, but because they lack alignment.
CIOs will be on the hook for business-led AI failures
While some business-led AI projects include CIO input, AI experts have seen
many organizations launch AI projects without significant CIO or IT team
support. When other departments launch AI projects without heavy IT
involvement, they may underestimate the technical work needed to make the
projects successful, says Alek Liskov, chief AI officer at data refinery
platform provider Datalinx AI. ... “Start with the tech folks in the room
first, before you get much farther,” he says. “I still see many organizations
where there’s either a disconnect between business and IT, or there’s lack of
speed on the IT side, or perhaps it’s just a lack of trust.” Despite the
doubts, IT leaders need to be involved from the beginning of all AI projects,
adds Bill Finner, CIO at large law firm Jackson Walker. “AI is just another
technology to add to the stack,” he says. “Better to embrace it and help the
business succeed than to sit back and watch from the bench.” ... “It’s a great
opportunity for CIOs to work closely with all the practice areas both on the
legal and business professional side to ensure we’re educating everyone on the
capabilities of the applications and how they can enhance their day-to-day
workflows by streamlining processes,” Finner says. “CIOs love to help the
business succeed, and this is just another area where they can show their
value.”
Three Questions That Help You Build a Better Software Architecture
You don’t want to create an architecture for a product that no one needs. And
in validating the business ideas, you will test assumptions that drive quality
attributes like scalability and performance needs. To do this, the MVP has to
be more than a Proof of Concept - it needs to be able to scale well enough and
perform well enough to validate the business case, but it does not need to
answer all questions about scalability and performance ... yet. ... Achieving
good performance while scaling can also mean reworking parts of the solution
that you’ve already built; solutions that perform well with a few users may
break down as load is increased. On the other hand, you may never need to
scale to the loads that cause those failures, so overinvesting too early can
simply be wasted effort. Many scaling issues also stem from a critical
bottleneck, usually related to accessing a shared resource. Spotting these
early can inform the team about when, and under what conditions, they might
need to change their approach. ... One of the most important architectural
decisions that teams must make is to decide how they will know that technical
debt has risen too far for the system to be supportable and maintainable in
the future. The first thing they need to know is how much technical debt they
are actually incurring. One way they can do this is by recording decisions
that incur technical debt in their Architectural Decision Record (ADR).
Ransomware recovery perils: 40% of paying victims still lose their data
Decryptors are frequently slow and unreliable, John adds. “Large-scale
decryption across enterprise environments can take weeks and often fails on
corrupted files or complex database systems,” he explains. “Cases exist where
the decryption process itself causes additional data corruption.” Even when
decryptor tools are supplied, they may contain bugs, or leave files corrupted
or inaccessible. Many organizations also rely on untested — and vulnerable —
backups. Making matters still worse, many ransomware victims discover that
their backups were also encrypted as part of the attack. “Criminals often use
flawed or incompatible encryption tools, and many businesses lack the
infrastructure to restore data cleanly, especially if backups are patchy or
systems are still compromised,” says Daryl Flack, partner at UK-based managed
security provider Avella Security and cybersecurity advisor to the UK
Government. ... “Setting aside funds to pay a ransom is increasingly viewed as
problematic,” Tsang says. “While payment isn’t illegal in itself, it may
breach sanctions, it can fuel further criminal activity, and there is no
guarantee of a positive outcome.” A more secure legal and strategic position
comes from investing in resilience through strong security measures,
well-tested recovery plans, clear reporting protocols, and cyber insurance,
Tsang advises.
In IoT Security, AI Can Make or Break
Ironically, the same techniques that help defenders also help attackers.
Criminals are automating reconnaissance, targeting exposed protocols common in
IoT, and accelerating exploitation cycles. Fortinet recently highlighted a
surge in AI-driven automated scanning (tens of thousands of scans per second),
where IoT and Session Initiation Protocol (SIP) endpoints are probed earlier in
the kill chain. That scale turns "long-tail" misconfigurations into early
footholds. Worse, AI itself is susceptible to attack. Adversarial ML (machine
learning) can blind or mislead detection models, while prompt injection and
data poisoning can repurpose AI assistants connected to physical systems. ...
Move response left. Anomaly detection without orchestration just creates work.
It's important to pre-stage responses such as quarantine VLANs, Access Control
List (ACL) updates, Network Access Control (NAC) policies, and maintenance
window tickets. This way, high-confidence detections contain first and ask
questions second. Finally, run purple-team exercises that assume AI is the
target and the tool. This includes simulating prompt injection against your
assistants and dashboards; simulating adversarial noise against your IoT
Intrusion Detection System (IDS); and testing whether analysts can distinguish
"model weirdness" from real incidents under time pressure.
Cyber attack on Jaguar Land Rover estimated to cost UK economy £1.9 billion
Most of the estimated losses stem from halted vehicle production and reduced
manufacturing output. JLR’s production reportedly dropped by around 5,000
vehicles per week during the shutdown, translating to weekly losses of
approximately £108 million. The shock has cascaded across hundreds of suppliers
and service providers. Many firms have faced cash-flow pressures, with some
taking out emergency loans. To mitigate the fallout, JLR has reportedly cleared
overdue invoices and issued advance payments to critical suppliers. ... The
CMC’s Technical Committee urged businesses and policymakers to prioritise
resilience against operational disruptions, which now pose the greatest
financial risk from cyberattacks. The committee recommended identifying critical
digital assets, strengthening segmentation between IT and operational systems,
and ensuring robust recovery plans. It also called on manufacturers to review
supply-chain dependencies and maintain liquidity buffers to withstand prolonged
shutdowns. Additionally, it advised insurers to expand cyber coverage to include
large-scale supply chain disruption, and urged the government to clarify
criteria for financial support in future systemic cyber incidents.
Thinking Machines challenges OpenAI's AI scaling strategy: 'First superintelligence will be a superhuman learner'
To illustrate the problem with current AI systems, Rafailov offered a scenario
familiar to anyone who has worked with today's most advanced coding assistants.
"If you use a coding agent, ask it to do something really difficult — to
implement a feature, go read your code, try to understand your code, reason
about your code, implement something, iterate — it might be successful," he
explained. "And then come back the next day and ask it to implement the next
feature, and it will do the same thing." The issue, he argued, is that these
systems don't internalize what they learn. "In a sense, for the models we have
today, every day is their first day of the job," Rafailov said. ... "Think about
how we train our current generation of reasoning models," he said. "We take a
particular math problem, make it very hard, and try to solve it, rewarding the
model for solving it. And that's it. Once that experience is done, the model
submits a solution. Anything it discovers—any abstractions it learned, any
theorems—we discard, and then we ask it to solve a new problem, and it has to
come up with the same abstractions all over again." That approach misunderstands
how knowledge accumulates. "This is not how science or mathematics works," he
said. ... The objective would fundamentally change: "Instead of rewarding their
success — how many problems they solved — we need to reward their progress,
their ability to learn, and their ability to improve."
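Rafailov's proposed shift in objective can be sketched abstractly: instead of a binary reward for whether a problem was solved, reward the improvement the model shows across successive attempts. The sketch below is a hypothetical toy illustration of that distinction, not the actual Thinking Machines training objective; the scoring scheme is an assumption for demonstration.

```python
def outcome_reward(solved: bool) -> float:
    """Conventional objective: reward only final success."""
    return 1.0 if solved else 0.0

def progress_reward(scores: list[float]) -> float:
    """Hypothetical alternative: reward improvement across
    successive attempts at the same problem, so a model that
    learns from attempt to attempt earns credit even when the
    final answer is still imperfect."""
    if len(scores) < 2:
        return 0.0
    # Sum only the positive per-attempt improvements.
    return sum(max(0.0, b - a) for a, b in zip(scores, scores[1:]))

# A model that never fully solves the problem but steadily improves
# gets zero outcome reward, yet nonzero progress reward.
attempts = [0.1, 0.3, 0.6]  # illustrative partial-credit scores
assert outcome_reward(False) == 0.0
assert abs(progress_reward(attempts) - 0.5) < 1e-9
```

The point of the toy example is only that the two objectives rank the same trajectory very differently, which is the gap Rafailov highlights.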
Demystifying Data Observability: 5 Steps to AI-Ready Data
Data observability ensures data pipelines capture representative data, both the
expected and the messy. By continuously measuring drift, outliers, and
unexpected changes, observability creates the feedback loop that allows AI/ML
models to learn responsibly. In short, observability is not an add-on; it is a
foundational practice for AI-ready data. ... Rather than relying on manual
checks after the fact, observability should be continuous and automated. This
turns observability from a reactive safety net into a proactive accelerator for
trusted data delivery. As a result, every new dataset or transformation can
generate metadata about quality, lineage, and performance, while pipelines can
include regression tests and alerting as standard practice. ... The key is
automation. Rather than policies that sit in binders, observability enables
policies as code. In this way, data contracts and schema checks that are
embedded in pipelines can validate that inputs remain fit for purpose. Drift
detection routines, too, can automatically flag when training data diverges from
operational realities while governance rules, from PII handling to lineage, are
continuously enforced, not applied retroactively. ... It’s tempting to measure
observability in purely technical terms such as the number of alerts generated,
data quality scores, or percentage of tables monitored. But the real measure of
success is its business impact. Rather than numbers, organizations should ask if
it resulted in fewer failed AI deployments.
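The "policies as code" pattern described above can be illustrated with a minimal sketch: a data contract validates incoming rows against an expected schema, and a simple drift test flags when a numeric column's mean strays from a training-time baseline. The column names, contract, and tolerance here are illustrative assumptions, not a specific observability product's API.

```python
from statistics import mean

# Hypothetical data contract: required columns and their types.
CONTRACT = {"user_id": int, "latency_ms": float}

def check_schema(rows: list[dict]) -> list[str]:
    """Return contract violations instead of silently passing bad data."""
    errors = []
    for i, row in enumerate(rows):
        for col, typ in CONTRACT.items():
            if col not in row:
                errors.append(f"row {i}: missing column {col!r}")
            elif not isinstance(row[col], typ):
                errors.append(f"row {i}: {col!r} is not {typ.__name__}")
    return errors

def drift_alert(values: list[float], baseline_mean: float,
                tolerance: float = 0.2) -> bool:
    """Flag drift when the batch mean moves more than `tolerance`
    (relative) away from the training-time baseline."""
    return abs(mean(values) - baseline_mean) > tolerance * abs(baseline_mean)

batch = [{"user_id": 1, "latency_ms": 120.0},
         {"user_id": 2, "latency_ms": 480.0}]
assert check_schema(batch) == []  # contract holds for this batch
assert drift_alert([120.0, 480.0], baseline_mean=150.0)  # mean 300 vs 150
```

Embedding checks like these directly in the pipeline is what turns observability from an after-the-fact audit into the continuous, automated practice the article advocates.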
AI heavyweights call for end to ‘superintelligence’ research
Superintelligence isn’t just hype. It’s a strategic goal determined by a
privileged few, and backed by hundreds of billions of dollars in investment,
business incentives, frontier AI technology, and some of the world’s best
researchers. ... Human intelligence has reshaped the planet in profound ways. We
have rerouted rivers to generate electricity and irrigate farmland, transforming
entire ecosystems. We have webbed the globe with financial markets, supply
chains, air traffic systems: enormous feats of coordination that depend on our
ability to reason, predict, plan, innovate and build technology.
Superintelligence could extend this trajectory, but with a crucial difference.
People will no longer be in control. The danger is not so much a machine that
wants to destroy us, but one that pursues its goals with superhuman competence
and indifference to our needs. Imagine a superintelligent agent tasked with
ending climate change. It might logically decide to eliminate the species that’s
producing greenhouse gases. ... For years, efforts to manage AI have focused on
risks such as algorithmic bias, data privacy, and the impact of automation on
jobs. These are important issues. But they fail to address the systemic risks of
creating superintelligent autonomous agents. The focus has been on applications,
not the ultimate stated goal of AI companies to create superintelligence.