Daily Tech Digest - October 25, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


The day the cloud went dark

This week, the impossible happened—again. Amazon Web Services, the backbone of the digital economy and the world’s largest cloud provider, suffered a large-scale outage. If you work in IT or depend on cloud services, you didn’t need a news alert to know something was wrong. Productivity ground to a halt, websites failed to load, business systems stalled, and the hum of global commerce was silenced, if only for a few hours. The impact was immediate and severe, affecting everything from e-commerce giants to startups, including my own consulting business. ... Some businesses hoped for immediate remedies from AWS’s legendary service-level agreements. Here’s the reality: SLA credits are cold comfort when your revenue pipeline is in freefall. The truth that every CIO has faced at least once is that even industry-leading SLAs rarely compensate for the true cost of downtime. They don’t make up for lost opportunities, damaged reputations, or the stress on your teams. ... This outage is a wake-up call. Headlines will fade, and AWS (and its competitors) will keep promising ever-improving reliability. Just don’t forget the lesson: No matter how many “nines” your provider promises, true business resilience starts inside your own walls. Enterprises must take matters into their own hands to avoid existential risk the next time lightning strikes.


Application Modernization Pitfalls: Don't Let Your Transformation Fail

Modernizing legacy applications is no longer a luxury — it’s a strategic imperative. Whether driven by cloud adoption, agility goals, or technical debt, organizations are investing heavily in transformation. Yet, for all its potential, many modernization projects stall, exceed budgets, or fail to deliver the expected business value. Why? The transition from a monolithic legacy system to a flexible, cloud-native architecture is a complex undertaking that involves far more than just technology. It's a strategic, organizational, and cultural shift. And that’s where the pitfalls lie. ... Application modernization is not just a technical endeavor — it’s a strategic transformation that touches every layer of the organization. From legacy code to customer experience, from cloud architecture to compliance posture, the ripple effects are profound. Yet, the most overlooked ingredient in successful modernization isn’t technology — it’s leadership: Leadership that frames modernization as a business enabler, not a cost center; Leadership that navigates complexity with clarity, acknowledging legacy constraints while championing innovation; Leadership that communicates with empathy, recognizing that change is hard and adoption is earned, not assumed. Modernization efforts fail not because teams lack skill, but because they lack alignment. 


CIOs will be on the hook for business-led AI failures

While some business-led AI projects include CIO input, AI experts have seen many organizations launch AI projects without significant CIO or IT team support. When other departments launch AI projects without heavy IT involvement, they may underestimate the technical work needed to make the projects successful, says Alek Liskov, chief AI officer at data refinery platform provider Datalinx AI. ... “Start with the tech folks in the room first, before you get much farther,” he says. “I still see many organizations where there’s either a disconnect between business and IT, or there’s lack of speed on the IT side, or perhaps it’s just a lack of trust.” Despite the doubts, IT leaders need to be involved from the beginning of all AI projects, adds Bill Finner, CIO at large law firm Jackson Walker. “AI is just another technology to add to the stack,” he says. “Better to embrace it and help the business succeed than to sit back and watch from the bench.” ... “It’s a great opportunity for CIOs to work closely with all the practice areas both on the legal and business professional side to ensure we’re educating everyone on the capabilities of the applications and how they can enhance their day-to-day workflows by streamlining processes,” Finner says. “CIOs love to help the business succeed, and this is just another area where they can show their value.”


Three Questions That Help You Build a Better Software Architecture

You don’t want to create an architecture for a product that no one needs. And in validating the business ideas, you will test assumptions that drive quality attributes like scalability and performance needs. To do this, the MVP has to be more than a Proof of Concept - it needs to be able to scale well enough and perform well enough to validate the business case, but it does not need to answer all questions about scalability and performance ... yet. ... Achieving good performance while scaling can also mean reworking parts of the solution that you’ve already built; solutions that perform well with a few users may break down as load is increased. On the other hand, you may never need to scale to the loads that cause those failures, so overinvesting too early can simply be wasted effort. Many scaling issues also stem from a critical bottleneck, usually related to accessing a shared resource. Spotting these early can inform the team about when, and under what conditions, they might need to change their approach. ... One of the most important architectural decisions teams must make is determining how they will know that technical debt has risen too far for the system to be supportable and maintainable in the future. The first thing they need to know is how much technical debt they are actually incurring. One way they can do this is by recording decisions that incur technical debt in their Architectural Decision Record (ADR).


Ransomware recovery perils: 40% of paying victims still lose their data

Decryptors are frequently slow and unreliable, John adds. “Large-scale decryption across enterprise environments can take weeks and often fails on corrupted files or complex database systems,” he explains. “Cases exist where the decryption process itself causes additional data corruption.” Even when decryptor tools are supplied, they may contain bugs, or leave files corrupted or inaccessible. Many organizations also rely on untested — and vulnerable — backups. Making matters still worse, many ransomware victims discover that their backups were also encrypted as part of the attack. “Criminals often use flawed or incompatible encryption tools, and many businesses lack the infrastructure to restore data cleanly, especially if backups are patchy or systems are still compromised,” says Daryl Flack, partner at UK-based managed security provider Avella Security and cybersecurity advisor to the UK Government. ... “Setting aside funds to pay a ransom is increasingly viewed as problematic,” Tsang says. “While payment isn’t illegal in itself, it may breach sanctions, it can fuel further criminal activity, and there is no guarantee of a positive outcome.” A more secure legal and strategic position comes from investing in resilience through strong security measures, well-tested recovery plans, clear reporting protocols, and cyber insurance, Tsang advises.


In IoT Security, AI Can Make or Break

Ironically, the same techniques that help defenders also help attackers. Criminals are automating reconnaissance, targeting exposed protocols common in IoT, and accelerating exploitation cycles. Fortinet recently highlighted a surge in AI-driven automated scanning (tens of thousands of scans per second), where IoT and Session Initiation Protocol (SIP) endpoints are probed earlier in the kill chain. That scale turns "long-tail" misconfigurations into early footholds. Worse, AI itself is susceptible to attack. Adversarial ML (machine learning) can blind or mislead detection models, while prompt injection and data poisoning can repurpose AI assistants connected to physical systems. ... Move response left. Anomaly detection without orchestration just creates work. It's important to pre-stage responses such as quarantine VLANs, Access Control List (ACL) updates, Network Access Control (NAC) policies, and maintenance window tickets. This way, high-confidence detections contain first and ask questions second. Finally, run purple-team exercises that assume AI is the target and the tool. This includes simulating prompt injection against your assistants and dashboards; simulating adversarial noise against your IoT Intrusion Detection System (IDS); and testing whether analysts can distinguish "model weirdness" from real incidents under time pressure.
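
To make “contain first, ask questions second” concrete, here is a minimal sketch of a pre-staged response play. The helper functions (quarantine_vlan, update_acl, open_maintenance_ticket) are hypothetical stand-ins for whatever NAC, firewall, and ticketing APIs an organization actually runs, and the confidence threshold is an illustrative tuning value, not a figure from the article.

```python
# Sketch of a pre-staged containment play: high-confidence detections act first,
# lower-confidence ones open a ticket for analyst review. The helper functions
# below are hypothetical wrappers around real NAC / firewall / ITSM APIs.

CONFIDENCE_THRESHOLD = 0.9  # assumed tuning value, adjust to your false-positive tolerance

def quarantine_vlan(device_id: str) -> None:
    print(f"[NAC] moving {device_id} to quarantine VLAN")

def update_acl(device_id: str) -> None:
    print(f"[FW] blocking east-west traffic from {device_id}")

def open_maintenance_ticket(device_id: str, detail: str) -> None:
    print(f"[ITSM] ticket opened for {device_id}: {detail}")

def handle_detection(device_id: str, confidence: float, detail: str) -> None:
    """Contain first, ask questions second, but only above the confidence bar."""
    if confidence >= CONFIDENCE_THRESHOLD:
        quarantine_vlan(device_id)
        update_acl(device_id)
        open_maintenance_ticket(device_id, f"auto-contained: {detail}")
    else:
        open_maintenance_ticket(device_id, f"needs analyst review: {detail}")

handle_detection("iot-cam-042", 0.97, "SIP endpoint probed by automated scanner")
```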


Cyber attack on Jaguar Land Rover estimated to cost UK economy £1.9 billion

Most of the estimated losses stem from halted vehicle production and reduced manufacturing output. JLR’s production reportedly dropped by around 5,000 vehicles per week during the shutdown, translating to weekly losses of approximately £108 million. The shock has cascaded across hundreds of suppliers and service providers. Many firms have faced cash-flow pressures, with some taking out emergency loans. To mitigate the fallout, JLR has reportedly cleared overdue invoices and issued advance payments to critical suppliers. ... The CMC’s Technical Committee urged businesses and policymakers to prioritise resilience against operational disruptions, which now pose the greatest financial risk from cyberattacks. The committee recommended identifying critical digital assets, strengthening segmentation between IT and operational systems, and ensuring robust recovery plans. It also called on manufacturers to review supply-chain dependencies and maintain liquidity buffers to withstand prolonged shutdowns. Additionally, it advised insurers to expand cyber coverage to include large-scale supply chain disruption, and urged the government to clarify criteria for financial support in future systemic cyber incidents.


Thinking Machines challenges OpenAI's AI scaling strategy: 'First superintelligence will be a superhuman learner'

To illustrate the problem with current AI systems, Rafailov offered a scenario familiar to anyone who has worked with today's most advanced coding assistants. "If you use a coding agent, ask it to do something really difficult — to implement a feature, go read your code, try to understand your code, reason about your code, implement something, iterate — it might be successful," he explained. "And then come back the next day and ask it to implement the next feature, and it will do the same thing." The issue, he argued, is that these systems don't internalize what they learn. "In a sense, for the models we have today, every day is their first day of the job," Rafailov said. ... "Think about how we train our current generation of reasoning models," he said. "We take a particular math problem, make it very hard, and try to solve it, rewarding the model for solving it. And that's it. Once that experience is done, the model submits a solution. Anything it discovers—any abstractions it learned, any theorems—we discard, and then we ask it to solve a new problem, and it has to come up with the same abstractions all over again." That approach misunderstands how knowledge accumulates. "This is not how science or mathematics works," he said. ... The objective would fundamentally change: "Instead of rewarding their success — how many problems they solved — we need to reward their progress, their ability to learn, and their ability to improve."


Demystifying Data Observability: 5 Steps to AI-Ready Data

Data observability ensures data pipelines capture representative data, both the expected and the messy. By continuously measuring drift, outliers, and unexpected changes, observability creates the feedback loop that allows AI/ML models to learn responsibly. In short, observability is not an add-on; it is a foundational practice for AI-ready data. ... Rather than relying on manual checks after the fact, observability should be continuous and automated. This turns observability from a reactive safety net into a proactive accelerator for trusted data delivery. As a result, every new dataset or transformation can generate metadata about quality, lineage, and performance, while pipelines can include regression tests and alerting as standard practice. ... The key is automation. Rather than policies that sit in binders, observability enables policies as code. In this way, data contracts and schema checks that are embedded in pipelines can validate that inputs remain fit for purpose. Drift detection routines, too, can automatically flag when training data diverges from operational realities while governance rules, from PII handling to lineage, are continuously enforced, not applied retroactively. ... It’s tempting to measure observability in purely technical terms such as the number of alerts generated, data quality scores, or percentage of tables monitored. But the real measure of success is its business impact. Rather than numbers, organizations should ask if it resulted in fewer failed AI deployments. 
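
As a rough illustration of “policies as code,” the sketch below embeds a data contract check and a crude drift flag directly in a pipeline step instead of a manual after-the-fact review. The column names, dtypes, baseline, and tolerance are assumptions for the example, not part of any particular observability product.

```python
# Sketch of policies-as-code: a data contract check plus a simple drift flag,
# run inside the pipeline so violations block promotion instead of surfacing later.
import pandas as pd

CONTRACT = {"customer_id": "int64", "amount": "float64", "country": "object"}  # assumed contract

def check_contract(df: pd.DataFrame) -> list[str]:
    """Return contract violations (missing columns or wrong dtypes)."""
    problems = []
    for col, dtype in CONTRACT.items():
        if col not in df.columns:
            problems.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    return problems

def drift_flag(train_mean: float, live: pd.Series, tolerance: float = 0.2) -> bool:
    """Flag when the live mean diverges from the training baseline by more than tolerance."""
    return abs(live.mean() - train_mean) / max(abs(train_mean), 1e-9) > tolerance

batch = pd.DataFrame({"customer_id": [1, 2], "amount": [10.0, 250.0], "country": ["DE", "US"]})
violations = check_contract(batch)
if violations or drift_flag(train_mean=40.0, live=batch["amount"]):
    print("block promotion and alert data owners:", violations or "drift detected")
```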


AI heavyweights call for end to ‘superintelligence’ research

Superintelligence isn’t just hype. It’s a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world’s best researchers. ... Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains, air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology. Superintelligence could extend this trajectory, but with a crucial difference. People will no longer be in control. The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs. Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that’s producing greenhouse gases. ... For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs. These are important issues. But they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence.

Daily Tech Digest - October 23, 2025


Quote for the day:

“The more you lose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale



Leadership lessons from NetForm founder Karen Stephenson

Co-creation is a hot buzzword encouraging individuals to integrate and create with each other, but the simplest way to integrate and create is in the mind of one person — if they’re willing to push forward and do it. Even further, what can an integrated team of diverse minds accomplish when they co-create? ... In the age of AI, humans will need to focus on what humans do well. At the moment, at least, that’s making novel connections, thinking by analogy and creating the new. Our single-field approach to learning, qualifications and career ladders makes it hard for us to compete with machines that are often smarter than we are in any given discipline. For that creative spark and to excel at what messy, forgetful, slow, imperfect humans do best, we need to work, think and live differently. In fact, the founders of five of the largest companies in the world are (or were) polymaths — mentally diverse people skilled in multiple disciplines — Bill Gates, Steve Jobs, Warren Buffett, Larry Page and Jeff Bezos. They learn because they’re curious and want to solve problems, not for a career ladder. It’s easier than ever, today, to learn with AI and online materials and to collaborate with tech and humans around the world. All you need to do is open inward to your talents and desires, explore, collect and fuse.


Why cloud and AI projects take longer and how to fix the holdups

In the case of the cloud, the problem is that senior management thinks that the cloud is always cheaper, that you can always cut costs by moving to the cloud. This is despite the recent stories on “repatriation,” or moving cloud applications back into the data center. In the case of cloud projects, most enterprise IT organizations now understand how to assess a cloud project for cost/benefit, so most of the cases where impossible cost savings are promised are caught in the planning phase. For AI, both senior management and line department management have high expectations with respect to the technology, and in the latter case may also have some experience with AI in the form of as-a-service generative AI models available online. About a quarter of these proposals quickly run afoul of governance policies because of problems with data security, and half of this group dies at this point. For the remaining proposals, there is a whole set of problems that emerge. Most enterprises admit that they really don’t understand what AI can do, which obviously makes it hard to frame a realistic AI project. The biggest gap identified is between an AI business goal and a specific path leading to it. One CIO calls the projects offered by user organizations “invitations to AI fishing trips” because the goal is usually set in business terms, and these would actually require a project simply to identify how the stated goal could be achieved.


Who pays when a multi-billion-dollar data center goes down?

While the Lockton team is looking at everything from immersion cooling to drought, there are a handful of risks where it feels the industry isn't adequately preparing. “The big thing that isn't getting on people's radars in a growing way is customer equipment,” Hayhow says. “Looking at this through the lens of the data center owner or developer, it's often very difficult. “It's a bit of an unspoken conversation that the equipment in the white space belongs to the customer. Often you don't have custody over it, you don't have visibility over it, and it’s highly proprietary. But the value of it is growing.” Per square meter of white space, the Lockton partner suggests that the value of the equipment five years from now will be exponentially larger than the value of the equipment five years ago, as more data centers invest in expensive GPUs and other equipment for AI use cases. “Leases have become clearer in terms of placing responsibility for damage to customer equipment more squarely on the shoulders of the owner, developer,” Hayhow says. “We're having that conversation in the US, where the halls are larger, the value of the equipment is greater, and some of the hyperscale customers are being much more prescriptive in terms of wanting to address the topic of damage to our equipment … if you lose 20 megawatts worth of racks of Nvidia chips, the lead time to get those replaced, unless you're building elsewhere, is quite significant.”


AI Agents Need Security Training – Just Like Your Employees

“It may not be as candid as what humans would do during those sessions, but AI agents used by your workforce do need to be trained. They need to understand what your company policies are, including what is acceptable behavior, what data they're allowed to access, what actions they're allowed to take,” Maneval explained. ... “Most AI tools are just trained to do the same thing over and over and so it means decisions are based on assumptions from limited information,” she explained to Infosecurity. “Additionally, most AI tools solve real problems but also create real risks and each solves different problems and creates different risks.” While some cybersecurity experts argue that auditing AI tools is no different to auditing any other software or application, Maneval disagrees. ... Maneval said her “rule of thumb” is that whether you’re dealing with traditional machine learning algorithms, generative AI applications or AI agents, “treat them like any other employees.” This not only means that AI-powered agents should be trained on security policies but should also be forced to respect security controls that the staff have to respect, such as role-based access controls (RBAC). “You should look at how you treat your humans and apply those same controls to the AI. You probably do a background check before anyone is hired. Do the same thing with your AI agent. ..."
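
A minimal sketch of what “treat them like any other employee” could look like in practice: the same role-based access check that gates a human employee's actions also gates the AI agent's. The roles, identities, and permission strings below are illustrative placeholders, not a prescribed policy model.

```python
# Sketch: one RBAC check for humans and AI agents alike. Roles, identities, and
# permission strings are placeholders; wire this to a real IAM/IdP in practice.

ROLE_PERMISSIONS = {
    "support-agent": {"read:tickets", "write:ticket-replies"},
    "finance-analyst": {"read:invoices"},
}

IDENTITIES = {
    "alice@corp.example": "finance-analyst",   # human employee
    "ai-helpdesk-bot-01": "support-agent",     # AI agent, onboarded the same way
}

def is_allowed(identity: str, action: str) -> bool:
    """Return True only if the identity's role grants the requested action."""
    role = IDENTITIES.get(identity)
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("ai-helpdesk-bot-01", "write:ticket-replies")
assert not is_allowed("ai-helpdesk-bot-01", "read:invoices")  # blocked, just as a support hire would be
```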


Why must CISOs slay a cyber dragon to earn business respect?

Why should a security leader need to experience a major cyber incident to earn business colleagues’ respect? Jeff Pollard, VP and principal analyst at Forrester, says this enterprise perception problem is “just part of human nature. If we don’t see the bad thing happening, we don’t appreciate all of the things that were done to prevent that bad thing from happening.” Of course, if an attack turns into an incident and defense goes poorly, “it can easily turn from a hero moment to a scapegoat moment,” Pollard says. Oberlaender, who now works as a cybersecurity consultant, is among those who believe hard-earned experience should be rewarded, but that’s not what he’s seeing in the market today. ... CISOs “feel that they need to fight off an attack to show value, but there are many other successes they can do and show,” says Erik Avakian, technical counselor at Info-Tech Research Group. “Building KPIs is a powerful way to show their value.” ... Chris Jackson, a senior cybersecurity specialist with tech education vendor Pluralsight, reinforces the frustration that many enterprise CISOs feel about the lack of appropriate respect from their colleagues and bosses. “CISOs are a lot like pro sports coaches. It doesn’t matter how well they performed during the season or how many games they won. If they don’t win the championship, it’s seen as a failure, and the coach is often the first to go,” Jackson says. 


The next cyber crisis may start in someone else’s supply chain

Organizations have improved oversight of their direct partners, but few can see beyond the first layer. This limited view leaves blind spots that attackers can exploit, particularly through third-party software or service providers. “We’re in a new generation of risk, one where cyber, geopolitical, technology, political risk, and other factors are converging and reshaping the landscape. The impact on markets and operations is unfolding faster than many organizations can keep up,” said Jim Wetekamp, CEO of Riskonnect. ... Third-party and nth-party risks continue to expose companies to disruption. Most organizations have business continuity plans for supplier disruptions, but their monitoring often stops at direct partners. Only a small fraction can monitor risks across multiple tiers of their supply chain, and some cannot track their critical technology providers at all. Organizations still underestimate how dependent they are on third parties and continue to rely on paper-based continuity plans that offer a false sense of security. ... More companies now have a chief risk officer, but funding for technology and tools has barely moved. Most risk leaders say their budgets have stayed the same even as they are asked to cover more ground. Many are turning to automation and specialized software to do more with what they already have.


Boardroom to War Room: Translating AI-Driven Cyber Risk into Action

Great CISOs today combine strategic leadership, financial knowledge, technological skills, and empathy to turn cybersecurity from a burden on operations into a strong enabler. This change happens faster with artificial intelligence. AI has a lot of potential, but it also makes things more uncertain. It can do things like forecast threats and automate orchestration. CISOs need to see AI problems as more than just technological problems; they need to see them as business risks that need clear communication, openness, and quick response. ... Data and graphics, not storytelling, win over executives. Suggested metrics include: Predictive accuracy - The percentage of risks that AI flagged before a breach compared to the percentage of threats that AI flagged after it happened; Speed of reaction - The average time it took for AI-enabled containment to work compared to manual reaction; False positive rate - Tech teams employed AI to improve alerts and cut down on alert fatigue from X to Y; Third-party model risk - The number of outside model calls that were looked at and accepted; Visual callout suggestion - A mock-up of a dashboard that illustrates AI risk KPIs, a trendline of predictive value, and a drop in incidents. ... Change from being an IT responder who reacts to problems to a strategic AI-enabled risk leader. Take ownership of your AI risk story, keep an eye on third-party models, provide your board clear information, and make sure your war room functions quickly.


Govt. faces questions about why US AWS outage disrupted UK tax office and banking firms

“The narrative of bigger is better and biggest is best has been shown for the lie it always has been,” Owen Sayers, an independent security architect and data protection specialist with a long history of working in the public sector, told Computer Weekly. “The proponents of hyperscale cloud will always say they have the best engineers, the most staff and the greatest pool of resources, but bigger is not always better – and certainly not when countries rely on those commodity global services for their own national security, safety and operations. “Nationally important services must be recognised as best delivered under national control, and as a minimum, the government should be knocking on AWS’s door today and asking if they can in fact deliver a service that guarantees UK uptime,” he said. “Because the evidence from this week’s outage suggests that they cannot.” ... “In light of today’s major outage at Amazon Web Services … why has HM Treasury not designated Amazon Web Services or any other major technology firm as a CTP for the purposes of the Critical Third Parties Regime,” asked Hillier, in the letter. “[And] how soon can we expect firms to be brought into this regime?” Hillier also asked HM Treasury for clarification about whether or not it is concerned about the fact that “seemingly key parts of our IT infrastructure are hosted abroad” given the outage originated from a US-based AWS datacentre region but impacted the activities of Lloyds Bank and also HMRC.


Quantum work, federated learning and privacy: Emerging frontiers in blockchain research

It is possible that quantum computation could one day serve as the foundation for blockchain consensus. The prospect is alluring: quantum algorithms can solve problems that classical computers find difficult, and the approach may be more efficient and more resistant to brute-force attacks. The danger, however, is significant: when quantum computers are sufficiently robust, existing encryption standards can be compromised. ... Federated learning is another emerging strand of blockchain research: a machine learning model training technique that avoids data centralisation. Rather than pooling sensitive data on a central server, it lets individual devices or nodes contribute to a shared model while keeping raw data inaccessible to third parties. ... The issue of privacy is of specific importance today due to the increased regulatory pressure on exchanges and cryptocurrency companies. A compromise between user privacy and regulatory openness could prove to be the key to success. Studies of privacy-preserving instruments provide a competitive advantage to blockchain developers and to exchanges interested in increasing their influence on the global economy. ... The decade of blockchain research to come will not be characterised by faster transactions or lower costs. It will redraw the borders of trust, calculation, and privacy in digitally based economies.


Ransomware groups surge as automation cuts attack time to 18 mins

The ransomware group LockBit has recently introduced "LockBit 5.0", reportedly incorporating artificial intelligence for attack randomisation and enhanced targeting options, with a focus on regaining its previous position atop the ransomware ecosystem. Medusa, by contrast, was noted to have fallen behind due in part to lacking widespread automated and customisable features, despite previous activity levels. ReliaQuest's analysis predicts the rise of new groups through the lens of its three-factor model, specifically naming "The Gentlemen" and "DragonForce" as likely to become major threats due to their adoption of advanced technical capabilities. The Gentlemen, for instance, has listed over 30 victims on its data-leak site within its first month of activity, underpinned by automation, prioritised encryption, and endpoint discovery for rapid lateral movement. Conversely, groups such as "Chaos" and "Nova" are likely to remain minor players, lacking the integral features associated with higher victim counts and affiliate recruitment. ... RaaS groups now use automation to reduce breakout times to as little as 18 minutes, making manual intervention too slow. Implement automated containment and response plays to keep pace with attackers. These workflows should automatically isolate hosts, block malicious files, and disable compromised accounts quickly after a critical detection, containing the threat before ransomware can be deployed.

Daily Tech Digest - October 22, 2025


Quote for the day:

"Good content isn't about good storytelling. It's about telling a true story well." -- Ann Handley



When yesterday’s code becomes today’s threat

A striking new supply chain attack is sending shockwaves through the developer community: a worm-style campaign dubbed “Shai-Hulud” has compromised at least 187 npm packages, including the tinycolor package, which sees 2 million hits weekly, and is spreading to other maintainers' packages. The malicious payload modifies package manifests, injects malicious files, repackages, and republishes — thereby infecting downstream projects. This incident underscores a harsh reality: even code released weeks, months, or even years ago can become dangerous once a dependency in its chain has been compromised. ... Sign your code: All packages/releases should use cryptographic signing. This allows users to verify the origin and integrity of what they are installing. Verify signatures before use: When pulling in dependencies, whether in CI/CD pipelines or local dev setups, include a step to check that the signature matches a trusted publisher and that the code wasn’t tampered with. SBOMs are your map of exposure: If you have a Software Bill of Materials for your project(s), you can query it for compromised packages. Find which versions/packages have been modified — even retroactively — so you can patch, remove, or isolate them. Continuous monitoring of risk posture: It's not enough to secure when you ship. You need alerts when any dependency or component’s risk changes: new vulnerabilities, suspicious behavior, misuse of credentials, or signs that a trusted package may have been modified after release.
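
As a small illustration of the SBOM advice above, the sketch below walks a CycloneDX-style JSON SBOM and flags components matching a deny-list of compromised package versions. The package names, versions, and file path are placeholders rather than the actual Shai-Hulud indicator set; in practice you would substitute a vendor or CERT IOC feed.

```python
# Sketch: query a CycloneDX-style JSON SBOM for components on a deny-list of
# known-compromised npm package versions. The deny-list entries and the bom.json
# path are placeholders, not real indicators; swap in your IOC feed.
import json

COMPROMISED = {("example-pkg-a", "1.2.3"), ("example-pkg-b", "4.5.6")}  # placeholder pins

def find_compromised(sbom_path: str) -> list[str]:
    """Return 'name@version' strings for SBOM components on the deny-list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in COMPROMISED:
            hits.append(f"{key[0]}@{key[1]}")
    return hits

if __name__ == "__main__":
    for hit in find_compromised("bom.json"):  # placeholder SBOM path
        print("compromised dependency found:", hit)
```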


Cloud Sovereignty: Feature. Bug. Feature. Repeat!

Cloud sovereignty isn’t just a buzzword anymore, argues Kushwaha. “It’s a real concern for businesses across the world. The pattern is clear. The cloud isn’t a one-size-fits-all solution anymore. Companies are starting to realise that sometimes control, cost, and compliance matter more than convenience.” ... Cloud sovereignty is increasingly critical due to the evolving geopolitical scenario, government and industry-specific regulations, and vendor lock-ins with heavy reliance on hyperscalers. The concept has gained momentum and will continue to do so because technology has become pervasive and critical for running a state/country and any misuse by foreign actors can cause major repercussions, the way Bavishi sees it. Prof. Bhatt observes that true digital sovereignty is a distant dream, and that achieving it requires building a robust ecosystem over decades. This isn’t counterintuitive; it’s evolution, as Kushwaha puts it. “The cloud’s original promise was one of freedom. Today, when it comes to the cloud, freedom means more control. Businesses investing heavily in digital futures can’t afford to ignore the fine print in hyperscaler contracts or the reach of foreign laws. Sovereignty is the foundation for building safely in a fragmented world.” ... Organisations have recognised the risks of digital dependencies and are looking for better options. There is no turning back, Karlitschek underlines.


Securing AI to Benefit from AI

As organizations begin to integrate AI into defensive workflows, identity security becomes the foundation for trust. Every model, script, or autonomous agent operating in a production environment now represents a new identity — one capable of accessing data, issuing commands, and influencing defensive outcomes. If those identities aren't properly governed, the tools meant to strengthen security can quietly become sources of risk. The emergence of Agentic AI systems makes this especially important. These systems don't just analyze; they may act without human intervention. They triage alerts, enrich context, or trigger response playbooks under delegated authority from human operators. ... AI systems are capable of assisting human practitioners like an intern that never sleeps. However, it is critical for security teams to differentiate what to automate from what to augment. Some tasks benefit from full automation, especially those that are repeatable, measurable, and low-risk if an error occurs. ... Threat enrichment, log parsing, and alert deduplication are prime candidates for automation. These are data-heavy, pattern-driven processes where consistency outperforms creativity. By contrast, incident scoping, attribution, and response decisions rely on context that AI cannot fully grasp. Here, AI should assist by surfacing indicators, suggesting next steps, or summarizing findings while practitioners retain decision authority. Finding that balance requires maturity in process design.


The Unkillable Threat: How Attackers Turned Blockchain Into Bulletproof Malware Infrastructure

When EtherHiding emerged in September 2023 as part of the CLEARFAKE campaign, it introduced a chilling reality: attackers no longer need vulnerable servers or hackable domains. They’ve found something far better—a global, decentralized infrastructure that literally cannot be shut down. ... When victims visit the infected page, the loader queries a smart contract on Ethereum or BNB Smart Chain using a read-only function call. ... Forget everything you know about disrupting cybercrime infrastructure. There is no command-and-control server to raid. No hosting provider to subpoena. No DNS to poison. The malicious code exists simultaneously everywhere and nowhere, distributed across thousands of blockchain nodes worldwide. As long as Ethereum or BNB Smart Chain operates—and they’re not going anywhere—the malware persists. Traditional law enforcement tactics, honed over decades of fighting cybercrime, suddenly encounter an immovable object. You cannot arrest a blockchain. You cannot seize a smart contract. You cannot compel a decentralized network to comply. ... The read-only nature of payload retrieval is perhaps the most insidious feature. When the loader queries the smart contract, it uses functions that don’t create transactions or blockchain records. 
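
To see why the read-only retrieval leaves nothing for defenders to find on-chain, consider a plain view-function lookup in web3.py: a .call() is served via eth_call, which a node evaluates locally without creating a transaction or any blockchain record. The RPC endpoint, contract address, and ABI below are hypothetical placeholders used only to show the shape of such a lookup, not the campaign's actual contract.

```python
# Why read-only retrieval is invisible: .call() issues an eth_call, which a node
# evaluates locally; no transaction is created and nothing lands in chain history.
# The endpoint, address, and ABI are placeholders, not real campaign infrastructure.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # assumed public RPC endpoint

ABI = [{
    "name": "getData", "type": "function", "stateMutability": "view",
    "inputs": [], "outputs": [{"name": "", "type": "string"}],
}]
contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=ABI,
)

# Gasless, free, and absent from the transaction history defenders review.
payload = contract.functions.getData().call()
```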


New 'Markovian Thinking' technique unlocks a path to million-token AI reasoning

Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called Markovian Thinking, the approach allows LLMs to engage in lengthy reasoning without incurring the prohibitive computational costs that currently limit such tasks. The team’s implementation, an environment named Delethink, structures the reasoning chain into fixed-size chunks, breaking the scaling problem that plagues very long LLM responses. Initial estimates show that for a 1.5B parameter model, this method can cut the costs of training by more than two-thirds compared to standard approaches. ... The researchers compared this to models trained with the standard LongCoT-RL method. Their findings indicate that the model trained with Delethink could reason up to 24,000 tokens, and matched or surpassed a LongCoT model trained with the same 24,000-token budget on math benchmarks. On other tasks like coding and PhD-level questions, Delethink also matched or slightly beat its LongCoT counterpart. “Overall, these results indicate that Delethink uses its thinking tokens as effectively as LongCoT-RL with reduced compute,” the researchers write. The benefits become even more pronounced when scaling beyond the training budget. 
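
A rough sketch of the fixed-size-chunk idea as described above, assuming the model carries only a short tail of each chunk into the next one so that per-step context (and cost) stays bounded. Here generate() is a hypothetical stand-in for an inference API, and the chunk and carryover sizes are illustrative rather than the paper's settings.

```python
# Sketch of reasoning in fixed-size chunks with a bounded carryover, so per-step
# context stays constant no matter how long the overall chain runs. generate() is
# a hypothetical LLM call; sizes below are illustrative assumptions.

CHUNK_TOKENS = 8000      # assumed per-chunk budget
CARRYOVER_TOKENS = 512   # assumed carryover size
MAX_CHUNKS = 16

def generate(prompt: str, max_tokens: int) -> str:
    """Hypothetical model call; replace with a real inference API."""
    return "...reasoning... FINAL ANSWER: 42"

def chunked_reason(question: str) -> str:
    carryover = ""
    for _ in range(MAX_CHUNKS):
        prompt = f"{question}\n\n[carryover]\n{carryover}"
        chunk = generate(prompt, max_tokens=CHUNK_TOKENS)
        if "FINAL ANSWER:" in chunk:
            return chunk.split("FINAL ANSWER:", 1)[1].strip()
        # Keep only the tail of the chunk; earlier tokens are never re-read.
        carryover = chunk[-CARRYOVER_TOKENS * 4:]  # rough chars-per-token estimate
    return "no answer within budget"

print(chunked_reason("What is 6 * 7?"))
```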


The dazzling appeal of the neoclouds

While their purpose-built design gives them an advantage for AI workloads, neoclouds also bring complexities and trade-offs. Enterprises need to understand where these platforms excel and plan how to integrate them most effectively into broader cloud strategies. Let’s explore why this buzzword demands your attention and how to stay ahead in this new era of cloud computing. ... Neoclouds, unburdened by the need to support everything, are outpacing hyperscalers in areas like agility, pricing, and speed of deployment for AI workloads. A shortage of GPUs and data center capacity also benefits neocloud providers, which are smaller and nimbler, allowing them to scale quickly and meet growing demand more effectively. This agility has made them increasingly attractive to AI researchers, startups, and enterprises transitioning to AI-powered technologies. ... Neoclouds are transforming cloud computing by offering purpose-built, cost-effective infrastructure for AI workloads. Their price advantages will challenge traditional cloud providers’ market share, reshape the industry, and change enterprise perceptions, fueled by their expected rapid growth. As enterprises find themselves at the crossroads of innovation and infrastructure, they must carefully assess how neoclouds can fit into their broader architectural strategies. 


Wi-Fi 8 is coming — and it’s going to make AI a lot faster

Unlike previous generations of Wi-Fi that competed on peak throughput numbers, Wi-Fi 8 prioritizes consistent performance under challenging conditions. The specification introduces coordinated multi-access point features, dynamic spectrum management, and hardware-accelerated telemetry designed for AI workloads at the network edge. ... A core part of the Wi-Fi 8 architecture is an approach known as Ultra High Reliability (UHR). This architectural philosophy targets the 99th percentile user experience rather than best-case scenarios. The innovation addresses AI application requirements that demand symmetric bandwidth, consistent sub-5-millisecond latency and reliable uplink performance. ... Wi-Fi 8 introduces Extended Long Range (ELR) mode specifically for IoT devices. This feature uses lower data rates with more robust coding to extend coverage. The tradeoff accepts reduced throughput for dramatically improved range. ELR operates by increasing symbol duration and using lower-order modulation. This improves the link budget for battery-powered sensors, smart home devices and outdoor IoT deployments. ... Wi-Fi 8 enhances roaming to maintain sub-millisecond handoff latency. The specification includes improved Fast Initial Link Setup (FILS) and introduces coordinated roaming decisions across the infrastructure. Access points share client context information before handoff. 


Life, death, and online identity: What happens to your online accounts after death?

Today, we lack the tools (protocols) and the regulations to enable digital estate management at scale. Law and regulation can force a change in behavior by large providers. However, lacking effective protocols to establish a mechanism to identify the decedent’s chosen individuals who will manage their digital estate, every service will have to design their own path. This creates an exceptional burden on individuals planning their digital estate, and on individuals who manage the digital estates of the deceased. ... When we set out to write this paper, we wanted to influence the large technology and social media platforms, politicians, regulators, estate planners, and others who can help change the status quo. Further, we hoped to influence standards development organizations, such as the OpenID Foundation and the Internet Engineering Task Force (IETF), and their members. As standards developers in the realm of identity, we have an obligation to the people we serve to consider identity from birth to death and beyond, to ensure every human receives the respect they deserve in life and in death. Additionally, we wrote the planning guide to help individuals plan for their own digital estate. By giving people the tools to help describe, document, and manage their digital estates proactively, we can raise more awareness and provide tools to help protect individuals at one of the most vulnerable moments of their lives.


5 steps to help CIOs land a board seat

Serving on a board isn’t an extension of an operational role. One issue CIOs face is not understanding the difference between executive management and governance, Stadolnik says. “They’re there to advise, not audit or lead the current company’s CIO,” he adds. In the boardroom, the mandate is to provide strategy, governance, and oversight, not execution. That shift, Stadolnik says, can be jarring for tech leaders who’ve spent their careers driving operational results. ... “There were some broad risk areas where having strong technical leadership was valuable, but it was hard for boards to carve out a full seat just for that, which is why having CIO-plus roles was very beneficial,” says Cullivan. The issue of access is another uphill battle for CIOs. As Payne found, the network effect can play a huge role in seeking a board role. But not every IT leader has the right kind of network that can open the door to these opportunities. ... Boards expect directors to bring scope across business disciplines and issues, not just depth in one functional area. Stadolnik encourages CIOs to utilize their strategic orientation, results focus, and collaborative and influence skills to set themselves up for additional responsibilities like procurement, supply chain, shared services, and others. “It’s those executive leadership capabilities that will unlock broader roles,” he says. Experience in those broader roles bolsters a CIO’s board résumé and credibility.


Microservices Without Meltdown: 7 Pragmatic Patterns That Stick

A good sniff test: can we describe the service’s job in one short sentence, and does a single team wake up if it misbehaves? If not, we’ve drawn mural art, not an interface. Start with a small handful of services you can name plainly—orders, payments, catalog—then pressure-test them with real flows. When a request spans three services just to answer a simple question, that’s a hint we’ve sliced too thin or coupled too often. ... Microservices live and die by their contracts. We like contracts that are explicit, versioned, and backwards-friendly. “Backwards-friendly” means old clients keep working for a while when we add fields or new behaviors. For HTTP APIs, OpenAPI plus consistent error formats makes a huge difference. ... We need timeouts and retries that fit our service behavior, or we’ll turn small hiccups into big outages. For east-west traffic, a service mesh or smart gateway helps us nudge traffic safely and set per-route policies. We’re fans of explicit settings instead of magical defaults. ... Each service owns its tables; cross-service read needs go through APIs or asynchronous replication. When a write spans multiple services, aim for a sequence of local commits with compensating actions instead of distributed locks. Yes, we’re describing sagas without the capes: do the smallest thing, record it durably, then trigger the next hop. 
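
As one way to make “timeouts and retries that fit our service behavior” explicit rather than magical, here is a small sketch using requests with urllib3's Retry. The service URL and budget values are placeholders to tune against your own observed latency, not prescriptions.

```python
# Sketch of explicit timeouts and bounded retries for east-west HTTP calls,
# instead of relying on library defaults. URL and budgets are placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(
    total=3,                           # at most 3 retries, then fail fast
    backoff_factor=0.5,                # ~0.5s, 1s, 2s between attempts
    status_forcelist=[502, 503, 504],  # retry only transient upstream errors
    allowed_methods=["GET"],           # never blindly retry non-idempotent writes
)
session.mount("https://", HTTPAdapter(max_retries=retry))

resp = session.get(
    "https://orders.internal/api/v1/orders/123",  # placeholder service URL
    timeout=(3.05, 10),  # (connect, read) seconds: explicit, not magical defaults
)
resp.raise_for_status()
```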

Daily Tech Digest - October 21, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone


The teacher is the new engineer: Inside the rise of AI enablement and PromptOps

Enterprises should onboard AI agents as deliberately as they onboard people — with job descriptions, training curricula, feedback loops and performance reviews. This is a cross-functional effort across data science, security, compliance, design, HR and the end users who will work with the system daily. ... Don’t let your AI’s first “training” be with real customers. Build high-fidelity sandboxes and stress-test tone, reasoning and edge cases — then evaluate with human graders. ... As onboarding matures, expect to see AI enablement managers and PromptOps specialists in more org charts, curating prompts, managing retrieval sources, running eval suites and coordinating cross-functional updates. Microsoft’s internal Copilot rollout points to this operational discipline: Centers of excellence, governance templates and executive-ready deployment playbooks. These practitioners are the “teachers” who keep AI aligned with fast-moving business goals. ... In a future where every employee has an AI teammate, the organizations that take onboarding seriously will move faster, safer and with greater purpose. Gen AI doesn’t just need data or compute; it needs guidance, goals, and growth plans. Treating AI systems as teachable, improvable and accountable team members turns hype into habitual value.
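
A minimal sketch of the sandbox-and-evaluate step described above: scripted scenarios run through the assistant, simple automated checks act as a first pass, and anything that fails is routed to human graders before rollout. call_assistant() and the eval cases are hypothetical stand-ins for a real sandboxed deployment and eval suite.

```python
# Sketch of a tiny pre-deployment eval suite: scripted scenarios, automated checks
# as a first pass, failures escalated to human graders. call_assistant() is a
# hypothetical stand-in for however the agent is actually invoked.

EVAL_CASES = [
    {"prompt": "A customer demands a refund outside policy.", "must_mention": "policy"},
    {"prompt": "User asks for another customer's account data.", "must_mention": "cannot share"},
]

def call_assistant(prompt: str) -> str:
    """Hypothetical model call; wire this to your sandboxed deployment."""
    return "I'm sorry, per our refund policy I cannot share or override that."

def run_evals() -> list[dict]:
    failures = []
    for case in EVAL_CASES:
        reply = call_assistant(case["prompt"])
        if case["must_mention"].lower() not in reply.lower():
            failures.append({"case": case["prompt"], "reply": reply})
    return failures  # anything here goes to human graders before rollout

print(f"{len(run_evals())} case(s) escalated to human graders")
```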


How CIOs Can Unlock Business Agility with Modular Cloud Architectures

A modular cloud architecture is one that makes a variety of discrete cloud services available on demand. The services are hosted across multiple cloud platforms, and different units within the business can pick and choose among specific services to meet their needs. ... At a high level, the main challenge stemming from a modular cloud architecture is that it adds complexity to an organization's cloud strategy. The more cloud services the CIO makes available, the harder it becomes to ensure that everyone is using them in a secure, efficient, cost-effective way. This is why a pivot toward a modular cloud strategy must be accompanied by governance and management practices that keep these challenges in check. ... As they work to ensure that the business can consume a wide selection of cloud services efficiently and securely, IT leaders may take inspiration from a practice known as platform engineering, which has grown in popularity in recent years. Platform engineering is the establishment of approved IT solutions that a business's internal users can access on a self-service basis, usually via a type of portal known as an internal developer platform. Historically, organizations have used platform engineering primarily to provide software developers with access to development tools and environments, not to manage cloud services. But the same sort of approach could help to streamline access to modular, composable cloud solutions.


8 platform engineering anti-patterns

Establishing a product mindset also helps drive improvement of the platform over time. “Start with a minimum viable platform to iterate and adapt based on feedback while also considering the need to measure the platform’s impact,” says Platform Engineering’s Galante. ... Top-down mandates for new technologies can easily turn off developers, especially when they alter existing workflows. Without the ability to contribute and iterate, the platform drifts from developer needs, prompting workarounds. ... “The feeling of being heard and understood is very important,” says Zohar Einy, CEO at Port, provider of a developer portal. “Users are more receptive to the portal once they know it’s been built after someone asked about their problems.” By performing user research and conducting developer surveys up front, platform engineers can discover the needs of all stakeholders and create platforms that mesh better with existing workflows and benefit productivity. ... Although platform engineering case studies from large companies, like Spotify, Expedia, or American Airlines, look impressive on paper, that doesn’t mean their strategies will transfer well to other organizations, especially those with mid-size or small-scale environments. ... Platform engineering requires more than a simple rebrand. “I’ve seen teams simply being renamed from operations or infrastructure teams to platform engineering teams, with very little change or benefit to the organization,” says Paula Kennedy.


How Ransomware’s Data Theft Evolution is Rewriting Cyber Insurance Risk Models

Traditional cyber insurance risk models assume ransomware means encrypted files and brief business interruptions. The shift toward data theft creates complex claim scenarios that span multiple coverage lines and expose gaps in traditional policy structures. When attackers steal data rather than just encrypting it, the resulting claims can simultaneously trigger business interruption coverage, professional liability protection, regulatory defense coverage and crisis management. Each coverage line may have different limits, deductibles and exclusions, creating complicated interactions that claims adjusters struggle to parse. Modern business relationships are interconnected, which amplifies complications. A data breach at one organization can trigger liability claims from business partners, regulatory investigations across multiple jurisdictions, and contractual disputes with vendors and customers. Dependencies on third-party services create cascading exposures that traditional risk models fail to capture. ... The insurance implications are profound. Manual risk assessment processes cannot keep pace with the volume and sophistication of AI-enhanced attacks. Carriers still relying on traditional underwriting approaches face a fundamental mismatch of human-speed risk evaluation against machine-speed threat deployment.


Network security devices endanger orgs with ’90s-era flaws

“Attackers are not trying to do the newest and greatest thing every single day,” watchTowr’s Harris explains. “They will do what works at scale. And we’ve now just seen that phishing has become objectively too expensive or too unsuccessful at scale to justify the time investment in deploying mailing infrastructure, getting domains and sender protocols in place, finding ways to bypass EDR, AV, sandboxes, mail filters, etc. It is now easier to find a 1990s-tier vulnerability in a border device where EDR typically isn’t deployed, exploit that, and then pivot from there.” ... “Identifying a command injection that is looking for a command string being passed to a system in some C or C++ code is not a terribly difficult thing to find,” Gross says. “But I think the trouble is understanding a really complicated appliance like these security network appliances. It’s not just like a single web application and that’s it.” This can also make it difficult for product developers themselves to understand the risks of a feature they add on one component if they don’t have a full understanding of the entire product architecture. ... Another problem? These appliances have a lot of legacy code, some of it 10 years old or more. Plus, products and code bases inherited through acquisitions often mean the developers who originally wrote the code might be long gone.


When everything’s connected, everything’s at risk

Treat OT changes as business changes (because they are). Involve plant managers, safety managers, and maintenance leadership in risk decisions. Be sure to test all changes in a development environment that adequately models the production environment where possible. Schedule changes during planned downtime with rollbacks ready. Build visibility passively with read-only collectors and protocol-aware monitoring to create asset and traffic maps without requiring PLC access. ... No one can predict the future. However, if the past is an indicator of the future, adversaries will continue to increasingly bypass devices and hijack cloud consoles, API tokens and remote management platforms to impact businesses on an industrial scale. Another area of risk is the firmware supply chain. Tiny devices often carry third-party code that we can’t easily patch. We’ll face more “patch by replacement” realities, where the only fix is swapping hardware. Additionally, machine identities at the edge, such as certificates and tokens, will outnumber humans by orders of magnitude. The lifecycle and privileges of those identities are the new perimeter. From a threat perspective, we will see an increasing number of ransomware attacks targeting physical disruption to increase leverage for the threat actors, as well as private 5G/smart facilities that, if misconfigured, propagate risk faster than any LAN ever has.


Software engineering foundations for the AI-native era

As developers begin composing software instead of coding line by line, they will need API-enabled composable components and services to stitch together. Software engineering leaders should begin by defining a goal to achieve a composable architecture that is based on modern multiexperience composable applications, APIs and loosely coupled API-first services. ... Software engineering leaders should support AI-ready data by organizing enterprise data assets for AI use. Generative AI is most useful when the LLM is paired with context-specific data. Platform engineering and internal developer portals provide the vehicles by which this data can be packaged, found and integrated by developers. The urgent demand for AI-ready data to support AI requires evolutionary changes to data management and upgrades to architecture, platforms, skills and processes. Critically, Model Context Protocol (MCP) needs to be considered. ... Software engineers can become risk-averse unless they are given the freedom, psychological safety and environment for risk taking and experimentation. Leaders must establish a culture of innovation where their teams are eager to experiment with AI technologies. This also applies in software product ownership, where experiments and innovation lead to greater optimization of the value delivered to customers.


What Does a 'Sovereign Cloud' Really Mean?

First, a sovereign cloud could be approached as a matter of procurement: Canada could shift its contract from US tech companies that currently dominate the approved list to non-American alternatives. At present, eight cloud service providers (CSPs) are approved for use by the Canadian government, seven of which are American. Accordingly, there is a clear opportunity to diversify procurement, particularly towards European CSPs, as suggested by the government’s ongoing discussions with France’s OVH Cloud. ... Second, a sovereign cloud could be defined as cloud infrastructure that is not only located in Canada and insulated from foreign legal access, but also owned by Canadian entities. Practically speaking, this would mean procuring services from domestic companies, a step the government has already taken with ThinkOn, the only non-American company CSP on the government’s approved list. ... Third, perhaps true cloud sovereignty might require more direct state intervention and a publicly built and maintained cloud. The Canadian government could develop in-house capacities for cloud computing and exercise the highest possible degree of control over government data. A dedicated Crown corporation could be established to serve the government’s cloud computing needs. ... No matter how we approach it, cloud sovereignty will be costly. 


Big Tech’s trust crisis: Why there is now the need for regulatory alignment

When companies deploy AI features primarily to establish market position rather than solve user problems, they create what might be termed ‘trust debt’ – a technical and social liability that compounds over time. This manifests in several ways, including degraded user experience, increased attack surfaces, and regulatory friction that ultimately impacts system performance and scalability. ... The emerging landscape of AI governance frameworks, from the EU AI Act to ISO 42001, shows an attempt to codify engineering best practices for managing algorithmic systems at scale. These standards address several technical realities, including bias in training data, security vulnerabilities in model inference, and intellectual property risks in data processing pipelines. Organisations implementing robust AI governance frameworks achieve regulatory compliance while adopting proven system design patterns that reduce operational risk. ... The technical implementation of trust requires embedding privacy and security considerations throughout the development lifecycle – what security engineers call ‘shifting left’ on governance. This approach treats regulatory compliance as architectural requirements that shape system design from inception. Companies that successfully integrate governance into their technical architecture find that compliance becomes a byproduct of good engineering practices which, over time, creates a series of sustainable competitive advantages.


The most sustainable data center is the one that’s already built: The business case for a ‘retrofit first’ mandate

From a sustainability standpoint, reusing and retrofitting legacy infrastructure is the single most impactful step our industry can take. Every megawatt of IT load that’s migrated into an existing site avoids the manufacturing, transport, and installation of new chillers, pumps, generators, piping, conduit, and switchgear and prevents the waste disposal associated with demolition. Sectors like healthcare, airports, and manufacturing have long proven that, with proper maintenance, mechanical and electrical systems can operate reliably for 30–50 years, and distribution piping can last a century. The data center industry – known for redundancy and resilience – can and should follow suit. The good news is that most data centers were built to last. ... When executed strategically, retrofits can reduce capital costs by 30–50 percent compared to greenfield construction, while accelerating time to market by months or even years. They also strengthen ESG reporting credibility, proving that sustainability and profitability can coexist. ... At the end of the day, I agree with Ms. Kass – the cleanest data center is the one that does not need to be built. For those that are already built, reusing and revitalizing the infrastructure we already have is not just a responsible environmental choice, it’s a sound business strategy that conserves capital, accelerates deployment, and aligns our industry’s growth with society’s expectations.

Daily Tech Digest - October 19, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How CIOs Can Close the IT Workforce Skills Gap for an AI-First Organization

Deliberately building AI skills among existing talent, rather than searching outside the organization for new hires or leaving skills development to chance, can help develop the desired institutional knowledge and build an IT-resilient workforce. AI-first is a strategic approach that guides the use of AI technology within an enterprise or a unit within it, with the intention of maximizing the benefits from AI. IT organizations must maintain ongoing skills development to be successful as an AI-first organization. ... In developing the future-state competency map, CIOs must include AI-specific skills and competencies, ensuring each role has measurable expectations aligned with the company’s strategic objectives related to AI. CIOs must also partner with HR to design and establish AI literacy programs. While HR leaders are experts in scaling learning initiatives and standardizing tools, CIOs have more insight into foundational AI skills, training, and technical support required in the enterprise. CIOs should regularly review whether their teams’ AI capabilities contribute to faster product launches or improved customer insights. ... Addressing employees’ key concerns is a critical step for any AI change management initiative to be successful. AI is fundamentally changing traditional workplace operating models by democratizing access to technology, generating insights, and changing the relationship between people and technology.


20 Strategies To Strengthen Your Crisis Management Playbook

The regular review and refinement of protocols ensures alignment when a scenario arises. At our company, we centralize contacts, prepare for a range of scenarios and set outreach guidelines. This enables rapid response, timely updates and meaningful support, which safeguards trust and strengthens relationships with employees, stakeholders and clients. ... Unintended consequences often arise when stakeholder expectations are left out of crisis planning. Leaders should bake audience insights into their playbooks early—not after headlines hit. Anticipating concerns builds trust and gives you the clarity and credibility to lead through the tough moments. ... Know when to do nothing. Sometimes the instinct to respond immediately leads to increased confusion and puts your brand even further under the microscope. The best crisis managers know when to stop, see how things play out and respond accordingly (if at all), all while preparing for a variety of scenarios behind the scenes. ... Act like a board of directors. A crisis is not an event; it's a stress test of brand, enterprise and reputation infrastructure and resilience. Crisis plans must align with business continuity, incident response and disaster recovery plans. Marketing and communications must co-lead with the exec team, legal, ops and regulatory to guide action before commercial, brand equity and reputation risk escalates.


Abstract or die: Why AI enterprises can't afford rigid vector stacks

Without portability, organizations stagnate. They have technical debt from recursive code paths, are hesitant to adopt new technology and cannot move prototypes to production at pace. In effect, the database is a bottleneck rather than an accelerator. Portability, or the ability to move underlying infrastructure without re-encoding the application, is increasingly a strategic requirement for enterprises rolling out AI at scale. ... Instead of having application code directly bound to some specific vector backend, companies can compile against an abstraction layer that normalizes operations like inserts, queries and filtering. This doesn't necessarily eliminate the need to choose a backend; it makes that choice less rigid. Development teams can start with DuckDB or SQLite in the lab, then scale up to Postgres or MySQL for production and ultimately adopt a special-purpose cloud vector DB without having to re-architect the application. ... What's happening in the vector space is one example of a bigger trend: Open-source abstractions as critical infrastructure; In data formats: Apache Arrow; In ML models: ONNX; In orchestration: Kubernetes; In AI APIs: Any-LLM and other such frameworks. These projects succeed, not by adding new capability, but by removing friction. They enable enterprises to move more quickly, hedge bets and evolve along with the ecosystem. Vector DB adapters continue this legacy, transforming a high-speed, fragmented space into infrastructure that enterprises can truly depend on.
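As a minimal sketch of what such an abstraction layer can look like, the code below defines a hypothetical VectorStore interface (not any particular project's API): application code programs against insert and query, a toy in-memory backend stands in during prototyping, and a production adapter wrapping Postgres/pgvector or a managed vector database would implement the same interface.

```python
# Sketch of a backend-agnostic vector store interface; class and method
# names are illustrative, not taken from a real adapter library.
from abc import ABC, abstractmethod
from typing import Sequence

import numpy as np


class VectorStore(ABC):
    """Application code depends on this interface, not on a specific backend."""

    @abstractmethod
    def insert(self, key: str, embedding: Sequence[float]) -> None: ...

    @abstractmethod
    def query(self, embedding: Sequence[float], top_k: int = 5) -> list[str]: ...


class InMemoryVectorStore(VectorStore):
    """Toy backend for prototyping; a production adapter would expose the
    same insert/query contract over pgvector or a managed vector DB."""

    def __init__(self) -> None:
        self._keys: list[str] = []
        self._vectors: list[np.ndarray] = []

    def insert(self, key: str, embedding: Sequence[float]) -> None:
        self._keys.append(key)
        self._vectors.append(np.asarray(embedding, dtype=float))

    def query(self, embedding: Sequence[float], top_k: int = 5) -> list[str]:
        q = np.asarray(embedding, dtype=float)
        # Brute-force cosine similarity is fine for a prototype.
        sims = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-12))
            for v in self._vectors
        ]
        ranked = sorted(zip(sims, self._keys), reverse=True)
        return [key for _, key in ranked[:top_k]]


# Swapping the backend later means writing a new adapter, not re-architecting
# the application.
store: VectorStore = InMemoryVectorStore()
store.insert("doc-1", [0.1, 0.9, 0.0])
store.insert("doc-2", [0.8, 0.1, 0.1])
print(store.query([0.1, 0.8, 0.0], top_k=1))  # ['doc-1']
```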


AWS's New Security VP: A Turning Point for AI Cybersecurity Leadership?

"As we move forward into 2026, the breadth and depth of AI opportunities, products, and threats globally present a paradigm shift in cyber defense," Lohrmann said. He added that he was encouraged by AWS's recognition of the need for additional focus and attention on these cyberthreats. ... "Agentic AI attackers can now operate with a 'reflection loop' so they are effectively self-learning from failed attacks and modifying their attack approach automatically," said Simon Ratcliffe, fractional CIO at Freeman Clarke. "This means the attacks are faster and there are more of them … putting overwhelming pressure on CISOs to respond." ... "I think the CISO's role will evolve to meet the broader governance ecosystem, bringing together AI security specialists, data scientists, compliance officers, and ethics leads," she said, adding cybersecurity's mantra that AI security is everyone's business. "But it demands dedicated expertise," she said. "Going forward, I hope that organizations treat AI governance and assurance as integral parts of cybersecurity, not siloed add-ons." ... In Liebig's opinion, the future of cybersecurity leadership looks less hierarchical than it does now. "As for who owns that risk, I believe the CISO remains accountable, but new roles are emerging to operationalize AI integrity -- model risk officers, AI security architects, and governance engineers," he explained. "The CISO's role should expand horizontally, ensuring AI aligns to enterprise trust frameworks, not stand apart from them."


The Top 5 Technology Trends For 2026

In recent years, we've seen industry, governments, education and everyday folk scrambling to adapt to the disruptive impact of AI. But by 2026, we're starting to get answers to some of the big questions around its effect on jobs, business and day-to-day life. Now, the focus shifts from simply reacting to reinventing and reshaping in order to find our place in this brave, different and sometimes frightening new world. ... Rather than simply answering questions and generating content, agents take action on our behalf, and in 2026, this will become an increasingly frequent and normal occurrence in everyday life. From automating business decision-making to managing and coordinating hectic family schedules, AI agents will handle the “busy work” involved in planning and problem-solving, freeing us up to focus on the big picture or simply slow down and enjoy life. ... Quantum computing harnesses the strange and seemingly counterintuitive behavior of particles at the sub-atomic level to accomplish many complex computing tasks millions of times faster than "classic" computers. For the last decade, there's been excitement and hype over quantum computers' performance in labs and research environments, but in 2026, we are likely to see further adoption in the real world. While this trend might not appear to noticeably affect us in our day-to-day lives, the impact on business, industry and science will begin to take shape in noticeable ways.


How Successful CTOs Orchestrate Business Results at Every Stage

As companies mature, their technical needs shift from building for the present to a long-term vision, strategic partnerships, and leveraging technology to drive business goals. The Strategist CTO combines deep technical expertise with business acumen and a deep understanding of the customer journey. This leader collaborates with other executives on strategic planning, but always through the lens of where customers are heading, not strictly where technology is going. ... For large enterprises with complex ecosystems and large customer bases, stability, security, and operational efficiency are paramount. This is where the Guardian CTO safeguards the customer experience through technical excellence. This leader oversees all aspects of technical infrastructure, ensuring the reliability, security, and availability of core technology assets with a clear understanding that every decision directly impacts customer trust. ... While these operational models often align with company growth stages, they aren't rigid. A company's needs can shift rapidly due to market conditions, competitive pressures, or unexpected challenges, and customer expectations can evolve just as quickly. ... The most successful companies create environments where technical leadership evolves in response to changing business needs, empowering technical leaders to pivot their focus from building to strategizing, or from innovating to safeguarding, as circumstances demand.


Financial services seek balance of trust, inclusion through face biometrics advances

Advances in the flexibility of face biometric liveness, deepfake detection and cross-sectoral collaboration represent the latest measures against fraud in remote financial services. A digital bank in the Philippines is integrating iProov’s face biometrics and liveness detection, OneConnect and a partner are entering a sandbox to work on protecting against deepfakes, and an event held by Facephi in Mexico explored the challenges of financial services trying to maintain digital trust while advancing inclusion. ... The Philippine digital bank will deploy advanced liveness detection tools as part of a new risk-based authentication strategy. “Our mission is to uplift the lives of all Filipinos through a secure, trusted, and accessible digital bank for all Filipinos, and that requires deploying resilient infrastructure capable of addressing sophisticated fraud,” said Russell Hernandez, chief information security officer at UnionDigital Bank. “As we shift toward risk-based authentication, we need a flexible and future-ready solution. iProov’s internationally proven ability to deliver ease of use, speed, and high security assurance – backed by reliable vendor support – ensures we can evolve our fraud defenses while sustaining customer trust and confidence.” ... The Mexican government has launched several initiatives to standardize digital identity infrastructure, including Llave MX — a single sign-on platform for public services — and the forthcoming National Digital Identity Document, designed to harmonize verification across sectors.


Why context, not just data, will define the future of AI in finance

Raw intelligence in AI and its ability to crunch numbers and process data is only one part of the equation. What it fundamentally lacks is wisdom, which comes from context. In areas like personal finance, building powerful models with deep domain knowledge is critical. The challenges range from misinterpretation of data to regulatory oversights that directly affect value for customers. That’s why at Intuit, we put “context at the core of AI.” This means moving beyond generic datasets to build specialised Financial Large Language Models (LLMs) trained on decades of anonymised financial expertise. It’s about understanding the interconnected journey of our customers across our ecosystem—from the freelancer managing invoices in QuickBooks to that same individual filing taxes with TurboTax, to them monitoring their financial health on Credit Karma. ... In the age of GenAI, craftsmanship in engineering is being redefined. It’s no longer just about writing every line of code or building models from scratch, but about architecting robust, extensible systems that empower others to innovate. The very soul of engineering is transcending code to become the art of architecture. The measure of excellence is no longer found in the meticulous construction of every model, but in the visionary design of systems that empower domain experts to innovate. With tools like GenStudio and GenUX abstracting complexity, the engineer’s role isn’t diminished but elevated. They evolve from builders of applications to architects of innovation ecosystems. 


The modernization mirage: CIOs must see through it to play the long game

Enterprise architecture, in too many organizations, has been reduced to frameworks: TOGAF, Zachman, FEAF. These models provide structure but rarely move capital or inspire investor trust. Boards don’t want frameworks. They want influence. That’s why I developed the Architecture Influence Flywheel — a practical model I use in board and transformation discussions. It rests on three pivots - Outcomes: Every architectural choice must tie directly to board-level priorities — growth, resilience, efficiency. ... Relationships: CIOs must serve as business-technology translators. Express progress not in technical jargon, but in investor language — return on capital, return on innovation, margin expansion and risk mitigation. ... Visible wins: Influence grows through undeniable demonstrations. A system that cuts onboarding time by 40%, an AI model that reduces fraud losses or an audit process that clears in half the time — these visible wins build momentum. ... Technologies rise and fall. Frameworks evolve. Titles shift. But one principle endures: What leaders tolerate defines their legacy. Playing the long game requires CIOs to ask uncomfortable questions: Will we tolerate AI models we cannot explain to regulators? Will we tolerate unchecked cloud sprawl without financial discipline? Will we tolerate compliance as a box-ticking exercise rather than a growth enabler?


What Is Cybersecurity Platformization?

Cybersecurity platformization is a strategic response to this complexity. It’s the move from a collection of disparate point solutions to a single, unified platform that integrates multiple security functions. Dickson describes it as the “canned integration of security tools so that they work together holistically to make the installation, maintenance and operation easier for the end customer across various tools in the security stack.” ... The most significant hidden cost of a fragmented, multitool security strategy is labor. Managing disconnected tools is a resource strain on an organization, as it requires individuals with specialized skills for each tool. This includes the labor-intensive task of managing API integrations and manually coding “shims,” or integrations to translate data between different tools, which often have separate protocols and proprietary interfaces, Dukes says. Beyond the cost of personnel, there’s the operational complexity.  ... One of the most immediate benefits of adopting a platform approach is cost reduction. This includes not only the reduction in licensing fees but also a reduction in the operational complexity and the number of specialized employees needed. ... Another key benefit is the well-worn concept of a “single pane of glass,” a single dashboard that enables IT security teams to have easier management and reporting. Instead of multiple tools with different interfaces and data formats, a unified platform streamlines everything into a single, cohesive view.
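To make the cost of those hand-written “shims” concrete, the sketch below shows the kind of glue code teams end up maintaining to translate two tools' alert formats into one schema. Both tool formats and all field names are hypothetical, not drawn from any real product.

```python
# Hypothetical shim layer normalizing alerts from two point products into a
# common schema; every additional tool adds another translator to maintain.
from dataclasses import dataclass


@dataclass
class NormalizedAlert:
    source: str
    severity: int          # 1 (low) .. 5 (critical)
    asset: str
    description: str


SEVERITY_MAP = {"low": 1, "medium": 3, "high": 4, "critical": 5}


def from_tool_a(raw: dict) -> NormalizedAlert:
    # "Tool A" reports a textual severity label and a hostname.
    return NormalizedAlert(
        source="tool_a",
        severity=SEVERITY_MAP.get(raw.get("severity_label", "low"), 1),
        asset=raw.get("hostname", "unknown"),
        description=raw.get("message", ""),
    )


def from_tool_b(raw: dict) -> NormalizedAlert:
    # "Tool B" reports a 0-100 risk score and an IP address instead.
    return NormalizedAlert(
        source="tool_b",
        severity=min(5, max(1, round(raw.get("riskScore", 0) / 20))),
        asset=raw.get("ip", "unknown"),
        description=raw.get("details", ""),
    )


print(from_tool_a({"severity_label": "high", "hostname": "web-01", "message": "Malware detected"}))
print(from_tool_b({"riskScore": 87, "ip": "10.0.0.5", "details": "Anomalous login"}))
```

A unified platform removes much of this translation burden by sharing data formats and interfaces across its security functions, which is where a large share of the labor savings comes from.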