Daily Tech Digest - October 27, 2025


Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford


AWS Outage Is Just the Latest Internet Glitch Banks Must Insulate Against

If clouds fail or succumb to cyberattacks, the damage can be enormous, measured only by the maliciousness and creativity of the hacker and the redundancy and resilience of the defenses that users have in place. ... As I describe in The Unhackable Internet, we are already way down the rabbit hole of cyber insecurity. It would take a massive coordinated global effort to secure the current internet. That is unlikely to happen. Therefore, the most realistic business strategy is to assume the inevitable: A glitch, human error or a successful breach or cloud failure will occur. That means systems must be in place to distribute patches, resume operations, reconstruct networks, and recover lost data. Redundancy is a necessary component to get back online, but how much redundancy is feasible or economically sustainable? And will those backstops actually work? ... Given these ever-increasing challenges and cyber incursions in the financial services business, I have argued for a fundamental change in regulation — one that will keep regulators on the cutting edge of digital and cybersecurity developments. To accomplish that, regulation should be a more collaborative experience that invests the financial industry in its own oversight and systemic security. This effort should include industry executives and their staffs. Their expertise in the oversight process would enrich the quality of regulation, particularly from the perspective of strengthening the cyber defenses of the industry.


The 10 biggest issues CISOs and cyber teams face today

“It’s not finger-pointing; we’re all learning,” Lee says. “Business is now expected to embrace and move quickly with AI. Boards and C-level executives are saying, ‘We have to lean into this more’ and then they turn to security teams to support AI. But security doesn’t fully understand the risk. No one has this down because it’s moving so fast.” As a result, many organizations skip security hardening in their rush to embrace AI. But CISOs are catching up. ... Moreover, Todd Moore, global vice president of data security at Thales, says CISOs are facing a torrent of AI-generated data — generally unstructured data such as chat logs — that needs to be secured. “In some aspects, AI is becoming the new insider threat in organizations,” he says. “The reason why I say it’s a new insider threat is because there’s a lot of information that’s being put in places you never expected. CISOs need to identify and find that data and be able to see if that data is critical and then be able to protect it.” ... “We’re now getting to the stage where no one is off-limits,” says Simon Backwell, head of information security at tech company Benifex and a member of ISACA’s Emerging Trends Working Group. “Attack groups are getting bolder, and they don’t care about the consequences. They want to cause mass destruction.”


The AI Inflection Point Isn’t in the Cloud, It’s at the Edge

Beyond the screen, there is a need for agentic applications that specifically reduce latency and improve throughput. “You need an agentic architecture with several things going on,” Shelby said about using models to analyze the packaging of pharmaceuticals, for instance. “You might need to analyze the defects. Then you might need an LLM with a RAG behind it to do manual lookup. That’s very complex. It might need a lot of data behind it. It might need to be very large. You might need 100 billion parameters.” The analysis, he noted, may require integration with a backend system to perform another task, necessitating collaboration among several agents. AI appliances are then necessary to manage multiagent workflows and larger models. ... The nature of LLMs, Shelby said, requires a person to tell you if the LLM’s output is correct, which in turn impacts how to judge the relevancy of LLMs in edge environments. It’s not like you can rely on an LLM to provide an answer to a prompt. Consider a camera in the Texas landscape, focusing on an oil pump, Shelby said. “The LLM is like, ‘Oh, there are some campers cooking some food,’ when really there’s a fire” at the oil pump. So, how do you make the process testable in a way that engineers expect, Shelby asked. It requires end-to-end guard rails. And that’s why random, cloud-based LLMs do not yet apply to industrial environments.
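The multi-stage flow Shelby describes (visual defect analysis, then an LLM-plus-RAG manual lookup, then a backend action) can be sketched as a simple agent pipeline. This is an illustrative sketch only; the agent names, stand-in logic, and data shapes are hypothetical, not from any specific product.

```python
# Hypothetical sketch of a multi-agent edge pipeline: each "agent" is a
# plain function; a real system would wrap vision models, a RAG store,
# and backend APIs behind these interfaces.

def defect_agent(image):
    # Stand-in for a vision model flagging packaging defects.
    return {"defect": "smudged_label"} if image.get("blurry") else None

def lookup_agent(defect):
    # Stand-in for an LLM + RAG step consulting a maintenance manual.
    manual = {"smudged_label": "Reprint label per SOP-12"}
    return manual.get(defect["defect"], "Escalate to operator")

def backend_agent(action):
    # Stand-in for a backend integration (e.g., creating a work order).
    return {"ticket": f"WO: {action}"}

def run_pipeline(image):
    defect = defect_agent(image)
    if defect is None:
        return {"status": "pass"}
    action = lookup_agent(defect)
    return {"status": "fail", **backend_agent(action)}

print(run_pipeline({"blurry": True}))
# -> {'status': 'fail', 'ticket': 'WO: Reprint label per SOP-12'}
```

The point of the structure is the one Shelby makes: each stage has a testable contract, so end-to-end guardrails can assert on intermediate outputs rather than trusting a single opaque model.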


Scaling Identity Security in Cloud Environments

One significant challenge organizations face is the disconnect between security and research and development (R&D) teams. This gap can lead to vulnerabilities being overlooked during the development phase, resulting in potential security risks once new systems are operational in cloud environments. To bridge this gap, a collaborative approach involving both teams is essential. Creating a secure cloud environment necessitates an understanding of the specific needs and challenges faced by each department. ... The journey to achieving scalable identity security in cloud environments is ongoing and requires constant vigilance. By integrating NHI management into their cybersecurity strategies, organizations can reduce risks, increase efficiencies, and ensure compliance with regulatory requirements. As security continues to evolve, staying informed and adaptable remains key. To gain further insights into cybersecurity, you might want to read about some cybersecurity predictions for 2025 and how they may influence your strategies surrounding NHI management. The integration of effective NHI and secrets management into cloud security controls is not just recommended but necessary for safeguarding data. It’s an invaluable part of a broader cybersecurity strategy aimed at minimizing risk and ensuring seamless, secure operations across all sectors.


Owning the Fallout: Inside Blameless Culture

For an organization to truly own the fallout after an incident, there must be a cultural shift from blame to inquiry. A ‘blameless culture’ doesn’t mean it’s a free-for-all, with no accountability. Instead, it’s a circumstance where the first question after an incident isn’t “Who screwed up?” it’s “What failed — and why?” As Gustavo Razzetti describes, “blame is a sign of an unhealthy culture,” and the goal is to replace it with curiosity. In a blameless postmortem, you break down what happened, map the contributing systemic factors, and focus on where processes, tooling, or assumptions broke down. This mindset aligns with the concept of just culture, which balances accountability and systems thinking. After an incident, the focus is to ask how things went wrong, not whom to punish — unless egregious misconduct is involved. ... The most powerful learning happens in the moment when incident patterns redirect strategic priorities. For example, during post-mortems, a team could discover that under-monitored dependencies cause high-severity incidents. With a resilience mindset, that insight can become an objective: “Build automated dependency-health dashboards by Q2.” When feedback and insights flow into OKRs, teams internalize resilience as part of delivery, not an afterthought. Resilient teams move beyond damage control to institutional learning. 


Can your earbuds recognize you? Researchers are working on it

Each person’s ear canal produces a distinct acoustic signature, so the researchers behind EarID designed a method that allows earbuds to identify their wearer by using sound. The earbuds emit acoustic signals into the user’s ear canal, and the reflections from that sound reveal patterns shaped by the ear’s structure. What makes this study stand out is that the authentication process happens entirely on the earbuds themselves. The device extracts a unique binary key based on the user’s ear canal shape and then verifies that key on the paired mobile device. By working with binary keys instead of raw biometric data, the system avoids sending sensitive information over Bluetooth. This helps prevent interception or replay attacks that could expose biometric data. ... A key part of the research is showing that earbuds can handle biometric processing without large hardware or cloud support. EarID runs on a small microcontroller comparable to those found in commercial earbuds. The researchers measured performance on an Arduino platform with an 80 MHz chip and found that it could perform the key extraction in under a third of a second. For comparison, traditional machine learning classifiers took three to ninety times longer to train and process data. This difference could make a real impact if ear canal authentication ever reaches consumer devices, since users expect quick and seamless authentication.
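The verification step described above can be illustrated with a toy sketch. This is not the published EarID algorithm; it only shows the general idea of matching a noisy binary key with a small error tolerance, so the raw acoustic measurement never needs to leave the device.

```python
# Illustrative sketch (not the actual EarID method): the earbud extracts
# a binary key from ear-canal reflections; the paired device verifies
# it, tolerating a few bit flips of measurement noise.

def hamming(a: str, b: str) -> int:
    # Count of differing bit positions between two equal-length keys.
    return sum(x != y for x, y in zip(a, b))

def verify(enrolled_key: str, candidate_key: str, max_flips: int = 3) -> bool:
    # Accept if the noisy candidate differs in at most max_flips bits.
    if len(enrolled_key) != len(candidate_key):
        return False
    return hamming(enrolled_key, candidate_key) <= max_flips

enrolled = "1011001110100101"
noisy    = "1011001110100111"   # one bit flipped by measurement noise
imposter = "0100110001011010"

print(verify(enrolled, noisy))     # True: within tolerance
print(verify(enrolled, imposter))  # False: far outside tolerance
```

Real biometric key schemes use error-correcting constructions rather than a bare Hamming threshold, but the privacy property is the same: only a derived key is compared, never the biometric signal itself.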


What It 'Techs' to Run Real-Time Payments at Scale

Beyond hosting applications, the architecture is designed for scale, reuse and rapid provisioning. APIs and services support multiple verticals including lending, insurance, investments and even quick commerce through a shared infrastructure-as-a-service model. "Every vertical uses the same underlying infra, and we constantly evaluate whether something can be commoditized for the group and then scaled centrally. It's easier to build and scale one accounting stack than reinvent it every time," Nigam said. Early investments in real-time compute systems and edge analytics enable rapid anomaly detection and insights, cutting operational downtime by 30% and improving response times to under 50 milliseconds. A recent McKinsey report on financial infrastructure in emerging economies underscores the importance of edge computation and near-real-time monitoring for high-volume payments networks - a model increasingly being adopted by global fintech leaders to ensure both speed and reliability. ... Handling spikes and unexpected surges is another critical consideration. India's payments ecosystem experiences predictable peaks - including festival seasons or IPL weekends - and unpredictable surges triggered by government announcements or regulatory deadlines. When a payments platform is built for population scale, any single merchant or use case does not create a surge at this level. 


Who’s right — the AI zoomers or doomers?

Earlier this week, the Emory Wheel editorial board published an opinion column claiming that without regulation, AI will soon outpace humanity’s ability to control it. The post said AI’s uncontrolled evolution threatens human autonomy, free expression, and democracy, stressing that the technical development is faster than what lawmakers can handle. ... Both zoomers and doomers agree that humanity’s fate will be decided when the industry releases AGI or superintelligent AI. But there’s strong disagreement on when that will happen. From OpenAI’s Sam Altman to Elon Musk, Eric Schmidt, Demis Hassabis, Dario Amodei, Masayoshi Son, Jensen Huang, Ray Kurzweil, Louis Rosenberg, Geoffrey Hinton, Mark Zuckerberg, Ajeya Cotra, and Jürgen Schmidhuber — all predict AGI arriving anywhere from later this year to later this decade. ... Some say we need strict global rules, maybe like those for nuclear weapons. Others say strong laws would slow progress, stop new ideas, and give the benefits of AI to China. ... AI is already causing harms. It contributes to privacy invasion, disinformation and deepfakes, surveillance overreach, job displacement, cybersecurity threats, child and psychological harms, environmental damage, erosion of human creativity and autonomy, economic and political instability, manipulation and loss of trust in media, unjust criminal justice outcomes, and other problems.


Powering Data in the Age of AI: Part 3 – Inside the AI Data Center Rebuild

You can’t design around AI the way data centers used to handle general compute. The loads are heavier, the heat is higher, and the pace is relentless. You start with racks that pull more power than entire server rooms did a decade ago, and everything around them has to adapt. New builds now work from the inside out. Engineers start with workload profiles, then shape airflow, cooling paths, cable runs, and even structural supports based on what those clusters will actually demand. In some cases, different types of jobs get their own electrical zones. That means separate cooling loops, shorter throw cabling, dedicated switchgear — multiple systems, all working under the same roof. Power delivery is changing, too. In a conversation with BigDATAwire, David Beach, Market Segment Manager at Anderson Power, explained, “Equipment is taking advantage of much higher voltages and simultaneously increasing current to achieve the rack densities that are necessary. This is also necessitating the development of components and infrastructure to properly carry that power.” ... We know that hardware alone doesn’t move the needle anymore. The real advantage comes from pushing it online quickly, without getting bogged down by power, permits, and other obstacles. That’s where the cracks are beginning to open.


Strategic Domain-Driven Design: The Forgotten Foundation of Great Software

The strategic aspect of DDD is often overlooked because many people do not recognize its importance. This is a significant mistake when applying DDD. Strategic design provides context for the model, establishes clear boundaries, and fosters a shared understanding between business and technology. Without this foundation, developers may focus on modeling data rather than behavior, create isolated microservices that do not represent the domain accurately, or implement design patterns without a clear purpose. ... The first step in strategic modeling is to define your domain, which refers to the scope of knowledge and activities that your software intends to address. Next, we apply the age-old strategy of "divide and conquer," a principle used by the Romans that remains relevant in modern software development. We break down the larger domain into smaller, focused areas known as subdomains. ... Once the language is aligned, the next step is to define bounded contexts. These are explicit boundaries that indicate where a particular model and language apply. Each bounded context encapsulates a subset of the ubiquitous language and establishes clear borders around meaning and responsibilities. Although the term is often used in discussions about microservices, it actually predates that movement. 
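The bounded-context idea above can be made concrete with a small sketch: the same business term means different things in different contexts, so each context keeps its own model, and only a shared identifier crosses the boundary. The domain, field names, and contexts here are illustrative, not from the article.

```python
# Illustrative sketch: "Product" means different things in two bounded
# contexts, so each context models it in its own ubiquitous language.
from dataclasses import dataclass

# --- Catalog context: a product is something to describe and browse ---
@dataclass
class CatalogProduct:
    sku: str
    title: str
    description: str

# --- Shipping context: a product is something with weight and size ---
@dataclass
class ShippingProduct:
    sku: str
    weight_kg: float
    volume_m3: float

def to_shipping(p: CatalogProduct, weight_kg: float, volume_m3: float) -> ShippingProduct:
    # An explicit translation at the context boundary (an anti-corruption
    # layer in miniature): only the shared identifier crosses over.
    return ShippingProduct(sku=p.sku, weight_kg=weight_kg, volume_m3=volume_m3)

item = CatalogProduct("SKU-1", "Espresso Machine", "15-bar pump, 1.5L tank")
parcel = to_shipping(item, weight_kg=4.2, volume_m3=0.03)
print(parcel.sku)  # SKU-1
```

Neither context is "the" product model; each is correct within its boundary, which is exactly what distinguishes a bounded context from a shared enterprise-wide schema.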

Daily Tech Digest - October 26, 2025


Quote for the day:

"Everywhere is within walking distance if you have the time." -- Steven Wright


AI policy without proof is just politics

History shows us that regulation without verification rarely works. Imagine if Wall Street firms were allowed to audit their own books, or if pharmaceutical companies could approve their own drugs. The risks would be obvious and unacceptable. Yet, in AI today, much of the information policymakers see about model performance and safety comes straight from the companies developing those systems, leaving regulators dependent on the very firms they are meant to oversee. Self-reporting, intentionally or not, creates structural blind spots. Developers have incentives to highlight strengths and minimize weaknesses, and even honest disclosures can leave out important context. ... The first requirement is independence. Oversight must be based on information that does not come solely from the companies themselves: data that can be inspected, verified and trusted as neutral. Without that independence, even well-intentioned disclosures risk being selective or incomplete. The second requirement is continuity. AI systems evolve quickly, and their performance often shifts once they are deployed in the wild. Benchmarks conducted at launch can’t capture how models change over time, or how they behave across different languages, domains and user needs.  ... AI policy is at a crossroads. The U.S. has set bold goals, but without reliable evaluation, those goals risk becoming little more than rhetoric. Rules set the direction. Proof provides the trust.


5 ways ambitious IT pros can future-proof their tech careers in an age of AI

Successful IT chiefs are expected to be the expert resources for pioneering technology developments. In fact, Daly said the CIOs of the future will demonstrate how AI can fulfill some executive roles and responsibilities. ... David Walmsley, chief digital and technology officer at jewelry specialist Pandora, said up-and-coming IT stars take on responsibilities and seize opportunities. The disconnected technology organization of old, which relied on outsourcing arrangements for project delivery, has been replaced by a department that works closely with the business to achieve its objectives. "The days of technology leaders leaning back and saying, 'Well, which of my external providers do I blame now?' are long gone," he said. "Everyone can see that technology is a strategic lever for growing the business and helping it succeed in its mission." ... The critical skill for next-generation leaders lies not in chasing every new platform or coding language, but in cultivating the human capacities that allow you to adapt. "Those capabilities include curiosity, critical thinking, collaboration, and an understanding of human behavior," he said. "At LIS, we emphasize interdisciplinary learning precisely because technology never exists in isolation; it is always entangled with psychology, economics, ethics, and culture."


Biometrics increase integrity from age checks to agents, but not when compelled

Biometrics are anchoring trust for established but growing use cases like national IDs even as new use cases are taking off. But surveillance concerns inevitably come with increases in the collection of personal data, particularly when the collection is compelled or involuntary. ... Tech industry group the CCIA took aim at Texas’ app store level age checks, arguing the plan is bound to fail in several ways, including data privacy breaches. One of those alleged likely failures is the accuracy of facial age estimation, but the supporting stat from NIST is outdated, and the new figure is significantly better. Automated license-plate reader-maker Flock and Amazon’s Ring have partnered to share data, allowing law enforcement agencies that use Flock’s investigative platforms to request footage from homeowners. ... The growth of online interactions with credentials that are anchored with biometrics continues unabated, in the form of national ID systems, agentic AI, age checks and identity verification. Juniper Research forecasts digital identity will be an $80 billion global market by 2030, with growth driven by new regulations and credentials. ... Age checks could catalyze digital ID adoption Luciditi CPO Dan Johnson says on the Biometric Update Podcast. He makes the case for the advantages of adding age assurance to apps by integrating a component, rather than building a standalone branded app.


Weak Data Infrastructure Keeps Most GenAI Projects From Delivering ROI

Kolbeck sees companies investing billions while overlooking adequate storage to support their AI infrastructure as one of the major mistakes corporations make. He said that oversight leads to three key failure factors — festering silos, lack of performance, and uptime dilemmas. The most critical resource for AI is training data. When companies store data across multiple silos, data scientists lack access to essential details. “Storage systems must be able to scale and provide unified access to enable an AI data lake, a centralized and efficient storage for the entire company,” he observed. ... “Early AI projects may work well, but as soon as these projects grow in size [as in more GPUs], these arrays tip over, and that’s when mission-critical workflows grind to a halt,” he said. Kolbeck explained the difference between scale-out architecture versus a scale-up approach as a better option for handling the massive and unpredictable data demands of modern AI and ML. He cited his company’s experience in making that transition. ... “Developing and training AI technology is still a very experimental process and requires the infrastructure — including storage — to adapt quickly when data scientists develop new ideas,” Kolbeck noted. Real-time performance analytics are critical. Storage administrators need to be able to precisely identify how applications, such as training or other pipeline phases, impact the storage. 
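The scale-out versus scale-up contrast above can be sketched in a few lines: scale-up grows a single node, while scale-out adds nodes and spreads objects across them so capacity and throughput grow together. The modulo placement here is deliberately simplistic; production systems use consistent or rendezvous hashing to limit data reshuffling when nodes join.

```python
# Illustrative sketch of scale-out placement: objects are hashed to
# nodes, so doubling the node count roughly halves the per-node load.
import hashlib

def place(obj_id: str, nodes: list[str]) -> str:
    # Deterministically map an object to a node by hashing its ID.
    h = int(hashlib.sha256(obj_id.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

objects = [f"shard-{i}" for i in range(1000)]

small = ["node-a", "node-b"]
large = small + ["node-c", "node-d"]

def load(nodes):
    # Count how many objects land on each node.
    counts = {n: 0 for n in nodes}
    for o in objects:
        counts[place(o, nodes)] += 1
    return counts

print(load(small))  # ~500 objects per node across 2 nodes
print(load(large))  # ~250 objects per node across 4 nodes
```

A scale-up array, by contrast, keeps all 1,000 objects on one controller, which is the single point that "tips over" once GPU counts and data volumes grow.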


When your AI browser becomes your enemy: The Comet security disaster

Your regular Chrome or Firefox browser is basically a bouncer at a club. It shows you what's on the webpage, maybe runs some animations, but it doesn't really "understand" what it's reading. If a malicious website wants to mess with you, it has to work pretty hard — exploit some technical bug, trick you into downloading something nasty or convince you to hand over your password. AI browsers like Comet threw that bouncer out and hired an eager intern instead. This intern doesn't just look at web pages — it reads them, understands them and acts on what it reads. Sounds great, right? Except this intern can't tell when someone's giving them fake orders. ... They can actually do stuff: Regular browsers mostly just show you things. AI browsers can click buttons, fill out forms, switch between your tabs, even jump between different websites. ... They remember everything: Unlike regular browsers that forget each page when you leave, AI browsers keep track of everything you've done across your whole session. ... You trust them too much: We naturally assume our AI assistants are looking out for us. That blind trust means we're less likely to notice when something's wrong. Hackers get more time to do their dirty work because we're not watching our AI assistant as carefully as we should. They break the rules on purpose: Normal web security works by keeping websites in their own little boxes — Facebook can't mess with your Gmail, Amazon can't see your bank account. 


Rewriting the Rules of Software Quality: Why Agentic QA is the Future CIOs Must Champion

From continuous deployment to AI-powered applications, software systems are more dynamic, distributed and adaptive than ever. In this changing environment, static testing frameworks are crumbling. What worked yesterday is increasingly not going to work today, and tomorrow’s risks cannot be addressed using yesterday’s checklists. This is where agentic QA steps in, heralding a transformative approach that integrates autonomous, intelligent agents throughout the entire software lifecycle. ... What distinguishes this model isn’t just its intelligence — it’s its adaptability. In a world where AI models are themselves part of the application stack, QA must account for nondeterminism. Agentic systems are uniquely equipped to do this. When AI-driven components exhibit variable behavior based on internal learning states, traditional test-case comparisons fail for evident reasons. Agentic QA, on the other hand, thrives in uncertainty. It detects anomalies, learns from edge cases, and refines its approach continuously. ... However, it is essential to note that as AI takes over repetitive and complex validations, it enables QA professionals to step up and evolve into curators of quality. Their role is freed up to become more strategic: Defining testing intent, ensuring AI alignment with business goals, interpreting nuanced behaviors, and upholding ethical standards. This shift calls for a cultural transformation.


AI-Powered Ransomware Is the Emerging Threat That Could Bring Down Your Organization

AI fundamentally transforms every phase of ransomware operations through several key capabilities. Enhanced reconnaissance allows malware to autonomously scan security perimeters, identify vulnerabilities, and select precise exploitation tools. This eliminates the need for human operators during initial phases, enabling attacks to spread rapidly across IT environments. Adaptive encryption techniques represent another revolutionary advancement. AI-powered ransomware can analyze system resources and data types to modify encryption algorithms dynamically, making decryption more complex. The malware can prioritize high-value targets by analyzing document content using Natural Language Processing before encryption, ensuring maximum strategic impact. Evasive tactics powered by machine learning enable ransomware to continuously modify its code and behavior patterns. This polymorphic capability makes signature-based detection methods ineffective, as the malware presents different fingerprints with each execution. AI also enables malware to track user presence and activate during off-hours to maximize damage while minimizing detection opportunities. The financial consequences of AI-powered ransomware attacks far exceed traditional threats. ... Small businesses face particularly severe consequences, with 60% of attacked companies closing permanently within six months.


When a Provider's Lights Go Out, How Can CIOs Keep Operations Going?

This may seem obvious, but a thousand companies still lost digital functionality on Monday. Why weren't they better prepared? One answer is that while redundancy isn't new, it also isn't very sexy. In a field full of innovation and growth, redundancy is about slowing down, checking your work, and taking the safest route. It's not surprising if some companies are more excited about investing in new AI capabilities than implementing failsafe protocols. Nor is it necessarily wrong. ... "It is important to invest where failure creates real risk, not just minor inconvenience, or noise," he added. This will look different for companies of different sizes, but particularly for companies within different sectors. Some industries, such as healthcare or finance, require a higher level of redundancy across the board simply because the stakes are greater; lack of access to patient records or financial information could have severe repercussions in terms of safety and public trust, which are far beyond inconvenience or frustration. ... But this isn't as simple as tracing third-party contracts, counting how often one name appears, and shifting some operations away from too-dominant providers. If an organization has partnered predominantly with one provider, it's probably for good reason. As Hitchens explained, working with a single provider can accelerate innovation and simplify management, offering visibility, native integrations and unified tooling.


Three Ways Secure Modern Networks Unlock the True Power of AI

AI is network-bound. As always-on models demand up to 100 times more compute, storage, and bandwidth, traditional networks risk becoming bottlenecks in both capacity and latency. For AI tasks that happen instantly, like self-driving cars or automated stock trading, even tiny delays can cause problems. Modern network infrastructure needs to be more than just fast. It also needs to be safe from cyberattacks and strong enough to handle more AI growth in the future. To realize AI’s full potential, businesses must build purpose-built “AI superhighways”, secure networks designed to scale seamlessly, handling distributed AI workloads across core, cloud, and edge environments. ... The value organizations expect from AI, be it automating workflows, unlocking predictive insights, or powering new digital experiences, depends on more than just compute power or clever algorithms. Furthermore, the demand for real-time machine data from business operations to train AI models is increasing the need for more detailed and extensive networks. This, in turn, accelerates the integration of IT and OT, and expands the adoption of the Internet of Things (IoT) ... The sensitivity of AI data flows is raising the bar for security and compliance. The risks of sticking with outdated infrastructure are stark. 95% of technology leaders say a resilient network is critical to their operations, and 77% have experienced major outages due to congestion, cyberattacks, or misconfigurations.


"It’s not about security, it’s about control" – How EU governments want to encrypt their own comms, but break our private chats

In the wake of ever-larger and frequent cyberattacks – think of the Salt Typhoon in the US – encryption has become crucial to shield everyone's security, whether that's ID theft, scams, or national security risks. Even the FBI urged all Americans to turn to encrypted chats. ... Law enforcement, however, often sees this layer of protection as an obstacle to their investigations, pushing for "lawful access" to encrypted data as a way to combat hideous crimes like terrorism or child abuse. That's exactly where legislation proposals like Chat Control and ProtectEU in the European bloc, or the Online Safety Act in the UK, come from. Yet, people working with encryption know that these solutions are flawed. On a technical level, experts all agree that an encryption backdoor cannot guarantee the same level of online security and privacy we have now. Is it then time to redefine what we mean when we talk about privacy? This is what's probably needed, according to Rocket.Chat's Strategic Advisor, Christian Calcagni. "We need to have a new definition of private communication, and that's a big debate. Encryption or no encryption, what could be the way?" Calcagni is, nonetheless, very critical of the current push to break encryption. He told me: "Why should the government know what I think or what I'm sharing on a personal level? We shouldn't focus only on encryption or not encryption, but on what that means for our privacy, our intimacy."

Daily Tech Digest - October 25, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


The day the cloud went dark

This week, the impossible happened—again. Amazon Web Services, the backbone of the digital economy and the world’s largest cloud provider, suffered a large-scale outage. If you work in IT or depend on cloud services, you didn’t need a news alert to know something was wrong. Productivity ground to a halt, websites failed to load, business systems stalled, and the hum of global commerce was silenced, if only for a few hours. The impact was immediate and severe, affecting everything from e-commerce giants to startups, including my own consulting business. ... Some businesses hoped for immediate remedies from AWS’s legendary service-level agreements. Here’s the reality: SLA credits are cold comfort when your revenue pipeline is in freefall. The truth that every CIO has faced at least once is that even industry-leading SLAs rarely compensate for the true cost of downtime. They don’t make up for lost opportunities, damaged reputations, or the stress on your teams. ... This outage is a wake-up call. Headlines will fade, and AWS (and its competitors) will keep promising ever-improving reliability. Just don’t forget the lesson: No matter how many “nines” your provider promises, true business resilience starts inside your own walls. Enterprises must take matters into their own hands to avoid existential risk the next time lightning strikes.
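The gap between SLA credits and real downtime cost is easy to see with a back-of-the-envelope calculation. All figures below are made up for illustration; actual credit percentages and thresholds vary by provider and contract tier.

```python
# Back-of-the-envelope illustration with assumed figures: why an SLA
# credit rarely covers the true cost of an outage.

monthly_cloud_bill = 50_000   # USD, assumed
hourly_revenue     = 20_000   # USD of revenue at risk per hour, assumed
outage_hours       = 3

# Many SLAs credit a percentage of the monthly bill once availability
# drops below a threshold; 10% is used here as a plausible mid-tier value.
sla_credit = 0.10 * monthly_cloud_bill

lost_revenue = hourly_revenue * outage_hours

print(f"SLA credit:   ${sla_credit:,.0f}")                 # $5,000
print(f"Lost revenue: ${lost_revenue:,.0f}")               # $60,000
print(f"Shortfall:    ${lost_revenue - sla_credit:,.0f}")  # $55,000
```

Even before counting reputational damage or team burnout, the credit covers less than a tenth of the direct loss in this toy scenario, which is the "cold comfort" point above.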


Application Modernization Pitfalls: Don't Let Your Transformation Fail

Modernizing legacy applications is no longer a luxury — it’s a strategic imperative. Whether driven by cloud adoption, agility goals, or technical debt, organizations are investing heavily in transformation. Yet, for all its potential, many modernization projects stall, exceed budgets, or fail to deliver the expected business value. Why? The transition from a monolithic legacy system to a flexible, cloud-native architecture is a complex undertaking that involves far more than just technology. It's a strategic, organizational, and cultural shift. And that’s where the pitfalls lie. ... Application modernization is not just a technical endeavor — it’s a strategic transformation that touches every layer of the organization. From legacy code to customer experience, from cloud architecture to compliance posture, the ripple effects are profound. Yet, the most overlooked ingredient in successful modernization isn’t technology — it’s leadership: Leadership that frames modernization as a business enabler, not a cost center; Leadership that navigates complexity with clarity, acknowledging legacy constraints while championing innovation; Leadership that communicates with empathy, recognizing that change is hard and adoption is earned, not assumed. Modernization efforts fail not because teams lack skill, but because they lack alignment. 


CIOs will be on the hook for business-led AI failures

While some business-led AI projects include CIO input, AI experts have seen many organizations launch AI projects without significant CIO or IT team support. When other departments launch AI projects without heavy IT involvement, they may underestimate the technical work needed to make the projects successful, says Alek Liskov, chief AI officer at data refinery platform provider Datalinx AI. ... “Start with the tech folks in the room first, before you get much farther,” he says. “I still see many organizations where there’s either a disconnect between business and IT, or there’s lack of speed on the IT side, or perhaps it’s just a lack of trust.” Despite the doubts, IT leaders need to be involved from the beginning of all AI projects, adds Bill Finner, CIO at large law firm Jackson Walker. “AI is just another technology to add to the stack,” he says. “Better to embrace it and help the business succeed than to sit back and watch from the bench.” ... “It’s a great opportunity for CIOs to work closely with all the practice areas both on the legal and business professional side to ensure we’re educating everyone on the capabilities of the applications and how they can enhance their day-to-day workflows by streamlining processes,” Finner says. “CIOs love to help the business succeed, and this is just another area where they can show their value.”


Three Questions That Help You Build a Better Software Architecture

You don’t want to create an architecture for a product that no one needs. And in validating the business ideas, you will test assumptions that drive quality attributes like scalability and performance needs. To do this, the MVP has to be more than a Proof of Concept - it needs to be able to scale well enough and perform well enough to validate the business case, but it does not need to answer all questions about scalability and performance ... yet. ... Achieving good performance while scaling can also mean reworking parts of the solution that you’ve already built; solutions that perform well with a few users may break down as load is increased. On the other hand, you may never need to scale to the loads that cause those failures, so overinvesting too early can simply be wasted effort. Many scaling issues also stem from a critical bottleneck, usually related to accessing a shared resource. Spotting these early can inform the team about when, and under what conditions, they might need to change their approach. ... One of the most important architectural decisions that teams must make is to decide how they will know that technical debt has risen too far for the system to be supportable and maintainable in the future. The first thing they need to know is how much technical debt they are actually incurring. One way they can do this is by recording decisions that incur technical debt in their Architectural Decision Record (ADR).
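The ADR entry mentioned above can be as lightweight as a dated note that names the shortcut, the debt it creates, and the trigger for revisiting it. A hypothetical example (the decision, numbers, and team names are invented for illustration):

```markdown
# ADR-017: Share one database between Billing and Reporting

Date: 2025-10-27
Status: Accepted

## Decision
Reporting reads directly from the Billing database to meet the release date.

## Technical debt incurred
Schema coupling between two services; a Billing migration can break Reporting.

## Revisit when
Reporting query load exceeds the agreed threshold, or a second team needs
write access to the Billing schema.
```

Tallying entries like this gives the team a running answer to "how much debt are we actually carrying?" without a separate tracking system.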


Ransomware recovery perils: 40% of paying victims still lose their data

Decryptors are frequently slow and unreliable, John adds. “Large-scale decryption across enterprise environments can take weeks and often fails on corrupted files or complex database systems,” he explains. “Cases exist where the decryption process itself causes additional data corruption.” Even when decryptor tools are supplied, they may contain bugs, or leave files corrupted or inaccessible. Many organizations also rely on untested — and vulnerable — backups. Making matters still worse, many ransomware victims discover that their backups were also encrypted as part of the attack. “Criminals often use flawed or incompatible encryption tools, and many businesses lack the infrastructure to restore data cleanly, especially if backups are patchy or systems are still compromised,” says Daryl Flack, partner at UK-based managed security provider Avella Security and cybersecurity advisor to the UK Government. ... “Setting aside funds to pay a ransom is increasingly viewed as problematic,” Tsang says. “While payment isn’t illegal in itself, it may breach sanctions, it can fuel further criminal activity, and there is no guarantee of a positive outcome.” A more secure legal and strategic position comes from investing in resilience through strong security measures, well-tested recovery plans, clear reporting protocols, and cyber insurance, Tsang advises.


In IoT Security, AI Can Make or Break

Ironically, the same techniques that help defenders also help attackers. Criminals are automating reconnaissance, targeting exposed protocols common in IoT, and accelerating exploitation cycles. Fortinet recently highlighted a surge in AI-driven automated scanning (tens of thousands of scans per second), where IoT and Session Initiation Protocol (SIP) endpoints are probed earlier in the kill chain. That scale turns "long-tail" misconfigurations into early footholds. Worse, AI itself is susceptible to attack. Adversarial ML (machine learning) can blind or mislead detection models, while prompt injection and data poisoning can repurpose AI assistants connected to physical systems. ... Move response left. Anomaly detection without orchestration just creates work. It's important to pre-stage responses such as quarantine VLANs, Access Control List (ACL) updates, Network Access Control (NAC) policies, and maintenance window tickets. This way, high-confidence detections contain first and ask questions second. Finally, run purple-team exercises that assume AI is the target and the tool. This includes simulating prompt injection against your assistants and dashboards; simulating adversarial noise against your IoT Intrusion Detection System (IDS); and testing whether analysts can distinguish "model weirdness" from real incidents under time pressure.
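The "contain first, ask questions second" play described above can be sketched in a few lines. This is a toy model, not a real NAC or SOAR integration: the quarantine VLAN number, confidence threshold, and `Detection`/`Response` types are all assumptions for illustration.

```python
# Sketch of a pre-staged containment play: high-confidence detections are
# contained automatically, low-confidence ones only raise a review ticket.
# All names here (QUARANTINE_VLAN, Detection, apply_play) are illustrative.

from dataclasses import dataclass, field

QUARANTINE_VLAN = 999        # hypothetical isolation VLAN
CONFIDENCE_THRESHOLD = 0.9   # only high-confidence hits auto-contain

@dataclass
class Detection:
    device: str
    confidence: float

@dataclass
class Response:
    vlan_moves: list = field(default_factory=list)
    tickets: list = field(default_factory=list)

def apply_play(det: Detection, resp: Response) -> bool:
    """Quarantine the device and raise a ticket if confidence is high."""
    if det.confidence < CONFIDENCE_THRESHOLD:
        resp.tickets.append(f"review:{det.device}")  # human triage only
        return False
    resp.vlan_moves.append((det.device, QUARANTINE_VLAN))  # contain first
    resp.tickets.append(f"contained:{det.device}")         # then investigate
    return True
```

The point of pre-staging is that the branch taken on a critical alert is decided and tested before the incident, not during it.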


Cyber attack on Jaguar Land Rover estimated to cost UK economy £1.9 billion

Most of the estimated losses stem from halted vehicle production and reduced manufacturing output. JLR’s production reportedly dropped by around 5,000 vehicles per week during the shutdown, translating to weekly losses of approximately £108 million. The shock has cascaded across hundreds of suppliers and service providers. Many firms have faced cash-flow pressures, with some taking out emergency loans. To mitigate the fallout, JLR has reportedly cleared overdue invoices and issued advance payments to critical suppliers. ... The CMC’s Technical Committee urged businesses and policymakers to prioritise resilience against operational disruptions, which now pose the greatest financial risk from cyberattacks. The committee recommended identifying critical digital assets, strengthening segmentation between IT and operational systems, and ensuring robust recovery plans. It also called on manufacturers to review supply-chain dependencies and maintain liquidity buffers to withstand prolonged shutdowns. Additionally, it advised insurers to expand cyber coverage to include large-scale supply chain disruption, and urged the government to clarify criteria for financial support in future systemic cyber incidents.


Thinking Machines challenges OpenAI's AI scaling strategy: 'First superintelligence will be a superhuman learner'

To illustrate the problem with current AI systems, Rafailov offered a scenario familiar to anyone who has worked with today's most advanced coding assistants. "If you use a coding agent, ask it to do something really difficult — to implement a feature, go read your code, try to understand your code, reason about your code, implement something, iterate — it might be successful," he explained. "And then come back the next day and ask it to implement the next feature, and it will do the same thing." The issue, he argued, is that these systems don't internalize what they learn. "In a sense, for the models we have today, every day is their first day of the job," Rafailov said. ... "Think about how we train our current generation of reasoning models," he said. "We take a particular math problem, make it very hard, and try to solve it, rewarding the model for solving it. And that's it. Once that experience is done, the model submits a solution. Anything it discovers—any abstractions it learned, any theorems—we discard, and then we ask it to solve a new problem, and it has to come up with the same abstractions all over again." That approach misunderstands how knowledge accumulates. "This is not how science or mathematics works," he said. ... The objective would fundamentally change: "Instead of rewarding their success — how many problems they solved — we need to reward their progress, their ability to learn, and their ability to improve."


Demystifying Data Observability: 5 Steps to AI-Ready Data

Data observability ensures data pipelines capture representative data, both the expected and the messy. By continuously measuring drift, outliers, and unexpected changes, observability creates the feedback loop that allows AI/ML models to learn responsibly. In short, observability is not an add-on; it is a foundational practice for AI-ready data. ... Rather than relying on manual checks after the fact, observability should be continuous and automated. This turns observability from a reactive safety net into a proactive accelerator for trusted data delivery. As a result, every new dataset or transformation can generate metadata about quality, lineage, and performance, while pipelines can include regression tests and alerting as standard practice. ... The key is automation. Rather than policies that sit in binders, observability enables policies as code. In this way, data contracts and schema checks that are embedded in pipelines can validate that inputs remain fit for purpose. Drift detection routines, too, can automatically flag when training data diverges from operational realities while governance rules, from PII handling to lineage, are continuously enforced, not applied retroactively. ... It’s tempting to measure observability in purely technical terms such as the number of alerts generated, data quality scores, or percentage of tables monitored. But the real measure of success is its business impact. Rather than numbers, organizations should ask if it resulted in fewer failed AI deployments. 
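A minimal sketch of "policies as code" as described above: a data contract checked inside the pipeline step, plus a crude drift flag. The contract fields and the drift tolerance are assumptions for the sketch, not the behavior of any specific observability product.

```python
# Illustrative policy-as-code: a schema contract and a drift check that
# run as part of the pipeline rather than as after-the-fact manual review.

from statistics import mean

CONTRACT = {"user_id": int, "amount": float}  # expected schema (assumed)
DRIFT_TOLERANCE = 0.5                         # relative shift that trips a flag

def validate_schema(rows):
    """Fail fast if any row violates the data contract."""
    for row in rows:
        for col, typ in CONTRACT.items():
            if not isinstance(row.get(col), typ):
                raise ValueError(f"contract violation: {col} in {row}")
    return True

def drifted(train_values, live_values):
    """Flag when live data's mean diverges from the training baseline."""
    base = mean(train_values)
    shift = abs(mean(live_values) - base) / abs(base)
    return shift > DRIFT_TOLERANCE
```

In a real pipeline these checks would emit metadata and alerts instead of raising, but the shape is the same: validation travels with the data, not behind it.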


AI heavyweights call for end to ‘superintelligence’ research

Superintelligence isn’t just hype. It’s a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world’s best researchers. ... Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains, air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology. Superintelligence could extend this trajectory, but with a crucial difference. People will no longer be in control. The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs. Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that’s producing greenhouse gases. ... For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs. These are important issues. But they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence.

Daily Tech Digest - October 23, 2025


Quote for the day:

“The more you loose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale



Leadership lessons from NetForm founder Karen Stephenson

Co-creation is a hot buzzword encouraging individuals to integrate and create with each other, but the simplest way to integrate and create is in the mind of one person — if they’re willing to push forward and do it. Even further, what can an integrated team of diverse minds accomplish when they co-create? ... In the age of AI, humans will need to focus on what humans do well. At the moment, at least, that’s making novel connections, thinking by analogy and creating the new. Our single-field approach to learning, qualifications and career ladders makes it hard for us to compete with machines that are often smarter than we are in any given discipline. For that creative spark and to excel at what messy, forgetful, slow, imperfect humans do best, we need to work, think and live differently. In fact, the founders of five of the largest companies in the world are (or were) polymaths — mentally diverse people skilled in multiple disciplines — Bill Gates, Steve Jobs, Warren Buffett, Larry Page and Jeff Bezos. They learn because they’re curious and want to solve problems, not for a career ladder. It’s easier than ever, today, to learn with AI and online materials and to collaborate with tech and humans around the world. All you need to do is open inward to your talents and desires, explore, collect and fuse.


Why cloud and AI projects take longer and how to fix the holdups

In the case of the cloud, the problem is that senior management thinks that the cloud is always cheaper, that you can always cut costs by moving to the cloud. This is despite the recent stories on “repatriation,” or moving cloud applications back into the data center. In the case of cloud projects, most enterprise IT organizations now understand how to assess a cloud project for cost/benefit, so most of the cases where impossible cost savings are promised are caught in the planning phase. For AI, both senior management and line department management have high expectations with respect to the technology, and in the latter case may also have some experience with AI in the form of as-a-service generative AI models available online. About a quarter of these proposals quickly run afoul of governance policies because of problems with data security, and half of this group dies at this point. For the remaining proposals, there is a whole set of problems that emerge. Most enterprises admit that they really don’t understand what AI can do, which obviously makes it hard to frame a realistic AI project. The biggest gap identified is between an AI business goal and a specific path leading to it. One CIO calls the projects offered by user organizations “invitations to AI fishing trips” because the goal is usually set in business terms, and these would actually require a project simply to identify how the stated goal could be achieved.


Who pays when a multi-billion-dollar data center goes down?

While the Lockton team is looking at everything from immersion cooling to drought, there are a handful of risks where it feels the industry isn't adequately preparing. “The big thing that isn't getting on people's radars in a growing way is customer equipment,” Hayhow says. “Looking at this through the lens of the data center owner or developer, it's often very difficult. “It's a bit of an unspoken conversation that the equipment in the white space belongs to the customer. Often you don't have custody over it, you don't have visibility over it, and it’s highly proprietary. But the value of it is growing.” Per square meter of white space, the Lockton partner suggests that the value of the equipment five years from now will be exponentially larger than the value of the equipment five years ago, as more data centers invest in expensive GPUs and other equipment for AI use cases. “Leases have become clearer in terms of placing responsibility for damage to customer equipment more squarely on the shoulders of the owner, developer,” Hayhow says. “We're having that conversation in the US, where the halls are larger, the value of the equipment is greater, and some of the hyperscale customers are being much more prescriptive in terms of wanting to address the topic of damage to our equipment … if you lose 20 megawatts worth of racks of Nvidia chips, the lead time to get those replaced, unless you're building elsewhere, is quite significant.”


AI Agents Need Security Training – Just Like Your Employees

“It may not be as candid as what humans would do during those sessions, but AI agents used by your workforce do need to be trained. They need to understand what your company policies are, including what is acceptable behavior, what data they're allowed to access, what actions they're allowed to take,” Maneval explained. ... “Most AI tools are just trained to do the same thing over and over and so it means decisions are based on assumptions from limited information,” she explained to Infosecurity. “Additionally, most AI tools solve real problems but also create real risks and each solve different problems and creates different risks.” While some cybersecurity experts argue that auditing AI tools is no different to auditing any other software or application, Maneval disagrees. ... Maneval said her “rule of thumb” is that whether you’re dealing with traditional machine learning algorithms, generative AI applications or AI agents, “treat them like any other employees.” This not only means that AI-powered agents should be trained on security policies but should also be forced to respect security controls that the staff have to respect, such as role-based access controls (RBAC). “You should look at how you treat your humans and apply those same controls to the AI. You probably do a background check before anyone is hired. Do the same thing with your AI agent. ..."
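The "treat AI agents like employees" rule can be made concrete with a single RBAC table that gates both human and agent identities. The roles, identities, and actions below are hypothetical examples, not a reference to any particular IAM product.

```python
# Sketch: one deny-by-default RBAC table governs humans and AI agents
# alike. Agents get no special bypass, mirroring the advice above.

ROLE_PERMISSIONS = {
    "support_agent": {"read_ticket", "draft_reply"},
    "finance_analyst": {"read_ledger"},
}

IDENTITIES = {
    "alice": "finance_analyst",        # human employee
    "triage-bot-01": "support_agent",  # AI agent in the same directory
}

def is_allowed(identity: str, action: str) -> bool:
    """Deny by default; unknown identities and roles get nothing."""
    role = IDENTITIES.get(identity)
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design choice worth noting is that the agent is registered in the same identity store as the humans, so audits, reviews, and revocation follow the same path.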


Why must CISOs slay a cyber dragon to earn business respect?

Why should a security leader need to experience a major cyber incident to earn business colleagues’ respect? Jeff Pollard, VP and principal analyst at Forrester, says this enterprise perception problem is “just part of human nature. If we don’t see the bad thing happening, we don’t appreciate all of the things that were done to prevent that bad thing from happening.” Of course, if an attack turns into an incident and defense goes poorly, “it can easily turn from a hero moment to a scapegoat moment,” Pollard says. Oberlaender, who now works as a cybersecurity consultant, is among those who believe hard-earned experience should be rewarded, but that’s not what he’s seeing in the market today. ... CISOs “feel that they need to fight off an attack to show value, but there are many other successes they can do and show,” says Erik Avakian, technical counselor at Info-Tech Research Group. “Building KPIs is a powerful way to show their value.” ... Chris Jackson, a senior cybersecurity specialist with tech education vendor Pluralsight, reinforces the frustration that many enterprise CISOs feel about the lack of appropriate respect from their colleagues and bosses. “CISOs are a lot like pro sports coaches. It doesn’t matter how well they performed during the season or how many games they won. If they don’t win the championship, it’s seen as a failure, and the coach is often the first to go,” Jackson says. 


The next cyber crisis may start in someone else’s supply chain

Organizations have improved oversight of their direct partners, but few can see beyond the first layer. This limited view leaves blind spots that attackers can exploit, particularly through third-party software or service providers. “We’re in a new generation of risk, one where cyber, geopolitical, technology, political risk, and other factors are converging and reshaping the landscape. The impact on markets and operations is unfolding faster than many organizations can keep up,” said Jim Wetekamp, CEO of Riskonnect. ... Third-party and nth-party risks continue to expose companies to disruption. Most organizations have business continuity plans for supplier disruptions, but their monitoring often stops at direct partners. Only a small fraction can monitor risks across multiple tiers of their supply chain, and some cannot track their critical technology providers at all. Organizations still underestimate how dependent they are on third parties and continue to rely on paper-based continuity plans that offer a false sense of security. ... More companies now have a chief risk officer, but funding for technology and tools has barely moved. Most risk leaders say their budgets have stayed the same even as they are asked to cover more ground. Many are turning to automation and specialized software to do more with what they already have.


Boardroom to War Room: Translating AI-Driven Cyber Risk into Action

Great CISOs today combine strategic leadership, financial knowledge, technological skills, and empathy to turn cybersecurity from a burden on operations into a strong enabler. This change happens faster with artificial intelligence. AI has a lot of potential, but it also makes things more uncertain. It can do things like forecast threats and automate orchestration. CISOs need to see AI problems as more than just technological problems; they need to see them as business risks that need clear communication, openness, and quick response. ... Not storytelling, but data and graphics win over executives. Suggested metrics include: Predictive accuracy - The percentage of risks that AI flagged before a breach compared to the percentage of threats that AI flagged after it happened; Speed of reaction - The average time it took for AI-enabled containment to work compared to manual reaction; False positive rate - Tech teams employed AI to improve alerts and cut down on alert fatigue from X to Y; Third-party model risk - The number of outside model calls that were looked at and accepted; Visual callout suggestion - A mock-up of a dashboard that illustrates AI risk KPIs, a trendline of predictive value, and a drop in incidents. ... Change from being an IT responder who reacts to problems to a strategic AI-enabled risk leader. Take ownership of your AI risk story, keep an eye on third-party models, provide your board clear information, and make sure your war room functions quickly.
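The KPIs listed above reduce to simple ratios. A minimal sketch, assuming illustrative input counts rather than any standard formula set:

```python
# Toy calculations for board-facing AI risk KPIs. Function names and
# inputs are invented for illustration.

def predictive_accuracy(flagged_before: int, flagged_after: int) -> float:
    """Share of AI-flagged risks caught before, not after, a breach."""
    total = flagged_before + flagged_after
    return flagged_before / total if total else 0.0

def false_positive_rate(false_alerts: int, total_alerts: int) -> float:
    """Fraction of alerts that turned out to be noise."""
    return false_alerts / total_alerts if total_alerts else 0.0

def containment_speedup(manual_minutes: float, ai_minutes: float) -> float:
    """How many times faster AI-enabled containment is than manual response."""
    return manual_minutes / ai_minutes if ai_minutes else float("inf")
```

Trendlines of these ratios over quarters, rather than raw alert counts, are what make the dashboard legible to a board.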


Govt. faces questions about why US AWS outage disrupted UK tax office and banking firms

“The narrative of bigger is better and biggest is best has been shown for the lie it always has been,” Owen Sayers, an independent security architect and data protection specialist with a long history of working in the public sector, told Computer Weekly. “The proponents of hyperscale cloud will always say they have the best engineers, the most staff and the greatest pool of resources, but bigger is not always better – and certainly not when countries rely on those commodity global services for their own national security, safety and operations. “Nationally important services must be recognised as best delivered under national control, and as a minimum, the government should be knocking on AWS’s door today and asking if they can in fact deliver a service that guarantees UK uptime,” he said. “Because the evidence from this week’s outage suggests that they cannot.” ... “In light of today’s major outage at Amazon Web Services … why has HM Treasury not designated Amazon Web Services or any other major technology firm as a CTP for the purposes of the Critical Third Parties Regime,” asked Hillier, in the letter. “[And] how soon can we expect firms to be brought into this regime?” Hillier also asked HM Treasury for clarification about whether or not it is concerned about the fact that “seemingly key parts of our IT infrastructure are hosted abroad” given the outage originated from a US-based AWS datacentre region but impacted the activities of Lloyds Bank and also HMRC.


Quantum work, federated learning and privacy: Emerging frontiers in blockchain research

It is possible to have a future in which the field of quantum computation could serve as the foundation for blockchain consensus. The future is alluring; quantum algorithms can provide solutions to the issues that classical computers find difficult and the method may be more effective and resistant to brute-force attacks. The danger, however, is significant: when quantum computers are sufficiently robust, existing encryption standards can be compromised. ... Federated learning is another upcoming element of blockchain studies, a machine learning model training technique that avoids data centralisation. Federated learning enables various devices or nodes to feed into a standard model instead of storing sensitive data in a central server inaccessible to third parties. ... The issue of privacy is of specific importance today due to the increased regulatory pressure on exchanges and cryptocurrency companies. A compromise between user privacy and regulatory openness could prove to be the key to success. Studies of privacy-preserving instruments provide a competitive advantage to blockchain developers and for exchanges interested in increasing their influence on the global economy. ... The decade of blockchain research to come will not be characterised by fast transactions or cheaper costs. It will redraw the borders of trust, calculation, and privacy in digitally based economies. 


Ransomware groups surge as automation cuts attack time to 18 mins

The ransomware group LockBit has recently introduced "LockBit 5.0", reportedly incorporating artificial intelligence for attack randomisation and enhanced targeting options, with a focus on regaining its previous position atop the ransomware ecosystem. Medusa, by contrast, has fallen behind, due in part to its lack of widespread automated and customisable features, despite previously high activity levels. ReliaQuest's analysis predicts the rise of new groups through the lens of its three-factor model, specifically naming "The Gentlemen" and "DragonForce" as likely to become major threats due to their adoption of advanced technical capabilities. The Gentlemen, for instance, has listed over 30 victims on its data-leak site within its first month of activity, underpinned by automation, prioritised encryption, and endpoint discovery for rapid lateral movement. Conversely, groups such as "Chaos" and "Nova" are likely to remain minor players, lacking the integral features associated with higher victim counts and affiliate recruitment. ... RaaS groups now use automation to reduce breakout times to as little as 18 minutes, making manual intervention too slow. Implement automated containment and response plays to keep pace with attackers. These workflows should automatically isolate hosts, block malicious files, and disable compromised accounts quickly after a critical detection, containing the threat before ransomware can be deployed.

Daily Tech Digest - October 22, 2025


Quote for the day:

"Good content isn't about good storytelling. It's about telling a true story well." -- Ann Handley



When yesterday’s code becomes today’s threat

A striking new supply chain attack is sending shockwaves through the developer community: a worm-style campaign dubbed “Shai-Hulud” has compromised at least 187 npm packages, including the tinycolor package with 2 million weekly downloads, and has spread to other maintainers' packages. The malicious payload modifies package manifests, injects malicious files, repackages, and republishes — thereby infecting downstream projects. This incident underscores a harsh reality: even code released weeks, months, or even years ago can become dangerous once a dependency in its chain has been compromised. ... Sign your code: All packages/releases should use cryptographic signing. This allows users to verify the origin and integrity of what they are installing. Verify signatures before use: When pulling in dependencies, CI/CD pipelines, and even local dev setups, include a step to check that the signature matches a trusted publisher and that the code wasn’t tampered with. SBOMs are your map of exposure: If you have a Software Bill of Materials for your project(s), you can query it for compromised packages. Find which versions/packages have been modified — even retroactively — so you can patch, remove, or isolate them. Continuous monitoring of risk posture: It's not enough to secure when you ship. You need alerts when any dependency or component’s risk changes: new vulnerabilities, suspicious behavior, misuse of credentials, or signs that a trusted package may have been modified after release.
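Querying an SBOM for compromised packages, as recommended above, can be as simple as matching name/version pairs from an advisory against the SBOM's component list. The snippet below assumes a simplified CycloneDX-style JSON shape, and the compromised list is a placeholder, not real advisory data:

```python
# Sketch: scan an SBOM for components matching a compromised-package
# advisory. The SBOM shape is a simplified CycloneDX-style structure.

import json

COMPROMISED = {("tinycolor", "4.1.1")}  # placeholder advisory entries

def affected_components(sbom_json: str):
    """Return (name, version) pairs in the SBOM that match the advisory."""
    sbom = json.loads(sbom_json)
    return [
        (c["name"], c["version"])
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in COMPROMISED
    ]
```

Run against every project's SBOM in CI, a check like this turns a vague "are we exposed?" question into a concrete list of things to patch or isolate.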


Cloud Sovereignty: Feature. Bug. Feature. Repeat!

Cloud sovereignty isn’t just a buzzword anymore, argues Kushwaha. “It’s a real concern for businesses across the world. The pattern is clear. The cloud isn’t a one-size-fits-all solution anymore. Companies are starting to realise that sometimes control, cost, and compliance matter more than convenience.” ... Cloud sovereignty is increasingly critical due to the evolving geopolitical scenario, government and industry-specific regulations, and vendor lock-ins with heavy reliance on hyperscalers. The concept has gained momentum and will continue to do so because technology has become pervasive and critical for running a state/country and any misuse by foreign actors can cause major repercussions, the way Bavishi sees it. Prof. Bhatt observes that true digital sovereignty remains a distant dream, and achieving it will require building a robust ecosystem over decades. This isn’t counterintuitive; it’s evolution, as Kushwaha puts it. “The cloud’s original promise was one of freedom. Today, when it comes to the cloud, freedom means more control. Businesses investing heavily in digital futures can’t afford to ignore the fine print in hyperscaler contracts or the reach of foreign laws. Sovereignty is the foundation for building safely in a fragmented world.” ... Organisations have recognised the risks of digital dependencies and are looking for better options. There is no turning back, Karlitschek underlines.


Securing AI to Benefit from AI

As organizations begin to integrate AI into defensive workflows, identity security becomes the foundation for trust. Every model, script, or autonomous agent operating in a production environment now represents a new identity — one capable of accessing data, issuing commands, and influencing defensive outcomes. If those identities aren't properly governed, the tools meant to strengthen security can quietly become sources of risk. The emergence of Agentic AI systems makes this especially important. These systems don't just analyze; they may act without human intervention. They triage alerts, enrich context, or trigger response playbooks under delegated authority from human operators. ... AI systems are capable of assisting human practitioners like an intern that never sleeps. However, it is critical for security teams to differentiate what to automate from what to augment. Some tasks benefit from full automation, especially those that are repeatable, measurable, and low-risk if an error occurs. ... Threat enrichment, log parsing, and alert deduplication are prime candidates for automation. These are data-heavy, pattern-driven processes where consistency outperforms creativity. By contrast, incident scoping, attribution, and response decisions rely on context that AI cannot fully grasp. Here, AI should assist by surfacing indicators, suggesting next steps, or summarizing findings while practitioners retain decision authority. Finding that balance requires maturity in process design. 


The Unkillable Threat: How Attackers Turned Blockchain Into Bulletproof Malware Infrastructure

When EtherHiding emerged in September 2023 as part of the CLEARFAKE campaign, it introduced a chilling reality: attackers no longer need vulnerable servers or hackable domains. They’ve found something far better—a global, decentralized infrastructure that literally cannot be shut down. ... When victims visit the infected page, the loader queries a smart contract on Ethereum or BNB Smart Chain using a read-only function call. ... Forget everything you know about disrupting cybercrime infrastructure. There is no command-and-control server to raid. No hosting provider to subpoena. No DNS to poison. The malicious code exists simultaneously everywhere and nowhere, distributed across thousands of blockchain nodes worldwide. As long as Ethereum or BNB Smart Chain operates—and they’re not going anywhere—the malware persists. Traditional law enforcement tactics, honed over decades of fighting cybercrime, suddenly encounter an immovable object. You cannot arrest a blockchain. You cannot seize a smart contract. You cannot compel a decentralized network to comply. ... The read-only nature of payload retrieval is perhaps the most insidious feature. When the loader queries the smart contract, it uses functions that don’t create transactions or blockchain records. 
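The detection challenge described above comes down to how `eth_call` works: the request is answered from a node's local state and never becomes a transaction, so nothing is written to the chain. The sketch below only builds the standard JSON-RPC request to show its shape; the address and function selector are dummy values, and no network call is made.

```python
# Why read-only payload retrieval leaves no on-chain trail: eth_call
# is a local state query, not a transaction. This builds the request only.

import json

def build_eth_call(contract: str, selector: str) -> str:
    """Serialize a read-only eth_call JSON-RPC request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_call",  # read-only: no gas, no tx hash, no record
        "params": [{"to": contract, "data": selector}, "latest"],
        "id": 1,
    })
```

For defenders, the practical implication is that detection has to happen at the endpoint or the network edge (e.g., watching for unexpected RPC traffic to public blockchain gateways), not on the chain itself.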


New 'Markovian Thinking' technique unlocks a path to million-token AI reasoning

Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called Markovian Thinking, the approach allows LLMs to engage in lengthy reasoning without incurring the prohibitive computational costs that currently limit such tasks. The team’s implementation, an environment named Delethink, structures the reasoning chain into fixed-size chunks, breaking the scaling problem that plagues very long LLM responses. Initial estimates show that for a 1.5B parameter model, this method can cut the costs of training by more than two-thirds compared to standard approaches. ... The researchers compared this to models trained with the standard LongCoT-RL method. Their findings indicate that the model trained with Delethink could reason up to 24,000 tokens, and matched or surpassed a LongCoT model trained with the same 24,000-token budget on math benchmarks. On other tasks like coding and PhD-level questions, Delethink also matched or slightly beat its LongCoT counterpart. “Overall, these results indicate that Delethink uses its thinking tokens as effectively as LongCoT-RL with reduced compute,” the researchers write. The benefits become even more pronounced when scaling beyond the training budget. 
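The structural idea reported above — reasoning in fixed-size chunks so context never grows with total output length — can be sketched schematically. This is my own simplified reading, not the Delethink implementation: a hypothetical `model_step` produces tokens, and after each chunk the context is reset to the prompt plus a short carryover tail, keeping per-step cost bounded.

```python
def markovian_generate(model_step, prompt, chunk_size=8, carryover=4, max_chunks=10):
    """Generate reasoning in fixed-size chunks with a Markovian context reset.

    After each chunk, the working context becomes prompt + the tail of the
    last chunk, so context size stays bounded no matter how long the full
    reasoning trace grows (unlike standard long chain-of-thought).
    """
    context = list(prompt)
    answer = []
    for _ in range(max_chunks):
        chunk = model_step(context, chunk_size)   # produce up to chunk_size tokens
        answer.extend(chunk)
        if chunk and chunk[-1] == "<eos>":
            break
        # Markovian reset: carry only a short tail forward, not the full history
        context = list(prompt) + chunk[-carryover:]
    return answer
```

The model must learn (via RL, per the article) to pack whatever state it needs into that carryover tail; the sketch only shows why the compute scaling changes from quadratic to roughly linear in total tokens.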


The dazzling appeal of the neoclouds

While their purpose-built design gives them an advantage for AI workloads, neoclouds also bring complexities and trade-offs. Enterprises need to understand where these platforms excel and plan how to integrate them most effectively into broader cloud strategies. Let’s explore why this buzzword demands your attention and how to stay ahead in this new era of cloud computing. ... Neoclouds, unburdened by the need to support everything, are outpacing hyperscalers in areas like agility, pricing, and speed of deployment for AI workloads. A shortage of GPUs and data center capacity also benefits neocloud providers, which are smaller and nimbler, allowing them to scale quickly and meet growing demand more effectively. This agility has made them increasingly attractive to AI researchers, startups, and enterprises transitioning to AI-powered technologies. ... Neoclouds are transforming cloud computing by offering purpose-built, cost-effective infrastructure for AI workloads. Their price advantages will challenge traditional cloud providers’ market share, reshape the industry, and change enterprise perceptions, fueled by their expected rapid growth. As enterprises find themselves at the crossroads of innovation and infrastructure, they must carefully assess how neoclouds can fit into their broader architectural strategies. 


Wi-Fi 8 is coming — and it’s going to make AI a lot faster

Unlike previous generations of Wi-Fi that competed on peak throughput numbers, Wi-Fi 8 prioritizes consistent performance under challenging conditions. The specification introduces coordinated multi-access point features, dynamic spectrum management, and hardware-accelerated telemetry designed for AI workloads at the network edge. ... A core part of the Wi-Fi 8 architecture is an approach known as Ultra High Reliability (UHR). This architectural philosophy targets the 99th percentile user experience rather than best-case scenarios. The innovation addresses AI application requirements that demand symmetric bandwidth, consistent sub-5-millisecond latency and reliable uplink performance. ... Wi-Fi 8 introduces Extended Long Range (ELR) mode specifically for IoT devices. This feature uses lower data rates with more robust coding to extend coverage. The tradeoff accepts reduced throughput for dramatically improved range. ELR operates by increasing symbol duration and using lower-order modulation. This improves the link budget for battery-powered sensors, smart home devices and outdoor IoT deployments. ... Wi-Fi 8 enhances roaming to maintain sub-millisecond handoff latency. The specification includes improved Fast Initial Link Setup (FILS) and introduces coordinated roaming decisions across the infrastructure. Access points share client context information before handoff. 
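The ELR trade described above — spending data rate on link budget to gain range — follows directly from log-distance path loss. The 6 dB figure below is a hypothetical illustration, not a number from the Wi-Fi 8 specification; the point is the conversion from decibels of margin to a range multiplier.

```python
def range_multiplier(link_budget_gain_db, path_loss_exponent=2.0):
    """Convert extra link budget (dB) into a range gain.

    Under a log-distance path-loss model, loss = 10 * n * log10(d),
    so d2 / d1 = 10 ** (gain_db / (10 * n)).  n = 2 is free space;
    indoor environments typically have n of 3 or more.
    """
    return 10 ** (link_budget_gain_db / (10 * path_loss_exponent))

# If longer symbols and lower-order modulation buy ~6 dB of sensitivity,
# free-space range roughly doubles; indoors (n=3) the gain is smaller.
free_space = range_multiplier(6.0)        # ~2.0x
indoor = range_multiplier(6.0, 3.0)       # ~1.6x
```

This is why ELR targets battery-powered sensors and outdoor IoT: those links care about reach, not throughput, so the same decibels are worth far more as range than as rate.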


Life, death, and online identity: What happens to your online accounts after death?

Today, we lack the tools (protocols) and the regulations to enable digital estate management at scale. Law and regulation can force a change in behavior by large providers. However, without effective protocols for identifying the individuals a decedent has chosen to manage their digital estate, every service will have to design its own path. This creates an exceptional burden on individuals planning their digital estate, and on individuals who manage the digital estates of the deceased. ... When we set out to write this paper, we wanted to influence the large technology and social media platforms, politicians, regulators, estate planners, and others who can help change the status quo. Further, we hoped to influence standards development organizations, such as the OpenID Foundation and the Internet Engineering Task Force (IETF), and their members. As standards developers in the realm of identity, we have an obligation to the people we serve to consider identity from birth to death and beyond, to ensure every human receives the respect they deserve in life and in death. Additionally, we wrote the planning guide to help individuals plan for their own digital estate. By giving people the tools to help describe, document, and manage their digital estates proactively, we can raise more awareness and provide tools to help protect individuals at one of the most vulnerable moments of their lives.


5 steps to help CIOs land a board seat

Serving on a board isn’t an extension of an operational role. One issue CIOs face is not understanding the difference between executive management and governance, Stadolnik says. “They’re there to advise, not audit or lead the current company’s CIO,” he adds. In the boardroom, the mandate is to provide strategy, governance, and oversight, not execution. That shift, Stadolnik says, can be jarring for tech leaders who’ve spent their careers driving operational results. ... “There were some broad risk areas where having strong technical leadership was valuable, but it was hard for boards to carve out a full seat just for that, which is why having CIO-plus roles was very beneficial,” says Cullivan. The issue of access is another uphill battle for CIOs. As Payne found, the network effect can play a huge role in seeking a board role. But not every IT leader has the right kind of network that can open the door to these opportunities. ... Boards expect directors to bring scope across business disciplines and issues, not just depth in one functional area. Stadolnik encourages CIOs to utilize their strategic orientation, results focus, and collaborative and influence skills to set themselves up for additional responsibilities like procurement, supply chain, shared services, and others. “It’s those executive leadership capabilities that will unlock broader roles,” he says. Experience in those broader roles bolsters a CIO’s board résumé and credibility.


Microservices Without Meltdown: 7 Pragmatic Patterns That Stick

A good sniff test: can we describe the service’s job in one short sentence, and does a single team wake up if it misbehaves? If not, we’ve drawn mural art, not an interface. Start with a small handful of services you can name plainly—orders, payments, catalog—then pressure-test them with real flows. When a request spans three services just to answer a simple question, that’s a hint we’ve sliced too thin or coupled too often. ... Microservices live and die by their contracts. We like contracts that are explicit, versioned, and backwards-friendly. “Backwards-friendly” means old clients keep working for a while when we add fields or new behaviors. For HTTP APIs, OpenAPI plus consistent error formats makes a huge difference. ... We need timeouts and retries that fit our service behavior, or we’ll turn small hiccups into big outages. For east-west traffic, a service mesh or smart gateway helps us nudge traffic safely and set per-route policies. We’re fans of explicit settings instead of magical defaults. ... Each service owns its tables; cross-service read needs go through APIs or asynchronous replication. When a write spans multiple services, aim for a sequence of local commits with compensating actions instead of distributed locks. Yes, we’re describing sagas without the capes: do the smallest thing, record it durably, then trigger the next hop.
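The final pattern above — a sequence of local commits with compensating actions instead of distributed locks — is the saga pattern, and its core loop is small enough to sketch. This is a minimal, assumption-laden illustration: each step is an (action, compensate) pair, and the step names in the usage below are invented.

```python
def run_saga(steps):
    """Run (action, compensate) pairs as a saga.

    Each action is a local commit in one service.  If a later step fails,
    the compensations for every completed step run in reverse order --
    a best-effort semantic rollback, not a distributed transaction.
    """
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):
                undo()   # e.g. release a reservation, issue a refund
            raise
```

In production the "record it durably, then trigger the next hop" part matters as much as the loop: each local commit and each compensation should be persisted (an outbox table is a common choice) so the saga can resume after a crash rather than leaving half-done work.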