Daily Tech Digest - November 23, 2025


Quote for the day:

“Let no feeling of discouragement prey upon you, and in the end you are sure to succeed.” -- Abraham Lincoln



Lean4: How the theorem prover works and why it's the new competitive edge in AI

Lean4 is both a programming language and a proof assistant designed for formal verification. Every theorem or program written in Lean4 must pass strict type-checking by Lean’s trusted kernel, yielding a binary verdict: A statement either checks out as correct or it doesn’t. This all-or-nothing verification means there’s no room for ambiguity – a property or result is proven true or it fails. ... Lean4’s value isn’t confined to pure reasoning tasks; it’s also poised to revolutionize software security and reliability in the age of AI. Bugs and vulnerabilities in software are essentially small logic errors that slip through human testing. What if AI-assisted programming could eliminate those by using Lean4 to verify code correctness? ... Beyond software bugs, Lean4 can encode and verify domain-specific safety rules. For instance, consider AI systems that design engineering projects. A LessWrong forum discussion on AI safety gives the example of bridge design: An AI could propose a bridge structure, and formal systems like Lean can certify that the design obeys all the mechanical engineering safety criteria. ... For enterprise decision-makers, the message is clear: It’s time to watch this space closely. Incorporating formal verification via Lean4 could become a competitive advantage in delivering AI products that customers and regulators trust. We are witnessing the early steps of AI’s evolution from an intuitive apprentice to a formally validated expert.
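The binary verdict is visible even in a toy proof. This sketch uses only the built-in `Nat.add_comm` lemma from Lean4's standard library; the kernel either certifies the whole statement or rejects the file, with no partial credit:

```lean
-- The kernel type-checks this proof term against the stated theorem;
-- it is accepted in full or rejected, never "mostly right".
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Changing either side of the equation, or the proof term, turns acceptance into a hard type error at check time.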


How pairing SAST with AI dramatically reduces false positives in code security

In our opinion, the path to next-generation code security is not choosing one over the other, but integrating their strengths. So, along with Kiarash Ahi, founder of Virelya Intelligence Research Labs and co-author of the framework, I decided to do exactly that. Our novel hybrid framework combines the deterministic rigor and the speed of traditional SAST with the contextual reasoning of a fine-tuned LLM to deliver a system that doesn’t just find vulnerabilities, but also validates them. ... The framework embeds the relevant code snippet, the data flow path and surrounding contextual information into a structured JSON prompt for a fine-tuned LLM. We fine-tuned Llama 3 8B on a high-quality dataset of vetted false positives and true vulnerabilities, specifically covering major flaw categories like those in the OWASP Top 10, to form the core of the intelligent triage layer. Based on the relevant security issue flagged, the prompt then asks a clear, focused question, such as, “Does this user input lead to an exploitable SQL injection?” ... A SAST and LLM synergy marks a necessary evolution in static code security. By integrating deterministic analysis with intelligent, context-aware reasoning, we can finally move past the false positive crisis and equip developers with a tool that provides high-signal security feedback at the pace of modern development.
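As a minimal sketch of how such a structured JSON prompt could be assembled: the field names and exact question wording below are illustrative assumptions, not the schema the framework's authors actually use.

```python
import json

def build_triage_prompt(snippet, flow_path, context, finding):
    """Assemble a structured JSON prompt for an LLM triage layer.
    Field names and question wording are illustrative, not the
    framework authors' actual schema."""
    payload = {
        "finding": finding,            # e.g. "SQL injection"
        "code_snippet": snippet,       # code flagged by the SAST engine
        "data_flow_path": flow_path,   # source-to-sink trace
        "context": context,            # surrounding code and sanitizers
        "question": (
            f"Does this user input lead to an exploitable {finding}? "
            "Answer TRUE_POSITIVE or FALSE_POSITIVE with a short "
            "justification."
        ),
    }
    return json.dumps(payload, indent=2)

prompt = build_triage_prompt(
    snippet='cursor.execute("SELECT * FROM users WHERE id=" + uid)',
    flow_path=["request.args['uid']", "uid", "cursor.execute"],
    context="uid is read from the query string with no sanitization",
    finding="SQL injection",
)
```

Forcing a constrained answer (true/false positive plus justification) is what turns the LLM from a generator into a validator of the SAST finding.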


Quantum Progress Demands Manufacturing Revolution, Martinis Says

Quantum computing’s next breakthroughs will come from factories, not physics labs, according to John Martinis ... He argued that a general-purpose quantum computer will require at least a million physical qubits, a number that is far beyond today’s devices and out of reach without a fundamental shift in how the hardware is built. ... Current machines rely on dense tangles of wires, components and cooling structures that dwarf the tiny chip at the bottom of the machine. He writes that “The complexity of the plumbing completely overwhelms the quantum device itself.” Martinis said the solution is to abandon today’s hand-built, research-lab approach and move to fully integrated chips similar to the transformation that turned 1960s mainframes into the microchips inside smartphones. The field, he argued, must invest in cryogenic integrated circuits that can operate at the ultra-low temperatures required for superconducting qubits. Using that approach, Martinis suggests that engineers could place about 20,000 qubits on a single wafer and reach the million-qubit scale by linking wafers together. That level of integration would also require abandoning manufacturing methods that date back more than half a century. He singled out the “lift-off” process still used in many quantum labs as too dirty and too limited for industrial-scale production.


Dream of quantum internet inches closer after breakthrough helps beam information over fiber-optic networks

“By demonstrating the versatility of these erbium molecular qubits, we're taking another step toward scalable quantum networks that can plug directly into today's optical infrastructure,” David Awschalom, the study's principal investigator and a professor of molecular engineering and physics at the University of Chicago, said in the statement. ... That's largely where the comparison ends, though. Whereas classical bits compute in binary 1s and 0s, qubits behave according to the weird rules of quantum physics, allowing them to exist in multiple states at once — a property known as superposition. A pair of qubits could, therefore, be 0-0, 0-1, 1-0 and 1-1 simultaneously. Qubits typically come in three forms: superconducting qubits, which are made from tiny electrical circuits; trapped ion qubits, which store information in charged atoms held in place by electromagnetic fields; and photonic qubits, which encode quantum states in particles of light. ... Operating at telecom wavelengths provides two key advantages, the first being that signals can travel long distances with minimal loss — vital for transmitting quantum data across fiber networks. The second is that light at fiber-optic wavelengths passes easily through silicon. If it didn't, any data encoded in the optical signal would be absorbed and lost. Because the optical signal can pass through silicon to detectors or other photonic components embedded beneath, the erbium-based qubit is ideal for chip-based hardware, the researchers said.
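The two-qubit superposition described above can be written as a weighted sum of the four basis states, with measurement probabilities given by the squared amplitudes:

```latex
|\psi\rangle = \alpha\,|00\rangle + \beta\,|01\rangle + \gamma\,|10\rangle + \delta\,|11\rangle,
\qquad |\alpha|^2 + |\beta|^2 + |\gamma|^2 + |\delta|^2 = 1
```

Measuring the pair collapses it to exactly one of the four outcomes, which is why n qubits can carry 2^n amplitudes during a computation yet yield only n classical bits when read out.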


AWS Outage Fallout: Lessons In Resilience

The AWS outage has prompted multiple warnings about the risks of relying on a single cloud provider. But experts warn it’s important to keep in mind that moving to multi-cloud can also cause problems. Multi-cloud is “not the default answer,” says Ryan Gracey, partner and technology lawyer at law firm Gordons. “For a few crown jewel services, splitting across providers can reduce single-supplier risk and satisfy regulators, but it also raises cost and complexity, and opens new ways to fail. Chasing a lowest common denominator setup often means giving up the very features that make cloud attractive.” ... The takeaway from the latest outage is not just to buy more redundancy, says Gracey. “It’s about designing systems that bend, not break. They should slow down gracefully, drop non-essential features and protect the most important customer tasks when things go wrong. A part of this is running drills so teams know who decides what actions to take, what to say to customers and what to do first.” For the cloud service provider, it’s important to recognise where a potential single point of failure – or “race condition” in the case of AWS – may exist, says Jones. “AWS will be looking at its architecture to ensure single points of failure are eliminated and the potential blast radius of any incident is dramatically reduced.” Maintaining operations during outages requires “architectural and operational preparation,” says Nazir.


AI Is Not Just a Tool

At some point in every panel, someone leans into the microphone and says it: “AI is just a tool, like a camera.” It’s meant to end the argument, a warm blanket for anxious minds. Art survived photography; we’ll survive this. But it is wrong. A camera points at the world and harvests what’s already there. A modern AI system points at us and proposes a world — filling gaps, making claims, deciding what should come next. That difference is not semantics. It’s jurisdiction. ... A photo is protectable because a human author made it. Purely AI-generated material, absent sufficient human control, isn’t. The law refuses to pretend the prompt is the picture. That alone should retire the analogy. That doesn’t mean the output is “authorless”; it means the law refuses to pretend the user’s prompt equals human creative control. Cameras yield photographs authored by people; models yield artifacts whose legal status relies on the extent to which a human actually contributed. Different authorship rules = different things. ... The model is not a person, but it isn’t an empty pipe. It embodies choices that will be made (over and over) at human scale, with the same confidence we misread as competence. That’s why generative AI feels creative without being human. It performs composition: not presence, but pattern. It produces objects that look like testimony. Cameras can lie (through framing), but models conjecture. They create the very thing we then argue about.


Are Small Businesses at Risk by Outsourcing Parts of Their Operations?

When you outsource a function or department, you're doing more than simply delegating tasks. Every third-party vendor, managed service provider, virtual assistant, or consultant who requires access to your critical systems carries an element of risk; each is effectively a potential entry point into your business. ... Some organizations are bound by specific, stringent regulatory frameworks and standards, depending on their sector(s) of operation. Some remote-working IT or marketing contractors may not be subject to the same data privacy laws that govern your organization, for example. Similarly, an HR outsourcing provider may store employee information in cloud servers that are deemed security-compliant in some jurisdictions but not in others. These compliance gaps create additional security vulnerabilities that threat actors would actively exploit without hesitation if the opportunity arose. ... As AI becomes more ingrained in business operations, the process of outsourcing becomes increasingly gray. According to recent statistics, more than half of businesses have experienced AI-related security vulnerabilities. What's more, cybercriminals are harnessing generative AI technology to escalate and amplify their attacks. ... The biggest danger that SMBs face when outsourcing is the assumption that someone else is now responsible for upholding security standards.


Why AI Integration in DevOps is so Important

Traditional DevOps pipelines rely heavily on automated testing and monitoring. The drawback is that they often lack the machine intelligence needed to recognize new or evolving threats. AI addresses this gap by introducing learning-based security systems capable of real-time behavioral analysis. Instead of waiting for known vulnerabilities to appear or be actively exploited, these systems recognize precursor behavior and code activity and alert engineers before an incident occurs. Within DevOps, AI is able to fortify each stage of the process: Reviewing commits for suspicious or vulnerable code, monitoring container environment integrity and evaluating system logs for anomalies that may have escaped real-time recognition. Insights like these help teams locate weak spots and reduce the impact of human error over time. ... AI integration with existing CI/CD workflows gives DevOps teams real-time visibility into security risks. AI-powered scanners analyze components automatically: source code, dependencies and container images are all scanned for hidden vulnerabilities before the build phase is complete. This helps identify issues that could otherwise slip through manual reviews. AI-driven monitoring tools also track activity across the entire delivery pipeline, identifying potential attacks such as credential theft, code injection or dependency poisoning. As these tools learn over time, they adapt to new threat behaviors that traditional scanners might overlook.
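As a deliberately simplified illustration of behaviour-based anomaly detection, the sketch below uses a plain z-score over event counts; real pipeline-monitoring systems learn far richer baselines, but the flag-what-deviates principle is the same.

```python
from statistics import mean, stdev

def anomalous_events(counts, threshold=2.0):
    """Flag event counts that deviate sharply from the baseline.
    A plain z-score stands in for the richer learned models the
    article describes; the threshold is kept low because a single
    outlier also inflates the sample standard deviation."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Hourly failed-login counts: the spike at index 6 is flagged.
print(anomalous_events([4, 5, 3, 6, 4, 5, 90, 4]))  # → [6]
```

In a real pipeline the flagged indices would feed an alerting step rather than a print, and the baseline would adapt as new clean data arrives.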


NTT: How Japan Leads in Cybersecurity Amid Rising Threats

The Active Cyber Defense Law passed in May 2025 is intended to minimise the damage caused by substantive cyberattacks that can compromise national security, while Japan has also established new requirements for critical infrastructure companies to enhance their cybersecurity practices under the revised Economic Security Promotion Act. ... Gen AI has lowered the bar for adversaries to launch cyberattacks, meaning defenders have no choice but to at least partially automate tasks including log and phishing analysis, threat detection, behavioural analysis and incident report drafting. This is crucial for minimising burnout risks among defenders who are overwhelmed by ever-increasing, round-the-clock work. ... As Japanese companies are increasingly expanding their businesses globally, multiple firms have reported their overseas subsidiaries being hit by ransomware attacks in the United States, Vietnam, Thailand, Singapore and Taiwan. To manage supply chain risks and ensure business continuity, it is becoming more crucial than ever to ensure global governance in cybersecurity and to maintain proper data backups, the principle of least privilege and network segmentation. Surprisingly, Japan has the lowest ransomware infection ratio amongst 15 major countries, including the United States, the United Kingdom, France and Germany.


From Data Bottlenecks to Data Products: Building for Speed and Scale

As it stands now, the central data team oversees data quality only at the final stage, an approach that is not working, because the domain teams who create the data are the only ones with the full context necessary for proper accuracy and integrity. If businesses shift left with their approach, app developers themselves will take responsibility for the data created by applications. By giving the producer ownership of the quality, ongoing issues can be stopped before trickling down into data dashboards or machine-learning models. Ultimately, this is more than just a technical change. Shifting left will be a culture change that moves toward Data Mesh principles. By embedding ownership and quality within the domains that produce and use data, organisations replace central gatekeeping with shared accountability. Each domain now becomes a creator and protector of reliable data, ensuring governance is built in from the start rather than enforced later. ... Understandably, giving ownership of data to the teams creating it may seem chaotic. But it isn’t about losing control over it; rather, it is about giving teams the freedom and tools to work faster and smarter. At the end stands the lighthouse vision of a self-service data platform where every consumer can independently generate insights for standard questions and only reach out for support when tackling more advanced analyses.

Daily Tech Digest - November 22, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone



How CIOs can get a better handle on budgets as AI spend soars

Everyone wants to become AI-centric or AI-native, says West Monroe’s Greenstein. “But nobody has extra buckets of money to do this unless it’s existential to their company,” he says. So moving money from legacy projects to AI is a popular strategy. “It’s a shift of priorities within companies,” he says. “They look at their investments and ask how many are no longer needed because of AI, or how many can be done with AI. Plus, they’re putting pressure on vendors to drive down costs. They’re definitely squeezing existing suppliers.” Even large, tech-forward companies might have to do this kind of juggling. ... “AI is in a self-funding model at the moment,” he says. “We’re shifting investment from legacy technologies to AI.” ... Another challenge to budgeting is the demands that AI places on people, systems, and data. One of the most significant challenges to managing AI costs is talent, says Principal’s Arora. “Skill gaps and cross-team dependencies can slow deliveries and drive up costs,” he says. Then there’s the problem of evolving regulations, and the need to continuously adapt governance frameworks to stay resilient in the face of these changes. Organizations also often underestimate how much money will be needed to train employees, and to bring data and other foundational systems in line with what’s needed for AI. “Legacy environments add complexity and expense,” he adds. “These one-time costs are heavy but essential to avoid long-term inefficiencies.”


AI agent evaluation replaces data labeling as the critical path to production deployment

It's a fundamental shift in what enterprises need validated: not whether their model correctly classified an image, but whether their AI agent made good decisions across a complex, multi-step task involving reasoning, tool usage and code generation. If evaluation is just data labeling for AI outputs, then the shift from models to agents represents a step change in what needs to be labeled. Where traditional data labeling might involve marking images or categorizing text, agent evaluation requires judging multi-step reasoning chains, tool selection decisions and multi-modal outputs — all within a single interaction. "There is this very strong need for not just human in the loop anymore, but expert in the loop," Malyuk said. He pointed to high-stakes applications like healthcare and legal advice as examples where the cost of errors remains prohibitively high. ... The challenge with evaluating agents isn't just the volume of data, it's the complexity of what needs to be assessed. Agents don't produce simple text outputs; they generate reasoning chains, make tool selections, and produce artifacts across multiple modalities. ... While monitoring what AI systems do remains important, observability tools measure activity, not quality. Enterprises require dedicated evaluation infrastructure to assess outputs and drive improvement. These are distinct problems requiring different capabilities.


How IT leaders can build successful AI strategies — the VC view

It’s clear now that AI is transforming existing business structures, operational layers, organizational charts, and processes. “As a CIO, if you look at the long term, you get better visibility of the outcomes of AI,” said Sandhya Venkatachalam, founder and partner at Axiom Partners. “Today, a lot of these net new capabilities are taking the form of AI performing the work or producing the outcomes that humans do, versus emulating or automating software tools,” Venkatachalam said. The shift will inevitably displace legacy systems and processes. She cited customer support as an early area ripe for upheaval. ... VCs typically don’t look at what buyers need right now; they look ahead. Similarly, IT leaders should look at how AI can transform their industry in the future. The real value of AI is in displacing legacy stacks and processes, and short wins or scattered AI initiatives mean nothing, Venkatachalam said. Adding AI to existing workflows — like building an internal large language model (LLM) — is often a waste. Enterprises are also wasting time building proprietary tools and infrastructures, which duplicates work already commoditized by big research labs, Venkatachalam said. ... AI strategies link IT directly to core products, which dictates market survival. IT decision-makers should align AI strategies to their vertical markets. Physical AI is considered the next big AI technology after agents in some areas.


Could AI transparency backfire for businesses?

Work is underway to devise common ways to disclose the use of AI in content creation. The British Standards Institute’s (BSI) common standard (BS ISO/IEC 42001:2023) provides a framework for organisations to establish, implement, maintain, and continually improve an AI management system (AIMS), ensuring AI applications are developed and operated ethically, transparently, and in alignment with regulatory standards. It helps manage AI-specific risks such as bias and lack of transparency. Mark Thirwell, the BSI’s global digital director, says that such standards are critical for building trust in AI. For his part, Thirwell is mainly focused on improving the transparency of underlying training data over whether content is disclosed as AI-generated. “You wouldn’t buy a toaster if someone hadn’t checked it to make sure it wasn’t going to set the kitchen on fire,” he argues. Thirwell posits that common standards can, and must, interrogate the trustworthiness of AI. Does it do what it says it’s going to do? Does it do that every time? Does it not do anything else – as hallucination and misinformation become increasingly problematic? Does it keep your data secure? Does it have integrity? And unique to AI, is it ethical? “If it’s detecting cancers or sifting through CVs,” he says, “is there going to be a bias based on the data it holds?” This is where transparency of the underlying data becomes key. 


The Importance of Having and Maintaining a Data Asset List and How to Create One

The explosive growth of structured and unstructured data has made it increasingly difficult for organizations to track what information they hold across networks, devices, SaaS applications, and cloud platforms. Without clear visibility, businesses face higher risks, including security gaps, audit failures, regulatory penalties, and rising storage costs. ... Before we get into how to build a data asset inventory, it’s important to understand why regulators now expect organizations to maintain one. The compliance landscape in 2025 is more demanding than ever, and nearly every major framework explicitly or implicitly requires data mapping and data inventory management. ... A data asset inventory is a structured, centralized record of all the data types and systems that power your organization. The goal is to gain full visibility into what data exists, where it’s stored, who manages it, and how it flows, while also capturing any compliance obligations tied to that data. ... Many organizations rely on third-party providers to manage or process sensitive data, which can improve efficiency but also introduce new risks. External partnerships expand your organization’s digital footprint, increase the potential attack surface, and add complexity to data governance. ... A data asset inventory isn’t a one-time task, it’s a living, evolving document. As your organization adopts new tools, expands into new markets, or grows its teams, your inventory should evolve to reflect these changes. 
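A minimal sketch of what one inventory record could look like in code; the field names below are illustrative assumptions, to be adapted to whichever frameworks and systems actually apply.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One record in a data asset inventory. Field names are
    illustrative; adapt them to the frameworks you must satisfy."""
    name: str            # e.g. "customer_orders"
    location: str        # system or store where the data lives
    owner: str           # accountable team or person
    classification: str  # e.g. "PII", "internal", "public"
    flows_to: list = field(default_factory=list)     # downstream systems
    regulations: list = field(default_factory=list)  # e.g. ["GDPR"]

inventory = [
    DataAsset("customer_orders", "orders-db (cloud RDS)", "sales-eng",
              "PII", flows_to=["analytics-warehouse"],
              regulations=["GDPR"]),
    DataAsset("marketing_site_logs", "cdn-logs bucket", "web-team",
              "internal"),
]

# Once the inventory exists, visibility questions become simple queries:
pii_assets = [a.name for a in inventory if a.classification == "PII"]
```

The payoff of the structure is the last line: questions like "where is our PII?" or "what flows to this vendor?" stop being archaeology and become filters over a single record set.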


Building and Implementing Cyber Resilience Strategies

Currently, there is no unified standard for managing cyber resilience. Although many vendors offer their own solutions and some general standardization efforts are underway, a clear and consistent framework has yet to be established. As a result, organizations are forced to develop their own methods based on internal priorities and interpretations. The main challenge is that cyberattacks have become unavoidable and frequent. Traditional protective measures alone are no longer sufficient to fight modern threats. Another problem is the lack of coordination between IT, information security, and business units. ... In practice, however, its implementation largely depends on the organization’s maturity, scale, and specific infrastructure characteristics. The main difference lies in the level of detail: as a company grows, its infrastructure becomes more complex, the number of stakeholders increases, and each stage of analysis requires greater depth. In small organizations, identifying critical services is relatively quick, while in large enterprises, the process may involve analyzing hundreds of interconnected operations. Likewise, the scope of security measures varies—from basic hardening of key systems to multi-layered protection across distributed environments. At the same time, core principles such as threat analysis, incident response planning, and regular audits remain largely unchanged across all organizations.


Security researchers develop first-ever functional defense against cyberattacks on AI models

Researchers now warn that the most advanced of these attacks, called cryptanalytic extraction, can rebuild a model by asking it thousands of carefully chosen questions. Each answer helps reveal tiny clues about the model’s internal structure. Over time, those clues form a detailed map that exposes the model’s weights and biases. These attacks work surprisingly well when used on neural networks that rely on ReLU activation functions. Because these networks behave like piecewise linear systems, attackers can hunt for points where a neuron’s output flips between active and inactive and use those moments to uncover the neuron’s signature. ... Early methods could only recover partial information, but newer techniques can figure out both the size and the direction of the weights. Some even work using nothing more than the model’s predicted labels. All rely on the same core assumption. Neurons in a given layer behave differently enough that their signals can be separated. When that is true, the attack can cluster each neuron’s critical points and rebuild the entire network with surprising accuracy. ... The team tested this defense on neural networks that previous studies had broken in just a few hours. One of the clearest results comes from a model trained on the MNIST digit dataset with two small hidden layers. 
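A toy sketch of the critical-point idea described above: along a line through input space, a ReLU neuron's output flips between zero and non-zero at a single point, which can be located by binary search. In a real extraction attack the adversary only queries the full network; here the neuron's weights are visible purely for illustration.

```python
def relu_neuron(x, w, b):
    """Output of a single ReLU neuron: max(0, w.x + b)."""
    return max(0.0, sum(wi * xi for wi, xi in zip(w, x)) + b)

def find_critical_point(w, b, x0, x1, iters=60):
    """Binary-search the segment from x0 to x1 for the input where
    the neuron flips between inactive and active, the 'critical
    point' that extraction attacks cluster. Assumes the neuron is
    inactive at x0 and active at x1."""
    def point(t):
        return [p + t * (q - p) for p, q in zip(x0, x1)]
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if relu_neuron(point(mid), w, b) > 0:
            hi = mid   # active here, so the flip lies at or below mid
        else:
            lo = mid   # inactive here, so the flip lies above mid
    return point((lo + hi) / 2)

# Toy neuron with w = [1, -1], b = -0.5: it activates where
# x[0] - x[1] > 0.5, so along this segment the flip sits at [0.5, 0].
x_crit = find_critical_point([1.0, -1.0], -0.5,
                             x0=[0.0, 0.0], x1=[2.0, 0.0])
```

Each recovered critical point constrains the neuron's hyperplane; collecting many of them per neuron is what lets the attacks cluster signatures and rebuild weights layer by layer.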


Draft Trump executive order signals new battle ahead over state AI powers

By eliminating that federal framework, the Trump White House positions itself not simply as preempting state authority, but also as reversing its immediate federal predecessor’s regulatory approach. The draft EO further states that the U.S. must sustain AI leadership through a “balanced, minimal regulatory environment,” language that signals a clear ideological orientation against safety-first or rights-protective models of AI governance. The administration wants the Department of Justice to challenge state AI laws it views as obstructive; the Department of Commerce to catalogue and publicly criticize state statutes deemed “burdensome;” and agencies like the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) to establish national standards that would override state requirements. ... The move immediately raises questions not only about the future of AI governance but also about the structure of American federalism. For years, states have been the primary actors experimenting with AI regulation. They have advanced bills aimed at biometric privacy, algorithmic fairness, deepfake disclosure, automated decision-making transparency, and even restrictions on government use of facial recognition. These experiments, often more aggressive than anything contemplated in Congress, have become the country’s de facto laboratories of AI oversight. 


Engineering the Perfect Product Launch: Lessons from Prototype to Production

Rushing a product to market without a strong quality framework is a gamble most companies regret. Recalls, warranty claims and reputational damage cost far more than investing in quality upfront. The smarter approach is to build quality into the process from the start rather than bolting it on at the end. ... During the product rollout I supported, we built proactive quality checkpoints at every stage of assembly. This meant small defects were caught early, long before they reached final testing. In one instance, a supplier batch with a minor material inconsistency was identified at the first inspection step, preventing what could have been a costly recall. Conversely, I’ve also seen how skipping just one validation step resulted in weeks of rework. ... When all three elements – development, quality and ERP – work in harmony, product launches move faster and run smoothly. Costs are kept in check because inefficiencies are addressed early. Time-to-market accelerates because bottlenecks are anticipated. Manufacturing excellence becomes the standard from the first unit shipped, not something achieved after painful trial and error. ... Engineering a product launch is about orchestrating dozens of small, interconnected decisions across design, quality and enterprise systems. The companies that consistently succeed treat the launch as an engineering challenge, not just a marketing deadline.


Organisations struggle with non-human identity risks & AI demands

Growth in digital identities – both human and non-human – continues to strain legacy identity and access management practices. This identity sprawl raises the risk of credential-based threats and increases the attack surface for cybercriminals. "With organizations struggling to govern an expanding mesh of digital identities across human, machine, and AI entities, over-permissioned roles, shadow identities, and disconnected IAM systems will continue to expose organizations to credential-based attacks and lateral movement. AI will also reshape traditional social engineering: synthetic voices, deepfakes, and adaptive phishing will erode the reliability of static authentication, forcing organizations to adopt continuous and context-aware verification as the new baseline," said Benoit Grange ... "The NIS2 directive has ushered in stricter cybersecurity measures and reporting for a wider range of critical infrastructure and essential services across the European Union. For industries newly brought under this directive, including manufacturing, logistics and certain digital services, 2026 will bring new growing pains. The sectors, many long accustomed to minimal compliance oversight, now face strict governance and reporting requirements. In contrast, mature sectors like finance and healthcare will adapt more smoothly. The disparity will expose structural weaknesses in organizations unfamiliar with continuous compliance, making them attractive targets for attackers exploiting regulatory confusion," said Niels Fenger.

Daily Tech Digest - November 21, 2025


Quote for the day:

“You live longer once you realize that any time spent being unhappy is wasted.” -- Ruth E. Renkel



DPDP Rules and the Future of Child Data Safety

Most obligations for Data Fiduciaries, including verifiable parental consent, security safeguards, breach notifications, data minimisation, and processing restrictions for children’s data, come into force after 18 months. This means that although the law recognises children’s rights today, full legal protection will not be enforceable until the end of the 18-month window. ... Parents’ awareness of data rights, online safety, and responsible technology is the backbone of their informed participation. The government needs to undertake a nationwide Digital Parenting Awareness Campaign with the help of State Education Departments, modelled on literacy and health awareness drives. ... Schools often outsource digital functions to vendors without due diligence. Over the next 18 months, they must map where the student data is collected and where it flows, renegotiate contracts with vendors, ensure secure data storage, and train teachers to spot data risks. Nationwide teacher-training programmes should embed digital pedagogy, data privacy, and ethical use of technology as core competencies. ... Effective implementation will be contingent on the autonomy, resourcefulness, and accessibility of the Data Protection Board. The regulator should include specialised talent such as cybersecurity specialists and privacy engineers. It should be supported by building an in-house digital forensics unit, capable of investigating leaks, tracing unauthorised access, and examining algorithmic profiling.


5 best practices for small and medium businesses (SMEs) to strengthen cybersecurity

First, begin with good access control, restricting employees to only the permissions they specifically require. It is also important to have multi-factor authentication in place and to regularly audit user accounts, particularly when roles shift or personnel depart. Second, keep systems and software current by immediately patching operating systems, applications, and security software to close vulnerabilities before attackers can exploit them; updates should also be automated to avoid human error. Staff are usually the front line of the defence, so the third essential practice is ongoing employee training in identifying phishing attempts, suspicious links, and social engineering methods, making them active guardians of corporate data and effectively cutting the risk of a data breach. Fourth is safeguarding your data: keep regular backups stored safely in multiple places and complement them with an explicit disaster recovery strategy, so that you can restore operations promptly, reduce downtime, and constrain losses in the event of a cyber attack. Fifth and finally, companies should embrace a layered security paradigm using antivirus tools, firewalls, endpoint protection, encryption, and safe networks. Each of these layers complements the others, creating a resilient defence that protects your digital ecosystem and strengthens trust with partners, customers, and stakeholders.
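The fourth practice, backups, is only useful if the backups actually exist and are fresh when you need them. As a rough illustration, a small audit script can flag locations whose latest backup is missing or stale; the paths and the daily-freshness policy below are hypothetical, not a product recommendation:

```python
import tempfile
import time
from pathlib import Path

MAX_AGE_SECONDS = 24 * 3600  # hypothetical policy: at least one backup per day

def backup_is_fresh(path: Path, now: float) -> bool:
    """A backup only counts if it exists and is recent enough to restore from."""
    return path.exists() and (now - path.stat().st_mtime) <= MAX_AGE_SECONDS

def audit_backups(locations: list[Path]) -> list[Path]:
    """Return the locations whose latest backup is missing or stale."""
    now = time.time()
    return [p for p in locations if not backup_is_fresh(p, now)]

# Example: one freshly written backup, one location with nothing in it.
with tempfile.TemporaryDirectory() as d:
    fresh = Path(d) / "backup.tar.gz"
    fresh.write_bytes(b"archive bytes")
    missing = Path(d) / "offsite" / "backup.tar.gz"
    needs_attention = audit_backups([fresh, missing])  # flags only the missing one
```

In practice this kind of check would run on a schedule and alert, so a dead backup job is discovered before a restore is needed rather than during one.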


How Artificial Intelligence is Reshaping the Software Development Life Cycle (SDLC)

With AI tools, workflows become faster and more efficient, giving engineers more time to concentrate on creative innovation and tackling complex challenges. As these models advance, they can better grasp context, learn from previous projects, and adapt to evolving needs. ... AI streamlines software design by speeding up prototyping, automating routine tasks, optimizing with predictive analytics, and strengthening security. It generates design options, translates business goals into technical requirements, and uses fitness functions to keep code aligned with architecture. This allows architects to prioritize strategic innovation and boosts development quality and efficiency. ... AI is shifting developers’ roles from manual coding to strategic "code orchestration." Critical thinking, business insight, and ethical decision-making remain vital. AI can manage routine tasks, but human validation is necessary for security, quality, and goal alignment. Developers skilled in AI tools will be highly sought after. ... AI serves to augment, not replace, the contributions of human engineers by managing extensive data processing and pattern recognition tasks. The synergy between AI's computational proficiency and human analytical judgment results in outcomes that are both more precise and actionable. Engineers are thus empowered to concentrate on interpreting AI-generated insights and implementing informed decisions, as opposed to conducting manual data analysis.
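The fitness functions mentioned above can be as simple as an automated check that a module respects the intended layering. Here is a minimal Python sketch; the layer names and the dependency rule are invented for illustration:

```python
import ast

# Hypothetical rule: 'domain' code must never import from 'ui' or 'infrastructure'.
FORBIDDEN = {"domain": {"ui", "infrastructure"}}

def imported_modules(source: str) -> set[str]:
    """Collect the top-level module names imported by a Python source file."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def fitness_check(layer: str, source: str) -> list[str]:
    """Return violations: imports from layers this layer must not depend on."""
    banned = FORBIDDEN.get(layer, set())
    return sorted(imported_modules(source) & banned)

# A 'domain' module that sneaks in a UI dependency fails the check.
violations = fitness_check("domain", "import ui\nimport json\n")
```

Run in CI, a check like this keeps code aligned with the architecture automatically, which is exactly the role the excerpt assigns to fitness functions.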


Innovative Approaches To Addressing The Cybersecurity Skills Gap

In a talent-constrained world, forward-leaning organizations aren’t hiring more analysts—they’re deploying agentic AI to generate continuous, cryptographic proof that controls worked when it mattered. This defensible automation reduces breach impact, insurer friction and boardroom risk—no headcount required. ... Create an architecture and engineering review board (AERB) that all current and future technical designs are required to flow through. Make sure the AERB comprises a small group of your best engineers, developers, network engineers and security experts. The group should meet multiple times a year, and all technical staff should be required to rotate through to listen and contribute to the AERB. ... Build security into product design instead of adding it in afterward. Embed industry best practices through predefined controls and policy templates that enforce protection automatically—then partner with trusted experts who can extend that foundation with deep, domain-specific insight. Together, these strategies turn scarce talent into amplified capability. ... Rather than chasing scarce talent, companies should focus on visibility and context. Most breaches stem from unknown identities and unchecked access, not zero days. By strengthening identity governance and access intelligence, organizations can multiply the impact of small security teams, turning knowledge, not headcount, into their greatest defense.


The Configurable Bank: Low‑Code, AI, and Personalization at Scale

What does the modern banking system look like today? The answer depends on where you stand. For customers, digital banking solutions need to be instant, invisible, and intuitive – a seamless tap, a scan, a click. For banks, it’s an ever-evolving race to keep pace with rising expectations. ... What was once a luxury – speed and dependability – has become the standard. Yet, behind the sleek mobile apps and fast payments, many banks are still anchored to quarterly release cycles and manual processes that slow innovation. To thrive in this landscape, banks don’t need to rip out their core systems. What they need is configurability – the ability to re-engineer services to be more agile, composable, and responsive. By making their systems configurable rather than fixed, banks can launch products faster, adapt policies in real time, and reduce the cost and complexity of change. ... The idea of the Configurable Bank is built on this shift – where technology, powered by low-code and AI, transforms banking into a living, adaptive platform. One that learns, evolves, and personalizes at scale – not by replacing the core, but by reimagining how it connects with everything around it. ... This is not just a technology shift; it’s a strategic one. With low-code, innovation is no longer the privilege of IT alone. Business teams, product leaders, and even customer-facing units can now shape and deploy digital experiences in near real time. 


Deepfake crisis gets dire prompting new investment, calls for regulation

Kevin Tian, Doppel’s CEO, says that organizations are not prepared for the flood of AI-generated deception coming at them. “Over the past few months, what’s gotten significantly better is the ability to do real-time, synchronous deepfake conversations in an intelligent manner. I can chat with my own deepfake in real-time. It’s not scripted, it’s dynamic.” Tian tells Fortune that Doppel’s mission is not to stamp out deepfakes, but “to stop social engineering attacks, and the malicious use of deepfakes, traditional impersonations, copycatting, fraud, phishing – you name it.” The firm says its R&D team has “just scratched the surface” of innovations it plans to bring to existing and upcoming products, notably in social engineering defense (SED). The Series C funds will “be used to invest in the core Doppel gang to meet the exponential surge in demand.” ... Advocating for “laws that prioritize human dignity and protect democracy,” the piece points to the EU’s AI Act and Digital Services Act as models, and specifically to new copyright legislation in Denmark, which bans the creation of deepfakes without a subject’s consent. In the authors’ words, Denmark’s law would “legally enshrine the principle that you own you.” ... “The rise of deepfake technology has shown that voluntary policies have failed; companies will not police themselves until it becomes too expensive not to do so,” says the piece.


The what, why and how of agentic AI for supply chain management

To be sure, software and automation are nothing new in the supply chain space. Businesses have long used digital tools to help track inventories, manage fleet schedules and so on as a way of boosting efficiency and scalability. Agentic AI, however, goes further than traditional SCM software tools, offering capabilities that conventional systems lack. For instance, because agents are guided by AI models, they are capable of identifying novel solutions to challenges they encounter. Traditional SCM tools can’t do this because they rely on pre-scripted options and don’t know what to do when they encounter a scenario no one envisioned beforehand. AI can also automate multiple, interdependent SCM processes, as I mentioned above. Traditional SCM tools don’t usually do this; they tend to focus on singular tasks that, although they may involve multiple steps, are challenging to automate fully because conventional tools can’t reason their way through unforeseen variables in the way AI agents do. ... Deploying agents directly into production is enormously risky because it can be challenging to predict what they’ll do. Instead, begin with a proof of concept and use it to validate agent features and reliability. Don’t let agents touch production systems until you’re deeply confident in their abilities. ... For high-stakes or particularly complex workflows, it’s often wise to keep a human in the loop.
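The human-in-the-loop advice above can be made concrete as a small dispatch gate: low-stakes agent actions execute automatically, while anything above a risk threshold waits for human sign-off. The threshold, the action shape, and the approval callback here are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    cost: float  # estimated financial impact of executing the action

APPROVAL_THRESHOLD = 10_000.0  # hypothetical cutoff for mandatory human review

def dispatch(action: AgentAction, approve) -> str:
    """Auto-execute low-stakes actions; route high-stakes ones to a human."""
    if action.cost < APPROVAL_THRESHOLD:
        return "executed"
    return "executed" if approve(action) else "escalated"

# A small reorder runs unattended; a large rebooking needs sign-off.
auto = dispatch(AgentAction("reorder 200 units", 1_500.0), approve=lambda a: False)
gated = dispatch(AgentAction("rebook entire fleet", 250_000.0), approve=lambda a: False)
```

The same gate pattern works in a proof-of-concept sandbox first, which matches the excerpt's advice to keep agents away from production until their behavior has been validated.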


How AI can magnify your tech debt - and 4 ways to avoid that trap

The survey, conducted in September, involved 123 executives and managers from large companies. There are high hopes that AI will help cut into and clear up technical debt, along with reducing costs. At least 80% expect productivity gains, and 55% anticipate AI will help reduce technical debt. However, the large segment expecting AI to increase technical debt reflects "real anxiety about security, legacy integration, and black-box behavior as AI scales across the stack," the researchers indicated. Top concerns include security vulnerabilities (59%), legacy integration complexity (50%), and loss of visibility (42%). ... "Technical debt exists at many different levels of the technology stack," Gary Hoberman, CEO of Unqork, told ZDNET. "You can have the best 10X engineer or the best AI model writing the most beautiful, efficient code ever seen, but that code could still be running on runtimes that are themselves filled with technical debt and security issues. Or they may also be relying on open-source libraries that are no longer supported." ... AI presents a new raft of problems to the tech debt challenge. The rising use of AI-assisted code risks "unintended consequences, such as runaway maintenance costs and increasing tech debt," Hoberman continued. IT is already overwhelmed with current system maintenance.


The State and Current Viability of Real-Time Analytics

Data managers now prefer real-time analytical capabilities built within their applications and systems, rather than a separate, standalone, or bolted-on project. Interest in real-time analytics as a standalone effort has dropped from 50% to 32% during the past 2 years, a recent survey of 259 data managers conducted by Unisphere Research finds ... So, the question becomes: Are real-time analytics ubiquitous to the point where they are automatically integrated into any and all applications? By now, the use of real-time analytics should be a “standard operating requirement” for customer experience, said Srini Srinivasan, founder and CTO at Aerospike. This is where the rubber meets the road—where “the majority of the advances in real-time applications have been made in consumer-oriented enterprises,” he added. Along these lines, the most prominent use cases for real-time analytics include “risk analysis, fraud detection, recommendation engines, user-based dynamic pricing, dynamic billing and charging, and customer 360,” Srinivasan continued. “For over a decade, these systems have been using AI and machine learning [ML], inferencing for improving the quality of real-time decisions to improve customer experience at scale. The goal is to ensure that the first customer and the hundred-millionth customer have the same vitality of customer experience.” ... “Within industries such as energy, life sciences, and chemicals, the next decade of real-time analytics will be driven by more autonomous operations,” said David Streit.


You Down with EDD? Making Sense of LLMs Through Evaluations

We're facing a major infrastructure maturity gap in AI development — the same gap the software world faced decades ago when applications grew too complex for informal testing and crossed fingers. Shipping fast with user feedback works early on, but when done at scale with rising stakes, "vibes" break down and developers demand structure, predictability, and confidence in their deployments. ... AI engineering teams are turning to an emerging solution: evaluation-driven development (EDD), the probabilistic cousin to TDD. An evaluation looks similar to a traditional software test. You have an assertion, a response, and pass-fail criteria, but instead of asking "Does this function return 42?" you're asking "Does this legal AI application correctly flag the three highest-risk clauses in this nightmare of a merger agreement?" Our trust in AI systems comes from our trust in the evaluations themselves, and if you never see an evaluation fail, you're not testing the right behaviors. The practice of EDD is about running these evaluations repeatedly as the system evolves. ... The technology for EDD is ready. Modern AI platforms provide solid evaluation frameworks that integrate with existing development workflows, but the challenge facing wide adoption is cultural. Teams need to embrace the discipline of writing evaluations before changing systems, just like they learned to write tests before shipping code. It requires a mindset shift from "move fast and break things," to "move deliberately and measure everything."
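Stripped to its essentials, an evaluation looks like the sketch below: a stub stands in for the model call, and the pass-fail criterion is an assertion over its output. The clause-flagging scenario and function names are invented for illustration:

```python
def risky_clause_finder(contract: str) -> list[str]:
    """Stand-in for an LLM call; a real system would query a model here."""
    return [line for line in contract.splitlines() if "indemnif" in line.lower()]

def evaluate(response: list[str], must_flag: set[str]) -> bool:
    """Pass only if every known high-risk clause was flagged."""
    return must_flag.issubset(set(response))

contract = "Clause 1: payment terms\nClause 2: unlimited indemnification"
result = risky_clause_finder(contract)
passed = evaluate(result, {"Clause 2: unlimited indemnification"})

# An evaluation you never see fail isn't testing the right behaviors:
failing = evaluate([], {"Clause 2: unlimited indemnification"})
```

The structure mirrors a unit test, but because model outputs are probabilistic, such evaluations are typically run repeatedly and scored as pass rates rather than as a single binary verdict.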

Daily Tech Digest - November 20, 2025


Quote for the day:

"Choose your heroes very carefully and then emulate them. You will never be perfect, but you can always be better." -- Warren Buffett



A developer’s guide to avoiding the brambles

Protect against the impossible, because it just might happen. Code has a way of surprising you, and it definitely changes. Right now you might think there is no way that a given integer variable would be less than zero, but you have no idea what some crazed future developer might do. Go ahead and guard against the impossible, and you’ll never have to worry about it becoming possible. ... If you’re ever tempted to reuse a variable within a routine for something completely different, don’t do it. Just declare another variable. If you’re ever tempted to have a function do two things depending on a “flag” that you passed in as a parameter, write two different functions. If you have a switch statement that is going to pick from five different queries for a class to execute, write a class for each query and use a factory to produce the right class for the job. ... Ruthlessly root out the smallest of mistakes. I follow this rule religiously when I code. I don’t allow typos in comments. I don’t allow myself even the smallest of formatting inconsistencies. I remove any unused variables. I don’t allow commented code to remain in the code base. If your language of choice is case-insensitive, refuse to allow inconsistent casing in your code. ... Implicitness increases cognitive load. When code does things implicitly, the developer has to stop and guess what the compiler is going to do. Default variables, hidden conversions, and hidden side effects all make code hard to reason about.
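The switch-statement advice above can be sketched as a registry-based factory: one class per query, and a lookup that fails loudly for the "impossible" unknown kind. The class and query names here are illustrative:

```python
from abc import ABC, abstractmethod

class Query(ABC):
    @abstractmethod
    def sql(self) -> str: ...

class ActiveUsersQuery(Query):
    def sql(self) -> str:
        return "SELECT * FROM users WHERE active = 1"

class LapsedUsersQuery(Query):
    def sql(self) -> str:
        return "SELECT * FROM users WHERE last_login < :cutoff"

# Registry replaces the switch: adding a query means adding a class and one entry.
QUERIES: dict[str, type[Query]] = {
    "active": ActiveUsersQuery,
    "lapsed": LapsedUsersQuery,
}

def query_factory(kind: str) -> Query:
    """Return the right query object; guard against the 'impossible' unknown kind."""
    try:
        return QUERIES[kind]()
    except KeyError:
        raise ValueError(f"unknown query kind: {kind!r}")

q = query_factory("active")
```

Each class does exactly one thing, nothing is reused for two purposes, and the explicit error on an unknown kind is the "guard against the impossible" the excerpt recommends.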


SaaS Rolls Forward, Not Backward: Strategies to Prevent Data Loss and Downtime

The SaaS provider owns infrastructure-level redundancy and backups to maintain operational continuity during regional outages or major disruptions. InfoSec and SaaS teams are no longer responsible for infrastructure resilience. Instead, they are responsible for backing up and recovering data and files stored in their SaaS instances. This is significant for two primary reasons. First, the RTO and RPO for SaaS data become dependent on the vendor's capabilities, which are not within the control of the customer. ... A common misconception, even among mature InfoSec teams, is the assumption that SaaS data protection is fully managed by the vendor. This “set it and forget it” mindset, while understandable given the cloud promise, overlooks the need for organizations to back up their SaaS data. Common causes of data loss and corruption are human errors within the customer’s SaaS instance, including accidental deletion, integration issues, and migration mishaps which fall under the customer’s responsibility. ... InfoSec and SaaS teams must combine their knowledge and experience to ensure that backups contain all necessary data, as well as metadata, which provides the necessary context, and can be restored reliably. SaaS administrators can prevent users from logging in, disable automations, block upstream data from being sent, or restrict data from being sent to downstream systems as needed.


EU publishes Digital Omnibus leaving AI Act future uncertain

The European Commission unveiled amendments on Wednesday designed to simplify its digital regulatory framework, including the AI Act and data privacy rules, in a bid to boost innovation. The Digital Omnibus package introduces several measures, including delaying the stricter regulation of ‘high-risk’ AI applications until late 2027 and allowing companies to use sensitive data, such as biometrics, for AI training under certain conditions. ... The Digital Omnibus also attempts to adapt rules within privacy regulation, such as the General Data Protection Regulation (GDPR), the e-Privacy Directive and the Data Act. The Commission plans to clarify when data stops being “personal.” This could open the doors for tech companies to include anonymous information from EU citizens into large datasets for training AI, even when they contain sensitive information such as biometric data, as long as they make reasonable efforts to remove it. ... EU member states have also called for postponing the rollout of the AI Act altogether, citing difficulties in defining related technical standards and the need for Europe to stay competitive in the global technological race. “Europe has not so far reaped the full benefits of the digital revolution,” says European economy commissioner Valdis Dombrovskis. “And we cannot afford to pay the price for failing to keep up with demands of the changing world.”


Building Distributed Event-Driven Architectures Across Multi-Cloud Boundaries

The elegant simplicity of "fire an event and forget" becomes a complex orchestration of latency optimization, failure recovery, and data consistency across provider boundaries. Yet, when done right, multi-cloud event-driven architectures offer unprecedented resilience, performance, and business agility. ... Multi-cloud latency isn't just about network speed, it's about the compound effect of architectural decisions across cloud boundaries. Consider a transaction that needs to traverse from on-premise to AWS for risk assessment, then to Azure for analytics processing, and back to on-premise for core banking updates. Each hop introduces latency, but the cumulative effect can transform a sub-100 ms transaction into a multi-second operation. ... Here is an uncomfortable truth: Most resilience strategies focus on the wrong problem. As engineers, we typically put our efforts into handling failures that occur during an outage or when a service component is down. Equally important is how you recover from those failures after the outage is over. This approach to recovery creates systems that "fail fast" but "recover never". ... The combination of event stores, resilient policies, and systematic event replay capabilities creates a distributed system that not only survives failures, but also recovers automatically, which is a critical requirement for multi-cloud architectures. ... While duplicate risk processing merely wastes resources, duplicate financial transactions create regulatory nightmares and audit failures.
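The duplicate-transaction concern above usually comes down to idempotent consumers: a replayed event must not apply a payment twice. A minimal sketch, assuming an in-memory set stands in for what would be a durable deduplication store in production:

```python
processed: set[str] = set()   # in production: a durable, keyed store
ledger: list[float] = []      # stand-in for the downstream financial effect

def handle_payment(event_id: str, amount: float) -> bool:
    """Apply a payment exactly once, even if the event is delivered again."""
    if event_id in processed:
        return False  # duplicate delivery: safe to ignore
    processed.add(event_id)
    ledger.append(amount)
    return True

first = handle_payment("evt-42", 100.0)
replay = handle_payment("evt-42", 100.0)  # redelivered after an outage or replay
```

With handlers built this way, systematic event replay after an outage becomes safe: the replayed stream reconstructs state without double-applying the financial transactions the excerpt warns about.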


For AI to succeed in the SOC, CISOs need to remove legacy walls now

"The legacy SOC, as we know it, can't compete. It's turned into a modern-day firefighter," warned CrowdStrike CEO George Kurtz during his keynote at Fal.Con 2025. "The world is entering an arms race for AI superiority as adversaries weaponize AI to accelerate attacks. In the AI era, security comes down to three things: the quality of your data, the speed of your response, and the precision of your enforcement." Enterprise SOCs average 83 security tools across 29 different vendors, each generating isolated data streams that defy easy integration to the latest generation of AI systems. System fragmentation and lack of integration represent AI's greatest vulnerability, and organizations' most fixable problem. The mathematics of tool sprawl proves devastating. Organizations deploying AI across fragmented toolsets report significantly elevated false-positive rates. ... Getting governance right is one of a CISO's most formidable challenges and often includes removing longstanding roadblocks to make sure their organization can connect and make contributions across the business. ... A CISO's transformation from security gatekeeper to business enabler and strategist is the single best step any security professional can take in their career. CISOs often remark in interviews that the transition from being an app and data disciplinarian to an enabler of new growth with the ultimate goal of showing how their teams help drive revenue was the catalyst their careers needed.


Selling to the CISO: An open letter to the cybersecurity industry

Vendors think they’re selling technology. They’re not. They’re trying to sell confidence to people whose jobs depend on managing the impossible. As a CISO, I buy because I’m trying to reduce the odds that something catastrophic happens on my watch. Every decision is a gamble. There is no “safe” option in this field. I buy to reduce personal and organizational risk, knowing there’s no such thing as perfect protection. Cybersecurity is not a puzzle you solve. It’s a game you play — and it never ends. You make the best moves you can, knowing you’ll never win. Even if I somehow patched every system and closed every gap, the cost of perfection would cripple the company. ... The truth is that most organizations don’t need more tools. They need to get the fundamentals right. If you can patch consistently, maintain good access controls, and segment your networks so you aren’t running flat, you’re ahead of most of the market — no shiny tools required. Strong patching alone will eliminate most of the attack surface that vendors keep promising to “detect.” ... We can’t blame vendors alone. We created the market they’re serving. We bought into the illusion that innovation equals progress. We ignored the fundamentals because they’re hard and unglamorous. We filled our environments with products we couldn’t fully use and called it maturity. We built complexity and called it strategy. Then we act shocked when the same root causes keep taking us down. Good security still starts with good IT. Always has. Always will. If you don’t know what you own, you can’t protect it.


When IT fails, OT pays the price

Criminal groups are now demonstrating a better understanding of industrial dependencies. The Qilin group carried out 63 confirmed attacks against industrial entities since mid-2024 and has focused on energy distribution and water utilities. Their use of Windows and Linux payloads gives them wider reach inside mixed environments. Several incidents involved encryption of shared engineering resources and historian systems, which caused operational delays even when controllers remained untouched. ... Across intrusions, attackers favored techniques that exploit weak segmentation. PowerShell activity made up the largest share of detections, followed by Cobalt Strike. The findings show that adversaries rarely need ICS specific exploits at the start of an attack. They rely on stolen accounts, remote access tools, and administrative shares to move toward engineering assets. ... The vulnerability data reinforces the emphasis on the boundary between enterprise systems and industrial systems. Attackers continue to exploit Cisco ASA and FTD devices, in some cases modifying device firmware. Several critical flaws in SAP NetWeaver and other manufacturing operations software were also exploited, which created direct pivot points into factory workflows. Recent disclosures affecting Rockwell ControlLogix and GuardLogix platforms allow remote code execution or force the controller into a failed state. Attacks on these devices pose immediate availability and safety risks. 


India has the building blocks to influence global standards in AI infrastructure

The convergence of cloud, edge, and connectivity represents the foundation of India’s next AI leap. In a country as geographically and economically diverse as India, AI workloads can’t depend solely on centralized cloud resources. Edge computing allows us to bring compute closer to the source of data, be it in a factory, retail store, or farm, which reduces latency, lowers costs, and enhances privacy. Cloud provides elasticity and scalability, while secure connectivity ensures that both environments communicate seamlessly. This triad enables an AI model to be trained in the cloud, refined at the edge, and deployed securely across networks, unlocking innovation in every geography. We have been building this connected fabric to ensure that access to compute and intelligence isn’t limited by location or scale. ... We see this evolution already unfolding. AI-as-a-Service will thrive when infrastructure, connectivity, and platforms converge under a single, interoperable framework. Each stakeholder – telecoms, data centres, and hyperscalers – brings unique value: scale, proximity, and reach. ... India is already shaping global conversations around digital equity and secure connectivity, and the same potential exists in AI infrastructure. In the next five years, India could stand out not for the size of its compute capacity but for how effectively it builds an inclusive digital foundation, one that blends cloud, edge, data governance, and innovation seamlessly.


How to Overcome Latency in Your Cyber Career

The presence of latency is not an indictment of your ability. It's a signal that something in your system needs attention. Identifying what creates latency in your professional life and learning how to address it are essential components of long-term growth. With a diagnostic mindset and a willingness to optimize, you can restore throughput and move forward with purpose. ... Career latency often appears when your knowledge no longer reflects current industry expectations. Even highly capable professionals experience slowdown when their technical foundation lags behind evolving practices. ... Unclear goals create misalignment between where you invest your time and where you want to progress. Without a defined direction, you may be working hard but not moving in a way that supports advancement. ... Professionals often operate under heavy workloads that dilute productivity. Too many competing responsibilities, constant context switching or tasks disconnected from your goals can limit your effectiveness and delay growth. ... Career progress can slow when your professional network lacks the signal strength needed to route opportunities in your direction. Without mentorship, community or visibility, growth becomes harder to sustain. ... Missed opportunities often stem from limited readiness. Preparation, bandwidth or timing may be misaligned, and promising chances can disappear before you can act.


Why IT-SecOps Convergence is Non-Negotiable

The message is clear: siloed operations are no longer just inefficient—they’re a security liability. ... The first, and often the most difficult step toward achieving true IT-SecOps convergence, is cultural. For years, IT and security teams have operated in silos, essentially functioning as two different businesses. ... On paper, these Key Performance Indicators (KPIs) appear aligned—both measure speed and efficiency. But in practice, they reflect different views: one is laser-focused on minimizing risk, the other on maximizing uptime. ... The real opportunity lies in establishing a shared mandate. Both teams need to understand that their goals are two sides of the same coin: you can’t have productive systems that aren’t secure, and security that breaks the system isn’t sustainable; therefore, convergence begins not with tools, but with alignment of intent. Once this clicks, both teams begin working from a common set of goals, shared KPIs, and joint decision frameworks. ... The strongest security posture doesn’t come from piling on more tools. It comes from creating continuous alignment between management, security, and user experience. When those three functions operate in sync, IT doesn’t deploy technology that security can’t enforce, security doesn’t introduce controls that slow down work, and users don’t feel the need to bypass policies with shadow apps or risky shortcuts. ... When a unified structure is implemented, policies can be deployed instantly, validated automatically, and adjusted based on real user impact—all without waiting for separate teams to sync.

Daily Tech Digest - November 19, 2025


Quote for the day:

"You are not a team because you work together. You are a team because you trust, respect and care for each other." -- Vala Afshar



How to automate the testing of AI agents

Experts view testing AI agents as a strategic risk management function that encompasses architecture, development, offline testing, and observability for online production agents. ... “Testing agentic AI is no longer QA, it is enterprise risk management, and leaders are building digital twins to stress test agents against messy realities: bad data, adversarial inputs, and edge cases,” says Srikumar Ramanathan ... “Agentic systems are non-deterministic and can’t be trusted with traditional QA alone; enterprises need tools that trace reasoning, evaluate judgment, test resilience, and ensure adaptability over time,” says Nikolaos Vasiloglou ... Part of the implementation strategy will require integrating feedback from production back into development and test environments. Although testing AI agents should be automated, QA engineers will need to develop workflows that include reviews from subject matter experts and feedback from other end users. “Hierarchical scenario-based testing, sandboxed environments, and integrated regression suites—built with cross-team collaboration—form the core approach for test strategy,” says Chris Li ... Mike Finley says, “One key way to automate testing of agentic AI is to use verifiers, which are AI supervisor agents whose job is to watch the work of others and ensure that they fall in line. Beyond accuracy, they’re also looking for subtle things like tone and other cues. If we want these agents to do human work, we have to watch them like we would human workers.”
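Finley's verifier idea can be sketched as a supervisor function that checks a worker agent's output for both relevance and tone before it ships. The worker stub and the specific checks below are invented for illustration; a real verifier would itself be a model rather than keyword rules:

```python
def worker_agent(task: str) -> str:
    """Stand-in for an LLM worker; a real system would call a model here."""
    return f"Resolved: {task}. Please contact support if issues persist."

def verifier(task: str, answer: str) -> list[str]:
    """Supervisor check: did the answer address the task, and is the tone civil?"""
    issues = []
    if task.split()[0].lower() not in answer.lower():
        issues.append("answer may not address the task")
    for banned in ("stupid", "obviously"):
        if banned in answer.lower():
            issues.append(f"tone flag: {banned!r}")
    return issues

answer = worker_agent("refund request #311")
problems = verifier("refund request #311", answer)  # clean output: no issues

bad = verifier("billing dispute", "That is obviously a stupid question.")
```

The verifier sits between the worker and the outside world, so a flagged response can be retried or escalated instead of delivered, which is the "watch them like human workers" pattern the quote describes.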


AI For Proactive Risk Governance In Today’s Uncertain Landscape

Emerging risks are no longer confined to familiar categories like credit or operational performance. Instead, leaders are contending with a complex web of financial, regulatory, technological and reputational pressures that are interconnected and fast-moving. This shift has made it harder for executives to anticipate vulnerabilities and act before risks escalate into real business impact. ... The sheer volume of evolving requirements can overwhelm compliance teams, increasing the risk of oversight gaps, missed deadlines or inconsistent reporting. For many organizations, the challenge is not simply keeping up but proving to regulators and stakeholders that governance practices are both proactive and defensible. ... As businesses evaluate their options to get ahead of risk, AI is top of the list. But not all AI is created equal, and paradoxically, some approaches may introduce added risk. General-purpose large language models can be powerful tools for information synthesis, but they are not designed to deliver the accuracy, transparency and auditability required for high-stakes enterprise decisions. Their probabilistic nature means outputs can at times be incomplete or inaccurate. ... Every AI output must be explainable, traceable and auditable. Executives need to understand the reasoning behind the recommendations they present to boards, regulators or shareholders. Defensible AI ensures that decisions can withstand scrutiny, fostering both compliance and trust between human and machine.


Navigating India's Data Landscape: Essential Compliance Requirements under the DPDP Act

The Digital Personal Data Protection Act, 2023 (DPDP Act) marks a pivotal shift in how digital personal data is managed in India, establishing a framework that simultaneously recognizes the individual's right to protect their personal data and the necessity for processing such data for lawful purposes. For any organization—defined broadly to include individuals, companies, firms, and the State—that determines the purpose and means of processing personal data (a "Data Fiduciary" or DF), compliance with the DPDP Act requires strict adherence to several core principles and newly defined rules. Compliance with the DPDP Act is like designing a secure building: it requires strong foundational principles, robust security systems, specific safety features for vulnerable occupants (Child Data rules), specialized certifications for large structures, and a clear plan for Data Erasure. Organizations must begin planning now, as the core operational rules governing notice, security, child data, and retention come into force eighteen months after the publication date of the DPDP Rules in November 2025. ... DFs must implement appropriate technical and organizational measures. These safeguards must include techniques like encryption, obfuscation, masking, or the use of virtual tokens, along with controlled access to computer resources and measures for continued processing in case of compromise, such as data backups.
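The safeguards the rules name, masking and virtual tokens among them, can be illustrated with a short sketch. The helper names and key handling here are hypothetical: a real Data Fiduciary would keep the key in a key-management service and take legal guidance on which technique satisfies the rules for each data category.

```python
import hmac
import hashlib

SECRET_KEY = b"rotate-me"  # hypothetical; store and rotate via a KMS in practice

def tokenize(value: str) -> str:
    """Replace a personal identifier with a stable virtual token (HMAC-based,
    so the same input always maps to the same token without storing a lookup)."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:16]

def mask_email(email: str) -> str:
    """Mask the local part of an email, keeping one character and the domain."""
    local, _, domain = email.partition("@")
    return local[:1] + "***@" + domain
```

Tokenization preserves joinability across systems (the same person yields the same token), while masking is for display contexts where no re-identification is needed.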


Doomed enterprise AI projects usually lack vision

CIOs and other IT decision-makers are under pressure from boards and CEOs who want their companies to be “AI-first” operations; that pressure runs the risk of rushing execution and picking the wrong projects, said Steven Dickens, principal analyst at Hyperframe Research. Smart leaders are cautious, pragmatic, and focused on validated value, not jumping the gun on mission-critical processes. “They are ring-fencing pilot projects to low-risk, high-impact areas like internal code generation or customer service triage,” Dickens said. ... In this experimental period, organizations viewing AI as a way to reimagine business will take an early lead, Tara Balakrishnan, associate partner at McKinsey, said in the study. “While many see leading indicators from efficiency gains, focusing only on cost can limit AI’s impact,” Balakrishnan wrote. Scalability, project costs, and talent availability also play key roles in moving proof-of-concept projects to production. AI tools are not just plug and play, said Jinsook Han, chief strategy and agentic AI officer at Genpact. While companies can experiment with flashy demos and proofs of concept, the technology also needs to be usable and relevant, Han said. ... Many AI projects fail because they are built atop legacy IT systems, Han said, adding that modifying a company’s technology stack, workflows, and processes will maximize what AI can do. Humans also still need to oversee AI projects and outcomes — especially when agentic AI is involved, Han said.


GenAI vs Agentic AI: From creation to action — What enterprises need to know

Generative AI and Agentic AI are two separate – but often interrelated – paradigms. Generative AI excels in authoring or creating content from prompts, while Agentic AI involves taking autonomous actions to achieve objectives in complex workflows that involve multiple steps. ... Agentic AI is the next step in the advance of data science – from construction to self-execution. Agentic systems act as intelligent digital workers capable of managing a vast array of complex multi-step workflows. In banking and financial services, Agentic AI enables autonomous function for trading and portfolio management. Given a strategic objective like “maximize return within an acceptable risk parameter,” it can perform autonomously by monitoring market signals, executing traders’ decisions by rebalancing assets and adjusting portfolios, all in real-time. ... The difference between Generative AI and Agentic AI is starting to fade. We are heading toward a future in which generative models serve as the “thinking engine” of agentic systems. It will not be Generative AI versus Agentic AI. Intelligent systems will reason, create and act across business ecosystems. For this to happen, there will be a need for interoperable systems and common standards. Frameworks such as the Model Context Protocol (MCP) and metadata standards like AgentFacts are already laying the groundwork for a transparent, plug-and-play agent ecosystem that provides trust, transparency, and safe collaboration for agents between platforms.
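The rebalancing behavior described above reduces to a sense-decide-act loop. This is a toy sketch under assumed names (`observe`, `act`, fixed target weights), not a trading system; a real agent would express the "maximize return within risk" objective as constraints rather than static targets.

```python
def rebalance(weights: dict, targets: dict, tolerance: float = 0.05) -> dict:
    """Trade deltas for any asset that has drifted past the tolerance band."""
    return {asset: round(targets[asset] - w, 4)
            for asset, w in weights.items()
            if abs(targets[asset] - w) > tolerance}

def agent_step(observe, act, targets):
    """One cycle of the agent loop: read market state, decide, execute if needed."""
    trades = rebalance(observe(), targets)
    if trades:
        act(trades)   # e.g., submit orders through a broker API
    return trades
```

The point of the loop structure is that the decision policy (`rebalance`) is separated from perception and execution, which is also where a semantic layer or guardrail would be inserted.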


Pushing the thermal envelope

“When new data centers are designed today, instead of relying solely on the grid, they are integrating on-site power stations with their facilities. These on-site generators function like traditional power stations, and as heat engines, they produce substantial byproduct heat,” Hannah explains. This high-grade, abundant heat opens new possibilities. Technologies such as absorption chillers, historically underutilized in data centers due to insufficient heat, can now be deployed effectively when coupled with BYOP systems. This flexibility extends to operational optimization as well. ... The digital twin methodology allows engineers to create theoretical models of systems to simulate responses and tune control algorithms accordingly. Operational or production-based digital twins extend this approach by using field and system data to continuously improve model accuracy over time. ... The thermal chain and power train now operate less as separate systems and more as partners in a shared ecosystem, each dependent on the other for optimal performance. This growing synergy extends beyond technology, driving closer collaboration between traditionally separate teams across design, engineering, manufacturing, and operations. “The growth is so incredible that customers are looking for products and systems they can deploy quickly – solutions that are easy to install, reliable, densified, cost-effective, and efficient,” says Hannah. “Right now, speed of deployment is the priority.”
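The production-based digital twin idea, using field data to continuously improve model accuracy, can be shown with a deliberately tiny example: a one-coefficient thermal model nudged toward measured outlet temperatures. The linear model form and the learning rate are assumptions for illustration, not from the article.

```python
def predict_outlet_temp(inlet_temp, load_kw, coeff):
    """Toy twin: outlet temperature rises linearly with IT load."""
    return inlet_temp + coeff * load_kw

def calibrate(coeff, inlet_temp, load_kw, measured_outlet, lr=0.1):
    """Nudge the model coefficient so predictions track field measurements."""
    error = measured_outlet - predict_outlet_temp(inlet_temp, load_kw, coeff)
    return coeff + lr * error / load_kw

# Each pass consumes one field reading; the twin converges on observed behavior.
coeff = 0.01
for _ in range(60):
    coeff = calibrate(coeff, inlet_temp=20.0, load_kw=100.0,
                      measured_outlet=22.0)
```

Real operational twins do the same thing at far higher dimensionality, which is why they can be used both to tune control algorithms offline and to track drift online.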


Cloud Services Face Scrutiny Under the Digital Markets Act

Today, European authorities announced three new market investigations into cloud-computing services under the Digital Markets Act (DMA), as EU leaders gather in Berlin for the Summit on European Digital Sovereignty — an event billed as a push for an “independent, secure and innovation-friendly digital future for Europe.” Two investigations will assess whether Amazon Web Services (AWS) and Microsoft’s Azure should be designated as gatekeepers, despite apparently “not meeting the DMA gatekeeper thresholds for size, user number and market position.” A third investigation is to assess if the DMA is best placed to “effectively tackle practices that may limit competitiveness and fairness in the cloud computing sector in the EU.” ... Europe is increasingly concerned about data security and sovereignty, spurred in part by the Trump administration’s ongoing hostility to the EU and the powers granted by the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), which allows US law enforcement to obtain data stored abroad, even data concerning non-US citizens. Fears of a potential “kill switch” have pushed digital sovereignty up the EU agenda, with some member states switching away from the biggest cloud providers and adopting European alternatives. However, to switch away from US providers at scale may require competition law enforcement and regulation. The European Commission has passed the Data Act, which requires cloud providers to eliminate switching charges by 2027 and bans “technical, contractual and organisational obstacles” to switching to another provider.


IBM readies commercially valuable quantum computer technology

According to Chong, Loon puts a separate layer on the chip, going three-dimensional, allowing connections between qubits that aren’t immediate neighbors. Even separate chips, the ones contained in the boxes at the base of those giant cryogenic chandelier-shaped refrigerators, can be linked together, says IBM’s Crowder. In fact, that’s already possible with Nighthawk. “You can think of it as wires going between the boxes at the bottom,” Crowder says. “Nighthawk is designed to be able to do that, and it’ll also be used to connect the fault-tolerant modules in the large-scale fault-tolerant system as well.” “That is a big announcement for the industry,” says IDC analyst Heather West. “Now we’re seeing ways to actually begin scaling these systems without squeezing thousands or hundreds of thousands of qubits on a chip.” It’s a misperception that quantum computing isn’t beneficial and can’t be used today. Organizations should already be thinking about how they will use quantum computing, especially if they expect to be able to get a competitive edge from it, West says. “Waiting until the technology advances further could be detrimental because the learning curve that you need to be able to understand quantum and to program quantum algorithms is quite high,” West says. It’s difficult to develop these skills internally, and difficult to bring them into an organization. And then there’s the time it takes to develop use cases and figure out new workflows.


Why modular AI is emerging as the next enterprise architecture standard

LLMs are remarkable, but they are not inherently aligned with enterprise control frameworks. Without a way to govern the reasoning and retrieval pathways, organizations place themselves at risk of unpredictable outputs — and unpredictable headlines. ... The modular approach I explored is built on two ideas: small language models and retrieval-augmented generation. SLMs focus on specific domains rather than being trained to handle everything. Because they are compact and specialized, they can run on more common infrastructure and offer predictable performance. Instead of forcing one model to understand every topic in the enterprise, SLMs stay close to the context they are responsible for. ... Together, SLMs and RAG form a system where intelligence is both efficient and explainable. The model contributes language understanding, while retrieval ensures accuracy and alignment with business rules. It’s an approach that favors control and clarity over brute-force scale — exactly what large organizations need when AI decisions must be defended, not just delivered. ... At the heart of this approach is what I call a semantic layer: a coordination surface where AI agents reason only over the business context and data sources assigned to them. This layer defines three critical elements: What information an agent can access; How its decisions are validated; and When it should escalate or defer to humans. In this design, smaller language models are used where focus matters more than size. 
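The three elements of that semantic layer, what an agent can access, how its decisions are validated, and when it escalates to humans, map naturally onto a small policy check. The names and thresholds below are illustrative, not from the article.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class AgentPolicy:
    allowed_sources: Set[str]          # what information the agent can access
    validate: Callable[[str], bool]    # how its decisions are validated
    escalate_below: float              # when to defer to a human reviewer

def route_decision(policy: AgentPolicy, sources_used, decision: str,
                   confidence: float) -> str:
    """Gate an agent's decision through the semantic-layer policy, in order:
    access control first, then escalation, then business-rule validation."""
    if not set(sources_used) <= policy.allowed_sources:
        return "blocked: unauthorized data source"
    if confidence < policy.escalate_below:
        return "escalated: human review required"
    if not policy.validate(decision):
        return "rejected: failed business-rule validation"
    return "approved"
```

Because every outcome carries a stated reason, decisions routed this way are explainable and auditable by construction, which is the "defended, not just delivered" property the article asks for.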


The long conversations that reveal how scammers work

The slow cadence is what scammers use to build trust. The study shows how predictable that progression is when viewed at scale. Early messages tend to focus on small talk, harmless questions, light personal details, and daily routines. These early exchanges often contain subtle checks to see if the target is human. Some scammers ask directly. “By the way, there are a lot of fake people here, are you a real person” is one of the lines captured in the study. ... That distance between the greeting and the attempted cash out is the core challenge in studying long-game fraud. Scammers send photos of meals or walks, talk about family, and bring up current events to lay the groundwork for later requests. Scammers often sent images; audio and video were less common, but when used they tended to appear at moments when scammers wanted to strengthen the sense of presence. The researchers found that 20 percent of conversations included selfie requests, and more than half of those requests took place on WhatsApp. ... Long-haul scams do not rely on high urgency. They rely on comfort, familiarity, and patience. This is a different challenge than technical support scams or prize scams. Defenders need to detect slow moving risk signals before money leaves accounts. The study also shows the scale challenge. Manual research that covers weeks of dialog is difficult to sustain. The researchers address this by blending an LLM with a workflow that pulls in human reviewers at key points.
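Detecting slow-moving risk signals of this kind is essentially a cumulative scoring problem over a long message timeline. The cue list and threshold below are invented for illustration; the study's actual pipeline blends an LLM with human reviewers rather than keyword weights.

```python
RISK_CUES = {"selfie": 2, "whatsapp": 1, "invest": 3, "wire": 4, "crypto": 3}

def risk_timeline(messages, cues=RISK_CUES, threshold=5):
    """Cumulative risk score per message; flags the first point where the
    slow build crosses the review threshold, well before an explicit cash-out."""
    score, flagged_at, timeline = 0, None, []
    for i, msg in enumerate(messages):
        text = msg.lower()
        score += sum(weight for cue, weight in cues.items() if cue in text)
        timeline.append(score)
        if flagged_at is None and score >= threshold:
            flagged_at = i
    return timeline, flagged_at
```

The shape of the output matters more than the exact cues: individual messages look harmless, but the accumulated trajectory is what a reviewer (human or LLM) can act on.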