Daily Tech Digest - January 01, 2026


Quote for the day:

"It always seems impossible until it’s done." -- Nelson Mandela



Why data trust is the missing link in digital transformation

Data trust is often framed as a technical issue, delegated to IT or data teams. In reality, it is a business capability with direct implications for growth, risk, and reputation. Trusted data enables organisations to confidently automate customer and operational workflows; personalise experiences without introducing errors; improve forecasting and performance reporting; and reduce operational rework and exception handling. When data cannot be trusted, leaders are forced to rely on manual checks, conservative assumptions, and duplicated processes. This increases cost and slows decision-making - the opposite of what digital transformation aims to achieve. ... Establishing data trust is not a one-time project. It requires a shift in mindset across the organisation. Data quality should be viewed as a shared responsibility, supported by the right processes and tools. Leading organisations embed data validation into their digital workflows, measure data quality as part of system health, and treat trusted data as a strategic asset. Over time, this creates a culture where decisions are made with confidence and transformation initiatives are more likely to succeed. ... Digital transformation is ultimately about enabling better decisions, faster execution, and stronger customer relationships. None of these goals can be achieved without trusted data. As organisations continue to modernise their platforms and processes, data quality should be treated as core infrastructure, not an afterthought.


Health Data Privacy, Cyber Regs: What to Watch in 2026

When federal regulators hesitate, states often jump in to fill privacy and security gaps involving health data. That includes mandates in New York to shore up cybersecurity at certain hospitals (see: New York Hospitals Are Facing Tougher Cyber Rules Than HIPAA). Also worth watching is the New York Health Information Privacy Act, Greene said. "It was passed by both New York legislative chambers in January but has not yet been formally submitted to the governor for signature, with lobbying efforts underway to amend it." "In its most recent version, it would be the toughest health privacy law in the country in many respects, including a controversial prohibition on obtaining consents for secondary uses of data until at least 24 hours after an individual creates an account or first uses the requested product or service," Greene said. ... Greene predicted HIPAA resolution agreements and civil monetary penalties will continue much as they have in years past, with one to two dozen such cases next year. HHS has recently indicated that it intends to begin enforcing the Information Blocking Rule. "The primary target will be health IT developers," Greene said. "I expect that there are less information blocking issues with health information networks and believe that the statute and regulation's knowledge standard makes it more challenging to enforce against healthcare providers because the government must prove that a healthcare provider knew its practice to be unreasonable."


From integration pain to partnership gain: How collaboration strengthens cybersecurity

When collaborators leverage data in specific cybersecurity work, they unlock several valuable benefits, especially since no organization has complete insight into every possible threat. A shared, data-driven cybersecurity framework can offer both sides a better understanding of existing and emerging threats that could undermine one or both collaborators. Data-driven collaboration also enables partners to become more proactive in their cybersecurity posture. Coordinated data can give business partners insights into where there’s greater exposure for a cyberattack, allowing partners to work together with data-backed guidance on how to better prepare. ... The Vested model — an innovative approach based on research from the University of Tennessee — focuses on shared goals and outcomes rather than traditional transactional buyer and seller agreements. Both companies agreed on a specific set of KPIs they could use to measure the health of the partnership and keep their security goals on track, allowing them to continue to adapt cybersecurity initiatives as needs and threats evolve. “You have to build, maintain and exercise the right partnerships with business units and shared services across the enterprise so continuity plans identify the issue quickly, deploy appropriate mitigations, and ultimately restore client and business services as quickly as possible,” says Royce Curtin, IBM’s former VP of corporate security.


AI governance: A risk and audit perspective on responsible AI adoption

AI governance refers to the policies, procedures, and oversight mechanisms that guide how AI systems are developed, deployed, and monitored. It ensures that AI aligns with business objectives, complies with applicable laws, and operates in a way that is ethical and transparent. Regulatory scrutiny is increasing. The EU AI Act is setting a precedent for global standards, and U.S. agencies are signaling more aggressive enforcement, particularly in sectors like healthcare, finance, and employment. Organizations are expected to demonstrate accountability in how AI systems make decisions, manage data, and interact with users. Beyond regulation, there is growing pressure from customers, employees, and investors. ... Audit teams also help boards and audit committees understand the risks associated with AI. Their work supports transparency and builds trust with regulators and stakeholders. As AI becomes more embedded in business operations, internal audit must expand its scope to include model governance, data lineage, and ethical risk. ... Organizations that treat AI as a strategic risk are better positioned to scale it responsibly. Risk and internal audit teams have a central role in ensuring that AI systems are secure, compliant, and aligned with business goals. Citrin Cooperman helps organizations navigate AI adoption with confidence by combining deep risk expertise, practical governance frameworks, and advanced technology solutions that support secure, scalable, and compliant growth.


Six data shifts that will shape enterprise AI in 2026

While RAG won't entirely disappear in 2026, one approach that will likely surpass it in terms of usage for agentic AI is contextual memory, also known as agentic or long-context memory. This technology enables LLMs to store and access pertinent information over extended periods. Multiple such systems emerged over the course of 2025, including Hindsight, the A-MEM framework, General Agentic Memory (GAM), LangMem, and Memobase. RAG will remain useful for static data, but agentic memory is critical for adaptive assistants and agentic AI workflows that must learn from feedback, maintain state, and adapt over time. In 2026, contextual memory will no longer be a novel technique; it will become table stakes for many operational agentic AI deployments. ... In 2025, we saw numerous innovations, such as AI parsing data from unstructured sources like PDFs. That's a capability that has existed for several years but proved harder to operationalize at scale than many assumed. Databricks now has an advanced parser, and other vendors, including Mistral, have emerged with their own improvements. The same is true with natural language to SQL translation. While some might have assumed that was a solved problem, it's one that continued to see innovation in 2025 and will see more in 2026. It's critical for enterprises to stay vigilant in 2026.
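The idea behind agentic memory can be shown in miniature: the agent writes observations to a persistent store and later recalls the most relevant ones, rather than re-retrieving everything per request as plain RAG would. This is a hypothetical sketch using keyword overlap as a stand-in for real relevance scoring; the class and method names are invented, and the named systems (LangMem, Memobase, etc.) differ substantially in design.

```python
# Illustrative sketch of agentic memory: remember observations, recall the
# most relevant ones later. Keyword overlap stands in for real scoring.
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    text: str
    tags: set = field(default_factory=set)


class AgentMemory:
    """Tiny keyword-scored long-term memory for an agent (hypothetical)."""

    def __init__(self):
        self._entries: list[MemoryEntry] = []

    def remember(self, text: str) -> None:
        # Index each entry by its lowercased words for cheap recall.
        self._entries.append(MemoryEntry(text, set(text.lower().split())))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Score entries by word overlap with the query; return top-k matches.
        q = set(query.lower().split())
        scored = sorted(self._entries, key=lambda e: len(e.tags & q), reverse=True)
        return [e.text for e in scored[:k] if e.tags & q]
```

The point of the sketch is the persistence: a second session can call `recall` on state a first session stored, which is what lets an agent maintain context across long-running workflows.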


Communicating AI Risk to the Board With Confidence

Most board members can comprehend that AI will drive growth. What they fail to grasp concretely is how the technology introduces a massive amount of exposure. This predicament is typically a result of how information is presented. Security and risk managers (SRMs) often describe AI incidents in the vocabulary of adversarial inputs, model drift, and architecture choices, which matter deeply but rarely answer the questions that directors tackle during their meetings. High-level stakeholders, in reality, are concerned with issues such as revenue protection, operational continuity, and competitive differentiation, creating a gap that requires more than translating acronyms. ... Traditional discussions about technology risk revolve around the triad of confidentiality, integrity, and availability. Boards know these categories well, and over the past few decades, they have learned that cybersecurity failures directly affect the business along these lines. GenAI has formidably challenged this familiar structure, with its associated risks not limited to one of these three domains.  ... When the conversation begins with the business consequence, though, the relevance is immediate. The most effective approach involves replacing those mechanics that mean so much to the internal teams with the strategic information boards need to operate. These details open a path for meaningful conversations that encourage directors to think through the implications and make more informed decisions. 


The six biggest security challenges coming in 2026

For many organizations, cybersecurity and resilience are a compliance exercise. But they must evolve into “a core intentional cybersecurity capability”, says Dimitriadis. “In 2026, organizations will need to build the capacity to anticipate regulatory changes, understand their strategic implications, and embed them into long-term planning.” ... Attackers are leveraging AI to create convincing email templates and fake websites “almost indistinguishable” from real ones – and without the common warning signs employees are trained to identify, says Mitchell. AI is also being used in vishing attacks, with deepfakes making it easier to clone the voice of high-ranking company executives to trick victims. In 2026, there will be more attacks utilizing realistic voice cloning and high-quality video deepfakes, says Joshua Walsh. ... There is a current shift towards agentic AI that can take real-world actions, such as adjusting configurations, interacting with APIs, booking services and initiating financial tasks. This can increase efficiency, but it can also lead to unsafe decisions made at speed, says rradar’s Walsh. An agent told to "optimize performance" might disable logging or bypass authentication because it views security controls as delays, he suggests. Prompt injection is a hidden issue to look out for, he adds. “If a threat actor slips hidden instructions into data that the agent consumes, they can make it run actions on internal systems without anyone realising.”
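The prompt-injection scenario Walsh describes can be illustrated with a deliberately naive screen: scan data an agent is about to consume for instruction-like phrases and quarantine anything suspicious. This is only a sketch of the idea; the phrase list is invented, real attacks easily evade keyword filters, and production defenses layer isolation, least privilege, and output validation on top.

```python
# Naive prompt-injection screen (illustration only): flag instruction-like
# phrases in untrusted data before an agent consumes it.
SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "disregard the above",
    "you are now",
    "run the following command",
)


def flag_injection(untrusted_text: str) -> bool:
    """Return True if the text contains an instruction-like phrase."""
    lowered = untrusted_text.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)


def safe_consume(untrusted_text: str) -> str:
    # Quarantine suspicious data instead of feeding it to the agent.
    if flag_injection(untrusted_text):
        return "[quarantined: possible prompt injection]"
    return untrusted_text
```

The weakness of this approach is itself the lesson: because injected instructions can be paraphrased endlessly, the agent's *permissions* matter more than any filter over its inputs.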


5 Changes That Will Define AI-Native Enterprises in 2026

As enterprises scale to multi-agent systems, the engineering focus will shift from creating prompts to architecting context. Multi-agent workflows rapidly expand requirements with tool definitions, conversation history, and data from multiple sources. This creates two challenges: context windows fill up, and models suffer from “context rot,” forgetting information buried in lengthy prompts. By mid-2026, context engineering will emerge as a distinct discipline with dedicated teams and specialized infrastructure, serving the minimal but complete information agents need. The best context engineers will understand both LLM constraints and their business domain’s semantic structure. ... Enterprises are realizing that AI agents need both data and meaning. Companies that spent years perfecting data lakes are already finding those assets are insufficient. AI can retrieve data, but without semantic context, it can’t interpret action or intent. That’s why teams will move beyond vector search toward building knowledge graphs, ontologies, and metadata-driven maps that teach AI how their business works. The battleground will shift from owning raw data to owning its interpretation. Off-the-shelf agents will struggle in complex domains because semantics are domain-specific. ... The AI-native enterprise looks very different from what came before. It serves machine customers, treats context as critical infrastructure, and has the tools to escape decades of technical debt. 
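The context-engineering discipline described above, serving agents "minimal but complete" information, can be sketched as a budgeted packing problem: rank candidate snippets by relevance to the task and include only what fits a token budget. The function below is hypothetical; whitespace word counts stand in for a real tokenizer, and word overlap stands in for real relevance scoring.

```python
# Sketch of context engineering: pack the most relevant snippets into a
# token budget instead of dumping everything into the prompt.
def build_context(task: str, snippets: list[str], budget: int) -> list[str]:
    task_words = set(task.lower().split())

    def relevance(snippet: str) -> int:
        # Word overlap with the task -- a stand-in for semantic scoring.
        return len(task_words & set(snippet.lower().split()))

    chosen, used = [], 0
    for snippet in sorted(snippets, key=relevance, reverse=True):
        cost = len(snippet.split())  # stand-in for a tokenizer count
        if relevance(snippet) == 0 or used + cost > budget:
            continue
        chosen.append(snippet)
        used += cost
    return chosen
```

Even this toy version shows the trade-off context engineers manage: irrelevant material is excluded outright, and relevant material competes for a fixed window, which is exactly where "context rot" sets in if packing is done carelessly.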


Microsegmentation: the unsung hero of cybersecurity (and why it should be your top priority)

Think of your network like an apartment building. You’ve got a locked front door — that’s your perimeter. But once someone gets inside, there’s no front desk checking IDs, no elevator security and the same outdated lock on every unit. An intruder can roam freely, entering any apartment they choose. Microsegmentation is the internal security system. It’s the keycard for the elevator, the camera in the hallway, the unique lock on your door. It’s what stops one compromised device from becoming a full-blown breach. ... OT environments are different. They’re often built on legacy systems, lack patching and operate in real-time. You can’t just drop an agent or reroute traffic without risking downtime. That’s why agencies need solutions that are agentless, software-defined and tailored to the unique constraints of OT. Otherwise, you’re only protecting half the house. ... Microsegmentation also plays a critical role in enabling zero trust. It enforces least privilege at the network level. It’s not just about who gets in; it’s about what they can touch once they’re inside. For agencies building toward zero trust, microsegmentation isn’t an afterthought. It’s a foundation. Despite all this, microsegmentation remains underutilized. According to TechTarget’s Enterprise Strategy Group, only 36% of organizations use it today, even though it’s foundational to zero trust. Why? Because 28% believe it’s too complex. But that perception is often rooted in outdated tooling.
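The default-deny, least-privilege idea at the heart of microsegmentation fits in a few lines: east-west traffic between workload segments is denied unless an explicit rule allows it. The segment names, ports, and rule table below are invented for illustration; real software-defined policies are far richer, but the evaluation logic is the same shape.

```python
# Minimal microsegmentation policy sketch: default-deny east-west traffic,
# with explicit per-flow allow rules (segment names/ports are illustrative).
ALLOWED_FLOWS = {
    ("web", "app"): {443},
    ("app", "db"): {5432},
}


def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: permit only explicitly whitelisted segment flows."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())
```

Note that `web` to `db` is denied even though `web` to `app` and `app` to `db` are both allowed: compromising the web tier does not grant lateral access to the database, which is precisely the "one compromised device doesn't become a full-blown breach" property described above.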


Beyond Chatbots: What Makes an AI Agent Truly Autonomous

Autonomous agents must retain and use context over time. Memory enables an agent to recall previous interactions, data, and decisions—allowing it to continue a process seamlessly without restarting each time. That persistence turns single exchanges into long-running workflows. In enterprise settings, it means an agent can track a contract review across multiple sessions or follow a complex support case without losing context. ... Traditional automation runs on fixed, rule-based workflows. Autonomous agents build and revise their own plans on the fly, adapting to results and feedback. This ability to plan dynamically—think, act, observe, and adjust—is what differentiates agentic AI from robotic process automation (RPA) or prompt chaining. In practice, an agent might be tasked with analyzing a set of contracts, then automatically decide how to proceed: extract key terms, assess risk, and summarize results. ... Resilient agents are designed to operate across models, retry failed actions, or launch sub-agents to handle specialized work—all within defined guardrails. That adaptability is what separates a proof of concept (POC) from a production-ready system. ... All the reasoning in the world means little if an agent can’t execute. Tools are what translate intelligence into impact. They’re the functions, APIs, and integrations that allow agents to interact with business systems—searching systems, generating documents, updating records, or triggering workflows across CRMs, ERPs, and analytics platforms.
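The think-act-observe-adjust loop that distinguishes agentic AI from fixed RPA flows can be written down schematically. In this sketch, `plan`, `act`, and `goal_met` are stand-ins for model calls and tool invocations in a real agent; the step budget is the only safety rail shown.

```python
# Schematic plan-act-observe-adjust agent loop (stand-in callables).
def run_agent(goal: str, plan, act, goal_met, max_steps: int = 10) -> list:
    """Iterate until the goal is met or the step budget runs out."""
    history = []
    for _ in range(max_steps):
        step = plan(goal, history)           # think: pick the next action
        observation = act(step)              # act: execute via a tool
        history.append((step, observation))  # observe: record the result
        if goal_met(history):                # adjust: stop, or re-plan next turn
            break
    return history
```

The contrast with rule-based automation is that `plan` sees the full history on every iteration, so the agent can revise its approach mid-workflow instead of following a pre-drawn flowchart.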

Daily Tech Digest - December 31, 2025


Quote for the day:

“To be successful you need friends and to be very successful you need enemies.” -- Sidney Sheldon



AI agents to outnumber humans, warns Token Security

Many agents now run in controlled, non-production environments. Token Security predicts that organisations will soon connect them directly to live systems. The company says this will force enterprises to manage agent permissions and lifecycle controls more actively. It also expects new processes for assigning accountability when an autonomous system carries out an action on behalf of a team or individual. Apelblat believes established compliance structures will not cope with this change in the workforce. Traditional frameworks assume that humans sit at the centre of most workflows. ... "Despite innovation in agentic AI, enterprises will enter 2026 still relying on static API keys and long-term credentials. These legacy mechanisms will quietly weaken agent identity integrity, creating fragile trust chains that attackers can easily exploit," said Shlomo. Shlomo also predicts a reversal of some progress made in reducing secrets stored on endpoints. Many companies have moved staff onto single sign-on and centralised identity systems. He believes poor management of local Model Context Protocol servers will now cause a resurgence of cleartext service credentials on employee devices. ... "The industry is shifting from breaches caused by human identity failures to breaches rooted in AI agent identity compromise. As agents become operational backbones, attacks targeting their tokens, personas, and delegated authority will define the next wave of high-impact incidents," said Shlomo.
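Shlomo's contrast between static API keys and proper agent credentials comes down to expiry: a short-lived token bounds the blast radius of a theft, while a long-lived key does not. The sketch below is a generic illustration, not any vendor's scheme; the field names and 15-minute TTL are assumptions.

```python
# Sketch: short-lived agent credentials vs. static API keys. A minted
# token self-expires, so a stolen copy has a bounded window of use.
import secrets
import time


def issue_token(agent_id: str, ttl_seconds: int = 900) -> dict:
    """Mint a credential that self-expires (unlike a long-lived API key)."""
    return {
        "agent": agent_id,
        "secret": secrets.token_hex(16),
        "expires_at": time.time() + ttl_seconds,
    }


def is_valid(token: dict, now=None) -> bool:
    # Reject the token once its expiry has passed.
    return (now if now is not None else time.time()) < token["expires_at"]
```

A static key fails this test trivially: with no `expires_at`, an attacker who exfiltrates it from an endpoint (the cleartext-credential resurgence Shlomo predicts) can replay it indefinitely.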


AI killed the cloud-first strategy: Why hybrid computing is the only way forward now

Existing infrastructures now configured with cloud services simply may not be ready for emerging AI demands, a recent analysis from Deloitte warned. "The infrastructure built for cloud-first strategies can't handle AI economics," the report, penned by a team of Deloitte analysts led by Nicholas Merizzi, said. "Processes designed for human workers don't work for agents. Security models built for perimeter defense don't protect against threats operating at machine speed. IT operating models built for service delivery don't drive business transformation." ... AI token costs have dropped 280-fold in two years, they observe -- yet "some enterprises are seeing monthly bills in the tens of millions." The overuse of cloud-based AI services "can lead to frequent API hits and escalating costs." There's even a tipping point in which on-premises deployments make more sense. ... AI often demands near-zero latency to deliver actions. "Applications requiring response times of 10 milliseconds or below cannot tolerate the inherent delays of cloud-based processing," the Deloitte authors point out. ... Resilience is also part of the pressing requirements for fully functional AI processes. These include "mission-critical tasks that cannot be interrupted require on-premises infrastructure in case connection to the cloud is interrupted," the analysts state. ... Whether employing cloud or on-premises systems, companies should always take direct responsibility for security and monitoring, Rana said.


Agentic AI breaks out of the lab and forces enterprises to grow up

The first major stride is the shift from improvisation to repeatable patterns. Early agentic projects were nearly all “glue code”, prompt chains stitched together with brittle tool wiring and homegrown memory hacks. Every workflow was a snowflake. But now, mature organizations are creating shared agentic primitives that development teams can reuse. ... The second major stride is the rise of enterprise-grade governance and safety frameworks designed specifically for agentic workflows. Traditional AI governance wasn’t built for systems that take autonomous actions, call tools, modify infrastructure, and reason over long sequences. Enterprises are now treating governance as a first-class engineering challenge. ... The third stride is a philosophical and architectural shift in where enterprises choose to invest. Many companies spent months crafting custom planning modules, memory layers, tool registries, and agent routers, believing these would become strategic assets. But experience is proving otherwise. ... The fourth and most important stride is the move toward building durable components that will matter long after orchestration layers become commoditized. Enterprises increasingly understand that their competitive advantage will come from institutional intelligence: domain-specific tool schemas, curated datasets, validated decision policies, and deep integration with their existing SDLC, incident response, and SOC workflows.


Businesses have always complained about compliance obligations. Could they automate themselves out of it?

Compliance can often seem like an exercise in Kafkaesque absurdity. Nutanix’s director of systems engineering, James Sturrock, says it’s not uncommon for two in-house experts to have differing opinions on how to solve the same thorny regulatory conundrum. That isn’t even getting into how competing jurisdictions might view the problem. ... Equally important are potential unknowns such as contaminated soil or sewers that don’t appear on maps or where data is incomplete. These don’t just represent potential holdups to work – and resulting penalties – but represent further risks in themselves. ... Automating alerts or making it easier to spot compliance headaches early is one thing. But what might AI contribute toward simplifying more complex compliance conundrums, like those encountered by the financial services industry? In that sector, explains Pegasystems’ global banking industry lead Steve Morgan, such models have to be readily explainable not only to customers, but internal audit teams and regulators, too. Even then, it’s already clear that certain types of AI applications aren’t completely suitable for insertion into compliance workflows – most notably, GenAI. “Unless you have a very special model that’s trained” on a specific use case, says Morgan, the answers that such models provide compliance experts just aren’t predictable or accurate enough to meet the high standards demanded of banks.


Security coverage is falling behind the way attackers behave

Cybercriminals keep tweaking their procedures, trying out new techniques, and shifting tactics across campaigns. Coverage that worked yesterday may miss how those behaviors appear today. ... Activity expanded from ransomware-driven campaigns into espionage-aligned behavior, with targets including telecom, energy, military, and government organizations. Researchers tracked changes in tooling, credential access, and detection evasion, including expanded use of advanced techniques against cloud and enterprise environments. ... The report describes zero-day use as commoditized. Exploits move quickly from discovery into active abuse. This compresses defender response windows from weeks into days. Early detection depends on identifying behavior tied to exploitation rather than waiting for vulnerability disclosures or patches. ... Identity became a primary target. Campaigns focused on SaaS access, cloud administration, and single sign-on abuse. Luna Moth evolved from simple callback phishing into multi-channel operations combining voice, email, and infrastructure control. ... One theme that runs through the findings is the presence of defensive gaps at the procedure level. Many organizations track techniques and tools, while execution details that signal intent receive less attention. The research connects observed procedures directly to detection and prevention controls, showing where coverage holds and where it breaks down.


Widely Used Malicious Extensions Steal ChatGPT, DeepSeek Conversations

Stolen browser history data includes not only the complete URLs from all Chrome tabs, but also search queries containing sensitive keywords and research topics, URL parameters that could contain session tokens, user IDs, and authentication data, and internal corporate URLs revealing organizational structure and tools. ... Extensions are used to improve and customize users’ browsing experience. More people are using browsers, which can expand the attack surface of the individual and the companies they work for, according to security experts. “Browser extensions aren’t niche tools anymore; they’re deeply embedded in how people work,” Grip Security researchers Ben Robertson and Guy Katzir wrote earlier this year. “But that convenience comes with risk, especially when security teams don’t have visibility into what’s installed, what it can access, or how it behaves after login. The attack surface has shifted. And while endpoint agents and network controls still matter, they can’t see what’s happening inside the browser. That’s where threats like token hijacking and data leakage quietly take shape.” ... In the most recent case, the hackers created malicious extensions that impersonated a legitimate browser created by a company called AITOPIA. The extension puts a sidebar onto any website to give users the ability to chat with popular AI LLMs, OX Security’s Siman and Bustan wrote. 


2026: The year we stop trusting any single cloud

The real story is not that cloud platforms failed; it’s that enterprises quietly allowed those platforms to become single points of failure for entire business models. In 2025, many organizations discovered that their digital transformation had traded physical single points of failure for logical ones in the form of a single region, a single provider, or even a single managed database. When a hyperscaler region had trouble, companies learned the hard way that “highly available within a region” is not the same as “business resilient.” What caught even seasoned teams off guard was the hidden dependency chain. ... Expect to see targeted workload shifts that move critical customer-facing systems from single-region to multi-region or cross-cloud setups, re-architecting data platforms with replicated storage and active-active databases (meaning two live copies serving traffic simultaneously, each able to take over if the other fails). Also, relocating some systems to private or colocation environments based on risk. ... In 2026, smart enterprises will start asking their vendors the hard questions. Which regions and providers do you use? Do you have a tested failover strategy across regions or providers? What happens to my data and SLAs if your primary cloud has a regional incident? Many will diversify not just across hyperscalers, but across SaaS and managed services, deliberately avoiding over-concentration on any provider that cannot demonstrate meaningful redundancy.
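The multi-region routing decision at the core of that pattern can be sketched as a priority failover: probe regions in preference order and route to the first healthy one. This is a toy; `health` stands in for real health checks, the region names are illustrative, and a real design also needs the replicated state discussed above, which this sketch ignores.

```python
# Toy multi-region failover: route to the first healthy region in
# priority order; fail loudly if none is available.
def pick_region(regions: list[str], health: dict) -> str:
    for region in regions:
        if health.get(region, False):
            return region
    raise RuntimeError("no healthy region available")
```

The hard questions for vendors map directly onto this sketch: an honest answer requires more than one entry in `regions`, and a *tested* path through the loop when the first entry goes dark.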


AI Is Forcing Businesses To Rethink Their Data Strategies

One of the biggest misconceptions about cloud repatriation is that it’s a simple reversal of a cloud migration. In reality, AI workloads frequently exceed the capabilities of existing on-prem infrastructure. “Servers that were procured three years ago may not be able to handle what these applications require,” Brodsky says. As a result, repatriation decisions often trigger broader modernization efforts, including new hardware, increased power and cooling capacity, and redesigned architectures. Before making those investments, organizations need a clear understanding of their current environment and future requirements. ... “You have to evaluate whether your on-prem environment can actually ingest and protect what you’re bringing down from the cloud,” he says. Timelines and approaches vary. Some organizations opt for high-level assessments to guide strategy, while others pursue deeper technical workshops or phased transitions based on business priorities and service-level agreements. Despite the renewed interest in on-prem infrastructure, cloud repatriation doesn’t signal a retreat from cloud computing. Instead, it reflects a more mature understanding of hybrid IT. “Five years ago, we had daily conversations with customers who wanted to be 100% cloud,” Brodsky says. “Very few actually got there.” Today, most organizations operate hybrid environments by necessity, balancing cloud flexibility with on-prem performance, cost predictability and governance. 


AI-Driven CLM: The New Standard for Enterprise Contracts

Most enterprises still rely on fragmented approaches to contract management. Agreements live in email threads, local folders, and legacy systems that do not communicate with each other. Legal teams spend hours searching for documents that should be accessible in seconds. This disorganization creates real business consequences. Contracts expire without renewal. Compliance obligations go untracked. Revenue recognition gets delayed because finance cannot locate the signed agreement. ... AI-driven contract lifecycle management takes a fundamentally different approach. Instead of treating contracts as paperwork to be stored, modern CLM platforms treat them as data to be analyzed, monitored, and optimized. The shift starts with intelligent data extraction. When a contract enters the system, AI automatically identifies and extracts key terms, dates, obligations, and clauses. No more manual data entry. No more inconsistent tagging. The system understands what it is reading and organizes information accordingly. ... Every contract carries risk. Hidden indemnification clauses, unfavorable liability terms, and non-standard language can expose organizations to significant liability. Catching these issues manually requires experienced legal reviewers and substantial time. AI changes this equation. Modern CLM platforms scan agreements against predefined playbooks and flag deviations instantly. 
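The "contracts as data" shift can be shown in miniature with a crude stand-in for AI extraction: pull dates and an indemnification flag out of contract text with regular expressions. A real CLM platform would use an LLM or trained extractor and a much richer schema; the field names and patterns here are invented for illustration.

```python
# Crude stand-in for AI contract-term extraction: regex dates plus a
# keyword flag for indemnification language (illustrative only).
import re

DATE_PATTERN = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")


def extract_terms(contract_text: str) -> dict:
    return {
        "dates": DATE_PATTERN.findall(contract_text),
        "has_indemnification": "indemnif" in contract_text.lower(),
    }
```

Once terms live in structured fields rather than prose, the downstream wins described above follow mechanically: renewal dates can drive alerts, and flagged clauses can be diffed against a playbook.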


How to Do Enterprise Autonomy Right

Autonomous enterprise agents are architected differently. They integrate language understanding, tool calling, planning and orchestration into a closed loop. This allows the agent to assess goals, interpret inputs, break them down into tasks and execute across multiple systems. It can adapt when conditions change and learn from feedback over time. The shift from automation to autonomy requires moving from flow-based design to intent-based execution. For enterprises, this means embedding capabilities that allow agents to sense, decide and act in real time. ... It's non-negotiable for agents to function only within clearly defined domains, with visibility restricted to authorized data and systems. Second, their decision-making logic should be transparent and traceable, ensuring that every outcome can be audited and explained. Third, controls must exist to intervene in real time, whether to pause, override or shut down the agent entirely. Lastly, it is crucial for agents to be built to fail safely. If context shifts beyond their training, the agent must escalate or defer. This is not a fallback but a core design principle that reinforces responsible AI posture. ... The line between productive autonomy and dangerous overgeneralization is best drawn where explainability ends. If a system's actions can no longer be explained in business terms, it is no longer serving the enterprise. Control is central to it and autonomy should expand only when safeguards, governance and organizational readiness evolve alongside it.
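The four controls listed above (scoped domains, traceable decisions, real-time intervention, and safe failure by escalation) can be combined in one small wrapper. This is a hypothetical sketch; the class, its method names, and the escalation strings are all invented, and a production guardrail layer would enforce these at the platform level rather than in application code.

```python
# Sketch of the four guardrails: scoped domain, audit trail, kill switch,
# and fail-safe escalation for out-of-domain requests (names hypothetical).
class GuardedAgent:
    def __init__(self, allowed_domains: set, handler):
        self.allowed_domains = allowed_domains
        self.handler = handler   # the underlying agent logic
        self.audit_log = []      # traceable, auditable decisions
        self.paused = False      # real-time intervention flag

    def pause(self):
        self.paused = True

    def handle(self, domain: str, request: str) -> str:
        if self.paused:
            outcome = "blocked: agent paused by operator"
        elif domain not in self.allowed_domains:
            # Fail safely: escalate rather than guess outside the domain.
            outcome = "escalated to human: out-of-domain request"
        else:
            outcome = self.handler(request)
        self.audit_log.append((domain, request, outcome))
        return outcome
```

Note that every branch, including refusals, lands in `audit_log`: explainability applies to what the agent declined to do as much as to what it did.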

Daily Tech Digest - December 30, 2025


Quote for the day:

“It is never too late to be what you might have been.” -- George Eliot


Cybersecurity Trends: What's in Store for Defenders in 2026?

For hackers of all stripes, a ready supply of easily procured, useful tools abounds. Numerous breaches trace to information-stealing malware, which grabs credentials from a system and bundles them into a "log." Automated "clouds of logs" make it easy for info stealer subscribers to monetize their attacks. ... Clop, aka Cl0p, again stole data and held it for ransom. How many victims paid a ransom isn't known, although the group's repeated ability to pay for zero-days suggests it's making a tidy profit. Other cybercrime groups appear to have learned from Clop's successes, including The Com cybercrime collective spinoff lately calling itself Scattered Lapsus$ Hunters. One repeat target of that group has been third-party software that connects to customer relationship management software platform Salesforce, allowing them to steal OAuth tokens and gain access to Salesforce instances and customer data. ... Beyond the massive potential illicit revenue being earned by these teenagers, what's also notable is the sheer brutality of many of these attacks, such as data breaches involving children's nurseries including Kiddo and disrupting the British economy to the tune of $2.5 billion through a single attack against Jaguar Land Rover that shut down assembly lines and supply chains. ... Well-designed defenses help blunt many an attacker, or at least slow an intrusion. Enforcing least-privileged access to resources and multifactor authentication always helps, as do concrete security practices designed to block CEO fraud, ploys to trick help desks, and other forms of social engineering.


4 New Year’s resolutions for devops success

“Develop a growth mindset that AI models are not good or bad, but rather a new nondeterministic paradigm in software that can both create new issues and new opportunities,” says Matthew Makai, VP of developer relations at DigitalOcean. “It’s on devops engineers and teams to adapt to how software is created, deployed, and operated.” ... A good place to start is improving observability across APIs, applications, and automations. “Developers should adopt an AI-first, prevention-first mindset, using observability and AIOps to move from reactive fixes to proactive detection and prevention of issues,” says Alok Uniyal, SVP and head of process consulting at Infosys. ... “Integrating accessibility into the devops pipeline should be a top resolution, with accessibility tests running alongside security and unit tests in CI as automated testing and AI coding tools mature,” says Navin Thadani, CEO of Evinced. “As AI accelerates development, failing to fix accessibility issues early will only cause teams to generate inaccessible code faster, making shift-left accessibility essential. Engineers should think hard about keeping accessibility in the loop, so the promise of AI-driven coding doesn’t leave inclusion behind.” ... For engineers ready to step up into leadership roles but concerned about taking on direct reports, consider mentoring others to build skills and confidence. “There is high-potential talent everywhere, so aside from learning technical skills, I would challenge devops engineers to also take the time to mentor a junior engineer in 2026,” says Austin Spires.


New framework simplifies the complex landscape of agentic AI

Agent adaptation involves modifying the foundation model that underlies the agentic system. This is done by updating the agent’s internal parameters or policies through methods like fine-tuning or reinforcement learning to better align with specific tasks. Tool adaptation, on the other hand, shifts the focus to the environment surrounding the agent. Instead of retraining the large, expensive foundation model, developers optimize the external tools, such as search retrievers, memory modules, or sub-agents. ... If the agent struggles to use generic tools, don't retrain the main model. Instead, train a small, specialized sub-agent (like a searcher or memory manager) to filter and format data exactly how the main agent likes it. This is highly data-efficient and suitable for proprietary enterprise data and applications that are high-volume and cost-sensitive. Use A1 for specialization: if the agent fundamentally fails at technical tasks, you must rewire its understanding of the tool's "mechanics." A1 is best for creating specialists in verifiable domains like SQL or Python or your proprietary tools. For example, you can optimize a small model for your specific toolset and then use it as a T1 plugin for a generalist model. Reserve A2 (agent output signaled) as the "nuclear option": only train a monolithic agent end-to-end if you need it to internalize complex strategy and self-correction. This is resource-intensive and rarely necessary for standard enterprise applications.
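The tool-adaptation idea, optimizing the tooling around a frozen model rather than retraining the model itself, can be illustrated with a toy adapter. The retriever, the scoring scheme, and every name here are hypothetical; only the division of labor reflects the framework described above.

```python
# Toy illustration of "tool adaptation": the frozen main model stays untouched
# while a small adapter filters and formats raw tool output for it.
def raw_search(query):
    # stand-in for a generic retriever that returns noisy, unranked hits
    return [{"text": "irrelevant chatter", "score": 0.2},
            {"text": "quarterly revenue rose 8%", "score": 0.9}]

def adapter(hits, min_score=0.5, max_hits=3):
    """Filter and format hits exactly how the main agent consumes them."""
    kept = [h for h in hits if h["score"] >= min_score][:max_hits]
    return "\n".join(f"- {h['text']}" for h in kept)

def answer(query, frozen_llm):
    context = adapter(raw_search(query))   # optimize the tool, not the model
    return frozen_llm(f"Context:\n{context}\n\nQuestion: {query}")
```

In a real system the adapter itself would be a small trained model; the point is that it is cheap to retrain against proprietary data while the expensive foundation model is left alone.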


Radio signals could give attackers a foothold inside air-gapped devices

For an attack to work, sensitivity needs to be predictable. Multiple copies of the same board model were tested using the same configurations and signal settings. Several sensitivity patterns appeared consistently across samples, meaning an attacker could characterize one device and apply those findings to another of the same model. They also measured stability over 24 hours to assess whether the effect persisted beyond short test windows. Most sensitive frequency regions remained consistent over time, with modest drift in some paths ... Once sensitive paths were identified, the team tested data reception. They used on-off keying, where the transmitter switches a carrier on for a one and off for a zero. This choice matched the observed behavior, which distinguishes between presence and absence of a signal. Under ideal synchronization, several paths achieved bit error rates below 1 percent when estimated received power reached about 10 milliwatts. One path stayed below 2 percent at roughly 1 milliwatt. Bandwidth tests showed that symbol rates up to 100 kilobits per second remained distinguishable, even as transitions blurred at higher rates. In a longer test, the researchers transmitted about 12,000 bits at 1 kilobit per second. At three meters, reception produced no errors. At 20 meters, the bit error rate reached about 6.2 percent. Errors appeared in bursts that standard error correction could address.
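The on-off keying scheme the researchers used can be sketched as follows: carrier "on" encodes a 1, "off" encodes a 0, and the receiver thresholds measured power. The power level, noise model, and threshold in this sketch are illustrative; the paper's actual error rates depend on the physical path and distance.

```python
# Minimal on-off keying (OOK) sketch: presence of a carrier is a 1, absence
# is a 0; the receiver compares measured power against a threshold.
import random

def ook_transmit(bits, on_power=10.0):
    return [on_power if b else 0.0 for b in bits]

def ook_receive(samples, threshold=5.0):
    return [1 if s > threshold else 0 for s in samples]

def bit_error_rate(sent, received):
    return sum(a != b for a, b in zip(sent, received)) / len(sent)

random.seed(0)
bits = [random.randint(0, 1) for _ in range(12000)]   # mirror the 12,000-bit test
noisy = [s + random.gauss(0, 1.0) for s in ook_transmit(bits)]
ber = bit_error_rate(bits, ook_receive(noisy))        # near zero at this SNR
```

OOK fits the observed behavior precisely because the sensitive paths distinguish presence from absence of a signal rather than its fine-grained amplitude.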


Smart Companies Are Taking SaaS In-House with Agentic Development

The uncomfortable truth: when your critical business processes depend on an AI SaaS vendor’s survival, you’ve outsourced your competitive advantage to their cap table. ... But the deeper risk isn’t operational disruption — it’s strategic surrender. When you pipe your proprietary business context through external AI platforms, you’re training their models on your differentiation. You’re converting what should be permanent strategic assets into recurring operational expenses that drag down EBITDA. For companies evaluating AI SaaS alternatives, the real question is no longer whether to build or buy — but what parts of the AI stack must be owned to protect long‑term competitive advantage. ... “Who maintains these apps?” It’s the right question, with a surprising answer: 1. SaaS Maintenance Isn’t Free — Vendors deprecate APIs, change pricing, pivot features. Your team still scrambles to adapt. Plus, the security risk often comes from having an external third party connecting to internal data. 2. Agents Lower Maintenance Costs Dramatically — Updating deprecated libraries? Agents excel at this, especially with typed languages. The biggest hesitancy — knowledge loss when developers leave — evaporates when agents can explain the codebase to anyone. 3. You Control the Update Schedule — With owned infrastructure, you decide when to upgrade dependencies, refactor components, or add features. No vendor forcing breaking changes on their timeline.


6 cyber insurance gotchas security leaders must avoid

Before committing to a specific insurer, Lindsay recommends consulting an attorney with experience in cyber insurance contracts. “A policy is a legal document with complex definitions,” he notes. “An attorney can flag ambiguous terms, hidden carve-outs, or obligations that could create disputes at claim time,” Lindsay says. ... It’s hardly surprising, but important to remember, that the language contained in cybersecurity policies generally favors the insurer, not the insured. “Businesses often misinterpret the language from their perspective and overlook the risks that the very language of the policy creates,” Polsky warns. ... You may believe your policy will cover all cyberattack losses, yet a look at the fine print may reveal that it’s riddled with exclusions and warranties that can’t be realistically met, particularly in areas such as social engineering, ransomware, and business interruption. ... Many enterprises believe they’re fully secure, yet when they file a claim the insurer points to fine print about security measures they didn’t know were required, Mayo says. “Now you’re stuck with cleanup costs, legal fees, and potential lawsuits — all without support from your insurance provider.” ... The retroactive date clause can be the biggest cyber insurance trap, warns Paul Pioselli, founder and CEO of cybersecurity services firm Solace. ... Perhaps the biggest mistake an insurance seeker can make is failing to understand the difference between first-party coverage and third-party coverage, and therefore failing to acquire a policy that includes both, says Dylan Tate.


7 major IT disasters of 2025

In July, US cleaning product vendor Clorox filed a $380 million lawsuit against Cognizant, accusing the IT services provider’s helpdesk staff of handing over network passwords to cybercriminals who called and asked for them. ... Zimmer Biomet, a medical device company, filed a $172 million lawsuit against Deloitte in September, accusing the IT consulting company of failing to deliver promised results in a large-scale SAP S/4HANA deployment. ... In September, a massive fire at the National Information Resources Service (NIRS) government data center in South Korea resulted in the loss of 858TB of government data stored there. ... Multiple Google cloud services, including Gmail, Docs, Drive, Maps, and Gemini, were taken down during a massive outage in June. The outage was triggered by an earlier policy change to Google Service Control, a control plane service that provides functionality for managed services, with a null-pointer crash loop breaking APIs across several products. ... In late October, Amazon Web Services’ US-EAST-1 region was hit with a significant outage, lasting about three hours during early morning hours. The problem was related to DNS resolution of the DynamoDB API endpoint in the region, causing increased error rates, latency, and new instance launch failures for multiple AWS services. ... In late July, services in Microsoft’s Azure East US region were disrupted, with customers experiencing allocation failures when trying to create or update virtual machines. The problem? A lack of capacity, with a surge in demand outstripping Microsoft’s computing resources.


Stop Guessing, Start Improving: Using DORA Metrics and Process Behavior Charts

The DORA framework consists of several key metrics. Among them, Change Lead Time (CLT) shows how quickly a team can deliver change. Deployment Frequency (DF) shows what the team actually delivers. While important, DF is often more volatile, influenced by team size, vacations, and the type of work being done. Finally, the instability metrics and reliability SLOs serve as a counterbalance. ... Beyond spotting special causes, PBCs are also useful for detecting shifts, moments when the entire system moves to a new performance level. In the commute example above, these shifts appear as clear drops in the average commute time whenever a real improvement is introduced, such as buying a bike or finding a shorter route. Technically, a shift occurs when several consecutive points fall above or below the previous mean, signaling that the process has fundamentally changed. ... Sustainable improvement is rarely linear. It depends on a series of strategic bets whose effects emerge over time. Some succeed, others fail, and external factors, from tooling changes to team turnover, often introduce temporary setbacks. ... According to DORA research, these metrics have a predictive relationship with broader outcomes such as organizational performance and team well-being. In other words, teams that score higher on DORA metrics are statistically more likely to achieve better business results and report higher satisfaction.
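The shift detection described above can be computed in a few lines, assuming the standard XmR-chart conventions: natural process limits at the mean plus or minus 2.66 times the average moving range, and a shift flagged when several consecutive points fall on one side of the mean. The run length of eight used here is a common convention, not something the article specifies.

```python
# Sketch of an XmR process behavior chart: limits from the average moving
# range, and a "shift" detected as a run of consecutive points below the mean
# (e.g., commute times dropping after buying a bike).
def xmr_limits(values):
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

def detect_shift(values, mean, run=8):
    """Return the start index of `run` consecutive points below the mean."""
    streak = 0
    for i, v in enumerate(values):
        streak = streak + 1 if v < mean else 0
        if streak == run:
            return i - run + 1
    return None
```

Applied to a series of Change Lead Time samples, a detected run marks the moment an improvement bet actually changed the system, rather than ordinary week-to-week noise.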


5 Threats That Defined Security in 2025

Salt Typhoon is a Chinese state-sponsored threat actor best known in recent memory for a campaign against telecom giants — including Verizon, AT&T, Lumen Technologies, and multiple others — discovered last fall, which targeted the systems used by police for court-authorized wiretapping. The group, also known as Operator Panda, uses sophisticated techniques to conduct espionage against targets and pre-position itself for longer-term attacks. ... CISA layoffs, indirectly, mark a threat of a different kind. At the beginning of the year, the Trump administration cut all advisory committee members within the Cyber Safety Review Board (CSRB), a group run by public and private sector experts to research and make judgments about large issues of the moment. When the CSRB was effectively shuttered, it was working on a report about Salt Typhoon. ... React2Shell describes CVE-2025-55182, a vulnerability disclosed early this month affecting the React Server Components (RSC) open source protocol. Caused by unsafe deserialization, the vulnerability was considered easily exploitable and highly dangerous, earning it a maximum CVSS score of 10. Even worse, React is fairly ubiquitous, and at the time of disclosure it was thought that a third of cloud providers were vulnerable. ... In September, a self-replicating malware emerged known as Shai-Hulud. It's an infostealer that infects open source software components; when a user downloads a package infected by the worm, Shai-Hulud infects other packages maintained by the user and publishes poisoned versions, automatically and without much direct attacker input.


How data-led intelligence can help apparel manufacturers and retailers adapt faster to changing consumer behaviour

AI is already helping retail businesses to understand the complex buying patterns of India’s diverse population. To predict demand, big box chains such as Reliance Retail and e-commerce leaders like Flipkart use machine learning algorithms to analyse historical sales, search patterns and even social media conversations. ... With data-led intelligence studying real-time demand signals, manufacturers can adjust their lines much sooner. If data shows a rising preference for electric scooters in certain cities, for instance, factories can scale up output before the trend peaks. And when interest in a product starts dipping, production can be slowed to prevent excess stock. ... One of the strongest outcomes of the AI wave is its ability to bring consumer demand and industrial supply onto the same page. In the past, customer preferences often evolved faster than factories could react, creating gaps between what buyers wanted and what stores stocked. AI has made this far easier to manage. Manufacturers and retailers now share richer data and insights across the supply chain, allowing production teams to plan with far better clarity. This also enhances supply chain transparency, a growing priority for global buyers seeking traceability. ... If data intelligence tools notice a sharp rise in conversations around eco-friendly packaging or sustainable clothing, retailers can adjust their marketing and stock in advance, while manufacturers source greener materials and redesign processes to match the growing interest.

Daily Tech Digest - December 29, 2025


Quote for the day:

"What great leaders have in common is that each truly knows his or her strengths - and can call on the right strength at the right time." -- Tom Rath


Beyond automation: Physical AI ushers in a new era of smart machines

“Physical AI has reached a critical inflection point where technical readiness aligns with market demand,” said James Davidson, chief artificial intelligence officer at Teradyne Robotics, a leader in advanced robotics solutions. “The market dynamics have shifted from skepticism to proof. Early adopters are reporting tangible efficiency and revenue gains, and we’ve entered what I’d characterize as the early-majority phase of adoption, where investment scales dramatically.” ... To train and prepare these models, a new specialized class of AI model emerged: World Foundation Models. WFMs serve two primary functions for robotics AI: They enable engineers to develop vast synthetic datasets rapidly to train robots on unseen actions, and they test these robots in virtual environments before real-world deployment. WFMs allow developers to create virtual training grounds that mimic reality through “digital twins” of environments. Within these simulated scenes, robots learn to navigate real-world challenges safely and at a pace far exceeding what physical presence would permit. ... Despite grabbing a lot of headlines, humanoid robots only represent a small fraction of AI robotics deployments. For now, it’s collaborative robots, robotic arms and autonomous mobile robots that are transforming warehouse and factory settings. The foremost example is Amazon.com Inc., which uses intelligent robots across its warehouses.


When Digital Excellence Turns Into Strategic Technical Debt

Asian Paints' digital architecture was built for a world that valued scale, predictability and discipline. Its systems continuously optimize for efficiency, minimize variability and ensure consistency across thousands of dealers and SKUs. For nearly 20 years, these capabilities have directly contributed to better margins, improved service levels and increased shareholder confidence. But today's market is different. New entrants, backed by capital and "largely free from legacy" process constraints, are willing to accept inefficiencies to gain market share quickly. ... The result is a market that is more volatile, more tactical, and less patient. Additionally, new technology plays a vital role in creating a competitive edge. This is where the strategic technical debt surfaces. Unlike traditional technical debt, this isn't about outdated systems or underinvestment. ... The difference lies in architecture and intent. Newer players are born cloud-native, with a more modular approach, better governance and greater tolerance for experimentation. They use analytics and AI proactively to adjust incentives quickly, test local pricing strategies and pivot dealer engagement models in response to demand. Speed and flexibility matter more than optimization. ... Strategic technical debt accumulates because CIOs are rewarded for stability, uptime and optimization. Optionality, speed and the ability to unlearn don't appear on scorecards. Over time, this imbalance becomes part of the architecture and results in digital stress.


The Evolution of North Korea – And What To Expect In 2026

What has changed most notably through 2024 and 2025 is the shift away from “purely external intrusion” towards “abuse of legitimate access,” says Pontiroli. “Rather than breaking in, North Korean operators increasingly aim to be hired as remote IT workers inside real companies, gaining steady income, trusted network access, and the option to pivot into espionage, data theft, or follow on attacks.” ... The workers claim to be US based with IT experience, “but in reality, they are North Korean or proxied by North Korean networks,” he explains. Over time, the threat actors have developed deep expertise in software engineering, mobile applications, blockchain infrastructure, and cryptocurrency ecosystems says Tom Hegel, distinguished threat researcher, SentinelLABS. ... In parallel, cybersecurity researchers have observed related campaigns with distinct names and tradecraft. A malicious campaign dubbed Contagious Interview involves threat actors masquerading as recruiters or employers to lure job seekers, particularly in tech and cryptocurrency sectors, into fake interviews that deliver malware such as BeaverTail, InvisibleFerret, and variants such as OtterCookie, says Pontiroli. ... Today, fake worker schemes remain an “active and growing threat,” says Jack. KnowBe4 offers training to customers to combat this and strengthen their security culture, he says. Security leaders must assume that the hiring pipeline itself is part of the attack surface, says Hegel. 


Five Attack-Surface Management Trends to Watch in 2026

In 2026, regulators will anchor security and risk leaders’ approaches to exposure strategy. This will mean not only demonstrating due diligence during annual audits, but also demonstrating proof of resilience every day. Exposure management platforms that can map external assets against regulatory expectations; provide real-time compliance dashboards and metrics; and quantify benefits and exposures to boardrooms will become table stakes. ... Attackers see the enterprise as a single, unified attack surface, with each constituent part informing the next priority: cloud workloads, SaaS, subsidiaries, shadow IT, and third-party dependencies. In 2026, savvy security leaders will be adopting that same perspective. Point-in-time, penetration-test-style engagements and bug-bounty programs will give way to organizations that expect full-scope, attacker-centric discovery of digital asset footprints, as well as automated prioritization to cut through the noise.  ... In 2026, successful vendor choices will be those that strike a balance between consolidation and integration. Enterprises will demand more flexible integration into existing workflows, including third-party APIs and visibility into SIEM, SOAR, and GRC tools, as well as the ability to support hybrid and multi-cloud environments without friction. Transparency and visibility into roadmap, enterprise-readiness proofs, and customer success will become significant differentiators in a category that has been defined by mergers and acquisitions.


Daon outlines five digital identity shifts for 2026

Daon said non-human identities, including agentic AI systems, are expanding quickly across enterprise networks. It cited independent 2025 studies reporting roughly 44% year-on-year growth in non-human identities and a rise in machine-to-human ratios from around 80:1 to 144:1 in some environments. The prediction for 2026 is that enterprises will treat autonomous and agentic systems as full participants in the identity lifecycle. These systems would be registered, authenticated, authorised and monitored under formal policies, with containment processes defined in case of compromise or misbehaviour. ... Daon said progress in techniques such as zero-knowledge proofs, federated learning and sensor attestation now enables biometric checks on personal devices while reducing movement of raw biometric data. On-device processing can bind verification to a specific capture environment and lower the risk of replay or injection. Local storage of biometric templates supports data-minimisation approaches. The company expects these on-device checks to align with proof-of-possession flows and hardware-backed sensor attestations. It said federated learning and zero-knowledge techniques allow systems to validate claims without sharing underlying biometric templates with servers. ... Daon expects continued pressure on pre-hire verification because of deepfake applicants and impersonation. It said the more significant change in 2026 will come after hiring as employers adopt continuous workforce assurance.


Quantum computing made measurable progress toward real-world use in 2025

Fully functional quantum computers remain out of reach, but optimism across the field is rising. At the Q2B Silicon Valley conference in December, researchers and executives pointed to a year marked by tangible progress – particularly in hardware performance and scaling – and a growing belief that quantum advantage for real-world problems may be achievable sooner than expected. "More people are getting access to quantum computers than ever before, and I have a suspicion that they'll do things with them that we could never even think of," said Jamie Garcia at IBM. ... Aaronson, long known for his critical analysis of claims in quantum computing, described the progress in qubit fidelity and control systems as "spectacular." However, he cautioned that new algorithms remain essential for converting that hardware performance into practical value. While technical strides have been impressive, translating those advances into applications remains difficult. Ryan Babbush of Google Quantum AI said hardware continues to outpace software in usefulness. ... Dutch startup QuantWare introduced an architecture aimed at solving one of the industry's most significant hardware limitations: scaling up without losing reliability. The company's superconducting quantum processor design targets 10,000 qubits, roughly 100 times more than today's leading devices. QuantWare's Matt Rijlaarsdam said the first systems of this size could be operational within 2.5 years.


Ship Reliable AI: 7 Painfully Practical DevOps Moves

In AI land, “what changed” is anything that teaches or nudges the model: training data slices, prompt templates, system instructions, retrieval schemas, embeddings pipelines, tokenizer versions, and the model binary itself. We treat each as code. Prompts live next to code with unit tests. We commit small evaluation sets in-repo for quick signals, and keep larger benchmarks in object storage with content hashes and a manifest. ... Shiny demos hide flaky edges. We force those edges to show up in CI, where they’re cheap. Our pipeline runs fast unit tests, a tiny evaluation suite, and a couple of safety checks against handcrafted adversarial prompts. The goal isn’t to solve safety in CI; it’s to block footguns. We test the glue code around the model, we lint prompts for hard-to-diff formatting changes, and we run a 50-example eval that catches obvious regressions in latency, grounding, and accuracy. ... For AI pods, that starts with resource quotas and limits. GPU nodes are expensive; “just one more experiment” can melt the budget by lunch. We set namespace-level quotas for GPU and memory, and we stop requests that try to sneak past. For egress, we deny everything and allow only the API endpoints our apps need. When someone tries to point a staging pod at a random external endpoint “just to test it,” the policy does the talking.
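The "small evals in CI with content hashes" idea can be sketched as follows: pin the committed eval set with a hash so the gate fails loudly if someone edits the examples silently, then block the merge on an accuracy floor. The function names and thresholds here are illustrative, not the team's actual pipeline.

```python
# Sketch of a CI eval gate: a hashed manifest pins the eval set, and a
# regression check enforces a minimum accuracy before merge.
import hashlib
import json

def manifest_hash(examples):
    """Content hash of the eval set, stored alongside it in the repo."""
    blob = json.dumps(examples, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def run_eval(model_fn, examples, min_accuracy=0.9, expected_hash=None):
    if expected_hash and manifest_hash(examples) != expected_hash:
        raise ValueError("eval set changed without updating its manifest hash")
    correct = sum(model_fn(e["input"]) == e["expected"] for e in examples)
    accuracy = correct / len(examples)
    return accuracy >= min_accuracy, accuracy
```

The same pattern extends to the other tracked artifacts: prompts, retrieval schemas, and tokenizer versions each get a hash in the manifest, so "what changed" is always answerable from the diff.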


What support is available for implementing Agentic AI systems

The adoption of Agentic AI systems is reshaping the way organizations implement security measures, particularly for NHIs. Agentic AI—capable of self-directed learning and decision-making—proves advantageous in deploying security protocols that adapt in real-time to evolving threats. By utilizing such technology, organizations can leverage data-driven insights to enhance their NHI management strategies. ... Given the critical role of NHIs in maintaining robust cloud security, organizations need to adopt advanced methodologies that integrate seamlessly with their existing security frameworks. ... Effective NHI management relies heavily on leveraging insights that stem from analyzing large data sets. Organizations that prioritize the use of data analytics in their cybersecurity strategies can efficiently discover, classify, and monitor machine identities and their associated secrets. Advanced analytical tools can help security teams identify patterns and anomalies in system activities, providing early indicators of potential security threats. These insights make it possible to implement more effective security protocols and prevent unauthorized access before it happens. ... The security of an organization is not solely the responsibility of the IT department; it is a shared responsibility across all stakeholders. Building a culture of security awareness is crucial in ensuring that every member of an organization understands the role that NHIs play in cybersecurity.


Godspeed curtain twitchers: DPDP and its peers just got ruthless

Organisations will have to take privacy very seriously in everyday business operations and in every area, Bhambry cautions. They will have to make sure it pervades product development, processes (from the outset), internal audit, regular training and the very culture of the company and its employees, and to focus on individual rights, consent protocols and data governance. There is no doubt that data privacy is going to get stronger, more transparent, and more comprehensive, affirms Advocate Dr. Bhavna Sharma of the Delhi High Court, a cybercrime expert, legal consultant to the Delhi Police and techno-legal policy professional. But it is also going to get more complex in 2026 as it shifts from abstract legal principles to a tangible operational mandate with the notification of the DPDPA Rules, 2025, adds Dr. Sharma. ... “India’s DPDPA and MeitY’s localisation mandates echo a growing consensus that data sovereignty equals digital sovereignty. Governments are recognising that control over citizen data is foundational to national security and economic resilience,” Cheema explains. In an era marked by competition among nations with their own data systems, state leaders are taking control, Yadav observes. “They are not willing to allow strategic assets to slip through their fingers. And as a result, the government calls for ‘localisation’ to trap extra-territorial storage simply because it has yet to be regulated by authorities in those countries.”


Tech innovations fuelling Indian GCCs as BFSI powerhouses

Responsible AI governance, model explainability, and auditability remain difficult across regulated domains worldwide. Institutions everywhere also face constraints around scalable compute, high-quality data flows, and real-time analytics. As AI systems process more sensitive financial data, cybersecurity risks are rising across the industry, prompting greater investment in zero-trust architectures, model-security testing, and stronger third-party controls. ... GCCs in India have been instrumental in orchestrating cloud migrations for complex banking systems, allowing banks and insurers to transition from monolithic legacy systems toward microservices and API-led platforms. This modular architecture has enabled financial institutions to launch products rapidly and build disaster resilience. Additionally, regulatory complexity and rising compliance costs have created a fertile ground for RegTech innovation. Indian GCCs are helping global enterprises build AI-powered KYC and Anti-Money Laundering (AML) solutions, compliance dashboards, and automated regulatory reporting pipelines that reduce manual work and false positives and make audits more efficient. ... Security, observability, and governance have also become board-level priorities. According to industry insights, as GCCs ingest more sensitive financial data and run mission-critical AI models, investments in cyber-resilience, third-party access monitoring, and federated data controls have surged.

Daily Tech Digest - December 28, 2025


Quote for the day:

"The best reason to start an organization is to make meaning; to create a product or service to make the world a better place." -- Guy Kawasaki



PIN It to Win It: India’s digital address revolution

DIGIPIN is a nationwide geo-coded addressing system developed by the Department of Posts in collaboration with IIT Hyderabad. It divides India into approximately 4m x 4m grids and assigns each grid a unique 10-character alphanumeric code based on latitude and longitude coordinates. The ability of DIGIPIN to function as a persistent, interoperable location identifier across India’s dispersed public and private networks is what gives it its real power. Unlike normal addresses, which depend on textual descriptions, a DIGIPIN condenses geo-coordinates, administrative metadata and unique spatial identifiers into a 10-character alphanumeric string. As a result, DIGIPIN is machine-readable, compatible with maps and unaffected by changes in naming conventions. When combined with systems like Aadhaar (identity), UPI (payments), ULPIN (land) and UPIC (property), DIGIPIN can enable seamless KYC validation, last-mile delivery automation, digital land titling and geographic analytics. ... For DIGIPIN to become the default address format in India, it has to succeed across three critical dimensions. First, memorability: a 10-character code might be accurate, but for a busy delivery rider or a rural farmer, remembering and sharing it must be easier than reciting a landmark-heavy address. Second, acceptance: the code must be accepted across platforms – Aadhaar, land registries, GST, KYC forms, food delivery apps and banks.
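The grid-refinement idea behind such a code can be illustrated with a toy encoder: repeatedly subdivide India's bounding box into a 4x4 grid and append one symbol per level. Ten levels over a roughly 36-degree box yield cells a few metres on a side, which matches the 4m x 4m figure. This is emphatically not the official DIGIPIN algorithm; the symbol alphabet and bounding box below are assumptions for illustration.

```python
# Toy hierarchical grid geocoder in the spirit of DIGIPIN: each level picks
# one of 16 cells (4x4), appends a symbol, and recurses into that cell.
SYMBOLS = "23456789CFJKLMPT"      # hypothetical 16-symbol alphabet
LAT_MIN, LAT_MAX = 2.5, 38.5      # rough bounding box for India (assumed)
LON_MIN, LON_MAX = 63.5, 99.5

def encode(lat, lon, length=10):
    code = []
    lat_lo, lat_hi = LAT_MIN, LAT_MAX
    lon_lo, lon_hi = LON_MIN, LON_MAX
    for _ in range(length):
        lat_step = (lat_hi - lat_lo) / 4
        lon_step = (lon_hi - lon_lo) / 4
        row = min(3, int((lat - lat_lo) / lat_step))
        col = min(3, int((lon - lon_lo) / lon_step))
        code.append(SYMBOLS[row * 4 + col])
        lat_lo += row * lat_step          # recurse into the chosen cell
        lat_hi = lat_lo + lat_step
        lon_lo += col * lon_step
        lon_hi = lon_lo + lon_step
    return "".join(code)
```

A useful property of hierarchical grid codes is that nearby points share a prefix, so a truncated code still identifies a coarser neighbourhood.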


Deepfakes leveled up in 2025 – here’s what’s coming next

Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices and full-body performances that mimic real people increased in quality far beyond what even many experts expected would be the case just a few years ago. They were also increasingly used to deceive people. For many everyday scenarios — especially low-resolution video calls and media shared on social media platforms — their realism is now high enough to reliably fool nonexpert viewers. In practical terms, synthetic media have become indistinguishable from authentic recordings for ordinary people and, in some cases, even for institutions. And this surge is not limited to quality. ... Looking forward, the trajectory for next year is clear: Deepfakes are moving toward real-time synthesis that can produce videos that closely resemble the nuances of a human’s appearance, making it easier for them to evade detection systems. The frontier is shifting from static visual realism to temporal and behavioral coherence: models that generate live or near-live content rather than pre-rendered clips. ... As these capabilities mature, the perceptual gap between synthetic and authentic human media will continue to narrow. The meaningful line of defense will shift away from human judgment. Instead, it will depend on infrastructure-level protections. These include secure provenance such as media signed cryptographically, and AI content tools that use the Coalition for Content Provenance and Authenticity specifications.


Your Core Is Being Retired. Now What?

Eventually, all financial institutions will find themselves in the position of voluntarily or involuntarily going through a core migration. The stock market recently hammered one of the largest core processing companies in the world, publicly confirming what most of the industry has known for years: the company was more concerned with financially engineering its share price than with product-engineering a better outcome for its clients. The market also recently learned that the largest core processing provider will soon be making some big changes and consolidating many of its core systems. It’s hard to imagine how a software company can effectively support and maintain so many diverse core platforms, and the rationale behind the consolidation seems obvious and needed. However, this is an incredibly risky inflection point for banks and credit unions on platforms targeted for retirement. The hope and bet is that most clients will be incentivized to migrate to one of the remaining cores. ... The retirement of your core is an opportunity to rethink the foundation of your institution’s future. While no core conversion is easy, those who approach it strategically, armed with data, foresight, and the right partners, can turn a forced migration into a competitive advantage. The next generation of cores promises greater flexibility, integration and scalability, but only for institutions that negotiate wisely, plan deliberately, and take control of their own timelines before someone else does.


Whether AI is a bubble or revolution, how does software survive?

Bubble or not, AI has certainly made some waves, and everyone is looking for the right strategy. It’s already caused a great deal of disruption—good and bad—among software companies large and small. The speed at which the technology has moved since its coming-out party has been stunning: costs have dropped, hardware and software have improved, and the mediocre version of many jobs can be replicated in a chat window. It’s only going to continue. “AI is positioned to continuously disrupt itself,” said McConnell. “It's going to be a constant disruption. If that's true, then all of the dollars going to companies today are at risk because those companies may be disrupted by some new technology that's just around the corner.” First up on the list of disruption targets: startups. If you’re looking to get from zero to market fit, you don’t need to build the same kind of team you used to. “Think about the ratios between how many engineers there are to salespeople,” said Tunguz. “We knew what those were for 10 or 15 years, and now none of those ratios actually hold anymore. If we really are in a position where a single person can have the productivity of 25, management teams look very different. Hiring looks extremely different.” That’s not to say there won’t be a need for real human coders. We’ve seen how badly the vibe-coding entrepreneurs get dunked on when they put their shoddy apps in front of a merciless internet. 


Why Windows Just Became Disruptible in the Agentic OS Era

Identity is where the cracks show early. Traditional Windows environments assume a human logging into a device, launching applications, and accessing resources under their account. Entra ID and Active Directory groups, role-based access control across Microsoft 365, and Conditional Access policies all grew out of that pattern. An agentic environment forces a different set of questions. Who is authenticated when an agent books a conference room, issues a purchase order draft, or requests a sensitive dataset? How should policy cope with agents that mix personal and organizational context, or that act for multiple managers across overlapping projects? What happens when an internal agent needs to negotiate with an external agent that belongs to a partner or supplier? ... Agentic systems improve as they see more behavior. Early customers who allow their interactions, decisions, and corrections to be observed become de facto trainers for the platform. That creates a race to capture training data, not just market share. The same is true for the user experience. How people “vibe reengineer” processes isn’t optimized yet. The vendor that gets that experience right will empower AI-savvy users in new ways, and deep knowledge about those emerging processes will be hard to copy. It is likely, however, that more than one approach will emerge, which will set up the next round of competition.
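The identity questions the passage raises can be made concrete with a small sketch. This is illustrative only (not an Entra ID or Active Directory API): an agent carries explicit on-behalf-of delegations, and an action is authorized only if some delegating principal both granted the scope and actually holds the permission. All names and the permission table below are hypothetical.

```python
# Illustrative sketch of delegated agent authorization.
# Assumption: agents act on behalf of one or more human principals,
# and every action must be attributable to a specific principal.
from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: str
    delegations: dict = field(default_factory=dict)  # principal -> granted scopes

PERMISSIONS = {  # hypothetical RBAC table: principal -> allowed actions
    "alice@corp.example": {"book_room", "draft_po"},
    "bob@corp.example": {"book_room"},
}

def authorize(agent: Agent, action: str):
    """Return the principal the action is attributed to, or None if denied.

    The agent may serve multiple managers; the check finds a delegating
    principal who both granted this scope and holds the permission, so an
    audit log can record the principal AND the agent_id together.
    """
    for principal, scopes in agent.delegations.items():
        if action in scopes and action in PERMISSIONS.get(principal, set()):
            return principal
    return None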


SaaS attacks surge as boards turn to AI for defence

"SaaS security, together with concerns around the secure use of AI, moved from a niche security initiative to a boardroom imperative. The 2025 Verizon Data Breach Investigations Report (DBIR) called out a doubling of breaches involving third-party applications stemming from misconfigured SaaS platforms and unauthorized integrations, particularly those exploited by threat actors through scanning and credential stuffing," said Soby, Co-founder and Chief Technology Officer, AppOmni. ... "Security technologies leveraging AI agents have the potential to move the industry closer towards security operations autonomy. In fact, we're seeing innovative advancements there, especially in the development of SOC AI agents," said Ruzzi, Director of AI, AppOmni. She highlighted the Model Context Protocol, an emerging technical standard, as a mechanism that can act as a universal adapter between AI models and external systems. ... She warned that AI agents still face challenges when dealing with large and complex data sets. "But organizations need to look beyond the AI hype of agents to implement the technology in a way that will be truly useful for them. Handling large volumes of complex data still presents a challenge here. Agents are most useful when assigned to perform a targeted task that handles smaller volumes of simpler data," said Ruzzi.
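The "universal adapter" role attributed to the Model Context Protocol can be sketched in miniature. To be clear, this is NOT the MCP SDK or wire format, just the shape of the pattern: external systems register tools behind one uniform interface, so an agent invokes any of them the same way. The tool name and its return value below are made up for illustration.

```python
# Sketch of a uniform tool-adapter pattern (illustrative, not the MCP SDK):
# each external system registers a callable under a name, and the agent
# goes through one interface regardless of which system it is calling.
from typing import Callable

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., object]] = {}

    def register(self, name: str, fn: Callable[..., object]) -> None:
        self._tools[name] = fn

    def call(self, name: str, **kwargs) -> object:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

registry = ToolRegistry()
# Hypothetical tool: a SaaS posture monitor exposing an event count.
registry.register("count_saas_events", lambda app: {"app": app, "events": 3})
```

The value of a standard here is that the model-facing side never changes: adding a new SaaS integration is a `register` call, not a bespoke connector.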


Why CIOs must lead AI experimentation, not just govern it

The role of IT leadership is undergoing a profound transformation. We were once the gatekeepers of technology. Then came SaaS, which began to democratize technology access, putting powerful tools directly into the hands of employees. AI represents an even more significant shift. It can feel intimidating, and as leaders, we have a crucial responsibility to demystify it and make it accessible. Much like the dot-com boom, we're witnessing a transformative moment, and IT leaders must harness this potential to drive innovation. ... The key to successful AI adoption is fostering a culture of learning and experimentation. Employees at all levels, whether developers or non-developers, executives or individual contributors, must have the opportunity to get their hands on AI tools and understand how they work. Some companies are having employees train AI models and learn prompt engineering, which is a fantastic way to remove the mystery and show people how AI truly functions. We’re encouraging our own teams to write prompts and train chatbots, aiming for AI to become a true copilot in their daily tasks. Think of it as akin to an athlete who trains consistently, refining their skills to achieve better results. That’s the feeling we want our employees to have with AI — a tool that makes their work faster, better and, ultimately, more meaningful and joyful. My own mother’s relationship with her voice assistant, which has become an integral part of her life, is a simple reminder of how seamlessly technology can integrate when it’s genuinely helpful.


AI, fraud and market timing drive biometrics consolidation in 2025 … and maybe 2026

Fraud has overwhelmed organizations of all kinds, and Verley emphasizes the degree to which this has pulled enterprise teams and market players in adjacent areas together. AI has contributed to this wave of fraud in several important ways. The barrier to entry has been lowered, and forgeries are now scalable in a way cybercriminals could only have dreamed of just a few years ago. The proliferation of generative AI tools has also changed the state of the art in biometric liveness detection, with injection attack detection (IAD) now table stakes for secure remote user onboarding the way presentation attack detection (PAD) has been for the last several years. ... Reducing fraud is part of the motivation behind the EU Digital Identity Wallet, which launches in the year ahead and ties digital IDs to government-issued biometric documents with electronic chips. “That’s going to mean a huge uptick in onboarding people to issue them these new credentials that are going to be big in identity verification, and that’s going to be the best way to do that,” Goode says. At the same time, businesses that had no choice but to pay for identity services during the pandemic now have more choice, Verley says. So providers are emphasizing fraud protection to justify the value of their products. ... Uncertainty is a central feature of the AI market landscape, and Goode notes the possibility that if predictions of the AI market popping like a bubble in 2026 come true, restricted credit availability “could put a damper on acquisitions.”


Why Strategic Planning Without CIOs Fails

For large IT projects exceeding $15 million in initial budget, the research found average cost overruns of 45%, value delivery 56% below predictions, and 17% of projects becoming black swan events with cost overruns exceeding 200%, sometimes threatening organizational survival. These outcomes are not random. BCG's 2024 research surveying global C-suite executives across 25 industries found that organizations including technology leaders from the start of strategic initiatives achieve 154% higher success rates than those that do not. When CIOs enter after critical decisions are made, organizations discover mid-execution that constraints render promised features impossible, integration requirements multiply beyond projections, and vendor capabilities fail to match sales promises. Direct project costs pale beside the accumulated burden of technical debt. ... Gartner’s 2025 CIO Survey (released October 2024), which surveyed over 3,100 CIOs and technology executives, revealed that only 48% of digital initiatives meet or exceed their business outcome targets. However, Digital Vanguard CIOs, who co-own digital delivery with business leaders, achieve a 71% success rate. That 48% relative improvement (from a 48% to a 71% success rate) represents the difference between coin-flip odds and a reliable strategic advantage. Failed transformations do not merely waste money. They consume organizational capacity that could deliver value elsewhere.
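The two success-rate figures can look inconsistent at first glance (71% is only 23 points above 48%), so it is worth checking the arithmetic behind the "48% improvement" claim, which is relative rather than absolute:

```python
# Checking the figure in the passage: 48% -> 71% success rate.
baseline, vanguard = 0.48, 0.71
absolute_gain = vanguard - baseline              # 23 percentage points
relative_improvement = absolute_gain / baseline  # (71 - 48) / 48
print(round(relative_improvement, 3))  # → 0.479, i.e. ~48% relative improvement
```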


Top 3 Reasons Why Data Governance Strategies Fail

Clearly, data governance is policy, not a solution. It nests within any organization that has deployed business analytics as part of its overall strategy – in fact, one reason data governance fails is that it is not aligned with the enterprise’s business strategy. Governance is about ensuring the proper implementation of business rules and controls around your organization’s data. It requires the wholehearted participation of all company departments, especially IT and business; any attempt to run it in a vacuum or silo means it is doomed from the start. ... A well-thought-out data governance plan must have a governing body and a defined set of procedures, with a plan to execute them. To begin with, one has to identify the custodians of an enterprise’s data assets. Accountability is key here. The policy must determine who in the system is responsible for various aspects of the data, including quality, accessibility, and consistency. Then come the processes: a set of standards and procedures must be defined and developed for how data is stored, backed up, and protected. Not to be left out, a good data governance plan must also include an audit process to ensure compliance with government regulations. ... If an enterprise does not know where it’s headed with its data governance plan, reflected in black and white, it’s bound to stutter. Things like targets achieved, dollars saved, and risks mitigated need to be measured and recorded.