Daily Tech Digest - January 05, 2026


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe



How to make AI agents reliable

Easier said than done. After all, the way genAI works, we’re trying to build deterministic software on top of probabilistic models. Large language models (LLMs), cool though they may be, are non-deterministic by nature. Chaining them together into autonomous loops amplifies that randomness. If you have a model that is 90% accurate, and you ask it to perform a five-step chain of reasoning, your total system accuracy drops to roughly 59%. That isn’t an enterprise application; it’s a coin toss—and that coin toss can cost you. Whereas a coding assistant can suggest a bad function, an agent can actually take a bad action. ... Breunig highlights “context poisoning” as a major reliability killer, where an agent gets confused by its own history or irrelevant data. We tend to treat the context window like a magical, infinite scratchpad. It isn’t. It is a database of the agent’s current state. If you fill that database with garbage (unstructured logs, hallucinated prior turns, or unauthorized data), you get garbage out. ... Finally, we need to talk about the user. One reason Breunig cites for the failure of internal agent pilots is that employees simply don’t like using them. A big part of this is what I call the rebellion against robot drivel. When we try to replace human workflows with fully autonomous agents, we often end up with verbose, hedging, soulless text, and it’s increasingly obvious to the recipient that AI wrote it, not you. And if you can’t be bothered to write it, why should they bother to read it?
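The arithmetic behind that 59% figure is just independent probabilities compounding. A quick sketch, assuming each step in the chain succeeds independently with the same probability:

```python
def chain_accuracy(step_accuracy: float, steps: int) -> float:
    """Probability an n-step agent chain succeeds end to end, assuming each
    step succeeds independently with the same per-step accuracy."""
    return step_accuracy ** steps

# The article's example: a 90%-accurate model across five chained steps.
print(round(chain_accuracy(0.90, 5), 2))  # 0.59
```

Ten steps at the same per-step accuracy drop the chain below 35%, which is why reliability work focuses on shortening loops and checkpointing intermediate state rather than only improving the model.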


Three cybersecurity predictions that will define the CISO agenda in 2026

Different tools report different versions of “critical” risk. One team escalates an issue while another deprioritises it based on alternative scoring models. Decisions become subjective, slow and inconsistent without a coherent strategy - and critical attack paths remain open. If cyber risk is not presented consistently in the context of business impact, it’s nearly impossible to align cybersecurity with broader business objectives. In 2026, leaders will no longer tolerate this ambiguity. Boards and executives don’t want more dashboards. ... Social engineering campaigns are already more convincing, more personalised and harder for users to detect. Messages sound legitimate. Voices and content appear authentic. The line between real and fake is blurring at scale. In 2026, mature organisations will take a more disciplined approach. They will map AI initiatives to business objectives, identify which revenue streams and operational processes depend on them, and quantify the value at risk. This allows CISOs to demonstrate where existing investments meaningfully reduce exposure — and where they don’t — while maintaining operational integrity and trust. ... AI agents will take over high-volume, repetitive tasks — continuously analysing vast streams of telemetry, correlating signals across environments, and surfacing the handful of risks that truly matter. They will identify the needle in the haystack. Humans will remain firmly in the loop. 


The Hidden Costs of Silent Technology Failures

"Most CIOs see failures as negative experiences that undermine their credibility, effectiveness and ultimate growth within the organization," Koeppel said. Under those conditions, escalation is rationally delayed. CIOs attempt recovery first, including new baseline plans, renegotiations of vendor commitments and a narrower scope before formally declaring failure. ... CIOs, Dunkin noted, frequently underplay failure to shield their teams from blame. Few leaders want finger-pointing to cascade through already strained organizations. But Dunkin pointed out that the same instincts are shaped by fear of job loss, budget erosion or internal power shifts. And, she warns, bad news does not age well. Beyond politics and incentives, decision-making psychology compounds the problem. Jim Anderson, founder of Blue Elephant Consulting, describes how sunk-cost bias distorts executive judgment. Admitting a mistake publicly opens leaders to criticism, so past decisions are defended rather than reassessed. ... But not all organizations respond this way. Koeppel said that in his experience, boards and CEOs are receptive to clear, concise explanations when technology initiatives deviate from plan. Over time, disclosure improves because consequences change. Sethi described the shift to openness that followed a major outage in one organization. It resulted in mandatory, blameless post-mortem reviews that focused on systemic and process breakdowns rather than individual fault.


2026 Low-Code/No-Code Predictions

The promise of low-code platforms will finally materialize by the end of 2026. AI will let business users create bespoke applications without writing code, while professional developers guide standards, security, and integration. The line between "developer" and "user" will blur as agentic systems become part of daily work. ... No code's extinction: No code's on its last legs — it's being snuffed out by vibe coding. AI-driven development tools will be the final death knell for no code as we know it, with its remit curtailed in this new coding landscape. In this future, the focus will transition entirely to model orchestration and high-level knowledge work, where humans express their intent and expertise through abstract models rather than explicit code. The human role becomes centered on the plan to build: specifically, ensuring the problem is correctly scoped and defined. ... In 2026, low-code/no-code interfaces will rapidly shift from drag-and-drop canvases to natural language interfaces, as user expectations adapt to the changing landscape. As this transition occurs, application vendors will struggle to provide transparency into how the application has interpreted the users' intent. ... While it's proved remarkable for supercharging development speed and allowing non-technical individuals to produce functional software, its outputs are less than perfect. This year, we've continued to uncover that much AI-generated code turns out fragile or flat-out wrong once it faces real workflows or customers.


AI security risks are also cultural and developmental

The research shows that AI systems increasingly shape cultural expression, religious understanding, and historical narratives. Generative tools summarize belief systems, reproduce artistic styles, and simulate cultural symbols at scale. Errors in these representations influence trust and behavior. Communities misrepresented by AI outputs disengage from digital systems or challenge their legitimacy. In political or conflict settings, distorted cultural narratives contribute to disinformation, polarization, and identity-based targeting. Security teams working on information integrity and influence operations encounter these risks directly. The study positions cultural misrepresentation as a structural condition that adversaries exploit rather than an abstract ethics issue. ... Systems designed with assumptions of reliable connectivity or standardized data pipelines fail in regions where those conditions do not hold. Healthcare, education, and public service applications show measurable performance drops when deployed outside their original development context. These failures expose organizations to cascading risks. Decision support tools generate flawed outputs. Automated services exclude segments of the population. Security monitoring systems miss signals embedded in local language or behavior. ... Models operate on statistical patterns and lack awareness of missing data. Cultural knowledge, minority histories, and local practices often remain absent from training sets. This limitation affects detection accuracy. 


The Board’s Duty in the Age of the Black Box

Today, when this Board approves the acquisition of a Generative AI startup or authorizes a billion-dollar investment in GPU infrastructure, you are acquiring a Black Box. You are purchasing a system defined not by logical rules, but by billions of specific weights, biases, and probabilistic outcomes. These systems are inherently unstable; they “hallucinate,” they drift, and they contain latent biases that no static audit can fully reveal. They are closer to biological organisms than to traditional software. ... Critics may argue that applying financial volatility models to operational AI risk is a conceptual leap. There is no perfect mathematical bridge between “Model Drift” and “WACC” (Weighted Average Cost of Capital). However, in the absence of a liquid market for “Algorithm Liability Insurance” or standardized auditing protocols, the Board must rely on empirical proxies to gauge risk. ... The single largest destroyer of capital in the current AI cycle is the misidentification of a “Wrapper” as a “Moat.” The Board must rigorously interrogate the strategic durability of the asset. ... The Risk Committee’s role is shifting from passive monitoring to active defense. The risks associated with AI are “Fat-Tailed”—meaning that while day-to-day operations might be smooth, the rare failure modes are catastrophic. ... For the Chief Information Officer (CIO), the concept of “Model Risk” translates directly into operational reality. It is critical to differentiate between “Valuation Risk” and “Maintenance Cost.”
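The "fat-tailed" point can be made concrete with a toy simulation (all numbers here are illustrative, not from the article): most days incur a small, smooth operational loss, but a rare failure mode can dominate the total on its own.

```python
import random

def simulate_losses(n_days: int, tail_prob: float, normal_loss: float,
                    catastrophic_loss: float, seed: int = 0) -> list[float]:
    """Toy fat-tailed loss model: most days incur a small operational loss,
    but with small probability a single catastrophic failure occurs."""
    rng = random.Random(seed)
    return [catastrophic_loss if rng.random() < tail_prob else normal_loss
            for _ in range(n_days)]

losses = simulate_losses(n_days=1000, tail_prob=0.001,
                         normal_loss=1.0, catastrophic_loss=10_000.0)
# If even one tail event occurs, it dwarfs the sum of every "smooth" day:
print(sum(losses), max(losses))
```

This is why averages and day-to-day dashboards understate the risk: the mean is driven almost entirely by whether the rare event happened in the observed window.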


Cybersecurity leaders’ resolutions for 2026

Any new initiative will start with a clear architectural plan and a deep understanding of end-to-end dependencies and potential points of failure. “By taking a thoughtful, engineering-driven approach — rather than reacting to outages or disruptions — we aim to strengthen the stability, scalability, and reliability of our systems,” he says. “This foundation enables the business to move with confidence, knowing our technology and security investments are built to endure and evolve.” ... As new attack surfaces emerge with AI-driven applications and systems, Piekarski’s priorities will focus on defending and hardening the environment against AI-enabled threats and tactics.  ... In practice, SaaS management and discovery tools will be used to get a handle on shadow IT and unsanctioned AI usage. Automation for compliance and reporting will be important as customer and regulatory requirements around ESG and security continue to grow, along with threat intelligence feeds and vulnerability management solutions that help Gallagher and the team stay ahead of what’s happening in the wild. “The common thread is visibility and control; we need to know what’s in our environment, how it’s being used, and that we can respond quickly when things change,” he tells CSO. ... “Quantum computing poses significant cyber risks by potentially breaking current encryption methods, impacting data security, and enabling new attack vectors,” says Piekarski.


Enterprise Digital Twin: Why Your AI Doesn’t Understand Your Organization

Agentic AI systems are moving from research papers to production pilots, taking critical business actions such as processing invoices, scheduling meetings, drafting communications, and coordinating workflows across teams. They operate with increasing autonomy. When an agent misunderstands organizational context, it does not just give a wrong answer. It takes wrong actions, such as approving expenses that violate policy, scheduling meetings with people who should not be in the room, routing decisions to the wrong authority, and creating compliance exposure at machine speed. The industry is catching up to this reality. ... An AI system reviewing a staffing request might confirm that the budget exists, the policy allows the hire, and the hiring manager has authority. All technically correct. But without Constraint Topology, the system does not know that HR cannot process new hires until Q2 due to a systems migration, that the only approved vendor for background checks has a six-week backlog, or that three other departments have competing requisitions for the same job grade and only two can be filled this quarter. ... Most AI frameworks focus on making models smarter. CTRS focuses on making organizations faster. Technically correct outputs that do not translate into action are not actually useful. The bottleneck is not AI capability. It is the distance between what AI recommends and what the organization can execute.
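The staffing example above can be sketched in code. Everything here (the class names, the constraint fields) is hypothetical; it only illustrates how a request that passes every formal check can still collide with operational constraints the AI never sees:

```python
from dataclasses import dataclass, field

@dataclass
class StaffingRequest:
    department: str
    job_grade: str
    budget_ok: bool
    policy_ok: bool
    authority_ok: bool

@dataclass
class ConstraintTopology:
    """Hypothetical registry of operational constraints that sit outside the
    formal budget/policy/authority checks: migration freezes, vendor
    backlogs, shared requisition quotas, and the like."""
    blocked_until: dict = field(default_factory=dict)    # e.g. {"HR": "Q2"}
    quota_remaining: dict = field(default_factory=dict)  # e.g. {"grade-7": 2}

    def blockers(self, req: StaffingRequest) -> list[str]:
        issues = []
        if req.department in self.blocked_until:
            issues.append(f"{req.department} cannot process until "
                          f"{self.blocked_until[req.department]}")
        if self.quota_remaining.get(req.job_grade, 1) <= 0:
            issues.append(f"no remaining {req.job_grade} requisitions this quarter")
        return issues

topo = ConstraintTopology(blocked_until={"HR": "Q2"},
                          quota_remaining={"grade-7": 0})
req = StaffingRequest("HR", "grade-7", budget_ok=True, policy_ok=True,
                      authority_ok=True)
# Every formal check passes, yet two operational constraints still block it:
print(topo.blockers(req))
```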


The agentic infrastructure overhaul: 3 non-negotiable pillars for 2026

If 2025 was about the brain (the LLM), 2026 must be about the nervous system. You cannot bolt a self-correcting, multi-step agent onto a 2018 ERP and expect it to function. To move from isolated pilots to enterprise-wide autonomous workflows, we must overhaul our architectural blueprint. We are moving from a world of rigid, synchronous commands to a world of asynchronous, event-driven fluidity. ... We build dashboards with red and green lights so a DevOps engineer can identify a spike in latency. However, an AI agent cannot “look” at a Grafana dashboard. If an agent encounters an error mid-workflow, it needs to understand why in a format it can digest. ... Stop “bolting on” agents to legacy REST APIs. Instead, build an abstraction layer — an “agent gateway” — that converts synchronous legacy responses into asynchronous events that your agents can subscribe to. ... The old mantra was “Data is the new oil.” In 2026, data is just the raw material; Metadata is the fuel. Businesses have spent millions “cleaning” data in Snowflake and lakes, but clean data lacks the intent that agents require to make decisions. ... Invest in a data catalog that supports semantic tagging. Ensure your data engineers are not just moving rows and columns, but are defining the “meaning” of those rows in a way that is accessible via your RAG pipelines. ... The temptation in 2026 will be to build “bespoke” agents for every department — an HR agent, a finance agent, a sales agent. This is a recipe for a new kind of “shadow IT” and massive technical debt.
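The "agent gateway" idea can be sketched with Python's standard library; the class and event names below are invented for illustration. A blocking legacy call is wrapped in a worker thread and republished as a normalized event that subscribed agents consume asynchronously:

```python
import queue
import threading

def legacy_erp_call(order_id: str) -> dict:
    """Stand-in for a blocking legacy REST call (hypothetical)."""
    return {"order_id": order_id, "status": "shipped"}

class AgentGateway:
    """Minimal sketch of an 'agent gateway': wraps synchronous legacy
    responses and republishes them as events agents can subscribe to."""
    def __init__(self):
        self.subscribers: list[queue.Queue] = []

    def subscribe(self) -> queue.Queue:
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def request(self, order_id: str) -> None:
        def worker():
            result = legacy_erp_call(order_id)           # blocking call happens here
            event = {"type": "order.updated", **result}  # normalized, agent-digestible event
            for q in self.subscribers:
                q.put(event)
        threading.Thread(target=worker, daemon=True).start()

gateway = AgentGateway()
inbox = gateway.subscribe()
gateway.request("A-1001")
print(inbox.get(timeout=2))  # the agent consumes the event asynchronously
```

In production this role is usually played by a message broker rather than in-process queues, but the shape is the same: the agent never waits on the legacy system directly.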


The New Front Line Of Digital Trust: Deepfake Security

AI-generated deepfakes are ruining the way we perceive one another, as well as undermining institutions’ ways of ensuring identity, verifying intent and maintaining trust. For CISOs and IT security risk leaders, this is a new and pressing frontier for us to focus on: defending against attacks not on systems but on beliefs. ... Deepfakes are coming to the forefront just as CISOs have more risk to manage than ever. Here are some of the other key pressures driving the financial cybersecurity environment today: Multicloud misconfigurations and API exposure; Ransomware shift to triple extortion; Expanding third-party and fourth-party dependencies; Insider threats facing hybrid workforces; Barriers to zero-trust implementation; and Regulatory fragmentation. ... Deepfake security isn’t a fringe issue anymore; it’s now a foremost challenge to digital trust and systemic financial resilience. In today’s world, where synthetic voices can create markets and fake identities can trigger transactions, authenticity reigns as the currency of banking. Tomorrow’s front-runners will be those building the next-generation financial systems—secured, transparent and globally trusted. Those systems will include reconfigured trust frameworks, deepfake detection, AI governance that drives model integrity and a resilient-by-design approach. In this world, where anyone can create an AI-generated identity, the ultimate competitive differentiator is proving what’s real.

Daily Tech Digest - January 03, 2026


Quote for the day:

“Some people dream of great accomplishments, while others stay awake and do them.” -- Anonymous


Cloud costs now No. 2 expense at midsize IT companies behind labor

The Cloud Capital survey shows midsize IT vendor CFOs and their CIO partners struggling to contain cloud spending, with significant cost volatility from month to month. Three-quarters of IT org CFOs report cloud spending forecasts varying between 5% and 10% of company revenues each month, Pingry notes. Costs of AI workloads are harder to predict than traditional SaaS infrastructure, Pingry adds, and organizations running major AI workloads are more likely to report margin declines tied to cloud spending than those with moderate AI exposure. “Training spikes, usage-driven inference, and experimentation noise introduce non-linear patterns that break the forecasting assumptions finance relies on,” says a report from Cloud Capital. “The challenge will intensify as AI’s share of cloud spend continues scaling.” ... Cloud services in themselves aren’t inherently too expensive, but many organizations shoot themselves in the foot through unintentional consumption, Clark adds. “Costs rise when the system is built without a clear understanding of the value it is meant to deliver,” he adds. ... “No CxO wants to explain to the board why another company used AI to leap ahead,” Clark adds. “This has created a no-holds-barred spending spree on training, inference, and data movement, often layered on top of architectures that were already economically incoherent.”


Securing Integration of AI into OT Technology

For critical infrastructure owners and operators, the goal is to use AI to increase efficiency and productivity, enhance decision-making, save costs, and improve customer experience – much like digitalization. However, despite the many benefits, integrating AI into operational technology (OT) environments that manage essential public services also introduces significant risks – such as OT process models drifting over time or safety-process bypasses – that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure. ... Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle. ... Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration. ... Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance. ... Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans. The agencies said critical infrastructure owners and operators should review this guidance so they can safely and securely integrate AI into OT systems.


Rethinking Risk in a Connected World

As consumer behavior data proliferates and becomes increasingly available, it presents both an opportunity and a challenge for actuaries, Samuell says. Actuaries have the opportunity to better align expected and actual outcomes, while also facing the challenge of accounting for new sources of variability that traditional data does not capture. ... Keep in mind that incorporating behavioral factors into risk models does not guarantee certainty. A customer whom the model predicts to be at high risk of dishonesty may actually act honestly. “Ethical insurers must avoid treating predictive categories as definitive labels,” Samuell says. “Operational guidelines should ensure that all customers are treated with fairness and dignity, even as insurers make better use of available data.” ... Behavioral analytics is also changing how insurers engage with their customers. For example, by understanding how policyholders interact with digital platforms—including how often they log in, which features they use, and where they disengage—insurers can identify friction points and design more intuitive, personalized services. ... Consumer behavior data can also inform communication strategies for insurers. For example, “actuaries often want to be very precise, but data shows that can diminish comprehension of communications,” Stevenson says. ... In addition to data generated by insured individuals through technology, some insurance companies also use data from government and other sources in risk modeling. 


Inside the Cyber Extortion Boom: Phishing Gangs and Crime-as-a-Service Trends

Phishing attempts are growing in volume partly because organized crime groups no longer need technical knowledge to launch ransomware or other forms of cyber extortion: they can simply buy in the services they need. This ongoing trend is combined with emerging social engineering techniques, including multi-channel attacks, deepfakes and ClickFix exploits. Cybercriminals are also using AI to fine-tune their operations, with more persuasive personalization, better translation into other languages and easier reconnaissance against high-value targets. It is becoming harder to detect and block attacks, and harder to train workforces to spot suspicious activity. ... “AI has increased the accuracy of a lot of phishing emails. Everybody was familiar with phishing emails; you could spot them by the bad grammar, the poor formatting and stuff like that. Previously, a good attacker could create a good phishing email. All AI has done is allow the attacker to generate good-quality phishing emails at speed and at scale,” explained Richard Meeus, EMEA director of security strategy and technology at Akamai. ... For CISOs and wider cybersecurity and fraud prevention teams, recent developments in phishing and cyber extortion schemes will pose real challenges in the coming year. “User awareness still matters, but it isn’t enough,” cautioned Forescout’s Ferguson. “In a world of deepfake video, cloned voices and perfect written English, your control point can’t be ‘would our users spot this?’”


AI Fatigue: Is the backlash against AI already here?

The problem of AI fatigue is inevitable, but also to be expected, according to Dr Clare Walsh, director of education at the Institute of Analytics (IoA). “For those working in digital long enough, they know there is always a period after the initial excitement at the launch of a new technology when ordinary users start to see the costs and limitations of the latest technologies,” she says. “After 10 years of non-stop exciting advancements – from the first neural nets in 2016 to RAG solutions today – we may have forgotten this phase of disappointment was coming. It doesn’t negate the potential of AI technology – it is just an inevitable part of the adoption curve.” ... Holding back the tide of AI fatigue is also about not presenting it as the only solution to every problem, warns Claus Jepsen, Unit4’s CTO. “It is absolutely critical the IT team is asking the right questions and thoroughly interrogating the brief from the business,” he explains. “Quite often, AI is not the right answer. If you foist AI onto the business when they don’t want or need it, you’ll get a backlash. You can avoid the threat of AI fatigue if you listen carefully to your team and really appreciate how they want to interact with technology, where its use can be improved, and where it adds absolutely no value.” ... “AI fatigue is not just a productivity issue; it is a board-level risk,” she says. “When workflows are interrupted, or systems overlap, trust in technology erodes, driving disengagement, errors, and higher attrition. ...”


Why Cybersecurity Risk Management Will Continue to Increase in Complexity in 2026

The year 2026 ushers in tougher rules across regions and industries. Compliance pressure continues to build from multiple directions. By 2026, sector-specific and regional rules will grow tighter, from NIS2 enforcement across Europe to updated PCI DSS controls, alongside firmer privacy and AI oversight. Privacy laws continue tightening while new AI regulations add requirements around algorithmic transparency and data handling. Organizations are now juggling NIST frameworks, ISO 27001 certifications, and sector-specific mandates simultaneously. Each framework arrives with a valid intent, yet together they create layers of obligation that rarely align cleanly. This tension surfaced clearly in 2025, when more than forty CISOs from global enterprises urged the G7 and OECD to push for closer regulatory coordination. Their message was simple. Fragmented rules drain limited security resources and weaken collective response. ... The majority of organizations no longer run security in isolation. Daily operations depend on cloud providers, managed service partners, niche SaaS tools, and open-source libraries pulled into production without much ceremony. The problem keeps compounding: your vendors have their own vendors, creating chains of dependency that stretch impossibly far. You can secure your own network perfectly and still get breached because a third-party contractor left credentials exposed.


Seven steps to AI supply chain visibility — before a breach forces the issue

NIST’s AI Risk Management Framework, released in 2023, explicitly calls for AI-BOMs as part of its “Map” function, acknowledging that traditional software SBOMs don’t capture model-specific risks. But software dependencies resolve at build time and stay fixed. Conversely, model dependencies resolve at runtime, often fetching weights from HTTP endpoints during initialization, and mutate continuously through retraining, drift correction, and feedback loops. LoRA adapters modify weights without version control, making it impossible to track which model version is actually running in production. ... AI-BOMs are forensics, not firewalls. When ReversingLabs discovered nullifAI-compromised models, documented provenance would have immediately identified which organizations downloaded them. That’s invaluable to know for incident response, while being practically useless for prevention. Budgeting for protecting AI-BOMs needs to take that factor into account. The ML-BOM tooling ecosystem is maturing fast, but it's not where software SBOMs are yet. Tools like Syft and Trivy generate complete software inventories in minutes. ML-BOM tooling is earlier in that curve. Vendors are shipping solutions, but integration and automation still require additional steps and more effort. Organizations starting now may need manual processes to fill gaps. AI-BOMs won't stop model poisoning as that happens during training, often before an organization ever downloads the model.
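Where tooling gaps force manual processes, even a minimal hand-rolled inventory entry that pins a model artifact by content hash gives incident responders something to work with. The field names below are illustrative, not a formal CycloneDX/ML-BOM schema:

```python
import hashlib
import json
import tempfile
from pathlib import Path

def model_bom_entry(path: Path, source_url: str) -> dict:
    """Minimal manual ML-BOM entry: pin a model artifact by content hash so
    post-incident forensics can tell which weights were actually deployed.
    Field names are illustrative, not a formal CycloneDX schema."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {"name": path.name, "source": source_url, "sha256": digest}

# Demonstrate with a stand-in artifact file:
with tempfile.TemporaryDirectory() as tmp:
    artifact = Path(tmp) / "model.bin"
    artifact.write_bytes(b"stand-in weights")
    print(json.dumps(model_bom_entry(artifact, "https://example.org/models/demo"),
                     indent=2))
```

This is exactly the forensics-not-firewalls trade-off the article describes: the hash will not stop a poisoned model from being downloaded, but it makes "which version was running" answerable after the fact.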


Power, compute, and sovereignty: Why India must build its own AI infrastructure in 2026

Digital infrastructure decisions made in 2026 will shape India’s technological posture well into the 2040s. Data centers, power systems, and AI platforms are not short-cycle investments; they are multi-decade commitments. In this context, policy clarity becomes a prerequisite for execution rather than an afterthought. Clear, stable frameworks around data governance, AI regulation, cross-border compute flows, and energy integration reduce long-term risk and enable infrastructure to be designed correctly the first time. Ambiguity forces fragmentation: capital hesitates, architectures become reactive, and systems are retrofitted instead of engineered. As India accelerates its AI ambitions, predictability in policy will be as important as speed in deployment. ... In India’s context, sovereignty does not imply isolation. It implies resilience. Compliance, data residency, and AI governance cannot be retrofitted into infrastructure after it is built. They must be embedded from inception, governing where data resides, how it moves, how workloads are isolated, audited, and secured, and how infrastructure responds to evolving regulatory expectations. Systems designed this way reduce friction for enterprises operating in regulated environments and provide governments with confidence in domestic digital capability. This reality also reframes the role of domestic technology firms.


Why AI Risk Visibility Is the Future of Enterprise Cybersecurity Strategy

Vulnerabilities arise from two sources: internal infrastructure and third-party tools that companies rely on. Organizations typically have stronger control over internally developed systems. The complexity stems from third-party software that introduces new risks whenever a new version or patch is released. A comprehensive asset inventory is essential for documenting the software and hardware resources in use. Once the enterprise knows what it has, it can evaluate which systems pose the highest risk. Asset management, infrastructure, and information security teams, along with audit functions, all contribute to that assessment. Together, they can determine where remediation must occur first. Cloud service providers are responsible for cloud-based Software as a Service (SaaS) applications. It’s vital, however, for the company to take on data governance and service offboarding responsibilities. Contracts must clearly specify how data is handled, transferred, or destroyed at the end of the relationship. ... Alignment between business and IT leadership is essential. The chief information officer (CIO) approves the IT project kickoff and allocates the required budget and other resources. The business analysis team translates those needs into technical requirements. Quarterly scorecards and governance checkpoints create visibility, enabling leaders to make decisions that balance business outcomes and technical realities.
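Once the inventory exists, the prioritization step described above can be as simple as a scoring pass over it. The weights below are hypothetical, purely to illustrate ranking assets by risk from an inventory:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    third_party: bool        # third-party software adds patch-cadence risk
    unpatched_vulns: int
    business_critical: bool

def risk_score(a: Asset) -> int:
    """Illustrative scoring: the weights are invented for this sketch,
    not taken from the article or any standard."""
    return (a.unpatched_vulns * 2
            + (3 if a.third_party else 0)
            + (5 if a.business_critical else 0))

inventory = [
    Asset("internal-billing", third_party=False, unpatched_vulns=1, business_critical=True),
    Asset("vendor-crm", third_party=True, unpatched_vulns=4, business_critical=True),
    Asset("wiki", third_party=True, unpatched_vulns=2, business_critical=False),
]
# Remediate highest-scoring assets first:
for a in sorted(inventory, key=risk_score, reverse=True):
    print(a.name, risk_score(a))
```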


Why are IT leaders optimistic about future AI governance?

IT leaders are optimistic about AI’s transformative potential. This optimism extends to AI governance, where the strategic integration of NHI management enhances security and enables organizations to confidently pursue AI initiatives. It’s essential to ensure that security measures evolve alongside technological advancements, safeguarding AI systems without stifling innovation. ... Can robust security and innovation coexist harmoniously? The answer lies in striking a balance between rigorous security measures and fostering an environment conducive to innovation. Properly managing NHIs equips organizations with the flexibility to innovate while maintaining a fortified security posture. As artificial intelligence and automation advance, machine identities play an increasingly pivotal role in enabling these technologies. By ensuring that machine interactions are secure and transparent, businesses can confidently explore the transformative potential of AI without compromising on security. Herein lies the essence of responsible AI governance: leveraging data-driven insights to enable ethical and sustainable technological growth while safeguarding against inherent risks. ... What can organizations do to harness the collective expertise of stakeholders? With cyber threats growing increasingly sophisticated, collaboration becomes the cornerstone of a resilient cybersecurity framework.

Daily Tech Digest - January 02, 2026


Quote for the day:

“If your ship doesn’t come in, swim out to meet it!” -- Jonathan Winters



Delivering resilience and continuity for AI

Think of it as technical debt, suggests IDC group VP Daniel Saroff, as most enterprises underestimate the strain AI puts on connectivity and compute. Siloed infrastructure won’t deliver what AI needs, and CIOs need to think about these and other things in a more integrated way to make AI successful. “You have to look at your GPU infrastructure, bandwidth, network availability, and connectivity between respective applications,” he says. “If you have environments not set up for highly transactional, GPU-intensive environments, you’re going to have a problem,” Saroff warns. “And having very fragmented infrastructure means you need to pull data and integrate multiple different systems, especially when you start to look at agentic AI.” ... Making AI scale will almost certainly mean taking a hard look at your data architecture. Every database adds features for AI. And lakehouses promise you can bring operational data and analytics together without affecting the SLAs of production workloads. Or you can go further with data platforms like Microsoft Fabric that bring in streaming and time-series data to use for AI applications. If you’ve already tried different approaches, you likely need to rearchitect your data layer to get away from the operational sprawl of fragmented microservices, where every data hand-off between separate vector stores, graph databases, and document silos introduces latency and governance gaps. Too many points of failure make it hard to deliver high availability guarantees.


Technological Disruption: Strategic Inflection Points From 2026 - 2036

From a defensive standpoint, AI-driven security solutions will provide continuous surveillance, automated remediation, and predictive threat modeling at a scale unattainable by human analysts. Simultaneously, attackers will utilize AI to create polymorphic malware, execute influence operations, and exploit holes at machine speed. The outcome will be an environment where cyber war progresses more rapidly than conventional command-and-control systems can regulate. As we approach 2036, the primary concern will be AI governance rather than AI capacity. ... From 2026 to 2030, enterprises will increasingly recognize that cryptographic agility is vital. The move to post-quantum cryptography standards means that old systems, especially those in critical infrastructure, financial services, and government networks, need to be fully inventoried, evaluated, and upgraded. By the early 2030s, quantum innovation will transcend cryptography, impacting optimization, materials science, logistics, and national security applications. ... In the forthcoming decade, supply chain security will transition from compliance-based evaluations to ongoing risk intelligence. Transparency methods, including software bills of materials, hardware traceability, and real-time vendor risk assessment, will evolve into standard expectations rather than just best practices. Supply chain resilience will strategically impact national competitiveness.


True agentic AI is years away - here's why and how we get there

We're not there yet. We're not even close. Today's bots are limited to chat interactions and often fail outside that narrow operating context. For example, what Microsoft calls an "agent" in the Microsoft 365 productivity suite, probably the best-known instance of an agent, is simply a way to automatically generate a Word document. Market data shows that agents haven't taken off. ... Simple automations can certainly bring about benefits, such as assisting a call center operator or rapidly handling numerous invoices. However, a growing body of scholarly and technical reports has highlighted the limitations of today's agents, which have failed to advance beyond these basic automations. ... Before agents can live up to the "fully autonomous code" hype of Microsoft and others, they must overcome two primary technological shortcomings. Ongoing research across the industry is focused on these two challenges: developing a reinforcement learning approach to designing agents, and re-engineering AI's use of memory -- not just memory chips such as DRAM, but the whole phenomenon of storing and retrieving information. Reinforcement learning, which has been around for decades, has demonstrated striking results in enabling AI to carry out tasks over a very long time horizon. ... On the horizon looms a significant shift in reinforcement learning itself, which could be a boon or further complicate matters. Can AI do a better job of designing reinforcement learning than humans?


Why Developer Experience Matters More Than Ever in Banking

Effective AI assistance, in fact, meets developers where they are—or where they work. Some prefer a command-line interface, others live inside an IDE, and still others rely heavily on sample code and language-specific SDKs. A strong DX strategy supports all of these modes, using AI to surface accurate, context-aware guidance without forcing developers into a single workflow. When AI reinforces clarity, it becomes a force multiplier. ... As AI-assisted development becomes more common, the quality of documentation takes on new importance. Because it is no longer read only by humans, documentation increasingly serves as the knowledge base that enables AI agents that help developers search, generate, and validate code. When documentation is vague or poorly structured, it introduces confusion, often in ways that actively undermine developer confidence. ... In highly regulated environments, developers want, and expect, guardrails—but not at the expense of speed and consistency. One of the most effective ways to balance those demands is by codifying business rules and compliance requirements directly into the platform, rather than relying on manual, human-driven review at key milestones. Talluri describes this approach as “policy as code”: embedding rules, validations, and regional requirements into the system so developers receive immediate, actionable prompts and feedback as they work. ... The business case for exceptional developer experience rests on a simple truth: trust drives productivity.
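Talluri's "policy as code" idea can be sketched in a few lines: compliance rules become small, testable checks that run as developers work, rather than a manual review at a milestone. The rule names, record fields, and thresholds below are invented for illustration, not taken from any real platform.

```python
# Hypothetical "policy as code" sketch: each rule is a named predicate plus
# an actionable remediation hint, evaluated immediately against a deployment.
from dataclasses import dataclass

@dataclass
class Deployment:
    region: str
    encrypts_pii_at_rest: bool
    log_retention_days: int

# Each policy is a (name, predicate, remediation hint) triple.
POLICIES = [
    ("pii-encryption", lambda d: d.encrypts_pii_at_rest,
     "Enable at-rest encryption for any service handling PII."),
    ("eu-data-residency", lambda d: d.region.startswith("eu-"),
     "EU customer workloads must be deployed to an EU region."),
    ("log-retention", lambda d: d.log_retention_days >= 365,
     "Regulated workloads must retain logs for at least one year."),
]

def evaluate(deployment: Deployment) -> list[str]:
    """Return immediate, actionable feedback for every violated policy."""
    return [f"{name}: {hint}" for name, check, hint in POLICIES
            if not check(deployment)]

violations = evaluate(Deployment("us-east-1", True, 90))
for v in violations:
    print(v)
```

Because the rules are plain code, they can be versioned, reviewed, and unit-tested like any other part of the platform, which is what makes the feedback consistent across teams.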


AI-powered testing for strategic leadership

“Nearly half of teams still release untested code due to time pressure, creating fragile systems and widening risk exposure. Legacy architectures further compound this, making modernisation difficult and slowing down automated validation,” he said. AI-generated code also introduces new vulnerabilities. Without strong validation pipelines, testing quickly becomes the bottleneck of transformation. Developers often view testing as tedious, and with modern codebases spanning multiple interconnected applications, the challenge intensifies. At the same time, misalignment between leadership and engineering teams leads to unclear priorities and rushed decisions. While the pace of development already feels fast, it is only set to accelerate. To overcome barriers, CIOs can adopt model-based, codeless AI testing that reduces dependence on fragile code-level automation and cuts ongoing maintenance. This approach can reduce manual effort by 80%–90% and enables non-technical experts to participate through natural-language and visual test generation. For Wong, strong governance is vital. This entails domain-trained, testing-specific AI that avoids hallucinations and supports safe, transparent validation. Instead of becoming autonomous, AI can act as a co-pilot working alongside developers. “By aligning teams, modernising toolchains, and embedding guardrails, CIOs can shift from reactive firefighting to proactive, AI-driven quality engineering,” he said.


The Architect’s Dilemma: Choose a Proven Path or Pave Your Own Way?

Platforms and frameworks are like paved roads that may help a team progress faster on their journey, with well-defined "exit ramps" or extension points where a team can extend the platform to meet their needs, but they come with side-effects that may make them undesirable. Teams need to decide when, if ever, they need to leave the path others have paved and find their own way by developing extensions to the platform or framework, or by developing new platforms or frameworks. The challenge teams face when they use platforms or frameworks as the basis for their software architectures is to choose the "paved road" (platform or framework) that gets them closest to their desired destination with minimal diversions or new construction. ... Many platform decisions are innocuous and can be accepted and ignored when they don’t affect the quality attribute requirements (QARs) that the team needs to meet. The only way to know whether the decisions are harmful is through experiments that expose when the platform is failing to meet the goals of the system. Since the decisions made by the platform developers are often undocumented and/or unknowable, it’s imperative that teams be able to test their system (including the platforms on which they are built) to make sure that their architectural goals (i.e. QARs) are being met. ... Using the "paved road" metaphor, the LLM provides a proven path but it does not take the team where they need to go. When this happens, they have no choice but to either start extending the platform (if they can), finding a different platform, or building their own platform.


Supply chains, AI, and the cloud: The biggest failures (and one success) of 2025

By compromising a single target with a large number of downstream users—say a cloud service or maintainers or developers of widely used open source or proprietary software—attackers can infect potentially millions of the target’s downstream users. ... Another significant security story cast both Meta and Yandex as the villains. Both companies were caught exploiting an Android weakness that allowed them to de-anonymize visitors so years of their browsing histories could be tracked. The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allowed Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. ... The outage with the biggest impact came in October, when a single point of failure inside Amazon’s sprawling network took out vital services worldwide. It lasted 15 hours and 32 minutes. The root cause that kicked off a chain of events was a bug in the software that monitors the stability of load balancers by, among other things, periodically creating new DNS configurations for endpoints within the Amazon Web Services network. A race condition—a type of bug that makes a process dependent on the timing or sequence of events that are variable and outside the developers’ control—caused a key component inside the network to experience “unusually high delays needing to retry its update on several of the DNS endpoints,” Amazon said in a post-mortem.
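The race condition Amazon describes is specific to its DNS management software, but the underlying bug class is easy to demonstrate. Below is a minimal Python sketch (not Amazon's code): an unsynchronized read-modify-write whose result depends on thread timing, followed by the lock that makes it deterministic.

```python
# A race condition in miniature: two patterns of concurrent counter updates.
import threading

counter = 0

def unsafe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        # Read-modify-write is not atomic: another thread can interleave
        # between the read and the write, silently losing updates.
        counter += 1

# The fix is to serialize the critical section with a lock:
lock = threading.Lock()
safe_counter = 0

def safe_increment(n: int) -> None:
    global safe_counter
    for _ in range(n):
        with lock:  # only one thread at a time enters this block
            safe_counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(safe_counter)  # 400000: with the lock, no updates are lost
```

Without the lock, the final count can vary from run to run, which is exactly the "dependent on timing or sequence of events" behavior the post-mortem describes; at AWS scale the interleaving was between DNS-update processes rather than threads, but the failure mode is the same.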


The Evolving Cybersecurity Challenge for Critical Infrastructure

Convergence between OT, IT and the cloud is providing cybercriminal groups with the opportunity to target critical infrastructure. Operators and regulators are wrestling with new technology and new manufacturers outside the traditional OT/ICS supply chain. “With the geopolitical tensions and the way that the world will look in maybe a few years, they're starting to scratch their heads and think, ‘okay, is it secure? Is it safe? How was it developed? Is there any remote access? How is it being configured?’ There are things that are being done now, that will have an effect in a few years’ time,” cautioned Daniel dos Santos, head of security research at Forescout's Vedere Labs. Given the lifespans of operational technology, installing insecure equipment now can have long-term consequences. Meanwhile, CISOs face dealing with older hardware that was not designed for modern threats. Even where vendors release patches, CNI operators do not always apply them, either because of concerns about business interruption, or a lack of visibility. ... Threats to CNI are not likely to abate in 2026. Legislators are putting more emphasis on cyber resilience, and directives such as the EU’s Cyber Resilience Act will improve the security of connected devices. But these upgrades take time. “Threats from criminal groups continue to grow exponentially,” said Phil Tonkin, CTO at OT security specialists Dragos.


The changing role of the MSP: What does this mean for security?

MSPs hold a unique position within the IT ecosystem, as they are often responsible for managing and supporting the IT infrastructures, cloud services, and cybersecurity of many different organizations. These trusted partners often have privileged access to the inner workings of the organizations they support, including access to the critical systems, sensitive information, and intellectual property of their clients. ... Research shows that over half of MSP leaders globally believe that their customers are at more risk today than this time last year when it comes to cyber threats, with AI-based attack vectors, ransomware/malware, and insider threats the most commonly faced threats. As a result of this uptick in threats, more organizations than ever are leaning on MSPs for cyber support. In fact, in 2025, 84% of MSPs managed either their clients’ cyber infrastructure or their cyber and IT estates combined. This increased significantly, from 64% the previous year. What this shows is that SMEs are realising that they cannot handle cybersecurity alone, turning to MSPs for additional help. Cybersecurity is no longer an optional extra or add-on; it’s becoming a core, expected service for MSPs. MSP leaders are transitioning from general IT support to becoming essential cybersecurity guardians. ... MSPs that adapt by investing in specialized cybersecurity expertise, advanced technologies, and a proactive security posture will thrive, becoming indispensable partners to businesses navigating the complex world of cyber risk. 


What’s next for Azure containers?

Until now, even though Azure has had deep eBPF support, you’ve had to bring your own eBPF tools and manage them yourself, which does require expertise to run at scale. Not everyone is a Kubernetes platform engineer, and with tools like AKS providing a managed environment for cloud-native applications, having a managed eBPF environment is an important upgrade. The new Azure Managed Cilium tool provides a quick way of getting that benefit in your applications, using it for host routing and significantly reducing the overhead that comes with iptables-based networking. ... Declarative policies let Azure lock down container features to reduce the risk of compromised container images affecting other users. At the same time, it’s working to secure the underlying host OS, which for ACI is Linux. SELinux allows Microsoft to lock that image down, providing an immutable host OS. However, those SELinux policies don’t cross the boundary into containers, leaving their userspace vulnerable. ... Having a policy-driven approach to security helps quickly remediate issues. If, say, a common container layer has a vulnerability, you can build and verify a patch layer and deploy it quickly. There’s no need to patch everything in the container, only the relevant components. Microsoft has been doing this for OS features for some time now as part of its internal Project Copacetic, and it’s extending the process to common runtimes and libraries, building patches with updated packages for tools like Python.

Daily Tech Digest - January 01, 2026


Quote for the day:

"It always seems impossible until it’s done." -- Nelson Mandela



Why data trust is the missing link in digital transformation

Data trust is often framed as a technical issue, delegated to IT or data teams. In reality, it is a business capability with direct implications for growth, risk, and reputation. Trusted data enables organisations to: confidently automate customer and operational workflows; personalise experiences without introducing errors; improve forecasting and performance reporting; and reduce operational rework and exception handling. When data cannot be trusted, leaders are forced to rely on manual checks, conservative assumptions, and duplicated processes. This increases cost and slows decision-making - the opposite of what digital transformation aims to achieve. ... Establishing data trust is not a one-time project. It requires a shift in mindset across the organisation. Data quality should be viewed as a shared responsibility, supported by the right processes and tools. Leading organisations embed data validation into their digital workflows, measure data quality as part of system health, and treat trusted data as a strategic asset. Over time, this creates a culture where decisions are made with confidence and transformation initiatives are more likely to succeed. ... Digital transformation is ultimately about enabling better decisions, faster execution, and stronger customer relationships. None of these goals can be achieved without trusted data. As organisations continue to modernise their platforms and processes, data quality should be treated as core infrastructure, not an afterthought.
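Embedding validation into workflows and measuring data quality as part of system health can start as something quite small: a set of checks and a pass-rate metric that operations can monitor like any other health signal. A minimal sketch, with invented field names and checks:

```python
# Illustrative data-quality gate: every record entering an automated
# workflow is scored against a set of checks, and the aggregate pass rate
# becomes a monitorable "trust" metric. Checks and fields are invented.
from typing import Callable

CHECKS: dict[str, Callable[[dict], bool]] = {
    "has_customer_id": lambda r: bool(r.get("customer_id")),
    "valid_email": lambda r: "@" in r.get("email", ""),
    "non_negative_balance": lambda r: r.get("balance", 0) >= 0,
}

def quality_score(records: list[dict]) -> float:
    """Fraction of records passing every check -- a simple trust metric."""
    passed = sum(all(check(r) for check in CHECKS.values()) for r in records)
    return passed / len(records) if records else 1.0

records = [
    {"customer_id": "C1", "email": "a@example.com", "balance": 10},
    {"customer_id": "", "email": "bad-address", "balance": -5},
]
print(quality_score(records))  # 0.5: one of two records is trustworthy
```

In practice the same checks would run at ingestion, and a falling score would alert the data owners before downstream automation starts acting on bad records.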


Health Data Privacy, Cyber Regs: What to Watch in 2026

When federal regulators hesitate, states often jump in to fill privacy and security gaps involving health data. That includes mandates in New York to shore up cybersecurity at certain hospitals (see: New York Hospitals Are Facing Tougher Cyber Rules Than HIPAA). Also worth watching is the New York Health Information Privacy Act, Greene said. "It was passed by both New York legislative chambers in January but has not yet been formally submitted to the governor for signature, with lobbying efforts underway to amend it." "In its most recent version, it would be the toughest health privacy law in the country in many respects, including a controversial prohibition on obtaining consents for secondary uses of data until at least 24 hours after an individual creates an account or first uses the requested product or service," Greene said.  ... Greene predicted HIPAA resolution agreements and civil monetary penalties will continue much as they have in years past, with one to two dozen such cases next year. HHS has recently indicated that it intends to begin enforcing the Information Blocking Rule. "The primary target will be health IT developers," Greene said. "I expect that there are less information blocking issues with health information networks and believe that the statute and regulation's knowledge standard makes it more challenging to enforce against healthcare providers because the government must prove that a healthcare provider knew its practice to be unreasonable."


From integration pain to partnership gain: How collaboration strengthens cybersecurity

When collaborators leverage data in specific cybersecurity work, they unlock several valuable benefits, especially since no organization has complete insight into every possible threat. A shared, data-driven cybersecurity framework can offer both sides a better understanding of existing and emerging threats that could undermine one or both collaborators. Data-driven collaboration also enables partners to become more proactive in their cybersecurity posture. Coordinated data can give business partners insights into where there’s greater exposure for a cyberattack, allowing partners to work together with data-backed guidance on how to better prepare. ... The Vested model — an innovative approach based on research from the University of Tennessee — focuses on shared goals and outcomes rather than traditional transactional buyer and seller agreements. Both companies agreed on a specific set of KPIs they could use to measure the health of the partnership and keep their security goals on track, allowing them to continue to adapt cybersecurity initiatives as needs and threats evolve. “You have to build, maintain and exercise the right partnerships with business units and shared services across the enterprise so continuity plans identify the issue quickly, deploy appropriate mitigations, and ultimately restore client and business services as quickly as possible,” says Royce Curtin, IBM’s former VP of corporate security.


AI governance: A risk and audit perspective on responsible AI adoption

AI governance refers to the policies, procedures, and oversight mechanisms that guide how AI systems are developed, deployed, and monitored. It ensures that AI aligns with business objectives, complies with applicable laws, and operates in a way that is ethical and transparent. Regulatory scrutiny is increasing. The EU AI Act is setting a precedent for global standards, and U.S. agencies are signaling more aggressive enforcement, particularly in sectors like healthcare, finance, and employment. Organizations are expected to demonstrate accountability in how AI systems make decisions, manage data, and interact with users. Beyond regulation, there is growing pressure from customers, employees, and investors. ... Audit teams also help boards and audit committees understand the risks associated with AI. Their work supports transparency and builds trust with regulators and stakeholders. As AI becomes more embedded in business operations, internal audit must expand its scope to include model governance, data lineage, and ethical risk. ... Organizations that treat AI as a strategic risk are better positioned to scale it responsibly. Risk and internal audit teams have a central role in ensuring that AI systems are secure, compliant, and aligned with business goals. Citrin Cooperman helps organizations navigate AI adoption with confidence by combining deep risk expertise, practical governance frameworks, and advanced technology solutions that support secure, scalable, and compliant growth.


Six data shifts that will shape enterprise AI in 2026

While RAG won't entirely disappear in 2026, one approach that will likely surpass it in terms of usage for agentic AI is contextual memory, also known as agentic or long-context memory. This technology enables LLMs to store and access pertinent information over extended periods. Multiple such systems emerged over the course of 2025, including Hindsight, the A-MEM framework, General Agentic Memory (GAM), LangMem, and Memobase. RAG will remain useful for static data, but agentic memory is critical for adaptive assistants and agentic AI workflows that must learn from feedback, maintain state, and adapt over time. In 2026, contextual memory will no longer be a novel technique; it will become table stakes for many operational agentic AI deployments. ... In 2025, we saw numerous innovations, such as AI's ability to parse data from an unstructured data source like a PDF. That's a capability that has existed for several years, but it proved harder to operationalize at scale than many assumed. Databricks now has an advanced parser, and other vendors, including Mistral, have emerged with their own improvements. The same is true with natural language to SQL translation. While some might have assumed that was a solved problem, it's one that continued to see innovation in 2025 and will see more in 2026. It's critical for enterprises to stay vigilant in 2026.
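The distinction from per-query RAG is that agentic memory persists observations across turns and recalls the most relevant ones into later contexts. A deliberately naive sketch of that loop (keyword-overlap scoring; named systems such as Hindsight and LangMem use far richer storage and retrieval):

```python
# Toy agentic memory: the agent writes observations as it works and later
# recalls the entries most relevant to the current task. Scoring is crude
# keyword overlap, purely for illustration.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    entries: list[str] = field(default_factory=list)

    def remember(self, observation: str) -> None:
        """Persist an observation across turns (state, feedback, preferences)."""
        self.entries.append(observation)

    def recall(self, query: str, k: int = 2) -> list[str]:
        """Return the k entries sharing the most words with the query."""
        words = set(query.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: len(words & set(e.lower().split())),
                        reverse=True)
        return scored[:k]

mem = AgentMemory()
mem.remember("user prefers invoices exported as CSV")
mem.remember("last deployment failed on schema migration")
mem.remember("user timezone is UTC+2")
print(mem.recall("export the invoices", k=1))
```

The point of the sketch is the shape, not the scoring: unlike RAG over a static corpus, the store grows with the agent's own history, so what gets recalled next turn depends on what the agent learned this turn.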


Communicating AI Risk to the Board With Confidence

Most board members can comprehend that AI will drive growth. What they fail to grasp concretely is how the technology introduces a massive amount of exposure. This predicament is typically a result of how information is presented. Security and risk managers (SRMs) often describe AI incidents in the vocabulary of adversarial inputs, model drift, and architecture choices, which matter deeply but rarely answer the questions that directors tackle during their meetings. High-level stakeholders, in reality, are concerned with issues such as revenue protection, operational continuity, and competitive differentiation, creating a gap that requires more than translating acronyms. ... Traditional discussions about technology risk revolve around the triad of confidentiality, integrity, and availability. Boards know these categories well, and over the past few decades, they have learned that cybersecurity failures directly affect the business along these lines. GenAI has formidably challenged this familiar structure, with its associated risks not limited to one of these three domains.  ... When the conversation begins with the business consequence, though, the relevance is immediate. The most effective approach involves replacing those mechanics that mean so much to the internal teams with the strategic information boards need to operate. These details open a path for meaningful conversations that encourage directors to think through the implications and make more informed decisions. 


The six biggest security challenges coming in 2026

For many organizations, cybersecurity and resilience is a compliance exercise. But it must evolve into “a core intentional cybersecurity capability”, says Dimitriadis. “In 2026, organizations will need to build the capacity to anticipate regulatory changes, understand their strategic implications, and embed them into long-term planning.” ... Attackers are leveraging AI to create convincing email templates and fake websites “almost indistinguishable” from real ones – and without the common warning signs employees are trained to identify, says Mitchell. AI is also being used in vishing attacks, with deepfakes making it easier to clone the voice of high-ranking company executives to trick victims. In 2026, there will be more attacks utilizing realistic voice cloning and high-quality video deepfakes, says Joshua Walsh ... There is a current shift towards agentic AI that can take real-world actions, such as adjusting configurations, interacting with APIs, booking services and initiating financial tasks. This can increase efficiency, but it can also lead to unsafe decisions made at speed, says rradar’s Walsh. An agent told to "optimize performance" might disable logging or bypass authentication because it views security controls as delays, he suggests. Prompt injection is a hidden issue to look out for, he adds. “If a threat actor slips hidden instructions into data that the agent consumes, they can make it run actions on internal systems without anyone realising.” 


5 Changes That Will Define AI-Native Enterprises in 2026

As enterprises scale to multi-agent systems, the engineering focus will shift from creating prompts to architecting context. Multi-agent workflows rapidly expand requirements with tool definitions, conversation history, and data from multiple sources. This creates two challenges: context windows fill up, and models suffer from “context rot,” forgetting information buried in lengthy prompts. By mid-2026, context engineering will emerge as a distinct discipline with dedicated teams and specialized infrastructure, serving the minimal but complete information agents need. The best context engineers will understand both LLM constraints and their business domain’s semantic structure. ... Enterprises are realizing that AI agents need both data and meaning. Companies that spent years perfecting data lakes are already finding those assets are insufficient. AI can retrieve data, but without semantic context, it can’t interpret action or intent. That’s why teams will move beyond vector search toward building knowledge graphs, ontologies, and metadata-driven maps that teach AI how their business works. The battleground will shift from owning raw data to owning its interpretation. Off-the-shelf agents will struggle in complex domains because semantics are domain-specific. ... The AI-native enterprise looks very different from what came before. It serves machine customers, treats context as critical infrastructure, and has the tools to escape decades of technical debt. 
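Much of context engineering reduces to a packing problem: fit the highest-priority material into a fixed token budget so that nothing important is buried (context rot) or truncated. A hypothetical sketch with invented items and token costs:

```python
# Greedy context assembly under a token budget: pack the highest-priority
# candidate items first, skipping anything that no longer fits. Priorities,
# costs, and item names are invented for illustration.
def assemble_context(items: list[tuple[int, int, str]], budget: int) -> list[str]:
    """items: (priority, token_cost, text); higher priority is packed first."""
    chosen, used = [], 0
    for priority, cost, text in sorted(items, reverse=True):
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen

items = [
    (3, 50, "tool definitions"),
    (2, 400, "full conversation history"),
    (2, 120, "summarized conversation history"),
    (1, 300, "retrieved documents"),
]
print(assemble_context(items, budget=500))
```

Real context engineering goes well beyond greedy packing (summarization, recency decay, semantic dedup), but the budget-versus-priority trade-off above is the core constraint every such system manages.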


Microsegmentation: the unsung hero of cybersecurity (and why it should be your top priority)

Think of your network like an apartment building. You’ve got a locked front door — that’s your perimeter. But once someone gets inside, there’s no front desk checking IDs, no elevator security and the same outdated lock on every unit. An intruder can roam freely, entering any apartment they choose. Microsegmentation is the internal security system. It’s the keycard for the elevator, the camera in the hallway, the unique lock on your door. It’s what stops one compromised device from becoming a full-blown breach. ... OT environments are different. They’re often built on legacy systems, lack patching and operate in real-time. You can’t just drop an agent or reroute traffic without risking downtime. That’s why agencies need solutions that are agentless, software-defined and tailored to the unique constraints of OT. Otherwise, you’re only protecting half the house. ... Microsegmentation also plays a critical role in enabling zero trust. It enforces least privilege at the network level. It’s not just about who gets in; it’s about what they can touch once they’re inside. For agencies building toward zero trust, microsegmentation isn’t an afterthought. It’s a foundation. Despite all this, microsegmentation remains underutilized. According to TechTarget’s Enterprise Strategy Group, only 36% of organizations use it today, even though it’s foundational to zero trust. Why? Because 28% believe it’s too complex. But that perception is often rooted in outdated tooling.


Beyond Chatbots: What Makes an AI Agent Truly Autonomous

Autonomous agents must retain and use context over time. Memory enables an agent to recall previous interactions, data, and decisions—allowing it to continue a process seamlessly without restarting each time. That persistence turns single exchanges into long-running workflows. In enterprise settings, it means an agent can track a contract review across multiple sessions or follow a complex support case without losing context. ... Traditional automation runs on fixed, rule-based workflows. Autonomous agents build and revise their own plans on the fly, adapting to results and feedback. This ability to plan dynamically—think, act, observe, and adjust—is what differentiates agentic AI from robotic process automation (RPA) or prompt chaining. In practice, an agent might be tasked with analyzing a set of contracts, then automatically decide how to proceed: extract key terms, assess risk, and summarize results. ... Resilient agents are designed to operate across models, retry failed actions, or launch sub-agents to handle specialized work—all within defined guardrails. That adaptability is what separates a proof of concept (POC) from a production-ready system. ... All the reasoning in the world means little if an agent can’t execute. Tools are what translate intelligence into impact. They’re the functions, APIs, and integrations that allow agents to interact with business systems—searching systems, generating documents, updating records, or triggering workflows across CRMs, ERPs, and analytics platforms.
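The think-act-observe loop described above can be sketched as plain control flow. Everything here is a stand-in, not a real agent framework: the plan is hard-coded where a real agent would generate and revise it, and the tools are stub functions where a real system would call APIs or business systems.

```python
# Schematic agent loop: walk a plan, act via tools, observe the results,
# and degrade gracefully (here: skip and log) when a tool is unavailable.
def run_agent(goal: str, tools: dict, max_steps: int = 5) -> list[str]:
    log = []
    # In a real agent this plan would be generated and revised dynamically.
    plan = ["extract_terms", "assess_risk", "summarize"]
    for step in plan[:max_steps]:
        tool = tools.get(step)
        if tool is None:  # resilience guardrail: missing tool, don't crash
            log.append(f"skipped {step}: no tool available")
            continue
        observation = tool(goal)                 # act
        log.append(f"{step} -> {observation}")   # observe; informs next step
    return log

# Stub tools standing in for real integrations (CRM, document store, etc.).
tools = {
    "extract_terms": lambda g: "42 key terms found",
    "assess_risk": lambda g: "2 high-risk clauses",
}
for line in run_agent("analyze contract set", tools):
    print(line)
```

What separates this sketch from production agentic AI is exactly what the excerpt lists: persistent memory between runs, dynamic replanning when an observation changes the picture, and guardrails richer than "skip and log."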

Daily Tech Digest - December 31, 2025


Quote for the day:

“To be successful you need friends and to be very successful you need enemies.” -- Sidney Sheldon



AI agents to outnumber humans, warns Token Security

Many agents now run in controlled, non-production environments. Token Security predicts that organisations will soon connect them directly to live systems. The company says this will force enterprises to manage agent permissions and lifecycle controls more actively. It also expects new processes for assigning accountability when an autonomous system carries out an action on behalf of a team or individual. Apelblat believes established compliance structures will not cope with this change in the workforce. Traditional frameworks assume that humans sit at the centre of most workflows. ... "Despite innovation in agentic AI, enterprises will enter 2026 still relying on static API keys and long-term credentials. These legacy mechanisms will quietly weaken agent identity integrity, creating fragile trust chains that attackers can easily exploit," said Shlomo. Shlomo also predicts a reversal of some progress made in reducing secrets stored on endpoints. Many companies have moved staff onto single sign-on and centralised identity systems. He believes poor management of local Model Context Protocol servers will now cause a resurgence of cleartext service credentials on employee devices. ... "The industry is shifting from breaches caused by human identity failures to breaches rooted in AI agent identity compromise. As agents become operational backbones, attacks targeting their tokens, personas, and delegated authority will define the next wave of high-impact incidents," said Shlomo.


AI killed the cloud-first strategy: Why hybrid computing is the only way forward now

Existing infrastructures now configured with cloud services simply may not be ready for emerging AI demands, a recent analysis from Deloitte warned. "The infrastructure built for cloud-first strategies can't handle AI economics," the report, penned by a team of Deloitte analysts led by Nicholas Merizzi, said. "Processes designed for human workers don't work for agents. Security models built for perimeter defense don't protect against threats operating at machine speed. IT operating models built for service delivery don't drive business transformation." ... AI token costs have dropped 280-fold in two years, they observe -- yet "some enterprises are seeing monthly bills in the tens of millions." The overuse of cloud-based AI services "can lead to frequent API hits and escalating costs." There's even a tipping point in which on-premises deployments make more sense. ... AI often demands near-zero latency to deliver actions. "Applications requiring response times of 10 milliseconds or below cannot tolerate the inherent delays of cloud-based processing," the Deloitte authors point out. ... Resilience is also part of the pressing requirements for fully functional AI processes. These include "mission-critical tasks that cannot be interrupted require on-premises infrastructure in case connection to the cloud is interrupted," the analysts state. ... Whether employing cloud or on-premises systems, companies should always take direct responsibility for security and monitoring, Rana said.


Agentic AI breaks out of the lab and forces enterprises to grow up

The first major stride is the shift from improvisation to repeatable patterns. Early agentic projects were nearly all “glue code”: prompt chains stitched together with brittle tool wiring and homegrown memory hacks. Every workflow was a snowflake. But now, mature organizations are creating shared agentic primitives that development teams can reuse. ... The second major stride is the rise of enterprise-grade governance and safety frameworks designed specifically for agentic workflows. Traditional AI governance wasn’t built for systems that take autonomous actions, call tools, modify infrastructure, and reason over long sequences. Enterprises are now treating governance as a first-class engineering challenge. ... The third stride is a philosophical and architectural shift in where enterprises choose to invest. Many companies spent months crafting custom planning modules, memory layers, tool registries, and agent routers, believing these would become strategic assets. But experience is proving otherwise. ... The fourth and most important stride is the move toward building durable components that will matter long after orchestration layers become commoditized. Enterprises increasingly understand that their competitive advantage will come from institutional intelligence: domain-specific tool schemas, curated datasets, validated decision policies, and deep integration with their existing SDLC, incident response, and SOC workflows.
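A “shared agentic primitive” can be as simple as a central tool registry with validated schemas, so teams stop re-wiring the same tools into every workflow. Below is a minimal sketch with invented tool names; the flat `{name: type}` schema is a deliberate simplification standing in for a real JSON Schema.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Tool:
    name: str
    description: str
    params: dict      # simplified {param_name: python_type} schema
    fn: Callable

class ToolRegistry:
    """A shared primitive: teams register a tool once; every agent reuses it."""

    def __init__(self):
        self._tools = {}

    def register(self, tool: Tool):
        if tool.name in self._tools:
            raise ValueError(f"duplicate tool: {tool.name}")
        self._tools[tool.name] = tool

    def invoke(self, name: str, **kwargs):
        tool = self._tools[name]
        # Validate arguments against the declared schema before executing,
        # so a confused agent cannot call a tool with malformed inputs.
        for param, typ in tool.params.items():
            if not isinstance(kwargs.get(param), typ):
                raise TypeError(f"{name}: {param} must be {typ.__name__}")
        return tool.fn(**kwargs)

registry = ToolRegistry()
registry.register(Tool("lookup_ticket", "Fetch a support ticket by id",
                       {"ticket_id": int},
                       lambda ticket_id: {"id": ticket_id, "status": "open"}))
print(registry.invoke("lookup_ticket", ticket_id=42))
```

The point of the primitive is the contract, not the code: once schemas live in one registry, governance checks, audit logging, and permissioning can be attached in one place instead of per workflow.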


Businesses have always complained about compliance obligations. Could they automate themselves out of it?

Compliance can often seem like an exercise in Kafkaesque absurdity. Nutanix’s director of systems engineering, James Sturrock, says it’s not uncommon for two in-house experts to have differing opinions on how to solve the same thorny regulatory conundrum. That isn’t even getting into how competing jurisdictions might view the problem. ... Equally important are potential unknowns such as contaminated soil or sewers that don’t appear on maps or where data is incomplete. These don’t just represent potential holdups to work – and resulting penalties – but represent further risks in themselves. ... Automating alerts or making it easier to spot compliance headaches early is one thing. But what might AI contribute toward simplifying more complex compliance conundrums, like those encountered by the financial services industry? In that sector, explains Pegasystems’ global banking industry lead Steve Morgan, such models have to be readily explainable not only to customers, but internal audit teams and regulators, too. Even then, it’s already clear that certain types of AI applications aren’t completely suitable for insertion into compliance workflows – most notably, GenAI. “Unless you have a very special model that’s trained” on a specific use case, says Morgan, the answers that such models provide compliance experts just aren’t predictable or accurate enough to meet the high standards demanded of banks.


Security coverage is falling behind the way attackers behave

Cybercriminals keep tweaking their procedures, trying out new techniques, and shifting tactics across campaigns. Coverage that worked yesterday may miss how those behaviors appear today. ... Activity expanded from ransomware-driven campaigns into espionage-aligned behavior, with targets including telecom, energy, military, and government organizations. Researchers tracked changes in tooling, credential access, and detection evasion, including expanded use of advanced techniques against cloud and enterprise environments. ... The report describes zero-day use as commoditized. Exploits move quickly from discovery into active abuse. This compresses defender response windows from weeks into days. Early detection depends on identifying behavior tied to exploitation rather than waiting for vulnerability disclosures or patches. ... Identity became a primary target. Campaigns focused on SaaS access, cloud administration, and single sign-on abuse. Luna Moth evolved from simple callback phishing into multi-channel operations combining voice, email, and infrastructure control. ... One theme that runs through the findings is the presence of defensive gaps at the procedure level. Many organizations track techniques and tools, while execution details that signal intent receive less attention. The research connects observed procedures directly to detection and prevention controls, showing where coverage holds and where it breaks down.


Widely Used Malicious Extensions Steal ChatGPT, DeepSeek Conversations

Stolen browser history data includes not only the complete URLs from all Chrome tabs, but also search queries containing sensitive keywords and research topics, URL parameters that could contain session tokens, user IDs, and authentication data, and internal corporate URLs revealing organizational structure and tools. ... Extensions are used to improve and customize users’ browsing experience. More people are using browsers, which can expand the attack surface of the individual and the companies they work for, according to security experts. “Browser extensions aren’t niche tools anymore; they’re deeply embedded in how people work,” Grip Security researchers Ben Robertson and Guy Katzir wrote earlier this year. “But that convenience comes with risk, especially when security teams don’t have visibility into what’s installed, what it can access, or how it behaves after login. The attack surface has shifted. And while endpoint agents and network controls still matter, they can’t see what’s happening inside the browser. That’s where threats like token hijacking and data leakage quietly take shape.” ... In the most recent case, the hackers created malicious extensions that impersonated a legitimate browser extension created by a company called AITOPIA. The extension puts a sidebar onto any website, giving users the ability to chat with popular LLMs, OX Security’s Siman and Bustan wrote.


2026: The year we stop trusting any single cloud

The real story is not that cloud platforms failed; it’s that enterprises quietly allowed those platforms to become single points of failure for entire business models. In 2025, many organizations discovered that their digital transformation had traded physical single points of failure for logical ones in the form of a single region, a single provider, or even a single managed database. When a hyperscaler region had trouble, companies learned the hard way that “highly available within a region” is not the same as “business resilient.” What caught even seasoned teams off guard was the hidden dependency chain. ... Expect to see targeted workload shifts: moving critical customer-facing systems from single-region to multi-region or cross-cloud setups, re-architecting data platforms with replicated storage and active-active databases (meaning two instances run and serve traffic simultaneously, each able to absorb the other’s load), and relocating some systems to private or colocation environments based on risk. ... In 2026, smart enterprises will start asking their vendors the hard questions. Which regions and providers do you use? Do you have a tested failover strategy across regions or providers? What happens to my data and SLAs if your primary cloud has a regional incident? Many will diversify not just across hyperscalers, but across SaaS and managed services, deliberately avoiding over-concentration on any provider that cannot demonstrate meaningful redundancy.
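The failover behavior at the heart of these architectures, preferring a primary region but routing around a regional incident, reduces to a small routing loop. A toy sketch with hypothetical region names; real multi-region routing adds continuous health probes, data replication, and DNS or load-balancer integration.

```python
class Region:
    """Stand-in for one deployment region with a simple health flag."""

    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def handle(self, request: str) -> str:
        if not self.healthy:
            raise ConnectionError(f"{self.name} unavailable")
        return f"{request} served by {self.name}"

def route(request: str, regions: list) -> str:
    """Try each region in preference order; fail over when one errors out."""
    last_error = None
    for region in regions:
        try:
            return region.handle(request)
        except ConnectionError as err:
            last_error = err
    raise RuntimeError("all regions down") from last_error

primary, secondary = Region("us-east-1"), Region("eu-west-1")
print(route("GET /checkout", [primary, secondary]))  # served by us-east-1
primary.healthy = False                              # simulate a regional incident
print(route("GET /checkout", [primary, secondary]))  # fails over to eu-west-1
```

The hard part in practice is not this loop but what it implies: the secondary must already hold the data and capacity to serve the request, which is exactly what “highly available within a region” does not guarantee.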


AI Is Forcing Businesses To Rethink Their Data Strategies

One of the biggest misconceptions about cloud repatriation is that it’s a simple reversal of a cloud migration. In reality, AI workloads frequently exceed the capabilities of existing on-prem infrastructure. “Servers that were procured three years ago may not be able to handle what these applications require,” Brodsky says. As a result, repatriation decisions often trigger broader modernization efforts, including new hardware, increased power and cooling capacity, and redesigned architectures. Before making those investments, organizations need a clear understanding of their current environment and future requirements. ... “You have to evaluate whether your on-prem environment can actually ingest and protect what you’re bringing down from the cloud,” he says. Timelines and approaches vary. Some organizations opt for high-level assessments to guide strategy, while others pursue deeper technical workshops or phased transitions based on business priorities and service-level agreements. Despite the renewed interest in on-prem infrastructure, cloud repatriation doesn’t signal a retreat from cloud computing. Instead, it reflects a more mature understanding of hybrid IT. “Five years ago, we had daily conversations with customers who wanted to be 100% cloud,” Brodsky says. “Very few actually got there.” Today, most organizations operate hybrid environments by necessity, balancing cloud flexibility with on-prem performance, cost predictability and governance. 


AI-Driven CLM: The New Standard for Enterprise Contracts

Most enterprises still rely on fragmented approaches to contract management. Agreements live in email threads, local folders, and legacy systems that do not communicate with each other. Legal teams spend hours searching for documents that should be accessible in seconds. This disorganization creates real business consequences. Contracts expire without renewal. Compliance obligations go untracked. Revenue recognition gets delayed because finance cannot locate the signed agreement. ... AI-driven contract lifecycle management takes a fundamentally different approach. Instead of treating contracts as paperwork to be stored, modern CLM platforms treat them as data to be analyzed, monitored, and optimized. The shift starts with intelligent data extraction. When a contract enters the system, AI automatically identifies and extracts key terms, dates, obligations, and clauses. No more manual data entry. No more inconsistent tagging. The system understands what it is reading and organizes information accordingly. ... Every contract carries risk. Hidden indemnification clauses, unfavorable liability terms, and non-standard language can expose organizations to significant liability. Catching these issues manually requires experienced legal reviewers and substantial time. AI changes this equation. Modern CLM platforms scan agreements against predefined playbooks and flag deviations instantly. 
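The extraction-and-playbook step can be pictured in miniature: pull structured fields out of contract text and check clauses against predefined rules. The sketch below uses plain regex and keyword matching as a stand-in for the ML models a real CLM platform would apply; the playbook entries and sample contract text are invented.

```python
import re
from datetime import date

# Toy playbook: trigger substring -> reviewer note. A real platform would
# match clause semantics, not keywords.
CLAUSE_PLAYBOOK = {
    "indemnif": "indemnification clause: route to legal review",
    "unlimited liability": "non-standard liability term: flag as high risk",
    "auto-renew": "renewal obligation: add to renewal calendar",
}

def extract_contract_signals(text: str) -> dict:
    """Pull ISO dates and flag deviations from the playbook."""
    lowered = text.lower()
    dates = [date.fromisoformat(d) for d in re.findall(r"\d{4}-\d{2}-\d{2}", text)]
    flags = [note for trigger, note in CLAUSE_PLAYBOOK.items() if trigger in lowered]
    return {"dates": dates, "flags": flags}

sample = ("This agreement takes effect on 2026-02-01 and will auto-renew annually. "
          "Supplier accepts unlimited liability for data loss.")
print(extract_contract_signals(sample))
```

Even this crude pass shows the shape of the value: dates feed a renewal calendar, flags feed a review queue, and neither depends on a human remembering to re-read the agreement.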


How to Do Enterprise Autonomy Right

Autonomous enterprise agents are architected differently. They integrate language understanding, tool calling, planning and orchestration into a closed loop. This allows the agent to assess goals, interpret inputs, break them down into tasks and execute across multiple systems. It can adapt when conditions change and learn from feedback over time. The shift from automation to autonomy requires moving from flow-based design to intent-based execution. For enterprises, this means embedding capabilities that allow agents to sense, decide and act in real time. ... First, it's non-negotiable for agents to function only within clearly defined domains, with visibility restricted to authorized data and systems. Second, their decision-making logic should be transparent and traceable, ensuring that every outcome can be audited and explained. Third, controls must exist to intervene in real time, whether to pause, override or shut down the agent entirely. Lastly, it is crucial for agents to be built to fail safely. If context shifts beyond their training, the agent must escalate or defer. This is not a fallback but a core design principle that reinforces a responsible AI posture. ... The line between productive autonomy and dangerous overgeneralization is best drawn where explainability ends. If a system's actions can no longer be explained in business terms, it is no longer serving the enterprise. Control is central, and autonomy should expand only when safeguards, governance and organizational readiness evolve alongside it.
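The four controls described here (scoped domains, auditable decisions, real-time intervention, safe failure) can be composed into a single guarded execution path. A minimal sketch with invented names; the numeric confidence threshold is a simplification standing in for a real escalation policy.

```python
AUDIT_LOG = []  # every decision is recorded so outcomes can be explained later

class ScopeError(Exception):
    """Raised when an agent tries to act outside its authorized domain."""

def guarded_execute(agent_id: str, action: str, resource: str,
                    allowed_scopes: set, confidence: float,
                    threshold: float = 0.8) -> str:
    """Run an agent action only inside its scope; escalate low-confidence calls."""
    entry = {"agent": agent_id, "action": action, "resource": resource}
    if resource not in allowed_scopes:
        entry["outcome"] = "blocked: out of scope"
        AUDIT_LOG.append(entry)
        raise ScopeError(f"{agent_id} may not touch {resource}")
    if confidence < threshold:
        # Fail safe: defer to a human instead of guessing.
        entry["outcome"] = "escalated to human"
        AUDIT_LOG.append(entry)
        return "escalated"
    entry["outcome"] = "executed"
    AUDIT_LOG.append(entry)
    return "executed"

print(guarded_execute("ops-agent", "restart", "staging-db", {"staging-db"}, 0.95))
print(guarded_execute("ops-agent", "restart", "staging-db", {"staging-db"}, 0.55))
```

The audit log is the piece that makes the autonomy explainable in business terms: every action, block, and escalation leaves a record that can be reviewed, paused, or overridden.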