Daily Tech Digest - January 06, 2026


Quote for the day:

"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera



Data 2026 outlook: The rise of semantic spheres of influence

While data started garnering attention last year, AI and agents continued to suck up the oxygen. Why the urgency around agents? Maybe it’s “fear of missing out.” Or maybe there’s a more rational explanation. According to Amazon Web Services Inc. CEO Matt Garman, agents are the technology that will finally make AI investments pay off. Go to the 12-minute mark in his recent AWS re:Invent conference keynote, and you’ll hear him say just that. But are agents ready for prime time yet? ... And of course, no discussion of agentic interaction with databases is complete without mention of the Model Context Protocol. The open-source MCP framework, which Anthropic PBC recently donated to the Linux Foundation, came out of nowhere over the past year to become the de facto standard for how AI models connect with data. ... There were early advances in extending governance to unstructured data, primarily documents. IBM watsonx.governance introduced a capability for curating unstructured data that transforms documents and enriches them by assigning classifications, data classes and business terms to prepare them for retrieval-augmented generation, or RAG. ... But for most organizations lacking deep skills or rigorous enterprise architecture practices, the starting point for defining semantics is going straight to the sources: enterprise applications or, alternatively, the newer breed of data catalogs that are branching out from their original missions of locating data and/or providing the points of enforcement for data governance. In most organizations, the solution is not going to be either-or.


Engineering Speed at Scale — Architectural Lessons from Sub-100-ms APIs

Speed shapes perception long before it shapes metrics. Users don’t measure latency with stopwatches - they feel it. The difference between a 120 ms checkout step and an 80 ms one is invisible to the naked eye, yet emotionally it becomes the difference between "smooth" and "slightly annoying". ... In high-throughput platforms, latency amplifies. If a service adds 30 ms in normal conditions, it might add 60 ms during peak load, then 120 ms when a downstream dependency wobbles. Latency doesn’t degrade gracefully; it compounds. ... A helpful way to see this is through a "latency budget". Instead of thinking about performance as a single number - say, "API must respond in under 100 ms" - modern teams break it down across the entire request path: 10 ms at the edge; 5 ms for routing; 30 ms for application logic; 40 ms for data access; and 10–15 ms for network hops and jitter. Each layer is allocated a slice of the total budget. This transforms latency from an abstract target into a concrete architectural constraint. Suddenly, trade-offs become clearer: "If we add feature X in the service layer, what do we remove or optimize so we don’t blow the budget?" These conversations - technical, cultural, and organizational - are where fast systems are born. ... Engineering for low latency is really engineering for predictability. Fast systems aren’t built through micro-optimizations - they’re built through a series of deliberate, layered decisions that minimize uncertainty and keep tail latency under control.
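To make the latency budget concrete, here is a minimal sketch of how a team might encode and check the example allocation above. The stage names and millisecond figures mirror the article’s illustration; the Python shape is my own assumption, not anything the article prescribes.

# A minimal sketch of the latency-budget idea described above.
# Stage allocations mirror the article's example; they are illustrative.
LATENCY_BUDGET_MS = {
    "edge": 10,
    "routing": 5,
    "application_logic": 30,
    "data_access": 40,
    "network_jitter": 15,  # upper end of the 10-15 ms range
}

TOTAL_BUDGET_MS = 100

def check_budget(measured_ms: dict[str, float]) -> list[str]:
    """Return warnings for stages that blow their slice of the budget."""
    warnings = []
    for stage, allocated in LATENCY_BUDGET_MS.items():
        actual = measured_ms.get(stage, 0.0)
        if actual > allocated:
            warnings.append(f"{stage}: {actual:.1f} ms > {allocated} ms budget")
    total = sum(measured_ms.values())
    if total > TOTAL_BUDGET_MS:
        warnings.append(f"total: {total:.1f} ms > {TOTAL_BUDGET_MS} ms budget")
    return warnings

# Example: data access is over its slice even though the total still fits.
print(check_budget({"edge": 8, "routing": 4, "application_logic": 28,
                    "data_access": 45, "network_jitter": 9}))

A check like this is what turns “feature X in the service layer” into the explicit trade-off conversation the article describes.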


Everything you need to know about FLOPs

A FLOP is a single floating-point operation, meaning one arithmetic calculation (add, subtract, multiply, or divide) on numbers that have decimals. Compute benchmarking is done in floating-point/fractional rather than integer/whole numbers because floating point is a far more accurate measure than integers. A prefix is added to FLOPs to measure how many are performed in a second, starting with mega- (millions), then giga- (billions), tera- (trillions), peta- (quadrillions), and now exa- (quintillions). ... Floating point in computing starts at FP4, or 4 bits of floating point, and doubles all the way to FP64. There is a theoretical FP128, but it is never used as a measure. FP64 is also referred to as double-precision floating-point format, a 64-bit standard under IEEE 754 for representing real numbers with high accuracy. ... With petaFLOPS and exaFLOPS becoming marketing terms, some hardware vendors have been less than scrupulous in disclosing what level of floating-point operation their benchmarks use. It’s not uncommon for a company to promote exascale performance and then say in the fine print that they’re talking about FP8, according to Snell. “It used to be that if someone said exaFLOP, you could be pretty confident that they meant exaFLOP according to 64-bit scientific computing, but not anymore. Especially in the field of AI, you need to look at what’s going on behind that FLOP,” said Snell.
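Both points are easy to see in code. The short sketch below, using numpy as an illustrative choice (the article names no tooling), prints the per-second prefixes and the approximate decimal precision each floating-point width carries; numpy has no FP4/FP8 types, so the comparison starts at FP16.

# Illustrative only: the FLOPS prefixes, and the numeric precision
# behind each floating-point width (numpy offers FP16 and up).
import numpy as np

PREFIXES = {"megaFLOPS": 1e6, "gigaFLOPS": 1e9, "teraFLOPS": 1e12,
            "petaFLOPS": 1e15, "exaFLOPS": 1e18}
for name, per_second in PREFIXES.items():
    print(f"{name}: {per_second:.0e} floating-point operations per second")

for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: {info.bits} bits, ~{info.precision} decimal digits")

An “exaFLOPS” figure quoted at FP8 therefore describes far less numeric precision per operation than one measured at FP64, which is exactly why the fine print matters.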


From SBOM to AI BOM: Rethinking supply chain security for AI native software

An effective AI BOM is not a static document generated at release time. It is a lifecycle artifact that evolves alongside the system. At ingestion, it records dataset sources, classifications, licensing constraints, and approval status. During training or fine-tuning, it captures model lineage, parameter changes, evaluation results, and known limitations. At deployment, it documents inference endpoints, identity and access controls, monitoring hooks, and downstream integrations. Over time, it reflects retraining events, drift signals, and retirement decisions. Crucially, each element is tied to ownership. Someone approved the data. Someone selected the base model. Someone accepted the residual risk. This mirrors how mature organizations already think about code and infrastructure, but extends that discipline to AI components that have historically been treated as experimental or opaque. To move from theory to practice, I encourage teams to treat the AI BOM as a “Digital Bill of Lading”: a chain-of-custody record that travels with the artifact and proves what it is, where it came from, and who approved it. The most resilient operations cryptographically sign every model checkpoint and the hash of every dataset. By enforcing this chain of custody, they’ve transitioned from forensic guessing to surgical precision. When a researcher identifies a bias or security flaw in a specific open-source dataset, an organization with a mature AI BOM can instantly identify every downstream product affected by that “raw material” and act within hours, not weeks.
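As a sketch of that chain of custody, the snippet below hashes an artifact and signs the digest so a downstream consumer can verify both content and approval. Ed25519 via the cryptography package, and the in-memory artifact bytes, are my illustrative choices; the article names no specific tooling.

# A minimal chain-of-custody sketch: hash an artifact (dataset or model
# checkpoint), sign the digest, and verify it downstream. Illustrative only.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

artifact = b"model checkpoint bytes"        # stands in for a real file's contents
digest = hashlib.sha256(artifact).digest()

signing_key = Ed25519PrivateKey.generate()  # in practice, a managed organizational key
signature = signing_key.sign(digest)

# Verification raises InvalidSignature if the artifact or record was altered.
signing_key.public_key().verify(signature, digest)
print("artifact digest:", digest.hex())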


Beyond the Firehose: Operationalizing Threat Intelligence for Effective SecOps

Effective operationalization doesn't happen by accident. It requires a structured approach that aligns intelligence gathering with business risks. A framework for operationalizing threat intelligence structures the process from raw data to actionable defence, involving key stages like collection, processing, analysis, and dissemination, often using models like MITRE ATT&CK and Cyber Kill Chain. It transforms generic threat info into relevant insights for your organization by enriching alerts, automating workflows (via SOAR), enabling proactive threat hunting, and integrating intelligence into tools like SIEM/EDR to improve incident response and build a more proactive security posture. ... As intel maturity develops, the framework continuously incorporates feedback mechanisms to refine and adapt to the evolving threat environment. Cross-departmental collaboration is vital, enabling effective information sharing and coordinated response capabilities. The framework also emphasizes contextual integration, allowing organizations to prioritize threats based on their specific impact potential and relevance to critical assets. This ultimately drives more informed security decisions. ... Operationalization should be regarded as an ongoing process rather than a linear progression. If intelligence feeds result in an excessive number of false positives that overwhelm Tier 1 analysts, this indicates a failure in operationalization. It is imperative to institute a formal feedback mechanism from the Security Operations Center to the Intelligence team.
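A small sketch of the enrichment step that framework describes: match an alert’s indicators against an intelligence feed and attach context before it reaches an analyst. The feed layout, field names, and the ATT&CK tag are hypothetical, not any vendor’s schema.

# Illustrative alert enrichment against a threat-intel feed (hypothetical format).
INTEL_FEED = {
    "203.0.113.7": {"actor": "ExampleGroup",
                    "attack_technique": "T1566 Phishing",
                    "confidence": "high"},
}

def enrich(alert: dict) -> dict:
    """Attach intel context and a routing priority to a raw alert."""
    intel = INTEL_FEED.get(alert.get("src_ip", ""))
    if intel:
        alert["intel"] = intel
        alert["priority"] = "escalate" if intel["confidence"] == "high" else "review"
    else:
        alert["priority"] = "triage"  # no match: normal Tier 1 queue
    return alert

print(enrich({"src_ip": "203.0.113.7", "rule": "suspicious-login"}))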


Compliance vs. Creativity: Why Security Needs Both Rule Books and Rebels

One of the most common tensions in the SOC arises from mismatched expectations. Compliance officers focus on control documentation while security teams focus on operational signals. For example, a policy may require multi-factor authentication (MFA), but if the system doesn’t generate alerts on MFA fatigue or unusual login patterns, attackers can slip past controls without detection. It’s also important to remember that just because something’s written in a policy doesn’t mean it’s being protected. A control isn’t a detection. It only matters if it shows up in the data. Security teams need to make sure that every big control, like MFA, logging, or encryption, has a signal that tells them when it’s being misused, misconfigured, or ignored. ... In a modern SOC, competing priorities are expected. Analysts want manageable alert volumes, red teams want room to experiment, and managers need to show compliance is covered. And at the top, CISOs need metrics that make sense to the board. However, high-performing teams aren’t the ones that ignore these differences. They, again, focus on alignment. ... The most effective security programs don’t rely solely on rigid policy or unrestricted innovation. They recognize that compliance offers the framework for repeatable success, while creativity uncovers gaps and adapts to evolving threats. When organizations enable both, they move beyond checklist security.
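Returning to the MFA example, here is one minimal sketch of turning that control into a signal: flag users who receive a burst of push prompts in a short window, a common MFA-fatigue pattern. The thresholds and event shape are assumptions for illustration, not a product’s schema.

# Illustrative MFA-fatigue signal: too many push prompts to one user
# inside a sliding time window. Thresholds are hypothetical.
from collections import defaultdict

WINDOW_SECONDS = 300
MAX_PROMPTS = 5

def find_mfa_fatigue(events: list[dict]) -> set[str]:
    """events: [{'user': ..., 'ts': epoch_seconds}, ...] for MFA push prompts."""
    by_user = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_user[e["user"]].append(e["ts"])
    flagged = set()
    for user, times in by_user.items():
        start = 0
        for end in range(len(times)):
            while times[end] - times[start] > WINDOW_SECONDS:
                start += 1
            if end - start + 1 > MAX_PROMPTS:
                flagged.add(user)
    return flagged

# Twelve prompts to one user in two minutes: flagged.
print(find_mfa_fatigue([{"user": "alice", "ts": t} for t in range(0, 120, 10)]))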


AI governance through controlled autonomy and guarded freedom

Controlled autonomy in AI governance refers to granting AI systems and their development teams a defined level of independence within clear, pre-established boundaries. The organization sets specific guidelines, standards and checkpoints, allowing AI initiatives to progress without micromanagement but still within a tightly regulated framework. The autonomy is “controlled” in the sense that all activities are subject to oversight, periodic review and strict adherence to organizational policies. ... In practice, controlled autonomy might involve delegated decision-making authority to AI project teams, but with mandatory compliance to risk assessment protocols, ethical guidelines and regulatory requirements. For example, an organization may allow its AI team to choose algorithms and data sources, but require regular reports and audits to ensure transparency and accountability. Automated systems may operate independently, yet their outputs are monitored for biases, errors or security vulnerabilities. ... Deciding between controlled autonomy and guarded freedom in AI governance largely depends on the nature of the enterprise, its industry and the specific risks involved. Controlled autonomy is best suited for sectors where regulatory compliance and risk mitigation are paramount, such as banking, healthcare or government services. ... Both controlled autonomy and guarded freedom offer valuable frameworks for AI governance, each with distinct strengths and potential drawbacks. 


The 20% that drives 80%: Uncovering the secrets of organisational excellence

There are striking universalities in what truly drives impact. The first, which all three prioritise, is the belief that employee experience is inseparable from customer experience. Whether it is called EX = CX or framed differently, the sharp focus on making the workplace purposeful and engaging is foundational. Each business does this in a unique way, but the intent is the same: great employee experience leads to great customer experience. ... The second constant is an unwavering drive for business excellence. This is a nuanced but powerful 20% that shapes 80% of outcomes. Take McDonald’s, for instance: the consistency of quality and service, whether you are in Singapore, India, Japan or the US, is remarkable. Even as we localise, the core excellence remains unchanged. The same is true for Google, where the reliability of Search and breakthroughs in AI define the brand, and for PepsiCo, where high standards across foods and beverages play the same role. ... The third—and perhaps most challenging—is connectedness. For giants of this scale, fostering deep connections across global, regional and country boundaries, and within and across teams, is crucial. It is about psychological safety, collaboration, and creating space for people to connect and recognise each other. This focus on connectedness enables the other two priorities to flourish. If organisations keep these three at the heart of their practice, they remain agile, resilient, and, as I like to put it, the giants keep dancing.


Turning plain language into firewall rules

A central feature of the design is an intermediate representation that captures firewall policy intent in a vendor-agnostic format. This representation resembles a normalized rule record that includes the five-tuple plus additional metadata such as direction, logging, and scheduling. This layer separates intent from device syntax. Security teams can review the intermediate representation directly, since it reflects the policy request in structured form. Each field remains explicit and machine checkable. After the intermediate representation is built, the rest of the pipeline operates through deterministic logic. The current prototype includes a compiler that translates the representation into Palo Alto PAN-OS command-line configuration. The design supports additional firewall platforms through separate back-end modules. ... A vendor-specific linter applies rules tied to the target firewall platform. In the prototype, this includes checks related to PAN-OS constraints, zone usage, and service definitions. These checks surface warnings that operators can review. A separate safety gate enforces high-level security constraints. This component evaluates whether a policy meets baseline expectations such as defined sources, destinations, zones, and protocols. Policies that fail these checks stop at this stage. After compilation, the system runs the generated configuration through a Batfish-based simulator. The simulator validates syntax and object references against a synthetic device model. Results appear as warnings and errors for inspection.
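The sketch below illustrates that separation of intent from device syntax: a normalized rule record plus a small back end that renders it as device CLI. The field names and the PAN-OS-style output line are approximations for illustration, not the prototype’s actual format or exact PAN-OS syntax.

# Illustrative intermediate representation for a firewall rule, plus a
# back-end stub that renders it as approximate PAN-OS-style CLI.
from dataclasses import dataclass

@dataclass
class FirewallRule:
    name: str
    src_zone: str
    dst_zone: str
    src: str
    dst: str
    protocol: str
    port: int
    action: str = "allow"
    direction: str = "inbound"
    logging: bool = True

def compile_panos(rule: FirewallRule) -> str:
    """Render the IR as a PAN-OS-like CLI line (approximate syntax)."""
    return (f"set rulebase security rules {rule.name} "
            f"from {rule.src_zone} to {rule.dst_zone} "
            f"source {rule.src} destination {rule.dst} "
            f"service {rule.protocol}-{rule.port} action {rule.action}")

rule = FirewallRule("allow-web", "untrust", "dmz", "any", "198.51.100.10", "tcp", 443)
assert rule.src and rule.dst and rule.src_zone  # trivial stand-in for the safety gate
print(compile_panos(rule))

Because every field in the record is explicit, the linter and safety gate described above can operate on the structure itself rather than parsing vendor CLI.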


Why cybersecurity needs to focus more on investigation and less on just detection and response

The real issue? Many of today’s most dangerous threats are the ones that don’t show up easily on detection radars. Think about the advanced persistent threats (APTs) that remain hidden for months or the zero-day attacks that exploit vulnerabilities no one even knew existed. These threats may slip right past the detection systems because they don’t act in obvious ways. That’s why, in these cases, detection alone isn’t enough. It’s just the first step. ... Think of investigation as the part where you understand the full story. It’s like detective work: not just looking at the footprints, but figuring out where they came from, who’s leaving them, and why they’re trying to break in in the first place. You can’t stop a cyberattack with detection alone if you don’t understand what caused it or how it worked. And if you don’t know the cause, you can’t appropriately respond to the detected threat. ... The cost of neglecting investigation goes beyond just missing a threat. It’s about missed opportunities for learning and growth. Every attack offers a lesson. By investigating the full scope of a breach, you gain insights that not only help in responding to that incident but also prepare you to defend against future ones. It’s about building resilience, not just reaction. Think about it: If you never investigate an incident thoroughly, you’re essentially ignoring the underlying risk that allowed the threat to flourish. You might fix the hole that was exploited, but you won’t have a clear understanding of why it was there in the first place. 

Daily Tech Digest - January 05, 2026


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe



How to make AI agents reliable

Easier said than done. After all, the way genAI works, we’re trying to build deterministic software on top of probabilistic models. Large language models (LLMs), cool though they may be, are non-deterministic by nature. Chaining them together into autonomous loops amplifies that randomness. If you have a model that is 90% accurate, and you ask it to perform a five-step chain of reasoning, your total system accuracy drops to roughly 59%. That isn’t an enterprise application; it’s a coin toss—and that coin toss can cost you. Whereas a coding assistant can suggest a bad function, an agent can actually take a bad action. ... Breunig highlights “context poisoning” as a major reliability killer, where an agent gets confused by its own history or irrelevant data. We tend to treat the context window like a magical, infinite scratchpad. It isn’t. It is a database of the agent’s current state. If you fill that database with garbage (unstructured logs, hallucinated prior turns, or unauthorized data), you get garbage out. ... Finally, we need to talk about the user. One reason Breunig cites for the failure of internal agent pilots is that employees simply don’t like using them. A big part of this is what I call the rebellion against robot drivel. When we try to replace human workflows with fully autonomous agents, we often end up with verbose, hedging, soulless text, and it’s increasingly obvious to the recipient that AI wrote it, not you. And if you can’t be bothered to write it, why should they bother to read it?
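The arithmetic behind that 59% figure is worth seeing directly; this short check simply compounds per-step accuracy across a chain.

# Per-step accuracy compounds multiplicatively across an agent's chain.
per_step_accuracy = 0.90
for steps in (1, 3, 5, 10):
    print(f"{steps} steps -> {per_step_accuracy ** steps:.2f} system accuracy")
# 5 steps -> 0.59: roughly the coin toss described above.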


Three Cybersecurity predictions that will define the CISO agenda in 2026

Different tools report different versions of “critical” risk. One team escalates an issue while another deprioritises it based on alternative scoring models. Decisions become subjective, slow and inconsistent without a coherent strategy - and critical attack paths remain open. If cyber risk is not presented consistently in the context of business impact, it’s nearly impossible to align cybersecurity with broader business objectives. In 2026, leaders will no longer tolerate this ambiguity. Boards and executives don’t want more dashboards. ... Social engineering campaigns are already more convincing, more personalised and harder for users to detect. Messages sound legitimate. Voices and content appear authentic. The line between real and fake is blurring at scale. In 2026, mature organisations will take a more disciplined approach. They will map AI initiatives to business objectives, identify which revenue streams and operational processes depend on them, and quantify the value at risk. This allows CISOs to demonstrate where existing investments meaningfully reduce exposure — and where they don’t — while maintaining operational integrity and trust. ... AI agents will take over high-volume, repetitive tasks — continuously analysing vast streams of telemetry, correlating signals across environments, and surfacing the handful of risks that truly matter. They will identify the needle in the haystack. Humans will remain firmly in the loop. 


The Hidden Costs of Silent Technology Failures

"Most CIOs see failures as negative experiences that undermine their credibility, effectiveness and ultimate growth within the organization," Koeppel said. Under those conditions, escalation is rationally delayed. CIOs attempt recovery first, including new baseline plans, renegotiations of vendor commitments and a narrower scope before formally declaring failure. ... CIOs, Dunkin noted, frequently underplay failure to shield their teams from blame. Few leaders want finger-pointing to cascade through already strained organizations. But Dunkin pointed out that the same instincts are shaped by fear of job loss, budget erosion or internal power shifts. And, she warns, bad news does not age well. Beyond politics and incentives, decision-making psychology compounds the problem. Jim Anderson, founder of Blue Elephant Consulting, describes how sunk-cost bias distorts executive judgment. Admitting a mistake publicly opens leaders to criticism, so past decisions are defended rather than reassessed. ... But not all organizations respond this way. Koeppel said that in his experience, boards and CEOs are receptive to clear, concise explanations when technology initiatives deviate from plan. Over time, disclosure improves because consequences change. Sethi described the shift to openness that followed a major outage in one organization. It resulted in mandatory, blameless post-mortem reviews that focused on systemic and process breakdowns rather than individual fault.


2026 Low-Code/No-Code Predictions

The promise of low-code platforms will finally materialize by the end of 2026. AI will let business users create bespoke applications without writing code, while professional developers guide standards, security, and integration. The line between "developer" and "user" will blur as agentic systems become part of daily work. ... No code's extinction: No code's on its last legs — it's being snuffed out by vibe coding. AI-driven development tools will be the death knell for no code as we know it, with its remit curtailed in this new coding landscape. In this future, the focus will transition entirely to model orchestration and high-level knowledge work, where humans express their intent and expertise through abstract models rather than explicit code. The human role becomes centered on the plan to build: specifically, ensuring the problem is correctly scoped and defined. ... In 2026, low-code/no-code interfaces will rapidly shift from drag-and-drop canvases to natural language interfaces, as user expectations adapt to the changing landscape. As this transition occurs, application vendors will struggle to provide transparency into how the application has interpreted the users' intent. ... While it's proved remarkable for supercharging development speed and allowing non-technical individuals to produce functional software, its outputs are less than perfect. This year, we've continued to uncover that much of AI-generated code turns out fragile or flat-out wrong once it faces real workflows or customers.


AI security risks are also cultural and developmental

The research shows that AI systems increasingly shape cultural expression, religious understanding, and historical narratives. Generative tools summarize belief systems, reproduce artistic styles, and simulate cultural symbols at scale. Errors in these representations influence trust and behavior. Communities misrepresented by AI outputs disengage from digital systems or challenge their legitimacy. In political or conflict settings, distorted cultural narratives contribute to disinformation, polarization, and identity-based targeting. Security teams working on information integrity and influence operations encounter these risks directly. The study positions cultural misrepresentation as a structural condition that adversaries exploit rather than an abstract ethics issue. ... Systems designed with assumptions of reliable connectivity or standardized data pipelines fail in regions where those conditions do not hold. Healthcare, education, and public service applications show measurable performance drops when deployed outside their original development context. These failures expose organizations to cascading risks. Decision support tools generate flawed outputs. Automated services exclude segments of the population. Security monitoring systems miss signals embedded in local language or behavior. ... Models operate on statistical patterns and lack awareness of missing data. Cultural knowledge, minority histories, and local practices often remain absent from training sets. This limitation affects detection accuracy. 


The Board’s Duty in the Age of the Black Box

Today, when this Board approves the acquisition of a Generative AI startup or authorizes a billion-dollar investment in GPU infrastructure, you are acquiring a Black Box. You are purchasing a system defined not by logical rules, but by billions of specific weights, biases, and probabilistic outcomes. These systems are inherently unstable; they “hallucinate,” they drift, and they contain latent biases that no static audit can fully reveal. They are closer to biological organisms than to traditional software. ... Critics may argue that applying financial volatility models to operational AI risk is a conceptual leap. There is no perfect mathematical bridge between “Model Drift” and “WACC” (Weighted Average Cost of Capital). However, in the absence of a liquid market for “Algorithm Liability Insurance” or standardized auditing protocols, the Board must rely on empirical proxies to gauge risk. ... The single largest destroyer of capital in the current AI cycle is the misidentification of a “Wrapper” as a “Moat.” The Board must rigorously interrogate the strategic durability of the asset. ... The Risk Committee’s role is shifting from passive monitoring to active defense. The risks associated with AI are “Fat-Tailed”—meaning that while day-to-day operations might be smooth, the rare failure modes are catastrophic. ... For the Chief Information Officer (CIO), the concept of “Model Risk” translates directly into operational reality. It is critical to differentiate between “Valuation Risk” and “Maintenance Cost.”


Cybersecurity leaders’ resolutions for 2026

Any new initiative will start with a clear architectural plan and a deep understanding of end-to-end dependencies and potential points of failure. “By taking a thoughtful, engineering-driven approach — rather than reacting to outages or disruptions — we aim to strengthen the stability, scalability, and reliability of our systems,” he says. “This foundation enables the business to move with confidence, knowing our technology and security investments are built to endure and evolve.” ... As new attack surfaces emerge with AI-driven applications and systems, Piekarski’s priorities will focus on defending and hardening the environment against AI-enabled threats and tactics.  ... In practice, SaaS management and discovery tools will be used to get a handle on shadow IT and unsanctioned AI usage. Automation for compliance and reporting will be important as customer and regulatory requirements around ESG and security continue to grow, along with threat intelligence feeds and vulnerability management solutions that help Gallagher and the team stay ahead of what’s happening in the wild. “The common thread is visibility and control; we need to know what’s in our environment, how it’s being used, and that we can respond quickly when things change,” he tells CSO. ... “Quantum computing poses significant cyber risks by potentially breaking current encryption methods, impacting data security, and enabling new attack vectors,” says Piekarski.


Enterprise Digital Twin: Why Your AI Doesn’t Understand Your Organization

Agentic AI systems are moving from research papers to production pilots, taking critical business actions such as processing invoices, scheduling meetings, drafting communications, and coordinating workflows across teams. They operate with increasing autonomy. When an agent misunderstands organizational context, it does not just give a wrong answer. It takes wrong actions, such as approving expenses that violate policy, scheduling meetings with people who should not be in the room, routing decisions to the wrong authority, and creating compliance exposure at machine speed. The industry is catching up to this reality. ... An AI system reviewing a staffing request might confirm that the budget exists, the policy allows the hire, and the hiring manager has authority. All technically correct. But without Constraint Topology, the system does not know that HR cannot process new hires until Q2 due to a systems migration, that the only approved vendor for background checks has a six-week backlog, or that three other departments have competing requisitions for the same job grade and only two can be filled this quarter. ... Most AI frameworks focus on making models smarter. CTRS focuses on making organizations faster. Technically correct outputs that do not translate into action are not actually useful. The bottleneck is not AI capability. It is the distance between what AI recommends and what the organization can execute.
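A minimal sketch of that staffing-request example: every policy check passes, yet constraints of the kind the article groups under Constraint Topology still make the action unexecutable. The record and constraint formats here are hypothetical, chosen only to make the gap visible.

# Illustrative: "technically correct" checks pass, but organizational
# constraints still block execution. All records are hypothetical.
request = {"type": "hire", "budget_ok": True, "policy_ok": True,
           "authority_ok": True, "quarter": "Q1", "job_grade": "L5"}

CONSTRAINTS = [
    {"desc": "HR systems migration: no new hires until Q2",
     "blocks": lambda r: r["type"] == "hire" and r["quarter"] == "Q1"},
    {"desc": "Background-check vendor backlog: six weeks",
     "blocks": lambda r: r["type"] == "hire"},
]

def blockers(request: dict) -> list[str]:
    """Return constraints that make a policy-compliant action unexecutable."""
    if not all(request[k] for k in ("budget_ok", "policy_ok", "authority_ok")):
        return ["fails basic policy checks"]
    return [c["desc"] for c in CONSTRAINTS if c["blocks"](request)]

print(blockers(request))  # correct on paper, blocked in practice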


The agentic infrastructure overhaul: 3 non-negotiable pillars for 2026

If 2025 was about the brain (the LLM), 2026 must be about the nervous system. You cannot bolt a self-correcting, multi-step agent onto a 2018 ERP and expect it to function. To move from isolated pilots to enterprise-wide autonomous workflows, we must overhaul our architectural blueprint. We are moving from a world of rigid, synchronous commands to a world of asynchronous, event-driven fluidity. ... We build dashboards with red and green lights so a DevOps engineer can identify a spike in latency. However, an AI agent cannot “look” at a Grafana dashboard. If an agent encounters an error mid-workflow, it needs to understand why in a format it can digest. ... Stop “bolting on” agents to legacy REST APIs. Instead, build an abstraction layer — an “agent gateway” — that converts synchronous legacy responses into asynchronous events that your agents can subscribe to. ... The old mantra was “Data is the new oil.” In 2026, data is just the raw material; metadata is the fuel. Businesses have spent millions “cleaning” data in Snowflake warehouses and data lakes, but clean data lacks the intent that agents require to make decisions. ... Invest in a data catalog that supports semantic tagging. Ensure your data engineers are not just moving rows and columns, but are defining the “meaning” of those rows in a way that is accessible via your RAG pipelines. ... The temptation in 2026 will be to build “bespoke” agents for every department — an HR agent, a finance agent, a sales agent. This is a recipe for a new kind of “shadow IT” and massive technical debt.
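As a sketch of that agent-gateway idea, the snippet below wraps a blocking legacy call and republishes its result as an asynchronous event an agent can await. The simulated ERP call, topic name, and queue-based event bus are assumptions for illustration.

# Illustrative "agent gateway": a blocking legacy call is run off the event
# loop and its result is republished as an async event for agents.
import asyncio, time

def legacy_rest_call(order_id: str) -> dict:
    time.sleep(0.5)  # stands in for a blocking HTTP call to a 2018-era ERP
    return {"order_id": order_id, "status": "shipped"}

class AgentGateway:
    def __init__(self):
        self.events: asyncio.Queue = asyncio.Queue()

    async def fetch(self, order_id: str) -> None:
        # Run the blocking call in a thread, then emit an event.
        result = await asyncio.to_thread(legacy_rest_call, order_id)
        await self.events.put({"topic": "order.updated", "payload": result})

async def main():
    gw = AgentGateway()
    asyncio.create_task(gw.fetch("42"))
    event = await gw.events.get()  # an agent subscribing to events
    print("agent received:", event)

asyncio.run(main())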


The New Front Line Of Digital Trust: Deepfake Security

AI-generated deepfakes are ruining the way we perceive one another, as well as undermining institutions’ ways of ensuring identity, verifying intent and maintaining trust. For CISOs and IT security risk leaders, this is a new and pressing frontier for us to focus on: defending against attacks not on systems but on beliefs. ... Deepfakes are coming to the forefront just as CISOs have more risk to manage than ever. Here are some of the other key pressures driving the financial cybersecurity environment today: multicloud misconfigurations and API exposure; the ransomware shift to triple extortion; expanding third-party and fourth-party dependencies; insider threats facing hybrid workforces; barriers to zero-trust implementation; and regulatory fragmentation. ... Deepfake security isn’t a fringe issue anymore; it’s now a foremost challenge to digital trust and systemic financial resilience. In today’s world, where synthetic voices can create markets and fake identities can trigger transactions, authenticity reigns as the currency of banking. Tomorrow’s front-runners will be those building the next-generation financial systems—secured, transparent and globally trusted. Those systems will include reconfigured trust frameworks, deepfake detection, AI governance that drives model integrity and a resilient-by-design approach. In this world, where anyone can create an AI-generated identity, the ultimate competitive differentiator is proving what’s real.

Daily Tech Digest - January 03, 2026


Quote for the day:

“Some people dream of great accomplishments, while others stay awake and do them.” -- Anonymous


Cloud costs now No. 2 expense at midsize IT companies behind labor

The Cloud Capital survey shows midsize IT vendor CFOs and their CIO partners struggling to contain cloud spending, with significant cost volatility from month to month. Three-quarters of IT org CFOs report cloud spending forecasts varying between 5% and 10% of company revenues each month, Pingry notes. Costs of AI workloads are harder to predict than traditional SaaS infrastructure, Pingry adds, and organizations running major AI workloads are more likely to report margin declines tied to cloud spending than those with moderate AI exposure. “Training spikes, usage-driven inference, and experimentation noise introduce non-linear patterns that break the forecasting assumptions finance relies on,” says a report from Cloud Capital. “The challenge will intensify as AI’s share of cloud spend continues scaling.” ... Cloud services in themselves aren’t inherently too expensive, but many organizations shoot themselves in the foot through unintentional consumption, Clark adds. “Costs rise when the system is built without a clear understanding of the value it is meant to deliver,” he adds. ... “No CxO wants to explain to the board why another company used AI to leap ahead,” Clark adds. “This has created a no-holds-barred spending spree on training, inference, and data movement, often layered on top of architectures that were already economically incoherent.”


Securing Integration of AI into OT Technology

For critical infrastructure owners and operators, the goal is to use AI to increase efficiency and productivity, enhance decision-making, save costs, and improve customer experience – much like digitalization. However, despite the many benefits, integrating AI into operational technology (OT) environments that manage essential public services also introduces significant risks – such as OT process models drifting over time or safety-process bypasses – that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure. ... Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle. ... Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration. ... Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance. ... Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans. The agencies said critical infrastructure owners and operators should review this guidance so they can safely and securely integrate AI into OT systems.


Rethinking Risk in a Connected World

As consumer behavior data proliferates and becomes increasingly available, it presents both an opportunity and a challenge for actuaries, Samuell says. Actuaries have the opportunity to better align expected and actual outcomes, while also facing the challenge of accounting for new sources of variability that traditional data does not capture. ... Keep in mind that incorporating behavioral factors into risk models does not guarantee certainty. A customer whom the model predicts to be at high risk of dishonesty may actually act honestly. “Ethical insurers must avoid treating predictive categories as definitive labels,” Samuell says. “Operational guidelines should ensure that all customers are treated with fairness and dignity, even as insurers make better use of available data.” ... Behavioral analytics is also changing how insurers engage with their customers. For example, by understanding how policyholders interact with digital platforms—including how often they log in, which features they use, and where they disengage—insurers can identify friction points and design more intuitive, personalized services. ... Consumer behavior data can also inform communication strategies for insurers. For example, “actuaries often want to be very precise, but data shows that can diminish comprehension of communications,” Stevenson says. ... In addition to data generated by insured individuals through technology, some insurance companies also use data from government and other sources in risk modeling. 


Inside the Cyber Extortion Boom: Phishing Gangs and Crime-as-a-Service Trends

Phishing attempts are growing in volume partly because organized crime groups no longer need technical knowledge to launch ransomware or other forms of cyber extortion: they can simply buy in the services they need. This ongoing trend is combined with emerging social engineering techniques, including multi-channel attacks, deepfakes and ClickFix exploits. Cybercriminals are also using AI to fine-tune their operations, with more persuasive personalization, better translation into other languages and easier reconnaissance against high-value targets. It is becoming harder to detect and block attacks, and harder to train workforces to spot suspicious activity. ... “AI has increased the accuracy of a lot of phishing emails. Everybody was familiar with phishing emails; you could spot them by the bad grammar and the poor formatting and stuff like that. Previously, a good attacker could create a good phishing email. All AI has done is allowed the attacker to generate good-quality phishing emails at speed and at scale,” explained Richard Meeus, EMEA director of security strategy and technology at Akamai. ... For CISOs, wider cybersecurity and fraud prevention teams, recent developments in phishing and cyber extortion schemes will pose real challenges in the coming year. “User awareness still matters, but it isn’t enough,” cautioned Forescout’s Ferguson. “In a world of deepfake video, cloned voices and perfect written English, your control point can’t be ‘would our users spot this?’”


AI Fatigue: Is the backlash against AI already here?

The problem of AI fatigue is inevitable, but also to be expected, according to Dr Clare Walsh, director of education at the Institute of Analytics (IoA). “For those working in digital long enough, they know there is always a period after the initial excitement at the launch of a new technology when ordinary users start to see the costs and limitations of the latest technologies,” she says. “After 10 years of non-stop exciting advancements – from the first neural nets in 2016 to RAG solutions today – we may have forgotten this phase of disappointment was coming. It doesn’t negate the potential of AI technology – it is just an inevitable part of the adoption curve.” ... Holding back the tide of AI fatigue is also about not presenting it as the only solution to every problem, warns Claus Jepsen, Unit4’s CTO. “It is absolutely critical the IT team is asking the right questions and thoroughly interrogating the brief from the business,” he explains. “Quite often, AI is not the right answer. If you foist AI onto the business when they don’t want or need it, you’ll get a backlash. You can avoid the threat of AI fatigue if you listen carefully to your team and really appreciate how they want to interact with technology, where its use can be improved, and where it adds absolutely no value.” ... “AI fatigue is not just a productivity issue; it is a board-level risk,” she says. “When workflows are interrupted, or systems overlap, trust in technology erodes, driving disengagement, errors, and higher attrition. ...”


Why Cybersecurity Risk Management Will Continue to Increase in Complexity in 2026

The year 2026 ushers in tougher rules across regions and industries. Compliance pressure continues to build from multiple directions. By 2026, sector-specific and regional rules will grow tighter, from NIS2 enforcement across Europe to updated PCI DSS controls, alongside firmer privacy and AI oversight. Privacy laws continue tightening while new AI regulations add requirements around algorithmic transparency and data handling. Organizations are now juggling NIST frameworks, ISO 27001 certifications, and sector-specific mandates simultaneously. Each framework arrives with a valid intent, yet together they create layers of obligation that rarely align cleanly. This tension surfaced clearly in 2025, when more than forty CISOs from global enterprises urged the G7 and OECD to push for closer regulatory coordination. Their message was simple. Fragmented rules drain limited security resources and weaken collective response. ... The majority of organizations no longer run security in isolation. Daily operations depend on cloud providers, managed service partners, niche SaaS tools, and open-source libraries pulled into production without much ceremony. The problem keeps compounding: your vendors have their own vendors, creating chains of dependency that stretch impossibly far. You can secure your own network perfectly and still get breached because a third-party contractor left credentials exposed.


Seven steps to AI supply chain visibility — before a breach forces the issue

NIST’s AI Risk Management Framework, released in 2023, explicitly calls for AI-BOMs as part of its “Map” function, acknowledging that traditional software SBOMs don’t capture model-specific risks. But software dependencies resolve at build time and stay fixed. Conversely, model dependencies resolve at runtime, often fetching weights from HTTP endpoints during initialization, and mutate continuously through retraining, drift correction, and feedback loops. LoRA adapters modify weights without version control, making it impossible to track which model version is actually running in production. ... AI-BOMs are forensics, not firewalls. When ReversingLabs discovered nullifAI-compromised models, documented provenance would have immediately identified which organizations downloaded them. That’s invaluable to know for incident response, while being practically useless for prevention. Budgeting for protecting AI-BOMs needs to take that factor into account. The ML-BOM tooling ecosystem is maturing fast, but it's not where software SBOMs are yet. Tools like Syft and Trivy generate complete software inventories in minutes. ML-BOM tooling is earlier in that curve. Vendors are shipping solutions, but integration and automation still require additional steps and more effort. Organizations starting now may need manual processes to fill gaps. AI-BOMs won't stop model poisoning as that happens during training, often before an organization ever downloads the model.
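Given that runtime resolution, one partial mitigation is to pin the expected digest of each weights artifact in the AI-BOM and refuse to load anything that does not match. The sketch below is illustrative and self-contained; as the article notes, it cannot stop poisoning that happened before the hash was recorded.

# Illustrative runtime check: verify a weights artifact against the digest
# recorded in the AI-BOM before handing it to the model runtime.
import hashlib

# Digest recorded in the AI-BOM when the model was approved (hypothetical).
approved_digest = hashlib.sha256(b"weights bytes as approved").hexdigest()

def load_weights(blob: bytes, pinned_digest: str) -> bytes:
    """Refuse to load weights unless their digest matches the AI-BOM record."""
    if hashlib.sha256(blob).hexdigest() != pinned_digest:
        raise RuntimeError("digest mismatch with AI-BOM record; refusing to load")
    return blob

load_weights(b"weights bytes as approved", approved_digest)   # ok
# load_weights(b"weights fetched at runtime", approved_digest)  # would raise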


Power, compute, and sovereignty: Why India must build its own AI infrastructure in 2026

Digital infrastructure decisions made in 2026 will shape India’s technological posture well into the 2040s. Data centers, power systems, and AI platforms are not short-cycle investments; they are multi-decade commitments. In this context, policy clarity becomes a prerequisite for execution rather than an afterthought. Clear, stable frameworks around data governance, AI regulation, cross-border compute flows, and energy integration reduce long-term risk and enable infrastructure to be designed correctly the first time. Ambiguity forces fragmentation: capital hesitates, architectures become reactive, and systems are retrofitted instead of engineered. As India accelerates its AI ambitions, predictability in policy will be as important as speed in deployment. ... In India’s context, sovereignty does not imply isolation. It implies resilience. Compliance, data residency, and AI governance cannot be retrofitted into infrastructure after it is built. They must be embedded from inception, governing where data resides, how it moves, how workloads are isolated, audited, and secured, and how infrastructure responds to evolving regulatory expectations. Systems designed this way reduce friction for enterprises operating in regulated environments and provide governments with confidence in domestic digital capability. This reality also reframes the role of domestic technology firms.


Why AI Risk Visibility Is the Future of Enterprise Cybersecurity Strategy

Vulnerabilities arise from two sources: internal infrastructure and third-party tools that companies rely on. Organizations typically have stronger control over internally developed systems. The complexity stems from third-party software that introduces new risks whenever a new version or patch is released. A comprehensive asset inventory is essential for documenting the software and hardware resources in use. Once the enterprise knows what it has, it can evaluate which systems pose the highest risk. Asset management, infrastructure, and information security teams, along with audit functions, all contribute to that assessment. Together, they can determine where remediation must occur first. Cloud service providers are responsible for cloud-based Software as a Service (SaaS) applications. It’s vital, however, for the company to take on data governance and service offboarding responsibilities. Contracts must clearly specify how data is handled, transferred, or destroyed at the end of the relationship. ... Alignment between business and IT leadership is essential. The chief information officer (CIO) approves the IT project kickoff and allocates the required budget and other resources. The business analysis team translates those needs into technical requirements. Quarterly scorecards and governance checkpoints create visibility, enabling leaders to make decisions that balance business outcomes and technical realities.


Why are IT leaders optimistic about future AI governance

IT leaders are optimistic about AI’s transformative potential. This optimism extends to AI governance, where the strategic integration of non-human identity (NHI) management enhances security and enables organizations to confidently pursue AI initiatives. It’s essential to ensure that security measures evolve alongside technological advancements, safeguarding AI systems without stifling innovation. ... Can robust security and innovation coexist harmoniously? The answer lies in striking a balance between rigorous security measures and fostering an environment conducive to innovation. Properly managing NHIs equips organizations with the flexibility to innovate while maintaining a fortified security posture. As artificial intelligence and automation continue to advance, machine identities play an increasingly pivotal role in enabling these technologies. By ensuring that machine interactions are secure and transparent, businesses can confidently explore the transformative potential of AI without compromising on security. Herein lies the essence of responsible AI governance: leveraging data-driven insights to enable ethical and sustainable technological growth while safeguarding against inherent risks. ... What can organizations do to harness the collective expertise of stakeholders? In an environment where cyber threats are increasingly sophisticated, collaboration becomes the cornerstone of a resilient cybersecurity framework.

Daily Tech Digest - January 02, 2026


Quote for the day:

“If your ship doesn’t come in, swim out to meet it!” -- Jonathan Winters



Delivering resilience and continuity for AI

Think of it as technical debt, suggests IDC group VP Daniel Saroff, as most enterprises underestimate the strain AI puts on connectivity and compute. Siloed infrastructure won’t deliver what AI needs, and CIOs need to think about these and other things in a more integrated way to make AI successful. “You have to look at your GPU infrastructure, bandwidth, network availability, and connectivity between respective applications,” he says. “If you have environments not set up for highly transactional, GPU-intensive environments, you’re going to have a problem,” Saroff warns. “And having very fragmented infrastructure means you need to pull data and integrate multiple different systems, especially when you start to look at agentic AI.” ... Making AI scale will almost certainly mean taking a hard look at your data architecture. Every database adds features for AI. And lakehouses promise you can bring operational data and analytics together without affecting the SLAs of production workloads. Or you can go further with data platforms like Microsoft Fabric that bring in streaming and time-series data to use for AI applications. If you’ve already tried different approaches, you likely need to rearchitect your data layer to get away from the operational sprawl of fragmented microservices, where every data hand-off between separate vector stores, graph databases, and document silos introduces latency and governance gaps. Too many points of failure make it hard to deliver high availability guarantees.


Technological Disruption: Strategic Inflection Points From 2026 - 2036

From a defensive standpoint, AI-driven security solutions will provide continuous surveillance, automated remediation, and predictive threat modeling at a scale unattainable by human analysts. Simultaneously, attackers will utilize AI to create polymorphic malware, execute influence operations, and exploit holes at machine speed. The outcome will be an environment where cyber war progresses more rapidly than conventional command-and-control systems can regulate. As we approach 2036, the primary concern will be AI governance rather than AI capacity. ... From 2026 to 2030, enterprises will increasingly recognize that cryptographic agility is vital. The move to post-quantum cryptography standards means that old systems, especially those in critical infrastructure, financial services, and government networks, need to be fully inventoried, evaluated, and upgraded. By the early 2030s, quantum innovation will transcend cryptography, impacting optimization, materials science, logistics, and national security applications. ... In the forthcoming decade, supply chain security will transition from compliance-based evaluations to ongoing risk intelligence. Transparency methods, including software bills of materials, hardware traceability, and real-time vendor risk assessment, will evolve into standard expectations rather than just best practices. Supply chain resilience will strategically impact national competitiveness.


True agentic AI is years away - here's why and how we get there

We're not there yet. We're not even close. Today's bots are limited to chat interactions and often fail outside that narrow operating context. For example, what Microsoft calls an "agent" in the Microsoft 365 productivity suite, probably the best-known instance of an agent, is simply a way to automatically generate a Word document. Market data shows that agents haven't taken off. ... Simple automations can certainly bring about benefits, such as assisting a call center operator or rapidly handling numerous invoices. However, a growing body of scholarly and technical reports has highlighted the limitations of today's agents, which have failed to advance beyond these basic automations. ... Before agents can live up to the "fully autonomous code" hype of Microsoft and others, they must overcome two primary technological shortcomings. Ongoing research across the industry is focused on these two challenges: Developing a reinforcement learning approach to designing agents; and Re-engineering AI's use of memory -- not just memory chips such as DRAM, but the whole phenomenon of storing and retrieving information. Reinforcement learning, which has been around for decades, has demonstrated striking results in enabling AI to carry out tasks over a very long time horizon. ... On the horizon looms a significant shift in reinforcement learning itself, which could be a boon or further complicate matters. Can AI do a better job of designing reinforcement learning than humans?


Why Developer Experience Matters More Than Ever in Banking

Effective AI assistance, in fact, meets developers where they are—or where they work. Some prefer a command-line interface, others live inside an IDE, and still others rely heavily on sample code and language-specific SDKs. A strong DX strategy supports all of these modes, using AI to surface accurate, context-aware guidance without forcing developers into a single workflow. When AI reinforces clarity, it becomes a force multiplier. ... As AI-assisted development becomes more common, the quality of documentation takes on new importance. Because it is no longer read only by humans, documentation increasingly serves as the knowledge base that enables AI agents that help developers search, generate, and validate code. When documentation is vague or poorly structured, it introduces confusion, often in ways that actively undermine developer confidence. ... In highly regulated environments, developers want, and expect, guardrails—but not at the expense of speed and consistency. One of the most effective ways to balance those demands is by codifying business rules and compliance requirements directly into the platform, rather than relying on manual, human-driven review at key milestones. Talluri describes this approach as “policy as code”: embedding rules, validations, and regional requirements into the system so developers receive immediate, actionable prompts and feedback as they work. ... The business case for exceptional developer experience rests on a simple truth: trust drives productivity.
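A minimal illustration of that “policy as code” idea: rules live as executable checks that give a developer immediate, actionable feedback instead of waiting for a manual review milestone. The two rules and the config shape below are hypothetical.

# Illustrative "policy as code": compliance rules as executable checks.
POLICIES = [
    ("data-residency",
     lambda cfg: cfg["region"] in cfg["allowed_regions"],
     "data must stay in an approved region"),
    ("encryption-at-rest",
     lambda cfg: cfg.get("kms_key") is not None,
     "storage must reference a KMS key"),
]

def validate(cfg: dict) -> list[str]:
    """Return one message per violated policy; empty list means compliant."""
    return [f"{name}: {msg}" for name, check, msg in POLICIES if not check(cfg)]

violations = validate({"region": "eu-west-1",
                       "allowed_regions": ["eu-west-1", "eu-central-1"],
                       "kms_key": None})
print(violations or "policy checks passed")  # immediate feedback as you work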


AI-powered testing for strategic leadership

“Nearly half of teams still release untested code due to time pressure, creating fragile systems and widening risk exposure. Legacy architectures further compound this, making modernisation difficult and slowing down automated validation,” he said. AI-generated code also introduces new vulnerabilities. Without strong validation pipelines, testing quickly becomes the bottleneck of transformation. Developers often view testing as tedious, and with modern codebases spanning multiple interconnected applications, the challenge intensifies. At the same time, misalignment between leadership and engineering teams leads to unclear priorities and rushed decisions. While the pace of development already feels fast, it is only set to accelerate. To overcome barriers, CIOs can adopt model-based, codeless AI testing that reduces dependence on fragile code-level automation and cuts ongoing maintenance. This approach can reduce manual effort by 80%–90% and enables non-technical experts to participate through natural-language and visual test generation. For Wong, strong governance is vital. This entails domain-trained, testing-specific AI that avoids hallucinations and supports safe, transparent validation. Instead of becoming autonomous, AI can act as a co-pilot working alongside developers. “By aligning teams, modernising toolchains, and embedding guardrails, CIOs can shift from reactive firefighting to proactive, AI-driven quality engineering,” he said.


The Architect’s Dilemma: Choose a Proven Path or Pave Your Own Way?

Platforms and frameworks are like paved roads that may help a team progress faster on their journey, with well-defined "exit ramps" or extension points where a team can extend the platform to meet their needs, but they come with side-effects that may make them undesirable. Teams need to decide when, if ever, they need to leave the path others have paved and find their own way by developing extensions to the platform or framework, or by developing new platforms or frameworks. The challenge teams face when they use platforms or frameworks as the basis for their software architectures is to choose the "paved road" (platform or framework) that gets them closest to their desired destination with minimal diversions or new construction. ... Many platform decisions are innocuous and can be accepted and ignored when they don’t affect the QARs that the team needs to meet. The only way to know whether the decisions are harmful is through experiments that expose when the platform is failing to meet the goals of the system. Since the decisions made by the platform developers are often undocumented and/or unknowable, it’s imperative that teams be able to test their system (including the platforms on which they are built) to make sure that their architectural goals (i.e. QARs) are being met. ... Using the "paved road" metaphor, the LLM provides a proven path but it does not take the team where they need to go. When this happens, they have no choice but to either start extending the platform (if they can), finding a different platform, or building their own platform.
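One way to run the experiments the authors call for is an executable check against a QAR, in the spirit of an architectural fitness function. The sketch below tests a hypothetical p95 latency goal for a platform call; the goal, the call, and the sample size are all assumptions for illustration.

# Illustrative QAR check: does the platform still meet a p95 latency goal?
import time, statistics

P95_GOAL_MS = 50

def platform_call():
    time.sleep(0.01)  # stand-in for the platform operation under test

samples = []
for _ in range(100):
    start = time.perf_counter()
    platform_call()
    samples.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
assert p95 <= P95_GOAL_MS, f"platform violates QAR: p95 {p95:.1f} ms > {P95_GOAL_MS} ms"
print(f"p95 {p95:.1f} ms within the {P95_GOAL_MS} ms goal")

Run regularly, a check like this exposes when an undocumented platform decision starts failing the system’s architectural goals.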


Supply chains, AI, and the cloud: The biggest failures (and one success) of 2025

By compromising a single target with a large number of downstream users—say a cloud service or maintainers or developers of widely used open source or proprietary software—attackers can infect potentially millions of the target’s downstream users. ... Another significant security story cast both Meta and Yandex as the villains. Both companies were caught exploiting an Android weakness that allowed them to de-anonymize visitors so years of their browsing histories could be tracked. The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allowed Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. ... The outage with the biggest impact came in October, when a single point of failure inside Amazon’s sprawling network took out vital services worldwide. It lasted 15 hours and 32 minutes. The root cause that kicked off a chain of events was a software bug in the software that monitors the stability of load balances by, among other things, periodically creating new DNS configurations for endpoints within the Amazon Web Services network. A race condition—a type of bug that makes a process dependent on the timing or sequence of events that are variable and outside the developers’ control—caused a key component inside the network to experience “unusually high delays needing to retry its update on several of the DNS endpoint,” Amazon said in a post-mortem.


The Evolving Cybersecurity Challenge for Critical Infrastructure

Convergence between OT, IT and the cloud is providing cybercriminal groups with the opportunity to target critical infrastructure. Operators and regulators are wrestling with new technology and new manufacturers outside the traditional OT/ICS supply chain. “With the geopolitical tensions and the way that the world will look in maybe a few years, they're starting to scratch their heads and think, ‘okay, is it secure? Is it safe? How was it developed? Is there any remote access? How is it being configured?’ There are things that are being done now, that will have an effect in a few years’ time,” cautioned Daniel dos Santos, head of security research at Forescout's Vedere Labs. Given the lifespans of operational technology, installing insecure equipment now can have long-term consequences. Meanwhile, CISOs must contend with older hardware that was not designed for modern threats. Even where vendors release patches, CNI operators do not always apply them, either because of concerns about business interruption or a lack of visibility. ... Threats to CNI are not likely to abate in 2026. Legislators are putting more emphasis on cyber resilience, and legislation such as the EU’s Cyber Resilience Act will improve the security of connected devices. But these upgrades take time. “Threats from criminal groups continue to grow exponentially,” said Phil Tonkin, CTO at OT security specialist Dragos.


The changing role of the MSP: What does this mean for security?

MSPs hold a unique position within the IT ecosystem, as they are often responsible for managing and supporting the IT infrastructures, cloud services, and cybersecurity of many different organizations. These trusted partners often have privileged access to the inner workings of the organizations they support, including access to the critical systems, sensitive information, and intellectual property of their clients. ... Research shows that over half of MSP leaders globally believe that their customers are at more risk today than this time last year when it comes to cyber threats, with AI-based attack vectors, ransomware/malware, and insider threats the most commonly faced threats. As a result of this uptick in threats, more organizations than ever are leaning on MSPs for cyber support. In fact, in 2025, 84% of MSPs managed either their clients’ cyber infrastructure or their cyber and IT estates combined, up significantly from 64% the previous year. This shows that SMEs are realising they cannot handle cybersecurity alone and are turning to MSPs for additional help. Cybersecurity is no longer an optional extra or add-on; it’s becoming a core, expected service for MSPs. MSP leaders are transitioning from general IT support to becoming essential cybersecurity guardians. ... MSPs that adapt by investing in specialized cybersecurity expertise, advanced technologies, and a proactive security posture will thrive, becoming indispensable partners to businesses navigating the complex world of cyber risk.


What’s next for Azure containers?

Until now, even though Azure has had deep eBPF support, you’ve had to bring your own eBPF tools and manage them yourself, which does require expertise to run at scale. Not everyone is a Kubernetes platform engineer, and with tools like AKS providing a managed environment for cloud-native applications, having a managed eBPF environment is an important upgrade. The new Azure Managed Cilium tool provides a quick way of getting that benefit in your applications, using it for host routing and significantly reducing the overhead that comes with iptables-based networking. ... Declarative policies let Azure lock down container features to reduce the risk of compromised container images affecting other users. At the same time, it’s working to secure the underlying host OS, which for ACI is Linux. SELinux allows Microsoft to lock that image down, providing an immutable host OS. However, those SELinux policies don’t cross the boundary into containers, leaving their userspace vulnerable. ... Having a policy-driven approach to security helps quickly remediate issues. If, say, a common container layer has a vulnerability, you can build and verify a patch layer and deploy it quickly. There’s no need to patch everything in the container, only the relevant components. Microsoft has been doing this for OS features for some time now as part of its internal Project Copacetic, and it’s extending the process to common runtimes and libraries, building patches with updated packages for tools like Python.
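
The patch-layer idea is easiest to see in a toy model. The sketch below is purely conceptual (made-up package names, and not how Copacetic actually builds or formats patches): image layers behave like ordered dictionaries where later layers supersede earlier ones, so a small patch layer can remediate one vulnerable component without rebuilding the rest.

```python
# Toy model of image layers: each layer maps package -> version, and
# later layers override earlier ones. Names and versions are made up.
base_layer = {"openssl": "3.0.1", "zlib": "1.2.13", "python": "3.11.4"}
app_layer = {"myapp": "2.4.0"}
patch_layer = {"openssl": "3.0.7"}  # only the vulnerable component changes

def effective_image(*layers: dict[str, str]) -> dict[str, str]:
    """Merge layers in order, mimicking how a patch layer supersedes a
    vulnerable package without touching the rest of the image."""
    merged: dict[str, str] = {}
    for layer in layers:
        merged.update(layer)
    return merged

patched = effective_image(base_layer, app_layer, patch_layer)
assert patched["openssl"] == "3.0.7"   # remediated
assert patched["myapp"] == "2.4.0"     # application untouched
```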

Daily Tech Digest - January 01, 2026


Quote for the day:

"It always seems impossible until it’s done." -- Nelson Mandela



Why data trust is the missing link in digital transformation

Data trust is often framed as a technical issue, delegated to IT or data teams. In reality, it is a business capability with direct implications for growth, risk, and reputation. Trusted data enables organisations to: confidently automate customer and operational workflows; personalise experiences without introducing errors; improve forecasting and performance reporting; and reduce operational rework and exception handling. When data cannot be trusted, leaders are forced to rely on manual checks, conservative assumptions, and duplicated processes. This increases cost and slows decision-making - the opposite of what digital transformation aims to achieve. ... Establishing data trust is not a one-time project. It requires a shift in mindset across the organisation. Data quality should be viewed as a shared responsibility, supported by the right processes and tools. Leading organisations embed data validation into their digital workflows, measure data quality as part of system health, and treat trusted data as a strategic asset. Over time, this creates a culture where decisions are made with confidence and transformation initiatives are more likely to succeed. ... Digital transformation is ultimately about enabling better decisions, faster execution, and stronger customer relationships. None of these goals can be achieved without trusted data. As organisations continue to modernise their platforms and processes, data quality should be treated as core infrastructure, not an afterthought.
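
As a minimal sketch of what embedding validation into a workflow can look like (the record fields and rules here are hypothetical), records that fail checks are quarantined for exception handling while the rest flow into the automated path, and the pass rate doubles as a data-quality health metric.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    email: str
    country: str
    lifetime_value: float

def validate(record: CustomerRecord) -> list[str]:
    """Return a list of data-quality issues; an empty list means trusted."""
    issues = []
    if "@" not in record.email:
        issues.append("malformed email")
    if not record.country:
        issues.append("missing country")
    if record.lifetime_value < 0:
        issues.append("negative lifetime value")
    return issues

def process(records: list[CustomerRecord]) -> float:
    """Run the automated path only for trusted records and report the
    pass rate as a simple system-health signal."""
    passed = 0
    for r in records:
        problems = validate(r)
        if problems:
            # Route to exception handling instead of the automated path.
            print(f"quarantined {r.email!r}: {', '.join(problems)}")
        else:
            passed += 1  # safe to automate, no manual double-checking
    return passed / len(records) if records else 1.0
```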


Health Data Privacy, Cyber Regs: What to Watch in 2026

When federal regulators hesitate, states often jump in to fill privacy and security gaps involving health data. That includes mandates in New York to shore up cybersecurity at certain hospitals (see: New York Hospitals Are Facing Tougher Cyber Rules Than HIPAA). Also worth watching is the New York Health Information Privacy Act, Greene said. "It was passed by both New York legislative chambers in January but has not yet been formally submitted to the governor for signature, with lobbying efforts underway to amend it." "In its most recent version, it would be the toughest health privacy law in the country in many respects, including a controversial prohibition on obtaining consents for secondary uses of data until at least 24 hours after an individual creates an account or first uses the requested product or service," Greene said. ... Greene predicted HIPAA resolution agreements and civil monetary penalties will continue much as they have in years past, with one to two dozen such cases next year. HHS has recently indicated that it intends to begin enforcing the Information Blocking Rule. "The primary target will be health IT developers," Greene said. "I expect that there are less information blocking issues with health information networks and believe that the statute and regulation's knowledge standard makes it more challenging to enforce against healthcare providers because the government must prove that a healthcare provider knew its practice to be unreasonable."


From integration pain to partnership gain: How collaboration strengthens cybersecurity

When collaborators leverage data in specific cybersecurity work, they unlock several valuable benefits, especially since no organization has complete insight into every possible threat. A shared, data-driven cybersecurity framework can offer both sides a better understanding of existing and emerging threats that could undermine one or both collaborators. Data-driven collaboration also enables partners to become more proactive in their cybersecurity posture. Coordinated data can give business partners insights into where there’s greater exposure to a cyberattack, allowing partners to work together with data-backed guidance on how to better prepare. ... The Vested model — an innovative approach based on research from the University of Tennessee — focuses on shared goals and outcomes rather than traditional transactional buyer and seller agreements. Both companies agreed on a specific set of KPIs they could use to measure the health of the partnership and keep their security goals on track, allowing them to continue to adapt cybersecurity initiatives as needs and threats evolve. “You have to build, maintain and exercise the right partnerships with business units and shared services across the enterprise so continuity plans identify the issue quickly, deploy appropriate mitigations, and ultimately restore client and business services as quickly as possible,” says Royce Curtin, IBM’s former VP of corporate security.


AI governance: A risk and audit perspective on responsible AI adoption

AI governance refers to the policies, procedures, and oversight mechanisms that guide how AI systems are developed, deployed, and monitored. It ensures that AI aligns with business objectives, complies with applicable laws, and operates in a way that is ethical and transparent. Regulatory scrutiny is increasing. The EU AI Act is setting a precedent for global standards, and U.S. agencies are signaling more aggressive enforcement, particularly in sectors like healthcare, finance, and employment. Organizations are expected to demonstrate accountability in how AI systems make decisions, manage data, and interact with users. Beyond regulation, there is growing pressure from customers, employees, and investors. ... Audit teams also help boards and audit committees understand the risks associated with AI. Their work supports transparency and builds trust with regulators and stakeholders. As AI becomes more embedded in business operations, internal audit must expand its scope to include model governance, data lineage, and ethical risk. ... Organizations that treat AI as a strategic risk are better positioned to scale it responsibly. Risk and internal audit teams have a central role in ensuring that AI systems are secure, compliant, and aligned with business goals. Citrin Cooperman helps organizations navigate AI adoption with confidence by combining deep risk expertise, practical governance frameworks, and advanced technology solutions that support secure, scalable, and compliant growth.


Six data shifts that will shape enterprise AI in 2026

While RAG won't entirely disappear in 2026, one approach that will likely surpass it in usage for agentic AI is contextual memory, also known as agentic or long-context memory. This technology enables LLMs to store and access pertinent information over extended periods. Multiple such systems emerged over the course of 2025, including Hindsight, the A-MEM framework, General Agentic Memory (GAM), LangMem, and Memobase. RAG will remain useful for static data, but agentic memory is critical for adaptive assistants and agentic AI workflows that must learn from feedback, maintain state, and adapt over time. In 2026, contextual memory will no longer be a novel technique; it will become table stakes for many operational agentic AI deployments. ... In 2025, we saw numerous innovations, such as AI models that can parse data from unstructured sources like PDFs. That capability has existed for several years but proved harder to operationalize at scale than many assumed. Databricks now has an advanced parser, and other vendors, including Mistral, have emerged with their own improvements. The same is true of natural language to SQL translation. While some might have assumed that was a solved problem, it continued to see innovation in 2025 and will see more in 2026. It's critical for enterprises to stay vigilant in 2026.
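
None of the frameworks named above is reproduced here, but a toy sketch can illustrate how agentic memory differs from stateless retrieval: observations persist across sessions, and recall is weighted by recency and past usefulness rather than recomputed from static documents each time. Everything below (class names, scoring weights) is an illustrative assumption.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    created: float = field(default_factory=time.time)
    uses: int = 0

class AgentMemory:
    """Toy long-lived memory: store observations, retrieve by keyword
    overlap weighted by recency and how useful an item has proven."""

    def __init__(self) -> None:
        self.items: list[MemoryItem] = []

    def remember(self, text: str) -> None:
        self.items.append(MemoryItem(text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        words = set(query.lower().split())

        def score(m: MemoryItem) -> float:
            overlap = len(words & set(m.text.lower().split()))
            age_hours = (time.time() - m.created) / 3600
            return overlap + 0.1 * m.uses - 0.01 * age_hours

        ranked = sorted(self.items, key=score, reverse=True)[:k]
        for m in ranked:
            m.uses += 1  # reinforce what proves useful, so it persists
        return [m.text for m in ranked]
```

Because the store mutates as the agent works, it can maintain state across a long-running workflow, which is the property the paragraph above describes.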


Communicating AI Risk to the Board With Confidence

Most board members can comprehend that AI will drive growth. What they fail to grasp concretely is how the technology introduces a massive amount of exposure. This predicament is typically a result of how information is presented. Security and risk managers (SRMs) often describe AI incidents in the vocabulary of adversarial inputs, model drift, and architecture choices, which matter deeply but rarely answer the questions that directors tackle during their meetings. High-level stakeholders, in reality, are concerned with issues such as revenue protection, operational continuity, and competitive differentiation, creating a gap that requires more than translating acronyms. ... Traditional discussions about technology risk revolve around the triad of confidentiality, integrity, and availability. Boards know these categories well, and over the past few decades, they have learned that cybersecurity failures directly affect the business along these lines. GenAI has upended this familiar structure: its associated risks are not limited to any one of these three domains. ... When the conversation begins with the business consequence, though, the relevance is immediate. The most effective approach involves replacing the mechanics that mean so much to internal teams with the strategic information boards need to operate. These details open a path for meaningful conversations that encourage directors to think through the implications and make more informed decisions.


The six biggest security challenges coming in 2026

For many organizations, cybersecurity and resilience are a compliance exercise. But they must evolve into “a core intentional cybersecurity capability”, says Dimitriadis. “In 2026, organizations will need to build the capacity to anticipate regulatory changes, understand their strategic implications, and embed them into long-term planning.” ... Attackers are leveraging AI to create convincing email templates and fake websites “almost indistinguishable” from real ones – and without the common warning signs employees are trained to identify, says Mitchell. AI is also being used in vishing attacks, with deepfakes making it easier to clone the voices of high-ranking company executives to trick victims. In 2026, there will be more attacks utilizing realistic voice cloning and high-quality video deepfakes, says Joshua Walsh ... There is a current shift towards agentic AI that can take real-world actions, such as adjusting configurations, interacting with APIs, booking services and initiating financial tasks. This can increase efficiency, but it can also lead to unsafe decisions made at speed, says rradar’s Walsh. An agent told to "optimize performance" might disable logging or bypass authentication because it views security controls as delays, he suggests. Prompt injection is a hidden issue to look out for, he adds. “If a threat actor slips hidden instructions into data that the agent consumes, they can make it run actions on internal systems without anyone realising.”
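
One minimal pattern for containing both risks is to interpose a guardrail between the agent’s proposals and real-world execution. The sketch below uses illustrative action names, not any specific product’s API: a default-deny allowlist, human approval for sensitive actions, and a rule that instructions originating in consumed content are never treated as operator intent.

```python
# Illustrative guardrail: the agent proposes actions, but only an
# explicit allowlist is executable, and sensitive actions need sign-off.
ALLOWED_ACTIONS = {"read_metrics", "scale_replicas", "open_ticket"}
NEEDS_APPROVAL = {"scale_replicas"}

def execute(action: str, from_untrusted_content: bool) -> str:
    if action not in ALLOWED_ACTIONS:
        # "disable_logging" or "bypass_auth" simply cannot be executed.
        return f"refused: {action!r} is not on the allowlist"
    if from_untrusted_content:
        # Instructions embedded in consumed data (web pages, emails) are
        # treated as prompt injection, never as operator intent.
        return f"refused: {action!r} was requested by untrusted input"
    if action in NEEDS_APPROVAL:
        return f"queued: {action!r} awaits human approval"
    return f"executed: {action!r}"

print(execute("disable_logging", False))   # refused outright
print(execute("open_ticket", True))        # refused: injected instruction
print(execute("scale_replicas", False))    # queued for a human
```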


5 Changes That Will Define AI-Native Enterprises in 2026

As enterprises scale to multi-agent systems, the engineering focus will shift from creating prompts to architecting context. Multi-agent workflows rapidly expand context requirements, with tool definitions, conversation history, and data from multiple sources. This creates two challenges: context windows fill up, and models suffer from “context rot,” forgetting information buried in lengthy prompts. By mid-2026, context engineering will emerge as a distinct discipline with dedicated teams and specialized infrastructure, serving agents the minimal but complete information they need. The best context engineers will understand both LLM constraints and their business domain’s semantic structure. ... Enterprises are realizing that AI agents need both data and meaning. Companies that spent years perfecting data lakes are already finding those assets insufficient. AI can retrieve data, but without semantic context, it can’t interpret meaning or intent. That’s why teams will move beyond vector search toward building knowledge graphs, ontologies, and metadata-driven maps that teach AI how their business works. The battleground will shift from owning raw data to owning its interpretation. Off-the-shelf agents will struggle in complex domains because semantics are domain-specific. ... The AI-native enterprise looks very different from what came before. It serves machine customers, treats context as critical infrastructure, and has the tools to escape decades of technical debt.
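
As an illustrative sketch of one core context-engineering task (token counting is crudely approximated by word count, and the relevance scores are assumed to come from retrieval and tools), the idea is to serve the highest-value snippets that fit the window rather than letting the prompt overfill and rot.

```python
def build_context(snippets: list[tuple[float, str]],
                  budget_tokens: int) -> str:
    """Assemble the highest-relevance snippets that fit the window.
    snippets: (relevance_score, text) pairs gathered from tools,
    conversation history, and retrieval."""
    chosen: list[str] = []
    used = 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = len(text.split())  # crude stand-in for a real tokenizer
        if used + cost > budget_tokens:
            continue  # drop lower-value context instead of overfilling
        chosen.append(text)
        used += cost
    return "\n\n".join(chosen)

prompt_context = build_context(
    [(0.9, "Customer tier: enterprise, renewal due Q2."),
     (0.2, "Full 40-page contract text ..."),
     (0.7, "Open ticket: latency regression in EU region.")],
    budget_tokens=50,
)
print(prompt_context)  # minimal but complete, within budget
```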


Microsegmentation: the unsung hero of cybersecurity (and why it should be your top priority)

Think of your network like an apartment building. You’ve got a locked front door — that’s your perimeter. But once someone gets inside, there’s no front desk checking IDs, no elevator security and the same outdated lock on every unit. An intruder can roam freely, entering any apartment they choose. Microsegmentation is the internal security system. It’s the keycard for the elevator, the camera in the hallway, the unique lock on your door. It’s what stops one compromised device from becoming a full-blown breach. ... OT environments are different. They’re often built on legacy systems, lack patching, and operate in real time. You can’t just drop an agent or reroute traffic without risking downtime. That’s why agencies need solutions that are agentless, software-defined and tailored to the unique constraints of OT. Otherwise, you’re only protecting half the house. ... Microsegmentation also plays a critical role in enabling zero trust. It enforces least privilege at the network level. It’s not just about who gets in; it’s about what they can touch once they’re inside. For agencies building toward zero trust, microsegmentation isn’t an afterthought. It’s a foundation. Despite all this, microsegmentation remains underutilized. According to TechTarget’s Enterprise Strategy Group, only 36% of organizations use it today, even though it’s foundational to zero trust. Why? Because 28% believe it’s too complex. But that perception is often rooted in outdated tooling.
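
In code terms, the core of microsegmentation is a default-deny, least-privilege check on east-west traffic. The sketch below uses hypothetical workload labels and ports to show how a compromised front end is kept from reaching the database directly.

```python
# Toy segment policy: default-deny, with explicit least-privilege allows.
# Workload labels and ports are hypothetical.
ALLOWED_FLOWS = {
    ("web-frontend", "order-api", 443),
    ("order-api", "orders-db", 5432),
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """East-west check: anything not explicitly allowed is denied, so a
    compromised workload cannot roam to arbitrary neighbours."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

assert is_allowed("web-frontend", "order-api", 443)
assert not is_allowed("web-frontend", "orders-db", 5432)  # lateral move blocked
```

This is the apartment-building keycard expressed as policy: each hop a workload is allowed to make is named, and everything else is the locked door.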


Beyond Chatbots: What Makes an AI Agent Truly Autonomous

Autonomous agents must retain and use context over time. Memory enables an agent to recall previous interactions, data, and decisions—allowing it to continue a process seamlessly without restarting each time. That persistence turns single exchanges into long-running workflows. In enterprise settings, it means an agent can track a contract review across multiple sessions or follow a complex support case without losing context. ... Traditional automation runs on fixed, rule-based workflows. Autonomous agents build and revise their own plans on the fly, adapting to results and feedback. This ability to plan dynamically—think, act, observe, and adjust—is what differentiates agentic AI from robotic process automation (RPA) or prompt chaining. In practice, an agent might be tasked with analyzing a set of contracts, then automatically decide how to proceed: extract key terms, assess risk, and summarize results. ... Resilient agents are designed to operate across models, retry failed actions, or launch sub-agents to handle specialized work—all within defined guardrails. That adaptability is what separates a proof of concept (POC) from a production-ready system. ... All the reasoning in the world means little if an agent can’t execute. Tools are what translate intelligence into impact. They’re the functions, APIs, and integrations that allow agents to interact with business systems—searching systems, generating documents, updating records, or triggering workflows across CRMs, ERPs, and analytics platforms.
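
Putting those pieces together, a skeletal think-act-observe loop might look like the sketch below, where the planner is a stub standing in for an LLM call and the tool names are hypothetical.

```python
from typing import Callable

# Hypothetical tool registry: the functions the agent may invoke.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_contracts": lambda q: f"3 contracts matching {q!r}",
    "summarize": lambda text: f"summary of: {text[:40]}",
}

def plan_next_step(goal: str,
                   observations: list[str]) -> tuple[str, str] | None:
    """Stub for the model's planning call: pick a tool and argument, or
    None when the goal is satisfied. A real agent would ask an LLM and
    revise the plan based on what it has observed so far."""
    if not observations:
        return ("search_contracts", goal)
    if len(observations) == 1:
        return ("summarize", observations[0])
    return None  # done

def run_agent(goal: str) -> list[str]:
    observations: list[str] = []
    while (step := plan_next_step(goal, observations)) is not None:
        tool, arg = step
        observations.append(TOOLS[tool](arg))  # act, then observe
    return observations

print(run_agent("liability clauses"))
```

The loop, not any single call, is what makes the system agentic: each observation feeds the next planning decision, and the tools are the only channel through which reasoning becomes action.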