Daily Tech Digest - January 07, 2026


Quote for the day:

“If you're not prepared to be wrong, you'll never come up with anything original.” -- Ken Robinson



Strategy is dying from learning lag, not market change

At first, you might think this is about being more agile, more innovative, or more aggressive. However, those are reactions, not solutions. The real shift is deeper: strategy no longer scales when the underlying assumptions expire too quickly. The advantage erodes because the environment moves faster than the organization’s ability to sense, understand and adapt to it. ... Strategic failure today is less about being wrong and more about staying wrong for too long. ... One way, and perhaps the only one, out of uncertainty is to learn faster and closer to where the actual signals appear. Learning, to me, is the disciplined updating of beliefs when new evidence arrives. Every decision is a prediction about how things will work. When reality proves you wrong, learning is how you fix that prediction. In a stable environment, you can afford to learn slowly. However, in unstable ones, like today’s, slow learning becomes existential. ... Organizations don’t fall behind all at once. They fall behind step by step: first in what they notice, then in how they interpret it, then in how long it takes to decide what to do and finally in how slowly they act. ... Strategy stalls not because people refuse to change, but because they can’t agree on the story beneath the change. They chased precision in interpretation when the real advantage would have come from running small tests to find out faster which interpretation is correct.


The new tech job doesn't require a degree. It starts in a data center

The answer won't be found in Silicon Valley or Data Center Alley. It's closer to home. Veterans, trade workers, and high school graduates not headed to college don't come through traditional pipelines, but they bring the right aptitude and mindset to the data center. Veterans have discipline and process-driven thinking that fits naturally into our operations — and for many, these roles offer a transition into a stable career. Someone who kept an aircraft carrier running knows what it means to manage infrastructure that can't fail. Many arrive with experience in related systems and are comfortable with shift work and high stakes. ... Young adults without college plans are often overlooked, but some excel in hands-on settings and just need an opportunity to prove it. Once they learn about a data center career and where it can take them, it becomes a chance to build a middle-class lifestyle close to home. ... Hiring nontraditional candidates is only the first step. What keeps them is a promotion track that works. After four weeks of hands-on and self-guided onboarding, techs can pursue certifications in battery backup systems, tower clearance, generator safety, and more. When qualified, they show it in the field and move up. This kind of investment has a ripple effect. A paycheck can lead to a mortgage and financial stability. And as techs move up or out, someone else steps in — maybe through a local program that appeared once your jobs did.


Automated data poisoning proposed as a solution for AI theft threat

The technique, created by researchers from universities in China and Singapore, is to inject plausible but false data into what’s known as a knowledge graph (KG) created by an AI operator. A knowledge graph holds the proprietary data used by the LLM. Injecting poisoned or adulterated data into a data system for protection against theft isn’t new. What’s new in this tool – dubbed AURA (Active Utility Reduction via Adulteration) – is that authorized users have a secret key that filters out the fake data so the LLM’s answer to a query is usable. If the knowledge graph is stolen, however, it’s unusable by the attacker unless they know the key, because the adulterants will be retrieved as context, causing deterioration in the LLM’s reasoning and leading to factually incorrect responses. The researchers say AURA degrades the performance of unauthorized systems to an accuracy of just 5.3%, while maintaining 100% fidelity for authorized users, with “negligible overhead,” defined as a maximum query latency increase of under 14%. ... As the use of AI spreads, CSOs have to remember that artificial intelligence and everything needed to make it work also make it much harder to recover from bad data being put into a system, Steinberg noted. ... “For now, many AI systems are being protected in similar manners to the ways we protected non-AI systems. That doesn’t yield the same level of protection, because if something goes wrong, it’s much harder to know if something bad has happened, and it’s harder to get rid of the implications of an attack.”
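
The paper's exact construction isn't reproduced here, but the core idea (adulterants that only key-holders can filter out of retrieved context) can be sketched with a keyed MAC. Everything below, including the key, the sample facts, and the `tag`/`retrieve` helpers, is illustrative rather than AURA's actual mechanism:

```python
import hashlib
import hmac

SECRET_KEY = b"authorized-user-key"  # hypothetical key shared only with authorized users

def tag(fact: str, key: bytes) -> str:
    """Keyed MAC over a genuine KG entry so authorized retrieval can recognize it."""
    return hmac.new(key, fact.encode(), hashlib.sha256).hexdigest()

# A knowledge graph mixing one genuine fact with a plausible adulterant.
genuine = "ProductX launch date is 2026-06-15"
kg = [
    {"fact": "ProductX launch date is 2026-03-01", "mac": "none"},  # poisoned entry
    {"fact": genuine, "mac": tag(genuine, SECRET_KEY)},             # genuine entry
]

def retrieve(entries, key=None):
    """Without the key, poisoned entries land in the LLM's context;
    with it, only entries whose MAC verifies survive."""
    if key is None:
        return [e["fact"] for e in entries]
    return [e["fact"] for e in entries if e["mac"] == tag(e["fact"], key)]
```

An attacker calling `retrieve(kg)` gets both the genuine fact and the adulterant, degrading the downstream answer, while `retrieve(kg, SECRET_KEY)` returns only the genuine entry.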


From Zero Trust to Cyber Resilience: Why Architecture Alone Will Not Protect Enterprises in 2026

The core challenge facing CISOs is not whether Zero Trust is implemented, but whether the organization can continue to operate when, inevitably, controls fail. Modern threat actors no longer focus exclusively on breaching defenses; they aim to disrupt operations, degrade trust, and extend business impact over time. In this context, architecture alone is insufficient. What enterprises require is cyber resilience: the ability to anticipate, withstand, recover from, and adapt to cyber disruption. ... Zero Trust answers the question “Who can access what?” Cyber resilience answers a more consequential one: “How quickly can the business recover when access controls are no longer the primary failure point?” ... Resilience engineering reframes cybersecurity as a property of complex socio-technical systems. In this model, failure is not an anomaly; it is an expected condition. The objective shifts from breach avoidance to disruption management. In practice, this means evolving from an assume breach mindset to an assume disruption operating model, one where systems, teams, and leadership are prepared to function under degraded conditions. ... To prepare for 2026, CISOs should: Treat cyber resilience as a continuous operating capability, not a project; Integrate cybersecurity with business continuity and crisis management; Train executives and board members through realistic disruption scenarios; and Invest in recovery validation, not just control deployment. 


Generative AI and the future of databases

The data is at the heart of your line-of-business application, but it is also changing all the time, and if you keep extracting the data into some other corpus it gets stale. You can view it as two approaches: replication or federation. Am I going to replicate out of the database to some other thing or am I going to federate into the database? ... engineers know how to write good SQL queries. Whether they know how to write good English-language descriptions of those SQL queries is a completely different matter, but let’s assume for a second we can or we can have AI do it for us. Then the AI can figure out which tool to call for the user request and then generate the parameters. There are some things to worry about in terms of security. How can you set the right secure parameters? What parameters are the LLM allowed to set versus not allowed to set? ... When you combine structured and unstructured data, the next step is that it’s not just about exact results but about the most relevant results. In this sense databases start to have some of the capabilities of search engines, which is about relevance and ranking, and what becomes important is almost like precision versus recall for information retrieval systems. But how do you make all of this happen? One key piece is vector indexing. ... AI search is a key attribute of an AI-native database. And the other key attribute is AI functions.
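
The "which parameters is the LLM allowed to set" question can be enforced with a simple allowlist-plus-pinning gate in front of the tool call. The tool name, parameter names, and tenant value below are hypothetical, a minimal sketch of the pattern rather than any particular database's API:

```python
# Hypothetical tool schema: which parameters the LLM may set, and which are
# pinned by the application and never LLM-controlled.
ALLOWED = {"search_orders": {"customer_name", "status", "limit"}}
PINNED = {"search_orders": {"tenant_id": "acme-corp"}}

def build_tool_call(tool: str, llm_params: dict) -> dict:
    """Reject any LLM-proposed parameter outside the allowlist, then merge in
    pinned values so the model can never override them."""
    extra = set(llm_params) - ALLOWED[tool]
    if extra:
        raise ValueError(f"LLM may not set: {sorted(extra)}")
    return {**llm_params, **PINNED[tool]}
```

A benign call like `build_tool_call("search_orders", {"status": "open"})` succeeds with the tenant pinned; an attempt to set `tenant_id` raises before any query runs.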


Cyber Risk Trends for 2026: Building Resilience, Not Just Defenses

On the defensive side, AI can accelerate detection and response, but tooling without guardrails will create fresh exposures. Your questions as a board should be: Where have we embedded AI in critical workflows? How do we assure the provenance and integrity of the data those models touch? Are we red-teaming our AI-enabled processes, not just our perimeter? ... Second, third-party ecosystems present an attack surface. The risk isn’t abstract: it’s a payroll provider outage that stops salaries, a logistics partner breach that stalls distribution, or a SaaS compromise that leaks your crown jewels. ... Third is quantum computing. Some will say it’s too early; some will say it’s too late. The pragmatic position is this: crypto agility is a business requirement now. Inventory where and how you use cryptography—applications, devices, certificates, key management, data at rest and in transit. Prioritize crown-jewel systems and long-lived data that must remain confidential for years. ... Fourth is the risk posed by geopolitics. We live in a more unstable world, and digital risk doesn’t respect borders. Conflicts spill into cyberspace, data sovereignty rules tighten, and critical components can become chokepoints overnight. ... We won’t repel every attack in 2026. But we can decide to bend rather than break. Resilience comes of age when it stops being a slogan and becomes a practiced capability—where governance, operations, technology, and people move as one.


Will there be a technology policy epiphany in 2026?

The UK government still seems implacably opposed to bringing forward any cross-sector, comprehensive AI legislation. Its one-liner in the 2024 King’s Speech said the government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” That seemed sparing at the time, and now seems extraordinarily overblown. ... Turning to crypto-asset regulation, 2026 will continue the journey from draft legislation being published on 15 December last year through to 25 October 2027 (yes, that’s meant to say 2027) for the current “go live” date. Already we have seen some definitional clarification and the arrival of new provisions related to market abuse, public offers and disclosures. ... A critical thread to all of this is cyber. The Cyber Security Bill receives its second reading in the Commons today, 6 January. I’m very much looking forward to the bill arriving in the Lords later in the spring and would welcome your thoughts on what’s in and what currently is not. If that wasn’t enough for week one of 2026, we have the committee stage of the Crime and Policing Bill in the Lords tomorrow, Wednesday 7 January. ... By contrast, there is much chat on digital ID. A consultation is said to be coming this month with a draft bill in May’s speech. This has hardly been helped by the government last year hanging its digital ID coat all around illegal immigration - a more than unfortunate decision.


The Big Shift: Five Trends Show Why 2026 is About Getting to Value

The conversation shifts from “What can this AI do?” to “What problem does it solve, and how much value does it unlock?”—and the technology that wins won’t be the most sophisticated, but the one that directly accelerates revenue, reduces friction in customer-facing workflows, or demonstrably improves employee productivity within a 12-month payback window. Crawford says this is “getting back to brass tacks.” “Organizations will carefully define their business objectives, whether customer engagement, revenue growth, employee productivity, or whatever it needs to be, before selecting a technology,” he says. ... In 2026, if your digital transformation project can’t demonstrate meaningful return within twelve months, it competes for oxygen with projects that can, and many won’t survive that fight, Batista says. This compression of payback expectations reflects a fundamental shift in how CFOs and boards view technology investments. Initiatives based on regulatory or compliance requirements—things mandated by law, for example—still justify longer timelines, but discretionary projects face much stricter scrutiny, Batista says. ... When it comes to limiting factors in scaling successful AI deployments, Crawford says the top issue will be failures in AI governance. “AI governance will be the bottleneck that constrains an enterprise’s ability to scale AI, not AI capability itself. And enterprises rushing to deploy autonomous agents without governance infrastructure will face either painful reworks or serious operational issues.”


Why CES 2026 Signals The End Of ‘AI As A Tool’

The idea of AI as a coordinating layer or “ambient background” across entire ecosystems of tools and devices was also prominent this year. Samsung outlined its vision of AI companions for everyday life, demonstrating how smart appliances will form an intelligent background fabric to our day-to-day activities. As well as in the home, Samsung is a key player in industrial technology, where the same principle will see AI coordinating and optimizing operations across smart, connected enterprise systems. ... First, it’s clear that today’s leading manufacturers and developers believe that the future of AI lies in agentic, always-on systems, rather than free-standing, isolated tools and applications. Just as consumer AI now coordinates home and entertainment technology, enterprise AI will orchestrate workflows, schedules, documents, data and codebases, anticipating business needs and proactively solving problems before they occur. Another thing that can’t be overlooked is that consumer technology clearly shapes our expectations and tolerances of enterprise technology. Workplace AI that doesn’t live up to the seamless, friction-free experiences provided by consumer AI will quickly cause frustration, limiting adoption and buy-in. ... As this AI infrastructure becomes more capable, the role of employees will shift, too, from executing routine tasks to supervising automated processes, as well as applying uniquely human skills to challenges that machines still can’t tackle. 


Build Resilient cloudops That Shrug Off 99.95% Outages

If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration. We encode risk controls in Terraform so they’re enforced before a resource even exists. Tagging, encryption, backup retention, network egress—these are all policy. We don’t rely on code reviews to catch missing encryption on a bucket; the pipeline fails the plan. That’s how cloudops scales across teams without nag threads. ... If you’re starting from scratch, standardize on OpenTelemetry libraries for services and send everything through a collector so you can change backends without code churn. Sampling should be responsive to pain—raise trace sampling when p95 latency jumps or error rates spike. Reducing cardinality in labels (looking at you, per-user IDs) will keep storage and costs sane. Most teams benefit from a small set of “stop asking, here it is” dashboards: request volume and latency by endpoint, error rate by version, resource saturation by service, and database health with connection pools and slow query counts. ... We don’t win medals for shipping fast; we win trust for shipping safely. Progressive delivery lets us test the actual change, in production, on a small slice before we blast everyone. We like canaries and feature flags together: canary catches systemic issues; flags let us disable risky code paths within a version. ... Reliability with no cost controls is just a nicer way to miss your margin. We give cost the same respect as latency: we define a monthly budget per product and a change budget per release.
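
The "sampling should be responsive to pain" policy can be sketched in a few lines. The thresholds (1.5x baseline, 1%/50% rates) are illustrative, and a real deployment would typically configure this in the OpenTelemetry collector's sampler rather than in application code:

```python
from statistics import quantiles

def p95(latencies_ms):
    """95th percentile of a latency sample (needs at least two data points)."""
    return quantiles(latencies_ms, n=20)[-1]

def trace_sample_rate(latencies_ms, baseline_p95_ms,
                      base_rate=0.01, spike_rate=0.5):
    """Sample 1% of traces in steady state; jump to 50% when the observed
    p95 runs more than 1.5x over the baseline, so incidents are well-traced."""
    return spike_rate if p95(latencies_ms) > 1.5 * baseline_p95_ms else base_rate
```

During a calm window the function returns the cheap base rate; during a p95 spike it returns the elevated rate, which is exactly when the extra traces pay for themselves.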

Daily Tech Digest - January 06, 2026


Quote for the day:

"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera



Data 2026 outlook: The rise of semantic spheres of influence

While data started garnering attention last year, AI and agents continued to suck up the oxygen. Why the urgency of agents? Maybe it’s “fear of missing out.” Or maybe there’s a more rational explanation. According to Amazon Web Services Inc. CEO Matt Garman, agents are the technology that will finally make AI investments pay off. Go to the 12-minute mark in his recent AWS re:Invent conference keynote, and you’ll hear him say just that. But are agents yet ready for prime time? ... And of course, no discussion of agentic interaction with databases is complete without mention of Model Context Protocol. The open-source MCP framework, which Anthropic PBC recently donated to the Linux Foundation, came out of nowhere over the past year to become the de facto standard for how AI models connect with data. ... There were early advances for extending governance to unstructured data, primarily documents. IBM watsonx.governance introduced a capability for curating unstructured data that transforms documents and enriches them by assigning classifications, data classes and business terms to prepare them for retrieval-augmented generation, or RAG. ... But for most organizations lacking deep skills or rigorous enterprise architecture practices, the starting point for defining semantics is going straight to the sources: enterprise applications and/or, alternatively, the newer breed of data catalogs that are branching out from their original missions of locating and/or providing the points of enforcement for data governance. In most organizations, the solution is not going to be either-or.


Engineering Speed at Scale — Architectural Lessons from Sub-100-ms APIs

Speed shapes perception long before it shapes metrics. Users don’t measure latency with stopwatches - they feel it. The difference between a 120 ms checkout step and an 80 ms one is invisible to the naked eye, yet emotionally it becomes the difference between "smooth" and "slightly annoying". ... In high-throughput platforms, latency amplifies. If a service adds 30 ms in normal conditions, it might add 60 ms during peak load, then 120 ms when a downstream dependency wobbles. Latency doesn’t degrade gracefully; it compounds. ... A helpful way to see this is through a "latency budget". Instead of thinking about performance as a single number - say, "API must respond in under 100 ms" - modern teams break it down across the entire request path: 10 ms at the edge; 5 ms for routing; 30 ms for application logic; 40 ms for data access; and 10–15 ms for network hops and jitter. Each layer is allocated a slice of the total budget. This transforms latency from an abstract target into a concrete architectural constraint. Suddenly, trade-offs become clearer: "If we add feature X in the service layer, what do we remove or optimize so we don’t blow the budget?" These conversations - technical, cultural, and organizational - are where fast systems are born. ... Engineering for low latency is really engineering for predictability. Fast systems aren’t built through micro-optimizations - they’re built through a series of deliberate, layered decisions that minimize uncertainty and keep tail latency under control.
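
The latency-budget breakdown above translates directly into a check a team can run against measured per-layer timings. The slice values are the ones quoted in the text (taking the top of the 10–15 ms network range); the function itself is a minimal sketch:

```python
# Per-layer slices from the 100 ms budget described above (all in ms).
BUDGET_MS = {"edge": 10, "routing": 5, "app_logic": 30,
             "data_access": 40, "network": 15}
TOTAL_MS = 100

def blown_slices(measured_ms: dict) -> list[str]:
    """Name every layer that exceeded its slice, plus TOTAL if the
    end-to-end budget is blown even though each slice individually held."""
    over = [layer for layer, ms in measured_ms.items() if ms > BUDGET_MS[layer]]
    if sum(measured_ms.values()) > TOTAL_MS:
        over.append("TOTAL")
    return over
```

This is what turns "API must respond in under 100 ms" into a concrete architectural constraint: when `data_access` comes back in the report, the trade-off conversation ("what do we remove or optimize?") has a specific owner.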


Everything you need to know about FLOPs

A FLOP is a single floating‑point operation, meaning one arithmetic calculation (add, subtract, multiply, or divide) on numbers that have decimals. Compute benchmarking is done in floating point/fractional rather than integer/whole numbers because floating point is a far more accurate measure than integers. A prefix is added to FLOPs to measure how many are performed in a second, starting with mega- (millions), then giga- (billions), tera- (trillions), peta- (quadrillions), and now exaFLOPs (quintillions). ... Floating point in computing starts at FP4, or 4 bits of floating point, and doubles all the way to FP64. There is a theoretical FP128, but it is never used as a measure. FP64 is also referred to as double-precision floating-point format, a 64-bit standard under IEEE 754 for representing real numbers with high accuracy. ... With petaFLOPS and exaFLOPS becoming marketing terms, some hardware vendors have been less than scrupulous in disclosing what level of floating-point operation their benchmarks use. It’s not uncommon for a company to promote exascale performance and then say in the fine print that they’re talking about FP8, according to Snell. “It used to be if someone said exaFLOP, you could be pretty confident that they meant exaFLOP according to 64-bit scientific computing, but not anymore, especially in the field of AI, you need to look at what’s going on behind that FLOP,” said Snell.
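
The prefix ladder is just powers of ten, which a few lines make concrete. Note that converting between prefixes says nothing about precision: an "exaFLOP" quoted at FP8 is not comparable to one at FP64, which is exactly the fine-print problem Snell describes.

```python
# SI-style prefixes used for FLOPS figures (operations per second).
PREFIX = {"mega": 1e6, "giga": 1e9, "tera": 1e12, "peta": 1e15, "exa": 1e18}

def flops(value: float, prefix: str) -> float:
    """Expand a prefixed FLOPS figure into raw operations per second."""
    return value * PREFIX[prefix]

# 1 exaFLOPS is 1,000 petaFLOPS, or a quintillion operations per second.
assert flops(1.0, "exa") == 1000 * flops(1.0, "peta")
```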


From SBOM to AI BOM: Rethinking supply chain security for AI native software

An effective AI BOM is not a static document generated at release time. It is a lifecycle artifact that evolves alongside the system. At ingestion, it records dataset sources, classifications, licensing constraints, and approval status. During training or fine-tuning, it captures model lineage, parameter changes, evaluation results, and known limitations. At deployment, it documents inference endpoints, identity and access controls, monitoring hooks, and downstream integrations. Over time, it reflects retraining events, drift signals, and retirement decisions. Crucially, each element is tied to ownership. Someone approved the data. Someone selected the base model. Someone accepted the residual risk. This mirrors how mature organizations already think about code and infrastructure, but extends that discipline to AI components that have historically been treated as experimental or opaque. To move from theory to practice, I encourage teams to treat the AI BOM as a “Digital Bill of Lading,” a chain-of-custody record that travels with the artifact and proves what it is, where it came from, and who approved it. The most resilient operations cryptographically sign every model checkpoint and the hash of every dataset. By enforcing this chain of custody, they’ve transitioned from forensic guessing to surgical precision. When a researcher identifies a bias or security flaw in a specific open-source dataset, an organization with a mature AI BOM can instantly identify every downstream product affected by that “raw material” and act within hours, not weeks.
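
The "flawed dataset to affected products" lookup hinges on recording a content hash per dataset in each BOM row. The product names, corpora, and approver fields below are hypothetical; a production system would also sign each row, as the text describes:

```python
import hashlib

def dataset_hash(data: bytes) -> str:
    """Content hash recorded in the AI BOM for each training dataset."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical BOM rows: each release records the hash of its training data
# plus the named owner who approved it.
ai_bom = [
    {"product": "chatbot-v2",  "dataset": dataset_hash(b"corpus-A"), "approved_by": "alice"},
    {"product": "search-rank", "dataset": dataset_hash(b"corpus-B"), "approved_by": "bob"},
    {"product": "chatbot-v3",  "dataset": dataset_hash(b"corpus-A"), "approved_by": "alice"},
]

def affected_products(bom, flawed_dataset: bytes) -> list[str]:
    """A reported flaw in a dataset maps to every downstream product in one query."""
    h = dataset_hash(flawed_dataset)
    return [row["product"] for row in bom if row["dataset"] == h]
```

When "corpus-A" is reported as flawed, the query returns both chatbot releases built from it: forensic guessing replaced by a hash lookup.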


Beyond the Firehose: Operationalizing Threat Intelligence for Effective SecOps

Effective operationalization doesn't happen by accident. It requires a structured approach that aligns intelligence gathering with business risks. A framework for operationalizing threat intelligence structures the process from raw data to actionable defence, involving key stages like collection, processing, analysis, and dissemination, often using models like MITRE ATT&CK and Cyber Kill Chain. It transforms generic threat info into relevant insights for your organization by enriching alerts, automating workflows (via SOAR), enabling proactive threat hunting, and integrating intelligence into tools like SIEM/EDR to improve incident response and build a more proactive security posture. ... As intel maturity develops, the framework continuously incorporates feedback mechanisms to refine and adapt to the evolving threat environment. Cross-departmental collaboration is vital, enabling effective information sharing and coordinated response capabilities. The framework also emphasizes contextual integration, allowing organizations to prioritize threats based on their specific impact potential and relevance to critical assets. This ultimately drives more informed security decisions. ... Operationalization should be regarded as an ongoing process rather than a linear progression. If intelligence feeds result in an excessive number of false positives that overwhelm Tier 1 analysts, this indicates a failure in operationalization. It is imperative to institute a formal feedback mechanism from the Security Operations Center to the Intelligence team.


Compliance vs. Creativity: Why Security Needs Both Rule Books and Rebels

One of the most common tensions in the SOC arises from mismatched expectations. Compliance officers focus on control documentation when security teams are focusing on operational signals. For example, a policy may require multi-factor authentication (MFA), but if the system doesn’t generate alerts on MFA fatigue or unusual login patterns, attackers can slip past controls without detection. It’s important to also remember that just because something’s written in a policy doesn’t mean it’s being protected. A control isn’t a detection. It only matters if it shows up in the data. Security teams need to make sure that every big control, like MFA, logging, or encryption, has a signal that tells them when it’s being misused, misconfigured, or ignored. ... In a modern SOC, competing priorities are expected. Analysts want manageable alert volumes, red teams want room to experiment, and managers need to show compliance is covered. And at the top, CISOs need metrics that make sense to the board. However, high-performing teams aren’t the ones that ignore these differences. They, again, focus on alignment. ... The most effective security programs don’t rely solely on rigid policy or unrestricted innovation. They recognize that compliance offers the framework for repeatable success, while creativity uncovers gaps and adapts to evolving threats. When organizations enable both, they move beyond checklist security. 
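
The MFA-fatigue example above is a case where the control exists but the signal may not. A minimal sketch of that missing signal, with the event shape and threshold invented for illustration:

```python
from collections import Counter

def mfa_fatigue_alerts(push_events, threshold=5):
    """The MFA control exists either way; this is the *signal* that it is
    under attack: users racking up denied push prompts in a review window."""
    denials = Counter(e["user"] for e in push_events if e["result"] == "denied")
    return sorted(user for user, n in denials.items() if n >= threshold)
```

Feeding a window of authentication events through this yields the users an attacker is hammering with push requests, which is the alert the policy document alone never generates.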


AI governance through controlled autonomy and guarded freedom

Controlled autonomy in AI governance refers to granting AI systems and their development teams a defined level of independence within clear, pre-established boundaries. The organization sets specific guidelines, standards and checkpoints, allowing AI initiatives to progress without micromanagement but still within a tightly regulated framework. The autonomy is “controlled” in the sense that all activities are subject to oversight, periodic review and strict adherence to organizational policies. ... In practice, controlled autonomy might involve delegated decision-making authority to AI project teams, but with mandatory compliance to risk assessment protocols, ethical guidelines and regulatory requirements. For example, an organization may allow its AI team to choose algorithms and data sources, but require regular reports and audits to ensure transparency and accountability. Automated systems may operate independently, yet their outputs are monitored for biases, errors or security vulnerabilities. ... Deciding between controlled autonomy and guarded freedom in AI governance largely depends on the nature of the enterprise, its industry and the specific risks involved. Controlled autonomy is best suited for sectors where regulatory compliance and risk mitigation are paramount, such as banking, healthcare or government services. ... Both controlled autonomy and guarded freedom offer valuable frameworks for AI governance, each with distinct strengths and potential drawbacks. 


The 20% that drives 80%: Uncovering the secrets of organisational excellence

There are striking universalities in what truly drives impact. The first, which all three prioritise, is the belief that employee experience is inseparable from customer experience. Whether it is called EX = CX or framed differently, the sharp focus on making the workplace purposeful and engaging is foundational. Each business does this in a unique way, but the intent is the same: great employee experience leads to great customer experience. ... The second constant is an unwavering drive for business excellence. This is a nuanced but powerful 20% that shapes 80% of outcomes. Take McDonald’s, for instance: the consistency of quality and service, whether you are in Singapore, India, Japan or the US, is remarkable. Even as we localise, the core excellence remains unchanged. The same is true for Google, where the reliability of Search and breakthroughs in AI define the brand, and for PepsiCo, where high standards across foods and beverages define the brand.  ... The third—and perhaps most challenging—is connectedness. For giants of this scale, fostering deep connections across global, regional and country boundaries, and within and across teams, is crucial. It is about psychological safety, collaboration, and creating space for people to connect and recognise each other. This focus on connectedness enables the other two priorities to flourish. If organisations keep these three at the heart of their practice, they remain agile, resilient, and, as I like to put it, the giants keep dancing.


Turning plain language into firewall rules

A central feature of the design is an intermediate representation that captures firewall policy intent in a vendor-agnostic format. This representation resembles a normalized rule record that includes the five-tuple plus additional metadata such as direction, logging, and scheduling. This layer separates intent from device syntax. Security teams can review the intermediate representation directly, since it reflects the policy request in structured form. Each field remains explicit and machine checkable. After the intermediate representation is built, the rest of the pipeline operates through deterministic logic. The current prototype includes a compiler that translates the representation into Palo Alto PAN-OS command-line configuration. The design supports additional firewall platforms through separate back-end modules. ... A vendor-specific linter applies rules tied to the target firewall platform. In the prototype, this includes checks related to PAN-OS constraints, zone usage, and service definitions. These checks surface warnings that operators can review. A separate safety gate enforces high-level security constraints. This component evaluates whether a policy meets baseline expectations such as defined sources, destinations, zones, and protocols. Policies that fail these checks stop at this stage. After compilation, the system runs the generated configuration through a Batfish-based simulator. The simulator validates syntax and object references against a synthetic device model. Results appear as warnings and errors for inspection.
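
The intermediate representation and safety gate described above can be sketched as a typed record plus deterministic checks. The field names and the two baseline rules here are illustrative, not the prototype's actual schema or rule set:

```python
from dataclasses import dataclass

@dataclass
class FirewallIntent:
    """Vendor-agnostic IR: the five-tuple plus metadata beyond it."""
    src: str
    dst: str
    protocol: str
    src_port: str
    dst_port: str
    action: str = "allow"
    direction: str = "inbound"
    log: bool = True

def safety_gate(rule: FirewallIntent) -> list[str]:
    """Baseline checks a policy must pass before compilation to device syntax."""
    errors = []
    if rule.action == "allow" and rule.src == "any" and rule.dst == "any":
        errors.append("any-to-any allow is forbidden")
    if rule.protocol not in {"tcp", "udp", "icmp"}:
        errors.append(f"undefined protocol: {rule.protocol}")
    return errors
```

Because every field is explicit and machine checkable, a reviewer can read the record directly, and a rule that fails the gate stops before any vendor back end ever sees it.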


Why cybersecurity needs to focus more on investigation and less on just detection and response

The real issue? Many of today’s most dangerous threats are the ones that don’t show up easily on detection radars. Think about the advanced persistent threats (APTs) that remain hidden for months or the zero-day attacks that exploit vulnerabilities no one even knew existed. These threats may slip right past the detection systems because they don’t act in obvious ways. That’s why, in these cases, detection alone isn’t enough. It’s just the first step. ... Think of investigation as the part where you understand the full story. It’s like detective work: not just looking at the footprints, but figuring out where they came from, who’s leaving them, and why they’re trying to break in in the first place. You can’t stop a cyberattack with detection alone if you don’t understand what caused it or how it worked. And if you don’t know the cause, you can’t appropriately respond to the detected threat. ... The cost of neglecting investigation goes beyond just missing a threat. It’s about missed opportunities for learning and growth. Every attack offers a lesson. By investigating the full scope of a breach, you gain insights that not only help in responding to that incident but also prepare you to defend against future ones. It’s about building resilience, not just reaction. Think about it: If you never investigate an incident thoroughly, you’re essentially ignoring the underlying risk that allowed the threat to flourish. You might fix the hole that was exploited, but you won’t have a clear understanding of why it was there in the first place. 

Daily Tech Digest - January 05, 2026


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe



How to make AI agents reliable

Easier said than done. After all, the way genAI works, we’re trying to build deterministic software on top of probabilistic models. Large language models (LLMs), cool though they may be, are non-deterministic by nature. Chaining them together into autonomous loops amplifies that randomness. If you have a model that is 90% accurate, and you ask it to perform a five-step chain of reasoning, your total system accuracy drops to roughly 59%. That isn’t an enterprise application; it’s a coin toss—and that coin toss can cost you. Whereas a coding assistant can suggest a bad function, an agent can actually take a bad action. ... Breunig highlights “context poisoning” as a major reliability killer, where an agent gets confused by its own history or irrelevant data. We tend to treat the context window like a magical, infinite scratchpad. It isn’t. It is a database of the agent’s current state. If you fill that database with garbage (unstructured logs, hallucinated prior turns, or unauthorized data), you get garbage out. ... Finally, we need to talk about the user. One reason Breunig cites for the failure of internal agent pilots is that employees simply don’t like using them. A big part of this is what I call the rebellion against robot drivel. When we try to replace human workflows with fully autonomous agents, we often end up with verbose, hedging, soulless text, and it’s increasingly obvious to the recipient that AI wrote it, not you. And if you can’t be bothered to write it, why should they bother to read it?
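
The accuracy-compounding claim is simple independence arithmetic, worth making explicit because the drop-off surprises people:

```python
def chain_accuracy(step_accuracy: float, steps: int) -> float:
    """If each step succeeds independently with probability p,
    the whole chain succeeds with probability p ** steps."""
    return step_accuracy ** steps

# The figure from the text: a 90%-accurate model over five chained steps.
assert round(chain_accuracy(0.90, 5), 2) == 0.59
```

The same arithmetic explains why shortening chains, or inserting deterministic validation between steps, does more for reliability than marginally improving per-step accuracy.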


Three Cybersecurity predictions that will define the CISO agenda in 2026

Different tools report different versions of “critical” risk. One team escalates an issue while another deprioritises it based on alternative scoring models. Decisions become subjective, slow and inconsistent without a coherent strategy - and critical attack paths remain open. If cyber risk is not presented consistently in the context of business impact, it’s nearly impossible to align cybersecurity with broader business objectives. In 2026, leaders will no longer tolerate this ambiguity. Boards and executives don’t want more dashboards. ... Social engineering campaigns are already more convincing, more personalised and harder for users to detect. Messages sound legitimate. Voices and content appear authentic. The line between real and fake is blurring at scale. In 2026, mature organisations will take a more disciplined approach. They will map AI initiatives to business objectives, identify which revenue streams and operational processes depend on them, and quantify the value at risk. This allows CISOs to demonstrate where existing investments meaningfully reduce exposure — and where they don’t — while maintaining operational integrity and trust. ... AI agents will take over high-volume, repetitive tasks — continuously analysing vast streams of telemetry, correlating signals across environments, and surfacing the handful of risks that truly matter. They will identify the needle in the haystack. Humans will remain firmly in the loop. 


The Hidden Costs of Silent Technology Failures

"Most CIOs see failures as negative experiences that undermine their credibility, effectiveness and ultimate growth within the organization," Koeppel said. Under those conditions, escalation is rationally delayed. CIOs attempt recovery first, including new baseline plans, renegotiations of vendor commitments and a narrower scope before formally declaring failure. ... CIOs, Dunkin noted, frequently underplay failure to shield their teams from blame. Few leaders want finger-pointing to cascade through already strained organizations. But Dunkin pointed out that the same instincts are shaped by fear of job loss, budget erosion or internal power shifts. And, she warns, bad news does not age well. Beyond politics and incentives, decision-making psychology compounds the problem. Jim Anderson, founder of Blue Elephant Consulting, describes how sunk-cost bias distorts executive judgment. Admitting a mistake publicly opens leaders to criticism, so past decisions are defended rather than reassessed. ... But not all organizations respond this way. Koeppel said that in his experience, boards and CEOs are receptive to clear, concise explanations when technology initiatives deviate from plan. Over time, disclosure improves because consequences change. Sethi described the shift to openness that followed a major outage in one organization. It resulted in mandatory, blameless post-mortem reviews that focused on systemic and process breakdowns rather than individual fault.


2026 Low-Code/No-Code Predictions

The promise of low-code platforms will finally materialize by the end of 2026. AI will let business users create bespoke applications without writing code, while professional developers guide standards, security, and integration. The line between "developer" and "user" will blur as agentic systems become part of daily work. ... No code's extinction: No code's on its last legs — it's being snuffed out by vibe coding. AI-driven development tools will be the final death knell for no code as we know it, with its remit curtailed in this new coding landscape. In this future, the focus will transition entirely to model orchestration and high-level knowledge work, where humans express their intent and expertise through abstract models rather than explicit code. The human role becomes centered on the plan to build: specifically, ensuring the problem is correctly scoped and defined. ... In 2026, low-code/no-code interfaces will rapidly shift from drag-and-drop canvases to natural language interfaces, as user expectations adapt to the changing landscape. As this transition occurs, application vendors will struggle to provide transparency into how the application has interpreted the users' intent. ... While it's proved remarkably effective at supercharging development speed and allowing non-technical individuals to produce functional software, its outputs are less than perfect. This year, we've continued to uncover that much of AI-generated code turns out fragile or flat-out wrong once it faces real workflows or customers.


AI security risks are also cultural and developmental

The research shows that AI systems increasingly shape cultural expression, religious understanding, and historical narratives. Generative tools summarize belief systems, reproduce artistic styles, and simulate cultural symbols at scale. Errors in these representations influence trust and behavior. Communities misrepresented by AI outputs disengage from digital systems or challenge their legitimacy. In political or conflict settings, distorted cultural narratives contribute to disinformation, polarization, and identity-based targeting. Security teams working on information integrity and influence operations encounter these risks directly. The study positions cultural misrepresentation as a structural condition that adversaries exploit rather than an abstract ethics issue. ... Systems designed with assumptions of reliable connectivity or standardized data pipelines fail in regions where those conditions do not hold. Healthcare, education, and public service applications show measurable performance drops when deployed outside their original development context. These failures expose organizations to cascading risks. Decision support tools generate flawed outputs. Automated services exclude segments of the population. Security monitoring systems miss signals embedded in local language or behavior. ... Models operate on statistical patterns and lack awareness of missing data. Cultural knowledge, minority histories, and local practices often remain absent from training sets. This limitation affects detection accuracy. 


The Board’s Duty in the Age of the Black Box

Today, when this Board approves the acquisition of a Generative AI startup or authorizes a billion-dollar investment in GPU infrastructure, you are acquiring a Black Box. You are purchasing a system defined not by logical rules, but by billions of specific weights, biases, and probabilistic outcomes. These systems are inherently unstable; they “hallucinate,” they drift, and they contain latent biases that no static audit can fully reveal. They are closer to biological organisms than to traditional software. ... Critics may argue that applying financial volatility models to operational AI risk is a conceptual leap. There is no perfect mathematical bridge between “Model Drift” and “WACC” (Weighted Average Cost of Capital). However, in the absence of a liquid market for “Algorithm Liability Insurance” or standardized auditing protocols, the Board must rely on empirical proxies to gauge risk. ... The single largest destroyer of capital in the current AI cycle is the misidentification of a “Wrapper” as a “Moat.” The Board must rigorously interrogate the strategic durability of the asset. ... The Risk Committee’s role is shifting from passive monitoring to active defense. The risks associated with AI are “Fat-Tailed”—meaning that while day-to-day operations might be smooth, the rare failure modes are catastrophic. ... For the Chief Information Officer (CIO), the concept of “Model Risk” translates directly into operational reality. It is critical to differentiate between “Valuation Risk” and “Maintenance Cost.”


Cybersecurity leaders’ resolutions for 2026

Any new initiative will start with a clear architectural plan and a deep understanding of end-to-end dependencies and potential points of failure. “By taking a thoughtful, engineering-driven approach — rather than reacting to outages or disruptions — we aim to strengthen the stability, scalability, and reliability of our systems,” he says. “This foundation enables the business to move with confidence, knowing our technology and security investments are built to endure and evolve.” ... As new attack surfaces emerge with AI-driven applications and systems, Piekarski’s priorities will focus on defending and hardening the environment against AI-enabled threats and tactics.  ... In practice, SaaS management and discovery tools will be used to get a handle on shadow IT and unsanctioned AI usage. Automation for compliance and reporting will be important as customer and regulatory requirements around ESG and security continue to grow, along with threat intelligence feeds and vulnerability management solutions that help Gallagher and the team stay ahead of what’s happening in the wild. “The common thread is visibility and control; we need to know what’s in our environment, how it’s being used, and that we can respond quickly when things change,” he tells CSO. ... “Quantum computing poses significant cyber risks by potentially breaking current encryption methods, impacting data security, and enabling new attack vectors,” says Piekarski.


Enterprise Digital Twin: Why Your AI Doesn’t Understand Your Organization

Agentic AI systems are moving from research papers to production pilots, taking critical business actions such as processing invoices, scheduling meetings, drafting communications, and coordinating workflows across teams. They operate with increasing autonomy. When an agent misunderstands organizational context, it does not just give a wrong answer. It takes wrong actions, such as approving expenses that violate policy, scheduling meetings with people who should not be in the room, routing decisions to the wrong authority, and creating compliance exposure at machine speed. The industry is catching up to this reality. ... An AI system reviewing a staffing request might confirm that the budget exists, the policy allows the hire, and the hiring manager has authority. All technically correct. But without Constraint Topology, the system does not know that HR cannot process new hires until Q2 due to a systems migration, that the only approved vendor for background checks has a six-week backlog, or that three other departments have competing requisitions for the same job grade and only two can be filled this quarter. ... Most AI frameworks focus on making models smarter. CTRS focuses on making organizations faster. Technically correct outputs that do not translate into action are not actually useful. The bottleneck is not AI capability. It is the distance between what AI recommends and what the organization can execute.
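The staffing example above can be sketched as a constraint check. All names, fields, and constraints here are hypothetical illustrations of the idea: a request can pass every static rule (budget, policy, authority) and still be blocked by live organizational constraints the AI never sees:

```python
# Hypothetical sketch of a "constraint topology" check. A request that is
# valid against static rules can still be unexecutable against the
# organization's current constraints. All names are illustrative.

from dataclasses import dataclass

@dataclass
class StaffingRequest:
    budget_ok: bool
    policy_ok: bool
    authority_ok: bool
    earliest_quarter: int   # quarter in which the hire would be processed

ORG_CONSTRAINTS = [
    # (description, predicate that must hold for the request to proceed)
    ("HR systems migration blocks hiring before Q2",
     lambda r: r.earliest_quarter >= 2),
]

def can_execute(req: StaffingRequest) -> list:
    """Return the constraints that block an otherwise-valid request."""
    if not (req.budget_ok and req.policy_ok and req.authority_ok):
        return ["request is not even statically valid"]
    return [desc for desc, ok in ORG_CONSTRAINTS if not ok(req)]

# "All technically correct" -- yet the Q1 hire is blocked by the migration.
print(can_execute(StaffingRequest(True, True, True, earliest_quarter=1)))
```

A real system would source these constraints from live operational data (migrations, vendor backlogs, competing requisitions) rather than a hard-coded list; the point is that the check sits outside the static policy rules.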


The agentic infrastructure overhaul: 3 non-negotiable pillars for 2026

If 2025 was about the brain (the LLM), 2026 must be about the nervous system. You cannot bolt a self-correcting, multi-step agent onto a 2018 ERP and expect it to function. To move from isolated pilots to enterprise-wide autonomous workflows, we must overhaul our architectural blueprint. We are moving from a world of rigid, synchronous commands to a world of asynchronous, event-driven fluidity. ... We build dashboards with red and green lights so a DevOps engineer can identify a spike in latency. However, an AI agent cannot “look” at a Grafana dashboard. If an agent encounters an error mid-workflow, it needs to understand why in a format it can digest. ... Stop “bolting on” agents to legacy REST APIs. Instead, build an abstraction layer — an “agent gateway” — that converts synchronous legacy responses into asynchronous events that your agents can subscribe to. ... The old mantra was “Data is the new oil.” In 2026, data is just the raw material; metadata is the fuel. Businesses have spent millions “cleaning” data in Snowflake warehouses and data lakes, but clean data lacks the intent that agents require to make decisions. ... Invest in a data catalog that supports semantic tagging. Ensure your data engineers are not just moving rows and columns, but are defining the “meaning” of those rows in a way that is accessible via your RAG pipelines. ... The temptation in 2026 will be to build “bespoke” agents for every department — an HR agent, a finance agent, a sales agent. This is a recipe for a new kind of “shadow IT” and massive technical debt.
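The gateway idea can be sketched with the standard library alone. This is an illustrative toy, not a real product: a blocking legacy call is run off the event loop and its result is republished as an event that any number of agents can subscribe to:

```python
# Minimal "agent gateway" sketch: wrap a synchronous legacy call and fan
# the result out as asynchronous events. Names and payloads are illustrative.

import asyncio
import time

def legacy_erp_lookup(order_id: str) -> dict:
    """Stand-in for a slow, blocking legacy REST/ERP call."""
    time.sleep(0.05)  # simulate blocking I/O
    return {"order_id": order_id, "status": "shipped"}

class AgentGateway:
    def __init__(self) -> None:
        self.subscribers = []

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def request(self, order_id: str) -> None:
        # Run the blocking call off the event loop, then publish the result.
        result = await asyncio.to_thread(legacy_erp_lookup, order_id)
        for q in self.subscribers:
            await q.put({"event": "order.status", "payload": result})

async def main() -> None:
    gw = AgentGateway()
    inbox = gw.subscribe()           # an agent subscribes to gateway events
    await gw.request("A-123")
    event = await inbox.get()
    print(event["payload"]["status"])

asyncio.run(main())
```

In production this role is usually played by a message broker or event bus; the sketch only shows the shape of the abstraction: agents consume events, never the legacy API directly.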


The New Front Line Of Digital Trust: Deepfake Security

AI-generated deepfakes are ruining the way we perceive one another, as well as undermining institutions’ ways of ensuring identity, verifying intent and maintaining trust. For CISOs and IT security risk leaders, this is a new and pressing frontier for us to focus on: defending against attacks not on systems but on beliefs. ... Deepfakes are coming to the forefront just as CISOs have more risk to manage than ever. Here are some of the other key pressures driving the financial cybersecurity environment today: multicloud misconfigurations and API exposure; ransomware’s shift to triple extortion; expanding third-party and fourth-party dependencies; insider threats facing hybrid workforces; barriers to zero-trust implementation; and regulatory fragmentation. ... Deepfake security isn’t a fringe issue anymore; it’s now a foremost challenge to digital trust and systemic financial resilience. In today’s world, where synthetic voices can create markets and fake identities can trigger transactions, authenticity reigns as the currency of banking. Tomorrow’s front-runners will be those building the next-generation financial systems—secured, transparent and globally trusted. Those systems will include reconfigured trust frameworks, deepfake detection, AI governance that drives model integrity and a resilient-by-design approach. In this world, where anyone can create an AI-generated identity, the ultimate competitive differentiator is proving what’s real.

Daily Tech Digest - January 03, 2026


Quote for the day:

“Some people dream of great accomplishments, while others stay awake and do them.” -- Anonymous


Cloud costs now No. 2 expense at midsize IT companies behind labor

The Cloud Capital survey shows midsize IT vendor CFOs and their CIO partners struggling to contain cloud spending, with significant cost volatility from month to month. Three-quarters of IT org CFOs report cloud spending forecasts varying between 5% and 10% of company revenues each month, Pingry notes. Costs of AI workloads are harder to predict than traditional SaaS infrastructure, Pingry adds, and organizations running major AI workloads are more likely to report margin declines tied to cloud spending than those with moderate AI exposure. “Training spikes, usage-driven inference, and experimentation noise introduce non-linear patterns that break the forecasting assumptions finance relies on,” says a report from Cloud Capital. “The challenge will intensify as AI’s share of cloud spend continues scaling.” ... Cloud services in themselves aren’t inherently too expensive, but many organizations shoot themselves in the foot through unintentional consumption, Clark adds. “Costs rise when the system is built without a clear understanding of the value it is meant to deliver,” he adds. ... “No CxO wants to explain to the board why another company used AI to leap ahead,” Clark adds. “This has created a no-holds-barred spending spree on training, inference, and data movement, often layered on top of architectures that were already economically incoherent.”


Securing Integration of AI into OT Technology

For critical infrastructure owners and operators, the goal is to use AI to increase efficiency and productivity, enhance decision-making, save costs, and improve customer experience – much like digitalization. However, despite the many benefits, integrating AI into operational technology (OT) environments that manage essential public services also introduces significant risks – such as OT process models drifting over time or safety-process bypasses – that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure. ... Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle. ... Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration. ... Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance. ... Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans. The agencies said critical infrastructure owners and operators should review this guidance so they can safely and securely integrate AI into OT systems.


Rethinking Risk in a Connected World

As consumer behavior data proliferates and becomes increasingly available, it presents both an opportunity and a challenge for actuaries, Samuell says. Actuaries have the opportunity to better align expected and actual outcomes, while also facing the challenge of accounting for new sources of variability that traditional data does not capture. ... Keep in mind that incorporating behavioral factors into risk models does not guarantee certainty. A customer whom the model predicts to be at high risk of dishonesty may actually act honestly. “Ethical insurers must avoid treating predictive categories as definitive labels,” Samuell says. “Operational guidelines should ensure that all customers are treated with fairness and dignity, even as insurers make better use of available data.” ... Behavioral analytics is also changing how insurers engage with their customers. For example, by understanding how policyholders interact with digital platforms—including how often they log in, which features they use, and where they disengage—insurers can identify friction points and design more intuitive, personalized services. ... Consumer behavior data can also inform communication strategies for insurers. For example, “actuaries often want to be very precise, but data shows that can diminish comprehension of communications,” Stevenson says. ... In addition to data generated by insured individuals through technology, some insurance companies also use data from government and other sources in risk modeling. 


Inside the Cyber Extortion Boom: Phishing Gangs and Crime-as-a-Service Trends

Phishing attempts are growing in volume partly because organized crime groups no longer need technical knowledge to launch ransomware or other forms of cyber extortion: they can simply buy in the services they need. This ongoing trend is combined with emerging social engineering techniques, including multi-channel attacks, deep fakes and ClickFix exploits. Cybercriminals are also using AI to fine tune their operations, with more persuasive personalization, better translation into other languages and easier reconnaissance against high-value targets. It is becoming harder to detect and block attacks, and harder to train workforces to spot suspicious activity. ... “AI has increased the accuracy of a lot of phishing emails. Everybody was familiar with phishing emails you could spot it by the bad grammar and the poor formatting and stuff like that. Previously, a good attacker could create a good phishing email. All AI has done is allowed the attacker to generate good quality phishing emails at speed and at scale,” explained Richard Meeus, EMEA director of security strategy and technology at Akamai. ... For CISOs, wider cybersecurity and fraud prevention teams, recent developments in phishing and cyber extortion schemes will pose real challenges in the coming year. “User awareness still matters, but it isn’t enough,” cautioned Forescout’s Ferguson. “In a world of deepfake video, cloned voices and perfect written English, your control point can’t be ‘would our users spot this?’”


AI Fatigue: Is the backlash against AI already here?

The problem of AI fatigue is inevitable, but also to be expected, according to Dr Clare Walsh, director of education at the Institute of Analytics (IoA). “For those working in digital long enough, they know there is always a period after the initial excitement at the launch of a new technology when ordinary users start to see the costs and limitations of the latest technologies,” she says. “After 10 years of non-stop exciting advancements – from the first neural nets in 2016 to RAG solutions today – we may have forgotten this phase of disappointment was coming. It doesn’t negate the potential of AI technology – it is just an inevitable part of the adoption curve.” ... Holding back the tide of AI fatigue is also about not presenting it as the only solution to every problem, warns Claus Jepsen, Unit4’s CTO. “It is absolutely critical the IT team is asking the right questions and thoroughly interrogating the brief from the business,” he explains. “Quite often, AI is not the right answer. If you foist AI onto the business when they don’t want or need it, you’ll get a backlash. You can avoid the threat of AI fatigue if you listen carefully to your team and really appreciate how they want to interact with technology, where its use can be improved, and where it adds absolutely no value.” ... “AI fatigue is not just a productivity issue; it is a board-level risk,” she says. “When workflows are interrupted, or systems overlap, trust in technology erodes, driving disengagement, errors, and higher attrition. ...”


Why Cybersecurity Risk Management Will Continue to Increase in Complexity in 2026

The year 2026 ushers in tougher rules across regions and industries. Compliance pressure continues to build from multiple directions. By 2026, sector-specific and regional rules will grow tighter, from NIS2 enforcement across Europe to updated PCI DSS controls, alongside firmer privacy and AI oversight. Privacy laws continue tightening while new AI regulations add requirements around algorithmic transparency and data handling. Organizations are now juggling NIST frameworks, ISO 27001 certifications, and sector-specific mandates simultaneously. Each framework arrives with a valid intent, yet together they create layers of obligation that rarely align cleanly. This tension surfaced clearly in 2025, when more than forty CISOs from global enterprises urged the G7 and OECD to push for closer regulatory coordination. Their message was simple. Fragmented rules drain limited security resources and weaken collective response. ... The majority of organizations no longer run security in isolation. Daily operations depend on cloud providers, managed service partners, niche SaaS tools, and open-source libraries pulled into production without much ceremony. The problem keeps compounding: your vendors have their own vendors, creating chains of dependency that stretch impossibly far. You can secure your own network perfectly and still get breached because a third-party contractor left credentials exposed.


Seven steps to AI supply chain visibility — before a breach forces the issue

NIST’s AI Risk Management Framework, released in 2023, explicitly calls for AI-BOMs as part of its “Map” function, acknowledging that traditional software SBOMs don’t capture model-specific risks. But software dependencies resolve at build time and stay fixed. Conversely, model dependencies resolve at runtime, often fetching weights from HTTP endpoints during initialization, and mutate continuously through retraining, drift correction, and feedback loops. LoRA adapters modify weights without version control, making it impossible to track which model version is actually running in production. ... AI-BOMs are forensics, not firewalls. When ReversingLabs discovered nullifAI-compromised models, documented provenance would have immediately identified which organizations downloaded them. That’s invaluable to know for incident response, while being practically useless for prevention. Budgeting for protecting AI-BOMs needs to take that factor into account. The ML-BOM tooling ecosystem is maturing fast, but it's not where software SBOMs are yet. Tools like Syft and Trivy generate complete software inventories in minutes. ML-BOM tooling is earlier in that curve. Vendors are shipping solutions, but integration and automation still require additional steps and more effort. Organizations starting now may need manual processes to fill gaps. AI-BOMs won't stop model poisoning as that happens during training, often before an organization ever downloads the model.
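The forensic value of documented provenance comes down to recording a verifiable hash of the exact artifact that was downloaded. The sketch below shows that idea in miniature; the field names loosely echo CycloneDX conventions but are deliberately simplified, and real tooling emits far richer schemas:

```python
# Illustrative sketch of an ML-BOM entry recording model provenance.
# Field names are simplified approximations, not a real schema.

import hashlib
import json

def model_bom_entry(name: str, version: str, weights: bytes,
                    source_url: str) -> dict:
    """Build a minimal BOM record for a downloaded model artifact."""
    return {
        "type": "machine-learning-model",
        "name": name,
        "version": version,
        # Hash of the exact weight bytes: this is what later lets incident
        # responders answer "did we download the compromised artifact?"
        "hashes": [{"alg": "SHA-256",
                    "content": hashlib.sha256(weights).hexdigest()}],
        "externalReferences": [{"type": "distribution", "url": source_url}],
    }

entry = model_bom_entry("sentiment-model", "1.2.0", b"fake-weight-bytes",
                        "https://example.com/models/sentiment-1.2.0.bin")
print(json.dumps(entry, indent=2))
```

As the text notes, this is forensics rather than a firewall: the record identifies exposure after a compromise is disclosed, and runtime-fetched weights or LoRA adapters that bypass the recording step are exactly the gap such inventories struggle with.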


Power, compute, and sovereignty: Why India must build its own AI infrastructure in 2026

Digital infrastructure decisions made in 2026 will shape India’s technological posture well into the 2040s. Data centers, power systems, and AI platforms are not short-cycle investments; they are multi-decade commitments. In this context, policy clarity becomes a prerequisite for execution rather than an afterthought. Clear, stable frameworks around data governance, AI regulation, cross-border compute flows, and energy integration reduce long-term risk and enable infrastructure to be designed correctly the first time. Ambiguity forces fragmentation: capital hesitates, architectures become reactive, and systems are retrofitted instead of engineered. As India accelerates its AI ambitions, predictability in policy will be as important as speed in deployment. ... In India’s context, sovereignty does not imply isolation. It implies resilience. Compliance, data residency, and AI governance cannot be retrofitted into infrastructure after it is built. They must be embedded from inception, governing where data resides, how it moves, how workloads are isolated, audited, and secured, and how infrastructure responds to evolving regulatory expectations. Systems designed this way reduce friction for enterprises operating in regulated environments and provide governments with confidence in domestic digital capability. This reality also reframes the role of domestic technology firms.


Why AI Risk Visibility Is the Future of Enterprise Cybersecurity Strategy

Vulnerabilities arise from two sources: internal infrastructure and third-party tools that companies rely on. Organizations typically have stronger control over internally developed systems. The complexity stems from third-party software that introduces new risks whenever a new version or patch is released. A comprehensive asset inventory is essential for documenting the software and hardware resources in use. Once the enterprise knows what it has, it can evaluate which systems pose the highest risk. Asset management, infrastructure, and information security teams, along with audit functions, all contribute to that assessment. Together, they can determine where remediation must occur first. Cloud service providers are responsible for cloud-based Software as a Service (SaaS) applications. It’s vital, however, for the company to take on data governance and service offboarding responsibilities. Contracts must clearly specify how data is handled, transferred, or destroyed at the end of the relationship. ... Alignment between business and IT leadership is essential. The chief information officer (CIO) approves the IT project kickoff and allocates the required budget and other resources. The business analysis team translates those needs into technical requirements. Quarterly scorecards and governance checkpoints create visibility, enabling leaders to make decisions that balance business outcomes and technical realities.
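The prioritization step described above can be sketched as a simple scoring pass over the asset inventory. The fields, weights, and example assets below are entirely illustrative; real programs would score against CVSS data, exposure, and business context:

```python
# Toy sketch: turn an asset inventory into a remediation order by scoring
# each asset. All fields, weights, and assets here are illustrative only.

assets = [
    {"name": "payroll-db", "third_party": False, "unpatched_cves": 2, "criticality": 5},
    {"name": "saas-crm",   "third_party": True,  "unpatched_cves": 4, "criticality": 4},
    {"name": "wiki",       "third_party": False, "unpatched_cves": 1, "criticality": 1},
]

def risk_score(asset: dict) -> int:
    # Third-party software gets a bump, since each new release or patch
    # can introduce risk the organization does not directly control.
    base = asset["unpatched_cves"] * asset["criticality"]
    return base + (3 if asset["third_party"] else 0)

# Highest-scoring assets are remediated first.
remediation_order = sorted(assets, key=risk_score, reverse=True)
print([a["name"] for a in remediation_order])
```

The value of the exercise is less the specific formula than forcing asset management, infrastructure, security, and audit teams to agree on one ranking they all act on.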


Why are IT leaders optimistic about future AI governance

IT leaders are optimistic about AI’s transformative potential. This optimism extends to AI governance, where the strategic integration of NHI management enhances security and enables organizations to confidently pursue AI initiatives. It’s essential to ensure that security measures evolve alongside technological advancements, safeguarding AI systems without stifling innovation. ... Can robust security and innovation coexist harmoniously? The answer lies in striking a balance between rigorous security measures and fostering an environment conducive to innovation. Properly managing NHIs equips organizations with the flexibility to innovate while maintaining a fortified security posture. As artificial intelligence and automation advance, machine identities play an increasingly pivotal role in enabling these technologies. By ensuring that machine interactions are secure and transparent, businesses can confidently explore the transformative potential of AI without compromising on security. Herein lies the essence of responsible AI governance: leveraging data-driven insights to enable ethical and sustainable technological growth while safeguarding against inherent risks. ... What can organizations do to harness the collective expertise of stakeholders? With cyber threats growing increasingly sophisticated, collaboration becomes the cornerstone of a resilient cybersecurity framework.

Daily Tech Digest - January 02, 2026


Quote for the day:

“If your ship doesn’t come in, swim out to meet it!” -- Jonathan Winters



Delivering resilience and continuity for AI

Think of it as technical debt, suggests IDC group VP Daniel Saroff, as most enterprises underestimate the strain AI puts on connectivity and compute. Siloed infrastructure won’t deliver what AI needs, and CIOs need to think about these and other things in a more integrated way to make AI successful. “You have to look at your GPU infrastructure, bandwidth, network availability, and connectivity between respective applications,” he says. “If you have environments not set up for highly transactional, GPU-intensive environments, you’re going to have a problem,” Saroff warns. “And having very fragmented infrastructure means you need to pull data and integrate multiple different systems, especially when you start to look at agentic AI.” ... Making AI scale will almost certainly mean taking a hard look at your data architecture. Every database adds features for AI. And lakehouses promise you can bring operational data and analytics together without affecting the SLAs of production workloads. Or you can go further with data platforms like Microsoft Fabric that bring in streaming and time series data to use for AI applications. If you’ve already tried different approaches, you likely need to rearchitect your data layer to get away from the operational sprawl of fragmented microservices, where every data hand-off between separate vector stores, graph databases, and document silos introduces latency and governance gaps. Too many points of failure make it hard to deliver high availability guarantees.


Technological Disruption: Strategic Inflection Points From 2026 - 2036

From a defensive standpoint, AI-driven security solutions will provide continuous surveillance, automated remediation, and predictive threat modeling at a scale unattainable by human analysts. Simultaneously, attackers will utilize AI to create polymorphic malware, execute influence operations, and exploit holes at machine speed. The outcome will be an environment where cyber war progresses more rapidly than conventional command-and-control systems can regulate. As we approach 2036, the primary concern will be AI governance rather than AI capacity. ... From 2026 to 2030, enterprises will increasingly recognize that cryptographic agility is vital. The move to post-quantum cryptography standards means that old systems, especially those in critical infrastructure, financial services, and government networks, need to be fully inventoried, evaluated, and upgraded. By the early 2030s, quantum innovation will transcend cryptography, impacting optimization, materials science, logistics, and national security applications. ... In the forthcoming decade, supply chain security will transition from compliance-based evaluations to ongoing risk intelligence. Transparency methods, including software bills of materials, hardware traceability, and real-time vendor risk assessment, will evolve into standard expectations rather than just best practices. Supply chain resilience will strategically impact national competitiveness.


True agentic AI is years away - here's why and how we get there

We're not there yet. We're not even close. Today's bots are limited to chat interactions and often fail outside that narrow operating context. For example, what Microsoft calls an "agent" in the Microsoft 365 productivity suite, probably the best-known instance of an agent, is simply a way to automatically generate a Word document. Market data shows that agents haven't taken off. ... Simple automations can certainly bring benefits, such as assisting a call center operator or rapidly handling large volumes of invoices. However, a growing body of scholarly and technical reports has highlighted the limitations of today's agents, which have failed to advance beyond these basic automations. ... Before agents can live up to the "fully autonomous code" hype of Microsoft and others, they must overcome two primary technological shortcomings, and ongoing research across the industry is focused on both: developing a reinforcement learning approach to designing agents, and re-engineering AI's use of memory -- not just memory chips such as DRAM, but the whole process of storing and retrieving information. Reinforcement learning, which has been around for decades, has demonstrated striking results in enabling AI to carry out tasks over very long time horizons. ... On the horizon looms a significant shift in reinforcement learning itself, which could be a boon or could further complicate matters. Can AI do a better job of designing reinforcement learning than humans?


Why Developer Experience Matters More Than Ever in Banking

Effective AI assistance, in fact, meets developers where they are—or where they work. Some prefer a command-line interface, others live inside an IDE, and still others rely heavily on sample code and language-specific SDKs. A strong DX strategy supports all of these modes, using AI to surface accurate, context-aware guidance without forcing developers into a single workflow. When AI reinforces clarity, it becomes a force multiplier. ... As AI-assisted development becomes more common, the quality of documentation takes on new importance. Because it is no longer read only by humans, documentation increasingly serves as the knowledge base that enables AI agents that help developers search, generate, and validate code. When documentation is vague or poorly structured, it introduces confusion, often in ways that actively undermine developer confidence. ... In highly regulated environments, developers want, and expect, guardrails—but not at the expense of speed and consistency. One of the most effective ways to balance those demands is by codifying business rules and compliance requirements directly into the platform, rather than relying on manual, human-driven review at key milestones. Talluri describes this approach as “policy as code”: embedding rules, validations, and regional requirements into the system so developers receive immediate, actionable prompts and feedback as they work. ... The business case for exceptional developer experience rests on a simple truth: trust drives productivity.
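The "policy as code" approach Talluri describes can be sketched very simply: compliance requirements are expressed as small executable rules the platform runs on every change, returning actionable messages immediately rather than waiting for a manual review milestone. The rules below are hypothetical illustrations, not actual banking requirements:

```python
def require_encryption(cfg):
    """Hypothetical rule: all storage must be encrypted at rest."""
    if not cfg.get("storage_encrypted", False):
        yield "storage_encrypted must be true (encryption-at-rest rule)"

def restrict_region(cfg):
    """Hypothetical data-residency rule for a regulated workload."""
    allowed = {"eu-central-1", "eu-west-1"}
    if cfg.get("region") not in allowed:
        yield f"region must be one of {sorted(allowed)} (data-residency rule)"

# The platform's policy set: adding a regulation means adding a function,
# and every developer gets the new check on their next change.
POLICIES = [require_encryption, restrict_region]

def validate(cfg):
    """Run every policy and collect violations for immediate feedback."""
    return [msg for policy in POLICIES for msg in policy(cfg)]

# A non-compliant deployment config produces two actionable messages;
# an empty list means the change can proceed without human review.
violations = validate({"region": "us-east-1", "storage_encrypted": False})
```

Because the rules are code, they are versioned, testable, and consistent across teams, which is exactly the speed-plus-guardrails balance the passage argues for.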


AI-powered testing for strategic leadership

“Nearly half of teams still release untested code due to time pressure, creating fragile systems and widening risk exposure. Legacy architectures further compound this, making modernisation difficult and slowing down automated validation,” he said. AI-generated code also introduces new vulnerabilities. Without strong validation pipelines, testing quickly becomes the bottleneck of transformation. Developers often view testing as tedious, and with modern codebases spanning multiple interconnected applications, the challenge intensifies. At the same time, misalignment between leadership and engineering teams leads to unclear priorities and rushed decisions. While the pace of development already feels fast, it is only set to accelerate. To overcome these barriers, CIOs can adopt model-based, codeless AI testing that reduces dependence on fragile code-level automation and cuts ongoing maintenance. This approach can reduce manual effort by 80%–90% and enables non-technical experts to participate through natural-language and visual test generation. For Wong, strong governance is vital. This entails domain-trained, testing-specific AI that avoids hallucinations and supports safe, transparent validation. Instead of becoming autonomous, AI can act as a co-pilot working alongside developers. “By aligning teams, modernising toolchains, and embedding guardrails, CIOs can shift from reactive firefighting to proactive, AI-driven quality engineering,” he said.
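The core idea behind model-based testing can be shown in a few lines: the application is described as a state machine, and test sequences are generated from that model instead of being hand-scripted, so a behavior change means updating the model rather than rewriting fragile automation code. This is an illustrative sketch, not the vendor tooling the article discusses; the states and actions are invented:

```python
# Hypothetical model of a small app: state -> {action: next_state}.
MODEL = {
    "logged_out": {"login": "dashboard"},
    "dashboard":  {"open_report": "report", "logout": "logged_out"},
    "report":     {"back": "dashboard"},
}

def generate_suite(start, depth):
    """Enumerate every action sequence up to `depth` steps from the model."""
    frontier = [([], start)]
    suite = []
    for _ in range(depth):
        nxt = []
        for actions, state in frontier:
            for action, target in MODEL.get(state, {}).items():
                nxt.append((actions + [action], target))
        suite += nxt
        frontier = nxt
    return suite  # list of (action sequence, expected final state)

class App:
    """Stand-in for the system under test; in practice this would drive a UI."""
    def __init__(self):
        self.state = "logged_out"
    def do(self, action):
        self.state = MODEL[self.state][action]

# Execute the generated suite against the system and check each outcome.
for actions, expected in generate_suite("logged_out", 2):
    app = App()
    for a in actions:
        app.do(a)
    assert app.state == expected
```

Coverage scales with `depth` rather than with hand-written scripts, and a visual or natural-language front end over the model is what lets non-technical experts contribute tests.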


The Architect’s Dilemma: Choose a Proven Path or Pave Your Own Way?

Platforms and frameworks are like paved roads: they can help a team progress faster on its journey, with well-defined "exit ramps" or extension points where a team can extend the platform to meet its needs, but they come with side effects that may make them undesirable. Teams need to decide when, if ever, to leave the path others have paved and find their own way, either by developing extensions to the platform or framework or by developing new ones. The challenge teams face when they use platforms or frameworks as the basis for their software architectures is to choose the "paved road" (platform or framework) that gets them closest to their desired destination with minimal diversions or new construction. ... Many platform decisions are innocuous and can be accepted and ignored when they don’t affect the quality attribute requirements (QARs) that the team needs to meet. The only way to know whether the decisions are harmful is through experiments that expose when the platform is failing to meet the goals of the system. Since the decisions made by the platform developers are often undocumented and/or unknowable, it’s imperative that teams be able to test their system (including the platforms on which it is built) to make sure that their architectural goals (QARs) are being met. ... Using the "paved road" metaphor, the LLM provides a proven path, but it does not take the team where they need to go. When this happens, they have no choice but to extend the platform (if they can), find a different platform, or build their own.
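An "exit ramp" in this sense is usually a documented extension point: the framework fixes the overall route but exposes named hooks where a team can plug in its own behavior without forking the platform. A minimal Python sketch of the pattern, with illustrative names (this is the general hook-registry idea, not any specific framework's API):

```python
class Pipeline:
    """A framework-owned 'paved road' with one documented exit ramp."""
    def __init__(self):
        self._steps = [self._parse, self._store]  # the paved road
        self._hooks = {"after_parse": []}         # the documented exit ramps

    def extend(self, hook, fn):
        """Register a team extension; unknown hook names fail loudly."""
        self._hooks[hook].append(fn)              # KeyError = no such ramp

    def _parse(self, record):
        record = dict(record, parsed=True)
        for fn in self._hooks["after_parse"]:     # run team extensions
            record = fn(record)
        return record

    def _store(self, record):
        return dict(record, stored=True)

    def run(self, record):
        for step in self._steps:
            record = step(record)
        return record

# A team leaves the paved road at the defined spot, not by forking.
p = Pipeline()
p.extend("after_parse", lambda r: dict(r, redacted=True))
result = p.run({"id": 1})
```

The architectural question the article raises is exactly whether the ramps the platform offers sit where your QARs need them; if they don't, the choices collapse to extending, switching, or building.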


Supply chains, AI, and the cloud: The biggest failures (and one success) of 2025

By compromising a single target with a large number of downstream users—say, a cloud service, or the maintainers or developers of widely used open source or proprietary software—attackers can potentially infect millions of the target’s downstream users. ... Another significant security story cast both Meta and Yandex as villains. Both companies were caught exploiting an Android weakness that allowed them to de-anonymize visitors so that years of their browsing histories could be tracked. The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allowed Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and the browsers that run on it. ... The outage with the biggest impact came in October, when a single point of failure inside Amazon’s sprawling network took out vital services worldwide. It lasted 15 hours and 32 minutes. The root cause that kicked off the chain of events was a bug in the software that monitors the stability of load balancers by, among other things, periodically creating new DNS configurations for endpoints within the Amazon Web Services network. A race condition—a type of bug that makes a process dependent on the timing or sequence of events that are variable and outside the developers’ control—caused a key component inside the network to experience “unusually high delays needing to retry its update on several of the DNS endpoint,” Amazon said in a post-mortem.
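The class of bug Amazon describes is easiest to see with a deterministic miniature: two workers each read a shared "plan version", compute an update, and write it back. Interleaving the read-modify-write steps by hand shows how a stale read silently discards one update. The scenario names are illustrative, not taken from the post-mortem:

```python
class PlanStore:
    """A shared record, standing in for a DNS configuration version."""
    def __init__(self):
        self.version = 0

def racy_interleaving(store):
    """Both workers read before either writes -- the losing schedule."""
    a = store.version          # worker A reads v0
    b = store.version          # worker B also reads v0 (stale once A writes)
    store.version = a + 1      # A writes v1
    store.version = b + 1      # B overwrites with its own v1: A's update lost
    return store.version

def serialized(store):
    """Each read-modify-write completes before the next begins."""
    store.version = store.version + 1   # worker A's full update
    store.version = store.version + 1   # worker B runs only after A finishes
    return store.version

assert racy_interleaving(PlanStore()) == 1   # two updates, one survives
assert serialized(PlanStore()) == 2          # serialization preserves both
```

In a real system the interleaving depends on timing outside the developers' control, which is why such bugs can sit latent for years and then surface as a region-wide outage; the standard fixes are locking, compare-and-swap, or otherwise serializing the update path.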


The Evolving Cybersecurity Challenge for Critical Infrastructure

Convergence between OT, IT and the cloud is giving cybercriminal groups the opportunity to target critical infrastructure. Operators and regulators are wrestling with new technology and new manufacturers from outside the traditional OT/ICS supply chain. “With the geopolitical tensions and the way that the world will look in maybe a few years, they're starting to scratch their heads and think, ‘okay, is it secure? Is it safe? How was it developed? Is there any remote access? How is it being configured?’ There are things that are being done now, that will have an effect in a few years’ time,” cautioned Daniel dos Santos, head of security research at Forescout's Vedere Labs. Given the lifespans of operational technology, installing insecure equipment now can have long-term consequences. Meanwhile, CISOs must deal with older hardware that was not designed for modern threats. Even where vendors release patches, CNI operators do not always apply them, either because of concerns about business interruption or a lack of visibility. ... Threats to CNI are not likely to abate in 2026. Legislators are putting more emphasis on cyber resilience, and legislation such as the EU’s Cyber Resilience Act will improve the security of connected devices. But these upgrades take time. “Threats from criminal groups continue to grow exponentially,” said Phil Tonkin, CTO at OT security specialist Dragos.


The changing role of the MSP: What does this mean for security?

MSPs hold a unique position within the IT ecosystem, as they are often responsible for managing and supporting the IT infrastructures, cloud services, and cybersecurity of many different organizations. These trusted partners often have privileged access to the inner workings of the organizations they support, including access to the critical systems, sensitive information, and intellectual property of their clients. ... Research shows that over half of MSP leaders globally believe that their customers are at more risk from cyber threats today than at this time last year, with AI-based attack vectors, ransomware/malware, and insider threats the most commonly faced. As a result of this uptick in threats, more organizations than ever are leaning on MSPs for cyber support. In fact, in 2025, 84% of MSPs managed either their clients’ cyber infrastructure or their cyber and IT estates combined, a significant increase from 64% the previous year. What this shows is that SMEs are realising they cannot handle cybersecurity alone and are turning to MSPs for additional help. Cybersecurity is no longer an optional extra or add-on; it’s becoming a core, expected service for MSPs. MSP leaders are transitioning from general IT support to becoming essential cybersecurity guardians. ... MSPs that adapt by investing in specialized cybersecurity expertise, advanced technologies, and a proactive security posture will thrive, becoming indispensable partners to businesses navigating the complex world of cyber risk.


What’s next for Azure containers?

Until now, even though Azure has had deep eBPF support, you’ve had to bring your own eBPF tools and manage them yourself, which does require expertise to run at scale. Not everyone is a Kubernetes platform engineer, and with tools like AKS providing a managed environment for cloud-native applications, having a managed eBPF environment is an important upgrade. The new Azure Managed Cilium tool provides a quick way of getting that benefit in your applications, using it for host routing and significantly reducing the overhead that comes with iptables-based networking. ... Declarative policies let Azure lock down container features to reduce the risk of compromised container images affecting other users. At the same time, it’s working to secure the underlying host OS, which for ACI is Linux. SELinux allows Microsoft to lock that image down, providing an immutable host OS. However, those SELinux policies don’t cross the boundary into containers, leaving their userspace vulnerable. ... Having a policy-driven approach to security helps quickly remediate issues. If, say, a common container layer has a vulnerability, you can build and verify a patch layer and deploy it quickly. There’s no need to patch everything in the container, only the relevant components. Microsoft has been doing this for OS features for some time now as part of its internal Project Copacetic, and it’s extending the process to common runtimes and libraries, building patches with updated packages for tools like Python.