Daily Tech Digest - January 30, 2026


Quote for the day:

"In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it." -- Jane Smiley



Crooks are hijacking and reselling AI infrastructure: Report

In a report released Wednesday, researchers at Pillar Security say they have discovered campaigns at scale going after exposed large language model (LLM) and MCP endpoints – for example, an AI-powered support chatbot on a website. “I think it’s alarming,” said report co-author Ariel Fogel. “What we’ve discovered is an actual criminal network where people are trying to steal your credentials, steal your ability to use LLMs and your computations, and then resell it.” ... How big are these campaigns? In the past couple of weeks alone, the researchers’ honeypots captured 35,000 attack sessions hunting for exposed AI infrastructure. “This isn’t a one-off attack,” Fogel added. “It’s a business.” He doubts a nation-state is behind it; the campaigns appear to be run by a small group. ... Defenders need to treat AI services with the same rigor as APIs or databases, he said, starting with authentication, telemetry, and threat modelling early in the development cycle. “As MCP becomes foundational to modern AI integrations, securing those protocol interfaces, not just model access, must be a priority,” he said. ... Despite the number of news stories in the past year about AI vulnerabilities, Meghu said the answer is not to give up on AI, but to keep strict controls on its usage. “Do not just ban it, bring it into the light and help your users understand the risk, as well as work on ways for them to use AI/LLM in a safe way that benefits the business,” he advised.


AI-Powered DevSecOps: Automating Security with Machine Learning Tools

Here's the uncomfortable truth: AI is both causing and solving the same problem. A Snyk survey from early 2024 found that 77% of technology leaders believe AI gives them a competitive advantage in development speed. That's great for quarterly demos and investor decks. It's less great when you realize that faster code production means exponentially more code to secure, and most organizations haven't figured out how to scale their security practice at the same rate. ... Don't try to AI-ify your entire security stack at once. Pick one high-pain problem — maybe it's the backlog of static analysis findings nobody has time to triage, or maybe it's spotting secrets accidentally committed to repos — and deploy a focused tool that solves just that problem. Learn how it behaves. Understand its failure modes. Then expand. ... This is non-negotiable, at least for now. AI should flag, suggest, and prioritize. It should not auto-merge security fixes or automatically block deployments without human confirmation. I've seen two different incidents in the past year where an overzealous ML system blocked a critical hotfix because it misclassified a legitimate code pattern as suspicious. Both cases were resolved within hours, but both caused real business impact. The right mental model is "AI as junior analyst." ... You need clear policies around which AI tools are approved for use, who owns their output, and how to handle disagreements between human judgment and AI recommendations.
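The "one focused tool" advice above — for instance, spotting secrets accidentally committed to repos — can be sketched in a few lines. This is a minimal illustration, not any product's rule set; the two regex patterns and the sample file are assumptions chosen for the example (real scanners ship far larger, tuned pattern libraries).

```python
import re

# Illustrative patterns only -- real secret scanners maintain much
# larger, continuously tuned rule sets with entropy checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api|secret)[_-]?key\s*[=:]\s*['\"][A-Za-z0-9/+=]{20,}['\"]"
    ),
}

def scan_text(text, source="<unknown>"):
    """Return (source, line_no, rule) findings for one file's text."""
    findings = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((source, line_no, rule))
    return findings

# Example: a config file accidentally committed with a key in it.
sample = 'db_host = "10.0.0.5"\napi_key = "abcdEFGH1234abcdEFGH1234"\n'
for source, line_no, rule in scan_text(sample, "config.py"):
    print(f"{source}:{line_no}: possible {rule}")  # config.py:2: possible generic_api_key
```

A deliberately narrow tool like this is easy to reason about — which is exactly the point of learning one tool's failure modes before expanding.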


AI & the Death of Accuracy: What It Means for Zero-Trust

The basic idea is that as the signal quality degrades over time through junk training data, models can remain fluent and fully interact with the user while becoming less reliable. From a security standpoint, this can be dangerous, as AI models are positioned to generate confident-yet-plausible errors when it comes to code reviews, patch recommendations, app coding, security triaging, and other tasks. More critically, model degradation can erode and misalign system guardrails, giving attackers the opportunity to exploit the opening through things like prompt injection. ... "Most enterprises are not training frontier LLMs from scratch, but they are increasingly building workflows that can create self-reinforcing data stores, like internal knowledge bases, that accumulate AI-generated text, summaries, and tickets over time," she tells Dark Reading. ... Gartner said that to combat the potential impending issue of model degradation, organizations will need a way to identify and tag AI-generated data. This could be addressed through active metadata practices (such as establishing real-time alerts for when data may require recertification) and potentially appointing a governance leader that knows how to responsibly work with AI-generated content. ... Kelley argues that there are pragmatic ways to "save the signal," namely through prioritizing continuous model behavior evaluation and governing training data.
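Gartner's suggestion — identify and tag AI-generated data so it doesn't silently feed back into training stores — might look like the sketch below: attach provenance metadata at write time, then filter by origin before reuse. The field names and the filtering rule are illustrative assumptions, not a standard schema.

```python
from datetime import datetime, timezone

def tag_record(text, origin, model=None):
    """Attach provenance metadata as a record enters a knowledge base.

    'origin' is 'human' or 'ai'; the field names here are assumed for
    illustration, not taken from any published standard.
    """
    return {
        "text": text,
        "origin": origin,        # who produced the content
        "model": model,          # which model, if AI-generated
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

def training_safe(records):
    """Keep AI-generated records out of a candidate training set,
    breaking the self-reinforcing loop the article describes."""
    return [r for r in records if r["origin"] == "human"]

store = [
    tag_record("Customer reported login failures on v2.3.", "human"),
    tag_record("Summary: failures likely caused by token expiry.", "ai", model="gpt-4o"),
]
print(len(training_safe(store)))  # 1 -- only the human-authored record survives
```

The same tags support the "active metadata" practices mentioned above: a recertification alert is just a query over `origin` and `ingested_at`.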


The Friction Fix: Change What Matters

Friction is the invisible current that sinks every transformation. Friction isn’t one thing; it’s systemic. Relationships produce friction: between people, teams, and technology. ... When faced with a systemic challenge, our human inclination is to blame. Unfortunately, we blame the wrong things. We blame the engineering team for failing to work fast enough, or decide the team is too small, rather than recognize that our Gantt chart was fiction: an oversimplification of a complex dynamic. ... The fix is to pause and get oriented. Begin by identifying the core domain, the North Star. What is the goal of the system? For FedEx, it is fast package delivery. Chances are, when you are experiencing counterintuitive behavior, it is because people are navigating in different directions while using the same words. ... Every organization trying to change has that guy: the gatekeeper, the dungeon master, the self-proclaimed 10x engineer who knows where the bodies are buried. They also wield one magic word: No. ... It’s easy to blame that guy’s stubborn personality. But he embodies behavior that has been rewarded and reinforced. ... Refusal to change is contagious. When that guy shuts down curiosity, others drift towards a fixed mindset. Doubt becomes the focus, not experimentation. The organization can’t balance avoiding risk with trying something new. The transformation is dead in the water.


From devops to CTO: 8 things to start doing now

Devops leaders have the opportunity to make a difference in their organization and for their careers. Lead a successful AI initiative, deploy to production, deliver business value, and share best practices for other teams to follow. Successful devops leaders don’t jump on the easy opportunities; they look for the ones that can have a significant business impact. ... Another area where devops engineers can demonstrate leadership skills is by establishing standards for applying genAI tools throughout the software development lifecycle (SDLC). Advanced tools and capabilities require effective strategies to extend best practices beyond early adopters and ensure that multiple teams succeed. ... If you want to be recognized for promotions and greater responsibilities, a place to start is in your areas of expertise and with your team, peers, and technology leaders. However, shift your focus from getting something done to a practice leadership mindset. Develop a practice or platform your team and colleagues want to use and demonstrate its benefits to the organization. Devops engineers can position themselves for a leadership role by focusing on initiatives that deliver business value. ... One of the hardest mindset transitions for CTOs is shifting from being the technology expert and go-to problem-solver to becoming a leader facilitating the conversation about possible technology implementations. If you want to be a CTO, learn to take a step back to see the big picture and engage the team in recommending technology solutions.


The stakes rise for the CIO role in 2026

The CIO's days as back-office custodian of IT are long gone, to be sure, but that doesn't mean the role is settled. Indeed, Seewald and others see plenty of changes still underway. In 2026, the CIO's role in shaping how the business operates and performs is still expanding. It reflects a nuanced change in expectations, according to longtime CIOs, analysts and IT advisors -- and one that is showing up in many ways as CIOs become more directly involved in nailing down competitive advantage and strategic success across their organizations. ... "While these core responsibilities remain the same, the environment in which CIOs operate has become far more complex," Tanowitz added. Conal Gallagher, CIO and CISO at Flexera, said the CIO in 2026 is now "accountable for outcomes: trusted data, controlled spend, managed risk and measurable productivity." "The deliverable isn't a project plan," Gallagher said. "It's proof that the business runs faster, safer and more cost-disciplined because of the operating model IT enables." ... In 2026, the CIO role is less about being the technology owner and more about being a business integrator, Hoang said. At Commvault, that shift places greater emphasis on governance and orchestration across ecosystems. "We're operating in a multicloud, multivendor, AI-infused environment," she said. "A big part of my job is building guardrails and partnerships that enable others to move fast -- safely."


Inside the Shift to High-Density, AI-Ready Data Centres

As density increases, design philosophy must evolve. Power infrastructure, backup systems, and cooling can no longer be treated as independent layers; they have to be tightly integrated. Our facilities use modular and scalable power and cooling architectures that allow us to expand capacity without disrupting live environments. Rated-4 resilience is non-negotiable, even under continuous, high-density AI workloads. The real focus is flexibility. Customers shouldn’t be forced into an all-or-nothing transition. Our approach allows them to move gradually to higher densities while preserving uptime, efficiency, and performance. High-density AI infrastructure is less about brute force and more about disciplined engineering that sustains reliability at scale. ... The most common misconception is that AI data centres are fundamentally different entities. While AI workloads do increase density, power, and cooling demands, the core principles of reliability, uptime, and efficiency remain unchanged. AI readiness is not about branding; it’s about engineering and operations. Supporting AI workloads requires scalable and resilient power delivery, precision cooling, and flexible designs that can handle GPUs and accelerators efficiently over sustained periods. Simply adding more compute without addressing these fundamentals leads to inefficiency and risk. The focus must remain on mission-critical resilience, cost-effective energy management, and sustainability. 


Software Supply Chain Threats Are on the OWASP Top Ten—Yet Nothing Will Change Unless We Do

As organizations deepen their reliance on open-source components and embrace AI-enabled development, software supply chain risks will become more prevalent. In the OWASP survey, 50% of respondents ranked software supply chain failures number one. The awareness is there. Now the pressure is on for software manufacturers to enhance software transparency, making supply chain attacks far less likely and less damaging. ... Attackers only need one forgotten open-source component from 2014 that still lives quietly inside software to execute a widespread attack. The ability to cause widespread damage by targeting the software supply chain makes these vulnerabilities alluring for attackers. Why break into a hardened product when one outdated dependency—often buried several layers down—opens the door with far less effort? The SolarWinds software supply chain attack that took place in 2020 demonstrated the access adversaries gain when they hijack the build process itself. ... “Stable” legacy components often go uninspected for years. These aging libraries, firmware blocks, and third-party binaries frequently contain memory-unsafe constructs and unpatched vulnerabilities that could be exploited. Be sure to review legacy code and not give it the benefit of the doubt. ... With an SBOM in hand, generated at every build, you can scan software for vulnerabilities and remediate issues before they are exploited. 
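The closing point — generate an SBOM at every build, then scan it — reduces to cross-referencing component coordinates against an advisory feed. A minimal sketch, assuming a trimmed CycloneDX-style document and a hypothetical hardcoded advisory table (`EXAMPLE-ADVISORY-0001` is a placeholder, not a real CVE; in practice the feed comes from a vulnerability database):

```python
import json

# A trimmed CycloneDX-style SBOM; real SBOMs carry many more fields
# (purl, licenses, hashes, dependency graph).
sbom_json = """
{
  "components": [
    {"name": "libexample", "version": "1.2.0"},
    {"name": "oldparser",  "version": "0.9.1"}
  ]
}
"""

# Hypothetical advisory feed keyed by (name, version), hardcoded
# purely for illustration.
KNOWN_VULNS = {
    ("oldparser", "0.9.1"): "EXAMPLE-ADVISORY-0001",
}

def scan_sbom(sbom_text):
    """Cross-reference SBOM components against a vulnerability feed."""
    sbom = json.loads(sbom_text)
    hits = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in KNOWN_VULNS:
            hits.append((comp["name"], comp["version"], KNOWN_VULNS[key]))
    return hits

for name, version, advisory in scan_sbom(sbom_json):
    print(f"{name} {version}: {advisory}")  # oldparser 0.9.1: EXAMPLE-ADVISORY-0001
```

This is also why the article's "forgotten component from 2014" matters: the lookup only catches what the SBOM actually lists, so transitive dependencies buried several layers down must appear in it too.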


What the first 24 hours of a cyber incident should look like

When a security advisory is published, the first question is whether any assets are potentially exposed. In the past, a vendor’s claim of exploitation may have sufficed. Given the precedent set over the past year, it is unwise to rely solely on a vendor advisory for exploited-in-the-wild status. Too often, advisories or exploitation confirmations reach teams too late or without the context needed to prioritise the response. CISA’s KEV, trusted third-party publications, and vulnerability researchers should form the foundation of any remediation programme. ... Many organisations will leverage their incident response (IR) retainers to assess the extent of the compromise or, at a minimum, perform a rudimentary threat hunt for indicators of compromise (IoCs) before involving the IR team. As with the first step, accurate, high-fidelity intelligence is critical. Simply downloading IoC lists filled with dual-use tools from social media will generate noise and likely lead to inaccurate conclusions. Arguably, the cornerstone of the initial assessment is ensuring that intelligence incorporates decay scoring to validate command-and-control (C2) infrastructure. For many, the term ‘threat hunt’ translates to little more than a log search on external gateways. ... The approach at this stage will be dependent on the results of the previous assessments. There is no default playbook here; however, an established decision framework that dictates how a company reacts is key.
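The decay scoring mentioned above — discounting an indicator's confidence as it ages, since C2 infrastructure churns quickly — is often modeled as exponential decay. A sketch under stated assumptions: the half-life, threshold, and indicator shape are illustrative choices, and the IPs come from documentation ranges.

```python
import math
from datetime import datetime, timedelta, timezone

def decayed_score(base_score, last_seen, half_life_days=7.0, now=None):
    """Exponentially decay an indicator's confidence with age.

    A sighting from weeks ago should carry far less weight than one
    from yesterday. The 7-day half-life is an assumed default;
    platforms tune it per indicator type.
    """
    now = now or datetime.now(timezone.utc)
    age_days = max((now - last_seen).total_seconds() / 86400, 0.0)
    return base_score * math.exp(-math.log(2) * age_days / half_life_days)

def actionable(indicators, threshold=40.0, now=None):
    """Keep only indicators whose decayed score clears the hunt threshold."""
    return [
        ioc for ioc in indicators
        if decayed_score(ioc["score"], ioc["last_seen"], now=now) >= threshold
    ]

now = datetime.now(timezone.utc)
feed = [
    {"value": "203.0.113.7",  "score": 90, "last_seen": now - timedelta(days=1)},
    {"value": "198.51.100.2", "score": 90, "last_seen": now - timedelta(days=30)},
]
print([ioc["value"] for ioc in actionable(feed, now=now)])  # ['203.0.113.7']
```

Hunting only on indicators that survive the decay filter is one way to avoid the "IoC lists filled with dual-use tools" noise the article warns about.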


NIST’s AI guidance pushes cybersecurity boundaries

For CISOs, what should matter is that NIST is shifting from a broad, principle-based AI risk management framework toward more operationally grounded expectations, especially for systems that act without constant human oversight. What is emerging across NIST’s AI-related cybersecurity work is a recognition that AI is no longer a distant or abstract governance issue, but a near-term security problem that the nation’s standards-setting body is trying to tackle in a multifaceted way. ... NIST’s instinct to frame AI as an extension of traditional software allows organizations to reuse familiar concepts — risk assessment, access control, logging, defense in depth — rather than starting from zero. Workshop participants repeatedly emphasized that many controls do transfer, at least in principle. But some experts argue that the analogy breaks down quickly in practice. AI systems behave probabilistically, not deterministically, they say. Their outputs depend on data that may change continuously after deployment. And in the case of agents, they may take actions that were not explicitly scripted in advance. ... “If you were a consumer of all of these documents, it was very difficult for you to look at them and understand how they relate to what you are doing and also understand how to identify where two documents may be talking about the same thing and where they overlap.”

Daily Tech Digest - January 29, 2026


Quote for the day:

"Great leaders start by leading themselves, and to do that you need to know who you are." -- @GordonTredgold



Digital sovereignty feels good, but is it really?

There are no European equivalents of the American hyperscalers, let alone national ones. Although OVHcloud, Intermax, and BIT can be proposed as alternative managed locations for Azure, AWS, or Google Cloud, they are not comparable to those services. They lack the same huge ecosystem of partners, are less scalable, and are simply less user-friendly, especially when adopting new services. The reality is that many software packages also accompany the move to the cloud with a departure from on-premises. ... It is as much a ‘start’ of a digital migration as it is an end. Good luck transferring a system with deep AWS integrations to another location (even another public cloud). Although cloud-native principles would allow the same containerized workloads to run elsewhere, that has no bearing on the licenses purchased, compatibility and availability of applications, scalability, or ease of use. A self-built variant inside one’s own data center requires new expertise and almost assuredly a larger IT team. ... In some areas, European alternatives will be perfectly capable of replacing American software. However, there is no guarantee that a secure, consistent, and mature offering will be available in every area, from networking to AI inferencing and from CRM solutions to server hardware. The reality is not only that IT players from the US are prominent, but that the software ecosystem is globally integrated. Those who limit their choices must be prepared to encounter problems.


Operational data: Giving AI agents the senses to succeed

Agents need continuous streams of telemetry, logs, events, and metrics across the entire technology stack. This isn't batch processing; it is live data flowing from applications, infrastructure, security tools, and cloud platforms. When a security agent detects anomalous behavior, it needs to see what is happening right now, not what happened an hour ago ... Raw data streams aren't enough. Agents need the ability to correlate information across domains instantly. A spike in failed login attempts means nothing in isolation. But correlate it with a recent infrastructure change and unusual network traffic, and suddenly you have a confirmed security incident. This context separates signal from noise. ... The data infrastructure required for successful agentic AI has been on the "we should do that someday" list for years. In traditional analytics, poor data quality results in slower insights. Frustrating, but not catastrophic. ... Sophisticated organizations are moving beyond raw data collection to delivering data that arrives enriched with context. Relationships between systems, dependencies across services, and the business impact of technical components must be embedded in the data workflow. This ensures agents spend less time discovering context and more time acting on it. ... "Can our agents sense what is actually happening in our environment accurately, continuously, and with full context?" If the answer is no, get ready for agentic chaos. The good news is that this infrastructure isn't just valuable for AI agents. 
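The failed-login example above — a spike means nothing alone, but paired with a recent infrastructure change it becomes an incident — can be sketched as a time-window join. The event shape, the 15-minute window, and the same-host rule are illustrative assumptions; real correlation engines match on many more dimensions.

```python
from datetime import datetime, timedelta

def correlate(events, window_minutes=15):
    """Flag a likely incident when a failed-login spike lands within
    'window_minutes' of an infrastructure change on the same host."""
    window = timedelta(minutes=window_minutes)
    changes = [e for e in events if e["type"] == "infra_change"]
    spikes = [e for e in events if e["type"] == "failed_login_spike"]
    incidents = []
    for spike in spikes:
        for change in changes:
            same_host = spike["host"] == change["host"]
            close_in_time = abs(spike["time"] - change["time"]) <= window
            if same_host and close_in_time:
                incidents.append((spike["host"], change["detail"]))
    return incidents

t0 = datetime(2026, 1, 29, 3, 0)
events = [
    {"type": "infra_change", "host": "auth-01", "time": t0,
     "detail": "firewall rule modified"},
    {"type": "failed_login_spike", "host": "auth-01",
     "time": t0 + timedelta(minutes=4)},
    # An isolated spike on another host stays noise, not signal.
    {"type": "failed_login_spike", "host": "web-07",
     "time": t0 + timedelta(hours=6)},
]
print(correlate(events))  # [('auth-01', 'firewall rule modified')]
```

Note that the correlation only works because both events carry host context — which is the article's larger point about delivering data enriched with relationships rather than raw streams.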


Identity, Data Security Converging Into Trouble for Security Teams: Report

Adversaries are shifting their focus from individual credentials to identity orchestration, federation trust, and misconfigured automation, it continued. Since access to critical data stores starts with identity, unified visibility across identity and data security is required to detect misconfigurations, reduce blind spots, and respond faster. That shift, experts warned, dramatically increases the potential impact of identity failures. ... AI automation is often a chain of agents, Schrader explained. “Each agent is a non-human identity that needs lifecycle governance, and each step accesses, transforms, or hands off data,” he said. “That means a mistake in identity governance — over-permissioned agent, weak token control, missing attestation — immediately becomes a data security incident — at machine speed and at scale — because the workflow keeps executing and propagating access and data downstream.” “As AI automation runs continuously, authorization becomes a live control system, not a quarterly review,” he continued. “Agent chains amplify failures. One over-permissioned non-human identity can propagate access and data downstream like workflow-shaped lateral movement. Non-human identities sprawl fast via APIs and OAuth. Data risk also shifts dynamically as agents transform and enrich outputs.” ... “Risk multiplies with automation,” he told TechNewsWorld. “A compromised service identity can cause automated data exfiltration, model poisoning, or large-scale misconfiguration in seconds, which is far faster than manual attacks.”


Why your AI agents need a trust layer before it’s too late

While traditional ML pipelines require human oversight at every step — data validation, model training, deployment and monitoring — modern agentic AI systems enable autonomous orchestration of complex workflows involving multiple specialized agents. But with this autonomy comes a critical question: How do we trust these agents? ... DNS transformed the internet by mapping human-readable names to IP addresses. ANS does something similar for AI agents, but with a crucial addition: it maps agent names to their cryptographic identity, their capabilities and their trust level. Here’s how it works in practice. Instead of agents communicating through hardcoded endpoints like “http://10.0.1.45:8080,” they use self-describing names like “a2a://concept-drift-detector.drift-detection.research-lab.v2.prod.” This naming convention immediately tells you the protocol (agent-to-agent), the function (drift detection), the provider (research-lab), the version (v2) and the environment (production). But the real innovation lies beneath this naming layer. ... The technical implementation leverages what’s called a zero-trust architecture. Every agent interaction requires mutual authentication using mTLS with agent-specific certificates. Unlike traditional service mesh mTLS, which only proves service identity, ANS mTLS includes capability attestation in the certificate extensions. An agent doesn’t just prove “I am agent X” — it proves “I am agent X and I have the verified capability to retrain models.” ... The broader implications extend beyond just ML operations. 
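The self-describing name in the example carries its own routing and trust metadata, so a resolver can parse it without a registry lookup. A minimal sketch of that parsing, treating the exact segment order (agent, capability, provider, version, environment) as an assumption inferred from the article's single example:

```python
from dataclasses import dataclass

@dataclass
class AgentName:
    protocol: str
    agent: str
    capability: str
    provider: str
    version: str
    environment: str

def parse_agent_name(name):
    """Split a self-describing agent name of the assumed form
    '<protocol>://<agent>.<capability>.<provider>.<version>.<env>'."""
    protocol, _, rest = name.partition("://")
    parts = rest.split(".")
    if not protocol or len(parts) != 5:
        raise ValueError(f"unrecognized agent name: {name!r}")
    agent, capability, provider, version, environment = parts
    return AgentName(protocol, agent, capability, provider, version, environment)

parsed = parse_agent_name(
    "a2a://concept-drift-detector.drift-detection.research-lab.v2.prod"
)
print(parsed.capability, parsed.environment)  # drift-detection prod
```

In the full scheme described above, these parsed fields would then be checked against the capability attestation in the agent's certificate, not trusted on their own.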


3 things cost-optimized CIOs should focus on to achieve maximum value

For Lenovo CIO Art Hu, optimization involves managing a funnel of business-focused ideas. His company’s portfolio-based approach to AI includes over 1,000 registered projects across all business areas. Hu has established a policy for AI exploration and optimization that allows thousands of flowers to bloom before focusing on value. “It’s important I don’t over-prioritize on quality initially, because we have so many projects,” he says. ... “There’s a technology thing, where you probably need multiple types of models and tools to work together,” he says. “So Microsoft or OpenAI on their own probably won’t do very well. However, when you combine Databricks, Microsoft, and your agents, then you get a solution.” ... But another key area is revenue growth management. Schildhouse’s team has developed an in-house diagnostic and predictive tool to help employees make pricing decisions quicker. They tracked usage to ensure the technology was effective, and the tool was scaled globally. This success has sponsored AI-powered developments in related areas, such as promotion and calendar optimization technology. “Scale is important at a company the size and breadth of Colgate-Palmolive, because one-off solutions in individual markets aren’t going to drive that value we need,” she says. “I travel around to our key markets, and it’s nice to be in India or Brazil and have the teams show how they’re using these tools, and how it’s making a difference on the ground.”


Gauging the real impact of AI agents

Enterprises aren’t totally sold on AI, but they’re increasingly buying into AI agents. Not the cloud-hosted models we hear so much about, but smaller, distributed models that fit into IT as it has been used by enterprises for decades. Given this, you surely wonder how it’s going. Are agents paying back? Yes. How do they impact hosting, networking, operations? That’s complicated. ... There’s a singular important difference between an AI agent component and an ordinary software component. Software is explicit in its use of data. The programming includes data identification. AI is implicit in its data use; the model was trained on data, and there may well be some API linkage to databases that aren’t obvious to the user of the model. It’s also often true that when an agentic component is used, it’s determined that additional data resources are needed. Are all these resources in the same place? Probably not. ... As agents evolve into real-time applications, this requires they also be proximate to the real-time system they support (a factory or warehouse), so the data center, the users, and any real-time process pieces all pull at the source of hosting to optimize latency. Obviously, they can’t all be moved into one place, so the network has to make a broad and efficient set of connections. That efficiency demands QoS guarantees on latency as well as on availability. It’s in the area of availability, with a secondary focus on QoS attributes like latency, that the most agent-experienced enterprises see potential new service opportunities. 


OT–IT Cybersecurity: Navigating The New Frontier Of Risk

IT systems managing data and corporate services and OT systems managing physical operations like energy, manufacturing, transportation, and utilities were formerly distinct worlds, but they are now intricately linked. ... Organizations can no longer treat IT and OT as distinct security areas as long as this interconnection persists. Instead, they must embrace comprehensive strategies that integrate protection, visibility, and risk management in both domains. ... It is evident to attackers that OT systems are valuable targets. Data, electricity grids, pipelines, industrial facilities, and public safety are all at risk from breaches that formerly affected traditional IT settings and increasingly spread to physical process networks. According to recent incident statistics, an increasing number of firms report breaches that affect both IT and OT systems; this is indicative of adversaries taking advantage of legacy vulnerabilities and interconnected routes. ... The dynamic threat environment created by contemporary OT-IT convergence is incompatible with traditional perimeter defenses and flat network trusts. In order to prevent threats from moving laterally both within and between IT/OT ecosystems, zero trust designs place a strong emphasis on segmentation, stringent access control, and continuous authentication. ... OT cybersecurity is an organizational issue rather than just a technological one. IT security leaders and OT teams have always worked in distinct silos with different goals and cultures.


SolarWinds, again: Critical RCE bugs reopen old wounds for enterprise security teams

SolarWinds is yet again disclosing security vulnerabilities in one of its widely used products. The company has released updates to patch six authentication bypass and remote command execution vulnerabilities, four of them rated critical, in its Web Help Desk (WHD) IT software. ... The four critical bugs are typically very reliable to exploit due to their deserialization and authentication logic flaws, noted Ryan Emmons, security researcher at Rapid7. “For attackers, that’s good news, because it means avoiding lots of bespoke exploit development work like you’d see with other less reliable bug classes.” Instead, attackers can use a standardized malicious payload across many vulnerable targets, Emmons noted. “If exploitation is successful, the attackers gain full control of the software and all the information stored by it, along with the potential ability to move laterally into other systems.” Meanwhile, the high-severity vulnerability CVE-2025-40536 would allow threat actors to bypass security controls and gain access to certain functionalities that should be restricted only to authenticated users. ... While this incident is bad news, the good news is it’s not the same error, he noted. ... Vendors must get down past the symptom layer and address the root cause of vulnerabilities in programming logic, he said, pointing out, “they plug the hole, but don’t figure out why they keep having holes.”


Policy to purpose: How HR can design sustainable scale in DPI

“In DPI, the human impact is immediate and profound: our systems touch citizens, markets, and national platforms every single day,” Anand says. The proximity to public outcomes, he notes, heightens expectations across the organisation. Employees are no longer insulated from the downstream effects of their work. “Employees increasingly recognise that their choices—technical, operational, and ethical—directly influence outcomes for millions,” he says. ... “The opportunity is to reframe governance as an enabler of meaningful, durable impact rather than a constraint,” he says. Systems that millions rely on require deep technical excellence and responsible design—work that appeals to professionals who value longevity over novelty. ... As DPI platforms scale and regulatory attention intensifies, Anand believes HR must rethink what agility really means. “As scale and scrutiny intensify, HR must design organisations where agility is achieved through clarity and discipline,” he says. Flexibility, in this framing, is not ad hoc. It must be institutionalised—across workforce models, talent mobility and capability development—within clearly articulated guardrails. ... “The role of HR will evolve from custodians of policy to architects of sustainable scale,” Anand says. In DPI contexts, that means ensuring growth, governance and human potential advance together, rather than pulling against one another.


Adversity Isn’t a Setback. It’s the Advantage That Separates Real Entrepreneurs

The entrepreneurs who endure are not defined by how fast they scale when conditions are ideal. They are defined by how they respond when conditions turn hostile. When capital dries up. When reputations are challenged. When markets shift and expectations falter. When systems resist them. ... The paradox is that entrepreneurs who face sustained adversity early often become the most capable operators later. They learn to conserve resources. They read people accurately. They pivot without panic. They make decisions grounded in reality rather than optimism. Resilience is not taught. It is earned through determination, risk and adversity. History shows time and time again that those who prevailed were often those who were hit with life’s toughest issues, but kept getting back up, adapting and keeping on their path ahead. ... Every entrepreneurial journey eventually reaches the same point. Something breaks. A deal collapses. A partner lets you down. A market turns. A personal crisis collides with professional pressure. Sometimes it is a mistake. Sometimes it is failure. Sometimes it is a disaster or trauma with no clear explanation and no easy way through. At that moment, the question is no longer about intelligence, credentials, or ambition. It is about response. Do you take the hit and adapt, or does it flatten you? Do you get back up and keep moving, or do you stay down and explain why this time was different? Does adversity sharpen your determination, or does it quietly drain your belief?

Daily Tech Digest - January 27, 2026


Quote for the day:

"Supreme leaders determine where generations are going and develop outstanding leaders they pass the baton to." -- Anyaele Sam Chiyson



Why code quality should be a C-suite concern

At first, speed feels like progress. Then the hidden costs begin to surface: escalating maintenance effort, rising incident frequency, delayed roadmaps and growing organizational tension. The expense of poor code slowly eats into return on investment — not always in ways that show up neatly on a spreadsheet, but always in ways that become painfully visible in daily operations. ... During the planning phase, rushed architectural decisions often lead to tightly coupled, monolithic systems that are expensive and risky to change. During development, shortcuts accumulate into what we call technical debt: duplicated logic, brittle integrations and outdated dependencies that appear harmless at first but quietly erode system stability over time. Like financial debt, technical debt compounds. ... Architecture always comes first. I advocate for modular growth — whether through a well-structured modular monolith that can later evolve into microservices, or through service-oriented architectures with clear domain boundaries. Platforms such as Kubernetes enable independent scaling of components, but only when the underlying architecture is cleanly segmented. Language and framework choices matter more than most leaders realize. ... The technologies we select, the boundaries we define and the failure modes we anticipate all place invisible limits on how far an organization can grow. From what I’ve seen, you simply cannot scale a product on a foundation that was never designed to evolve.


How to regulate social media for teens (and make it stick)

Noting that age assurance proposals have broad support from parents and educators, Allen says “the question is not whether children deserve safeguarding (they do) but whether prohibition is an effective tool for achieving it.” “History suggests that bans succeed or fail not on the basis of intention, but on whether they align with demand, supply, moral legitimacy and enforcement capacity. Prohibition does not remove human desire; it reallocates who fulfils it. Whether that reallocation reduces harm or increases it depends on how well policy engages with the underlying economics and psychology of behaviour.” ... “There is little evidence that young people themselves view social media as morally repugnant. On the contrary, it is where friendships are maintained, identities are explored and social status is negotiated. That does not mean it is harmless. It means it is meaningful.” “This creates a problem for prohibition. Where demand remains strong, supply will be found.” Here, Allen’s argument falters somewhat, in that it follows the logic that says bans push kids onto less regulated and more dangerous platforms. I.e., “the risk is not simply that prohibition fails. It is that it succeeds in changing who supplies children’s social connectivity.” The difference is that, while a basket of plums and some ingenuity are all you need to produce alcohol, social media platforms have their value in the collective. Like Star Trek’s Borg, they are more powerful the more people they assimilate. 


The era of agentic AI demands a data constitution, not better prompts

If a data pipeline drifts today, an agent doesn't just report the wrong number. It takes the wrong action. It provisions the wrong server type. It recommends a horror movie to a user watching cartoons. It hallucinates a customer service answer based on corrupted vector embeddings. ... In traditional SQL databases, a null value is just a null value. In a vector database, a null value or a schema mismatch can warp the semantic meaning of the entire embedding. Consider a scenario where metadata drifts. Suppose your pipeline ingests video metadata, but a race condition causes the "genre" tag to slip. Your metadata might tag a video as "live sports," but the embedding was generated from a "news clip." When an agent queries the database for "touchdown highlights," it retrieves the news clip because the vector similarity search is operating on a corrupted signal. The agent then serves that clip to millions of users. At scale, you cannot rely on downstream monitoring to catch this. By the time an anomaly alarm goes off, the agent has already made thousands of bad decisions. Quality controls must shift to the absolute "left" of the pipeline. ... Engineers generally hate guardrails. They view strict schemas and data contracts as bureaucratic hurdles that slow down deployment velocity. When introducing a data constitution, leaders often face pushback. Teams feel they are returning to the "waterfall" era of rigid database administration.
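The "shift-left" quality controls the passage argues for can be made concrete as a data contract that runs before anything is embedded. The following is a minimal sketch under stated assumptions: the record fields (`video_id`, `genre`, `source_type`) and the allowed-genre list are illustrative inventions, not a real pipeline's schema.

```python
# Minimal left-shifted data contract: validate metadata before a record
# is ever embedded, rather than relying on downstream anomaly alarms.
# Field names ("genre", "source_type") are illustrative assumptions.

ALLOWED_GENRES = {"live sports", "news clip", "movie", "cartoon"}

def validate_record(record: dict) -> list[str]:
    """Return a list of contract violations; empty means safe to embed."""
    errors = []
    # Reject nulls outright: in a vector pipeline a null doesn't stay a
    # null, it silently warps the embedding's semantic meaning.
    for field in ("video_id", "genre", "transcript"):
        if record.get(field) in (None, ""):
            errors.append(f"missing or null field: {field}")
    genre = record.get("genre")
    if genre is not None and genre not in ALLOWED_GENRES:
        errors.append(f"unknown genre: {genre!r}")
    # Guard against the race described above: the tag and the content
    # source must agree before the embedding is generated.
    if genre == "live sports" and record.get("source_type") == "news":
        errors.append("genre/source mismatch: sports tag on news content")
    return errors

def ingest(record: dict, embed, store) -> bool:
    """Embed and store only records that pass the contract."""
    if validate_record(record):
        return False  # quarantine instead of embedding a corrupted signal
    store(record["video_id"], embed(record["transcript"]))
    return True
```

The point of the sketch is placement, not sophistication: the check runs at ingestion, so a drifted tag is quarantined before an agent can serve it to anyone.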


QA engineers must think like adversaries

Test engineers are now expected to understand pipelines, cloud-native architectures, and even prompt engineering for AI tools. The mindset has become more preventive than detective. AI has become part of QA’s toolkit, helping predict weak spots and optimise testing. At the same time, QA must validate the integrity and fairness of AI systems — making it both a user and a guardian of AI. ... With DevOps, QA became embedded into the pipeline — automated test execution, environment provisioning, and feedback loops are all part of CI/CD now. With SecOps, we’re adding security scans and penetration checks earlier, creating a DevTestSecOps model. QA is no longer a separate stage. It’s a mindset that exists throughout the lifecycle — from requirements to observability in production. ... Regression testing has become AI-augmented and data-driven. Instead of re-running all test cases, systems now prioritise based on change impact analysis. The SDET role is also evolving — they now bridge coding, observability, and automation frameworks, often owning quality gates within CI/CD. ... Security checks are now embedded as automated gates within pipelines. Performance testing, too, is moving earlier — with synthetic monitoring and API-level load simulations. In effect, security and speed can coexist, provided teams integrate validation rather than treat it as an afterthought.
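The change-impact prioritization described above can be sketched in a few lines. This is a toy illustration, not a real framework: the coverage map (which tests touch which files) is an assumed input that a real system would derive from coverage tooling.

```python
# A minimal sketch of change-impact test selection: instead of re-running
# every regression test, rank tests by overlap with the files touched in
# a change. The coverage map below is an illustrative assumption, not a
# real project's data.

def prioritise_tests(changed_files: set[str],
                     coverage_map: dict[str, set[str]]) -> list[str]:
    """Return test names ordered by how many changed files they cover."""
    impact = {test: len(files & changed_files)
              for test, files in coverage_map.items()}
    # Highest impact first; untouched tests (impact 0) fall to the back
    # and can be deferred to a nightly full run.
    return sorted(impact, key=lambda t: impact[t], reverse=True)

coverage_map = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_login":    {"auth.py"},
    "test_search":   {"search.py", "index.py"},
}
ordered = prioritise_tests({"payment.py", "auth.py"}, coverage_map)
```

In this toy run, `test_checkout` and `test_login` rank ahead of `test_search`, which covered none of the changed files.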


The biggest AI bottleneck isn’t GPUs. It’s data resilience

The risks of poor data resilience will be magnified as agentic AI enters the mainstream. Whereas generative AI applications respond to a prompt with an answer in the same manner as a search engine, agentic systems are woven into production workflows, with models calling each other, exchanging data, triggering actions and propagating decisions across networks. Erroneous data can be amplified or corrupted as it moves between agents, like the party game “telephone.” ... Experts cite numerous reasons data protection gets short shrift in many organizations. A key one is an overly intense focus on compliance at the expense of operational excellence. That’s the difference between meeting a set of formal cybersecurity metrics and being able to survive real-world disruption. Compliance guidelines specify policies, controls and audits, while resilience is about operational survivability, such as maintaining data integrity, recovering full business operations, replaying or rolling back actions and containing the blast radius when systems fail or are attacked. ... “Resilience and compliance-oriented security are handled by different teams within enterprises, leading to a lack of coordination,” said Forrester’s Ellis. “There is a disconnect between how prepared people think they are and how prepared they actually are.” ... Missing or corrupted data can lead models to make decisions or recommendations that appear plausible but are far off the mark. 


When open science meets real-world cybersecurity

If there is no collaboration, usually the product that emerges is a great scientific specimen with very risky implementations. The risk is usually caught by normal cyber processes and reduced accordingly; however, scientists who see the value in IT/cyber collaboration usually also end up with a great scientific specimen. There is also managed risk in the implementation with almost no measurable negative impacts or costs. We’ve seen that if collaboration is planned into the project very early on, cybersecurity can provide value. ... Cybersecurity researchers often are confused and look for issues on the internet where they stumble onto the laboratory IT footprint and make claims that we are leaking non-public information. We clearly label and denote information that is releasable to the public, but it always seems there are folks who are quicker to report than to read the dissemination labels. ... Encryption at rest (EIR) is really a control to prevent data loss when the storage medium is no longer in your control. So, when the data has been reviewed for public release, we don’t spend the extra time, effort, and money to apply a control to data stores that provide no value to either the implementation or to a cyber control. ... You can imagine there are many custom IT and OT parts that run that machine. The replacement of components is not on a typical IT replacement schedule. This can present longer than ideal technology refresh cycles. The risk here is that integrating modern cyber technology into an older IT/OT technology stack has its challenges.


4 issues holding back CISOs’ security agendas

CISOs should aim to have team members know when and how to make prioritization calls for their own areas of work, “so that every single team is focusing on the most important stuff,” Khawaja says. “To do that, you need to create clear mechanisms and instructions for how you do decision-support,” he explains. “There should be criteria or factors that says it’s high, medium, low priority for anything delivered by the security team, because then any team member can look at any request that comes to them and they can confidently and effectively prioritize it.” ... According to Lee, the CISOs who keep pace with their organization’s AI strategy take a holistic approach, rather than work deployment to deployment. They establish a risk profile for specific data, so security doesn’t spend much time evaluating AI deployments that use low-risk data and can prioritize work on AI use cases that need medium- or high-risk data. They also assign security staffers to individual departments to stay on top of AI needs, and they train security teams on the skills needed to evaluate and secure AI initiatives. ... the challenge for CISOs not being about hiring for technical skills or even soft skills, but what he called “middle skills,” such as risk management and change management. These skills he sees becoming more crucial for aligning security to the business, getting users to adopt security protocols, and ultimately improving the organization’s security posture. “If you don’t have [those middle skills], there’s only so far the security team can go,” he says.


Rethinking data center strategy for AI at scale

Traditional data centers were engineered for predictable, transactional workloads. Your typical enterprise rack ran at 8kW, cooled with forced air, powered through 12-volt systems. This worked fine for databases, web applications, and cloud storage. Yet, AI workloads are pushing rack densities past 120kW. That's not an incremental change—it's a complete reimagining of what a data center needs to be. At these densities, air cooling becomes physically impossible. ... Walk into a typical data center today. The HVAC system has its own monitoring dashboard. Power distribution runs through a separate SCADA system. Compute performance lives in yet another tool. Network telemetry? Different stack entirely. Each subsystem operates in isolation, reporting intermittently through proprietary interfaces that don't talk to each other. Operators see dashboards, not decisions. ... Cooling systems can respond instantly to thermal changes, and power orchestration becomes adaptive rather than provisioned for theoretical peaks. AI clusters can scale based not just on demand, but in coordination with available power, cooling capacity, and network bandwidth. ... Real-time visibility, unified data architectures, and adaptive control will define performance, efficiency, and competitiveness in AI-ready data centers. The organizations that thrive in the AI era won't necessarily be those with the most data centers or the biggest chips; they'll be the ones that treat infrastructure as an intelligent, responsive system capable of sensing, adapting, and optimizing in real time.


Microsoft handed over BitLocker keys to law enforcement, raising enterprise data control concerns

The US Federal Bureau of Investigation approached Microsoft with a search warrant in early 2025, seeking keys to unlock encrypted data stored on three laptops in a case of alleged fraud involving the COVID unemployment assistance program in Guam. As the keys were stored on a Microsoft server, Microsoft adhered to the legal order and handed over the encryption keys ... While the encryption of BitLocker is robust, enterprises need to be mindful of who has custody of the keys, as this case illustrates. ... Enterprises using BitLocker should treat the recovery keys as highly sensitive, and avoid default cloud backup unless there is a clear business requirement and the associated risks are well understood and mitigated. ... CISOs should also ensure that when devices are repurposed, decommissioned, or moved across jurisdictions, keys are regenerated as part of the workflow so that old keys cannot be used. ... If recovery keys are stored with a cloud provider, that provider may be compelled, at least in its home jurisdiction, to hand them over under lawful order, without notifying the company, even if the data subject or company is elsewhere. This becomes even more critical for a pharma company, semiconductor firm, defence contractor, or critical-infrastructure operator, as it exposes them to risks such as the disclosure of trade secrets in cross‑border investigations.


Moore’s law: the famous rule of computing has reached the end of the road, so what comes next?

For half a century, computing advanced in a reassuring, predictable way. Transistors – devices used to switch electrical signals on a computer chip – became smaller. Consequently, computer chips became faster, and society quietly assimilated the gains almost without noticing. ... Instead of one general-purpose processor trying to do everything, modern systems combine different kinds of processors. Traditional processing units, or CPUs, handle control and decision-making. Graphics processors are powerful processing units originally designed to handle the demands of graphics for computer games and similar tasks. AI accelerators (specialised hardware that speeds up AI tasks) focus on large numbers of simple calculations carried out in parallel. Performance now depends on how well these components work together, rather than on how fast any one of them is. Alongside these developments, researchers are exploring more experimental technologies, including quantum processors (which harness the power of quantum science) and photonic processors, which use light instead of electricity. ... For users, life after Moore’s Law does not mean that computers stop improving. It means that improvements arrive in more uneven and task-specific ways. Some applications – such as AI-powered tools, diagnostics, navigation, and complex modelling – may see noticeable gains, while general-purpose performance increases more slowly.

Daily Tech Digest - January 26, 2026


Quote for the day:

"When I finally got a management position, I found out how hard it is to lead and manage people." -- Guy Kawasaki



Stop Choosing Between Speed and Stability: The Art of Architectural Diplomacy

In contemporary business environments, Enterprise Architecture (EA) is frequently misunderstood as a static framework—merely a collection of diagrams stored digitally. In fact, EA functions as an evolving discipline focused on effective conflict management. It serves as the vital link between the immediate demands of the present and the long-term, sustainable objectives of the organization. To address these challenges, experienced architects employ a dual-framework approach, incorporating both W.A.R. and P.E.A.C.E. methodologies. At any given moment, an organization is a house divided. On one side, you have the product owners, sales teams, and innovators who are in a state of perpetual W.A.R. (Workarounds, Agility, Reactivity). They are facing the external pressures of a volatile market, where speed is the only currency and being "first" often trumps being "perfect." To them, architecture can feel like a roadblock—a series of bureaucratic "No’s" that stifle the ability to pivot. On the other side, you have the operations, security, and finance teams who crave P.E.A.C.E. (Principles, Efficiency, Alignment, Consistency, Evolution). They see the long-term devastation caused by unchecked "cowboy coding" and fragmented systems. They know that without a foundation of structural integrity, the enterprise will eventually collapse under the weight of its own complexity, turning a fast-moving startup into a sluggish, expensive legacy giant.


Why Identity Will Become the Ultimate Control Point for an Autonomous World in 2026

The law of unintended consequences will dominate organisational cybersecurity in 2026. As enterprises increase their reliance on autonomous AI agents with minimal human oversight, and as machine identities multiply, accountability will blur. The constant tension between efficiency and security will fuel uncontrolled privilege sprawl, forcing organisations to innovate not only in technology, but in governance. ... Attackers will exploit this shift, embedding malicious prompts and compromising automated pipelines to trigger actions that bypass traditional controls. Conventional privileged access management and identity access management will no longer be sufficient. Continuous monitoring, adaptive risk frameworks, and real-time credential revocation will become essential to manage the full lifecycle of AI agents. At the same time, innovation in governance and regulation will be critical to prevent a future defined by “runaway” automation. Two years after NIST released its first AI Risk Management Framework, the framework remains voluntary globally, and adoption has been inconsistent since no jurisdiction mandates it. Unless governance becomes a requirement, not just a guideline, organisations will continue to treat it as a cost rather than a safeguard. Regulatory frameworks that once focused on data privacy will expand to cover AI identity governance and cyber resilience, mandating cross-region redundancy and responsible agent oversight.


The human paradox at the center of modern cyber resilience

The problem for security leaders is that social engineering is still the most effective way to bypass otherwise robust technical controls. The problem is becoming more acute as threat actors increasingly use AI to deliver compelling, personalized, and scalable phishing attacks. While many such incidents never reach public attention, an attempt last year to defraud WPP used AI-generated video and voice cloning to impersonate senior executives in a highly convincing deepfake meeting. Unfortunately, the risks don’t end there. Even with strong technical controls and a workforce alert to social engineering tactics, risk also comes from employees who introduce tools, devices or processes that fall outside formal IT governance. ... What’s needed instead is a shift in both mindset and culture, where employees understand not just what not to do, but why their day-to-day decisions genuinely matter: which tools they trust, how they handle unexpected requests, and when they choose to slow down and double-check something rather than act on instinct. From a leadership perspective, it’s much better to foster a culture in which people feel comfortable reporting suspicious activity without fear of blame, rather than an environment where taking the risk feels like the easier option. ... Instead of acting quickly to avoid delaying work, the employee pauses because the culture has normalized slowing down when something seems unusual. They also know exactly how to report or verify because the processes are familiar and straightforward, with no confusion about who to contact or whether they’ll be blamed for raising a false alarm.


Is cloud backup repatriation right for your organization?

Cost is, without a doubt, one of the major reasons for repatriation. Cloud providers have touted the affordability of the cloud over physical data storage, but getting the most bang for your buck from using the cloud requires due diligence to keep costs down. Even major corporations struggle with this issue. The bigger the environment, the more complex it is to accurately model and cost, particularly with multi-cloud environments. And as we know, cloud is incredibly easy to scale up. Keeping with our data theme: bringing data back from deep storage is extremely expensive when done in bulk, so understanding the provider's costing model matters. Software must be expertly tuned to use the provider storage tier stack efficiently, or massive costs can be incurred. On-premises, the storage costs are already sunk. The data is also local (assuming local backup with remote replication for offsite copies), so restoring data and services happens quicker. ... Straight-up backup to the cloud can be cheaper and more effective than on-site backups. It also passes a good portion of the management overhead to the cloud provider, such as hardware support, general maintenance and backup security. As we discussed, however, putting backups in another provider's hands might mean longer response and recovery times. Smaller businesses often have an immature environment and cloud backup can be a boon, but larger businesses might consider repatriation if the infrastructure for on-site is available.
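The bulk-restore economics mentioned above can be made tangible with a back-of-the-envelope model. The per-GB rates below are placeholders I invented for illustration, not any provider's real pricing; the point is simply that retrieval and egress fees scale linearly with restore size, while an on-premises restore's storage cost is already sunk.

```python
# Rough cost model for pulling backups out of cloud deep storage.
# Rates are illustrative assumptions, not real provider pricing.

def bulk_restore_cost(gb: float,
                      retrieval_per_gb: float = 0.02,
                      egress_per_gb: float = 0.09) -> float:
    """Estimated one-off cost to restore `gb` of data from deep storage."""
    return gb * (retrieval_per_gb + egress_per_gb)

# Restoring 50 TB at these assumed rates:
cost = bulk_restore_cost(50_000)
```

Even at modest assumed rates, a 50 TB bulk restore runs to thousands of dollars, which is the kind of bill that surprises teams who only modeled the monthly storage fee.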


Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents

AI agents are different. They operate with delegated authority and can act on behalf of multiple users or teams without requiring ongoing human involvement. Once authorized, they are autonomous, persistent, and often act across systems, moving between various systems and data sources to complete tasks end-to-end. In this model, delegated access doesn’t just automate user actions, it expands them. Human users are constrained by the permissions they are explicitly granted, but AI agents are often given broader, more powerful access to operate effectively. As a result, the agent can perform actions that the user themselves was never authorized to take. ... It’s no wonder existing IAM assumptions break down. IAM assumes a clear identity, a defined owner, static roles, and periodic reviews that map to human behavior. AI agents don’t follow those patterns. They don’t fit neatly into user or service account categories, they operate continuously, and their effective access is defined by how they are used, not how they were originally approved. Without rethinking these assumptions, IAM becomes blind to the real risk AI agents introduce. ... When agents operate on behalf of individual users, they can provide the user access and capabilities beyond the user’s approved permissions. A user who cannot directly access certain data or perform specific actions may still trigger an agent that can. The agent becomes a proxy, enabling actions the user could never execute on their own. These actions are technically authorized: the agent has valid access. However, they are contextually unsafe.
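One common way to close the proxy gap the passage describes is to evaluate every agent action against the intersection of the agent's grants and the invoking user's grants, so the agent can never do on a user's behalf what that user could not do directly. The sketch below is a minimal illustration of that idea; the scope names are made up.

```python
# Minimal sketch: an agent's effective authority when acting for a user
# is the intersection of the agent's grants and that user's grants.
# Scope names are illustrative assumptions.

def effective_scopes(agent_scopes: set[str], user_scopes: set[str]) -> set[str]:
    """What the agent may actually do on this user's behalf."""
    return agent_scopes & user_scopes

def authorize(action: str, agent_scopes: set[str], user_scopes: set[str]) -> bool:
    return action in effective_scopes(agent_scopes, user_scopes)

agent = {"crm:read", "crm:write", "billing:read"}
alice = {"crm:read"}  # Alice was never granted billing access

allowed = authorize("crm:read", agent, alice)      # within both grants
blocked = authorize("billing:read", agent, alice)  # agent has it, Alice doesn't
```

With this rule, the "technically authorized but contextually unsafe" case is denied: the agent's `billing:read` grant exists, but it is unusable when the invoking user lacks it.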


The CISO’s Recovery-First Game Plan

CISOs must be on top of their game to protect an organization’s data. Lapses in cybersecurity around the data infrastructure can be devastating. Therefore, securing infrastructure needs to be air-tight. The “game plan” that leads a CISO to success must have the following elements: immutable snapshots; logical air-gapping; a fenced forensic environment; automated cyber protection; cyber detection; and near-instantaneous recovery. These six elements constitute the new wave in protecting data: next-generation data protection. There has already been a shift from modern data protection to this substantially higher level of next-gen data protection. A smart CISO would not knowingly leave their enterprise weaker. This is why adoption of automated cyber protection and cyber detection, built right into enterprise storage infrastructure, is increasing, as part of this move to next-gen data protection. Automated cyber protection and cyber detection are becoming a basic requirement for all enterprises that want to eliminate the impact of cyberattacks. All of this is vital for the rapid recovery of data within an enterprise after a cyberattack. ... But what would be smart for CISOs to do is to make adjustments based on what they currently have protecting their storage infrastructure. For example, even in a mixed storage environment, you can deploy automated cyber protection through software. You don’t need to rip and replace the cybersecurity systems and applications that you already have in place.


ICE’s expanding use of FRT on minors collides with DHS policy, oversight warnings, law

At the center of the case is DHS’s use of Mobile Fortify, a field-deployed application that scans fingerprints and performs facial recognition, then compares collected data against multiple DHS databases, including CBP’s Traveler Verification Service, Border Patrol systems, and Office of Biometric Identity Management’s Automated Biometric Identification System. The complaint alleges DHS launched Mobile Fortify around June 2025 and has used it in the field more than 100,000 times since launch. Unlike CBP’s traveler entry-exit facial recognition program in which U.S. citizens can decline participation and consenting citizens’ photos are retained only until identity verification, Mobile Fortify is not restricted to ports of entry and is not meaningfully limited as to when, where, or from whom biometrics may be taken. The lawsuit cites a DHS Privacy Threshold Analysis stating that ICE agents may use Mobile Fortify when they “encounter an individual or associates of that individual,” and that agents “do not know an individual’s citizenship at the time of initial encounter” and use Mobile Fortify to determine or verify identity. The same passage, as quoted in the complaint, authorizes collection in identifiable form “regardless of citizenship or immigration status,” acknowledging that a photo captured could be of a U.S. citizen or lawful permanent resident.


From Incident to Insight: How Forensic Recovery Drives Adaptive Cyber Resilience

The biggest flaw is that traditional forensics is almost always reactive, and once complete, it ultimately fails to deliver timely insights that are vital to an organization. For example, analysts often begin gathering logs, memory dumps, and disk images only after a breach has been detected, by which point crucial evidence may be gone. Further compounding matters is the fact that the process is typically fragmented, with separate tools for endpoint detection, SIEM, and memory analysis that make it harder to piece together a coherent narrative. ... Modern forensic approaches capture evidence at the first sign of suspicious activity — preserving memory, process data, file paths, and network activity before attackers can destroy them. The key is storing artifacts securely outside the compromised environment, which ensures their integrity and maintains the chain of custody. The most effective strategies operate on parallel tracks. The first is dedicated to restoring operations and delivering forensic artifacts, while the other begins immediate investigations. By integrating forensic, endpoint, and network evidence collection together, silos and blind spots are replaced with a comprehensive and cohesive picture of the incident. ... When integrated into the incident response process, forensic recovery investigations begin earlier, compliance reporting is backed by verifiable facts, and legal defenses are equipped with the necessary evidence. 
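The integrity and chain-of-custody point above can be illustrated with a tiny sketch: hash each artifact at capture time and record it in a log held outside the compromised environment, so later tampering with the original is detectable. The off-host transport is stubbed here with a plain list; a real pipeline would ship each record to a separate, write-once store.

```python
# Minimal sketch of early evidence preservation: hash artifacts at the
# first sign of suspicious activity and log the custody record off-host.
# The "remote_log" list stands in for a real write-once remote store.
import hashlib
import json
import time

def preserve(name: str, artifact: bytes, remote_log: list) -> dict:
    """Hash an artifact at capture time and append a custody record."""
    record = {
        "artifact": name,
        "sha256": hashlib.sha256(artifact).hexdigest(),
        "captured_at": time.time(),
    }
    remote_log.append(json.dumps(record, sort_keys=True))
    return record

def verify(artifact: bytes, record: dict) -> bool:
    """Re-hash the artifact and compare against the custody record."""
    return hashlib.sha256(artifact).hexdigest() == record["sha256"]

log: list = []
rec = preserve("memory.dmp", b"\x00\x01fake-memory-dump", log)
```

Because the hash left the environment at capture time, any later modification of the artifact fails verification, which is the property compliance reporting and legal defenses depend on.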


Memgraph founder: Don’t get too loose with your use of MCP

“It is becoming almost universally accepted that without strong curation and contextual grounding, LLMs can misfire, misuse tools, or behave unpredictably. Let me clarify what I mean by ‘tool’ i.e. external capabilities provided to the LLM, ranging from search, calculations and database queries to communication, transaction execution and more, with each exposed as an action or API endpoint through MCP.” ... “But security isn’t actually the main possible MCP stumbling block. Perversely enough, by giving the LLM more capabilities, it might just get confused and end up charging too confidently down a completely wrong path,” said Tomicevic. “This problem mirrors context-window overload: too much information increases error rates. Developers still need to carefully curate the tools their LLMs can access, with best practice being to provide only a minimal, essential set. For more complex tasks, the most effective approach is to break them into smaller subtasks, often leveraging a graph-based strategy.” ... The truth that’s coming out of this discussion might lead us to understand that the best of today’s general-purpose models, like those from OpenAI, are trained to use built-in tools effectively. But even with a focused set of tools, organisations are not entirely out of the woods. Context remains a major challenge. Give an LLM a query tool and it runs queries; but without understanding the schema or what the data represents, it won’t generate accurate or meaningful queries.
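Tomicevic's "minimal, essential set" of tools can be enforced mechanically: map each task type to a small allowlist and reject any tool call outside it before execution. The sketch below is an illustrative assumption of how such curation might look; the task and tool names are invented, and this is not MCP's actual API.

```python
# Minimal sketch of tool curation: each task type exposes only a small
# allowlist, and any call outside it is rejected before execution.
# Task and tool names are illustrative assumptions.

TOOLS_BY_TASK = {
    "analytics": {"run_query", "describe_schema"},
    "support":   {"search_docs", "create_ticket"},
}

def allowed_tools(task: str) -> set[str]:
    """Expose only the minimal, essential tool set for this task."""
    return TOOLS_BY_TASK.get(task, set())

def dispatch(task: str, tool: str, args: dict, registry: dict):
    """Run a tool call only if it is curated for the current task."""
    if tool not in allowed_tools(task):
        raise PermissionError(f"tool {tool!r} not curated for task {task!r}")
    return registry[tool](**args)

registry = {"run_query": lambda sql: f"rows for {sql}"}
result = dispatch("analytics", "run_query", {"sql": "SELECT 1"}, registry)
```

Curating per task rather than exposing everything also addresses the confusion problem in the quote: the model never sees tools irrelevant to the subtask at hand, which keeps its effective context smaller.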


Speaking the Same Language: Decoding the CISO-CFO Disconnect

On the surface, things look good: 88% of security leaders believe their priorities match business goals, and 55% of finance leaders view cybersecurity as a core strategic driver. However, the conviction is shallow. ... For CISOs, the report is a wake-up call regarding their perceived business acumen. While security leaders feel they are working hard to protect the organization, finance remains skeptical of their execution. The translation gap: Only 52% of finance leaders are "very confident" that their security team can communicate business impact clearly. Prioritization doubts: Just 43% of finance leaders feel very confident that security can prioritize investments based on actual risk. Strategy versus operations: Only 40% express full confidence in security's ability to align with business strategy. ... Chief Financial Officers are increasingly taking responsibility for enterprise risk management and cyber insurance, yet they feel they are operating with incomplete data. Efficiency concerns: Only 46% of finance leaders are very confident that security can deliver cost-efficient solutions. Perception of value: CFOs are split, with 38% viewing cybersecurity as a strategic enabler, while another 38% still view it as a cost center. ... "When security is done right, it doesn't slow the business down—it gives leadership the confidence to move faster. And to do that, you have to be able to connect with your CFO and COO through stories. Dashboards full of red, yellow, and green don't help a CFO," said Krista Arndt.

Daily Tech Digest - January 25, 2026


Quote for the day:

"Life is 10% what happens to me and 90% of how I react to it." -- Charles Swindoll



Agentic AI exposes what we’re doing wrong

What needs to change is the level of precision and adaptability in network controls. You need networking that supports fine-grained segmentation, short-lived connectivity, and policies that can be continuously evaluated rather than set once and forgotten. You also need to treat east-west traffic visibility as a core requirement because agents will generate many internal calls that look legitimate unless you understand intent, identity, and context. ... When the user is an autonomous agent, control relies solely on identity: what the agent is, its permitted actions, what it can impersonate, and what it can delegate. Network location and static IP-based trust weaken when actions are initiated by software that can run anywhere, scale instantly, and change execution paths. This is where many enterprises will stumble.  ... The old finops playbook of tagging, showback, and monthly optimization is not enough on its own. You need near-real-time cost visibility and automated guardrails that stop waste as it happens, because “later” can mean “after the budget is gone.” Put differently, the unit economics of agentic systems must be designed, measured, and controlled like any other production system, ideally more aggressively because the feedback loop is faster. ... The industry’s favorite myth is that architecture slows innovation. In reality, architecture prevents innovation from turning into entropy. Agentic AI accelerates entropy by generating more actions, integrations, permissions, data movement, and operational variability than human-driven systems typically do.
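The "automated guardrails that stop waste as it happens" can be sketched as a continuous burn-rate check: project end-of-period spend from the current rate and cut off new agent runs the moment the projection breaches budget, rather than discovering the overrun in a monthly report. The thresholds and figures below are illustrative assumptions.

```python
# Minimal sketch of a near-real-time spend guardrail for agentic
# workloads. Thresholds (80% warn line) are illustrative assumptions.

def projected_spend(spent: float, elapsed_hours: float,
                    period_hours: float) -> float:
    """Linear projection of end-of-period spend from the current burn rate."""
    if elapsed_hours <= 0:
        return spent
    return spent / elapsed_hours * period_hours

def guardrail(spent: float, elapsed_hours: float,
              period_hours: float, budget: float) -> str:
    projection = projected_spend(spent, elapsed_hours, period_hours)
    if projection >= budget:
        return "block"  # stop launching new agent runs now
    if projection >= 0.8 * budget:
        return "warn"   # alert the owning team in near real time
    return "allow"

# $600 spent 5 days into a 30-day period against a $3,000 budget
# projects to $3,600 and trips the guardrail immediately:
status = guardrail(600, 5 * 24, 30 * 24, 3000)
```

The linear projection is deliberately crude; the design point is the feedback loop, which fires within the same billing period instead of after it.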


‘Cute’ and ‘Criminal’: AI Perception, Human Bias, and Emotional Intelligence

Can you build artificial intelligence (AI) without emotional intelligence (EI)? Should you? What do we mean when we talk about “humans in the loop”? Are we asking the right questions about how humans design and govern “thinking” machines? One of the immediate problems we face with generative AI is that people increasingly rely on them for big decisions. I won’t call all of these ethical decisions, but in some cases they’re consequential decisions. And many users forget that these systems are trained on data that carry all kinds of inherited biases. When we talk about AI bias, it isn’t always abstract. It shows up in very literal assumptions the models make when they are asked to generate images or ideas. ... That question is really the beginning of understanding how these systems work. They are pulling from enormous bodies of unlabeled or inconsistently labeled data and then inferring patterns. We often forget that the inferences are statistical, not conceptual. To the model, “doctor” aligns with “male” because that’s the pattern the dataset reinforced. ... When I didn’t tell the system “diverse audience,” all the children it generated fell into the same narrow “cute child” category. It’s not that the AI systems are racist or sexist. They simply don’t have self-awareness. They’re reflecting the dominant patterns in the datasets they learned from. But reflection without critique becomes reinforcement, and reinforcement becomes norm.


AI is quietly poisoning itself and pushing models toward collapse - but there's a cure

According to tech analyst Gartner, AI data is rapidly becoming a classic Garbage In/Garbage Out (GIGO) problem for users. That's because organizations' AI systems and large language models (LLMs) are flooded with unverified, AI‑generated content that cannot be trusted. ... You know this better as AI slop. While annoying to you and me, it's deadly to AI because it poisons the LLMs with fake data. The result is what's called in AI circles "Model Collapse." AI company Aquant defined this trend: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality." ... The analyst argued that enterprises can no longer assume data is human‑generated or trustworthy by default, and must instead authenticate, verify, and track data lineage to protect business and financial outcomes. Ever try to authenticate and verify data from AI? It's not easy. It can be done, but AI literacy isn't a common skill. ... This situation means that flawed inputs can cascade through automated workflows and decision systems, producing worse results. Yes, that's right, if you think AI result bias, hallucinations, and simple factual errors are bad today, wait until tomorrow. ... Gartner suggested many companies will need stronger mechanisms to authenticate data sources, verify quality, tag AI‑generated content, and continuously manage metadata so they know what their systems are actually consuming.
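The "authenticate, verify, and track data lineage" advice can be sketched as code. The shape below is an assumption for illustration (the `tag_provenance` and `filter_trusted` helpers are hypothetical, not a Gartner or vendor API): each piece of content entering a pipeline carries a content hash, a source, and a `generator` field recording whether it is human-authored or model-generated, so downstream systems can filter or down-weight AI slop instead of silently retraining on it.

```python
import hashlib
from datetime import datetime, timezone

def tag_provenance(text: str, source: str, generator: str = "unknown") -> dict:
    """Attach minimal lineage metadata to a piece of content.

    'generator' records whether the content is human-authored or which
    model produced it, so consumers know what they are actually ingesting.
    """
    return {
        "content": text,
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "source": source,
        "generator": generator,  # e.g. "human", "gpt-4o", "unknown"
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

def filter_trusted(records, allowed_generators=("human",)):
    """Keep only records whose lineage marks them as trusted."""
    return [r for r in records if r["generator"] in allowed_generators]
```

Real deployments would add signed provenance (e.g., content credentials) rather than self-reported tags, but even this minimal metadata makes the GIGO problem auditable.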


4 Realities of AI Governance

AI has not replaced traditional security work; it has layered new obligations on top of it. We still have to protect our data and maintain sovereign assurance through independent audit reports, whether that’s SOC, PCI, ISO, or other standards. Still, today we must also guide our own teams and vendors on the use of powerful AI tools. That’s where accountability begins: with the human or process that touches the data. When the rules are clear, people move faster and safer; when directives are fuzzy, everything downstream is too, so we keep policy short, plain, and visible. ... Unless the contract says otherwise, assume prompts, outputs, or telemetry may be retained for “service improvement.” Fine-print phrases like “continuous improvement” often mean that inputs, outputs, or telemetry can be retained or used to tune systems unless you opt out. To keep reviews consistent, leverage resources like the NIST AI Risk Management Framework. It provides practical checklists for transparency, accountability, and monitoring. Remember the AI supply chain: your vendor depends on model providers, plugins, and open-source components; your risk includes their dependencies, so cover these in your TPRM process. ... Boundaries are the difference between safe speed and reckless speed. Start by defining a short set of data types that must never be pasted into external tools: regulated PII, confidential customer data, unreleased financials, source code, or merger and acquisition materials. Map the rest into simple classes (public, internal, sensitive) and tie each class to approved tools and use cases.
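The class-to-tool mapping described above is simple enough to encode directly. This is a minimal sketch under stated assumptions: the `POLICY` table, the class names, and the `is_allowed` helper are hypothetical illustrations of the article's advice, not a real governance product's API.

```python
# Hypothetical policy table: map data classes to approved AI tools and uses.
POLICY = {
    "public":    {"approved_tools": {"any"},            "external_ok": True},
    "internal":  {"approved_tools": {"enterprise-llm"}, "external_ok": False},
    "sensitive": {"approved_tools": set(),              "external_ok": False},
}

def is_allowed(data_class: str, tool: str, tool_is_external: bool) -> bool:
    """Return True if policy permits sending this data class to this tool."""
    rule = POLICY.get(data_class)
    if rule is None:
        return False  # unclassified data is treated as sensitive by default
    if tool_is_external and not rule["external_ok"]:
        return False  # never-paste-externally rule for non-public classes
    return "any" in rule["approved_tools"] or tool in rule["approved_tools"]
```

Keeping the policy this short is the point: people can actually remember three classes, and a default-deny fallback for anything unclassified keeps the fuzzy cases safe.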


Your Cache is Hiding a Bad Architecture

Most engineers treat caching as a performance optimisation. They see a complex SQL query involving four joins taking 2 seconds to execute. Instead of analysing the execution plan or restructuring the schema, they wrap the call in a redis.get() block. ... By relying on the cache to mask inefficient database interactions, you haven’t fixed the bottleneck; you have simply hidden it behind a volatile memory store. You have turned a “nice-to-have” performance layer into a Critical Infrastructure Dependency. The moment that the cache key expires, or the Redis node evicts the key to free up memory, the application is forced to confront the reality of that 2-second query. And usually, it doesn’t confront it alone. It confronts it with 500 concurrent users who were all waiting for that key. ... Caching is not a strategy; it is a tactic. It is a powerful optimisation for systems that are already healthy, but it is a disastrous life-support system for those that are not. If you take nothing else from this, remember the litmus test: System stability should not depend on volatile memory. Go back to your codebase. Turn off Redis in your staging environment. Run your load tests. If your response times go up, you have a performance problem. If your error rates go up, you have an architectural problem.
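The stampede the article describes, and the usual band-aid for it, can be sketched in a few lines. This is a minimal illustration, assuming an in-memory dict as a stand-in for Redis; `slow_query` is a placeholder for the 2-second, four-join query, and the per-key lock is a common "single-flight" mitigation, not a fix for the underlying schema.

```python
import threading

cache = {}    # in-memory stand-in for Redis
_locks = {}   # per-key locks to prevent a stampede
_locks_guard = threading.Lock()

def slow_query(key):
    """Placeholder for the expensive four-join SQL query."""
    return f"result-for-{key}"

def get_with_single_flight(key):
    """Cache-aside read that lets only one caller recompute an expired key.

    Without the per-key lock, every concurrent caller that misses the
    cache runs slow_query() at once -- the 500-users-at-once stampede.
    """
    value = cache.get(key)
    if value is not None:
        return value
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        value = cache.get(key)  # re-check: another thread may have filled it
        if value is None:
            value = slow_query(key)
            cache[key] = value
        return value
```

Note that even with single-flight, one unlucky request still eats the full 2 seconds on every expiry, which is exactly the article's point: the lock manages the symptom while the architectural problem stays hidden.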


UK bill accelerates shift to offensive cyber security

The Cyber Security and Resilience (Network and Information Systems) Bill entered Parliament in late 2025 and is expected to move through the legislative process during 2026. The government has positioned the bill as a major update to the UK's cyber framework for essential services and digital service providers. ... Poyser argued that many companies still lean heavily on defensive tools without validating how those controls perform under attack conditions. "Cybercriminals and state-backed threat actors are acting faster, more aggressively, and with far greater innovation, especially through the use of artificial intelligence, while too many businesses continue to rely on traditional defensive methods. This widening gap must be closed urgently," said Poyser. He also linked the coming UK legislative changes to a push for more proactive security validation. ... The company said this attacker-style approach changes how risk gets measured and prioritised. It said corporate security teams struggle to maintain an accurate picture of exposure through passive controls and periodic checks. "It is increasingly unrealistic for corporate security teams to maintain an accurate understanding of their true risk exposure using only traditional, passive methods," said Keith Poyser. "Threat actors do not wait for annual audits or one-off checks. Unless organisations test their systems in a way that reflects how real attackers operate, they will continue to be caught off-guard," said Poyser.


The new CDIO stack: Tech, talent and storytelling

The first layer is the one everyone ‘expects’. We built strong platforms: cloud infrastructure that can flex with the business, data platforms that bring together information from plants, systems and markets, analytics and AI capabilities that sit on top of that data, and a solid cyber posture to protect all of it. ... The second layer was not about machines at all. It was about people, about changing the talent mix so that digital is no longer “their” thing — it becomes “our” thing. We realised that if we kept thinking in terms of “IT people” and “business people”, we would always be negotiating across a wall. ... The third layer is the one that surprised even me. We noticed a pattern. Even when we had good platforms and strong talent, some initiatives would start with a bang and fizzle out. The technology worked. The pilot results were good. But momentum died. When we dug deeper, we realised the issue was not in the code. It was in the story. The operators on the shop floor, the sales teams, the plant heads and the board were all hearing slightly different stories about “digital”. ... Yes, I am responsible for technology. If the platforms are not robust, I have failed at the most basic level. Yes, I am responsible for talent. If we don’t have the right mix of skills — product, data, architecture, change — we cannot deliver. But I am also responsible for the narrative. ... For me, the real maturity of a digital organization shows when these three layers are aligned.


What Software Developers Need to Know About Secure Coding and AI Red Flags

The uptick in adoption of AI tools within the developer community aligns with growing expectations. Developers are now expected to work with greater efficiency to meet deadlines more quickly, all while delivering high-quality code. Developers might find AI assistants beneficial because they are immune to human tendencies such as fatigue and bias, which can boost efficiency. But sacrificing safety for speed is unacceptable, as AI tools bring inherent risks of compromise. ... AI tools are not safe for enterprise use unless the code output is reviewed and implemented by a security-proficient human. Thirty percent of security experts admit they don't trust the accuracy of AI-generated code. That's why security leaders must prioritize the education and upskilling of developer teams, to ensure they have the necessary skills and capabilities to mitigate AI-assisted code vulnerabilities as early as possible. This will lead to the cultivation of a "security first" team culture and safer AI use. ... In addition, agentic AI introduces new or "agentic variations" of existing threats, like memory poisoning, remote code execution (RCE) and code attacks. It can harm code via logic errors, which cause the product to "run" correctly but act incorrectly; style inconsistencies, which result in patterns that do not align with the current, required structure; and lenient permissions, which act correctly but lack the authorization context to determine if an end user is allowed to perform a particular action.
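The "lenient permissions" failure mode is worth seeing concretely, because the code genuinely works. The snippet below is an invented illustration (the invoice handlers and the in-memory `db` dict are hypothetical, not from the article): the first version runs correctly for every caller, which is precisely the problem.

```python
# Hypothetical "lenient permissions" flaw: the handler works for every
# caller, but never checks whether the caller is allowed to act.

def delete_invoice_lenient(db, invoice_id):
    # Functionally "correct" -- and exploitable: any authenticated user
    # can delete any invoice.
    db.pop(invoice_id, None)

def delete_invoice_checked(db, invoice_id, user):
    # The fix: carry authorization context and verify ownership or role.
    invoice = db.get(invoice_id)
    if invoice is None:
        return False
    if user["id"] != invoice["owner"] and "admin" not in user["roles"]:
        raise PermissionError("caller may not delete this invoice")
    del db[invoice_id]
    return True
```

This is exactly the kind of defect automated tests tend to miss, since every happy-path assertion passes; only a security-proficient reviewer asking "who is allowed to call this?" catches it.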


Building a Self-Healing Data Pipeline That Fixes Its Own Python Errors

The core concept is relatively simple. Most data pipelines are fragile because they assume the world is perfect, and when the input data changes even slightly, they fail. Instead of accepting that crash, I designed my script to catch the exception, capture the “crime scene evidence”, which is basically the traceback and the first few lines of the file, and then pass it down to an LLM. ... The primary challenge with using Large Language Models for code generation is their tendency to hallucinate. From my experience, if you ask for a simple parameter, you often receive a paragraph of conversational text in return. To stop that, I leveraged structured outputs via Pydantic and OpenAI’s API. This forces the model to complete a strict form, acting as a filter between the messy AI reasoning and our clean Python code. ... Getting the prompt right took some trial and error. And that’s because initially, I only provided the error message, which forced the model to guess blindly at the problem. I quickly realized that to correctly identify issues like delimiter mismatches, the model needed to actually “see” a sample of the raw data. Now here is the big catch. You cannot actually read the whole file. If you try to pass a 2GB CSV into the prompt, you’ll blow up your context window and apparently your wallet. ... First, remember that every time your pipeline breaks, you are making an API call.
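The catch-evidence-ask loop described above can be sketched in a few lines. This is a minimal sketch, not the author's actual code: `capture_crime_scene` and `run_with_self_healing` are hypothetical names, and `ask_llm` stands in for the real OpenAI structured-output call with a Pydantic schema. Note the `islice` cap, which is what keeps a 2 GB CSV from blowing up the context window.

```python
import traceback
from itertools import islice

def capture_crime_scene(exc, path, n_lines=5):
    """Collect the evidence described in the article: the traceback plus a
    small sample of the offending file (never the whole file)."""
    with open(path, "r", errors="replace") as f:
        sample = list(islice(f, n_lines))
    tb = traceback.format_exception(type(exc), exc, exc.__traceback__)
    return {"traceback": "".join(tb), "file_head": sample}

def run_with_self_healing(step, path, ask_llm):
    """Run a pipeline step; on failure, hand the evidence to the LLM.

    ask_llm is a stand-in for the structured-output API call; the real
    version would validate the response against a strict Pydantic model.
    """
    try:
        return step(path)
    except Exception as exc:
        evidence = capture_crime_scene(exc, path)
        return ask_llm(evidence)  # e.g. a proposed delimiter or dtype fix
```

A production version would also cap retries and log every LLM-proposed fix for review, since each failure is a paid API call.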


‘Complexity is where cyber risk tends to grow’

Last month, the Information Systems Audit and Control Association (ISACA) announced that it had been appointed to lead the global credentialing programme for the US Department of War’s (DoW) Cybersecurity Maturity Model Certification (CMMC). The CMMC, according to ISACA’s chief global strategy officer Chris Dimitriadis, is “designed to protect sensitive information across the defence industrial base and its supply chain”. ... “Transatlantic operations almost always increase complexity, and complexity is where cyber risk tends to grow,” he says. “The first major issue is supply chain exposure. Attackers rarely go after the strongest link, they look for the most vulnerable one. “In global ecosystems, that can be a smaller supplier, a service provider or a subcontractor.” The second issue, he says, is the “nature” of the data and the systems that are involved. “When defence-related information, controlled technical data, or sensitive operational systems are in play, the impact of compromise is simply much higher. That requires stronger access controls, better identity governance, and more disciplined incident response.” The third and final issue that Dimitriadis highlights is “multi-jurisdiction reality”. He explains that companies need to navigate different requirements, obligations and reporting expectations across regions, adding that if governance and security operations aren’t aligned, “you create gaps, and those gaps are exactly what threat actors exploit”.