Daily Tech Digest - February 02, 2026


Quote for the day:

"How do you want your story to end? Begin with that end in mind." -- Elizabeth McCormick



Why Architecture Rots No Matter How Good Your Engineers Are

Every architect has seen it. The system starts clean. The design makes sense. Code reviews are sharp. Engineers are solid. Yet six months later, performance has slipped. A caching layer breaks quietly. Technical debt shows up despite everyone’s best intentions. The question isn’t why this happens to bad teams. The question is why it happens to good teams. ... Rot doesn’t usually come from bad judgment. It comes from lost context. The information needed to prevent many problems exists. It’s just scattered across too many files, too many people, and too many moments in time. No single mind can hold it all. ... Human working memory holds roughly four chunks of information at once. That isn’t a vibe. It’s a constraint. And it matters more than we like to admit. When developers read code, they’re juggling variable state, control flow, call chains, edge cases, and intent. As the number of mental models increases, onboarding slows and comprehension drops. Once cognitive load pushes beyond working memory capacity, understanding doesn’t degrade linearly. It collapses. ... Standards drift because good intentions don’t scale. The system allows degradation, and the information needed to prevent it is often invisible at the moment decisions are made. Architecture decision records are a good example. ADRs capture why you chose one path over another. They preserve context. In practice, when a developer is making a change, they rarely stop to consult ADRs. 


Quantum Computing and Cybersecurity: The Way Forward for a Quantum-Safe Future

While the timeline for commercial production of a powerful quantum computer is uncertain, most industry insiders agree that it is only a matter of time. In its 2025 report, the Global Risk Institute posits a five- to ten-year timeframe for the development of Cryptographically Relevant Quantum Computers (CRQCs). A quantum-powered adversary may decrypt traffic as it flows, impersonate endpoints or even intercept authentication credentials in transit. The foundational risk begins with intercepting VPN traffic around the world and compromising all HTTPS/SSL certificates. Beyond this, large, distributed Internet of Things (IoT) systems that rely on lightweight encryption would be compromised. Operational Technology (OT) and Industrial Control Systems (ICS) that cannot be upgraded swiftly are likely to be compromised too, jeopardizing vital sectors like healthcare, energy and transportation. Harvest-now, decrypt-later (HNDL) attacks pose a significant risk to long-lasting, sensitive data in finance, healthcare, government and critical infrastructure. These sectors are especially vulnerable due to their extended confidentiality requirements, many of which extend beyond the expected arrival of quantum computers. Enterprises ignoring this threat now risk future breaches and regulatory or reputational damage when adversaries deploy quantum decryption. The downstream effects of such breaches could be catastrophic not just to the organization, but to entire ecosystems.


Chewing through data access is key to AI adoption

Augmenting the generic nature of LLMs with contextual data is a valuable solution to the bottleneck problem. But it presents another problem in the form of data access. Contextual data might exist, but it is typically scattered across multiple systems, held in multiple formats and generally stored heterogeneously. All of this makes data access difficult. Data silos, a perennial problem for analytics, have now become a critical roadblock to AI adoption and value realisation. Another problem comes from compliance requirements. Many industries, organisations, and jurisdictions regulate how data is accessed and moved. This is particularly true in industries like financial services, healthcare, insurance, or government, but it is true to a greater or lesser extent in all industries. ... Evans suggests that data federation can provide access to context to feed and augment the generic training data of models. The result is likely the best approach that organisations have when pursuing their AI goals while contending with data access bottlenecks. “Moving data by default is really something of a brute force approach. It was needed during the heyday of the data warehouse, but technologies like Apache Iceberg and Trino make data lakehouses built around data federation more accessible than ever,” he said. “In the past, data federation was slower than data centralisation. But in recent years, advances in Massively Parallel Processing (MPP) mean that technologies built to take advantage of federation, like Trino, are finally able to make the data federation dream a reality.”
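
To make the federation idea concrete, here is a minimal sketch using the Trino Python client to join a lakehouse table with an operational database in a single query, with no prior data movement. The host, catalog, and table names are illustrative assumptions, not details from the article.

```python
# Minimal sketch of a federated query across two catalogs with the trino
# Python client. Host, catalogs, and table names are illustrative only.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # assumed coordinator host
    port=8080,
    user="analyst",
    catalog="iceberg",              # default catalog for this session
    schema="sales",
)

cur = conn.cursor()
# One query joins an Iceberg lakehouse table with a table in an operational
# Postgres catalog -- no data is copied or centralised beforehand.
cur.execute("""
    SELECT o.customer_id, c.segment, SUM(o.amount) AS total_spend
    FROM iceberg.sales.orders AS o
    JOIN postgres.crm.customers AS c
      ON o.customer_id = c.id
    GROUP BY o.customer_id, c.segment
    ORDER BY total_spend DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```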


CSO Barry Hensley on staying a step ahead of the cyber threat landscape

Times have changed as more organizations have either experienced a significant incident firsthand or have seen enough third- and fourth-party breach notifications to take up arms. All these events drive awareness and give credibility to the threats and associated risks. However, there is still a challenge in establishing an appropriate risk tolerance that drives the right investments in effective security controls, especially for budget-constrained organizations. ... We do see the evolution of third- and fourth-party risk management, especially in how we validate our security partner’s maturity and resilience. That evolution is partly driven by third and fourth parties swapping out their underlying technologies to reduce cost or increase efficiency, often without the customer understanding the risks that might expose. So, for the security functions we’re going to provide internally, we’ll focus on the basics and do them well. With the controls/functions we outsource, we must reimagine not only how we verify our partner environments but also how we actively participate in improving their security programs as well as ours. ... Are we assessing the most relevant risks, rather than the risks of yesterday? And, because we can get so wrapped up in the playbook that we ran in our last organization, how do we ensure the current playbook is relevant to the organization at hand? An example would be how much time we focus on phishing training, which burdens our teammates with being the first line of defense, where we could instead leverage anomaly-based detection to automate the detection and response actions.


Dedicated Servers vs. Cloud: Which Is More Secure?

Because the resources under a dedicated server model are yours and yours alone, you won't have to worry about "noisy neighbor" interference or side-channel attacks originating from other tenants, which can be a real risk in cloud server management. With this physical exclusivity, dedicated servers are often attractive for high-risk, compliance-heavy workloads—for example, healthcare, financial services, or government systems. This isolation doesn't just provide a higher standard of performance, but also simplifies your servers' threat surface, especially when possible mechanisms for cyberattacks are removed. ... Cloud servers, by comparison, always operate under a multi-tenant architecture. This means that virtual servers on shared hardware are separated by a hypervisor layer, which creates and manages multiple isolated operating systems in a single server. ... With dedicated servers, you'll have complete control over your operating systems, firewalls, access policies, and encryption. You'll also have the flexibility to set the patch schedule, firewall rules, monitoring tools, and segmentation strategies. ... Cloud servers, on the other hand, always rely on a shared responsibility model. Your vendor will secure the infrastructure, networking, and some parts of the stack. However, you'll still have to manage everything from the operating system (OS) upwards yourself.


How threat actors are really using AI

Are we getting to a point where hackers are going to use AI to slowly but surely circumvent every defense we throw at it? Is this more a case of actors simply using capabilities, as they have with past technical advances? Or is this entire concern overblown, meaning the money in our wallets is perfectly safe ... if only we could remember where we put the darned thing? ... While these early examples stemmed from the spread of generative AI, the technology has been sprinkled across attacks as early as 2018. TaskRabbit, the commoditized services platform owned by Ikea, was the subject of a breach where AI was used to control a massive botnet that performed a distributed denial-of-service (DDoS) attack on its servers. The result? Names, passwords, and payment details of both clients and ‘taskers’ were stolen in an attack that employed machine learning to make it more efficient and ultimately more effective than a simple automated script. ... The picture isn't uniformly alarming, however, with Meyers suggesting less sophisticated actors are actually using AI “to their detriment.” He pointed to a group that created malware called Funk Walker using an adversarial LLM called WormGPT. “There was broken cryptography in that, and the adversary left their name in it,” he explained. “That's kind of on the lower end of the sophistication spectrum.” The reality, then, is a split between highly capable state actors leveraging AI for genuine operational advantage and less skilled criminals whose efforts to get a leg up via AI assistance have the potential to backfire through either technical failures or operational security mistakes that make them that bit easier to track.


StrongestLayer: Top ‘Trusted’ Platforms are Key Attack Surfaces

Rather than relying on malware or obvious phishing techniques, today’s attackers exploit trust, authentication gaps, and operational dependency. The report provides rare visibility into the techniques that define modern email threats by examining only attacks that incumbent security controls missed. “Email security has reached an inflection point,” said Alan LeFort, CEO and co-founder, StrongestLayer. “The controls enterprises depend on were designed to detect patterns and known bad signals. But attackers are now exploiting trusted brands and legitimate infrastructure, areas that those systems were never built to reason about.” ... The report argues that attackers are no longer trying to look legitimate – they are hiding behind platforms that already are. DocuSign alone accounted for more than one-fifth of all attacks analyzed, particularly targeting legal, financial and healthcare organizations where document-signing workflows are deeply embedded in daily operations. Google Calendar attacks represent an especially concerning trend. Because calendar invitations are delivered via calendar APIs rather than email, these attacks bypass secure email gateways entirely, creating a blind spot for most security teams. ... StrongestLayer’s analysis shows AI-assisted phishing has fundamentally changed the economics of detection. Traditional phishing campaigns reuse templates with high similarity, allowing pattern-based systems to work.


Enterprises are measuring the wrong part of RAG

Across enterprise deployments, the recurring pattern is that freshness failures rarely come from embedding quality; they emerge when source systems change continuously while indexing and embedding pipelines update asynchronously, leaving retrieval consumers unknowingly operating on stale context.  ... In retrieval-centric architectures, governance must operate at semantic boundaries rather than only at storage or API layers. This requires policy enforcement tied to queries, embeddings and downstream consumers — not just datasets. ... In production environments, evaluation tends to break once retrieval becomes autonomous rather than human-triggered. Teams continue to score answer quality on sampled prompts, but lack visibility into what was retrieved, what was missed or whether stale or unauthorized context influenced decisions. As retrieval pathways evolve dynamically in production, silent drift accumulates upstream, and by the time issues surface, failures are often misattributed to model behavior rather than the retrieval system itself. Evaluation that ignores retrieval behavior leaves organizations blind to the true causes of system failure. ... Retrieval is no longer a supporting feature of enterprise AI systems. It is infrastructure. Freshness, governance and evaluation are not optional optimizations; they are prerequisites for deploying AI systems that operate reliably in real-world environments. 
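
As a rough illustration of the freshness problem described above, the sketch below carries an indexed-at timestamp alongside each retrieved chunk and flags context whose index entry lags the source system. The field names and the six-hour threshold are assumptions for illustration only.

```python
# Sketch: attach freshness metadata to retrieved chunks and flag stale context
# before it reaches the model. Field names and thresholds are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RetrievedChunk:
    doc_id: str
    text: str
    indexed_at: datetime         # when the embedding/index entry was built
    source_updated_at: datetime  # last change seen in the source system

def freshness_report(chunks, max_lag=timedelta(hours=6)):
    """Return chunks whose index entry lags the source by more than max_lag."""
    stale = []
    for c in chunks:
        lag = c.source_updated_at - c.indexed_at
        if lag > max_lag:
            stale.append((c.doc_id, lag))
    return stale

now = datetime.now(timezone.utc)
chunks = [
    RetrievedChunk("policy-42", "...", now - timedelta(days=2), now - timedelta(hours=1)),
    RetrievedChunk("faq-7", "...", now - timedelta(hours=1), now - timedelta(hours=3)),
]
for doc_id, lag in freshness_report(chunks):
    print(f"stale context: {doc_id} indexed {lag} behind its source")
```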


Data privacy urged as strategic board issue in AI era

"Data privacy is no longer a cybersecurity business control or a risk mitigation compliance checkbox. It reflects how deeply interconnected the modern world has become between businesses, governments, travellers, and citizens. Every interaction, financial transaction, remote authentication, and geolocation ping generates personal data. That data moves across borders, clouds, applications, partners, and marketing algorithms at machine speed and far beyond what most individuals realise in terms of data broker destinations. As a result, personal data privacy is harder to achieve than at any point in history, not because of negligence, but because of scale, dependency, design, and business models design to monetise the information itself," said Haber ... Bluntly, we have an unusual challenge. Data privacy strategies have not evolved at the same pace as data creation and monetised analytics. Organisations still focus on cyber security defences while data flows freely through APIs, SaaS platforms, AI models, and third-party ecosystems. True personal data privacy requires visibility into all of this data with control being assigned to the individual user and not the business or government entity based on regulations. Without the user knowing who and what is accessing data, why it is being accessed, and how long the data will be archived, data privacy will remain an abstract concept with individuals only loosely being able to opt of data storage and profiling. 


Why workers are losing confidence in AI - and what businesses can do about it

While platforms like Claude Code are saving software developers at REACHUM significant time, not everything is as effective. Tinfow sees a disparity between how some AI tools are marketed and what they can actually do. Even working at a company built around AI, Tinfow's team has run into issues with tasks like text generation in images, where certain AI tools just didn't deliver. "There's so much noise, and I don't want our team to get distracted by that, so I'm the one who will take a look at something, decide whether it is reasonable or garbage, and then give it to the team to work with," Tinfow said. ... "If you're now starting to look at how you can use AI for the same task, you all of a sudden have to put a lot more mental effort into trying to figure out how to do this in a completely different way," Ginn said, "That loss of the routine, the confidence of how I'm doing it, that can also just go back to the human nature to avoid change." Additionally, Stefan discussed the role adequate training plays in maintaining confidence. ... Back at the digital marketing agency Candour, Farrar said the company has a variety of tactics to help balance the quest for innovation with the day-to-day challenges of a technology that still has a way to go. Candour builds in extra time to account for the fact that everyone is learning, frames experiments as "test and learn" to mitigate stress, and has appointed a "champion" to stay abreast of developments in AI. 

Daily Tech Digest - February 01, 2026


Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis



Forget the chief AI officer - why your business needs this 'magician'

There's a lot of debate about who should be responsible for ensuring the business makes the most out of generative AI. Some experts suggest the CIO should oversee this crucial role, while others believe the responsibility should lie with a chief data officer. Beyond these existing roles, other experts champion the chief AI officer (CAIO), a newcomer to the C-suite who oversees key considerations, including governance, security, and identification of potential use cases. ... Many people across other business units are confused about the different roles of technology and data teams. When Panayi joined Howden in August last year, he decided to head off that issue at the pass. ... "I think companies are missing a trick if they've not got someone ensuring that people are using things like Copilot and so on. These tools are new enough that we do need people to help with adoption," he said. "And at the moment, I don't think we can assume the narrative is correct that people using AI at home to help them book holidays is the same as how it can help them be more productive at work." ... "It's like he's a magician, showing people who have to deal with thousands of pages of stuff, how to get the answers they need quickly," he said, outlining how the director of productivity highlights the benefits of gen AI to the firm's brokers. "These people are not at the computer all day. They are out in the market, talking and making decisions."


Just Relying on Data Doesn’t Make You Data-driven — Advantage Solutions CDO

O’Hazo then draws a line between measurement and transformation. Success in data programs, she explains, is not only about performance indicators; it is also about whether the organization is starting to internalize the mindset behind them. “Success for me in this data and AI space is all about, ‘Are my stakeholders starting to actually speak some of my language?’” When stakeholders begin to “believe” and “trust,” she says, the shift becomes visible not only in outcomes but also in demand. The moment data starts becoming embedded in the business is the moment the need for the CDO office outgrows its capacity. ... She ties true data-driven maturity to operational efficiency and responsiveness: Accurate, timely information;  Faster decision-making cycles; Quicker reactions to market conditions; and Lower effort to extract value from data. In her view, strong data foundations should reduce friction instead of creating new burdens. Speed, however, is not just about moving fast, it’s about winning the race to insight. “Once you have that foundation built, to get to the answer quickly, you have to be the first one there. If you’re not the first one there, you’ve lost.” ... As the conversation returns to the governance part of transformation, O’Hazo underscores that governance becomes sustainable only when people are comfortable using data and confident enough to surface risks early. For her, the true differentiator is not policy; it is talent and environment. 


The Three Mindsets That Shape Your Life, Work And Fulfillment

Mission Mindset is goal-oriented but not outcome-obsessed. It begins with clarity about a specific, measurable and time-bound goal. Decades of research on goal-setting, including the work of Stanford psychologist Carol Dweck, shows that how we interpret challenges influences how we engage with them—and that mindset creates very different psychological worlds for people facing the same obstacles. Here's where most people go wrong. ... If mission provides direction, identity provides stability. Identity Mindset is rooted in a healthy, coherent self-image that does not rise and fall with every outcome. It answers a deeper question: Who am I when the going gets tough or disappointment abounds? Many people identify with their performance. Success feels like validation, and failure feels personal. That volatility makes progress emotionally expensive because every result threatens their self-worth. In contrast, PsychCentral broadly defines resilience as adapting well to adversity; individuals who are stable in how they see themselves are better able to regulate emotions, process setbacks and continue forward without losing themselves in the struggle. ... Agency Mindset is where actual momentum lives. It is the lived belief that you are the author of your life, not a character reacting to circumstances. Agency does not deny reality or minimize hardship. It refuses to play the victim, make excuses or place blame. 


Why We Can’t Let AI Take the Wheel of Cyber Defense

When we talk about fully autonomous systems, we are talking about a loop: the AI takes in data, makes a decision, generates an output, and then immediately consumes that output to make the next decision. The entire chain relies heavily on the quality and integrity of that initial data. The problem is that very few organizations can guarantee their data is perfect from start to finish. Supply chains are messy and chaotic. We lose track of where data originated. Models drift away from accuracy over time. If you take human oversight out of that loop, you aren’t building a better system; you are creating a single point of systemic failure and disguising it as sophistication. ... There is no magical self-healing feature that puts everything back together elegantly. When a breach happens, it is people who rebuild. Engineers are the ones trying to deal with the damage and restoring services. Incident commanders are the ones making the tough calls based on imperfect information. AI can and absolutely should support those teams—it’s great at surfacing weak signals, prioritizing the flood of alerts, or suggesting possible actions. But the idea that AI will independently put the pieces back together after a major attack is a fantasy. ... So, how do we actually do this? First, make “human-in-the-loop” the default setting for any AI that can act on your systems or data. Automated containment can save your skin in the first few seconds of an attack, but every autonomous process needs guardrails. 
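
A minimal sketch of what "human-in-the-loop as the default setting" can look like for containment: low-impact actions run immediately, while anything with a larger blast radius waits in a queue for an incident commander. The action names and the split between them are hypothetical.

```python
# Sketch: a human-in-the-loop gate for automated containment.
# Action names, the blast-radius split, and the approval queue are
# illustrative assumptions, not a prescribed design.
AUTO_APPROVED = {"block_ip", "quarantine_file"}                        # low blast radius
REQUIRES_HUMAN = {"isolate_host", "revoke_all_tokens", "disable_account"}

approval_queue = []

def execute(action: str, target: str) -> str:
    print(f"executing {action} on {target}")
    return "done"

def request_containment(action: str, target: str) -> str:
    """Run low-impact actions immediately; queue the rest for a human decision."""
    if action in AUTO_APPROVED:
        return execute(action, target)
    if action in REQUIRES_HUMAN:
        approval_queue.append((action, target))
        return "pending_human_approval"
    raise ValueError(f"unknown action: {action}")

print(request_containment("block_ip", "203.0.113.7"))          # runs immediately
print(request_containment("isolate_host", "build-server-02"))  # waits for a person
```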


Connecting the dots on the ‘attachment economy’

In the attention economy paradigm, human attention is a currency with monetary value that people “spend.” The more a company like Meta can get people to “spend” their attention on Instagram or Facebook, the more successful that company will be. ... Tristan Harris at the Center for Humane Technology coined the phrase “attachment economy,” which he criticizes as the “next evolution” of the extractive-tech model; that’s where companies use advanced technologies to commodify the human capacity to form attached bonds with other people and pets. In August, the idea began to gain traction in business and academic circles with a London School of Economics and Political Science blog post entitled, “Humans emotionally dependent on AI? Welcome to the attachment economy” by Dr. Aurélie Jean and Dr. Mark Esposito. ... The rise of attachment-forming tech is similar to the rise in subscriptions. While posting an article or YouTube video may get attention, getting people to subscribe to a channel or newsletter is better. It’s “sticky,” assuring not only attention now, but attention in the future as well. Likewise, the attachment economy is the “sticky” version of the attention economy. Unlike content subscription models, the attachment idea causes real harm. It threatens genuine human connection by providing an easier alternative, fostering addictive emotional dependencies on AI, and exploiting the vulnerabilities of people with mental health issues. 


From monitoring blind spots to autonomous action: Rethinking observability in an Agentic AI world

AI-supported observability tools help teams not only understand system performance but also uncover the reasons behind issues. By linking signals across interconnected parts, these tools provide actionable insights and usually resolve problems automatically, reducing Mean Time to Resolution (MTTR) and cutting the risk of outages. ... AI-driven observability can trace service dependencies from start to finish, connect signals across third-party platforms, and spot early signs of unusual behavior. By examining traffic patterns, error rates, and configuration changes in real-time, observability helps teams identify emerging issues sooner, understand the potential impact quickly, and respond before full disruptions occur. While observability cannot prevent every third-party outage, it can greatly reduce uncertainty and response time, allowing solutions to be introduced sooner and helping rebuild customer trust. ... When AI-driven applications fail, teams often lack clear visibility into what went wrong, putting significant AI investments at risk. Slow or incorrect responses turn troubleshooting into guesswork, as teams struggle to understand agent interactions, find delays, or identify the responsible agent or tool. This lack of clarity slows down root-cause analysis, extends downtime, diverts engineering efforts from innovation, and can ultimately lead to lost revenue and customer trust. Observability addresses this challenge by providing complete visibility into AI application behavior. 


Architecture Testing in the Age of Agentic AI: Why It Matters Now More Than Ever

Historically, architecture testing functioned as a safeguard against emergent complexity in distributed systems. Whenever an organization deployed a network of interdependent services, message buses, caches, and APIs, the potential for unforeseen interactions grew. Even before AI entered the picture, architects confronted the reality that large systems behave in ways no single engineer fully anticipates. ... Agentic systems challenge traditional testing practices in several fundamental ways. First, these systems are inherently non‑deterministic. A test that succeeds at 9:00 might fail just minutes later simply because the agent followed a different reasoning path. This creates a widening ‘verification gap,’ where deterministic enterprise systems and probabilistic, adaptive agents operate according to fundamentally different reliability expectations. Second, these agents operate within environments that are constantly shifting—APIs, user interfaces, databases, and document stores all evolve independently of the agent itself. Because agents are expected to detect these changes and adapt their behavior, long‑held architectural assumptions about stability and interface contracts become far more fragile. ... Third, agentic AI introduces a new level of emergent behavior. Operating through multi‑step reasoning loops and tool interactions, agents can develop strategies or intermediate actions that were never explicitly designed or anticipated. While emergence has always existed in complex distributed systems, with agents it becomes the rule rather than the exception.
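
One way teams address the non-determinism described above is to replace single pass/fail assertions with a pass-rate threshold over repeated runs. The sketch below assumes an agent stub, a run count of 50, and a 95% threshold, all of which are arbitrary choices for illustration.

```python
# Sketch: pass-rate testing for a non-deterministic agent. The agent stub,
# run count, and 95% threshold are illustrative assumptions.
import random

def run_agent(task: str) -> bool:
    """Stand-in for invoking the agent and checking its output against an oracle."""
    return random.random() > 0.03   # pretend it succeeds roughly 97% of the time

def assert_pass_rate(task: str, runs: int = 50, threshold: float = 0.95) -> None:
    passes = sum(run_agent(task) for _ in range(runs))
    rate = passes / runs
    print(f"{task}: {passes}/{runs} passed ({rate:.0%})")
    assert rate >= threshold, f"pass rate {rate:.0%} below {threshold:.0%}"

assert_pass_rate("summarise the incident ticket and open a follow-up task")
```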


Data Privacy Day warns AI, cloud outpacing governance

Kornfeld commented, "Data Privacy Day is a reminder that protecting sensitive information requires consistent discipline, not just policies. This discipline starts with infrastructure choices. As organizations continue to evaluate cloud-first strategies, many are also reassessing where their most critical data should live. For workloads that demand predictable performance, strong governance and clear ownership, on-site infrastructure continues to play an essential role in a sound privacy strategy." ... Russel said, "Data Privacy Day often prompts the usual reminders: update policies, refresh consent language, and train staff on security and resilience strategies. These are important steps, but increasingly they are simply the baseline. In 2026, the board-level question leaders should also be asking is: can we demonstrate control of personal data and sustain trust through disruption, whether it stems from a compromise, misconfiguration, insider error, or a supplier incident?" ... Russell commented that identity controls and response processes sit at the core of this shift as attackers continue to exploit account compromise to reach sensitive information in cloud environments. "Identity is a privacy fault line. In cloud environments, compromised identities are often the fastest route to sensitive data. Resilience means detecting abnormal access early, limiting blast radius, and recovering confidently when identity controls are bypassed."


Security teams are carrying more tools with less confidence

Security leaders express mixed views about the performance of their SIEM platforms. Most say their SIEM contributes to faster detection and response, yet only half describe that contribution as strong. Confidence in long-term scalability follows a similar pattern, with many teams expressing partial confidence as data volumes and monitoring demands continue to grow. Satisfaction with log management and security analytics tools mirrors this split. Teams that express higher satisfaction also report stronger alignment between their tooling and application environments. ... Threat detection represents the most common use of AI and machine learning within security operations. Fewer teams apply AI to incident triage, automated response, or anomaly detection. Despite this limited scope, security leaders consistently associate AI with reduced alert fatigue and improved signal quality. Many also prioritize AI capabilities when evaluating SIEM platforms, alongside real-time analytics. ... Security leaders frequently describe operational cost as a top pain point. Multiple point solutions contribute to overlapping capabilities, siloed data, and increased alert noise. Data that remains isolated across tools complicates threat analysis and slows investigations, particularly when teams attempt to reconstruct activity across cloud, identity, and application layers.


Integrating Financial Counterparty Risk into Your Business Continuity Plan

Vendor defaults and liquidity issues can disrupt operations in ways that ripple across departments and delay recovery. If a key financial partner fails, access to working capital, credit or critical services can disappear overnight. For example, if your leasing company collapses, essential equipment could be repossessed, or service agreements could lapse. ... Financial counterparties show up across many areas of your business. You depend on banks for credit facilities and insurers for risk transfer. Payment processors, brokers and pension custodians handle everything from daily cash flow to long-term employee benefits. Clearinghouses are also vital in structured markets, such as stocks and futures. They sit between buyers and sellers to ensure both sides honor their contracts, which reduces your exposure to failure during high-volume or high-volatility periods. ... Not all financial counterparties pose the same level of risk, but the warning signs often follow familiar patterns. Monitoring a few high-impact indicators can help you identify problems and take action before disruptions escalate. ... Industry standards are raising the bar on how you manage financial counterparties. Frameworks like ISO 22301 stress the need to include financial dependencies in your continuity and risk programs. These standards define how regulators and stakeholders expect you to identify, assess and respond to financial exposure. If you treat financial partners like background support, you risk missing vulnerabilities that could surface under pressure.

Daily Tech Digest - January 31, 2026


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart



Security work keeps expanding, even with AI in the mix

Teams with established policies report greater confidence that AI outputs pass through review steps or guardrails before influencing decisions. Governance work spans data handling, access management, auditability, and lifecycle oversight for AI models and integrations. Security and compliance considerations also affect how quickly teams operationalize automation. Concerns around data protection, regulatory obligations, tool integration, and staff readiness continue to influence adoption patterns. Budget limits and legacy systems remain common constraints, reinforcing the need for governance structures that support day-to-day execution. ... Teams managing large tool inventories report higher strain, particularly when workflows require frequent context switching. Leaders increasingly view automation and tooling improvements as key levers for retaining staff. Practitioners consistently place work-life balance and meaningful impact at the center of retention decisions. ... Many teams express interest in workflow platforms that connect automation, AI, and human review within a single operational layer. These approaches focus on moving work across systems without constant manual handoffs. Respondents associate connected workflows with higher productivity, faster response times, improved data accuracy, and stronger compliance tracking. Interoperability also plays a growing role. Security teams increasingly consider standardized frameworks and APIs that allow AI systems to interact with tools under controlled conditions. 


Human risk management: CISOs’ solution to the security awareness training paradox

Despite regulatory compliance requirements and significant investment, SAT seems to deliver marginal benefits. Clearly, SAT is broken — even with peripheral improvements like synthetic phishing tools. So, what’s needed? Over the next few years, organizations should shift from static/sporadic security training to an emerging discipline called human risk management (HRM). ... HRM is defined as a cybersecurity strategy that identifies, measures, and reduces the risks caused by human behavior. Simply stated, security awareness training is about what employees know; HRM is about what they do. To be more specific, HRM integrates into email security tools, web gateways, and identity and access management (IAM) systems to identify human vulnerabilities. Furthermore, it measures risk using behavioral data and pinpoints an organization’s riskiest users. HRM then seeks to mitigate these risks by applying targeted interventions such as micro-learning, simulations, or automated security controls. Finally, HRM monitors behavioral changes so organizations can track progress. ... From an ROI perspective, HRM offers a much more granular approach to cyber-risk mitigation than standard SAT. CISOs and HR managers can report on improved cyber hygiene and behavior, rather than how many employees have been trained or how many passed generic tests. Repeat offenders are not only identified but also provided with personalized training tools and attention. Ultimately, HRM makes it possible to show a direct correlation between training and a reduction in actual security incidents. ...
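
A rough sketch of the measurement side of HRM: behavioral signals pulled from email, web, and IAM tooling are combined into a per-user risk score so the riskiest users can be offered targeted interventions first. The signal names and weights are invented for illustration.

```python
# Sketch: combine behavioural signals into a per-user risk score so the
# riskiest users get targeted interventions. Signals and weights are invented.
WEIGHTS = {
    "phish_clicks": 5.0,          # from the email security gateway
    "blocked_web_requests": 1.0,  # from the secure web gateway
    "mfa_fatigue_prompts": 3.0,   # from the IAM / identity provider
}

users = {
    "alice": {"phish_clicks": 0, "blocked_web_requests": 2, "mfa_fatigue_prompts": 0},
    "bob":   {"phish_clicks": 3, "blocked_web_requests": 9, "mfa_fatigue_prompts": 4},
}

def risk_score(signals: dict) -> float:
    return sum(WEIGHTS[k] * v for k, v in signals.items())

ranked = sorted(users, key=lambda u: risk_score(users[u]), reverse=True)
for u in ranked:
    print(f"{u}: {risk_score(users[u]):.1f}")  # the highest scorer surfaces first for micro-learning
```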


The Human Exploit: Why Wizer Is the Secret Weapon in the War for Your Digital Soul

We are currently witnessing a systemic failure in how we prepare people for a digital world. From the moment a child gets their first school-issued tablet to the day a retiree checks their pension balance, every individual is a target. This isn’t just a corporate problem; it’s a societal one. That is why I’ve been following the rise of Wizer, a firm that has cracked the code on making security training not just tolerable, but actually effective. ... It is no coincidence that the financial industry has become Wizer’s most aggressive adopter. In banking, trust is the only product you’re actually selling. If a customer’s account is drained because an employee fell for a “vishing” attack—where a hacker samples an IT person’s voice from a voicemail to impersonate them—the damage to the brand is catastrophic. Financial institutions are currently the biggest fans of the platform because they operate under a microscope of regulation and extreme risk. They realized early on that a 45-minute annual compliance video is a waste of time. Wizer’s approach is different; it feels more like an app—specifically Duolingo—than a corporate lecture. ... One of the most profound insights Gabriel Friedlander brings to the table is the necessity of the “Security Awareness Manager” (SAM). Historically, security training was a secondary task for a stressed-out IT admin who would rather be configuring a server. That is a recipe for failure. To build a true culture of security, you need a dedicated facilitator.


Chinese APTs Hacking Asian Orgs With High-End Malware

A pile of evidence suggests that this campaign was carried out by a Chinese APT, but exactly which is unclear. Chinese threat actors are notorious for sharing tools, techniques, and infrastructure. Trend Micro found that this one — which it currently tracks as Shadow-Void-044 — used a C2 domain previously used by UNC3569. A Cobalt Strike sample on one of its servers was signed with a stolen certificate also spotted in a Bronze University campaign. And they linked one of its backdoors to a backdoor developed by a group called "TheWizards," not to be confused with the equally maligned basketball team. A second, separate threat actor has also been using PeckBirdy since at least July 2024. With low confidence, Trend Micro's report linked the group it labeled Shadow-Earth-045 to the one it tracks as Earth Baxia. This campaign was more diverse in its methods, and its targeting, involving both Asian private organizations and government entities. Chinese APTs habitually perform cyberespionage against government agencies in the APAC region and beyond. Trend Micro tells Dark Reading, "These two campaigns remind us that the boundary between cybercrime and cyberespionage is increasingly blurred. One tool used in different [kinds of] attacks is [becoming] more and more popular."



AI agent evaluations: The hidden cost of deployment

Agent evals can be complicated because they test for several possible metrics, including agent reasoning, execution, data leakage, response tone, privacy, and even moral alignment, according to AI experts. ... Most IT leaders budget for obvious costs — including compute time, API calls, and engineering hours — but miss the cost of human judgment in defining what Ferguson calls the “ground truth.” “When evaluating whether an agent properly handled a customer query or drafted an appropriate response, you need domain experts to manually grade outputs and achieve consensus on what ‘correct’ looks like,” he adds. “This human calibration layer is expensive and often overlooked.” ... The sticker shock of agent evals rarely comes from the compute costs of the agent itself, but from the “non-deterministic multiplier” of testing, adds Chengyu “Cay” Zhang, founding software engineer at voice AI vendor Redcar.ai. He compares training agents to training new employees, with both having moods. “You can’t just test a prompt once; you have to test it 50 times across different scenarios to see if the agent holds up or if it hallucinates,” he says. “Every time you tweak a prompt or swap a model, you aren’t just running one test; you’re rerunning thousands of simulations.” ... If an organization wants to save money, the better alternative is to narrow the agent’s scope, instead of cutting back on testing, Zhang adds. “If you skip the expensive steps — like human review or red-teaming — you’re relying entirely on probability,” he says.
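
Zhang's "non-deterministic multiplier" is easy to see with back-of-envelope arithmetic: scenarios times repetitions times prompt revisions quickly dominates the cost of any single run. Every number in the sketch below is a hypothetical assumption, not a benchmark.

```python
# Back-of-envelope sketch of the "non-deterministic multiplier" in eval cost.
# Every number here is a hypothetical assumption, not a benchmark.
scenarios = 40          # distinct situations the agent must handle
repeats = 50            # runs per scenario to expose flaky behaviour
prompt_revisions = 20   # times the prompt or model changes during a quarter
cost_per_run = 0.02     # USD of model calls in one end-to-end run

runs = scenarios * repeats * prompt_revisions
print(f"total eval runs: {runs:,}")                       # 40,000 reruns, not one test
print(f"model cost alone: ${runs * cost_per_run:,.0f}")   # before any human grading
```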


Social Engineering Hackers Target Okta Single Sign On

What makes these attacks unusual is how criminals engage in real-time conversations as part of their trickery, using the latest generation of highly automated phishing toolkits, which enable them to redirect users to real-looking log-in screens as part of a highly orchestrated attack. "This isn't a standard automated spray-and-pray attack; it is a human-led, high-interaction voice phishing - 'vishing' - operation designed to bypass even hardened multifactor authentication setups," said threat intelligence firm Silent Push. The "live phishing panel" tools being used enable "a human attacker to sit in the middle of a login session, intercepting credentials and MFA tokens in real time to gain immediate, persistent access to corporate dashboards," it said. Callers appear to be using scripts designed to walk victims through an attacker-designated list of desired actions. ... At least so far, the campaign appears to center only on Okta-using organizations. ShinyHunters and similar groups have previously targeted a variety of SSO providers, meaning hackers' focus may well expand, Pilling said. The single best defense against live phishing attacks that don't exploit any flaws or vulnerabilities in vendors' software is strong MFA. "We strongly recommend moving toward phishing-resistant MFA, such as FIDO2 security keys or passkeys where possible, as these protections are resistant to social engineering attacks in ways that push-based or SMS authentication are not," Mandiant's Carmakal said.


AI agents can talk to each other — they just can't think together yet

Current protocols handle the mechanics of agent communication — MCP, A2A, and Outshift's AGNTCY, which it donated to the Linux Foundation, let agents discover tools and exchange messages. But these operate at what Pandey calls the "connectivity and identification layer." They handle syntax, not semantics. The missing piece is shared context and intent. An agent completing a task knows what it's doing and why, but that reasoning isn't transmitted when it hands off to another agent. Each agent interprets goals independently, which means coordination requires constant clarification and learned insights stay siloed. For agents to move from communication to collaboration, they need to share three things, according to Outshift: pattern recognition across datasets, causal relationships between actions, and explicit goal states. "Without shared intent and shared context, AI agents remain semantically isolated. They are capable individually, but goals get interpreted differently; coordination burns cycles, and nothing compounds. One agent learns something valuable, but the rest of the multi-agent-human organization still starts from scratch," Outshift said in a paper. Outshift said the industry needs "open, interoperable, enterprise-grade agentic systems that semantically collaborate" and proposes a new architecture it calls the "Internet of Cognition," where multi-agent environments work within a shared system.
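
As a thought experiment, the sketch below shows what a handoff payload carrying shared intent and context might contain beyond a bare task string. The field names are illustrative only and are not part of MCP, A2A, or AGNTCY.

```python
# Sketch: a handoff payload that carries shared intent and context, not just a
# task string. Field names are illustrative and not part of MCP, A2A, or AGNTCY.
from dataclasses import dataclass, field

@dataclass
class Handoff:
    task: str                                                  # what the next agent should do
    goal_state: str                                            # explicit definition of "done"
    causal_trace: list[str] = field(default_factory=list)      # why we got here
    learned_patterns: list[str] = field(default_factory=list)  # reusable insights

handoff = Handoff(
    task="draft remediation plan for ticket INC-1042",  # hypothetical ticket ID
    goal_state="plan approved by on-call engineer and attached to the ticket",
    causal_trace=[
        "alert fired on checkout latency",
        "traced regression to last night's config change",
    ],
    learned_patterns=["config changes to the checkout service correlate with latency alerts"],
)
print(handoff.goal_state)  # the receiving agent no longer re-interprets the goal
```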


Building Software Organisations Where People Can Thrive

Trust builds over time through small interactions. When people know what to expect and how to interact with each other in tough moments, trust is formed, Card argued. Once trust is embedded, teams are more likely to take risks by putting themselves out there to be wrong and fail fast, and that is where the magic happens. You need to actively address bias and microaggressions. If left unchallenged, they quietly erode trust and belonging. Being proactive, fair, and consistent in addressing these behaviours signals your values clearly to the wider organisation, Card said. At the heart of it all is the belief that people-first leadership is performance leadership, Card said. When we take the time to build inclusive, resilient cultures, success follows, not just for the business, but for everyone within it, he concluded. ... psychological safety is the next level up from a trusting environment. Both are the foundations of any healthy, high-performing culture. Without them, people hold back; they’re less likely to share ideas, admit mistakes, or challenge the status quo. And that means your team won’t grow, innovate, or build strong relationships. If you want to build a culture that lasts, where people thrive, not just survive, then building trust and safety isn’t optional. It has to be intentional. And once it’s in place, it unlocks everything else: collaboration, resilience, accountability, and growth.


Stop Delivering Change, Start Designing a Business That Can Actually Grow

Legacy and emerging technologies sit side by side, often competing for attention and investment. Manual and systemised processes overlap in ways that only make sense to the people living inside them. Long-standing roles carry deep, tacit knowledge, while new-in-career roles arrive with different expectations, skills, and assumptions about how work should flow. Each layer is changing, but rarely in a deliberate, joined-up way. When leaders do not have a shared, design-level understanding of how these layers interact, decisions are made in isolation. ... Programme milestones become a proxy for progress. Technology capability becomes a proxy for readiness. Productivity targets replace understanding. Designing the next-generation business model requires a different kind of insight—one that shows how people, process, data, and technology interact end to end. One that makes visible where human judgement still matters, where automation genuinely adds value, and where the handoffs between the two are quietly breaking down. ... Growth and productivity are not things you add through execution. They are the result of deliberate design choices. A business model fit for today makes explicit decisions about what is standardised and what is differentiated, what is automated and what is augmented, what relies on experience and what demands new capability. Those decisions cannot be delegated to programmes alone. They sit squarely with leadership.


Beyond Human-in-the-Loop: Why Data Governance Must Be a System’s Property

The reliance on human action creates a false sense of control; although governance artifacts do exist, responsibility for accountability exists outside the formal governance system and is therefore difficult to enforce. ... The dominant structures of governance today represent a human-in-the-loop system model. In a human-in-the-loop model, technology is used primarily to automate the completion or execution of specific tasks, such as the movement of data between systems, checking the validity of data between systems, and enhancing data that has been provided or created by other systems. The responsibility for the outcome of an automated governance system is not part of the automated system itself. Therefore, humans have the ability to resolve disputes between systems, approve any exceptions made by systems, and determine what is true when using different systems produces different conclusions. ... As data ecosystems continue to expand, we see recurring patterns of failure emerge. As a result, stewardship teams tend to create bottlenecks in their processes, as the volumes of existing exceptions continue to grow much faster than the capability to resolve those exceptions. The presence of escalation paths creates delays in decision-making processes, leading to inconsistencies in the products or services being delivered. Over time, informal methods of addressing issues become accepted as standard operating procedures. Controls within the organization are bypassed in an effort to deliver projects on time.

Daily Tech Digest - January 30, 2026


Quote for the day:

"In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it." -- Jane Smiley



Crooks are hijacking and reselling AI infrastructure: Report

In a report released Wednesday, researchers at Pillar Security say they have discovered campaigns at scale going after exposed large language model (LLM) and MCP endpoints – for example, an AI-powered support chatbot on a website. “I think it’s alarming,” said report co-author Ariel Fogel. “What we’ve discovered is an actual criminal network where people are trying to steal your credentials, steal your ability to use LLMs and your computations, and then resell it.” ... How big are these campaigns? In the past couple of weeks alone, the researchers’ honeypots captured 35,000 attack sessions hunting for exposed AI infrastructure. “This isn’t a one-off attack,” Fogel added. “It’s a business.” He doubts a nation-state is behind it; the campaigns appear to be run by a small group. ... Defenders need to treat AI services with the same rigor as APIs or databases, he said, starting with authentication, telemetry, and threat modelling early in the development cycle. “As MCP becomes foundational to modern AI integrations, securing those protocol interfaces, not just model access, must be a priority,” he said. ... Despite the number of news stories in the past year about AI vulnerabilities, Meghu said the answer is not to give up on AI, but to keep strict controls on its usage. “Do not just ban it, bring it into the light and help your users understand the risk, as well as work on ways for them to use AI/LLM in a safe way that benefits the business,” he advised.
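
Treating an AI endpoint "with the same rigor as APIs or databases" starts with authentication and telemetry. The sketch below uses FastAPI for illustration; the route, key store, and log fields are assumptions rather than anything from the Pillar Security report.

```python
# Sketch: put authentication and telemetry in front of an LLM/MCP-style
# endpoint, the same way you would for any API. FastAPI is used for
# illustration; the key store and log fields are assumptions.
import logging
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
logging.basicConfig(level=logging.INFO)
VALID_KEYS = {"replace-with-a-managed-secret"}   # e.g. loaded from a vault at startup

@app.post("/v1/chat")
def chat(prompt: dict, x_api_key: str = Header(default="")):
    if x_api_key not in VALID_KEYS:
        logging.warning("rejected unauthenticated request")  # telemetry for spotting abuse
        raise HTTPException(status_code=401, detail="missing or invalid API key")
    logging.info("accepted request, prompt length=%d", len(str(prompt)))
    return {"reply": "model call would happen here"}
```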


AI-Powered DevSecOps: Automating Security with Machine Learning Tools

Here's the uncomfortable truth: AI is both causing and solving the same problem. A Snyk survey from early 2024 found that 77% of technology leaders believe AI gives them a competitive advantage in development speed. That's great for quarterly demos and investor decks. It's less great when you realize that faster code production means exponentially more code to secure, and most organizations haven't figured out how to scale their security practice at the same rate. ... Don't try to AI-ify your entire security stack at once. Pick one high-pain problem — maybe it's the backlog of static analysis findings nobody has time to triage, or maybe it's spotting secrets accidentally committed to repos — and deploy a focused tool that solves just that problem. Learn how it behaves. Understand its failure modes. Then expand. ... This is non-negotiable, at least for now. AI should flag, suggest, and prioritize. It should not auto-merge security fixes or automatically block deployments without human confirmation. I've seen two different incidents in the past year where an overzealous ML system blocked a critical hotfix because it misclassified a legitimate code pattern as suspicious. Both cases were resolved within hours, but both caused real business impact. The right mental model is "AI as junior analyst." ... You need clear policies around which AI tools are approved for use, who owns their output, and how to handle disagreements between human judgment and AI recommendations.
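
A small sketch of the "AI as junior analyst" model in a CI step: every AI-flagged finding is reported and ranked for triage, but only a finding a human has confirmed can fail the pipeline. The finding format and severity scale are assumptions, not any particular scanner's output.

```python
# Sketch of "AI as junior analyst" in CI: AI findings are surfaced and ranked,
# but only findings a human has confirmed can fail the build. The finding
# format and severity scale are assumptions.
import sys

findings = [
    {"rule": "hardcoded-secret", "severity": 9, "ai_flagged": True, "human_confirmed": True},
    {"rule": "weak-hash",        "severity": 6, "ai_flagged": True, "human_confirmed": False},
]

# Report everything the AI flagged, highest severity first, for human triage.
for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
    print(f"[severity {f['severity']}] {f['rule']} (human_confirmed={f['human_confirmed']})")

# Only a human-confirmed finding blocks the deployment.
blocking = [f for f in findings if f["human_confirmed"]]
if blocking:
    sys.exit(f"{len(blocking)} confirmed finding(s) -- failing the pipeline")
```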


AI & the Death of Accuracy: What It Means for Zero-Trust

The basic idea is that as the signal quality degrades over time through junk training data, models can remain fluent and fully interact with the user while becoming less reliable. From a security standpoint, this can be dangerous, as AI models are positioned to generate confident-yet-plausible errors when it comes to code reviews, patch recommendations, app coding, security triaging, and other tasks. More critically, model degradation can erode and misalign system guardrails, giving attackers the opportunity to exploit the opening through things like prompt injection. ... "Most enterprises are not training frontier LLMs from scratch, but they are increasingly building workflows that can create self-reinforcing data stores, like internal knowledge bases, that accumulate AI-generated text, summaries, and tickets over time," she tells Dark Reading.  ... Gartner said that to combat the impending issue of model degradation, organizations will need a way to identify and tag AI-generated data. This could be addressed through active metadata practices (such as establishing real-time alerts for when data may require recertification) and potentially appointing a governance leader who knows how to responsibly work with AI-generated content. ... Kelley argues that there are pragmatic ways to "save the signal," namely through prioritizing continuous model behavior evaluation and governing training data.
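
One way to make the tagging recommendation concrete is to stamp records with provenance metadata at write time and alert when a knowledge base's share of AI-generated content crosses a recertification threshold. The fields and the 40% threshold below are assumptions, not Gartner or vendor guidance.

```python
# Sketch: tag records with provenance at write time, then alert when the share
# of AI-generated content crosses a recertification threshold. The fields and
# the 40% threshold are assumptions.
from datetime import datetime, timezone

knowledge_base = []

def add_record(text: str, ai_generated: bool, source: str) -> None:
    knowledge_base.append({
        "text": text,
        "ai_generated": ai_generated,   # provenance tag set at write time
        "source": source,
        "written_at": datetime.now(timezone.utc).isoformat(),
    })

add_record("Refund policy updated for EU customers.", ai_generated=False, source="legal")
add_record("Summary of Q3 support tickets.", ai_generated=True, source="summariser-bot")

ai_share = sum(r["ai_generated"] for r in knowledge_base) / len(knowledge_base)
if ai_share > 0.40:
    print(f"recertification alert: {ai_share:.0%} of records are AI-generated")
```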


The Friction Fix: Change What Matters

Friction is the invisible current that sinks every transformation. Friction isn’t one thing, it’s systemic. Relationships produce friction: between the people, teams and technology. ... When faced with a systemic challenge, our human inclination is to blame. Unfortunately, we blame the wrong things. We blame the engineering team for failing to work fast enough or decide the team is too small, rather than recognize that our Gantt chart was fiction, which is an oversimplification of a complex dynamic. ... The fix is to pause and get oriented. Begin by identifying the core domain, the North Star. What is the goal of the system? For Fedex, it is fast package delivery. Chances are, when you are experiencing counterintuitive behavior, it is because people are navigating in different directions while using the same words. ... Every organization trying to change has that guy: the gatekeeper, the dungeon master, the self-proclaimed 10x engineer who knows where the bodies are buried. They also wield one magic word: No. ... It’s easy to blame that guy’s stubborn personality. But he embodies behavior that has been rewarded and reinforced. ... Refusal to change is contagious. When that guy shuts down curiosity, others drift towards a fixed mindset. Doubt becomes the focus, not experimentation. The organization can’t balance avoiding risk with trying something new. The transformation is dead in the water.


From devops to CTO: 8 things to start doing now

Devops leaders have the opportunity to make a difference in their organization and for their careers. Lead a successful AI initiative, deploy to production, deliver business value, and share best practices for other teams to follow. Successful devops leaders don’t jump on the easy opportunities; they look for the ones that can have a significant business impact. ... Another area where devops engineers can demonstrate leadership skills is by establishing standards for applying genAI tools throughout the software development lifecycle (SDLC). Advanced tools and capabilities require effective strategies to extend best practices beyond early adopters and ensure that multiple teams succeed. ... If you want to be recognized for promotions and greater responsibilities, a place to start is in your areas of expertise and with your team, peers, and technology leaders. However, shift your focus from getting something done to a practice leadership mindset. Develop a practice or platform your team and colleagues want to use and demonstrate its benefits to the organization. Devops engineers can position themselves for a leadership role by focusing on initiatives that deliver business value. ... One of the hardest mindset transitions for CTOs is shifting from being the technology expert and go-to problem-solver to becoming a leader facilitating the conversation about possible technology implementations. If you want to be a CTO, learn to take a step back to see the big picture and engage the team in recommending technology solutions.


The stakes rise for the CIO role in 2026

The CIO's days as back-office custodian of IT are long gone, to be sure, but that doesn't mean the role is settled. Indeed, Seewald and others see plenty of changes still underway. In 2026, the CIO's role in shaping how the business operates and performs is still expanding. It reflects a nuanced change in expectations, according to longtime CIOs, analysts and IT advisors -- and one that is showing up in many ways as CIOs become more directly involved in nailing down competitive advantage and strategic success across their organizations. ... "While these core responsibilities remain the same, the environment in which CIOs operate has become far more complex," Tanowitz added. Conal Gallagher, CIO and CISO at Flexera, said the CIO in 2026 is now "accountable for outcomes: trusted data, controlled spend, managed risk and measurable productivity." "The deliverable isn't a project plan," Gallagher said. "It's proof that the business runs faster, safer and more cost-disciplined because of the operating model IT enables." ... In 2026, the CIO role is less about being the technology owner and more about being a business integrator, Hoang said. At Commvault, that shift places greater emphasis on governance and orchestration across ecosystems. "We're operating in a multicloud, multivendor, AI-infused environment," she said. "A big part of my job is building guardrails and partnerships that enable others to move fast -- safely," she said.


Inside the Shift to High-Density, AI-Ready Data Centres

As density increases, design philosophy must evolve. Power infrastructure, backup systems, and cooling can no longer be treated as independent layers; they have to be tightly integrated. Our facilities use modular and scalable power and cooling architectures that allow us to expand capacity without disrupting live environments. Rated-4 resilience is non-negotiable, even under continuous, high-density AI workloads. The real focus is flexibility. Customers shouldn’t be forced into an all-or-nothing transition. Our approach allows them to move gradually to higher densities while preserving uptime, efficiency, and performance. High-density AI infrastructure is less about brute force and more about disciplined engineering that sustains reliability at scale. ... The most common misconception is that AI data centres are fundamentally different entities. While AI workloads do increase density, power, and cooling demands, the core principles of reliability, uptime, and efficiency remain unchanged. AI readiness is not about branding; it’s about engineering and operations. Supporting AI workloads requires scalable and resilient power delivery, precision cooling, and flexible designs that can handle GPUs and accelerators efficiently over sustained periods. Simply adding more compute without addressing these fundamentals leads to inefficiency and risk. The focus must remain on mission-critical resilience, cost-effective energy management, and sustainability. 


Software Supply Chain Threats Are on the OWASP Top Ten—Yet Nothing Will Change Unless We Do

As organizations deepen their reliance on open-source components and embrace AI-enabled development, software supply chain risks will become more prevalent. In the OWASP survey, 50% of respondents ranked software supply chain failures number one. The awareness is there. Now the pressure is on for software manufacturers to enhance software transparency, making supply chain attacks far less likely and less damaging. ... Attackers only need one forgotten open-source component from 2014 that still lives quietly inside software to execute a widespread attack. The ability to cause widespread damage by targeting the software supply chain makes these vulnerabilities alluring for attackers. Why break into a hardened product when one outdated dependency—often buried several layers down—opens the door with far less effort? The SolarWinds software supply chain attack that took place in 2020 demonstrated the access adversaries gain when they hijack the build process itself. ... “Stable” legacy components often go uninspected for years. These aging libraries, firmware blocks, and third-party binaries frequently contain memory-unsafe constructs and unpatched vulnerabilities that could be exploited. Be sure to review legacy code and not give it the benefit of the doubt. ... With an SBOM in hand, generated at every build, you can scan software for vulnerabilities and remediate issues before they are exploited. 
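
As a rough illustration of that last point, a build pipeline could cross-reference the CycloneDX-style SBOM it just generated against an internally maintained advisory list and flag components for review before release. This is a minimal sketch under stated assumptions: the file path, advisory format, and package examples are hypothetical, not a prescribed toolchain.

```python
import json

# Minimal sketch: cross-reference a CycloneDX-style SBOM (generated at build time)
# against an internally maintained list of vulnerable package versions.
# File names and the advisory format here are illustrative assumptions.

def load_components(sbom_path: str) -> list[dict]:
    """Return the component list from a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return sbom.get("components", [])

def find_flagged(components: list[dict], advisories: dict[str, set[str]]) -> list[str]:
    """Flag any component whose (name, version) pair appears in the advisory map."""
    hits = []
    for comp in components:
        name, version = comp.get("name"), comp.get("version")
        if name in advisories and version in advisories[name]:
            hits.append(f"{name}@{version}")
    return hits

if __name__ == "__main__":
    # Hypothetical advisory data: package name -> known-vulnerable versions.
    advisories = {"log4j-core": {"2.14.1"}, "struts2-core": {"2.3.31"}}
    flagged = find_flagged(load_components("build/sbom.cyclonedx.json"), advisories)
    for item in flagged:
        print("Review before release:", item)
```

In practice, the advisory map would come from a vulnerability database rather than a hardcoded dictionary, but the point stands: with an SBOM produced at every build, the check is a simple lookup rather than a forensic exercise after disclosure.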


What the first 24 hours of a cyber incident should look like

When a security advisory is published, the first question is whether any assets are potentially exposed. In the past, a vendor’s claim of exploitation may have sufficed. Given the precedent set over the past year, it is unwise to rely solely on a vendor advisory for exploited-in-the-wild status. Too often, advisories or exploitation confirmations reach teams too late or without the context needed to prioritise the response. CISA’s Known Exploited Vulnerabilities (KEV) catalogue, trusted third-party publications, and vulnerability researchers should form the foundation of any remediation programme. ... Many organisations will leverage their incident response (IR) retainers to assess the extent of the compromise or, at a minimum, perform a rudimentary threat hunt for indicators of compromise (IoCs) before involving the IR team. As with the first step, accurate, high-fidelity intelligence is critical. Simply downloading IoC lists filled with dual-use tools from social media will generate noise and likely lead to inaccurate conclusions. Arguably, the cornerstone of the initial assessment is ensuring that intelligence incorporates decay scoring to validate command-and-control (C2) infrastructure. For many, the term ‘threat hunt’ translates to little more than a log search on external gateways. ... The approach at this stage will be dependent on the results of the previous assessments. There is no default playbook here; however, an established decision framework that dictates how a company reacts is key.
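
That first KEV check lends itself to automation. The sketch below pulls the public KEV JSON feed and marks which CVEs from an advisory are confirmed as exploited in the wild; the feed URL and field names reflect how the catalogue is commonly published but should be verified, and the CVE identifiers are placeholders.

```python
import json
import urllib.request

# Minimal sketch of triage step one: check whether CVEs named in a vendor advisory
# appear in CISA's Known Exploited Vulnerabilities (KEV) catalogue, rather than
# relying solely on the vendor's own exploited-in-the-wild claim.
# Assumption: the feed URL and the "cveID" field name match the public KEV JSON feed.

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_ids(url: str = KEV_URL) -> set[str]:
    """Download the KEV catalogue and return the set of listed CVE identifiers."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}

def triage(advisory_cves: list[str], kev_ids: set[str]) -> dict[str, str]:
    """Label each CVE: KEV-listed means confirmed exploitation, otherwise seek other intel."""
    return {cve: ("in-KEV" if cve in kev_ids else "not-in-KEV") for cve in advisory_cves}

if __name__ == "__main__":
    kev = load_kev_ids()
    # Placeholder CVE identifiers standing in for whatever the advisory lists.
    print(triage(["CVE-2023-34362", "CVE-2024-99999"], kev))
```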


NIST’s AI guidance pushes cybersecurity boundaries

For CISOs, what should matter is that NIST is shifting from a broad, principle-based AI risk management framework toward more operationally grounded expectations, especially for systems that act without constant human oversight. What is emerging across NIST’s AI-related cybersecurity work is a recognition that AI is no longer a distant or abstract governance issue, but a near-term security problem that the nation’s standards-setting body is trying to tackle in a multifaceted way. ... NIST’s instinct to frame AI as an extension of traditional software allows organizations to reuse familiar concepts — risk assessment, access control, logging, defense in depth — rather than starting from zero. Workshop participants repeatedly emphasized that many controls do transfer, at least in principle. But some experts argue that the analogy breaks down quickly in practice. AI systems behave probabilistically, not deterministically, they say. Their outputs depend on data that may change continuously after deployment. And in the case of agents, they may take actions that were not explicitly scripted in advance. ... “If you were a consumer of all of these documents, it was very difficult for you to look at them and understand how they relate to what you are doing and also understand how to identify where two documents may be talking about the same thing and where they overlap.”

Daily Tech Digest - January 29, 2026


Quote for the day:

"Great leaders start by leading themselves and to do that you need to know who you are" -- @GordonTredgold



Digital sovereignty feels good, but is it really?

There are no European equivalents of the American hyperscalers, let alone national ones. Although OVHcloud, Intermax, and BIT can be put forward as managed alternatives to Azure, AWS, or Google Cloud, they are not comparable to those services. They lack the same huge ecosystem of partners, are less scalable, and are simply less user-friendly, especially when adopting new services. The reality is that for many software packages, the move to the cloud also comes with a departure from on-premises versions. ... It is as much a ‘start’ of a digital migration as it is an end. Good luck transferring a system with deep AWS integrations to another location (even another public cloud). Although cloud-native principles would allow the same containerized workloads to run elsewhere, that has no bearing on the licenses purchased, compatibility and availability of applications, scalability, or ease of use. A self-built variant inside one’s own data center requires new expertise and almost certainly a larger IT team. ... In some areas, European alternatives will be perfectly capable of replacing American software. However, there is no guarantee that a secure, consistent, and mature offering will be available in every area, from networking to AI inferencing and from CRM solutions to server hardware. The reality is not only that IT players from the US are prominent, but that the software ecosystem is globally integrated. Those who limit their choices must be prepared to encounter problems.


Operational data: Giving AI agents the senses to succeed

Agents need continuous streams of telemetry, logs, events, and metrics across the entire technology stack. This isn't batch processing; it is live data flowing from applications, infrastructure, security tools, and cloud platforms. When a security agent detects anomalous behavior, it needs to see what is happening right now, not what happened an hour ago ... Raw data streams aren't enough. Agents need the ability to correlate information across domains instantly. A spike in failed login attempts means nothing in isolation. But correlate it with a recent infrastructure change and unusual network traffic, and suddenly you have a confirmed security incident. This context separates signal from noise. ... The data infrastructure required for successful agentic AI has been on the "we should do that someday" list for years. In traditional analytics, poor data quality results in slower insights. Frustrating, but not catastrophic. ... Sophisticated organizations are moving beyond raw data collection to delivering data that arrives enriched with context. Relationships between systems, dependencies across services, and the business impact of technical components must be embedded in the data workflow. This ensures agents spend less time discovering context and more time acting on it. ... "Can our agents sense what is actually happening in our environment accurately, continuously, and with full context?" If the answer is no, get ready for agentic chaos. The good news is that this infrastructure isn't just valuable for AI agents. 
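
The failed-login example maps naturally onto a simple correlation rule: escalate only when the three signals co-occur for the same host within a short window. The sketch below is illustrative only; the event shapes, thresholds, and 15-minute window are assumptions, not a product schema.

```python
from datetime import datetime, timedelta

# Minimal sketch of the cross-domain correlation described above: a failed-login
# spike is escalated only when an infrastructure change and unusual outbound
# traffic are observed for the same host within a short window.

WINDOW = timedelta(minutes=15)

def correlate(login_events, change_events, net_events, host):
    """Return True when all three signal types co-occur for `host` within WINDOW."""
    failed = [e["ts"] for e in login_events if e["host"] == host and not e["success"]]
    if len(failed) < 10:  # below the spike threshold, it is just noise
        return False
    spike_start = min(failed)
    changed = any(e["host"] == host and abs(e["ts"] - spike_start) <= WINDOW
                  for e in change_events)
    anomalous = any(e["host"] == host and e["bytes_out"] > 10_000_000 and
                    abs(e["ts"] - spike_start) <= WINDOW
                    for e in net_events)
    return changed and anomalous

if __name__ == "__main__":
    now = datetime.utcnow()
    logins = [{"host": "db-01", "ts": now, "success": False} for _ in range(12)]
    changes = [{"host": "db-01", "ts": now - timedelta(minutes=5)}]
    net = [{"host": "db-01", "ts": now + timedelta(minutes=2), "bytes_out": 50_000_000}]
    print("confirmed incident:", correlate(logins, changes, net, "db-01"))
```

The point of the sketch is the shape of the problem: each signal alone is ambiguous, and only the join across identity, change, and network data turns noise into a decision an agent can act on.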


Identity, Data Security Converging Into Trouble for Security Teams: Report

Adversaries are shifting their focus from individual credentials to identity orchestration, federation trust, and misconfigured automation, it continued. Since access to critical data stores starts with identity, unified visibility across identity and data security is required to detect misconfigurations, reduce blind spots, and respond faster. That shift, experts warned, dramatically increases the potential impact of identity failures. ... AI automation is often a chain of agents, Schrader explained. “Each agent is a non-human identity that needs lifecycle governance, and each step accesses, transforms, or hands off data,” he said. “That means a mistake in identity governance — over-permissioned agent, weak token control, missing attestation — immediately becomes a data security incident — at machine speed and at scale — because the workflow keeps executing and propagating access and data downstream.” “As AI automation runs continuously, authorization becomes a live control system, not a quarterly review,” he continued. “Agent chains amplify failures. One over-permissioned non-human identity can propagate access and data downstream like workflow-shaped lateral movement. Non-human identities sprawl fast via APIs and OAuth. Data risk also shifts dynamically as agents transform and enrich outputs.” ... “Risk multiplies with automation,” he told TechNewsWorld. “A compromised service identity can cause automated data exfiltration, model poisoning, or large-scale misconfiguration in seconds, which is far faster than manual attacks.”
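
One way to read "authorization becomes a live control system" is that every hop in an agent chain gets a short-lived, narrowly scoped credential that is re-checked before the next step runs, rather than a long-lived, broadly permissioned service identity. The sketch below is a simplified illustration under assumed token structure, scopes, and TTLs; it is not drawn from the report.

```python
import time
import uuid

# Minimal sketch of per-hop, least-privilege credentials for a chain of agents.
# Token fields, scope names, and the 5-minute TTL are illustrative assumptions.

TTL_SECONDS = 300  # short-lived by design

def mint_token(agent_id: str, scopes: set[str]) -> dict:
    """Issue a per-hop token limited to exactly the scopes that hop needs."""
    return {
        "token_id": str(uuid.uuid4()),
        "agent_id": agent_id,
        "scopes": scopes,
        "expires_at": time.time() + TTL_SECONDS,
    }

def authorize(token: dict, required_scope: str) -> bool:
    """Live check on every hop: expired or out-of-scope tokens stop the chain."""
    return time.time() < token["expires_at"] and required_scope in token["scopes"]

if __name__ == "__main__":
    # The enrichment agent may read customer records but may not export them.
    token = mint_token("enrichment-agent-07", {"crm:read"})
    print("read allowed:", authorize(token, "crm:read"))       # True
    print("export allowed:", authorize(token, "data:export"))  # False
```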


Why your AI agents need a trust layer before it’s too late

While traditional ML pipelines require human oversight at every step — data validation, model training, deployment and monitoring — modern agentic AI systems enable autonomous orchestration of complex workflows involving multiple specialized agents. But with this autonomy comes a critical question: How do we trust these agents? ... DNS transformed the internet by mapping human-readable names to IP addresses. ANS does something similar for AI agents, but with a crucial addition: it maps agent names to their cryptographic identity, their capabilities and their trust level. Here’s how it works in practice. Instead of agents communicating through hardcoded endpoints like “http://10.0.1.45:8080,” they use self-describing names like “a2a://concept-drift-detector.drift-detection.research-lab.v2.prod.” This naming convention immediately tells you the protocol (agent-to-agent), the function (drift detection), the provider (research-lab), the version (v2) and the environment (production). But the real innovation lies beneath this naming layer. ... The technical implementation leverages what’s called a zero-trust architecture. Every agent interaction requires mutual authentication using mTLS with agent-specific certificates. Unlike traditional service mesh mTLS, which only proves service identity, ANS mTLS includes capability attestation in the certificate extensions. An agent doesn’t just prove “I am agent X” — it proves “I am agent X and I have the verified capability to retrain models.” ... The broader implications extend beyond just ML operations. 
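
To make the naming convention concrete, the sketch below parses the quoted example name into its described parts. The field order is inferred solely from that single example (agent, capability, provider, version, environment) and is an assumption, not the ANS specification.

```python
from dataclasses import dataclass

# Minimal sketch parsing a self-describing agent name such as
# "a2a://concept-drift-detector.drift-detection.research-lab.v2.prod".
# Assumption: five dot-separated fields follow the protocol prefix.

@dataclass
class AgentName:
    protocol: str      # e.g. "a2a" (agent-to-agent)
    agent: str         # e.g. "concept-drift-detector"
    capability: str    # e.g. "drift-detection"
    provider: str      # e.g. "research-lab"
    version: str       # e.g. "v2"
    environment: str   # e.g. "prod"

def parse_agent_name(uri: str) -> AgentName:
    protocol, _, rest = uri.partition("://")
    parts = rest.split(".")
    if not protocol or len(parts) != 5:
        raise ValueError(f"unrecognized agent name: {uri}")
    return AgentName(protocol, *parts)

if __name__ == "__main__":
    name = parse_agent_name("a2a://concept-drift-detector.drift-detection.research-lab.v2.prod")
    # A resolver could then compare name.capability against the capability
    # attestation carried in the agent's certificate before allowing a call.
    print(name)
```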


3 things cost-optimized CIOs should focus on to achieve maximum value

For Lenovo CIO Art Hu, optimization involves managing a funnel of business-focused ideas. His company’s portfolio-based approach to AI includes over 1,000 registered projects across all business areas. Hu has established a policy for AI exploration and optimization that allows thousands of flowers to bloom before focusing on value. “It’s important I don’t over-prioritize on quality initially, because we have so many projects,” he says. ... “There’s a technology thing, where you probably need multiple types of models and tools to work together,” he says. “So Microsoft or OpenAI on their own probably won’t do very well. However, when you combine Databricks, Microsoft, and your agents, then you get a solution.” ... But another key area is revenue growth management. Schildhouse’s team has developed an in-house diagnostic and predictive tool to help employees make pricing decisions quicker. They tracked usage to ensure the technology was effective, and the tool was scaled globally. This success has sponsored AI-powered developments in related areas, such as promotion and calendar optimization technology. “Scale is important at a company the size and breadth of Colgate-Palmolive, because one-off solutions in individual markets aren’t going to drive that value we need,” she says. “I travel around to our key markets, and it’s nice to be in India or Brazil and have the teams show how they’re using these tools, and how it’s making a difference on the ground.”


Gauging the real impact of AI agents

Enterprises aren’t totally sold on AI, but they’re increasingly buying into AI agents. Not the cloud-hosted models we hear so much about, but smaller, distributed models that fit into IT as it has been used by enterprises for decades. Given this, you surely wonder how it’s going. Are agents paying back? Yes. How do they impact hosting, networking, operations? That’s complicated. ... There’s a singularly important difference between an AI agent component and an ordinary software component. Software is explicit in its use of data. The programming includes data identification. AI is implicit in its data use; the model was trained on data, and there may well be some API linkage to databases that aren’t obvious to the user of the model. It’s also often the case that using an agentic component reveals the need for additional data resources. Are all these resources in the same place? Probably not. ... As agents evolve into real-time applications, this requires that they also be proximate to the real-time system they support (a factory or warehouse), so the data center, the users, and any real-time process pieces all pull on the choice of hosting location to optimize latency. Obviously, they can’t all be moved into one place, so the network has to make a broad and efficient set of connections. That efficiency demands QoS guarantees on latency as well as on availability. It’s in the area of availability, with a secondary focus on QoS attributes like latency, that the most agent-experienced enterprises see potential new service opportunities. 


OT–IT Cybersecurity: Navigating The New Frontier Of Risk

IT systems managing data and corporate services, and OT systems managing physical operations like energy, manufacturing, transportation, and utilities, were formerly distinct worlds, but they are now intricately linked. ... Organizations can no longer treat IT and OT as distinct security areas as long as this interconnection persists. Instead, they must embrace comprehensive strategies that integrate protection, visibility, and risk management in both domains. ... It is evident to attackers that OT systems are valuable targets. Data, electricity grids, pipelines, industrial facilities, and public safety are all at risk from breaches that formerly affected traditional IT settings and now increasingly spread to physical process networks. According to recent incident statistics, an increasing number of firms report breaches that affect both IT and OT systems; this is indicative of adversaries taking advantage of legacy vulnerabilities and interconnected routes. ... The dynamic threat environment created by contemporary OT-IT convergence is incompatible with traditional perimeter defenses and flat network trust models. In order to prevent threats from moving laterally both within and between IT/OT ecosystems, zero trust designs place a strong emphasis on segmentation, stringent access control, and continuous authentication. ... OT cybersecurity is an organizational issue rather than just a technological one. IT security leaders and OT teams have always worked in distinct silos with different goals and cultures.


SolarWinds, again: Critical RCE bugs reopen old wounds for enterprise security teams

SolarWinds is yet again disclosing security vulnerabilities in one of its widely used products. The company has released updates to patch six critical authentication bypass and remote command execution vulnerabilities in its Web Help Desk (WHD) IT software. ... The four critical bugs are typically very reliable to exploit due to their deserialization and authentication logic flaws, noted Ryan Emmons, security researcher at Rapid7. “For attackers, that’s good news, because it means avoiding lots of bespoke exploit development work like you’d see with other less reliable bug classes.” Instead, attackers can use a standardized malicious payload across many vulnerable targets, Emmons noted. “If exploitation is successful, the attackers gain full control of the software and all the information stored by it, along with the potential ability to move laterally into other systems.” Meanwhile, the high-severity vulnerability CVE-2025-40536 would allow threat actors to bypass security controls and gain access to certain functionalities that should be restricted only to authenticated users. ... While this incident is bad news, the good news is it’s not the same error, he noted. ... Vendors must get down past the symptom layer and address the root cause of vulnerabilities in programming logic, he said, pointing out, “they plug the hole, but don’t figure out why they keep having holes.”


Policy to purpose: How HR can design sustainable scale in DPI

“In DPI, the human impact is immediate and profound: our systems touch citizens, markets, and national platforms every single day,” Anand says. The proximity to public outcomes, he notes, heightens expectations across the organisation. Employees are no longer insulated from the downstream effects of their work. “Employees increasingly recognise that their choices—technical, operational, and ethical—directly influence outcomes for millions,” he says. ... “The opportunity is to reframe governance as an enabler of meaningful, durable impact rather than a constraint,” he says. Systems that millions rely on require deep technical excellence and responsible design—work that appeals to professionals who value longevity over novelty. ... As DPI platforms scale and regulatory attention intensifies, Anand believes HR must rethink what agility really means. “As scale and scrutiny intensify, HR must design organisations where agility is achieved through clarity and discipline,” he says. Flexibility, in this framing, is not ad hoc. It must be institutionalised—across workforce models, talent mobility and capability development—within clearly articulated guardrails. ... “The role of HR will evolve from custodians of policy to architects of sustainable scale,” Anand says. In DPI contexts, that means ensuring growth, governance and human potential advance together, rather than pulling against one another.


Adversity Isn’t a Setback. It’s the Advantage That Separates Real Entrepreneurs

The entrepreneurs who endure are not defined by how fast they scale when conditions are ideal. They are defined by how they respond when conditions turn hostile. When capital dries up. When reputations are challenged. When markets shift and expectations falter. When systems resist them. ... The paradox is that entrepreneurs who face sustained adversity early often become the most capable operators later. They learn to conserve resources. They read people accurately. They pivot without panic. They make decisions grounded in reality rather than optimism. Resilience is not taught. It is earned through determination, risk and adversity. History shows time and time again that those who prevailed were often those who were hit with life’s toughest setbacks but kept getting back up, adapting, and pressing ahead on their path. ... Every entrepreneurial journey eventually reaches the same point. Something breaks. A deal collapses. A partner lets you down. A market turns. A personal crisis collides with professional pressure. Sometimes it is a mistake. Sometimes it is failure. Sometimes it is a disaster or trauma with no clear explanation and no easy way through. At that moment, the question is no longer about intelligence, credentials, or ambition. It is about response. Do you take the hit and adapt, or does it flatten you? Do you get back up and keep moving, or do you stay down and explain why this time was different? Does adversity sharpen your determination, or does it quietly drain your belief?