
Daily Tech Digest - April 17, 2026


Quote for the day:

"We don't grow when things are easy. We grow when we face challenges." -- @PilotSpeaker




The agent tier: Rethinking runtime architecture for context-driven enterprise workflows

The article "The Agent Tier: Rethinking Runtime Architecture for Context-Driven Enterprise Workflows" explores the evolution of enterprise software from rigid, deterministic workflows to more flexible, agentic systems. Traditionally, business logic relies on explicit branching and hard-coded rules, which often fail to handle the nuanced, context-dependent variations found in complex processes like customer onboarding or fraud detection. To address this limitation, the author introduces the "Agent Tier"—a distinct architectural layer that separates deterministic execution from contextual reasoning. While the deterministic lane maintains authoritative control over state transitions and regulatory compliance, the Agent Tier interprets diverse signals to recommend the most appropriate next actions. This system utilizes the "Reason and Act" (ReAct) pattern, allowing AI agents to interact with governed enterprise tools within a structured reasoning cycle. By decoupling adaptive reasoning from execution, organizations can manage ambiguity more effectively without sacrificing the reliability, safety, or explainability of their core operations. This two-lane approach enables incremental adoption, allowing enterprises to modernize their workflows by integrating adaptive logic into specific points of uncertainty. Ultimately, the Agent Tier provides a scalable, robust framework for building responsive, intelligent enterprise systems that maintain strict governance while navigating the complexities of modern, context-driven business environments.
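The "Reason and Act" cycle the summary mentions can be sketched in a few lines: the agent alternates between reasoning over what it knows and invoking a governed tool, then hands back only a recommendation, leaving the actual state transition to the deterministic lane. All names here (`GOVERNED_TOOLS`, `recommend_next_action`, the KYC tool) are illustrative stand-ins, not APIs from the article.

```python
def check_kyc_status(customer_id):
    # Stand-in for a governed enterprise tool; a real system would call a service.
    return {"customer_id": customer_id, "kyc_complete": customer_id != "C-17"}

GOVERNED_TOOLS = {"check_kyc_status": check_kyc_status}

def recommend_next_action(customer_id, max_steps=3):
    """ReAct-style loop: the agent tier only *recommends* an action;
    the deterministic lane still owns the authoritative state change."""
    observations = []
    for _ in range(max_steps):
        # "Reason": decide which governed tool to call given what we know so far.
        if not observations:
            tool, args = "check_kyc_status", {"customer_id": customer_id}
        else:
            break  # enough context gathered for this toy example
        # "Act": invoke the tool and record the observation.
        observations.append(GOVERNED_TOOLS[tool](**args))
    kyc_ok = observations[-1]["kyc_complete"]
    return "proceed_to_onboarding" if kyc_ok else "escalate_for_manual_review"
```

Calling `recommend_next_action("C-42")` yields a recommendation, not a state change, which is the decoupling the article argues for.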


Crypto Faces Increased Threat From Quantum Attacks

The article "From RSA to Lattices: The Quantum Safe Crypto Shift" explores the intensifying race to secure digital infrastructure against the looming threat of quantum computing. Central to this discussion is a landmark whitepaper from Google Quantum AI, which reveals that the quantum resources required to break contemporary encryption are roughly one-twentieth of previous estimates. While current quantum processors possess around 1,000 qubits, the finding that only 500,000 qubits—rather than tens of millions—could compromise RSA and elliptic curve cryptography significantly accelerates the timeline for migration. Expert Chris Peikert highlights that this "lose-lose" situation for classical security stems from compounding advancements in both quantum algorithms and hardware efficiency. The urgency is particularly acute for blockchain and cryptocurrency networks, which face the "harvest now, decrypt later" risk where encrypted data is stolen today to be cracked once capable hardware emerges. Transitioning to lattice-based post-quantum cryptography remains a complex hurdle due to the larger key sizes and signature requirements that stress existing system architectures. Although a successful attack remains unlikely within the next three years, the growing probability over the next decade necessitates immediate industry-wide re-evaluation and the adoption of more resilient, crypto-agile standards to safeguard global data integrity.
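The "larger key sizes" point is concrete: the standardized lattice parameter sets are an order of magnitude bigger than the classical encodings they replace. The figures below are the standard sizes from FIPS 203 (ML-KEM-768) and FIPS 204 (ML-DSA-65); the classical sizes are typical raw encodings, and the comparison helper is purely illustrative.

```python
# Approximate public-key and signature/ciphertext sizes in bytes, showing why
# lattice-based schemes stress protocols tuned for small classical keys.
SIZES = {
    "ECDSA P-256": {"public_key": 65,   "signature": 64},
    "RSA-2048":    {"public_key": 256,  "signature": 256},
    "ML-KEM-768":  {"public_key": 1184, "ciphertext": 1088},
    "ML-DSA-65":   {"public_key": 1952, "signature": 3309},
}

def growth_factor(pq_scheme, classical_scheme, field="public_key"):
    # How many times larger the post-quantum artifact is than its classical peer.
    return SIZES[pq_scheme][field] / SIZES[classical_scheme][field]

# An ML-DSA-65 signature is ~50x the size of an ECDSA P-256 signature —
# the kind of blow-up that strains handshakes, certificates, and block space.
```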


The endless CISO reporting line debate — and what it says about cybersecurity leadership

In his article, JC Gaillard explores why the debate over the Chief Information Security Officer (CISO) reporting line persists into 2026, suggesting that the focus on organizational charts masks a deeper struggle with defining the CISO’s actual role. While reporting lines define authority and visibility, Gaillard argues that the core issue is whether a CISO possesses the organizational standing to influence cross-functional silos like legal, HR, and operations. Historically viewed as a technical IT function, cybersecurity has evolved into a strategic business priority, yet governance structures often lag behind. The author asserts there is no universal reporting model; success depends less on whether a CISO reports to the CEO, CIO, or COO, and more on the quality of the relationship and mutual trust with their superior. Furthermore, the supposed conflict between CIOs and CISOs is labeled as an outdated notion, as modern security must be embedded within technology architecture rather than acting as external oversight. Ultimately, the endless debate signals that many organizations still fail to internalize cyber risk as a strategic leadership challenge. Until companies bridge this governance gap by empowering CISOs with genuine influence, structural changes alone will remain insufficient for achieving true digital resilience and organizational alignment.


Building a Leadership Bench Inside IT

Developing a robust leadership bench within Information Technology (IT) departments has become a strategic imperative for modern enterprises facing rapid digital transformation. The article emphasizes that cultivating internal talent is not merely a human resources function but a critical operational necessity to ensure business continuity and organizational agility. Organizations are increasingly moving away from reactive hiring, instead focusing on identifying high-potential employees early in their careers. These individuals are nurtured through deliberate strategies, including formal mentorship programs, cross-functional rotations, and targeted soft-skills training to bridge the gap between technical expertise and executive management. A successful leadership bench allows for seamless succession planning, reducing the risks associated with sudden executive departures and the high costs of external recruitment. Furthermore, the article highlights that fostering a culture of continuous learning and empowerment encourages retention, as employees see clear pathways for advancement. By investing in diverse talent and providing opportunities for real-world decision-making, IT leaders can build a resilient pipeline that aligns technical innovation with broader corporate objectives. This proactive approach ensures that when the time comes for a leadership transition, the organization is already equipped with visionaries who understand both the underlying infrastructure and the strategic vision of the company.


Data Center Protests Are Growing. How Should the Industry Respond?

Community opposition to data center construction has evolved into an organized movement, significantly impacting the industry by halting roughly $18 billion in projects and delaying an additional $46 billion over the last two years. While some resistance is characterized as "not in my backyard" sentiment, many protesters raise legitimate concerns regarding environmental impact, resource depletion, and public health. Specifically, residents worry about overstressed power grids, excessive water consumption in drought-prone areas, and noise or air pollution from backup generators. Furthermore, the limited number of permanent operational roles compared to the massive initial construction workforce often leaves locals feeling that the economic benefits are fleeting. To navigate this increasingly hostile landscape, industry leaders emphasize that developers must move beyond mere compliance and focus on genuine community partnership. Recommended strategies include engaging with residents early in the planning process, providing transparent data on resource usage, and adopting sustainable technologies like closed-loop cooling systems or waste heat recycling. By investing in local infrastructure and creating stable career pipelines, developers can transform from perceived "takers" of energy into valued community assets. Addressing these social and environmental anxieties is now essential for securing the future of large-scale infrastructure projects in an era of rapid AI expansion.


Empower Your Developers: How Open Source Dependencies Risk Management Can Unlock Innovation

In this InfoQ presentation, Celine Pypaert addresses the pervasive nature of open-source software and outlines a comprehensive strategy for managing the inherent risks associated with third-party dependencies. She emphasizes a critical shift from reactive "firefighting" to a proactive risk management framework designed to secure modern application architectures. Central to her blueprint is the use of Software Composition Analysis (SCA) tools and the implementation of Software Bills of Materials (SBOM) to achieve deep visibility into the software supply chain. Pypaert highlights the necessity of prioritizing high-risk vulnerabilities through the lens of exploitability data, ensuring that engineering teams focus their limited resources on the most impactful threats. A significant portion of the session focuses on bridging the historical divide between DevOps and security teams by establishing clear lines of ownership and automated governance. By defining accountability and integrating security checks directly into the development lifecycle, organizations can eliminate bottlenecks and reduce friction. Ultimately, Pypaert argues that robust dependency management does not just mitigate danger; it empowers developers and unlocks innovation by providing a stable, secure foundation for rapid software delivery. This systematic approach transforms security from a perceived hindrance into a strategic enabler of technical agility and enterprise growth.
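Pypaert's point about prioritizing through "the lens of exploitability data" can be sketched as a small triage routine: rank SCA findings so that actively exploited or high-likelihood vulnerabilities surface first rather than sorting by CVSS alone. The field names (`epss`, `in_kev`) stand in for signals like the EPSS score and CISA's Known Exploited Vulnerabilities catalog; the threshold and records are illustrative, not from the talk.

```python
def triage(findings, epss_floor=0.1):
    """Return only actionable findings, ordered by exploitability evidence."""
    def priority(f):
        # Known-exploited first, then likelihood of exploitation, then severity.
        return (not f.get("in_kev", False), -f.get("epss", 0.0), -f.get("cvss", 0.0))
    actionable = [f for f in findings
                  if f.get("in_kev") or f.get("epss", 0.0) >= epss_floor]
    return sorted(actionable, key=priority)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "epss": 0.02, "in_kev": False},
    {"id": "CVE-B", "cvss": 7.5, "epss": 0.91, "in_kev": True},
    {"id": "CVE-C", "cvss": 8.1, "epss": 0.40, "in_kev": False},
]
# CVE-B (known exploited) outranks the higher-CVSS CVE-A, which drops out
# entirely because its exploitation likelihood is negligible.
```

This is the shift from reactive firefighting to focusing limited engineering time on the threats most likely to be used.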


Designing Systems That Don’t Break When It Matters Most

The article "Designing Systems That Don't Break When It Matters Most" explores the critical challenges of maintaining system resilience during extreme traffic spikes. Author William Bain argues that the most damaging failures often arise not from technical bugs but from scalability limits in state management. While stateless web services are easily scaled, they frequently overwhelm centralized databases, creating significant bottlenecks. Traditional distributed caching offers some relief by hosting "hot data" in memory; however, it remains vulnerable to issues like synchronized cache misses and "hot keys" that dominate access patterns. To overcome these hurdles, Bain advocates for "active caching," a strategy where application logic is moved directly into the cache. This approach treats cached objects as data structures, allowing developers to invoke operations locally and minimizing the need to move large volumes of data across the network. To ensure robustness, teams must load test for contention rather than just volume, tracking data motion and shared state round trips. Ultimately, designing for peak performance requires prioritizing state management as the primary scaling hurdle, keeping the database off the critical path while leveraging active caching to maintain a seamless user experience even under extreme pressure.
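The contrast Bain draws between classic caching and "active caching" is the difference between shipping the whole object to the caller and shipping the operation to the object. The sketch below is a toy in-process stand-in, with illustrative class names; a real active cache would execute the method on the remote cache node that owns the object.

```python
class ActiveCache:
    """Toy stand-in for a distributed cache node that can execute
    registered methods against cached objects locally."""
    def __init__(self):
        self._store = {}

    def put(self, key, obj):
        self._store[key] = obj

    def get(self, key):
        # Classic pattern: the whole object crosses the network to the caller,
        # gets modified, and is written back — two large round trips.
        return self._store[key]

    def invoke(self, key, method, *args):
        # Active pattern: only the method name and a small result move;
        # the (possibly large) object never leaves the cache node.
        return getattr(self._store[key], method)(*args)

class ShoppingCart:
    def __init__(self):
        self.items = []
    def add_item(self, sku):
        self.items.append(sku)
        return len(self.items)

cache = ActiveCache()
cache.put("cart:42", ShoppingCart())
count = cache.invoke("cart:42", "add_item", "SKU-9")  # no round trip of the cart
```

Under a traffic spike, the saving compounds: per request, data motion shrinks from the full object to a method name and a scalar.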


Cyber rules shift as geopolitics & AI reshape policy

The NCC Group’s latest Global Cyber Policy Radar highlights a transformative shift in the cybersecurity landscape, where regulation is increasingly dictated by geopolitical tensions, state-sponsored activities, and the rapid adoption of artificial intelligence. No longer confined to mere technical compliance, cyber policy has evolved into a strategic extension of national security and economic interests. This shift is characterized by a rise in digital sovereignty, with governments asserting stricter control over data, infrastructure, and supply chains, often resulting in a fragmented regulatory environment for multinational organizations. Furthermore, artificial intelligence is being governed through existing cyber frameworks, increasing the scrutiny of how businesses secure these emerging tools. A significant trend involves moving cyber governance into the boardroom, placing direct accountability on senior leadership as major legislative acts like NIS2 and the EU AI Act come into force. Perhaps most notably, there is a growing emphasis on offensive cyber capabilities as a core component of national deterrence strategies, moving beyond traditional defensive measures. For global enterprises, navigating this complex patchwork of national priorities requires moving beyond basic technical standards toward integrated resilience and proactive engagement with public authorities. Boards must now understand their strategic position within a world where cyber operations and international power dynamics are inextricably linked.


Is ‘nearly right’ AI generated code becoming an enterprise business risk?

The article examines the escalating enterprise risks associated with "nearly right" AI-generated code—software that appears functional but contains subtle errors or misses critical edge cases. As organizations increasingly adopt AI coding agents, which some analysts estimate produce up to 60% of modern code, the sheer volume of output is creating a massive quality assurance bottleneck. While AI excels at basic syntax, it often struggles with complex behavioral integration in legacy enterprise ecosystems, particularly in high-stakes sectors like finance and telecommunications. Experts warn that even minor AI-driven changes can trigger cascading system failures or outages, citing recent high-profile incidents reported at companies like Amazon. Beyond operational reliability, the shift introduces significant security vulnerabilities, such as prompt injection attacks and bloated codebases containing hidden dependencies. The core challenge lies in the fact that many large enterprises still rely on manual testing processes that cannot scale to match AI’s relentless speed. Ultimately, the article argues that the solution is not just better AI, but more robust governance and automated testing. Without clear human-in-the-loop oversight and rigorous verification protocols, the productivity gains promised by AI could be undermined by unpredictable business disruptions and an expanded cyberattack surface.
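The "nearly right" failure mode can be made concrete with a toy example: a plausible-looking function that passes the happy path but silently accepts an out-of-range input, beside the guarded version an automated check would enforce. Both functions are hypothetical illustrations, not code from the article.

```python
def ai_generated_discount(price, percent):
    # Looks correct, and passes every happy-path test an AI might generate.
    return price - price * percent / 100

def guarded_discount(price, percent):
    # The human-in-the-loop fix: reject the edge case instead of computing it.
    if not 0 <= percent <= 100:
        raise ValueError(f"percent out of range: {percent}")
    return price - price * percent / 100

# Happy path: both versions agree.
assert ai_generated_discount(200, 10) == guarded_discount(200, 10) == 180.0
# Edge case: the "nearly right" version happily produces a negative price.
assert ai_generated_discount(200, 150) == -100.0
```

Scaled across thousands of AI-generated changes, it is exactly this class of missed edge case that manual review cannot keep up with, which is the article's case for automated verification.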


Why Traditional SOCs Aren’t Enough

The article argues that traditional Security Operations Centers (SOCs) are no longer sufficient to manage the complexities of modern digital environments characterized by AI-driven threats and rapid cloud adoption. While SOCs remain foundational for threat detection, they are inherently reactive, often operating in data silos that lack critical business context. This limitation results in analyst burnout and a failure to prioritize risks based on financial or regulatory impact. To address these systemic gaps, the author proposes a transition to a Risk Operations Center (ROC) framework, specifically highlighting DigitalXForce’s AI-powered X-ROC. Unlike traditional models, a ROC is proactive and risk-centric, integrating cybersecurity with governance and operational risk management. X-ROC utilizes artificial intelligence to provide continuous assurance and real-time risk quantification, effectively translating technical vulnerabilities into strategic business metrics such as the "Digital Trust Score." By automating manual workflows and control testing, this next-generation approach significantly reduces operational costs and audit fatigue while providing boards with actionable visibility. Ultimately, the shift from a reactive SOC to a business-aligned ROC allows organizations to transform risk management from a passive reporting requirement into a strategic advantage, ensuring resilience in an increasingly dynamic and dangerous global cyber landscape.

Daily Tech Digest - March 01, 2026


Quote for the day:

"You can't be a leader if you can't influence others to act." -- Dale E. Zand



Meet your AI auditor: How this new job role monitors model behavior

The relentless rise of artificial intelligence (AI) is creating a new role for business and technology professionals to consider: AI auditor. The role bears a striking resemblance to that of financial auditors, with a major exception: AI auditors monitor and report on the behavior of AI transactions rather than monetary transactions. ... The closest role to an AI auditor is now seen within teams tasked with reviewing AI model behavior, but their work is more akin to quality assurance, Bronfman said. The reviews cover "outputs, outliers, and edge-cases, and audit training processes for data input properties, accuracy, and predictability." AI auditors will put more teeth into assuring AI is responsible and trustworthy. ... AI auditing jobs won't just be found within enterprises. Just as organizations tend to rely on outside financial auditors, there will be many roles within third-party AI auditing firms. "Independent third-party auditors provide structured oversight and prevent conflicts of interest," said Bronfman. AI auditing standards and codes of conduct may even be ultimately supported "by a UN-like body or a coalition of major states, where deployment will require ongoing behavioral audits and mandated transparency." ... To move into this type of role, budding AI auditors "will need to deeply understand AI and how the algorithm works in order to identify where the pitfalls are and test how it can fail," said Bronfman.
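One check of the kind Bronfman describes — reviewing "outputs, outliers, and edge-cases" — can be sketched as an audit routine that flags when the rate of out-of-distribution model outputs drifts past an agreed tolerance. The threshold, field names, and outlier rule below are illustrative assumptions, not a standard from the article.

```python
def audit_outlier_rate(outputs, is_outlier, max_rate=0.05):
    """Report the share of outputs flagged as outliers and whether it
    stays within the tolerance an auditor and deployer have agreed on."""
    flagged = sum(1 for o in outputs if is_outlier(o))
    rate = flagged / len(outputs)
    return {"outlier_rate": rate, "within_tolerance": rate <= max_rate}

# Example: model confidence scores outside [0, 1] are treated as outliers.
scores = [0.2, 0.9, 0.4, 1.7, 0.5, 0.6, 0.3, 0.8, 0.1, 0.7]
report = audit_outlier_rate(scores, lambda s: not 0.0 <= s <= 1.0)
# One outlier in ten outputs → a 10% rate, outside a 5% tolerance.
```

The analogy to financial auditing holds: the auditor does not fix the model, but attests whether its behavior stays inside agreed bounds.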


Ransomware is the invoice for compounding technical debt

Cybercriminals are continuing their aggressive campaign of credential theft, purchasing stolen usernames and passwords from the dark web to access personal email, social media or financial accounts, noted the report. At an organisational level, these same pathways are compounded by internal security gaps like identity sprawl, which increases the chance of compromise, said Niraj Naidu ... “Technical debt accumulates quickly and quietly,” he told ARN. “A lot of organisations rely on legacy backup systems that were never really designed to protect against cyber-attacks. ... Naidu believes the urgency to do something “isn’t really triggered until there’s a security event for a lot of organisations”. That then leads to the ransom note, which is like “the invoice coming due for years of technical debt”, he explained. “With that there’s downtime, strained investor relations, legal implications, customer churn, as well as brand damage and regulatory penalties,” Naidu said. ... What has led to the failure for organisations to address tech debt is a “lack of clear visibility” over what sensitive information they hold, where it resides and who can access it, explained Naidu. “A lot of organisations may believe they’ve eliminated technical debt, especially executives,” he said. “They may not necessarily have that level of visibility or transparency, particularly when you’re looking at cloud adoption.


Don’t Panic Yet: “Humanity’s Last Exam” Has Begun

Well-known benchmarks such as the Massive Multitask Language Understanding (MMLU) exam, previously viewed as rigorous, have become less effective at distinguishing true progress in AI capability. In response, an international group of nearly 1,000 researchers, including a professor from Texas A&M University, developed a far more demanding assessment. Their goal was to design an exam so comprehensive and grounded in specialized human expertise that today’s AI systems would struggle to pass it. The result is “Humanity’s Last Exam” (HLE), a 2,500-question test that covers mathematics, the humanities, natural sciences, ancient languages, and highly specialized academic fields. ... Despite its apocalyptic name, Humanity’s Last Exam isn’t meant to suggest the end of human relevance. Instead, it highlights how much knowledge remains uniquely human and how far AI systems still have to go. “This isn’t a race against AI,” Nguyen said. “It’s a method for understanding where these systems are strong and where they struggle. That understanding helps us build safer, more reliable technologies. And, importantly, it reminds us why human expertise still matters.” ... HLE is intended to serve as a long‑term, transparent benchmark for evaluating advanced AI systems. As part of that mission, the team has made some of the exam publicly available, while keeping most of the test questions hidden so AI models can’t memorize the answers. 


Who really sets AI guardrails? How CIOs can shape AI governance policy

As Donald Farmer, futurist at Tranquilla AI, explains, the guardrails of a vendor's AI system reflect that vendor's assessment of acceptable risk -- not the enterprise's. "That is shaped by their own legal exposure, their broadest possible customer base and their own ethical assumptions," Farmer said. "This works for many customers, but at the edges there can be tension." ... "Every AI agent expands the attack surface." Without disciplined data management and segmentation, one compromised component can ripple across business functions. The more tightly integrated AI becomes, the greater the potential blast radius. This requires CIOs to engage actively with governance, even if it seems like they are being handed a list of preset rules. As Palmer said, "traditional IT governance assumes that products stay the same. AI governance has to assume that they will not." ... Caught between competing restrictions and changing mandates at the federal level, CIOs may feel powerless to influence much change -- but the experts reject this impotence. Turner-Williams described the CIO's influence as "significant, but not unilateral. The CIO acts as orchestrator and trust agent." This is especially true for CIOs working across multiple jurisdictions, making them accountable not only to U.S. law, but also to the EU AI Act, GDPR and other international frameworks. ... Ratcliffe offers a pragmatic lens, arguing that CIOs should approach this issue as one of reputational strategy, not a compliance exercise.


Why Responsible Orchestration Outperforms Aggressive Automation

In complex large businesses, automation decisions are rarely made in one place. Teams optimize locally, adopt tools independently and automate processes in isolation. This results in fragmented automation that delivers short-term wins but creates long-term complexity and risk. Over time, this fragmentation further reduces leadership visibility into what work has been done, making it harder to manage risk, govern change and understand the true state (and impact!) of automation. This is where automation strategies break down. ... Orchestration is both a technical and a leadership discipline in this context, as it ensures automation decisions are intentional, coordinated and aligned with the way the business operates. Without orchestration, even well-intentioned automation can erode institutional knowledge, duplicate effort and make it harder for the very top of the organization to understand the true impact. ... The impact of fragmented automation and poorly orchestrated decision-making is felt throughout the organization, particularly by employees affected by the day-to-day disruption, and enterprises often fail to account for the impact on their workforce. Alongside day-to-day adoption, longer-term plans and how AI will make an impact are important questions to address early on. Companies must communicate AI strategy clearly and avoid reflexive headcount cuts that destroy organizational knowledge and boomerang rehiring.


India’s trillion-dollar data center opportunity is taking shape

With expanding cloud adoption, evolving sovereign data frameworks, and rapidly increasing compute intensity across industries, the country’s datacenter sector is entering its most consequential phase of growth. What is unfolding is not a temporary expansion cycle, but a sustained build-out of the digital backbone required to support the next phase of economic development. ... The drivers of this shift are both domestic and global. India generates one of the largest volumes of digital data in the world and serves a rapidly expanding digital user base. Enterprises across financial services, manufacturing, healthcare, retail, and public services are embedding cloud into core operations rather than treating it as a peripheral IT layer. AI adoption is moving from experimentation into production environments, raising compute intensity and infrastructure complexity. ... Sovereign cloud considerations further reinforce the need for domestic infrastructure. Across jurisdictions, governments and enterprises are reassessing where critical workloads reside and how data governance frameworks evolve. For a country of India’s scale, digital sovereignty is not merely regulatory; it is strategic. Hosting critical data and AI workloads domestically enhances resilience, compliance, and long-term economic control over digital systems. As sectors such as financial services, healthcare, defence, and public administration deepen their digital integration, secure and high-availability domestic capacity becomes essential.


Anthropic vs. The Pentagon: what enterprises should do

The rupture stems from a fundamental dispute over "all lawful use." The Pentagon demanded unrestricted access to Claude for any mission deemed legal, while Anthropic CEO Dario Amodei refused to budge ... The fallout is immediate; the Department of War has ordered all contractors and partners to stop conducting commercial activity with Anthropic, effective immediately, though the Pentagon itself has a 180-day window to transition to "more patriotic" providers. ... If your entire agentic workflow or customer-facing stack is hard-coded to a single provider's API, you aren't going to be nimble or flexible enough to meet the demands of a marketplace where some potential customers, such as the U.S. military or government, want you to use or avoid specific models as conditions of your contracts with them. The most prudent move right now isn't necessarily to hit the "delete" button on Claude—which remains a best-in-class model for coding and nuanced reasoning, and certainly can and should continue to be used for work outside of that with the U.S. military and government agencies—but to ensure you have a "warm standby." ... The takeaway is clear: if you plan to maintain business with federal agencies, you must be able to certify to them that your products aren't built on any single prohibited model provider — however sudden that designation may come down or how ultimately legally untenable it may prove.
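The "warm standby" advice amounts to routing completions through a thin abstraction so a provider can be swapped per customer or per contract without rewriting the workflow. The provider classes below are stubs, not real SDK clients, and the router design is one possible sketch, not a recommendation from the article.

```python
class ClaudeStub:
    name = "claude"
    def complete(self, prompt):
        return f"[claude] {prompt}"

class StandbyStub:
    name = "standby"
    def complete(self, prompt):
        return f"[standby] {prompt}"

class ModelRouter:
    """Route each request to the default provider unless contract terms
    (a per-customer denylist) prohibit it — then fall through to a standby."""
    def __init__(self, providers, default):
        self.providers = {p.name: p for p in providers}
        self.default = default

    def complete(self, prompt, denylist=()):
        for name in (self.default, *self.providers):
            if name not in denylist:
                return self.providers[name].complete(prompt)
        raise RuntimeError("no permitted provider available")

router = ModelRouter([ClaudeStub(), StandbyStub()], default="claude")
commercial = router.complete("summarize contract")                      # default provider
federal = router.complete("summarize contract", denylist={"claude"})    # contract-driven swap
```

The point is not which model is better, but that provider choice becomes a runtime policy decision rather than a hard-coded dependency.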


Intelligence as Infrastructure: The Cloud Architecture Powering Enterprise AI

For over a decade, digital transformation has been treated as a portfolio of initiatives — cloud migration, platform consolidation, automation, data modernisation. The introduction of large-scale AI assistants signals a structural shift: intelligence is no longer a feature embedded within applications. It is becoming an organising principle of enterprise systems. This shift demands architectural literacy. Leaders responsible for digital infrastructure, service optimisation, and operational risk must understand how modern AI systems are constructed — and where control, exposure, and opportunity reside within them. ... Modern AI assistants are not monolithic systems. They are composite architectures composed of tightly integrated layers, each with distinct operational and governance responsibilities. ... In regulated industries, governance begins at the first prompt. Every interaction is both a productivity event and a potential compliance event. The architectural consequence is clear: AI entry points must be treated as critical infrastructure. ... Grounded intelligence reduces hallucination risk and ensures outputs align with current policy, documentation, and regulatory obligations. In knowledge-intensive sectors, this layer is central to operational credibility. ... Organisations that attempt to retrofit governance will encounter resistance from risk and compliance functions. Those that design governance into architecture will scale AI with institutional confidence. 
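The "grounded intelligence" layer the article describes can be sketched minimally: answers are assembled only from retrieved, governed documents, and every response carries the source it was grounded in, which is what makes outputs auditable against current policy. The corpus, lexical scoring, and document IDs below are toy assumptions; a production system would use real retrieval and pass the retrieved text to a model.

```python
POLICY_DOCS = {
    "expenses-v3": "Meals over 50 EUR require pre-approval by a line manager.",
    "travel-v7":   "Business-class travel is permitted on flights over 6 hours.",
}

def retrieve(query):
    # Toy lexical retrieval: pick the doc sharing the most words with the query.
    def overlap(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    return max(POLICY_DOCS, key=lambda doc_id: overlap(POLICY_DOCS[doc_id]))

def grounded_answer(query):
    doc_id = retrieve(query)
    # Returning the governed text verbatim keeps every answer traceable to a
    # source — the compliance property the architecture is built around.
    return {"answer": POLICY_DOCS[doc_id], "source": doc_id}

result = grounded_answer("when do meals require pre-approval")
```

Because the source ID travels with the answer, the "productivity event" and the "compliance event" in each interaction share one audit trail.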


Open source devs consider making hogs pay for every Git pull

Fox, who also oversees Apache Maven, a popular Java build tool, explained that its repository site is at risk of being overwhelmed by constant Git pulls. The team has dug into this and found that 82 percent of the demand comes from less than 1 percent of IPs. Digging deeper, they discovered that many companies are using open source repositories as if they were content delivery networks (CDNs). ... How bad is it? Fox revealed that last year, major repositories handled 10 trillion downloads. That's double Google's annual search queries if you're counting from home and they're doing it on a shoestring. Fox described this as a "tragedy of the commons," where the assumption of "free and infinite" resources leads to structural waste amplified by CI/CD pipelines, security scanners, and AI-driven code generation. Companies may think that they can rely on "free and infinite" infrastructure, when in reality the costs of bandwidth, storage, staffing, and compliance are accelerating. ... With AI-driven repository usage exploding, Fox urged checking bills, using caching proxies, and avoiding per-commit tests. He seeks endorsements: "We need you to help step up... so that when we go out to the rest of the wild world... you need to pay to keep doing what you've been doing." But, wait, there's more! Besides simply being overwhelmed by constant download demands, Winser said, "People conflate open source software and open source infrastructure."
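The caching-proxy fix Fox urges is simple in principle: fetch an artifact from the public repository once, then serve every repeat request from local disk, so a hundred CI jobs cost the commons one download instead of a hundred. The sketch below is illustrative; `fetch_from_origin` stands in for a real download, and real deployments would use a dedicated repository manager rather than hand-rolled code.

```python
import pathlib
import tempfile

class CachingProxy:
    def __init__(self, cache_dir, fetch_from_origin):
        self.cache_dir = pathlib.Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)
        self.fetch = fetch_from_origin
        self.origin_hits = 0  # how often we actually burdened the public repo

    def get(self, artifact):
        path = self.cache_dir / artifact.replace("/", "_")
        if not path.exists():
            self.origin_hits += 1          # only the first request leaves the building
            path.write_bytes(self.fetch(artifact))
        return path.read_bytes()

proxy = CachingProxy(tempfile.mkdtemp(),
                     lambda name: f"bytes-of-{name}".encode())
for _ in range(100):                        # e.g. 100 CI jobs pulling the same jar
    proxy.get("org/apache/maven/maven-core-3.9.6.jar")
# 100 requests served, but the public repository saw exactly one download.
```

This is the behavioral change that turns a repository back into a repository instead of a free CDN.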


AI in higher education and the ‘erosion’ of learning

Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners and on-demand explainers. Faculty use them to generate rubrics, draft lectures and design syllabuses. Researchers use them to summarise papers, comment on drafts, design experiments and generate code. This is where the ‘cheating’ conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions. One has to do with transparency. ... A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility – not only for students, but also for faculty. Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that’s not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft and learning to spot one’s own mistakes.

Daily Tech Digest - January 29, 2026


Quote for the day:

"Great leaders start by leading themselves and to do that you need to know who you are" -- @GordonTredgold



Digital sovereignty feels good, but is it really?

There are no European equivalents of the American hyperscalers, let alone national ones. Although OVHcloud, Intermax, and BIT can be proposed as alternative managed locations for Azure, AWS, or Google Cloud, they are not comparable to those services. They lack the same huge ecosystem of partners, are less scalable, and are simply less user-friendly, especially when adopting new services. The reality is that many software packages also accompany the move to the cloud with a departure from on-premises. ... It is as much a ‘start’ of a digital migration as it is an end. Good luck transferring a system with deep AWS integrations to another location (even another public cloud). Although cloud-native principles would allow the same containerized workloads to run elsewhere, that has no bearing on the licenses purchased, compatibility and availability of applications, scalability, or ease of use. A self-built variant inside one’s own data center requires new expertise and almost assuredly a larger IT team. ... In some areas, European alternatives will be perfectly capable of replacing American software. However, there is no guarantee that a secure, consistent, and mature offering will be available in every area, from networking to AI inferencing and from CRM solutions to server hardware. The reality is not only that IT players from the US are prominent, but that the software ecosystem is globally integrated. Those who limit their choices must be prepared to encounter problems.


Operational data: Giving AI agents the senses to succeed

Agents need continuous streams of telemetry, logs, events, and metrics across the entire technology stack. This isn't batch processing; it is live data flowing from applications, infrastructure, security tools, and cloud platforms. When a security agent detects anomalous behavior, it needs to see what is happening right now, not what happened an hour ago ... Raw data streams aren't enough. Agents need the ability to correlate information across domains instantly. A spike in failed login attempts means nothing in isolation. But correlate it with a recent infrastructure change and unusual network traffic, and suddenly you have a confirmed security incident. This context separates signal from noise. ... The data infrastructure required for successful agentic AI has been on the "we should do that someday" list for years. In traditional analytics, poor data quality results in slower insights. Frustrating, but not catastrophic. ... Sophisticated organizations are moving beyond raw data collection to delivering data that arrives enriched with context. Relationships between systems, dependencies across services, and the business impact of technical components must be embedded in the data workflow. This ensures agents spend less time discovering context and more time acting on it. ... "Can our agents sense what is actually happening in our environment accurately, continuously, and with full context?" If the answer is no, get ready for agentic chaos. The good news is that this infrastructure isn't just valuable for AI agents. 
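
The cross-domain correlation the author describes can be sketched as a simple time-window join: independent weak signals become an incident only when they co-occur. The event names and the 15-minute window below are illustrative assumptions, not from any specific product.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str         # e.g. "failed_login_spike", "infra_change", "unusual_traffic"
    timestamp: float  # seconds since epoch
    source: str       # emitting system, for traceability

def correlate(events, window_seconds=900):
    """Flag a likely incident when signals from different domains
    all land inside the same time window."""
    signals = {"failed_login_spike", "infra_change", "unusual_traffic"}
    incidents = []
    ordered = sorted(events, key=lambda e: e.timestamp)
    for i, anchor in enumerate(ordered):
        seen = set()
        for e in ordered[i:]:
            if e.timestamp - anchor.timestamp > window_seconds:
                break
            if e.kind in signals:
                seen.add(e.kind)
        if seen == signals:
            incidents.append((anchor.timestamp, sorted(seen)))
    return incidents
```

A lone spike in failed logins produces no incident; the same spike plus a recent infrastructure change plus unusual traffic inside the window does, which is exactly the signal-from-noise separation described above.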


Identity, Data Security Converging Into Trouble for Security Teams: Report

Adversaries are shifting their focus from individual credentials to identity orchestration, federation trust, and misconfigured automation, it continued. Since access to critical data stores starts with identity, unified visibility across identity and data security is required to detect misconfigurations, reduce blind spots, and respond faster. That shift, experts warned, dramatically increases the potential impact of identity failures. ... AI automation is often a chain of agents, Schrader explained. “Each agent is a non-human identity that needs lifecycle governance, and each step accesses, transforms, or hands off data,” he said. “That means a mistake in identity governance — over-permissioned agent, weak token control, missing attestation — immediately becomes a data security incident — at machine speed and at scale — because the workflow keeps executing and propagating access and data downstream.” “As AI automation runs continuously, authorization becomes a live control system, not a quarterly review,” he continued. “Agent chains amplify failures. One over-permissioned non-human identity can propagate access and data downstream like workflow-shaped lateral movement. Non-human identities sprawl fast via APIs and OAuth. Data risk also shifts dynamically as agents transform and enrich outputs.” ... “Risk multiplies with automation,” he told TechNewsWorld. “A compromised service identity can cause automated data exfiltration, model poisoning, or large-scale misconfiguration in seconds, which is far faster than manual attacks.”


Why your AI agents need a trust layer before it’s too late

While traditional ML pipelines require human oversight at every step — data validation, model training, deployment and monitoring — modern agentic AI systems enable autonomous orchestration of complex workflows involving multiple specialized agents. But with this autonomy comes a critical question: How do we trust these agents? ... DNS transformed the internet by mapping human-readable names to IP addresses. ANS does something similar for AI agents, but with a crucial addition: it maps agent names to their cryptographic identity, their capabilities and their trust level. Here’s how it works in practice. Instead of agents communicating through hardcoded endpoints like “http://10.0.1.45:8080,” they use self-describing names like “a2a://concept-drift-detector.drift-detection.research-lab.v2.prod.” This naming convention immediately tells you the protocol (agent-to-agent), the function (drift detection), the provider (research-lab), the version (v2) and the environment (production). But the real innovation lies beneath this naming layer. ... The technical implementation leverages what’s called a zero-trust architecture. Every agent interaction requires mutual authentication using mTLS with agent-specific certificates. Unlike traditional service mesh mTLS, which only proves service identity, ANS mTLS includes capability attestation in the certificate extensions. An agent doesn’t just prove “I am agent X” — it proves “I am agent X and I have the verified capability to retrain models.” ... The broader implications extend beyond just ML operations. 
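
The self-describing name quoted above can be unpacked mechanically. A minimal parser sketch, assuming the five dot-separated fields the article enumerates (agent name, function, provider, version, environment) after the `a2a://` scheme; this is an illustration of the convention, not an official ANS library:

```python
def parse_agent_name(name: str) -> dict:
    """Parse an ANS-style name such as
    'a2a://concept-drift-detector.drift-detection.research-lab.v2.prod'."""
    scheme, _, rest = name.partition("://")
    parts = rest.split(".")
    if scheme != "a2a" or len(parts) != 5:
        raise ValueError(f"not a valid a2a agent name: {name}")
    agent, capability, provider, version, env = parts
    return {"protocol": scheme, "agent": agent, "capability": capability,
            "provider": provider, "version": version, "environment": env}
```

Compare this with a hardcoded endpoint like "http://10.0.1.45:8080", which carries none of that routing or trust metadata and is rejected by the parser.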


3 things cost-optimized CIOs should focus on to achieve maximum value

For Lenovo CIO Art Hu, optimization involves managing a funnel of business-focused ideas. His company’s portfolio-based approach to AI includes over 1,000 registered projects across all business areas. Hu has established a policy for AI exploration and optimization that allows thousands of flowers to bloom before focusing on value. “It’s important I don’t over-prioritize on quality initially, because we have so many projects,” he says. ... “There’s a technology thing, where you probably need multiple types of models and tools to work together,” he says. “So Microsoft or OpenAI on their own probably won’t do very well. However, when you combine Databricks, Microsoft, and your agents, then you get a solution.” ... But another key area is revenue growth management. Schildhouse’s team has developed an in-house diagnostic and predictive tool to help employees make pricing decisions quicker. They tracked usage to ensure the technology was effective, and the tool was scaled globally. This success has sponsored AI-powered developments in related areas, such as promotion and calendar optimization technology. “Scale is important at a company the size and breadth of Colgate-Palmolive, because one-off solutions in individual markets aren’t going to drive that value we need,” she says. “I travel around to our key markets, and it’s nice to be in India or Brazil and have the teams show how they’re using these tools, and how it’s making a difference on the ground.”


Gauging the real impact of AI agents

Enterprises aren’t totally sold on AI, but they’re increasingly buying into AI agents. Not the cloud-hosted models we hear so much about, but smaller, distributed models that fit into IT as it has been used by enterprises for decades. Given this, you surely wonder how it’s going. Are agents paying back? Yes. How do they impact hosting, networking, operations? That’s complicated. ... There’s a singular important difference between an AI agent component and an ordinary software component. Software is explicit in its use of data. The programming includes data identification. AI is implicit in its data use; the model was trained on data, and there may well be some API linkage to databases that aren’t obvious to the user of the model. It’s also often true that when an agentic component is used, it’s determined that additional data resources are needed. Are all these resources in the same place? Probably not. ... As agents evolve into real-time applications, this requires they also be proximate to the real-time system they support (a factory or warehouse), so the data center, the users, and any real-time process pieces all pull at the source of hosting to optimize latency. Obviously, they can’t all be moved into one place, so the network has to make a broad and efficient set of connections. That efficiency demands QoS guarantees on latency as well as on availability. It’s in the area of availability, with a secondary focus on QoS attributes like latency, that the most agent-experienced enterprises see potential new service opportunities. 


OT–IT Cybersecurity: Navigating The New Frontier Of Risk

IT systems managing data and corporate services and OT systems managing physical operations like energy, manufacturing, transportation, and utilities were formerly distinct worlds, but they are now intricately linked. ... Organizations can no longer treat IT and OT as distinct security areas as long as this interconnection persists. Instead, they must embrace comprehensive strategies that integrate protection, visibility, and risk management in both domains. ... It is evident to attackers that OT systems are valuable targets. Data, electricity grids, pipelines, industrial facilities, and public safety are all at risk from breaches that formerly affected traditional IT settings and increasingly spread to physical process networks. According to recent incident statistics, an increasing number of firms report breaches that affect both IT and OT systems; this is indicative of adversaries exploiting legacy vulnerabilities and interconnected routes. ... The dynamic threat environment created by contemporary OT-IT convergence is incompatible with traditional perimeter defenses and flat network trusts. To prevent threats from moving laterally both within and between IT/OT ecosystems, zero trust designs place a strong emphasis on segmentation, stringent access control, and continuous authentication. ... OT cybersecurity is an organizational issue rather than just a technological one. IT security leaders and OT teams have always worked in distinct silos with different goals and cultures.


SolarWinds, again: Critical RCE bugs reopen old wounds for enterprise security teams

SolarWinds is yet again disclosing security vulnerabilities in one of its widely-used products. The company has released updates to patch six critical authentication bypass and remote command execution vulnerabilities in its Web Help Desk (WHD) IT software. ... The four critical bugs are typically very reliable to exploit due to their deserialization and authentication logic flaws, noted Ryan Emmons, security researcher at Rapid7. “For attackers, that’s good news, because it means avoiding lots of bespoke exploit development work like you’d see with other less reliable bug classes.” Instead, attackers can use a standardized malicious payload across many vulnerable targets, Emmons noted. “If exploitation is successful, the attackers gain full control of the software and all the information stored by it, along with the potential ability to move laterally into other systems.” Meanwhile, the high-severity vulnerability CVE-2025-40536 would allow threat actors to bypass security controls and gain access to certain functionalities that should be restricted only to authenticated users. ... While this incident is bad news, the good news is it’s not the same error, he noted. ... Vendors must get down past the symptom layer and address the root cause of vulnerabilities in programming logic, he said, pointing out, “they plug the hole, but don’t figure out why they keep having holes.”


Policy to purpose: How HR can design sustainable scale in DPI

“In DPI, the human impact is immediate and profound: our systems touch citizens, markets, and national platforms every single day,” Anand says. The proximity to public outcomes, he notes, heightens expectations across the organisation. Employees are no longer insulated from the downstream effects of their work. “Employees increasingly recognise that their choices—technical, operational, and ethical—directly influence outcomes for millions,” he says. ... “The opportunity is to reframe governance as an enabler of meaningful, durable impact rather than a constraint,” he says. Systems that millions rely on require deep technical excellence and responsible design—work that appeals to professionals who value longevity over novelty. ... As DPI platforms scale and regulatory attention intensifies, Anand believes HR must rethink what agility really means. “As scale and scrutiny intensify, HR must design organisations where agility is achieved through clarity and discipline,” he says. Flexibility, in this framing, is not ad hoc. It must be institutionalised—across workforce models, talent mobility and capability development—within clearly articulated guardrails. ... “The role of HR will evolve from custodians of policy to architects of sustainable scale,” Anand says. In DPI contexts, that means ensuring growth, governance and human potential advance together, rather than pulling against one another.


Adversity Isn’t a Setback. It’s the Advantage That Separates Real Entrepreneurs

The entrepreneurs who endure are not defined by how fast they scale when conditions are ideal. They are defined by how they respond when conditions turn hostile. When capital dries up. When reputations are challenged. When markets shift and expectations falter. When systems resist them. ... The paradox is that entrepreneurs who face sustained adversity early often become the most capable operators later. They learn to conserve resources. They read people accurately. They pivot without panic. They make decisions grounded in reality rather than optimism. Resilience is not taught. It is earned through determination, risk and adversity. History shows time and time again that those who prevailed were often those who were hit with life’s toughest issues, but kept getting back up, adapting and keeping on their path ahead. ... Every entrepreneurial journey eventually reaches the same point. Something breaks. A deal collapses. A partner lets you down. A market turns. A personal crisis collides with professional pressure. Sometimes it is a mistake. Sometimes it is failure. Sometimes it is a disaster or trauma with no clear explanation and no easy way through. At that moment, the question is no longer about intelligence, credentials, or ambition. It is about response. Do you take the hit and adapt, or does it flatten you? Do you get back up and keep moving, or do you stay down and explain why this time was different? Does adversity sharpen your determination, or does it quietly drain your belief?

Daily Tech Digest - January 07, 2026


Quote for the day:

“If you're not prepared to be wrong, you'll never come up with anything original.” -- Ken Robinson



Strategy is dying from learning lag, not market change

At first, you might think this is about being more agile, more innovative, or more aggressive. However, those are reactions, not solutions. The real shift is deeper: strategy no longer scales when the underlying assumptions expire too quickly. The advantage erodes because the environment moves faster than the organization’s ability to sense, understand and adapt to it. ... Strategic failure today is less about being wrong and more about staying wrong for too long. ... One way out of uncertainty, and perhaps the only one, is to learn faster and closer to where the actual signals appear. Learning to me is the disciplined updating of beliefs when new evidence arrives. Every decision is a prediction about how things will work. When reality proves you wrong, learning is how you fix that prediction. In a stable environment, you can afford to learn slowly. However, in unstable ones, like today’s, slow learning becomes existential. ... Organizations don’t fall behind all at once. They fall behind step by step: first in what they notice, then in how they interpret it, then in how long it takes to decide what to do and finally in how slowly they act. ... Strategy stalls not because people refuse to change, but because they can’t agree on the story beneath the change. They chased precision in interpretation when the real advantage would have come from running small tests to find out faster which interpretation is correct.


The new tech job doesn't require a degree. It starts in a data center

The answer won't be found in Silicon Valley or Data Center Alley. It's closer to home. Veterans, trade workers, and high school graduates not headed to college don't come through traditional pipelines, but they bring the right aptitude and mindset to the data center. Veterans have discipline and process-driven thinking that fits naturally into our operations — and for many, these roles offer a transition into a stable career. Someone who kept an aircraft carrier running knows what it means to manage infrastructure that can't fail. Many arrive with experience in related systems and are comfortable with shift work and high stakes. ... Young adults without college plans are often overlooked, but some excel in hands-on settings and just need an opportunity to prove it. Once they learn about a data center career and where it can take them, it becomes a chance to build a middle-class lifestyle close to home. ... Hiring nontraditional candidates is only the first step. What keeps them is a promotion track that works. After four weeks of hands-on and self-guided onboarding, techs can pursue certifications in battery backup systems, tower clearance, generator safety, and more. When qualified, they show it in the field and move up. This kind of investment has a ripple effect. A paycheck can lead to a mortgage and financial stability. And as techs move up or out, someone else steps in — maybe through a local program that appeared once your jobs did.


Automated data poisoning proposed as a solution for AI theft threat

The technique, created by researchers from universities in China and Singapore, is to inject plausible but false data into what’s known as a knowledge graph (KG) created by an AI operator. A knowledge graph holds the proprietary data used by the LLM. Injecting poisoned or adulterated data into a data system for protection against theft isn’t new. What’s new in this tool, dubbed AURA (Active Utility Reduction via Adulteration), is that authorized users have a secret key that filters out the fake data so the LLM’s answer to a query is usable. If the knowledge graph is stolen, however, it’s unusable by the attacker unless they know the key, because the adulterants will be retrieved as context, causing deterioration in the LLM’s reasoning and leading to factually incorrect responses. The researchers say AURA degrades the performance of unauthorized systems to an accuracy of just 5.3%, while maintaining 100% fidelity for authorized users, with “negligible overhead,” defined as a maximum query latency increase of under 14%. ... As the use of AI spreads, CSOs have to remember that artificial intelligence and everything needed to make it work also make it much harder to recover from bad data being put into a system, Steinberg noted. ... “For now, many AI systems are being protected in similar manners to the ways we protected non-AI systems. That doesn’t yield the same level of protection, because if something goes wrong, it’s much harder to know if something bad has happened, and it’s harder to get rid of the implications of an attack.”
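
The researchers' actual encoding is described in their paper; the general idea of key-gated filtering can be illustrated with a toy sketch in which genuine facts carry an HMAC tag under the secret key while adulterants carry random tags. Everything below (function names, the tagging scheme) is an assumption for illustration, not AURA's real mechanism.

```python
import hashlib
import hmac
import os

def _tag(key: bytes, fact: str) -> bytes:
    return hmac.new(key, fact.encode(), hashlib.sha256).digest()

def poison_kg(facts, adulterants, key):
    """Store genuine facts tagged under the secret key alongside
    adulterants with random tags. Without the key the two populations
    are indistinguishable."""
    store = [(f, _tag(key, f)) for f in facts]
    store += [(a, os.urandom(32)) for a in adulterants]
    return store

def authorized_retrieve(store, key):
    """Key holders keep only entries whose tag verifies, filtering the
    adulterants out before they reach the LLM's context."""
    return [f for f, t in store if hmac.compare_digest(t, _tag(key, f))]
```

An attacker who steals the store retrieves genuine and fake facts mixed together; an authorized user with the key recovers only the genuine ones, mirroring the usable-versus-unusable split the article describes.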


From Zero Trust to Cyber Resilience: Why Architecture Alone Will Not Protect Enterprises in 2026

The core challenge facing CISOs is not whether Zero Trust is implemented, but whether the organization can continue to operate when, inevitably, controls fail. Modern threat actors no longer focus exclusively on breaching defenses; they aim to disrupt operations, degrade trust, and extend business impact over time. In this context, architecture alone is insufficient. What enterprises require is cyber resilience: the ability to anticipate, withstand, recover from, and adapt to cyber disruption. ... Zero Trust answers the question “Who can access what?” Cyber resilience answers a more consequential one: “How quickly can the business recover when access controls are no longer the primary failure point?” ... Resilience engineering reframes cybersecurity as a property of complex socio-technical systems. In this model, failure is not an anomaly; it is an expected condition. The objective shifts from breach avoidance to disruption management. In practice, this means evolving from an assume breach mindset to an assume disruption operating model, one where systems, teams, and leadership are prepared to function under degraded conditions. ... To prepare for 2026, CISOs should: Treat cyber resilience as a continuous operating capability, not a project; Integrate cybersecurity with business continuity and crisis management; Train executives and board members through realistic disruption scenarios; and Invest in recovery validation, not just control deployment. 


Generative AI and the future of databases

The data is at the heart of your line of business application, but it is also changing all the time, and if you keep extracting the data into some other corpus it gets stale. You can view it as two approaches: replication or federation. Am I going to replicate out of the database to some other thing or am I going to federate into the database? ... engineers know how to write good SQL queries. Whether they know how to write good English language description of the SQL queries is a completely different matter, but let’s assume for a second we can or we can have AI do it for us. Then the AI can figure out which tool to call for the user request and then generate the parameters. There are some things to worry about in terms of security. How can you set the right secure parameters? What parameters are the LLM allowed to set versus not allowed to set? ... When you combine structured and unstructured data, the next step is that it’s not just about exact results but about the most relevant results. In this sense databases start to have some of the capabilities of search engines, which is about relevance and ranking, and what becomes important is almost like precision versus recall for information retrieval systems. But how do you make all of this happen? One key piece is vector indexing. ... AI search is a key attribute of an AI-native database. And the other key attribute is AI functions. 
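
The question of which parameters the LLM may set can be enforced with an allowlist plus server-side injection of security-sensitive values. The tool name and parameter fields below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical per-tool policy: which parameters the LLM may set,
# and which the server always supplies from trusted session state.
TOOL_POLICY = {
    "search_orders": {
        "llm_settable": {"query", "limit"},
        "server_enforced": {"tenant_id"},  # never trusted from the model
    }
}

def build_tool_args(tool: str, llm_args: dict, session: dict) -> dict:
    policy = TOOL_POLICY[tool]
    # Drop anything the LLM is not explicitly allowed to set.
    args = {k: v for k, v in llm_args.items() if k in policy["llm_settable"]}
    # Overwrite sensitive parameters from the authenticated session.
    for k in policy["server_enforced"]:
        args[k] = session[k]
    return args
```

Even if the model is coaxed into proposing `tenant_id="evil"`, the server-enforced value from the session wins, which is one concrete answer to the security question the speaker raises.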


Cyber Risk Trends for 2026: Building Resilience, Not Just Defenses

On the defensive side, AI can accelerate detection and response, but tooling without guardrails will create fresh exposures. Your questions as a board should be: Where have we embedded AI in critical workflows? How do we assure the provenance and integrity of the data those models touch? Are we red-teaming our AI-enabled processes, not just our perimeter? ... Second, third-party ecosystems present an expanding attack surface. The risk isn’t abstract: it’s a payroll provider outage that stops salaries, a logistics partner breach that stalls distribution, or a SaaS compromise that leaks your crown jewels. ... Third is quantum computing. Some will say it’s too early; some will say it’s too late. The pragmatic position is this: crypto agility is a business requirement now. Inventory where and how you use cryptography—applications, devices, certificates, key management, data at rest and in transit. Prioritize crown-jewel systems and long-lived data that must remain confidential for years. ... Fourth is the risk posed by geopolitics. We live in a more unstable world, and digital risk doesn’t respect borders. Conflicts spill into cyberspace, data sovereignty rules tighten, and critical components can become chokepoints overnight. ... We won’t repel every attack in 2026. But we can decide to bend rather than break. Resilience comes of age when it stops being a slogan and becomes a practiced capability—where governance, operations, technology, and people move as one.


Will there be a technology policy epiphany in 2026?

The UK government still seems implacably opposed to bringing forward any cross-sector, comprehensive AI legislation. Its one-liner in the 2024 King’s Speech said the government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” That seemed sparing at the time, and now seems extraordinarily overblown. ... Turning to crypto-asset regulation, 2026 will continue the journey from draft legislation being published on 15 December last year through to 25 October 2027 (yes, that’s meant to say 2027) for the current “go live” date. Already we have seen some definitional clarification and the arrival of new provisions related to market abuse, public offers and disclosures. ... A critical thread to all of this is cyber. The Cyber Security Bill receives its second reading in the Commons today, 6 January. I’m very much looking forward to the bill arriving in the Lords later in the Spring and would welcome your thoughts on what’s in and what currently is not. If that wasn’t enough for week one of 2026, we have the committee stage of the Crime and Policing Bill in the Lords tomorrow, Wednesday 7 January. ... By contrast, there is much chat on digital ID. A consultation is said to be coming this month with a draft bill in May’s speech. This has hardly been helped by the government last year hanging its digital ID coat all around illegal immigration, a more than unfortunate decision.


The Big Shift: Five Trends Show Why 2026 is About Getting to Value

The conversation shifts from “What can this AI do?” to “What problem does it solve, and how much value does it unlock?”—and the technology that wins won’t be the most sophisticated but the one that directly accelerates revenue, reduces friction in customer-facing workflows, or demonstrably improves employee productivity within a 12-month payback window. Crawford says this is “getting back to brass tacks.” “Organizations will carefully define their business objectives, whether customer engagement, revenue growth, employee productivity, or whatever it needs to be, before selecting a technology,” he says. ... In 2026, if your digital transformation project can’t demonstrate meaningful return within twelve months, it competes for oxygen with projects that can, and many won’t survive that fight, Batista says. This compression of payback expectations reflects a fundamental shift in how CFOs and boards view technology investments. Initiatives based on regulatory or compliance requirements—things mandated by law, for example—still justify longer timelines, but discretionary projects face much stricter scrutiny, Batista says. ... When it comes to limiting factors in scaling successful AI deployments, Crawford says the top issue will be failures in AI governance. “AI governance will be the bottleneck that constrains an enterprise’s ability to scale AI, not AI capability itself. And enterprises rushing to deploy autonomous agents without governance infrastructure will face either painful reworks or serious operational issues.”


Why CES 2026 Signals The End Of ‘AI As A Tool’

The idea of AI as a coordinating layer or “ambient background” across entire ecosystems of tools and devices was also prominent this year. Samsung outlined its vision of AI companions for everyday life, demonstrating how smart appliances will form an intelligent background fabric to our day-to-day activities. As well as in the home, Samsung is a key player in industrial technology, where the same principle will see AI coordinating and optimizing operations across smart, connected enterprise systems. ... First, it’s clear that today’s leading manufacturers and developers believe that the future of AI lies in agentic, always-on systems, rather than free-standing, isolated tools and applications. Just as consumer AI now coordinates home and entertainment technology, enterprise AI will orchestrate workflows, schedules, documents, data and codebases, anticipating business needs and proactively solving problems before they occur. Another thing that can’t be overlooked is that consumer technology clearly shapes our expectations and tolerances of enterprise technology. Workplace AI that doesn’t live up to the seamless, friction-free experiences provided by consumer AI will quickly cause frustration, limiting adoption and buy-in. ... As this AI infrastructure becomes more capable, the role of employees will shift, too, from executing routine tasks to supervising automated processes, as well as applying uniquely human skills to challenges that machines still can’t tackle. 


Build Resilient cloudops That Shrug Off 99.95% Outages

If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration. We encode risk controls in Terraform so they’re enforced before a resource even exists. Tagging, encryption, backup retention, network egress—these are all policy. We don’t rely on code reviews to catch missing encryption on a bucket; the pipeline fails the plan. That’s how cloudops scales across teams without nag threads. ... If you’re starting from scratch, standardize on OpenTelemetry libraries for services and send everything through a collector so you can change backends without code churn. Sampling should be responsive to pain—raise trace sampling when p95 latency jumps or error rates spike. Reducing cardinality in labels (looking at you, per-user IDs) will keep storage and costs sane. Most teams benefit from a small set of “stop asking, here it is” dashboards: request volume and latency by endpoint, error rate by version, resource saturation by service, and database health with connection pools and slow query counts. ... We don’t win medals for shipping fast; we win trust for shipping safely. Progressive delivery lets us test the actual change, in production, on a small slice before we blast everyone. We like canaries and feature flags together: canary catches systemic issues; flags let us disable risky code paths within a version. ... Reliability with no cost controls is just a nicer way to miss your margin. We give cost the same respect as latency: we define a monthly budget per product and a change budget per release.
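
The "sampling responsive to pain" idea above reduces to a tiny rate selector that a collector pipeline could consult per interval. The thresholds and rates here are illustrative assumptions, not tuned values:

```python
def choose_sample_rate(p95_ms: float, error_rate: float, baseline_p95_ms: float,
                       base_rate: float = 0.01, elevated_rate: float = 0.5) -> float:
    """Raise trace sampling when p95 latency jumps well above baseline
    or the error rate spikes; otherwise stay cheap."""
    if p95_ms > 2 * baseline_p95_ms or error_rate > 0.05:
        return elevated_rate
    return base_rate
```

During an incident (p95 triple the baseline, say) the sampler jumps from 1% to 50% of traces, so the expensive telemetry arrives exactly when the dashboards described above need it.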

Daily Tech Digest - November 28, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain



Security researchers caution app developers about risks in using Google Antigravity

“In Antigravity,” Mindgard argues, “’trust’ is effectively the entry point to the product rather than a conferral of privileges.” The problem, it pointed out, is that a compromised workspace becomes a long-term backdoor into every new session. “Even after a complete uninstall and re-install of Antigravity,” says Mindgard, “the backdoor remains in effect. Because Antigravity’s core intended design requires trusted workspace access, the vulnerability translates into cross-workspace risk, meaning one tainted workspace can impact all subsequent usage of Antigravity regardless of trust settings.” For anyone responsible for AI cybersecurity, says Mindgard, this highlights the need to treat AI development environments as sensitive infrastructure, and to closely control what content, files, and configurations are allowed into them. ... Swanda recommends that app development teams building AI agents with tool-calling assume all external content is adversarial; use strong input and output guardrails, including around tool calling; strip any special syntax before processing; implement tool execution safeguards; require explicit user approval for high-risk operations, especially those triggered after handling untrusted content or other dangerous tool combinations; and not rely on prompts for security. System prompts, for example, can be extracted and used by an attacker to influence their attack strategy.


How AI Is Rewriting The Rules Of Work, Leadership, And Human Potential

When a CEO tells his team, "AI is coming for your jobs, even mine," you pay attention. It is rare to hear that level of blunt honesty from any leader, let alone the head of one of the world's largest freelance platforms. Yet this is exactly how Fiverr co-founder and CEO Micha Kaufman has chosen to guide his company through the most significant technological shift of our lifetimes. His blunt assessment: AI is coming for everyone's jobs, and the only response is to get faster, more curious, and fundamentally better at being human. ... We're applying AI to existing workflows and platforms, seeing improvements, but not yet experiencing the fundamental restructuring that's coming. "It is mostly replacing the things we used to do as human beings, acting as robots," Kaufman observes. The repetitive tasks, the research gathering, the document summarizing, these elements where humans brought judgment but little humanity are being automated first. ... It's not enough to use the obvious AI tools in obvious ways. The real value emerges from those who push boundaries, combine systems creatively, or bring exceptional judgment to AI-assisted workflows. Kaufman points to viral videos created with advanced AI tools, noting that their quality stems not from the AI itself but from the operator's genius, experience, creativity, and taste developed over years.


How ‘digital twins’ could help prevent cyber-attacks on the food industry

A digital twin is a virtual replica of any product, process, or service, capturing its state, characteristics, and connections with other systems throughout its life cycle. The digital twin also includes the computer systems used by the company. It can help because conventional defences are increasingly out of step with cyber-attacks: monitoring tools tend to detect anomalies only after damage occurs, and complex computer systems can often obscure the origins of breaches. A digital twin creates a bridge between the physical and digital worlds. It allows organisations to simulate real-time events, predict what might happen next, and safely test potential responses. It can also help analyse what happened after a cyber-attack to help companies prepare for future incidents. ... A digital twin might be able to avert disaster under this scenario. By combining operational data such as temperature, humidity, or the speed of air flow with internal computing system data on intrusion attempts, digital twins offer a unified view of both system performance and cybersecurity. They enable organisations to simulate cyber-attacks or equipment failures in a safe, controlled digital environment, revealing vulnerabilities before attackers can exploit them. A digital twin can also detect abnormal temperature patterns, monitor the system for malicious activity, and perform analysis after a cyber-attack to identify the causes.
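The "unified view" idea can be illustrated with a minimal sketch: a twin that records both operational readings and security events, and flags temperature excursions that coincide with intrusion attempts. The class name, thresholds, and time window here are illustrative assumptions, not from any real digital-twin product.

```python
from dataclasses import dataclass, field

@dataclass
class ColdStoreTwin:
    """Toy digital twin correlating operational and security telemetry."""
    temp_limit_c: float = 5.0
    readings: list = field(default_factory=list)          # (timestamp, temp_c)
    intrusion_events: list = field(default_factory=list)  # timestamps

    def record_temp(self, ts: int, temp_c: float) -> None:
        self.readings.append((ts, temp_c))

    def record_intrusion_attempt(self, ts: int) -> None:
        self.intrusion_events.append(ts)

    def suspicious_excursions(self, window: int = 60) -> list:
        """Temperature excursions within `window` seconds of an intrusion
        attempt: a possible sign of a cyber-physical attack rather than
        an ordinary equipment fault."""
        return [
            (ts, temp)
            for ts, temp in self.readings
            if temp > self.temp_limit_c
            and any(abs(ts - its) <= window for its in self.intrusion_events)
        ]
```

Correlating the two streams is what lets the twin distinguish a failing compressor from a tampered one, which neither data source could show on its own.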


Why password management defines PCI DSS success

When you dig into real incidents involving payment data, a surprising number come down to poor password hygiene. PCI DSS v4.0 raised the bar for authentication, and the responsibility sits with security leaders to turn those requirements into workable daily habits for users and admins. ... Requirement 8 asks organizations to verify the identity of every user with strong authentication, make sure passwords and passphrases meet defined strength rules, prevent credential reuse, limit attempts, and store credentials securely. Passwords need to be at least 12 characters long, or at least 8 characters when a system cannot support longer strings. These rules line up with guidance from NIST SP 800-63B, which recommends longer passphrases, resistance against common word lists, and hashing methods that protect stored secrets. ... PCI DSS requires that access be traceable to an individual and that shared accounts be minimized and controlled. When passwords live across multiple channels, it becomes nearly impossible to show auditors reliable evidence of access history. Even if the team is trying hard, the workflow itself creates gaps that no policy document can fix. ... Some CISOs view password managers as convenience tools. PCI DSS v4.0 shows that they are closer to compliance tools because they make it possible to enforce identity controls across an organization.
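The Requirement 8 rules above translate naturally into code: a length check (12 characters, or 8 where a system cannot support more), a common-password blocklist in the spirit of NIST SP 800-63B, and salted hashing for storage. This is a hedged sketch; the blocklist contents and PBKDF2 iteration count are illustrative choices, not values mandated by PCI DSS.

```python
import hashlib
import os
import secrets

# Illustrative blocklist; real deployments use breached-password lists.
COMMON_PASSWORDS = {"password", "123456789012", "qwertyuiop12"}

def meets_policy(password: str, system_max_len: int = 128) -> bool:
    """12-char minimum, relaxed to 8 only when the system can't do longer."""
    min_len = 12 if system_max_len >= 12 else 8
    return len(password) >= min_len and password.lower() not in COMMON_PASSWORDS

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only a salted PBKDF2 hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)  # constant-time compare
```

A password manager effectively runs checks like `meets_policy` at creation time for every account, which is what makes the controls auditable rather than aspirational.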



AI fluency in the enterprise: Still a ‘horseless carriage’

Companies are tossing AI agents onto existing processes, but a transformative change — where AI is the boss — is still far away. That was the view of IT leaders at this year’s Microsoft Ignite conference who’ve been putting AI agents to work, mostly with legacy processes. The IT leaders discussed their efforts during a conference panel at the event earlier this month. “We’re probably living in some version of the horseless carriage — we haven’t got to the car yet,” said John Whittaker, director of AI platform and products at accounting and consulting firm EY. ... Pfizer is very process-centric, he said, stressing that the goal is not to reinvent processes right out of the gate. The company is analyzing how AI works for them, gaining confidence in the technology before reorganizing processes within the AI lens. “Where we’re definitely heading … is thinking about, ‘I’ve solved this process, I’ve been following exactly the way it exists today. Now let’s blow it up and reimagine it…’ — and that’s exciting,” he said. ... Lumen is now looking at where it wants the business to be in 36 months and linking it to AI agents and AI-native plans. “We’re … working back from that and ensuring that we have the right set of tools, the right set of training, and the right set of agents in order to enable that,” he said. Every new Lumen employee in Alexander’s connected ecosystem group gets a Copilot license. The technology has helped speed up the process of understanding acronyms and historical trends within the company.


Creating Impactful Software Teams That Continuously Improve

If you prefer your job to be strictly defined, with clear boundaries, you feel supported rather than stifled by a boss who checks in on you regularly. In that kind of culture you will feel relaxed, happy, and content, which will in turn allow you to bring your best to your job and deliver to your strengths, Žabkar Nordberg said. You do not want employees who are extensions of yourself, Žabkar Nordberg said. Instead, you want people who will bring their own thoughts, their own solutions, and in many ways be different and better than yourself. ... Provide guidance, step away, and let people have autonomy within those constraints. You might say something like "I would like you to focus on improving our customer retention. Be aware that legal regulations require all steps in our current onboarding journey to be present, but we have flexibility in how we execute them as the user experience is not prescribed". This gives people guidance and focuses them, but still gives them the autonomy to bring their own experiences and find their own solutions. ... We want people to show initiative and proactively bring their own thoughts, improvements, and worries. Clear communication and an understanding of how people work will help them do that, Žabkar Nordberg said. Psychological safety underpins trust, autonomy, and communication; it is required for them to work effectively, he concluded.


Empathetic policy engineering: The secret to better security behavior and awareness

Insecure behavior is often blamed on users, when the problem frequently lies in the measure itself. In IT security research, the focus is often on individual user behavior — for example, on whether secure behavior depends on personality traits. The question of how well security measures actually fit the reality of work — that is, how likely they are to be accepted in everyday practice — is neglected. For every threat, there are usually several available security measures. But differences in effort, acceptance, compatibility, or complexity are often not taken into account in practice. Instead, security or IT departments often make decisions based solely on technical aspects. ... Security measures and guidelines are often communicated in a way that doesn’t resonate with users’ work reality because they don’t aim to engage employees and motivate them: for example, through instructions, standard online training, or overly playful formats like comics that employees don’t take seriously. ... The limited success of many security measures is not solely due to the users — often it’s unrealistic requirements, a lack of involvement, and inadequate communication. For security leaders, this means: instead of relying on education and sanctions, a strategic paradigm shift is needed. They should become a kind of empathetic policy architect whose security strategy not only works technically but also resonates on a human level.


Agentic AI is not ‘more AI’—it’s a new way of running the enterprise

Agentic AI marks a shift from simply predicting outcomes or offering recommendations to systems that can plan tasks, take actions and learn from the results within defined guardrails. In practical terms, this means moving beyond isolated, single-task copilots towards coordinated “swarms” of agents that continually monitor signals, trigger workflows across systems, negotiate constraints and complete loops with measurable outcomes. ... A major barrier is trust and control. Leaders remain cautious about allowing software to take autonomous actions. Graduated autonomy provides a path forward: beginning with assistive tools, moving to supervised autonomy with reversible actions and eventually deploying narrow, fully autonomous loops when KPIs and rollback mechanisms have been validated. Lack of clarity on value is another obstacle. Impressive demonstrations do not constitute a strategy. Organisations should use a jobs-to-be-done perspective and tie each agent to a specific financial or risk objective, such as days-sales-outstanding, mean time to resolution, inventory turns or claims leakage. Analysts have warned that many agentic initiatives will be cancelled if value remains vague, so clear scorecards and time-boxed proofs of value are essential. Data readiness is a further challenge. Weak lineage, uncertain ownership and inconsistent quality stop AI scaling efforts in their tracks.
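The graduated-autonomy path described above (assistive, then supervised with reversible actions, then narrow autonomous loops gated on validated KPIs and rollback) can be sketched as a simple decision gate. The tier names and return values are illustrative assumptions for the example, not a published framework.

```python
from enum import Enum

class Autonomy(Enum):
    ASSISTIVE = 1   # agent suggests only; a human executes
    SUPERVISED = 2  # agent may execute reversible actions itself
    AUTONOMOUS = 3  # narrow, fully autonomous loop

def decide(tier: Autonomy, reversible: bool, kpi_validated: bool) -> str:
    """Gate an agent's proposed action by its autonomy tier."""
    if tier is Autonomy.ASSISTIVE:
        return "suggest"
    if tier is Autonomy.SUPERVISED:
        # Supervised agents act on their own only when the action
        # can be rolled back; otherwise a human decides.
        return "execute" if reversible else "escalate"
    # Even autonomous loops run only where KPIs and rollback
    # mechanisms have been validated in advance.
    return "execute" if kpi_validated else "escalate"
```

The value of making the gate explicit is that trust becomes a configuration decision that can be widened tier by tier as evidence accumulates, rather than an all-or-nothing bet.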


6 strategies for CIOs to effectively manage shadow AI

“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.” ... “The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring. ... “Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. ... “Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories (what’s permitted, what needs approval, and what’s prohibited) and communicating that framework through leadership briefings, onboarding, and internal portals. ... Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.


It’s Time to Rethink Access Control for Modern Development Environments

When faced with the time-consuming complexity of managing granular permissions across dozens of development tools, most VPs of Engineering and CTOs opt for the path of least resistance, granting broad administrative privileges to entire engineering teams. It’s understandable from a productivity standpoint; nobody wants to be a bottleneck when a critical release is imminent, or explain to the CEO why they missed a market window because a developer couldn’t access a repository. However, when everyone has admin privileges, attackers who gain access to just one set of credentials can do tremendous damage. They gain not just access to sensitive code and data, but the ability to manipulate build processes, insert malicious code, or establish persistent backdoors. This problem becomes even more dangerous when combined with the prevalence of shadow IT, non-human identities, and contractor relationships operating outside your security perimeter. ... The answer to stronger security that doesn’t hinder developer productivity lies in implementing just-in-time permissioning within the SDLC, a concept successfully adopted from cloud infrastructure management that can transform how we handle development access controls. The approach is straightforward: instead of granting permanent administrative access to everyone, take 90 days to observe what developers actually need to do their jobs, then right-size their permissions accordingly.
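The just-in-time model replaces standing admin roles with scoped, automatically expiring grants on top of a default-deny baseline. The sketch below is an assumed minimal implementation to show the shape of the idea; the class name, permission strings, and TTL are illustrative.

```python
import time

class JITAccess:
    """Default-deny access store with time-boxed, per-permission grants."""

    def __init__(self) -> None:
        # (user, permission) -> expiry time in epoch seconds
        self._grants: dict[tuple[str, str], float] = {}

    def grant(self, user: str, permission: str, ttl_seconds: int = 3600) -> None:
        """Issue a time-boxed grant instead of a standing admin role."""
        self._grants[(user, permission)] = time.time() + ttl_seconds

    def is_allowed(self, user: str, permission: str) -> bool:
        """Allow only if an unexpired grant exists; everything else denies."""
        expiry = self._grants.get((user, permission))
        return expiry is not None and time.time() < expiry
```

Because every grant expires on its own, a stolen credential is only as dangerous as the grants active at the moment of theft, which is exactly the blast-radius reduction the article argues for.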