Showing posts with label cryptography. Show all posts
Showing posts with label cryptography. Show all posts

Daily Tech Digest - April 19, 2026


Quote for the day:

“In the end, it is important to remember that we cannot become what we need to be by remaining what we are.” -- Max De Pree


🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 20 mins • Perfect for listening on the go.


Beyond the degree: What education must become in the age of AI

The Firstpost opinion piece titled "Beyond degree: Education in the age of AI" explores the fundamental disruption of traditional academic structures caused by rapid artificial intelligence advancements. It argues that the era where a degree served as a definitive lifelong credential is coming to an end, replaced by a pressing need for continuous, skill-based learning. As AI increasingly automates technical and administrative tasks, the article posits that the uniquely human advantage now lies in higher-order cognitive and ethical functions. Specifically, education must evolve to prioritize the ability to formulate the right questions, critically evaluate AI-generated outputs, and maintain firm personal accountability for decisions that impact society. Rather than focusing on rote memorization—which has been rendered redundant by ubiquitous digital tools—future curricula should nurture curiosity, empathy, and cross-disciplinary thinking. The author highlights that while AI democratizes knowledge through personalized learning, it also necessitates a profound shift in how we value intelligence, moving away from rigid institutional metrics toward adaptable, lifelong expertise. Ultimately, the piece concludes that the most successful individuals in an automated economy will be those who combine technological proficiency with the critical judgment and human-centric values required to guide AI responsibly. By fostering these unique human traits, the educational system can better prepare students for a complex, technology-driven future.
In her article, Angela Zhao addresses a critical architectural flaw in modern AI agent infrastructure: the lack of "Decision Coherence." Current systems typically fragment critical data across relational databases, feature stores, and vector databases, with each component operating without a shared transactional boundary. This fragmentation creates a "seam problem" where agents retrieve inconsistent, disparate views of reality—such as current account balances paired with stale behavioral signals or outdated semantic embeddings. Consequently, agents may make incorrect, irreversible decisions, particularly in high-concurrency environments like financial transaction approvals or resource allocation. To bridge this gap, Zhao introduces the concept of the "Context Lake," a system class specifically designed to enforce Decision Coherence. Unlike traditional decoupled stacks, a Context Lake integrates episodic events, semantic transformations, and procedural rules within a single transactional scope. This ensures that every decision-making context is internally consistent, semantically enriched, and strictly bounded in freshness. By moving semantic computations—like embedding generation—inside the system boundary, the Context Lake eliminates the asynchronous delays that plague existing architectures. Based on research by Xiaowei Jiang, this emerging infrastructure layer is essential for production-grade AI agents that manage fast-changing, shared states and require absolute correctness to avoid costly operational failures or system-wide logic errors.


The Algorithmic Arms Race: Navigating the Age of Autonomous Attacks

In the article "The Algorithmic Arms Race," Kannan Subbiah explores the paradigm shift from human-led cyberattacks to the rise of autonomous Cyber Reasoning Systems. This transition marks an evolution from traditional automated scripts to cognitive AI agents capable of independent reasoning, real-time adaptation, and executing the entire cyber kill chain at machine speed. Subbiah details the anatomy of these autonomous attacks, highlighting how they compress reconnaissance, weaponization, and lateral movement into rapid, self-directed sequences that outpace human intervention. Through case studies like Operation Cyber Guardian and the Shai-Hulud supply chain siege, the author illustrates a future where malware independently manages its own obfuscation and identifies obscure vulnerabilities. To counter these sophisticated threats, the article advocates for a "fighting fire with fire" strategy, urging organizations to deploy Autonomous Security Operations Centers, Moving Target Defense, and hyper-segmented Zero Trust architectures. Furthermore, Subbiah emphasizes the necessity of integrated risk analytics, mandatory Software Bill of Materials, and adversarial red teaming where AI systems challenge one another. Ultimately, the narrative stresses that in an era of machine-speed conflict, human-centric defense models are no longer sufficient; instead, organizations must embrace autonomous, resilient infrastructures while maintaining human oversight as a final ethical and operational kill switch.


Workplace stress in 2026 is still worse than before the pandemic

The 2026 Workplace Stress Report from Help Net Security highlights a concerning trend: employee stress remains significantly higher than pre-pandemic levels, with global engagement hitting a five-year low. According to Gallup’s latest findings, roughly 40% of workers worldwide experience daily stress, while negative emotions like anger and sadness persist at elevated rates. This lack of engagement is not just a cultural issue but a massive economic burden, costing the global economy approximately $10 trillion in lost productivity, or 9% of global GDP. The report indicates that managers and leaders are bearing the brunt of this emotional weight, reporting higher levels of loneliness and stress compared to individual contributors. Demographic disparities are also evident, as women and workers under the age of 35 report higher stress levels than their peers. Geographically, the United States and Canada lead the world in daily stress at 50%. Interestingly, the study finds that work location plays a role, with hybrid and remote-capable employees experiencing more stress than those in fully remote or strictly on-site roles. Ultimately, the data suggests that organizational success is deeply tied to emotional wellbeing, as engaged leaders are far more likely to thrive and mitigate the negative impacts of workplace pressure.


Most enterprises can't stop stage-three AI agent threats, VentureBeat survey finds

According to a recent VentureBeat survey, a significant security gap exists as enterprises struggle to defend against "stage-three" AI agent threats. The survey identifies a three-stage maturity model: Stage 1 focuses on observation, Stage 2 on enforcement via Identity and Access Management (IAM), and Stage 3 on isolation through sandboxed execution. While monitoring investment has surged to 45% of security budgets, most organizations remain trapped at the observation stage, leaving them vulnerable to sophisticated agentic failures where traditional guardrails prove insufficient. Data from Gravitee and the Cloud Security Alliance underscores this readiness gap, noting that only 21.9% of teams treat AI agents as distinct identity-bearing entities, while 45.6% still rely on shared API keys. This structural weakness allows for rapid lateral movement and unauthorized actions, which 72% of CISOs identify as their top priority. Despite the high demand for robust permissioning, current enterprise infrastructure often lacks the necessary runtime enforcement to contain a "blast radius" when agents go rogue. The survey highlights that while agents are already operating with privileged access to siloed data, security teams are lagging behind in providing the isolation required to stop the next wave of autonomous exploits and supply-chain breaches.


Empty Attestations: OT Lacks the Tools for Cryptographic Readiness

Operational technology (OT) systems face a critical security gap as regulators increasingly demand attestations of post-quantum cryptographic readiness despite a severe lack of specialized auditing tools. Unlike IT environments, which prioritize confidentiality and can be regularly updated, OT infrastructure focuses primarily on availability and often relies on decades-old legacy hardware with minimal processing power. This makes the implementation of modern cryptographic standards exceptionally difficult, as many devices lack the memory to execute post-quantum algorithms or have encryption hard-coded into immutable firmware. Consequently, asset owners are often forced to treat security compliance as a box-ticking exercise, producing paperwork that provides a false sense of assurance rather than genuine protection. This vulnerability is compounded by "harvest now, decrypt later" tactics and the risk of stolen firmware signing keys, which allow adversaries to maintain long-term access and potentially push malicious updates. Without OT-specific frameworks and instrumentation, these systems remain exposed to sophisticated threats like Volt Typhoon. To truly secure critical infrastructure, industry leaders and regulators must acknowledge that current IT-centric assessment models are insufficient, requiring a shift toward developing practical tools that account for the unique operational constraints and long life cycles inherent in industrial environments.


Business Risk: How It’s Changing In The Digital Economy

The digital economy has fundamentally transformed the landscape of business risk, shifting focus from traditional financial and operational concerns toward complex, technology-driven vulnerabilities. According to experts from the Forbes Business Council, risk is no longer a separate "balance sheet" issue but is now embedded in every design choice and organizational decision. Key emerging threats include data vulnerability, algorithmic bias, and cyber risks that extend across entire supply chains via sophisticated social engineering. Notably, the rapid adoption of artificial intelligence introduces "invisible" risks, such as business models quietly becoming obsolete or conflicting AI agents causing critical system outages. Furthermore, companies face unprecedented challenges regarding digital visibility and public perception; in an oversaturated market, being unseen or suffering from viral reputation damage can be as detrimental as direct financial loss. Managing these dynamic parameters requires a shift from reactive detection to proactive, upstream governance and a focus on organizational adaptability. Ultimately, the modern definition of risk centers on a firm's ability to match its cognitive capabilities with the increasing speed and non-linearity of the digital environment. To survive, leaders must move beyond standard business formulas, integrating real-time intelligence and human-centered context to navigate the uncertainty inherent in a data-driven world.


Building your cryptographic inventory: A customer strategy for cryptographic posture management

As post-quantum cryptography approaches, Microsoft emphasizes that the primary challenge for organizations is not selecting new algorithms, but discovering existing cryptographic assets. This Microsoft Security blog post outlines a strategy for building a cryptographic inventory as the foundation of Cryptography Posture Management (CPM). A cryptographic inventory is defined as a dynamic catalog encompassing certificates, keys, protocols, and libraries used across an enterprise. To manage these effectively, Microsoft proposes a continuous six-stage lifecycle: discovery, normalization, risk assessment, prioritization, remediation, and ongoing monitoring. This approach spans four critical domains—code, network, runtime, and storage—ensuring visibility into everything from source code primitives to active network sessions. Organizations can leverage existing tools like GitHub Advanced Security for code analysis, Microsoft Defender for Endpoint for runtime signals, and Azure Key Vault for centralized key management to simplify this process. Rather than a one-time project, CPM requires clear ownership and documented policy baselines to maintain security hygiene and achieve "crypto agility." By establishing these practices now, businesses can proactively identify vulnerabilities, comply with emerging global regulations, and ensure a resilient transition to a quantum-safe future. Through strategic integration of Microsoft capabilities and partner solutions, teams can transform complex cryptographic landscapes into manageable, risk-informed systems.


The Rise of Intelligent Automation: How Technology Is Redefining Work and Efficiency

The rise of intelligent automation (IA) is fundamentally reshaping the financial landscape by blending artificial intelligence with robotic process automation to create more agile, efficient, and strategic work environments. According to Global Banking & Finance Review, this shift is not merely about replacing manual labor but about redefining the nature of work itself. By automating repetitive and high-volume tasks—such as data entry, reconciliation, and compliance checks—organizations can significantly reduce human error and operational costs while accelerating processing speeds. Beyond mere efficiency, IA empowers financial institutions to leverage advanced analytics for real-time decision-making and hyper-personalized customer experiences, such as tailored loan products and instant virtual assistance. This technological evolution allows human professionals to pivot from mundane administrative roles toward high-value activities like strategic planning and creative problem-solving. Furthermore, IA enhances risk management through proactive fraud detection and seamless regulatory adherence, providing a robust framework for digital transformation. As the industry moves toward autonomous financial operations, embracing these intelligent systems becomes a competitive necessity. Ultimately, the integration of intelligent automation fosters a culture of innovation, ensuring that financial services remain resilient, secure, and customer-centric in an increasingly complex and data-driven global market.


World targets central IDV, AI agent management role with selfie biometrics

World has unveiled a major strategic expansion aimed at becoming the primary identity verification (IDV) layer for an economy increasingly dominated by agentic AI. Central to this update is the introduction of "Selfie Check," a face biometric and liveness detection service that provides a lower-assurance alternative to its high-level iris-based verification. This shift positions World as a versatile IDV provider, allowing apps to pay for proof of personhood to combat bots and deepfakes. Key features include the "Deep Face" tool, which integrates with platforms like Zoom to offer hardware-backed "root of trust" for real-time presence verification. Beyond individual authentication, the new World ID app introduces AI agent management and delegation tools, supported by partnerships with industry leaders such as AWS, Okta, and Shopify. These updates represent a comprehensive reengineering of the World stack, incorporating privacy-enhancing technologies like multi-party entropy and key rotation to keep user data unlinkable. By diversifying its verification methods and focusing on the governance of autonomous digital agents, World seeks to monetize its infrastructure as a global trust anchor. This evolution reflects a broader market push to align biometric credentials with the evolving demands of AI-driven interactions, securing human identity in an increasingly automated world.

Daily Tech Digest - February 26, 2026


Quote for the day:

"It is not such a fierce something to lead once you see your leadership as part of God's overall plan for his world." -- Calvin Miller



Boards don’t need cyber metrics — they need risk signals

Decision-makers want to know whether risk is increasing or decreasing, whether controls are effective, and whether the organization can limit damage when prevention fails. Metrics are therefore useful when they clarify those questions. “Time is really the universal metric because everyone can understand time,” Richard Bejtlich, strategist and author in residence at Corelight, tells CSO. “How fast do we detect problems, and how fast do we contain them. Dwell time, containment time. That’s the whole game for me.” Organizations cannot prevent every intrusion, Bejtlich argues, but they can measure how quickly they recognize and contain one. ... Wendy Nather, a longtime CISO who is now an advisor at EPSD, cautions against equating measurement with understanding. “When you are reporting to the board, there are some things you just cannot count that you have to report anyway,” she tells CSO. She points to incidents, near misses, and changes in assumptions as examples. “Anything that changes your assumptions about how you’re managing your security program, you should be bringing those to the board, even if you can’t count them,” Nather says. Regular metrics can create a rhythm of predictability, and that predictability could lull board members into a false sense of security. “Metrics are very seductive,” she says. “They lead us toward things that can be counted, that happen on a regular basis.” The result may be a steady flow of data that obscures structural risk or emerging weaknesses, Nather warns. 


The Enterprise AI Postmortem Playbook: Diagnosing Failures at the Data Layer

Your first rule of the playbook is to treat AI incidents as data incidents – until proven otherwise. You should start by tagging the failure type. Document whether it’s a structure issue, retrieval misalignment, conflict with metric definition, or other categories. Ideally, you want to assign the issue to an owner and attach evidence to force some discipline into the review. Try to classify the issue into clearly defined buckets. For example, you can classify into these four buckets: structural failure, retrieval misalignment, definition conflict, or freshness failure. Once this part is clear, the investigation becomes more focused. The goal with this step is to isolate the data fault line. ... The next step is to move one layer deeper. Identify the source table behind the retrieved context. You also want to confirm the timestamp of the last refresh. Check whether any ingestion jobs failed, partially completed, or ran late. Silent failures are common. A job may succeed technically while loading incomplete data. As you go through the playbook continue tracing upstream. Find the transformation job that shaped the dataset. Look at recent schema changes. Check whether any business rules were updated. The idea here is to rebuild the exact path that led to the output. Try to not make any assumptions at this stage about model behavior – simply keep tracing until the process is complete. Don’t be surprised if the model simply worked with what it was given.


Top Attacks On Biometric Systems (And How To Defend Against Them)

Presentation attacks, often referred to as spoofing attacks, occur when an attacker presents a fake biometric sample to a sensor (like a camera or microphone) in an attempt to impersonate a legitimate user. Common examples include printed photos, video replays, silicone masks, prosthetics or synthetic fingerprints. More recently, high-quality deepfake videos have become a powerful new tool in the attacker’s arsenal. ... Passive liveness techniques, which analyze subtle physiological and behavioral signals without requiring user interaction, are particularly effective because they reduce friction while improving security. However, liveness detection must be resilient to unknown attack methods, not just tuned to detect known spoof types. ... Not all biometric attacks happen in front of the sensor. Replay and injection attacks target the biometric data pipeline itself. In these scenarios, attackers intercept, replay or inject biometric data, such as images or templates, directly into the system, bypassing the sensor entirely. ... Defensive strategies must extend beyond the biometric algorithm. Secure transmission, encryption in transit, device attestation, trusted execution environments and validation that data originates from an authorized sensor are all essential. ... Although less visible to end users, attacks targeting biometric templates and databases can pose long-term risks. If biometric templates are compromised, the impact extends far beyond a single breach.


Open-source security debt grows across commercial software

High and critical risk findings remain widespread. Most codebases contain at least one high risk vulnerability, and nearly half contain at least one critical risk issue. Those rates dipped slightly from the prior year even as total vulnerability counts rose. Supply chain attacks add another layer of risk. Sixty five percent of surveyed organizations experienced a software supply chain attack in the past year. ... “As AI reshapes software development, security teams will have to continue to adapt in turn. Security budgets and security guidelines should reflect this new reality. Leaders should continue to invest in tooling and education required to equip teams to manage the drastic increase in velocity, volume, and complexity of applications,” Mackey said. Board level reporting also requires adjustment as vulnerability volumes rise. ... Outdated components appear in nearly every audited environment. More than nine in ten codebases contain components that are several years out of date or show no recent development activity. A large share of components run many versions behind current releases. Only a small fraction operate on the latest available version. This maintenance debt intersects with regulatory obligations. The EU Cyber Resilience Act entered into effect in late 2024, with key reporting requirements taking effect in 2026 and broader enforcement following in 2027. 


The agentic enterprise: Why value streams and capability maps are your new governance control plane

The enterprise is currently undergoing a seismic pivot from generative AI, which focuses on content creation, to agentic AI, which focuses on goal execution. Unlike their predecessors, these agents possess “structured autonomy”: the ability to perceive contexts, plan actions and execute across systems without constant human intervention. For the CIO and the enterprise architect, this is not merely an upgrade in automation speed; it is a fundamental shift in the firm’s economic equation. We are moving from labor-centric workflows to digital labor capable of disassembling and reassembling entire value chains. ... In an agentic enterprise, the value stream map is no longer just a diagram; it is the control plane. It must explicitly define the handoff protocols between human and digital agents. In my opinion, Value stream maps must move from static documents stored in a repository to context documents used to drive agentic automation. ... If a value stream does not exist, you cannot automate it. For new agentic workflows, do not map the current human process. Instead, use an outcome-backwards approach. Work backward from the concrete deliverable (e.g., customer onboarded) to identify the minimum viable API calls required. Before granting write access, run the new agentic stream in shadow mode to validate agent decisions against human outcomes.


Beyond compliance: Building a culture of data security in the digital enterprise

Cyber compliance is something organisations across industrial sectors take seriously, especially with new regulations getting introduced and non-compliance having consequences such as hefty penalties. Hence, businesses are placing compliance among their top priorities. However, hyper-focusing only on compliance can lead to tunnel vision, crippling creativity, and innovation. It fails to offer a comprehensive risk assessment due to the checklist approach it follows, exposing organizations to vulnerabilities and fast-evolving threats. Having a compliance-first mindset can lead to incomplete risk assessment, creating blind spots and security gaps in security provisions. ... With businesses relying on data for operations, customer engagement, and decision-making, ensuring data security protects both users and organisations. Data breaches have severe consequences, including financial losses, reputational damage, customer churn, and regulatory penalties. With data moving across on-premises data centers, cloud platforms, third-party ecosystems, remote work environments, and AI-driven applications, there is a need for a holistic, culture-driven approach to cybersecurity. ... Data protection traditionally was focused on safeguarding the perimeter by securing networks and systems within the physical boundaries where data was normally stored. 


If you thought RTO battles were bad, wait until AI mandates start taking hold across the industry

With the advent of generative AI and the incessant beating of the drum by executives hellbent on unlocking productivity gains, we could see a revival of the dreaded workforce mandate –- only this time with AI. We’ve already had a glimpse of the same RTO tactics being used with AI over the last year. In mid-2025, Microsoft introduced new rules aimed at boosting AI use across the company, with an internal memo warning staff that “using AI is no longer optional”. ... As with RTO mandates, we’re now reaching a point where upward mobility within the enterprise could be at risk as a result of AI use. It’s a tactic initially touted by Dell in 2024 when enforcing its own hybrid work rules, which prompted a fierce backlash among staff. Forcing workers to use AI or risk losing out on promotions will have the desired effect executives want, namely that employees will use the technology, but that’s missing the point entirely. AI has been framed by many big tech providers as a prime opportunity to supercharge productivity and streamline enterprise efficiency. We’ve all heard the marketing jargon. If business leaders are at the point where they’re forcing staff to use the technology, it begs the question of whether it’s actually having the desired effect, which recent analysis suggests it’s not. ... Recent analysis from CompTIA found roughly one-third of companies now require staff to complete AI training. 


In perfect harmony: How Emerald AI is turning data centers into flexible grid assets

At the core of Emerald AI is its Emerald Conductor platform. Described by Sivaram as “an AI for AI,” the system orchestrates thousands of AI workloads across one or more data centers, dynamically adjusting operations to respond to grid conditions while ensuring the facility maintains performance. The system achieves this through a closed-loop orchestration platform comprising an autonomous agent and a digital twin simulator. ... A point keenly pointed out by Steve Smith, chief strategy and regulation officer at National Grid, at the time of the announcement: “As the UK’s digital economy grows, unlocking new ways to flexibly manage energy use is essential for connecting more data centers to our network efficiently.” The second reason was National Grid's transatlantic stature - as an American company active in both the UK and US markets - and its commitment to the technology. “They’ve invested in the program and agreed to a demo, which makes them the ideal partner for our first international launch,” says Sivaram. The final, and most important, factor, notes Sivaram, was the access to the NextGrid Alliance, a consortium of 150 utilities worldwide. By gaining access to such a robust partner network, the deal could serve as a springboard for further international projects. This aligns with the company’s broader partnership approach. Emerald AI has already leveraged Nvidia’s cloud partner network to test its technology across US data centers, laying the groundwork for broader deployment and continued global collaboration. 


7 ways to tame multicloud chaos with generative AI

Architects have the difficult job of understanding tradeoffs between proprietary cloud services and cross-cloud platforms. For example, should developers use AWS Glue, Azure Data Factory, or Google Cloud Data Fusion to develop data pipelines on the respective platforms, or should they adopt a data integration platform that works across clouds? ... “Managing multicloud is like learning multiple languages from AWS, Azure, Oracle, and others, and it’s rare to have teams that can traverse these environments fluidly and effectively. Plus, services and concepts are not portable among clouds, especially in cloud-native PaaS services that go beyond IaaS,” says Harshit Omar, co-founder and CTO at FluidCloud. One way to work around this issue is to assign an AI agent to support the developer or architect in evaluating platform selections. ... Standardizing infrastructure and service configurations across different clouds requires expertise in different naming conventions, architecture, tools, APIs, and other paradigms. Look for genAI tools to act as a translator to streamline configurations, especially for organizations that can templatize their requirements. ... CI/CD, infrastructure-as-code, and process automation are key tools for driving efficiency, especially when tasks span multiple cloud environments. Many of these tools use basic flows and rules to streamline tasks or orchestrate operations, which can create boundary cases that cause process-blocking errors. 


It’s Time To Reinforce Institutional Crypto Key Management With MPC: Sodot CEO

For years, crypto security operations were almost exclusively focused on finding a way to protect the private keys to crypto wallets. It’s known as the “custody risk,” and it will always be a concern to anyone holding digital assets. However, Sofer believes that custody is no longer the weakest link. Cyberattackers have come to realize that secure wallets, often held in cold storage, are far too difficult to crack. ... Sodot has built a self-hosted infrastructure platform that leverages a pair of cutting-edge security techniques – namely, Multi-Party Computation or MPC and Trusted Execution Environments or TEEs. With Sodot’s platform, API keys are never reassembled in full plaintext, eliminating one of the main weaknesses of traditional secrets managers, which typically expose the entire key to any authenticated machine. Instead, Sodot uses MPC to split each key into multiple “shares” that are held by different partners on different technology stacks, Sofer explained. Distributing risk in this way makes an attacker’s job exponentially more difficult, as it means they would have to compromise multiple isolated systems to gain access. ... “Keys are here to stay, and they will control more value and become more sensitive as technology progresses,” Sofer concluded. “As financial institutions get more involved in crypto, we believe demand for self-hosted solutions that secure them will only grow, driven by performance requirements, operational resilience, and control over security boundaries.”

Daily Tech Digest - February 22, 2026


Quote for the day:

"If you care enough for a result, you will most certainly attain it." -- William James



The data center gold rush is warping reality

The real impact isn’t people—it’s power, land, transmission capacity, and water. When you drop 10 massive facilities into a small grid, demand spikes don’t just happen inside the fence line. They ripple outward. Utilities must upgrade substations, reinforce transmission lines, procure new-generation equipment, and finance these investments. ... Here’s the part we don’t say out loud often enough: High-tech companies are spending massive amounts of money on data centers because the market rewards them for doing so. Capital expenditures have become a kind of corporate signaling mechanism. On earnings calls, “We’re investing aggressively” has become synonymous with “We’re winning,” even when the investment is built on forecasts that are, at best, optimistic and, at worst, indistinguishable from wishful thinking. ... The bet is straightforward: When demand spikes, prices and utilization rise, and those who built first make bank. Build the capacity, fill the capacity, charge a premium for the scarce resource, and ride the next decade of digital expansion. It’s the same playbook we’ve seen before in other infrastructure booms, except this time the infrastructure is made of silicon and electrons, and the pitch is wrapped in the language of transformation. ... Then there’s the cost reality. AI systems, especially those that deliver meaningful, production-grade outcomes, often cost five to ten times as much as traditional systems once you account for compute, data movement, storage, tools, and the people required to run them responsibly.


Chip-processing method could assist cryptography schemes to keep data secure

Just like each person has unique fingerprints, every CMOS chip has a distinctive “fingerprint” caused by tiny, random manufacturing variations. Engineers can leverage this unforgeable ID for authentication, to safeguard a device from attackers trying to steal private data. But these cryptographic schemes typically require secret information about a chip’s fingerprint to be stored on a third-party server. This creates security vulnerabilities and requires additional memory and computation. ... “The biggest advantage of this security method is that we don’t need to store any information. All the secrets will always remain safe inside the silicon. This can give a higher level of security. As long as you have this digital key, you can always unlock the door,” says Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this security method. ... A chip’s PUF can be used to provide security just like the human fingerprint identification system on a laptop or door panel. For authentication, a server sends a request to the device, which responds with a secret key based on its unique physical structure. If the key matches an expected value, the server authenticates the device. But the PUF authentication data must be registered and stored in a server for access later, creating a potential security vulnerability.


What MCP Can and Cannot Do for Project Managers Today

The most mature MCPs for PM are official connectors from the platforms themselves. Atlassian’s Rovo MCP Server connects Jira and Confluence, generally available since late 2025. Wrike has its own MCP server for real-time work management. Dart exposes task creation, updates, and querying through MCP. ClickUp does not have an official MCP server, but multiple community implementations wrap its API for task management, comments, docs, and time tracking. ... Most PM work is human and stays human. No LLM replaces the conversation where you talk a frustrated team member through a scope change, or the negotiation where you push back on an unrealistic deadline from the sponsor. No LLM runs a planning workshop or navigates the politics of resource allocation. But woven through all of that is documentation. Every conversation, every decision, every planning session produces written output. The charter that captures what was agreed. ... Beyond documentation, scheduling is where I expected MCP to add the most computational value. This is where the investigation got interesting. Every PM builds schedules. The standard method is CPM: define tasks, set dependencies, estimate durations, calculate the critical path. MS Project does this. Primavera does this. A spreadsheet with formulas does this. CPM is well understood and universally used. CPM does exactly what it says: it calculates the critical path given dependencies and durations. 


How to Write a Good Spec for AI Agents

Instead of overengineering upfront, begin with a clear goal statement and a few core requirements. Treat this as a “product brief” and let the agent generate a more elaborate spec from it. This leverages the AI’s strength in elaboration while you maintain control of the direction. This works well unless you already feel you have very specific technical requirements that must be met from the start. ... Many developers using a strong model do exactly this. The spec file persists between sessions, anchoring the AI whenever work resumes on the project. This mitigates the forgetfulness that can happen when the conversation history gets too long or when you have to restart an agent. It’s akin to how one would use a product requirements document (PRD) in a team: a reference that everyone (human or AI) can consult to stay on track. ... Treat specs as “executable artifacts” tied to version control and CI/CD. The GitHub Spec Kit uses a four-phase gated workflow that makes your specification the center of your engineering process. Instead of writing a spec and setting it aside, the spec drives the implementation, checklists, and task breakdowns. Your primary role is to steer; the coding agent does the bulk of the writing. ... Experienced AI engineers have learned that trying to stuff the entire project into a single prompt or agent message is a recipe for confusion. Not only do you risk hitting token limits; you also risk the model losing focus due to the “curse of instructions”—too many directives causing it to follow none of them well. 


NIST’s Quantum Breakthrough: Single Photons Produced on a Chip

The arrival of quantum computing is future, but the threat is current. Commercial and federal organizations need to protect against quantum computing decryption now. Various new mathematical approaches have been developed for PQC, but while they may be theoretically secure, they are not provably secure. Ultimately, the only provably secure key distribution must be based on physics rather than math. ... While this basic approach is secure, it is neither efficient nor cheap. “Quantum key distribution is an expensive solution for people that have really sensitive information,” continues Bruggeman. “So, think military primarily, and some government agencies where nuclear weapons and national security are involved.” Current implementations tend to use available dark fiber that still has leasing costs. ... “The big advance from NIST is they are able to provide single photons at a time, as opposed to sending multiple photons,” continues Bruggeman. Single photons aren’t new, but in the past, they’ve usually been photons in a stream of photons. “So, they encode the key information on those strings, and that leads to replication. And in cryptography, you don’t want to have replication of data.” There is currently a comfort level in this redundancy, since if one photon in the stream fails, the next one might succeed. But NIST has separately developed Superconducting Nanowire Single-Photon Detectors (SNSPDs) which would allow single photons to be reliably sent and received over longer distances – up to 600 miles.


Quantum security is turning into a supply chain problem

The core issue is timing. Sensitive supplier and contract data has a long shelf life, and adversaries have already started collecting encrypted traffic for future decryption. This is the “harvest now, decrypt later” model, where encrypted records are stolen and stored until quantum computing becomes capable of breaking current public-key encryption. That creates a practical security problem for cybersecurity teams supporting procurement, third-party risk, and supply chain operations. ... There’s growing pressure to adopt post-quantum cryptography (PQC), including partner expectations, insurance scrutiny, and regulatory direction. It argues that PQC adoption is increasingly being driven through procurement requirements, especially from large enterprises and public-sector organizations. Vendors without a PQC roadmap may face longer audits or disqualification during sourcing decisions. ... Beyond cryptographic threats, the researchers argue that quantum computing may eventually improve supply chain risk management by addressing complex optimization problems that overwhelm classical systems. It describes supply chain risk as a “wicked problem,” where variables shift continuously and disruptions propagate in unpredictable ways. ... Quantum readiness spans both cybersecurity and supply chain management. For cybersecurity professionals, the near-term work focuses on long-term encryption durability across vendor ecosystems, along with cryptographic migration planning and third-party dependencies.


CEOs aren't seeing any AI productivity gains, yet some tech industry leaders are still convinced AI will destroy white collar work within two years

Most companies are yet to record any AI productivity gains despite widespread adoption of the technology. That's according to a massive survey by the US National Bureau of Economic Research (NBER), which asked 6,000 executives from a range of firms across the US, UK, Germany, and Australia how they use AI. The study found 70% of companies actively use AI, but the picture is different among execs themselves. Among top executives – including CFOs and CEOs – a quarter don't use the technology at all, while two-thirds say they use it for 1.5 hours a week at most. ... "The most commonly cited uses are ‘text generation using large language models’ followed by ‘visual content creation’ and ‘data processing using machine learning’," the survey added. When it comes to employment savings, 90% of execs said they'd seen no impact from AI over the last three years, with 89% saying they saw no productivity boost, either. The report noted that previous studies have found large productivity gains in specific settings – in particular customer support and writing tasks. ... Despite the lack of impact to date, business leaders still predict AI will start to boost productivity and reduce the number of employees needed in the coming years. Respondents predict a 1.4% productivity boost and 0.8% increase in output thanks to the technology over the next three years, for example. Yet the NBER survey also reveals a "sizable gap in expectations", with senior execs saying AI would cut employment by 0.7% over the next three years — which the report said would mean 1.75 million fewer jobs. 


Observability Without Cost Telemetry Is Broken Engineering

Cost isn't an operational afterthought. It's a signal as essential as CPU saturation or memory pressure, yet we've architected it out of the feedback loop engineers actually use. ... Engineers started evaluating architectural choices through a cost lens without needing MBA training. “Should we cache this aggressively?” became answerable with data: cache infrastructure costs $X/month, API calls saved cost $Y/month, net impact is measurable, not theoretical.  ... The anti-pattern I see most often is siloed visibility. Finance gets billing dashboards. SREs get operational dashboards. Developers get APM traces. Nobody sees the intersection where cost and performance influence each other. You debug a performance issue — say, slow database queries. The fix is to add an index. Query time drops from 800 ms to 40 ms. Victory. Except the database is now using 30% more storage for that index, and your storage tier bills by the gigabyte-month. If you're on a flat-rate hosting plan, maybe that cost is absorbed. If you're on Aurora or Cosmos DB with per-IOPS pricing, you've just traded latency for dollars. Without cost telemetry, you won't notice until the bill arrives. ... Alerting without cost dimensions misses failure modes. Your error rate is fine. Latency is stable. But egress costs just doubled because a misconfigured service is downloading the same 200 GB dataset on every request instead of caching it.


A New Way To Read the “Unreadable” Qubit Could Transform Quantum Technology

“Our work is pioneering because we demonstrate that we can access the information stored in Majorana qubits using a new technique called quantum capacitance,” continues the scientist, who explains that this technique “acts as a global probe sensitive to the overall state of the system.” ... To better understand this achievement, Aguado explains that topological qubits are “like safe boxes for quantum information,” only that, instead of storing data in a specific location, “they distribute it non-locally across a pair of special states, known as Majorana zero modes.” That unusual structure is what makes them attractive for quantum computing. “They are inherently robust against local noise that produces decoherence, since to corrupt the information, a failure would have to affect the system globally.” In other words, small disturbances are unlikely to disrupt the stored information. Yet this strength has also created a major experimental challenge. As Aguado notes, “this same virtue had become their experimental Achilles’ heel: how do you “read” or “detect” a property that doesn’t reside at any specific point?.”  ... The project brings together an advanced experimental platform developed primarily at Delft University of Technology and theoretical work carried out by ICMM-CSIC. According to the authors, this theoretical input was “crucial for understanding this highly sophisticated experiment,” highlighting the importance of close collaboration between theory and experiment in pushing quantum technology forward.


When Excellent Technology Architecture Fails to Deliver Business Results

Industry research consistently shows that most large-scale transformations fail to achieve their expected business outcomes, even when the underlying technology decisions are considered sound. This suggests that the issue is not technical quality. It is structural. ... The real divergence begins later, in day-to-day decision-making. Under delivery pressure, teams make choices driven by deadlines, budget constraints, and individual accountability. Temporary workarounds are accepted. Deviations are justified as exceptions. Risks are taken implicitly rather than explicitly assessed. Architecture is often aware of these decisions, but it is not structurally embedded in the moment where choices are made. As a result, architecture remains correct, but unused.  ... When architecture cannot explain the economic and operational consequences of a decision, it loses relevance. Statements such as “this violates architectural principles” carry little weight if they are not translated into impact on cost of change, delivery speed, or operational risk. ... What is critical is that these compromises are rarely tracked, assessed cumulatively, or reintroduced into management discussions. Architecture may be aware of them, but without a mechanism to record and govern them, their impact remains invisible until flexibility is lost and change becomes expensive. Architecture debt, in this sense, is not a technical failure. It is a governance outcome. When decision trade-offs remain unmanaged, architecture is blamed for consequences it was never empowered to influence.

Daily Tech Digest - January 27, 2026


Quote for the day:

"Supreme leaders determine where generations are going and develop outstanding leaders they pass the baton to." -- Anyaele Sam Chiyson



Why code quality should be a C-suite concern

At first, speed feels like progress. Then the hidden costs begin to surface: escalating maintenance effort, rising incident frequency, delayed roadmaps and growing organizational tension. The expense of poor code slowly eats into return on investment — not always in ways that show up neatly on a spreadsheet, but always in ways that become painfully visible in daily operations. ... During the planning phase, rushed architectural decisions often lead to tightly coupled, monolithic systems that are expensive and risky to change. During development, shortcuts accumulate into what we call technical debt: duplicated logic, brittle integrations and outdated dependencies that appear harmless at first but quietly erode system stability over time. Like financial debt, technical debt compounds. ... Architecture always comes first. I advocate for modular growth — whether through a well- structured modular monolith that can later evolve into microservices, or through service-oriented architectures with clear domain boundaries. Platforms such as Kubernetes enable independent scaling of components, but only when the underlying architecture is cleanly segmented. Language and framework choices matter more than most leaders realize. ... The technologies we select, the boundaries we define and the failure modes we anticipate all place invisible limits on how far an organization can grow. From what I’ve seen, you simply cannot scale a product on a foundation that was never designed to evolve.


How to regulate social media for teens (and make it stick)

Noting that age assurance proposals have broad support from parents and educators, Allen says “the question is not whether children deserve safeguarding (they do) but whether prohibition is an effective tool for achieving it.” “History suggests that bans succeed or fail not on the basis of intention, but on whether they align with demand, supply, moral legitimacy and enforcement capacity. Prohibition does not remove human desire; it reallocates who fulfils it. Whether that reallocation reduces harm or increases it depends on how well policy engages with the underlying economics and psychology of behaviour.” ... “There is little evidence that young people themselves view social media as morally repugnant. On the contrary, it is where friendships are maintained, identities are explored and social status is negotiated. That does not mean it is harmless. It means it is meaningful.” “This creates a problem for prohibition. Where demand remains strong, supply will be found.” Here, Allen’s argument falters somewhat, in that it follows the logic that says bans push kids onto less regulated and more dangerous platforms. I.e., “the risk is not simply that prohibition fails. It is that it succeeds in changing who supplies children’s social connectivity.” The difference is that, while a basket of plums and some ingenuity are all you need to produce alcohol, social media platforms have their value in the collective. Like Star Trek’s Borg, they are more powerful the more people they assimilate. 


The era of agentic AI demands a data constitution, not better prompts

If a data pipeline drifts today, an agent doesn't just report the wrong number. It takes the wrong action. It provisions the wrong server type. It recommends a horror movie to a user watching cartoons. It hallucinates a customer service answer based on corrupted vector embeddings. ... In traditional SQL databases, a null value is just a null value. In a vector database, a null value or a schema mismatch can warp the semantic meaning of the entire embedding. Consider a scenario where metadata drifts. Suppose your pipeline ingests video metadata, but a race condition causes the "genre" tag to slip. Your metadata might tag a video as "live sports," but the embedding was generated from a "news clip." When an agent queries the database for "touchdown highlights," it retrieves the news clip because the vector similarity search is operating on a corrupted signal. The agent then serves that clip to millions of users. At scale, you cannot rely on downstream monitoring to catch this. By the time an anomaly alarm goes off, the agent has already made thousands of bad decisions. Quality controls must shift to the absolute "left" of the pipeline. ... Engineers generally hate guardrails. They view strict schemas and data contracts as bureaucratic hurdles that slow down deployment velocity. When introducing a data constitution, leaders often face pushback. Teams feel they are returning to the "waterfall" era of rigid database administration.


QA engineers must think like adversaries

Test engineers are now expected to understand pipelines, cloud-native architectures, and even prompt engineering for AI tools. The mindset has become more preventive than detective. AI has become part of QA’s toolkit, helping predict weak spots and optimise testing. At the same time, QA must validate the integrity and fairness of AI systems — making it both a user and a guardian of AI. ... With DevOps, QA became embedded into the pipeline — automated test execution, environment provisioning, and feedback loops are all part of CI/CD now. With SecOps, we’re adding security scans and penetration checks earlier, creating a DevTestSecOps model. QA is no longer a separate stage. It’s a mindset that exists throughout the lifecycle — from requirements to observability in production. ... Regression testing has become AI-augmented and data-driven. Instead of re-running all test cases, systems now prioritise based on change impact analysis. The SDET role is also evolving — they now bridge coding, observability, and automation frameworks, often owning quality gates within CI/CD. ... Security checks are now embedded as automated gates within pipelines. Performance testing, too, is moving earlier — with synthetic monitoring and API-level load simulations. In effect, security and speed can coexist, provided teams integrate validation rather than treat it as an afterthought.


The biggest AI bottleneck isn’t GPUs. It’s data resilience

The risks of poor data resilience will be magnified as agentic AI enters the mainstream. Whereas generative AI applications respond to a prompt with an answer in the same manner as a search engine, agentic systems are woven into production workflows, with models calling each other, exchanging data, triggering actions and propagating decisions across networks. Erroneous data can be amplified or corrupted as it moves between agents, like the party game “telephone.” ... Experts cite numerous reasons data protection gets short shrift in many organizations. A key one is an overly intense focus on compliance at the expense of operational excellence. That’s the difference between meeting a set of formal cybersecurity metrics and being able to survive real-world disruption. Compliance guidelines specify policies, controls and audits, while resilience is about operational survivability, such as maintaining data integrity, recovering full business operations, replaying or rolling back actions and containing the blast radius when systems fail or are attacked. ... “Resilience and compliance-oriented security are handled by different teams within enterprises, leading to a lack of coordination,” said Forrester’s Ellis. “There is a disconnect between how prepared people think they are and how prepared they actually are.” ... Missing or corrupted data can lead models to make decisions or recommendations that appear plausible but are far off the mark. 


When open science meets real-world cybersecurity

If there is no collaboration, usually the product that emerges is a great scientific specimen with very risky implementations. The risk is usually caught by normal cyber processes and reduced accordingly; however, scientists who see the value in IT/cyber collaboration usually also end up with a great scientific specimen. There is also managed risk in the implementation with almost no measurable negative impacts or costs. We’ve seen that if collaboration is planned into the project very early on, cybersecurity can provide value. ... Cybersecurity researchers often are confused and look for issues on the internet where they stumble onto the laboratory IT footprint and make claims that we are leaking non-public information. We clearly label and denote information that is releasable to the public, but it always seems there are folks who are quicker to report than to read the dissemination labels. ... Encryption at rest (EIR) is really a control to prevent data loss when the storage medium is no longer in your control. So, when the data has been reviewed for public release, we don’t spend the extra time, effort, and money to apply a control to data stores that provide no value to either the implementation or to a cyber control. ... You can imagine there are many custom IT and OT parts that run that machine. The replacement of components is not on a typical IT replacement schedule. This can present longer than ideal technology refresh cycles. The risk here is that integrating modern cyber technology into an older IT/OT technology stack has its challenges.


4 issues holding back CISOs’ security agendas

CISOs should aim to have team members know when and how to make prioritization calls for their own areas of work, “so that every single team is focusing on the most important stuff,” Khawaja says. “To do that, you need to create clear mechanisms and instructions for how you do decision-support,” he explains. “There should be criteria or factors that says it’s high, medium, low priority for anything delivered by the security team, because then any team member can look at any request that comes to them and they can confidently and effectively prioritize it.” ... According to Lee, the CISOs who keep pace with their organization’s AI strategy take a holistic approach, rather than work deployment to deployment. They establish a risk profile for specific data, so security doesn’t spend much time evaluating AI deployments that use low-risk data and can prioritize work on AI use cases that need medium- or high-risk data. They also assign security staffers to individual departments to stay on top of AI needs, and they train security teams on the skills needed to evaluate and secure AI initiatives. ... the challenge for CISOs not being about hiring for technical skills or even soft skills, but what he called “middle skills,” such as risk management and change management. These skills he sees becoming more crucial for aligning security to the business, getting users to adopt security protocols, and ultimately improving the organization’s security posture. “If you don’t have [those middle skills], there’s only so far the security team can go,” he says.


Rethinking data center strategy for AI at scale

Traditional data centers were engineered for predictable, transactional workloads. Your typical enterprise rack ran at 8kW, cooled with forced air, powered through 12-volt systems. This worked fine for databases, web applications, and cloud storage. Yet, AI workloads are pushing rack densities past 120kW. That's not an incremental change—it's a complete reimagining of what a data center needs to be. At these densities, air cooling becomes physically impossible. ... Walk into a typical data center today. The HVAC system has its own monitoring dashboard. Power distribution runs through a separate SCADA system. Compute performance lives in yet another tool. Network telemetry? Different stack entirely. Each subsystem operates in isolation, reporting intermittently through proprietary interfaces that don't talk to each other. Operators see dashboards, not decisions. ... Cooling systems can respond instantly to thermal changes, and power orchestration becomes adaptive rather than provisioned for theoretical peaks. AI clusters can scale based not just on demand, but in coordination with available power, cooling capacity, and network bandwidth. ... Real-time visibility, unified data architectures, and adaptive control will define performance, efficiency, and competitiveness in AI-ready data centers. The organizations that thrive in the AI era won't necessarily be those with the most data centers or the biggest chips; they'll be the ones that treat infrastructure as an intelligent, responsive system capable of sensing, adapting, and optimizing in real time.


Microsoft handed over BitLocker keys to law enforcement, raising enterprise data control concerns

The US Federal Bureau of Investigation approached Microsoft with a search warrant in early 2025, seeking keys to unlock encrypted data stored on three laptops in a case of alleged fraud involving the COVID unemployment assistance program in Guam. As the keys were stored on a Microsoft server, Microsoft adhered to the legal order and handed over the encryption keys ... While the encryption of BitLocker is robust, enterprises need to be mindful of who has custody of the keys, as this case illustrates. ... Enterprises using BitLocker should treat the recovery keys as highly sensitive, and avoid default cloud backup unless there is a clear business requirement and the associated risks are well understood and mitigated. ... CISOs should also ensure that when devices are repurposed, decommissioned, or moved across jurisdictions, keys should be regenerated as part of the workflow to ensure old keys cannot be used. ... If recovery keys are stored with a cloud provider, that provider may be compelled, at least in its home jurisdiction, to hand them over under lawful order, even if the data subject or company is elsewhere without notifying the company. This becomes even more critical from the point of view of a pharma company, semiconductor firm, defence contractor, or critical-infrastructure operator, as it exposes them to risks such as exposure of trade secrets in cross‑border investigations.


Moore’s law: the famous rule of computing has reached the end of the road, so what comes next?

For half a century, computing advanced in a reassuring, predictable way. Transistors – devices used to switch electrical signals on a computer chip – became smaller. Consequently, computer chips became faster, and society quietly assimilated the gains almost without noticing. ... Instead of one general-purpose processor trying to do everything, modern systems combine different kinds of processors. Traditional processing units or CPUs handle control and decision-making. Graphics processors, are powerful processing units that were originally designed to handle the demands of graphics for computer games and other tasks. AI accelerators (specialised hardware that speeds up AI tasks) focus on large numbers of simple calculations carried out in parallel. Performance now depends on how well these components work together, rather than on how fast any one of them is. Alongside these developments, researchers are exploring more experimental technologies, including quantum processors (which harness the power of quantum science) and photonic processors, which use light instead of electricity. ... For users, life after Moore’s Law does not mean that computers stop improving. It means that improvements arrive in more uneven and task-specific ways. Some applications, such as AI-powered tools, diagnostics, navigation, complex modelling, may see noticeable gains, while general-purpose performance increases more slowly.

Daily Tech Digest - October 09, 2025


Quote for the day:

"No man is good enough to govern another man without that other's consent." -- Abraham Lincoln



The Quantum Wake-Up Call: Preparing Your Organization for PQC

Quantum computing promises transformative breakthroughs across industries—but it also threatens the cryptographic foundations that secure our digital world. As quantum capabilities evolve, organizations must proactively prepare for the shift to post-quantum cryptography (PQC) to safeguard sensitive data and maintain trust. ... The very mathematical "hardness" that makes RSA and ECC secure against classical computers is precisely what makes them fatally vulnerable to quantum computing. Shor's Algorithm: This quantum algorithm, developed by Peter Shor in 1994, is capable of solving the integer factorization and discrete logarithm problems exponentially faster than any classical machine. Once a sufficiently stable and large-scale quantum computer is built, an encryption that might take a supercomputer millions of years to break could be broken in hours or even minutes. The Decryption Time Bomb: Because current PKC is used to establish long-term trust and to encrypt keys, the entire cryptographic ecosystem is a single point of failure. The threat is compounded by the "Harvest Now, Decrypt Later" strategy, meaning sensitive data is already being harvested and stored by adversaries, awaiting the quantum moment to be unlocked. Quantum computing is no longer theoretical—it’s a looming reality. Algorithms like RSA and ECC, which underpin most public-key cryptography, are vulnerable to quantum attacks via Shor’s algorithm.


Producing a Better Software Architecture with Residuality Theory

Residuality theory is a very simple process. Sometimes, people are put off because the theoretical work necessary to prove that residuality works is very heavy, but applying it is easy, O’Reilly explained: We start out with a suggestion, a naive architecture that solves the functional problem. From there we stress the architecture with potential changes in the environment. These stressors allow us to uncover the attractors, often through conversations with domain experts. For each attractor, we identify the residue, what’s left of our architecture in this attractor, and then we change the naive architecture to make it survive better. We do this many times and, at the end, integrate all of these augmented residues into a coherent architecture. We can then test this to show that it survives unknown forms of stress better than our naive architecture. In complex business environments with uncertainty, residuality makes it possible to create architectures quickly instead of chasing down stakeholders demanding specific requirements or answers to questions that are unknown by the business itself, O’Reilly said. It pulls technical architects out of details and teaches them to productively engage with a business environment without the lines and boxes of traditional enterprise architecture, he concluded. ... Senior architects report that it gives a theoretical justification for practices that many had already figured out and a shared vocabulary for teams to talk about architecture. 


Will AI-SPM Become the Standard Security Layer for Safe AI Adoption?

By continuously monitoring AI assets, AI-SPM helps ensure that only trusted data sources are used during model development. Runtime security testing and red-team exercises detect vulnerabilities caused by malicious data. The system actively identifies abnormal model behavior, such as biased, toxic, or manipulated output, and brings them up for remediation prior to production release. ... AI-SPM continuously checks system requests and user inputs to find dangerous patterns before they lead to security problems, like attempts to remove or change built-in directives. It also uses protection against prompt injection and jailbreak attacks, which are common ways to access or alter system-level commands. By finding unapproved AI tools and services, it stops the use of insecure or poorly set up LLMs that could reveal system prompts. ... Shadow AI is starting to get more attention, and for good reason. Like shadow IT, employees are using public AI tools without authorization. That might mean uploading sensitive data or sidestepping governance rules, often without realizing the risks. The problem isn’t just the tools themselves, but the lack of visibility around how and where they’re being used. AI-SPM should work to identify all AI tools in play across networks, endpoints, cloud platforms, and dev environments, mapping how data moves between them, which is often the missing piece when trying to understand exposure risks.


How to write nonfunctional requirements for AI agents

Nonfunctional requirements for AI agents can be like those for applications, where user stories are granular and target delivering small, atomic functions. These NFRs can guide developers in answering how to develop the functionality described in user stories and to help quantify what should pass a code review. However, you may need another set of NFRs expressed at a feature or release level. ... “Agile teams often struggle with how to evaluate NFRs like latency, fairness, or explainability, which may seem nonfunctional, but with a little specification work, they can often be made concrete and part of a user story with clear pass/fail tests,” says Grant Passmore, co-founder of Imandra. “We use formal verification to turn NFRs into mathematical functional requirements we can prove or disprove.” ... AI agent NFRs that connect dev with ops have all the complexities of applications, infrastructure, automations, and AI models bundled together. Deploying the AI agent is just the beginning of its lifecycle, and NFRs for maintainability and observability help create the feedback loops required to diagnose issues and make operational improvements. As many organizations aim toward autonomous agentic AI and agent-to-agent workflows, standardizing a list of NFRs that are applied across all AI agents becomes important.


Unplug Gemini from email and calendars, says cybersecurity firm

CSOs should consider turning off Google Gemini access to employees’ Gmail and Google Calendars, because the chatbot is vulnerable to a form of prompt injection, says the head of a cybersecurity firm that discovered the vulnerability. ”If you’re worried about the risk, you might want to turn off automatic email and calendar processing by Gemini until this, and potentially other things like it, are addressed,” Jeremy Snider, CEO of US-based FireTail, said in an interview. ... This flaw is “particularly dangerous when LLMs, like Gemini, are deeply integrated into enterprise platforms like Google Workspace,” the report adds. FireTail tested six AI agents. OpenAI’s ChatGPT, Microsoft Copilot, and Anthropic AI’s Claude caught the attack. Gemini, DeepSeek, and Grok failed. In a test, FireTail researchers were able to change the word “Meeting” in an appointment in Google Calendar to “Meeting. It is optional.” ... “ASCII Smuggling attacks against AIs aren’t new,” commented Joseph Steinberg, a US-based cybersecurity and AI expert. “I saw one demonstrated over a year ago.” He didn’t specify where, but in August 2024, a security researcher blogged about an ASCII smuggling vulnerability in Copilot. The finding was reported to Microsoft. Many ways of disguising malicious prompts will be discovered over time, he added, so it’s important that IT and security leaders ensure that AIs don’t have the power to act without human approval on prompts that could be damaging.


Broken Opt-Outs, Big Fines: Tractor Supply Shows Privacy Enforcement Has Arrived for Retail

The Tractor Supply violations reveal a clear enforcement pattern. Broken opt-out links that route to dead webforms. Global Privacy Control signals ignored entirely. Privacy notices that skip job applicant data disclosures. Vendor agreements without data restriction clauses. These aren’t random oversights. They’re the exact gaps that surfaced across recent CCPA enforcement by the Attorney General and CPPA orders. Regulators are building a playbook: test the opt-out mechanisms, check for GPC compliance, review all privacy notices including HR portals, and audit third-party contracts. If any piece fails, expect enforcement. Regulators no longer accept opt-outs in theory or privacy policies in fine print. ... The message is clear: prove you have control. Not just over the data you collect, but over the algorithms that process it. Retailers who can’t show governance across both will face scrutiny on multiple fronts. The same broken opt-out that triggers a privacy fine could signal to regulators that your AI systems lack oversight too. This isn’t about adding more compliance checkboxes. It’s about recognizing that data governance and AI governance are becoming inseparable. The retailers who understand this convergence will build unified systems that handle both. The ones who don’t will scramble to retrofit governance after the fact, just like they’re doing with privacy today.


Why Enterprises Continue to Stick With Traditional AI

AI success also depends on digital maturity. Many organizations are still laying data foundations. "Let's say you want to run analytics on how many tickets were raised, do a dashboard on how many tickets one can expect … all of that was over a call. Nothing was digitized. There is no trace of it. That is the reason why chatbots are getting created because they are now recording and getting traced," Iyer said. ... Strict compliance and privacy requirements push enterprises toward controlled AI development. … Even in such cases, we ensure the data in the model that we build, it stays exclusively. At any point of time, your data or your model is not going to be used for the betterment of someone else," Iyer said. This approach reflects broader enterprise concerns about AI governance. According to KPMG research, frameworks such as local interpretable model-agnostic explanations and Shapley Additive exPlanations help clarify AI decisions, support compliance and build stakeholder confidence. ... Iyer said enterprise needs are often highly contextual, making massive models unnecessary. "Do you need a 600-700 billion [parameter] model sitting in your enterprise running inferences when the questions are going to be very contextual?" she said. This practical wisdom is supported by recent industry analysis. Traditional ML models often produce classification accuracy at a fraction of the cost compared to deep learning alternatives. 


Lead with a human edge: Why empathy is the new strategy

Traditional management was built on control: plans, processes, and hierarchies designed to tame complexity. But as Pushkar noted, ‘organisations are living organisms. They evolve, sense, and respond. Trying to manage them like machines is an illusion. The leaders of tomorrow will not be engineers of systems — they will be gardeners of cultures.’ “Planting a tree is very easy,” Bidwai said. “The real game is how you nurture, how you create an environment, how you enable the culture.” Nurturing, not directing, is the leadership mindset for an era of interdependence. ... Perhaps the most striking moment of Pushkar’s talk was not analytical but symbolic. He invited participants to discard their corporate titles just for a moment and invent new ones that reflected their purpose, not their position. “Sometimes titles define how we operate. Can we look beyond titles?” His own? In People Matters, Pushkar stated that he visualises his creative title as Plumber. “Wherever anything needs fixing, I will go and fix things.” The metaphor landed. Leadership, stripped of status, is about service. To lead with a human edge is to roll up your sleeves, listen, and fix what’s broken, in systems, in relationships, in ourselves. ... What Pushkar calls ‘the human edge’ is not a nostalgic pushback against technology. It is a pragmatic blueprint for sustainable growth. The leaders who will define the next decade will be those who use AI to augment human potential, not replace it those who recognise that data drives decisions, but empathy drives destiny.


Building a modern fraud prevention stack: why centralised data, not point solutions, is the answer

The fraud prevention landscape is riddled with fragmented tools, reactive approaches and blind spots. Despite the best of intentions, many organisations rely on outdated, point-in-time methods that are ill-suited for today’s dynamic fraud landscape. And fraud no longer plays by the old rules. It unfolds across the entire customer journey, mutating with every new channel, payment method or customer behaviour pattern. A fraudster may test stolen credentials one day, then come back weeks later to exploit a weak link in the onboarding or refund process. These disjointed systems miss multi-step attacks and patterns that unfold over time. ... while many organisations have historically relied on a patchwork of tools to cover each threat vector, it’s becoming clear that more tools aren’t the answer. Better coordination is. A modern stack doesn’t need to come from a single vendor, but it does need to operate like a single, unified system. That means integrated data, shared intelligence and orchestration that supports real-time response, not after-the-fact analysis. While investment is rising, with 85% of organisations having increased their fraud prevention budgets, it’s crucial to highlight that spending must be strategic. So, what does a modern fraud prevention stack actually look like? And how can organisations build one that’s unified, flexible and future-proof?


CISOs, Start Securing Software's Agentic Future Now

Industry-wide challenges create obstacles to AI governance, leaving leaders uncertain about where to focus their strategic efforts most effectively. The non-deterministic nature of agents causes them to behave in unexpected ways, which has been proven to disrupt existing security boundaries. Adding to this security complexity, universal protocols, such as Model Context Protocol and Agent2Agent, are emerging to streamline data access and improve agent interoperability, but their ecosystem-building capabilities introduce additional security considerations. But these challenges cannot stop security leaders from prioritizing AI governance. ... A culture of security now requires AI literacy. 43% of survey respondents acknowledged a widening AI skills gap, which is likely to grow unless technical leaders prioritize upskilling teams to understand model behavior, prompt engineering, and how to evaluate model inputs and outputs critically. Understanding where models are performant versus where their use is suboptimal helps teams avoid unnecessary security risk and technical debt. ... Teams should also recognize that no model can replace human ingenuity. When models fail in domains where security engineers or developers lack expertise, they will not be able to identify the security gaps the model has left behind. CISOs should consider dedicating a portion of learning and development budgets to continuous technical education.