Daily Tech Digest - February 14, 2026


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



UK CIOs struggle to govern surge in business AI agents

The findings point to a growing governance challenge alongside the rapid spread of agent-based systems across the enterprise. AI agents, which can take actions or make decisions within software environments, have moved quickly from pilots into day-to-day operations. That shift has increased demands for monitoring, audit trails and accountability across IT and risk functions. UK CIOs also reported growing concern about the spread of internally built tools. ... The results suggest "shadow AI" risks are becoming a mainstream issue for large organisations. As AI development tools get easier to use, more staff outside IT can build automated workflows, chatbots and agent-like applications. This trend has intensified questions about data access, model behaviour, and whether organisations can trace decisions back to specific inputs and approvals. ... The findings also suggest governance gaps are already affecting operations. Some 84% of UK CIOs said traceability or explainability shortcomings have delayed or prevented AI projects from reaching production, highlighting friction between the push to deploy AI and the work needed to demonstrate effective controls. For CIOs, the issue also intersects with enterprise risk management and information security. Unmonitored agents and rapidly developed internal apps can create new pathways into sensitive datasets and complicate incident response if an organisation cannot determine which automated process accessed or changed data.


You’ve Generated Your MVP Using AI. What Does That Mean for Your Software Architecture?

While the AI generates an MVP, teams can’t control the architectural decisions that the AI made. They might be able to query the AI on some of the decisions, but many decisions will remain opaque because the AI does not understand why the code that it learned from did what it did. ... From the perspective of the development team, AI-generated code is largely a black-box; even if it could be understood, no one has time to do so. Software development teams are under intense time pressure. They turn to AI to partially relieve this pressure, but in doing so they also increase the expectations of their business sponsors regarding productivity. ... As a result, the nature of the work of architecting will shift from up-front design work to empirical evaluation of QARs, i.e. acceptance testing of the MVA. As part of this shift, the development team will help the business sponsors figure out how to test/evaluate the MVP. In response, development teams need to get a lot better at empirically testing the architecture of the system. ... The team needs to know what trade-offs it may need to make, and they need to articulate those in the prompts to the AI. The AI then works as a very clever search engine to find possible solutions that might address the trade-offs. As noted above, these still need to be evaluated empirically, but it does save the team some time in coming up with possible solutions.


Successful Leaders Often Lack Self-Awareness

As a leader, how do you respond in emotionally charged situations? It's under pressure that emotions can quickly escalate and unexamined behavioral patterns emerge—for all of us. In my work with senior executives, I have seen time and again how these unconscious “go-to” reactions surface when stakes are high. This is why self-awareness is not a one-time achievement but a lifelong practice—and for many leaders, it remains their greatest blind spot. Why? ... Turning inward to develop self-awareness naturally places you in uncomfortable territory. It challenges long-standing assumptions and exposes blind spots. One client came to me because a colleague described her as harsh. She genuinely did not see herself that way. Another sought my help after his CEO told him he struggled to communicate with him. Through our work together, we uncovered how defensively he responded to feedback, often without realizing it. ... As leaders rise to the top, the accolades that propel them forward are rooted in talent, strategic decision-making and measurable outcomes. However, once at the highest levels, leadership expands beyond execution. The role now demands mastery of relationships—within the organization and beyond, with clients, partners and customers. At this level, self-awareness is no longer optional; it becomes essential.


How Should Financial Institutions Prepare for Quantum Risk?

“Post-quantum cryptography is about proactively developing and building capabilities to secure critical information and systems from being compromised through the use of quantum computers,” said Rob Joyce, then director of cybersecurity for the National Security Agency, in an August 2023 statement. In August 2024, NIST published three post-quantum cryptographic standards — ML-KEM, ML-DSA and SLH-DSA — designed to withstand quantum attacks. These standards are intended to secure data across systems such as digital banking platforms, payment processing environments, email and e-commerce. NIST has encouraged organizations to begin implementation as soon as possible. ... A critical first step is conducting an assessment of which systems and data assets are most at risk. The ISACA IT security organization recommends building a comprehensive inventory of systems vulnerable to quantum attacks and classifying data based on sensitivity, regulatory requirements and business impact. For financial institutions, this assessment should prioritize customer PII, transaction data, long-term financial records and proprietary business information. Understanding where the greatest financial, reputational and regulatory exposure exists enables IT leaders to focus mitigation efforts where they matter most. Institutions should also conduct executive briefings, staff training and tabletop exercises to build awareness. 


The cure for the AI hype hangover

The way AI dominates the discussions at conferences is in contrast to its slower progress in the real world. New capabilities in generative AI and machine learning show promise, but moving from pilot to impactful implementation remains challenging. Many experts, including those cited in this CIO.com article, describe this as an “AI hype hangover,” in which implementation challenges, cost overruns, and underwhelming pilot results quickly dim the glow of AI’s potential. Similar cycles occurred with cloud and digital transformation, but this time the pace and pressure are even more intense. ... Too many leaders expect AI to be a generalized solution, but AI implementations are highly context-dependent. The problems you can solve with AI (and whether those solutions justify the investment) vary dramatically from enterprise to enterprise. This leads to a proliferation of small, underwhelming pilot projects, few of which are scaled broadly enough to demonstrate tangible business value. In short, for every triumphant AI story, numerous enterprises are still waiting for any tangible payoff. For some companies, it won’t happen anytime soon—or at all. ... Beyond data, there is the challenge of computational infrastructure: servers, security, compliance, and hiring or training new talent. These are not luxuries but prerequisites for any scalable, reliable AI implementation. In times of economic uncertainty, most enterprises are unable or unwilling to allocate the funds for a complete transformation.


4th-Party Risk: How Commercial Software Puts You At Risk

Unlike third-party providers, however, there are no contractual relationships between businesses and their fourth-party vendors. That means companies have little to no visibility into those vendors' operations, only blind spots that are fueling an even greater need to shift from trust-based to evidence-based approaches. That lack of visibility has severe consequences for enterprises and other end-user organizations. ... Illuminating 4th-party blind spots begins with mapping critical dependencies through direct vendors. As you go about this process, don't settle for static lists. Software supply chains are the most common attack vector, and every piece of software you receive contains evidence of its supply chain. This includes embedded libraries, development artifacts, and behavioral patterns. ... Businesses must also implement some broader frameworks that go beyond the traditional options, such as NIST CSF or ISO 27001, which provide a foundation but ultimately fall short by assuming businesses lack control in their fourth-party relationships. This stems from the fact that no contractual relationships exist that far downstream, and without contractual obligations, a business cannot conduct risk assessments, demand compliance documentation, or launch an audit as it might with a third-party vendor. ... Also consider SLSA (Supply Chain Levels for Software Artifacts). These provide measurable security controls to prevent tampering and ensure integrity. For companies operating in regulated industries, consider aligning with emerging requirements.


Geopatriation and sovereign cloud: how data returns to the source

The key to understanding a sovereign cloud, adds Google Cloud Spain’s national technology director Héctor Sánchez Montenegro, is that it’s not a one-size-fits-all concept. “Depending on the location, sector, or regulatory context, sovereignty has a different meaning for each customer,” he says. Google already offers sovereign clouds, whose guarantee of sovereignty isn’t based on a single product, but on a strategy that separates the technology from the operations. “We understand that sovereignty isn’t binary, but rather a spectrum of needs we guarantee through three levels of isolation and control,” he adds. ... One of the certainties of this sovereign cloud boom is it’s closely connected to the context in which organizations, companies, and other cloud end users operate. While digital sovereignty was less prevalent at the beginning of the century, it’s now become ubiquitous, especially as political decisions in various countries have solidified technology as a key geostrategic asset. “Data sovereignty is a fundamental part of digital sovereignty, to the point that in practice, it’s becoming a requirement for employment contracts,” says María Loza ... With the technological landscape becoming more unsure and complex, the goal is to know and mitigate risks where possible, and create additional options. “We’re at a crucial moment,” Loza Correa points out. “Data is a key business asset that must be protected.”


Managing AI Risk in a Non-Deterministic World: A CTO’s Perspective

Drawing parallels to the early days of cloud computing, Chawla notes that while AI platforms will eventually rationalize around a smaller set of leaders, organizations cannot afford to wait for that clarity. “The smartest investments right now are fearlessly establishing good data infrastructure, sound fundamentals, and flexible architectures,” she explains. In a world where foundational models are broadly accessible, Chawla argues that differentiation shifts elsewhere. ... Beyond tooling, Chawla emphasizes operating principles that help organizations break silos. “Improve the quality at the source,” she says. “Bring DevOps principles into DataOps. Clean it up front, keep data where it is, and provide access where it needs to be.” ... Bias, hallucinations, and unintended propagation of sensitive data are no longer theoretical risks. Addressing them requires more than traditional security controls. “It’s layering additional controls,” Chawla says, “especially as we look at agentic AI and agentic ops.” ... Auditing and traceability are equally critical, especially as models are fine-tuned with proprietary data. “You don’t want to introduce new bias or model drift,” she explains. “Testing for bias is super important.” While regulatory environments differ across regions, Chawla stresses that existing requirements like GDPR, data sovereignty, PCI, and HIPAA still apply. AI does not replace those obligations; it intensifies them.


CVEs are set to top 50,000 this year, marking a record high – here’s how CISOs and security teams can prepare for a looming onslaught

"Much like a city planner considering population growth before commissioning new infrastructure, security teams benefit from understanding the likely volume and shape of vulnerabilities they will need to process," Leverett added. "The difference between preparing for 30,000 vulnerabilities and 100,000 is not merely operational, it’s strategic." While the figures may be jarring for business leaders, Kevin Knight, CEO of Talion, said it’s not quite a worst-case scenario. Indeed, it’s the impact of the vulnerabilities within their specific environments that business leaders and CISOs should be focusing on. ... Naturally, security teams could face higher workloads and will be contending with a more perilous threat landscape moving forward. Adding insult to injury, Knight noted that security teams are often brought in late during the procurement process - sometimes after contracts have been signed. In some cases, applications are also deployed without the CISO’s knowledge altogether, creating blind spots and increasing the risk that critical vulnerabilities are being missed. Meanwhile, poor third-party risk management means organizations can unknowingly inherit their suppliers’ vulnerabilities, effectively expanding their attack surface and putting their sensitive data at risk of being breached. "As CVE disclosures continue to rise, businesses must ensure the CISO is involved from the outset of technology decisions," he said. 


Data Privacy in the Age of AI

The first challenge stems from the fact that AI systems run on large volumes of customer data. This “naturally increases the risk of data being used in ways that go beyond what customers originally expected, or what regulations allow,” says Chiara Gelmini, financial services industry solutions director at Pegasystems. This is made trickier by the fact that some AI models can be “black boxes to a certain degree,” she says. “So it’s not always clear, internally or to customers, how data is used or how decisions are actually made," she tells SC Media UK. ... AI is “fully inside” the existing data‑protection regime the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, Gelmini explains. Under these current laws, if an AI system uses personal data, it must meet the same standards of lawfulness, transparency, data minimisation, accuracy, security and accountability as any other processing, she says. Meanwhile, organisations are expected to prove they have thought the area through, typically by carrying out a Data Protection Impact Assessment (DPIA) before deploying high‑risk AI. ... The growing use of AI can pose a risk, but only if it gets out of hand. As AI becomes easier to adopt and more widespread, the practical way to stay ahead of these risks is “strong, AI governance,” says Gelmini. “Firms should build privacy in from the start, mask private data, lock down security, make models explainable, test for bias, and keep a close eye on how systems behave over time."

Daily Tech Digest - February 13, 2026


Quote for the day:

"If you want teams to succeed, set them up for success—don’t just demand it." -- Gordon Tredgold



Hackers turn bossware against the bosses

Huntress discovered two incidents using this tactic, one late in January and one early this month. Shared infrastructure, overlapping indicators of compromise, and consistent tradecraft across both cases make Huntress strongly believe a single threat actor or group was behind this activity. ... CSOs must ensure that these risks are properly catalogued and mitigated,” he said. “Any actions performed by these agents must be monitored and, if possible, restricted. The abuse of these systems is a special case of ‘living off the land’ attacks. The attacker attempts to abuse valid existing software to perform malicious actions. This abuse is often difficult to detect.” ... Huntress analyst Pham said to defend against attacks combining Net Monitor for Employees Professional and SimpleHelp, infosec pros should inventory all applications so unapproved installations can be detected. Legitimate apps should be protected with robust identity and access management solutions, including multi-factor authentication. Net Monitor for Employees should only be installed on endpoints that don’t have full access privileges to sensitive data or critical servers, she added, because it has the ability to run commands and control systems. She also noted that Huntress sees a lot of rogue remote management tools on its customers’ IT networks, many of which have been installed by unwitting employees clicking on phishing emails. This points to the importance of security awareness training, she said. 


Why secure OT protocols still struggle to catch on

“Simply having ‘secure’ protocol options is not enough if those options remain too costly, complex, or fragile for operators to adopt at scale,” Saunders said. “We need protections that work within real-world constraints, because if security is too complex or disruptive, it simply won’t be implemented.” ... Security features that require complex workflows, extra licensing, or new infrastructure often lose out to simpler compensating controls. Operators interviewed said they want the benefits of authentication and integrity checks, particularly message signing, since it prevents spoofing and unauthorized command execution. ... Researchers identified cost as a primary barrier to adoption. Operators reported that upgrading a component to support secure communications can cost as much as the original component, with additional licensing fees in some cases. Costs also include hardware upgrades for cryptographic workloads, training staff, integrating certificate management, and supporting compliance requirements. Operators frequently compared secure protocol deployment costs with segmentation and continuous monitoring tools, which they viewed as more predictable and easier to justify. ... CISA’s recommendations emphasize phased approaches and operational realism. Owners and operators are advised to sign OT communications broadly, apply encryption where needed for sensitive data such as passwords and key exchanges, and prioritize secure communication on remote access paths and firmware uploads.


SaaS isn’t dead, the market is just becoming more hybrid

“It’s important to avoid overgeneralizing ‘SaaS,’” Odusote emphasized . “Dev tools, cybersecurity, productivity platforms, and industry-specific systems will not all move at the same pace. Buyers should avoid one-size-fits-all assumptions about disruption.” For buyers, this shift signals a more capability-driven, outcomes-focused procurement era. Instead of buying discrete tools with fixed feature sets, they’ll increasingly be able to evaluate and compare platforms that are able to orchestrate agents, adapt workflows, and deliver business outcomes with minimal human intervention. ... Buyers will likely have increased leverage in certain segments due to competitive pressure among new and established providers, Odusote said. New entrants often come with more flexible pricing, which obviously is an attraction for those looking to control costs or prove ROI. At the same time, traditional SaaS leaders are likely to retain strong positions in mission-critical systems; they will defend pricing through bundled AI enhancements, he said. So, in the short term, buyers can expect broader choice and negotiation leverage. “Vendors can no longer show up with automatic annual price increases without delivering clear incremental value,” Odusote pointed out. “Buyers are scrutinizing AI add-ons and agent pricing far more closely.”


When algorithms turn against us: AI in the hands of cybercriminals

Cybercriminals are using AI to create sophisticated phishing emails. These emails are able to adapt the tone, language, and reference to the person receiving it based on the information that is publicly available about them. By using AI to remove the red flag of poor grammar from phishing emails, cybercriminals will be able to increase the success rate and speed with which the stolen data is exploited. ... An important consideration in the arena of cyber security (besides technical security) is the psychological manipulation of users. Once visual and audio “cues” can no longer be trusted, there will be an erosion of the digital trust pillar. The once-recognizable verification process is now transforming into multi-layered authentication which expands the amount of time it takes to verify a decision in a high-pressure environment. ... AI’s misuse is a growing problem that has created a paradox. Innovation cannot stop (nor should it), and AI is helping move healthcare, finance, government and education forward. However, the rate at which AI has been adopted has surpassed the creation of frameworks and/or regulations related to ethics or security. As a result, cyber security needs to transition from a reactive to a predictive stance. AI must be used to not only react to attacks, but also anticipate future attacks. 


Those 'Summarize With AI' Buttons May Be Lying to You

Put simply, when a user visits a rigged website and clicks a "Summarize With AI" button on a blog post, they may unknowingly trigger a hidden instruction embedded in the link. That instruction automatically inserts a specially crafted request into the AI tool before the user even types anything. ... The threat is not merely theoretical. According to Microsoft, over a 60-day period, it observed 50 unique instances of prompt-based AI memory poisoning attempts for promotional purposes. ... AI recommendation poisoning is a sort of drive-by technique with one-click interaction, he notes. "The button will take the user — after the click — to the AI domain relevant and specific for one of the AI assistants targeted," Ganacharya says. To broaden the scope, an attacker could simply generate multiple buttons that prompt users to "summarize" something using the AI agent of their choice, he adds. ... Microsoft had some advice for threat hunting teams. Organizations can detect if they have been affected by hunting for links pointing to AI assistant domains and containing prompts with certain keywords like "remember," "trusted source," "in future conversations," and "authoritative source." The company's advisory also listed several threat hunting queries that enterprise security teams can use to detect AI recommendation poisoning URLs in emails and Microsoft Teams Messages, and to identify users who might have clicked on AI recommendation poisoning URLs.


EU Privacy Watchdogs Pan Digital Omnibus

The commission presented its so-called "Digital Omnibus" package of legal changes in November, arguing that the bloc's tech rules needed streamlining. ... Some of the tweaks were expected and have been broadly welcomed, such as doing away with obtrusive cookie consent banners in many cases, and making it simpler for companies to notify of data breaches in a way that satisfies the requirements of multiple laws in one go. But digital rights and consumer advocates are reacting furiously to an unexpected proposal for modifying the General Data Protection Regulation. ... "Simplification is essential to cut red tape and strengthen EU competitiveness - but not at the expense of fundamental rights," said EDPB chair Anu Talus in the statement. "We strongly urge the co-legislators not to adopt the proposed changes in the definition of personal data, as they risk significantly weakening individual data protection." ... Another notable element of the Digital Omnibus is the proposal to raise the threshold for notifying all personal data breaches to supervisory authorities. As the GDPR currently stands, organizations must notify a data protection authority within 72 hours of becoming aware of the breach. If amended as the commission proposes, the obligation would only apply to breaches that are "likely to result in a high risk" to the affected people's rights - the same threshold that applies to the duty to notify breaches to the affected data subjects themselves - and the notification deadline would be extended to 96 hours.


The Art of the Comeback: Why Post-Incident Communication is a Secret Weapon

Although technical resolutions may address the immediate cause of an outage, effective communication is essential in managing customer impact and shaping public perception—often influencing stakeholders’ views more strongly than the issue itself. Within fintech, a company's reputation is not built solely on product features or interface design, but rather on the perceived security of critical assets such as life savings, retirement funds, or business payrolls. In this high-stakes environment, even brief outages or minor data breaches are perceived by clients as threats to their financial security. ... While the natural instinct during a crisis (like a cyber breach or operational failure) is to remain silent to avoid liability, silence actually amplifies damage. In the first 48 hours, what is said—or not said—often determines how a business is remembered. Post-incident communication (PIC) is the bridge between panic and peace of mind. Done poorly, it looks like corporate double-speak. Done well, it demonstrates a level of maturity and transparency that your competitors might lack. ... H2H communication acknowledges the user’s frustration rather than just providing a technical error code. It recognizes the real-world impact on people, not just systems. Admitting mistakes and showing sincere remorse, rather than using defensive, legalistic language, makes a company more relatable and trustworthy. Using natural, conversational language makes the communication feel sincere rather than like an automated, cold response.


Why AI success hinges on knowledge infrastructure and operational discipline

Many organisations assume that if information exists, it is usable for GenAI, but enterprise content is often fragmented, inconsistently structured, poorly contextualised, and not governed for machine consumption. During pilots, this gap is less visible because datasets are curated, but scaling exposes the full complexity of enterprise knowledge. Conflicting versions, missing context, outdated material, and unclear ownership reduce performance and erode confidence, not because models are incapable, but because the knowledge they depend on is unreliable at scale. ... Human-in-the-loop processes struggle to keep pace with scale. Successful deployments treat HITL as a tiered operating structure with explicit thresholds, roles, and escalation paths. Pilot-style broad review collapses under volume; effective systems route only low-confidence or high-risk outputs for human intervention. ... Learning compounds over time as every intervention is captured and fed back into the system, reducing repeated manual review. Operationally, human-in-the-loop teams function within defined governance frameworks, with explicit thresholds, escalation paths, and direct integration into production workflows to ensure consistency at scale. In short, a production-grade human-in-the-loop model is not an extension of BPO but an operating capability combining domain expertise, governance, and system learning to support intelligent systems reliably.


Why short-lived systems need stronger identity governance

Consider the lifecycle of a typical microservice. In its journey from a developer’s laptop to production, it might generate a dozen distinct identities: a GitHub token for the repository, a CI/CD service account for the build, a registry credential to push the container, and multiple runtime roles to access databases, queues and logging services. The problem is not just volume; it is invisibility. When a developer leaves, HR triggers an offboarding process. Their email is cut, their badge stops working. But what about the five service accounts they hardcoded into a deployment script three years ago? ... In reality, test environments are often where attackers go first. It is the path of least resistance. We saw this play out in the Microsoft Midnight Blizzard attack. The attackers did not burn a zero-day exploit to break down the front door; they found a legacy test tenant that nobody was watching closely. ... Our software supply chain is held together by thousands of API keys and secrets. If we continue to rely on long-lived static credentials to glue our pipelines together, we are building on sand. Every static key sitting in a repo—no matter how private you think it is—is a ticking time bomb. It only takes one developer to accidentally commit a .env file or one compromised S3 bucket to expose the keys to the kingdom. ... Paradoxically, by trying to control everything with heavy-handed gates, we end up with less visibility and less control. The goal of modern identity governance shouldn’t be to say “no” more often; it should be to make the secure path the fastest path.


India's E-Rupee Leads the Secure Adoption of CBDCs

India has the e-rupee, which will eventually be used as a legal tender for domestic payments as well as for international transactions and cross-border payments. Ever since RBI launched the e-rupee, or digital rupee, in December 2022, there has been between INR 400 to 500 crore - or $44 to $55 million - in circulation. Many Indian banks are participating in this pilot project. ... Building broad awareness of CBDCs as a secure method for financial transactions is essential. Government and RBI-led awareness campaigns highlighting their security capability can strengthen user confidence and drive higher adoption and transaction volumes. People who have lost money due to QR code scams, fake calls, malicious links and other forms of payment fraud need to feel confident about using CBDCs. IT security companies are also cooperating with RBI to provide data confidentiality, transaction confidentiality and transaction integrity. E-transactions will be secured by hashing, digital signing and [advanced] encryption standards such as AES-192. This can ensure that the transaction data is not tampered with or altered. ... HSMs use advanced encryption techniques to secure transactions and keys. The HSM hardware [boxes] act as cryptographic co-processors and accelerate the encryption and decryption processes to minimize latency in financial transactions. 

Daily Tech Digest - February 12, 2026


Quote for the day:

"Do not follow where the path may lead. Go instead where there is no path and leave a trail." -- Muriel Strode



The hard part of purple teaming starts after detection

Imagine you’re driving, and you see the car ahead braking suddenly. Awareness helps, but it’s your immediate reaction that avoids the collision. Insurance plans don’t matter at that moment. Nor do compliance reports or dashboards. Only vigilance and rehearsal matter. Cyber resilience works the same way. You can’t build the instinct required to act by running one simulation a year. You build it through repetition. Through testing how specific scenarios unfold. Through examining not only how adversaries get in, but also how they move, escalate, evade, and exfiltrate. This is the heart of real purple teaming. ... AI can accelerate analysis, but it can’t replace intuition, design, or the judgment required to act. If the organization hasn’t rehearsed what to do when the signal appears, AI only accelerates the moment when everyone realises they don’t know what happens next. This is why so much testing today only addresses opportunistic attacks. It cleans up the low-hanging fruit. ... The standard testing model traps everyone involved: One-off tests create false confidence; Scopes limit imagination. Time pressure eliminates depth; Commercial structures discourage collaboration; Tooling gives the illusion of capability; and Compliance encourages the appearance of rigour instead of the reality of it. This is why purple teaming often becomes “jump out, stabilize, pull the chute, roll on landing.” But what about the hard scenarios? What about partial deployments? What about complex failures? That’s where resilience is built.


State AI regulations could leave CIOs with unusable systems

Numerous states are considering AI regulations for systems used in medical care, insurance, human resources, finance and other critical areas. ... Despite the growing regulatory risk, businesses appear unwilling to slow AI deployments. "Moving away from AI with the regulation is not going to be an option for us," Juttiyavar said. He said AI is already deeply embedded in how organizations operate and is essential for speed and competitiveness. ... If CIOs establish strong internal frameworks for AI deployment, "that helps you react better to legislative change" and anticipate new requirements, Kourinian said. Still, regulatory shifts can leave companies with systems that are technically sound but legally unusable, said Peter Cassat, a partner at CM Law. To manage that risk, Cassat advises CIOs to negotiate "change of law" provisions in vendor contracts that provide termination rights if regulations make continued use of a system impossible or impractical. But such provisions do not eliminate the risk of sunk costs. "If it's a SaaS provider and you've signed a three-year term, they don't want to necessarily let you walk for free either," Cassat said. Beyond legal exposure, CIOs must also anticipate public and political reaction to AI and biometric tools. "The CIO absolutely has the responsibility to understand how this technology could be perceived -- not just internally, but by the public and lawmakers," said Mark Moccia, an analyst at Forrester Research.


Your dev team isn’t a cost center — it’s about to become a multiplier

If you treat AI as a pathway to eliminate developer headcount, sure, you’ll capture some cost savings in the short term. But you’ll miss the bigger opportunity entirely. You’ll be the bank executive in 1975 who saw ATMs and thought, “Great, we can close branches and fire tellers.” Meanwhile, your competitors have automated the mundane teller tasks and are opening new branches to sell higher-end services to more people. The 1.4-1.6x productivity improvement that GDPval documented isn’t about doing the same work with fewer people. It’s about doing vastly more work with the same people. That new product idea you had that was 10x too expensive to develop? It’s now possible. That customer experience improvement that could drive loyalty that you didn’t have the headcount for? It’s on the table. The technical debt you’ve been accumulating? You can start to pay it down. ... What struck me about Werner’s final keynote wasn’t the content, it was the intent. This was Werner’s last time at that podium. He could have done a victory lap through AWS’s greatest hits. Instead, he spent his time outlining a framework of success for the next generation of developers. For those of us leading technology organizations, the framework is both validating and challenging. Validating because these traits aren’t new. They have always separated good developers from great ones. Challenging because AI amplifies everything, including the gaps in our capabilities.


Cloud teams are hitting maturity walls in governance, security, and AI use

Migration activity remains heavy across enterprises, especially for data platforms. At the same time, downtime tolerance is limited. Nearly half of respondents said their organizations can accept only one to six hours of downtime for cutover during migration. That combination creates pressure to migrate at speed while keeping data integrity intact. In regulated environments, that pressure extends to audit evidence and compliance validation, which often needs to be produced in parallel with migration execution. ... Cloud-native managed database adoption is also high. More than half of respondents reported using managed cloud databases, and a third reported using SaaS-based database services. Only 10% reported operating self-hosted databases. This shift toward managed services reduces operational burden on infrastructure teams, but it increases reliance on identity governance, network segmentation, and application-layer security controls. It also creates stronger dependency on cloud provider logging and access models. ... Development stacks also reflect this shift. Python was reported as a primary language, with Java close behind. These languages remain central to AI workflows, data engineering, and enterprise application back ends. Machine learning adoption is also widespread since organizations reported actively training ML models. Many of these pipelines are now part of production environments, making operational continuity a priority.


MIT's new fine-tuning method lets LLMs learn new skills without losing old ones

To build truly adaptive AI, the industry needs to solve "continual learning," allowing systems to accumulate knowledge much like humans do throughout their careers. The most effective way for models to learn is through "on-policy learning.” In this approach, the model learns from data it generates itself allowing it to correct its own errors and reasoning processes. This stands in contrast to learning by simply mimicking static datasets. ... The standard alternative is supervised fine-tuning (SFT), where the model is trained on a fixed dataset of expert demonstrations. While SFT provides clear ground truth, it is inherently "off-policy." Because the model is just mimicking data rather than learning from its own attempts, it often fails to generalize to out-of-distribution examples and suffers heavily from catastrophic forgetting. SDFT seeks to bridge this gap: enabling the benefits of on-policy learning using only prerecorded demonstrations, without needing a reward function. ... For teams considering SDFT, the practical tradeoffs come down to model size and compute. The technique requires models with strong enough in-context learning to act as their own teachers — currently around 4 billion parameters with newer architectures like Qwen 3, though Shenfeld expects 1 billion-parameter models to work soon. It demands roughly 2.5 times the compute of standard fine-tuning, but is best suited for organizations that need a single model to accumulate multiple skills over time, particularly in domains where defining a reward function for reinforcement learning is difficult or impossible.


The Illusion of Zero Trust in Modern Data Architectures

Modern data stacks stretch far beyond a single system. Data flows from SaaS tools into ingestion pipelines, through transformation layers, into warehouses, lakes, feature stores, and analytics tools. Each hop introduces a new identity, a new permission model, and a new surface area for implicit trust. Not to mention, niches like healthcare data storage are a completely different beast. Whatever the system may be, teams may enforce strict access at the perimeter while internal services freely exchange data with long-lived credentials and broad scopes. This is where the illusion forms. Zero Trust is declared because no user gets blanket access, yet services trust other services almost entirely. Tokens are reused, roles are overprovisioned, and data products inherit permissions they were never meant to have. The architecture technically verifies everything, but conceptually trusts too much. ... Data rarely stays where Zero Trust policies are strongest. Warehouses enforce row-level security, masking, and role-based access, but data doesn’t live exclusively in warehouses. Extracts are generated, snapshots are shared, and datasets are copied into downstream systems for performance or convenience. Each copy weakens the original trust guarantees and problems worse than increasing cloud costs come to fruition. Once data leaves its source, context is often stripped away.


Top Cyber Industry Defenses Spike CO2 Emissions

Though rarely discussed, like any other technologies, cybersecurity protections carry their own costs to the planet. Programs run on electricity. Servers demand water. Devices are built from natural resources and eventually get thrown out. ... "CISOs can help or make the situation worse [when it comes to] sustainability, depending on the way they write security rules," he says. "And that's why we started a study: to enable the CISO to be part of the sustainability process of his or her company, and to find actionable ways to reduce CO2 consumption while at the same time not adding more risks." ... "We collect a lot of logs, not exactly always knowing why, and the retention period is a huge cost in terms of infrastructure, and also CO2," Billois says. "So at some point, you can revisit your log collection, and log retention, and if there are no legal issues, you can think about compressing them to reduce their volume. It's something that is, I would say, quite easy to do. ... All of that said, unfortunately, the biggest cyber polluter, by far, is also the most difficult to scale back without incurring risk. Some companies can swap underutilized physical infrastructure for virtualized backups, which eat less power, if they're not already doing that; but there are few other great ways to make cyber resilience more efficient. "You can reduce CO2 [from backups] very easily: you stop buying two servers, or you stop having a duplicate of all your data," Billois says.


Five ways quantum technology could shape everyday life

There is growing promise of quantum technology’s ability to solve problems that today’s systems struggle to overcome, or cannot even begin to tackle, with implications for industry, national security and everyday life. ... In healthcare, faster drug discovery could bring quicker response to outbreaks and epidemics, personalised medicine and insight into previously inscrutable biological interactions. Quantum simulation of how materials behave could lead to new high efficiency energy materials, catalysts, alloys and polymers. ... In medicine, quantum sensors could improve diagnostic capabilities via more sensitive, quicker and noninvasive imaging modes. In environmental monitoring, these sensors could track delicate shifts beneath the Earth’s surface, offer early warnings of seismic activity, or detect trace pollutants in air and water with exceptional accuracy. ... Airlines and rail networks could automatically reconfigure to avoid cascading delays, while energy providers might balance renewable generation, storage and consumption with far greater precision. Banks could use quantum computers to evaluate numerous market scenarios in parallel, informing the management of investment portfolios. ... While still at an early stage of development, quantum algorithms might accelerate a subset of AI called machine learning (where algorithms improve with experience), help simulate complex systems, or optimise AI architectures more efficiently.


Nokia predicts huge WAN traffic growth, but experts question assumptions

“Consumer- and enterprise-generated AI traffic imposes a substantial impact on the wide-area network (WAN) by adding AI workloads processed by data centers across the WAN. AI traffic does not stay inside one data center; it moves across edge, metro, core, and cloud infrastructure, driving dense lateral flows and new capacity demands,” the report says. An explosion in agentic AI applications further fuels growth “by inducing extra machine-to-machine (M2M) traffic in the background,” Nokia predicts. “AI traffic isn’t just creating more demand inside data centers; it’s driving a sustained surge of traffic between them. AI inferencing traffic—both user-initiated and agentic-AI-induced M2M—moving over inter-data-center links grows at a 20.3% CAGR through 2034.” ... Global enterprise and industrial traffic, including fixed wireless access, will also steadily rise over the next decade, “as more operations, machines, and workers become digitally connected,” Nokia predicts. “Pervasive automation, high-resolution video, AI-driven analytics, and remote access to industrial systems,” will drive traffic growth. “Factory lines are streaming machine vision data to the cloud. AI copilots are assisting personnel in real time. Field teams are using AR instead of manuals. Robots are coordinating across sites,” the Nokia report says. “Industrial systems are continuously sending telemetry over the WAN instead of keeping it on-site. This shift makes wide-area connectivity part of the core production workflow.”


The death of reactive IT: How predictive engineering will redefine cloud performance in 10 years

Reactive monitoring fails not because tools are inadequate, but because the underlying assumption that failures are detectable after they occur no longer holds true. Modern distributed systems have reached a level of interdependence that produces non-linear failure propagation. A minor slowdown in a storage subsystem can exponentially increase tail latencies across an API gateway. ... Predictive engineering is not marketing jargon. It is a sophisticated engineering discipline that combines statistical forecasting, machine learning, causal inference, simulation modeling and autonomous control systems. ... Predictive engineering will usher in a new operational era where outages become statistical anomalies rather than weekly realities. Systems will no longer wait for degradation, they will preempt it. War rooms will disappear, replaced by continuous optimization loops. Cloud platforms will behave like self-regulating ecosystems, balancing resources, traffic and workloads with anticipatory intelligence. ... In distributed networks, routing will adapt in real time to avoid predicted congestion. Databases will adjust indexing strategies before query slowdowns accumulate. The long-term trajectory is unmistakable: autonomous cloud operations. Predictive engineering is not merely the next chapter in observability, it is the foundation of fully self-healing, self-optimizing digital infrastructure. 

Daily Tech Digest - February 11, 2026


Quote for the day:

"What you do has far greater impact than what you say." -- Stephen Covey



Predicting the future is easy — deciding what to do is the hard part

The prescriptive analysis assists in developing strategies to optimize operations, increase profitability, and reduce risks. Traditionally, linear and non-linear programming models are used for resource allocation, supply chain management, and portfolio optimization. ... In enterprise decision-making, both predictive and prescriptive analytics play an important role. Predictive analytics enables forecasting possible business outcomes, while prescriptive analytics uses these forecasts to create a strategy to maximize business profits. However, enterprises often fail to integrate these two analytics techniques in an effective way for their own benefit. ... The integration of AI agents in predictive and prescriptive analytics workflows has not been explored much by data science professionals. However, a consolidated AI agentic framework can be developed that makes integrated use of predictive and prescriptive analytics in a combined way. ... On implementing the AI agentic framework, the industries experienced better forecasts through efficient predictive analytics. On the other hand, prescriptive analytics helped businesses in making their workflows more adaptable. Despite this success, high computational costs and explainability still remain a major challenge. To overcome these setbacks, an enterprise can further invest in developing multi-modal predictive-prescriptive AI agents and neuro-symbolic agents.


Agile development might be 25 years old, but it’s withstood the test of time – and there’s still more to come in the age of AI

Key focus areas of the Agile Manifesto helped drastically simplify software development, Reynolds noted. By moving teams to smaller more regular releases, for example, this “shortened feedback loops” typically associated with Waterfall and improved flexibility throughout the development lifecycle. “That reduced risk made it easier to respond to customer and business needs, and genuinely improved software quality,” he told ITPro. “Smaller changes meant testing could happen continuously, rather than being bolted on at the end.” The longevity of Agile methodology is testament to its impact, and research shows it’s still highly popular. ... According to Kern, AI and Agile are “a match made in heaven” and the advent of the technology means this approach is no longer optional, albeit with a notable caveat. “You need it more than ever,” he said. “You can build so much more in less time, which can also magnify potential pitfalls if you’re not careful. The speed of delivery with AI can easily outpace feedback, but that’s an exciting opportunity, not a flaw.” Reynolds echoed those comments, noting that while Agile can be a force multiplier for teams, there are still risks – particularly with the influx of AI-generated code in software development. “Those gains are often offset downstream, creating more bugs, higher cloud costs, and greater security exposure. The real value comes when AI is extended beyond code creation into testing, quality assurance, and deployment,” he said.


CISOs must separate signal from noise as CVE volume soars

“While the number of vulnerabilities goes up, what really matters is which of these are going to be exploited,” Michael Roytman, co-founder and CTO of Empirical Security, tells CSO. “And that’s a different process. It does not depend on the number of vulnerabilities that are out there because sometimes an exploit is written before the CVE is even out there.” What FIRST’s forecast highlights instead is a growing signal-to-noise problem, one that strains already overburdened security teams and raises the stakes for prioritization, automation, and capacity planning rather than demanding that organizations patch more flaws exponentially. ... Despite the scale of the forecast, experts stress that vulnerability volume alone is a poor proxy for enterprise risk. “The risk to an enterprise is not directly related to the number of vulnerabilities released,” Empirical Security’s Roytman says. “It is a separate process.” ... For CISOs, the implication is that patching strategies are now more about scaling decision-making processes that were already under strain. ... The cybersecurity industry is not facing an explosion of exploitable weaknesses so much as an explosion of information. For CISOs, success in 2026 will depend less on reacting faster and more on deciding better — using automation and context to ensure that rising vulnerability counts do not translate into rising risk. “It hasn’t been a human-scale problem for some time now,” Roytman says. 


Strengthening a modern retail cybersecurity strategy

Enterprises might declare robust cybersecurity strategies yet fail to adequately address the threats posed by complex supply chains and aggressive digital transformation efforts. To bridge this gap, at Groupe Rocher, we have chosen to integrate cybersecurity into the core business strategy, ensuring that security measures are not only reactive but also predictive, leveraging threat intelligence to anticipate and mitigate risks effectively. ... It’s also important to remember that vulnerabilities aren’t always about technology. Often, they come from poor practices, like using weak passwords, having too much access, or not using multi-factor authentication (MFA). Criminals might use phishing or social engineering attacks to steal access from their victims. ... Additionally, fostering open communication and collaboration with vendors can help identify potential vulnerabilities early. We regularly organize workshops and joint security drills that can enhance mutual understanding and preparedness. By building strong partnerships and emphasizing shared security goals, brands can create a resilient network that not only protects their interests but also strengthens the entire ecosystem against evolving threats. ... As both regulators and consumers become less accepting of business models that prioritize data above all else, retail and beauty brands need to change how they protect data, focusing more on privacy and transparency.


OT Attacks Get Scary With 'Living-off-the-Plant' Techniques

For a number of reasons, ransomware against IT is affecting OT," Derbyshire explains. "This can occur due to, for example, convergences within the IT environment, that the OT simply cannot function without relying upon. Or a complete lack of trust in security controls or network architecture from the IT or OT security teams, so they voluntarily shut down the OT systems or sever the connection to kind of prevent the spread [of an IT attack]. Colonial Pipeline style. ... With a holistic understanding of how OT works, and knowledge of how a given OT site works, suddenly new threat vectors come into focus, which can blend with operational systems as elegantly as LotL attacks do Windows or Linux systems. For instance, Derbyshire plans to demonstrate at RSAC how an attacker can weaponize S7comm, Siemens' proprietary protocol for communication between programmable logic controllers (PLCs). He'll show how, by manipulating frequently overlooked configuration fields in S7comm, an attacker could potentially leak sensitive data and transmit attacks across devices. He calls it "an absolute brain melter." ... there are plenty of resources attackers can turn to to understand OT products better, be they textbooks, chatbots, or even just buying a PLC on a secondhand marketplace. "It still takes a bit of investment or a bit of time going out of your way to find these obscure things. But it's never been impossible and it's only getting easier," Derbyshire says.


The missing layer between agent connectivity and true collaboration

Today's AI challenge is about agent coordination, context, and collaboration. How do you enable them to truly think together, with all the contextual understanding, negotiation, and shared purpose that entails? It's a critical next step toward a new kind of distributed intelligence that keeps humans firmly in the loop. ... While protocols like MCP and A2A have solved basic connectivity, and AGNTCY tackles the problems of discovery, identity management to inter-agent communication and observability, they've only addressed the equivalent of making a phone call between two people who don't speak the same language. But Pandey's team has identified something deeper than technical plumbing: the need for agents to achieve collective intelligence, not just coordinated actions. ... "We have to mimic human evolution,” Pandey explained. “In addition to agents getting smarter and smarter, just like individual humans, we need to build infrastructure that enables collective innovation, which implies sharing intent, coordination, and then sharing knowledge or context and evolving that context.” ... Guardrails remain a central challenge in deploying multi-functional agents that touch every part of an organization's system. The question is how to enforce boundaries without stifling innovation. Organizations need strict, rule-like guardrails, but humans don't actually work that way. Instead, people operate on a principle of minimal harm, or thinking ahead about consequences and making contextual judgments.


Cyber firms face ‘verification crisis’ on real risk

Continuous Threat Exposure Management, commonly referred to as CTEM, has become more widely adopted as a way to structure security work around an organisation's exposure to attack. Even so, only 33% of organisations measure whether exploitable risk is actually reduced over time, according to the report. Instead, most programmes continue to track metrics focused on discovery and volume, such as coverage gaps, asset counts and alert volume. These measures can show rising activity and expanding scope, but they do not necessarily show whether the organisation has reduced the likelihood of a successful attack. "Security programs keep adding tools and expanding scope, but outcomes aren't improving," said Rogier Fischer, CEO and co-founder of Hadrian. ... According to the report, these vulnerabilities were not unknown. They were identified and recorded, but competed for attention as security teams dealt with new alerts, new tickets and the ongoing output of multiple tools. In organisations with complex technology estates, this can create a persistent backlog in which older issues remain unresolved while new potential risks continue to surface. "Security teams can move fast, but too many tools and unverified alerts make it difficult to maintain focus on what actually matters," Fischer said. The report calls for earlier validation of exploitability and success measures that focus on reducing real exposure rather than the number of findings generated.


Trust and Compliance in the Age of AI: Navigating the Risks of Intelligent Software Development

One of the most pressing challenges is trust in AI-generated outputs: Many teams report minimal productivity gains despite operational deployment, citing issues such as hallucinated code, misleading suggestions, and a lack of explainability. This trust gap is amplified by the opaque nature of many AI systems; developers often report struggling to understand how models arrive at decisions, making it difficult for them to validate outputs or debug errors. This lack of transparency, known as black box AI, puts teams at risk of accepting flawed code or test cases, potentially introducing vulnerabilities or performance regressions. ... AI's reliance on data introduces significant compliance risks, especially when proprietary documentation or sensitive datasets are used to train models. Continuing to conduct business the old-fashioned way is not the answer because traditional compliance frameworks often lag behind AI innovation, and governance models built for deterministic systems struggle with probabilistic outputs and autonomous decision-making. ... Another risk with potentially serious consequences: AI-generated code often lacks context. It may not align with architectural patterns, business rules, or compliance requirements, and without rigorous review, these changes can degrade system integrity and increase technical debt. It also must be noted that faster code generation does not equal better code. There is a risk of "bloated" or unsecure code being generated, requiring rigorous validation.


The Cost of AI Slop in Lines of Code

Before we can get to the problem of excessive lines of code, we need to understand how LLMs arrived at the generation of code with unnecessary lines. The answer is in the training dataset and how that dataset was sourced from publicly accessible places, including open repositories on Github and coding websites. These sources lack any form of quality control, and therefore the code the LLMs learned on is of varying quality. ... In the quest to get as much training data as possible, there was little effort available to vet the training data to ensure that it was good training data. The result LLMs outputting the kind of code written by a first-year developer – and that should be concerning to us. ... Some of the common vulnerabilities that we’ve known about for decades, including cross-site scripting, SQL injection, and log injection, are the kinds of vulnerabilities that AI introduces into the code – and it generates this code at rates that are multiples of what even junior developers produce. In a time when it’s important that we be more cautious about security, AI can’t do it. ... Today, we have AI generating bloated code that creates maintenance problems, and we’re looking the other way. It can’t structure code to minimize code duplication. It doesn’t care that there are two, three, four, or more implementations of basic operations that could be made into one generic function. The code it was trained on didn’t generate the abstractions to create the right functions, so it can’t get there.


Why Jurisdiction Choice Is the Newest AI Security Filter

AI moves exponentially faster than legislation and regulations ever could. By the time that sector regulators or governing bodies have drafted frameworks, held consultations, and passed laws through their incumbent democratic processes, the technology has already evolved and scaled far ahead. Not to be too hyperbolic, but the rules could prove irrelevant for a widely-adopted technology and solution that's far outpaced them. This creates what's been dubbed the "speed of instinct" challenge. In essence, how can you possibly regulate something that reinvents itself regularly? ... Rather than attempting to codify every possible and conceivable AI scenario into law, Gibraltar developed a principles-based framework, emphasizing clarity, proportionality, and innovation. Essentially, the framework recognizes that AI regulations must be adaptive and not binary. ... While frameworks exist at both ends of the spectrum—with some enforcing strict rules and others encouraging innovation with AI technology—neither solution is inherently superior. The EU model provides more certainty and protection for humans, but the agile model has merit with responsive governance and the encouragement of rapid innovation. For cybersecurity teams deploying AI, the smart strategy is understanding both standpoints and choosing jurisdictions strategically and with informed processes. Scale and implications matter profoundly; a customer chatbot may have fewer jurisdictional considerations than an internal threat intelligence platform.

Daily Tech Digest - February 10, 2026


uote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward



AQ Is The New EQ: Why Adaptability Now Defines Success

AQ describes the ability to adjust thinking, behaviour, and strategy in response to rapid change and uncertainty. Unlike IQ, which measures cognitive capacity, or EQ, which best captures emotional regulation, AQ predicts how quickly someone can learn, unlearn, and recalibrate when conditions change. ... One key reason AQ is eclipsing other forms of intelligence is that it is dynamic rather than static. IQ remains stable across adulthood for the most part. Adaptability, however, varies with experience, exposure to stress, and environmental demands. Research on psychological flexibility shows that people who can manage ambiguity and shift perspectives under pressure are more likely to adapt effectively to uncertainty. ... At the end of the day, AQ is neither fixed nor innate. When it comes to learning and organizational development, adaptability can be strengthened deliberately through structured challenges, supportive feedback loops, and reflective practices. ... Adaptable people seek feedback, revise strategies quickly when presented with new evidence, don’t get stuck, and remain effective even when the rules of the game are shifting under their feet. This high degree of cognitive flexibility - the ability to shift between problem-solving approaches versus defaulting to the “but we’ve always done it this way” approach - best predicts effective decision-making under stress.


Why AI Governance Risk Is Really a Data Governance Problem

Modern enterprise AI systems now use retrieval-augmented generation, which has further exacerbated these weaknesses. Trained AI models retrieve context from internal repositories during inference, pulling from file shares, collaboration platforms, CRM systems and knowledge bases. That retrieval layer must extract meaning from complex documents, preserve structure, generate AI embeddings and retrieve relevant fragments - while enforcing the same access controls as the source systems. This is where governance assumptions begin to break down. ... "We have to accept two things: Data will never be fully governed. Second, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. AI-first organizations are, therefore, exposing curated proprietary data as reusable "data products" that can be consumed by both humans and AI agents. The alternative is growing risk. As AI systems integrate more deeply with enterprise applications, APIs have become a critical but often under-governed data pathway. ... Regulators are converging on the same conclusion: AI accountability depends on data governance. Data protection regimes such as GDPR already require accuracy, purpose limitation and security. Emerging AI regulations, including the EU AI Act, explicitly tie AI risk to data sourcing, preparation and governance practices. 


Is AI killing open source?

It takes a developer 60 seconds to prompt an agent to fix typos and optimize loops across a dozen files. But it takes a maintainer an hour to carefully review those changes, verify they do not break obscure edge cases, and ensure they align with the project’s long-term vision. When you multiply that by a hundred contributors all using their personal LLM assistants to help, you don’t get a better project. You get a maintainer who just walks away. ... On one side, we’ll have massive, enterprise-backed projects like Linux or Kubernetes. These are the cathedrals, the bourgeoisie, and they’re increasingly guarded by sophisticated gates. They have the resources to build their own AI-filtering tools and the organizational weight to ignore the noise. On the other side, we have more “provincial” open source projects—the proletariat, if you will. These are projects run by individuals or small cores who have simply stopped accepting contributions from the outside. The irony is that AI was supposed to make open source more accessible, and it has. Sort of. ... Open source isn’t dying, but the “open” part is being redefined. We’re moving away from the era of radical transparency, of “anyone can contribute,” and heading toward an era of radical curation. The future of open source, in short, may belong to the few, not the many. ... In this new world, the most successful open source projects will be the ones that are the most difficult to contribute to. They will demand a high level of human effort, human context, and human relationship.


Designing Effective Multi-Agent Architectures

Some coordination patterns stabilize systems. Others amplify failure. There is no universal best pattern, only patterns that fit the task and the way information needs to flow. ... Neural scaling1 is continuous and works well for models. As shown by classic scaling laws, increasing parameter count, data, and compute tends to result in predictable improvements in capability. This logic holds for single models. Collaborative scaling,2 as you need in agentic systems, is different. It’s conditional. It grows, plateaus, and sometimes collapses depending on communication costs, memory constraints, and how much context each agent actually sees. Adding agents doesn’t behave like adding parameters. This is why topology matters. Chains, trees, and other coordination structures behave very differently under load. Some topologies stabilize reasoning as systems grow. Others amplify noise, latency, and error. ... If your multi-agent system is failing, thinking like a model practitioner is no longer enough. Stop reaching for the prompt. The surge in agentic research has made one truth undeniable: The field is moving from prompt engineering to organizational systems. The next time you design your agentic system, ask yourself: How do I organize the team? (patterns); Who do I put in those slots? (hiring/architecture); and Why could this fail at scale? (scaling laws) That said, the winners in the agentic era won’t be those with the smartest instructions but the ones who build the most resilient collaboration structures.


Never settle: How CISOs can go beyond compliance standards to better protect their organizations

A CISO running a compliant program may only review a vendor once a year or after significant system changes. Compliance standards haven’t caught up to the best practice of continuously monitoring vendors to stay on top of third-party risk. This highlights one of the most unfortunate incentives any CISO who manages a compliance program knows: It is often easier to set a less stringent standard and exceed it than to set a better target and risk missing it. ... One of the most common shortfalls of compliance-driven risk assessments is simplistic math around likelihood and impact. Many of the emergent risks mentioned above have a lower likelihood but an extremely high impact and even a fair amount of uncertainty around timeframes. Using this simplistic math, these tail risks do not often bubble up organically; instead, they have to be pulled up from the batch of lower frequency-x-impact scoring. Defining that impact in dollars and cents cuts through the noise. ... If your budget has already been approved without these focus areas in mind, now is the time to start weaving a risk-first approach into discussions with your board. You should be talking about this year-round, not only during budget season when it’s time to present your plan. It will position security as a way to protect revenue, improve capital efficiency, preserve treasury integrity and optimize costs, rather than a cost center.


India Reveals National Plan for Quantum-Safe Security

India is building a foundation to address the national security risks posed by quantum computing through the implementation of a Quantum Safe Ecosystem. As quantum computing rapidly advances, the Task Force, formed under the National Quantum Mission (NQM), has outlined critical steps for India to safeguard its digital infrastructure and maintain economic resilience. ... Critical Information Infrastructure sectors — including defense, power, telecommunications, space and core government systems — are identified as the highest priority for early adoption. According to the report, these sectors should begin formal implementation of post-quantum cryptography by 2027, with accelerated migration schedules reflecting the long operational lifetimes and high-risk profiles of their systems. The task force notes that these environments often support sensitive communications and control functions that must remain confidential for decades, making them especially vulnerable to “harvest now, decrypt later” attacks. ... To support large-scale adoption of post-quantum cryptography, the task force recommends the creation of a national testing and certification framework designed to bring consistency, credibility and risk-based assurance to quantum-safe deployments. Rather than mandating a single technical standard across all use cases, the proposed framework aligns levels of evaluation with the operational criticality of the system being secured.


TeamPCP Worm Exploits Cloud Infrastructure to Build Criminal Infrastructure

TeamPCP is said to function as a cloud-native cybercrime platform, leveraging misconfigured Docker APIs, Kubernetes APIs, Ray dashboards, Redis servers, and vulnerable React/Next.js applications as main infection pathways to breach modern cloud infrastructure to facilitate data theft and extortion. In addition, the compromised infrastructure is misused for a wide range of other purposes, ranging from cryptocurrency mining and data hosting to proxy and command-and-control (C2) relays. Rather than employing any novel tradecraft, TeamPCP leans on tried-and-tested attack techniques, such as existing tools, known vulnerabilities, and prevalent misconfigurations, to build an exploitation platform that automates and industrializes the whole process. This, in turn, transforms the exposed infrastructure into a "self-propagating criminal ecosystem," Flare noted. Successful exploitation paves the way for the deployment of next-stage payloads from external servers, including shell- and Python-based scripts that seek out new targets for further expansion. ... "The PCPcat campaign demonstrates a full lifecycle of scanning, exploitation, persistence, tunneling, data theft, and monetization built specifically for modern cloud infrastructure," Morag said. "What makes TeamPCP dangerous is not technical novelty, but their operational integration and scale. Deeper analysis shows that most of their exploits and malware are based on well-known vulnerabilities and lightly modified open-source tools."


The evolving AI data center: Options multiply, constraints grow, and infrastructure planning is even more critical

AI use has moved from experiment to habit. Usage keeps growing in both consumer and enterprise settings. Model design has also diversified. Some workloads are dominated by large training runs. Others are dominated by inference at scale. Agentic systems add a different pattern again (e.g., long-lived sessions, many tool calls). From an infrastructure standpoint, that tends to increase sustained utilisation of accelerators and networks. ... AI also increases the importance of connectivity between data centers. Training data must move. Checkpoints and replicas must be protected. Inference often runs across regions for latency, resilience, and capacity balancing. As a result, data center interconnect (DCI) is scaling, with operators planning for multi-terabit campus capacity and wide-area links that support both throughput and operational resilience. This reinforces a simple point: the AI infrastructure story is not confined to a single room or building. The ‘shape’ of the network increasingly includes campus, metro, and regional connectivity. ... Connectivity has to match that reality. The winners will be connectivity systems that are: Dense but serviceable – designed for access, not just packing factor; Repeatable – standard blocks that can be deployed many times; Proven – inspection discipline, and documentation that survives handoffs; Compatible with factory workflows – pre-terminated assemblies and predictable integration steps; and Designed for change – expansion paths that do not degrade order and legibility.


Living off the AI: The Next Evolution of Attacker Tradecraft

Organizations are rapidly adopting AI assistants, agents, and the emerging Model Context Protocol (MCP) ecosystem to stay competitive. Attackers have noticed. Let’s look at how different MCPs and AI agents can be targeted and how, in practice, enterprise AI becomes part of the attacker’s playbook. ... With access to AI tools, someone with minimal expertise can assemble credible offensive capabilities. That democratization changes the risk calculus. When the same AI stack that accelerates your workforce also wields things like code execution, file system access, search across internal knowledge bases, ticketing, or payments, then any lapse in control turns into real business impact. ... Unlike smash‑and‑grab malware, these campaigns piggyback on your sanctioned AI workflows, identities, and connectors. ... Poorly permissioned tools let an agent read more data than it needs. An attacker nudges the agent to chain tools in ways the designer didn’t anticipate. ... If an agent learns from prior chats or a shared vector store, an attacker can seed malicious “facts” that reshape future actions—altering decisions, suppressing warnings, or inserting endpoints used for data exfiltration that look routine. ... Teams that succeed make AI security boring: agents have crisp scopes; high‑risk actions need explicit consent; every tool call is observable; and detections catch weird behavior quickly. In that world, an attacker can still try to live off your AI, sure, but they’ll find themselves fenced in, logged, rate‑limited, and ultimately blocked.


Observability has become foundational to the success of AI at scale

Today, CIOs are faced with more data, but less visibility. As AI adoption accelerates data growth, many enterprises struggle with fragmented tools, inconsistent governance, and rising ingestion costs. This leads to critical blind spots across security, performance, and user experience, precisely when operational intelligence matters most. ... The most important mindset shift for CIOs in India is this: AI success starts with getting your data house in order and observability is the discipline that makes that possible at scale. AI does not fail because of algorithms, it fails because of fragmented, low-quality, and poorly governed data. In fact, low data quality is the main barrier to AI readiness, even as organisations accelerate AI adoption. Without clean, accurate, timely, and well-governed data flowing through systems, AI outcomes become unreliable, opaque, and difficult to trust. This is where observability becomes a business catalyst, beyond monitoring. Observability ensures that data powering AI is continuously validated, contextualised, and actionable across applications, infrastructure, and increasingly, AI workloads themselves. At the same time, forward-looking organisations are shifting toward smarter practices such as data lifecycle management, data quality, data reuse, and federation.These practices not only reduce cost and complexity but improve AI accuracy, bias reduction, and decision-making outcomes.