Daily Tech Digest - February 14, 2026


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



UK CIOs struggle to govern surge in business AI agents

The findings point to a growing governance challenge alongside the rapid spread of agent-based systems across the enterprise. AI agents, which can take actions or make decisions within software environments, have moved quickly from pilots into day-to-day operations. That shift has increased demands for monitoring, audit trails and accountability across IT and risk functions. UK CIOs also reported growing concern about the spread of internally built tools. ... The results suggest "shadow AI" risks are becoming a mainstream issue for large organisations. As AI development tools get easier to use, more staff outside IT can build automated workflows, chatbots and agent-like applications. This trend has intensified questions about data access, model behaviour, and whether organisations can trace decisions back to specific inputs and approvals. ... The findings also suggest governance gaps are already affecting operations. Some 84% of UK CIOs said traceability or explainability shortcomings have delayed or prevented AI projects from reaching production, highlighting friction between the push to deploy AI and the work needed to demonstrate effective controls. For CIOs, the issue also intersects with enterprise risk management and information security. Unmonitored agents and rapidly developed internal apps can create new pathways into sensitive datasets and complicate incident response if an organisation cannot determine which automated process accessed or changed data.
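The traceability demand described above — knowing which automated process accessed or changed which data, and under whose approval — can be sketched as a simple append-only audit trail. This is a minimal illustrative sketch, not any vendor's product; the field names and the approval convention are assumptions.

```python
import time
import uuid

class AgentAuditLog:
    """Append-only audit trail recording which AI agent touched which dataset."""

    def __init__(self):
        self.entries = []

    def record(self, agent_id, action, dataset, approved_by=None):
        entry = {
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "agent_id": agent_id,
            "action": action,            # e.g. "read", "update"
            "dataset": dataset,
            "approved_by": approved_by,  # None flags an unapproved action
        }
        self.entries.append(entry)
        return entry

    def unapproved(self):
        """Surface actions with no recorded approval -- the 'shadow AI' case."""
        return [e for e in self.entries if e["approved_by"] is None]

    def who_touched(self, dataset):
        """Answer the incident-response question: which automated
        process accessed or changed this dataset?"""
        return [e["agent_id"] for e in self.entries if e["dataset"] == dataset]

log = AgentAuditLog()
log.record("invoice-bot", "read", "customer_pii", approved_by="risk-team")
log.record("ad-hoc-workflow", "update", "customer_pii")

print(log.who_touched("customer_pii"))
print(len(log.unapproved()))
```

Even a log this simple makes the two governance questions in the article answerable: which agents reached a sensitive dataset, and which of those actions were never approved.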


You’ve Generated Your MVP Using AI. What Does That Mean for Your Software Architecture?

While the AI generates an MVP, teams cannot control the architectural decisions the AI makes. They may be able to query the AI about some of those decisions, but many will remain opaque because the AI does not understand why the code it learned from did what it did. ... From the perspective of the development team, AI-generated code is largely a black box; even if it could be understood, no one has time to do so. Software development teams are under intense time pressure. They turn to AI to partially relieve this pressure, but in doing so they also raise their business sponsors' expectations of productivity. ... As a result, the nature of architecting work will shift from up-front design to empirical evaluation of quality attribute requirements (QARs), i.e. acceptance testing of the minimum viable architecture (MVA). As part of this shift, the development team will help the business sponsors figure out how to test and evaluate the MVP. In response, development teams need to get much better at empirically testing the architecture of the system. ... The team needs to know what trade-offs it may have to make, and it needs to articulate those in its prompts to the AI. The AI then works as a very clever search engine, finding possible solutions that might address the trade-offs. As noted above, these still need to be evaluated empirically, but the AI does save the team some time in generating candidate solutions.
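The empirical QAR evaluation argued for above can take the form of an architectural fitness function: an automated test that exercises a component (AI-generated or not) against a measurable quality requirement. The sketch below is illustrative; the component name and the 200 ms latency budget are assumptions, not taken from the article.

```python
import time

# Stand-in for an AI-generated component under test; the name and the
# simulated backend delay are illustrative assumptions.
_cache = {}

def lookup_order(order_id):
    if order_id not in _cache:
        time.sleep(0.01)  # stand-in for a slow backend call
        _cache[order_id] = {"id": order_id, "status": "shipped"}
    return _cache[order_id]

def check_latency_qar(fn, arg, budget_s=0.2, runs=20):
    """Empirically evaluate a latency QAR: every call must finish within
    the budget. Returns (passed, worst_observed_seconds)."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn(arg)
        worst = max(worst, time.perf_counter() - start)
    return worst <= budget_s, worst

passed, worst = check_latency_qar(lookup_order, "A-42")
print(f"latency QAR {'met' if passed else 'violated'} (worst {worst * 1000:.1f} ms)")
```

Run in CI, such checks turn an opaque architectural decision into a pass/fail signal the team can act on without having to read all of the generated code.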


Successful Leaders Often Lack Self-Awareness

As a leader, how do you respond in emotionally charged situations? It's under pressure that emotions can quickly escalate and unexamined behavioral patterns emerge—for all of us. In my work with senior executives, I have seen time and again how these unconscious “go-to” reactions surface when stakes are high. This is why self-awareness is not a one-time achievement but a lifelong practice—and for many leaders, it remains their greatest blind spot. Why? ... Turning inward to develop self-awareness naturally places you in uncomfortable territory. It challenges long-standing assumptions and exposes blind spots. One client came to me because a colleague described her as harsh. She genuinely did not see herself that way. Another sought my help after his CEO told him that communication between them was a struggle. Through our work together, we uncovered how defensively he responded to feedback, often without realizing it. ... As leaders rise to the top, the accolades that propel them forward are rooted in talent, strategic decision-making and measurable outcomes. However, once at the highest levels, leadership expands beyond execution. The role now demands mastery of relationships—within the organization and beyond, with clients, partners and customers. At this level, self-awareness is no longer optional; it becomes essential.


How Should Financial Institutions Prepare for Quantum Risk?

“Post-quantum cryptography is about proactively developing and building capabilities to secure critical information and systems from being compromised through the use of quantum computers,” said Rob Joyce, then director of cybersecurity for the National Security Agency, in an August 2023 statement. In August 2024, NIST published three post-quantum cryptographic standards — ML-KEM, ML-DSA and SLH-DSA — designed to withstand quantum attacks. These standards are intended to secure data across systems such as digital banking platforms, payment processing environments, email and e-commerce. NIST has encouraged organizations to begin implementation as soon as possible. ... A critical first step is conducting an assessment of which systems and data assets are most at risk. ISACA, the IT governance association, recommends building a comprehensive inventory of systems vulnerable to quantum attacks and classifying data based on sensitivity, regulatory requirements and business impact. For financial institutions, this assessment should prioritize customer PII, transaction data, long-term financial records and proprietary business information. Understanding where the greatest financial, reputational and regulatory exposure exists enables IT leaders to focus mitigation efforts where they matter most. Institutions should also conduct executive briefings, staff training and tabletop exercises to build awareness.
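The inventory-and-classify step can be made concrete with a simple scoring pass over the system inventory. This is a minimal sketch under stated assumptions: the systems, weights, and fields are illustrative, not ISACA's or NIST's methodology.

```python
# Hypothetical inventory; the scoring weights and field names are
# illustrative assumptions.
SENSITIVITY = {"public": 0, "internal": 1, "pii": 3, "financial": 3}

systems = [
    {"name": "digital-banking", "data": "pii", "regulated": True,
     "uses_rsa_or_ecc": True, "retention_years": 10},
    {"name": "marketing-site", "data": "public", "regulated": False,
     "uses_rsa_or_ecc": True, "retention_years": 1},
    {"name": "payments", "data": "financial", "regulated": True,
     "uses_rsa_or_ecc": True, "retention_years": 7},
]

def quantum_risk_score(s):
    """Rank migration priority: sensitive, regulated, long-lived data
    protected by quantum-vulnerable public-key crypto scores highest
    (the 'harvest now, decrypt later' exposure)."""
    score = SENSITIVITY[s["data"]]
    score += 2 if s["regulated"] else 0
    score += 3 if s["uses_rsa_or_ecc"] else 0
    score += min(s["retention_years"], 10) // 5  # long retention raises exposure
    return score

ranked = sorted(systems, key=quantum_risk_score, reverse=True)
for s in ranked:
    print(s["name"], quantum_risk_score(s))
```

The point of the retention term is the "harvest now, decrypt later" threat: ciphertext captured today can be decrypted once quantum attacks mature, so long-lived data deserves earlier migration to the new NIST algorithms.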


The cure for the AI hype hangover

The way AI dominates the discussions at conferences stands in contrast to its slower progress in the real world. New capabilities in generative AI and machine learning show promise, but moving from pilot to impactful implementation remains challenging. Many experts, including those cited in this CIO.com article, describe this as an “AI hype hangover,” in which implementation challenges, cost overruns, and underwhelming pilot results quickly dim the glow of AI’s potential. Similar cycles occurred with cloud and digital transformation, but this time the pace and pressure are even more intense. ... Too many leaders expect AI to be a generalized solution, but AI implementations are highly context-dependent. The problems you can solve with AI (and whether those solutions justify the investment) vary dramatically from enterprise to enterprise. This leads to a proliferation of small, underwhelming pilot projects, few of which are scaled broadly enough to demonstrate tangible business value. In short, for every triumphant AI story, numerous enterprises are still waiting for any tangible payoff. For some companies, it won’t happen anytime soon—or at all. ... Beyond data, there is the challenge of computational infrastructure: servers, security, compliance, and hiring or training new talent. These are not luxuries but prerequisites for any scalable, reliable AI implementation. In times of economic uncertainty, most enterprises are unable or unwilling to allocate the funds for a complete transformation.


4th-Party Risk: How Commercial Software Puts You At Risk

Unlike with third-party providers, however, there are no contractual relationships between businesses and their fourth-party vendors. That means companies have little to no visibility into those vendors' operations, and the resulting blind spots are fueling an even greater need to shift from trust-based to evidence-based approaches. That lack of visibility has severe consequences for enterprises and other end-user organizations. ... Illuminating fourth-party blind spots begins with mapping critical dependencies through direct vendors. As you go about this process, don't settle for static lists. Software supply chains are the most common attack vector, and every piece of software you receive contains evidence of its supply chain. This includes embedded libraries, development artifacts, and behavioral patterns. ... Businesses must also implement broader frameworks that go beyond traditional options such as NIST CSF or ISO 27001, which provide a foundation but ultimately fall short because they assume a degree of control that businesses lack in their fourth-party relationships. This stems from the fact that no contractual relationships exist that far downstream, and without contractual obligations, a business cannot conduct risk assessments, demand compliance documentation, or launch an audit as it might with a third-party vendor. ... Also consider SLSA (Supply-chain Levels for Software Artifacts), which provides measurable security controls to prevent tampering and ensure integrity. For companies operating in regulated industries, consider aligning with emerging requirements.
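Mapping dependencies "through direct vendors" is, at bottom, a graph traversal: start from the components a contracted vendor ships and walk their upstream dependencies. The sketch below assumes dependency data already extracted from SBOMs; all names are hypothetical.

```python
# Hypothetical dependency data derived from SBOMs; names are illustrative.
direct_deps = {
    "vendor-crm": ["auth-lib", "pdf-engine"],
}
# What each component itself depends on (the fourth-party layer and below).
upstream = {
    "auth-lib": ["crypto-core"],
    "pdf-engine": ["image-codec", "crypto-core"],
    "crypto-core": [],
    "image-codec": ["legacy-parser"],
    "legacy-parser": [],
}

def fourth_party_surface(vendor):
    """Walk past the contracted third party to every transitive
    component -- the dependencies you inherit without a contract."""
    third = set(direct_deps[vendor])
    seen, stack = set(), list(third)
    while stack:
        dep = stack.pop()
        if dep in seen:
            continue
        seen.add(dep)
        stack.extend(upstream.get(dep, []))
    return sorted(seen - third)  # fourth party and beyond

print(fourth_party_surface("vendor-crm"))
```

Note how `crypto-core` appears behind two different direct dependencies: a single fourth-party component can concentrate risk across much of the portfolio while never showing up in any contract.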


Geopatriation and sovereign cloud: how data returns to the source

The key to understanding a sovereign cloud, adds Google Cloud Spain’s national technology director Héctor Sánchez Montenegro, is that it’s not a one-size-fits-all concept. “Depending on the location, sector, or regulatory context, sovereignty has a different meaning for each customer,” he says. Google already offers sovereign clouds, whose guarantee of sovereignty isn’t based on a single product, but on a strategy that separates the technology from the operations. “We understand that sovereignty isn’t binary, but rather a spectrum of needs we guarantee through three levels of isolation and control,” he adds. ... One of the certainties of this sovereign cloud boom is it’s closely connected to the context in which organizations, companies, and other cloud end users operate. While digital sovereignty was less prevalent at the beginning of the century, it’s now become ubiquitous, especially as political decisions in various countries have solidified technology as a key geostrategic asset. “Data sovereignty is a fundamental part of digital sovereignty, to the point that in practice, it’s becoming a requirement for employment contracts,” says María Loza Correa ... With the technological landscape becoming more uncertain and complex, the goal is to know and mitigate risks where possible, and create additional options. “We’re at a crucial moment,” Loza Correa points out. “Data is a key business asset that must be protected.”


Managing AI Risk in a Non-Deterministic World: A CTO’s Perspective

Drawing parallels to the early days of cloud computing, Chawla notes that while AI platforms will eventually rationalize around a smaller set of leaders, organizations cannot afford to wait for that clarity. “The smartest investments right now are fearlessly establishing good data infrastructure, sound fundamentals, and flexible architectures,” she explains. In a world where foundational models are broadly accessible, Chawla argues that differentiation shifts elsewhere. ... Beyond tooling, Chawla emphasizes operating principles that help organizations break silos. “Improve the quality at the source,” she says. “Bring DevOps principles into DataOps. Clean it up front, keep data where it is, and provide access where it needs to be.” ... Bias, hallucinations, and unintended propagation of sensitive data are no longer theoretical risks. Addressing them requires more than traditional security controls. “It’s layering additional controls,” Chawla says, “especially as we look at agentic AI and agentic ops.” ... Auditing and traceability are equally critical, especially as models are fine-tuned with proprietary data. “You don’t want to introduce new bias or model drift,” she explains. “Testing for bias is super important.” While regulatory environments differ across regions, Chawla stresses that existing requirements like GDPR, data sovereignty, PCI, and HIPAA still apply. AI does not replace those obligations; it intensifies them.
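Chawla's point that "testing for bias is super important" can be illustrated with one of the simplest fairness checks: comparing a model's approval rates across groups. This is a minimal sketch; the decision data, group labels, and any acceptable threshold are assumptions, not regulatory values.

```python
# Minimal bias check on recorded model decisions; data is illustrative.
def approval_rate(decisions, group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

def demographic_parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates between two groups; a large
    gap is a signal to investigate the model or its training data."""
    return abs(approval_rate(decisions, group_a) - approval_rate(decisions, group_b))

decisions = (
    [{"group": "A", "approved": True}] * 8 + [{"group": "A", "approved": False}] * 2 +
    [{"group": "B", "approved": True}] * 5 + [{"group": "B", "approved": False}] * 5
)

gap = demographic_parity_gap(decisions, "A", "B")
print(f"parity gap: {gap:.2f}")
```

Run after every fine-tuning cycle, a check like this gives a concrete tripwire for the "new bias or model drift" the article warns about, alongside the audit and traceability controls.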


CVEs are set to top 50,000 this year, marking a record high – here’s how CISOs and security teams can prepare for a looming onslaught

"Much like a city planner considering population growth before commissioning new infrastructure, security teams benefit from understanding the likely volume and shape of vulnerabilities they will need to process," Leverett added. "The difference between preparing for 30,000 vulnerabilities and 100,000 is not merely operational, it’s strategic." While the figures may be jarring for business leaders, Kevin Knight, CEO of Talion, said it’s not quite a worst-case scenario. Indeed, it’s the impact of the vulnerabilities within their specific environments that business leaders and CISOs should be focusing on. ... Naturally, security teams could face higher workloads and will be contending with a more perilous threat landscape moving forward. Adding insult to injury, Knight noted that security teams are often brought in late during the procurement process, sometimes after contracts have been signed. In some cases, applications are also deployed entirely without the CISO’s knowledge, creating blind spots and increasing the risk that critical vulnerabilities are being missed. Meanwhile, poor third-party risk management means organizations can unknowingly inherit their suppliers’ vulnerabilities, effectively expanding their attack surface and putting their sensitive data at risk of being breached. "As CVE disclosures continue to rise, businesses must ensure the CISO is involved from the outset of technology decisions," he said.
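Focusing on "the impact of the vulnerabilities within their specific environments" rather than raw CVE volume amounts to a triage filter: intersect the disclosure feed with what is actually deployed, then rank by severity. A minimal sketch, with hypothetical CVE entries and inventory:

```python
# Hypothetical feed entries and asset inventory; fields are illustrative.
cves = [
    {"id": "CVE-2026-0001", "product": "openssl", "cvss": 9.8},
    {"id": "CVE-2026-0002", "product": "wordpress", "cvss": 8.1},
    {"id": "CVE-2026-0003", "product": "nginx", "cvss": 5.3},
]
deployed = {"openssl", "nginx"}  # what actually runs in this environment

def triage(cves, deployed, min_cvss=7.0):
    """Keep only vulnerabilities that are both present in the environment
    and severe enough to act on, highest severity first."""
    relevant = [c for c in cves if c["product"] in deployed and c["cvss"] >= min_cvss]
    return sorted(relevant, key=lambda c: c["cvss"], reverse=True)

for c in triage(cves, deployed):
    print(c["id"], c["cvss"])
```

Of three disclosures, only one survives the filter: one product is not deployed at all, and another falls below the severity threshold. At 50,000+ CVEs a year, that reduction is what keeps the workload tractable — though it only works if the asset inventory is complete, which is exactly what shadow deployments undermine.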


Data Privacy in the Age of AI

The first challenge stems from the fact that AI systems run on large volumes of customer data. This “naturally increases the risk of data being used in ways that go beyond what customers originally expected, or what regulations allow,” says Chiara Gelmini, financial services industry solutions director at Pegasystems. This is made trickier by the fact that some AI models can be “black boxes to a certain degree,” she says. “So it’s not always clear, internally or to customers, how data is used or how decisions are actually made," she tells SC Media UK. ... AI is “fully inside” the existing data‑protection regime: the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, Gelmini explains. Under these current laws, if an AI system uses personal data, it must meet the same standards of lawfulness, transparency, data minimisation, accuracy, security and accountability as any other processing, she says. Meanwhile, organisations are expected to prove they have thought the area through, typically by carrying out a Data Protection Impact Assessment (DPIA) before deploying high‑risk AI. ... The growing use of AI can pose a risk, but only if it gets out of hand. As AI becomes easier to adopt and more widespread, the practical way to stay ahead of these risks is “strong AI governance,” says Gelmini. “Firms should build privacy in from the start, mask private data, lock down security, make models explainable, test for bias, and keep a close eye on how systems behave over time."
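Of the practices Gelmini lists, "mask private data" is the most directly mechanisable: strip or replace identifiers before text ever reaches a model. The sketch below is illustrative only — the two regex patterns are assumptions and nowhere near a complete PII detector; production systems use dedicated tooling.

```python
import re

# Minimal masking sketch; the patterns are illustrative assumptions.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b"),
}

def mask_pii(text):
    """Replace matched identifiers with labelled placeholders before the
    text reaches a model, supporting data minimisation by default."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

sample = "Contact jane.doe@example.com or call 01632 960123."
print(mask_pii(sample))
```

Masking at the ingestion boundary also helps with the DPIA obligation: it gives the organisation a documented, testable control showing how personal data is minimised before any AI processing occurs.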
