
Daily Tech Digest - March 05, 2026


Quote for the day:

"To get a feel for the true essence of leadership, assume everyone who works for you is a volunteer." -- Kouzes and Posner



CISOs Are Now AI Guardians of the Enterprise

CISOs are managing risk, talent and digital resilience that underpins critical business outcomes - a reality that demands new approaches to leadership and execution. Security leaders are quantifying and communicating ROI to executive leadership, developing the next generation of cybersecurity talent, and responsibly deploying emerging technologies - including generative and agentic AI ... While CISOs approach AI with cautious optimism, 86% fear agentic AI will increase the sophistication of social engineering attacks and 82% worry it will increase the deployment speed and complexity of persistence mechanisms. "This is happening primarily because AI accelerates existing weaknesses in how organizations understand and control their data. The solution to both is not more tools, but [to implement] a strong and well-understood data governance model across the organization," said Kim Larsen, group CISO at Keepit. ... Despite the rise of AI, CISOs know that human intelligence and judgement supersede even the most intelligent tools, because of their ability to understand context. Their primary strategies include upskilling current workforces, hiring new full-time employees and engaging contractors, especially for nuanced tasks like threat hunting. "AI risk management, cloud security architecture, automation skills and the ability to secure AI-driven systems will be far more valuable in senior cybersecurity hires in 2026 than they were three years ago," said Latesh Nair.


The right way to architect modern web applications

A single modern SaaS platform often contains wildly different workloads. Public-facing landing pages and documentation demand fast first contentful paint, predictable SEO behavior, and aggressive caching. Authenticated dashboards, on the other hand, may involve real-time data, complex client-side interactions, and long-lived state where a server round trip for every UI change would be unacceptable. Trying to force a single rendering strategy across all of that introduces what many teams eventually recognize as architectural friction. ... Modern server-rendered applications behave very differently. The initial HTML is often just a starting point. It is “hydrated,” enhanced, and kept alive by client-side logic that takes over after the first render. The server no longer owns the full interaction loop, but it hasn’t disappeared either. ... Data volatility matters. Content that changes once a week behaves very differently from real-time, personalized data streams. Performance budgets matter too. In an e-commerce flow, a 100-millisecond delay can translate directly into lost revenue. In an internal admin tool, the same delay may be irrelevant. Operational reality plays a role as well. Some teams can comfortably run and observe a fleet of SSR servers. Others are better served by static-first or serverless approaches simply because that’s what their headcount and expertise can support. ... When something breaks, the hardest part is often figuring out where it broke. This is where staged architectures show a real advantage. 


Safeguarding biometric data through anonymization

Biometric anonymization refers to a range of approaches that remove Personally Identifiable Information (PII) from biometric data so that an individual can no longer be identified from the data alone. If, after anonymization, the retained data or template can still perform its required function, then we have successfully removed the risk of the identifiers being compromised. An anonymized biometric template in the wrong hands then has no meaningful value, as it can’t be used to identify the individual from whom it originated. As a result, there is great interest in anonymization approaches that can meet the needs of different business applications. ... While biometrics deliver significant value across a wide range of use cases, safeguarding data privacy and meeting regulatory obligations remain top priorities for most organizations. Biometric anonymization can help reduce risk by limiting the exposure of sensitive personal data. Taken together, anonymization approaches address different dimensions of risk – from inference and reporting exposure to vulnerabilities at the template level. They are not one-size-fits-all solutions. Organizations must evaluate which method aligns with their functional requirements, risk tolerance, and compliance obligations, while ensuring that only the minimum necessary personal data is retained for the intended purpose. Anonymization is no longer a peripheral consideration. 


Security leaders must regain control of vendor risk, says Vanta’s risk and compliance director

The rise of AI technologies has made vendor networks increasingly hard to manage. Shadow supply chains (untracked vendor networks), fast-moving subcontracting, model updates, data-sharing and embedded tooling all compound the complexities. Particularly for large enterprises with a network of tens of thousands of suppliers or more, traditional vendor management relying on legacy infrastructure and manual operations is no longer adequate. This is where the Cyber Security and Resilience Bill comes in, forcing a shift toward continuous monitoring that matches the speed of AI threats. ... By implementing evidence-led reporting templates, automated control validation, and continuous monitoring of supplier security posture, businesses can provide the board with real-time assurance, not point-in-time attestations. This approach demonstrates that systemic supplier risk is actively managed without diverting disproportionate time away from frontline threat detection and response. At an operational level, leaders shouldn’t wait for the bill to be finalised to find out who their ‘critical suppliers’ are. ... Upcoming changes to the bill will likely encourage tighter contractual obligations. Businesses should get ahead of this mandate and implement measures such as incident notification service-level agreements, rights-to-audit and evidence provisions, continuous monitoring, and Software Bills of Materials (SBOMs).


Inspiration And Aspiration: Why Feel-Good Leadership Rarely Changes Outcomes

Inspiration is fancy. It makes ideas feel noble, futures feel possible and leadership feel virtuous—all without demanding immediate action or sacrifice. We feel moved, aligned and temporarily elevated. It’s a dream we see others have achieved through their actions. Aspiration is different. It is inconvenient. It’s our own dream, our desire to see ourselves in a certain spot or a way in the future. It requires disproportionate effort, new skills and a willingness to confront the uncomfortable gap between who we are today and who we say we want to become. ... That gap between intent and impact was uncomfortable. I told myself "I can't" and then took a step back, which was the easiest thing to do. What I realized is this: Aspiration without action becomes self-deception. Inspiration without action becomes mere admiration. And leadership that relies on either one eventually stagnates. Real change happens only when inspiration and aspiration move together, dance together—not sequentially, not occasionally, but in constant unison. ... Belief does not close gaps; capability and capacity do. Until the distance between intention and reality is acknowledged, effort will always be miscalculated. This gap should evoke and cement commitment, rather than creating drag. One needs to be very careful at this stage, as most people stop here. We may get inspired by mountaineers climbing Everest, but when we do a mental assessment about ourselves, we assume we are incapable of the task of bridging the gap, and we take a step back.


Most Organizations Plan Strategically. Few Manage It That Way

The report segments respondents into two categories: “Dynamic Planners,” characterized by frequent review cycles, cross-functional integration, high portfolio visibility, and active use of scenario planning; and “Plodders,” defined by siloed operations, infrequent reassessment, and limited real-time visibility into execution data. The performance difference between them is sharp enough to be operationally relevant. Eighty-one percent of Planners’ projects deliver measurable ROI or strategic value. Among Plodders, that figure is 45%. That’s a 36-point spread. That spread isn’t about financial metrics; it’s about whether projects are doing what they were supposed to do. The survey also found that 30% of projects are not delivering meaningful ROI or strategic value. That leaves nearly one in three funded initiatives operating at levels ranging from marginal to counterproductive. ... Over a third of projects across the survey population are stopped early due to misalignment or insufficient ROI. The report treats this not as a problem to fix but as a sign of mature portfolio management. Chynoweth frames it in capital terms: “Cancellation is not failure. It’s disciplined capital allocation.” Most enterprises reward launch momentum, delivery against plan, and continuation of funded initiatives. Budget cycles create sunk-cost inertia. Career incentives favor project sponsors who ship, not those who cancel. 


Malicious insider threats outpace negligence in Australia

John Taylor, Mimecast's Field Chief Technical Officer for APAC, said organisations are seeing more cases where insiders are used to bypass established security controls. "We're seeing a concerning acceleration in malicious insider threats across Australia. While negligence has traditionally been the primary insider concern, intentional betrayal is now growing at a faster rate. ..." The report described AI as a factor that can increase the speed and scale of attacks, citing more convincing social engineering messages and automated reconnaissance. It also raised the prospect of AI being used to help recruit insiders. Taylor said older assumptions about a clear boundary between internal and external users no longer match how organisations operate, particularly with distributed workforces and widespread cloud adoption. ... Governance and compliance over communications data emerged as another concern. Mimecast found 91% of Australian organisations face challenges maintaining governance and compliance across communications data, and 53% lack confidence in quickly locating data to meet regulatory or legal requirements. These issues can slow incident response by delaying investigations and limiting the ability to reconstruct timelines across messaging platforms, email, and file stores. They can also increase risk during regulatory inquiries when organisations must produce relevant records quickly. Taylor said visibility is central to improving governance, culture, and response.


AI fatigue is real and it’s time for leaders to close the organizational gap

AI has been pitched as the next great accelerant of productivity. But inside many enterprises, teams are still recovering from years’ worth of transformation programs—cloud migrations, ERP upgrades, data modernization. Adding AI to an already overloaded change agenda can feel less like innovation and more like yet another disruption to absorb. The result is a predictable backlash. Tools in the industry are dismissed as “just another license”. Expectations are sky-high; lived experience is often underwhelming. And when the novelty wears off, employees revert to old behavior fast. ... A pervasive misconception is that adopting AI is mostly about selecting and deploying the right technology. But tooling alone doesn’t redesign workflows. It doesn’t train employees. It doesn’t embed new decision-making patterns. Some of the highest spending organizations are seeing the least value from AI precisely because investment has been concentrated at the technology layer rather than the organizational one. Without true operational change, AI tools risk becoming surface level enhancements rather than business accelerators. ... AI is not a spectator sport. Employees must understand how to use it, when to trust it, and how it adds value to their role. Organizations that invest early in skills from prompting to automation design will see dramatically higher adoption rates. The companies scaling fastest are those that build internal capability, not dependency on a small number of specialists.


Measuring What Matters in Large Language Model Performance

The study is timely, as LLM innovation increasingly targets skills and traits that are difficult to benchmark. “There’s been a shift towards testing AI systems for more complex capabilities like reasoning, helpfulness, and safety, which are very hard to measure,” said Rocher. “We wanted to look at whether evaluations are doing a good job capturing these sorts of skills.” Historically, AI innovators focused on equipping programs with easy-to-measure skills, like the ability to play chess and other strategy games. Today’s general-purpose LLMs, including popular models like ChatGPT, feature more flexible, open-ended strengths and traits. These attributes are notoriously difficult to operationalize, or to define in a way that’s precise enough to work in AI program measurement but broad enough to encompass the many different ways that the attribute might show up in the real world. Reasoning is one such skill. While most people are able to tell what counts as good or bad reasoning on a case-by-case basis, it’s not easy to describe reasoning in general terms. ... Towards this end, “Measuring what Matters” includes a set of guidelines to promote precision, thoroughness, rigor, and transparency in benchmark development. The first two recommendations, “define the phenomenon” and “measure the phenomenon and only the phenomenon,” encourage benchmark authors to be direct and specific as they define their target phenomena. 


Hallucination is not an option when AI meets the real world

For Boeckem, the most consequential AI applications are not advisory. They are autonomous. “In industrial environments, AI doesn’t just recommend,” he says. “It acts.” That shift, from insight to action, raises the stakes dramatically. Autonomous systems operate in safety-critical environments where failure can result in physical damage, financial loss, or human harm. “When generative AI went mainstream in 2022, it was exciting,” Boeckem says. “But professional environments need AI that is grounded in reality. These systems must always know where they are, what obstacles exist, and what the consequences of an action might be.” ... Despite the growing popularity of digital twins, many enterprises struggle to make them operational. According to Boeckem, the problem is not ambition, but misunderstanding. “A digital twin must be fit for purpose,” he says. “And above all, it must be dimensionally accurate.” Accuracy is non-negotiable. A flood simulation requires a watertight model. Urban planning demands precise representations of sunlight, shadows, and surroundings. Aesthetic simulations require photorealistic textures and material properties. At the most complex end of the spectrum, Hexagon models human faces. “A human face is not static,” Boeckem explains. “It’s soft-body material. When you smile, when you’re angry, when you’re sad, it changes. If you want to do diagnosis or therapy, you have to account for that.” 

Daily Tech Digest - February 14, 2026


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



UK CIOs struggle to govern surge in business AI agents

The findings point to a growing governance challenge alongside the rapid spread of agent-based systems across the enterprise. AI agents, which can take actions or make decisions within software environments, have moved quickly from pilots into day-to-day operations. That shift has increased demands for monitoring, audit trails and accountability across IT and risk functions. UK CIOs also reported growing concern about the spread of internally built tools. ... The results suggest "shadow AI" risks are becoming a mainstream issue for large organisations. As AI development tools get easier to use, more staff outside IT can build automated workflows, chatbots and agent-like applications. This trend has intensified questions about data access, model behaviour, and whether organisations can trace decisions back to specific inputs and approvals. ... The findings also suggest governance gaps are already affecting operations. Some 84% of UK CIOs said traceability or explainability shortcomings have delayed or prevented AI projects from reaching production, highlighting friction between the push to deploy AI and the work needed to demonstrate effective controls. For CIOs, the issue also intersects with enterprise risk management and information security. Unmonitored agents and rapidly developed internal apps can create new pathways into sensitive datasets and complicate incident response if an organisation cannot determine which automated process accessed or changed data.
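The traceability gap described above comes down to being able to answer "which automated process accessed or changed this data, with what inputs, and when." A minimal sketch of that idea is an append-only audit record wrapped around every agent action; all names here (the agent id, the refund policy) are hypothetical illustrations, not from the article:

```python
import datetime
import functools

AUDIT_LOG: list[dict] = []  # in production this would be an append-only, tamper-evident store

def audited(agent_id: str):
    """Wrap an agent action so every call is recorded with its inputs,
    output, and timestamp - the traceability CIOs report lacking."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "agent": agent_id,
                "action": fn.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": result,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return result
        return wrapper
    return decorator

@audited("refund-agent-01")
def approve_refund(order_id: str, amount: float) -> bool:
    """Hypothetical agent action with a placeholder policy."""
    return amount < 100

approve_refund("A-1001", 42.0)
print(AUDIT_LOG[-1]["agent"], AUDIT_LOG[-1]["action"], AUDIT_LOG[-1]["output"])
```

The same pattern extends to shadow-built tools: if every agent-like workflow must pass through such a wrapper to reach sensitive data, incident responders can reconstruct who did what even when the tool itself was built outside IT.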


You’ve Generated Your MVP Using AI. What Does That Mean for Your Software Architecture?

While the AI generates an MVP, teams can’t control the architectural decisions that the AI made. They might be able to query the AI on some of the decisions, but many decisions will remain opaque because the AI does not understand why the code that it learned from did what it did. ... From the perspective of the development team, AI-generated code is largely a black-box; even if it could be understood, no one has time to do so. Software development teams are under intense time pressure. They turn to AI to partially relieve this pressure, but in doing so they also increase the expectations of their business sponsors regarding productivity. ... As a result, the nature of the work of architecting will shift from up-front design work to empirical evaluation of QARs, i.e. acceptance testing of the MVA. As part of this shift, the development team will help the business sponsors figure out how to test/evaluate the MVP. In response, development teams need to get a lot better at empirically testing the architecture of the system. ... The team needs to know what trade-offs it may need to make, and they need to articulate those in the prompts to the AI. The AI then works as a very clever search engine to find possible solutions that might address the trade-offs. As noted above, these still need to be evaluated empirically, but it does save the team some time in coming up with possible solutions.
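The shift toward "empirical evaluation of QARs" treats the AI-generated system as a black box and tests quality attributes from the outside. A minimal sketch of one such fitness check, for a latency requirement, is below; the endpoint, the 50 ms budget, and the run count are all hypothetical stand-ins:

```python
import time

def handle_request() -> str:
    """Stand-in for an AI-generated endpoint whose internals are opaque."""
    time.sleep(0.005)  # simulate ~5 ms of work
    return "ok"

def worst_latency(fn, runs: int = 10) -> float:
    """Empirically probe a quality-attribute requirement: exercise the
    black box repeatedly and record the slowest observed response."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        worst = max(worst, time.perf_counter() - start)
    return worst

BUDGET_S = 0.05  # hypothetical 50 ms latency budget taken from the QARs
observed = worst_latency(handle_request)
print(f"worst={observed * 1000:.1f}ms, within budget: {observed < BUDGET_S}")
```

The point is that the team asserts on observable behavior (latency, throughput, error rates) rather than on design decisions the AI made, which they could not inspect anyway.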


Successful Leaders Often Lack Self-Awareness

As a leader, how do you respond in emotionally charged situations? It's under pressure that emotions can quickly escalate and unexamined behavioral patterns emerge—for all of us. In my work with senior executives, I have seen time and again how these unconscious “go-to” reactions surface when stakes are high. This is why self-awareness is not a one-time achievement but a lifelong practice—and for many leaders, it remains their greatest blind spot. Why? ... Turning inward to develop self-awareness naturally places you in uncomfortable territory. It challenges long-standing assumptions and exposes blind spots. One client came to me because a colleague described her as harsh. She genuinely did not see herself that way. Another sought my help after his CEO told him he struggled to communicate with him. Through our work together, we uncovered how defensively he responded to feedback, often without realizing it. ... As leaders rise to the top, the accolades that propel them forward are rooted in talent, strategic decision-making and measurable outcomes. However, once at the highest levels, leadership expands beyond execution. The role now demands mastery of relationships—within the organization and beyond, with clients, partners and customers. At this level, self-awareness is no longer optional; it becomes essential.


How Should Financial Institutions Prepare for Quantum Risk?

“Post-quantum cryptography is about proactively developing and building capabilities to secure critical information and systems from being compromised through the use of quantum computers,” said Rob Joyce, then director of cybersecurity for the National Security Agency, in an August 2023 statement. In August 2024, NIST published three post-quantum cryptographic standards — ML-KEM, ML-DSA and SLH-DSA — designed to withstand quantum attacks. These standards are intended to secure data across systems such as digital banking platforms, payment processing environments, email and e-commerce. NIST has encouraged organizations to begin implementation as soon as possible. ... A critical first step is conducting an assessment of which systems and data assets are most at risk. The ISACA IT security organization recommends building a comprehensive inventory of systems vulnerable to quantum attacks and classifying data based on sensitivity, regulatory requirements and business impact. For financial institutions, this assessment should prioritize customer PII, transaction data, long-term financial records and proprietary business information. Understanding where the greatest financial, reputational and regulatory exposure exists enables IT leaders to focus mitigation efforts where they matter most. Institutions should also conduct executive briefings, staff training and tabletop exercises to build awareness. 
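The inventory-and-classify step above can be made concrete with a simple scoring pass: quantum-vulnerable algorithms protecting sensitive, long-lived data ("harvest now, decrypt later" exposure) migrate first. This is an illustrative sketch only; the asset names, fields, and scoring rule are assumptions, not from NIST or ISACA guidance:

```python
from dataclasses import dataclass

# Hypothetical inventory entry; field names are illustrative.
@dataclass
class CryptoAsset:
    name: str
    algorithm: str          # e.g. "RSA-2048" (vulnerable) or "ML-KEM-768" (post-quantum)
    data_sensitivity: int   # 1 (public) .. 5 (customer PII, long-term financial records)
    retention_years: int    # how long the protected data remains valuable

# Classical public-key algorithms breakable by a large quantum computer.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256"}

def migration_priority(asset: CryptoAsset) -> int:
    """Score 'harvest now, decrypt later' exposure: vulnerable algorithms
    protecting sensitive, long-lived data rank highest."""
    if asset.algorithm not in QUANTUM_VULNERABLE:
        return 0
    return asset.data_sensitivity * asset.retention_years

inventory = [
    CryptoAsset("payments-gateway", "RSA-2048", 5, 10),
    CryptoAsset("marketing-site-tls", "ECDSA-P256", 1, 1),
    CryptoAsset("backup-archive", "ML-KEM-768", 5, 25),
]

for asset in sorted(inventory, key=migration_priority, reverse=True):
    print(f"{asset.name}: priority {migration_priority(asset)}")
```

Even this toy ranking captures the ISACA recommendation's intent: the payments system with a ten-year data lifetime on RSA-2048 outranks a short-lived marketing certificate, while an asset already on ML-KEM needs no migration at all.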


The cure for the AI hype hangover

The way AI dominates the discussions at conferences is in contrast to its slower progress in the real world. New capabilities in generative AI and machine learning show promise, but moving from pilot to impactful implementation remains challenging. Many experts, including those cited in this CIO.com article, describe this as an “AI hype hangover,” in which implementation challenges, cost overruns, and underwhelming pilot results quickly dim the glow of AI’s potential. Similar cycles occurred with cloud and digital transformation, but this time the pace and pressure are even more intense. ... Too many leaders expect AI to be a generalized solution, but AI implementations are highly context-dependent. The problems you can solve with AI (and whether those solutions justify the investment) vary dramatically from enterprise to enterprise. This leads to a proliferation of small, underwhelming pilot projects, few of which are scaled broadly enough to demonstrate tangible business value. In short, for every triumphant AI story, numerous enterprises are still waiting for any tangible payoff. For some companies, it won’t happen anytime soon—or at all. ... Beyond data, there is the challenge of computational infrastructure: servers, security, compliance, and hiring or training new talent. These are not luxuries but prerequisites for any scalable, reliable AI implementation. In times of economic uncertainty, most enterprises are unable or unwilling to allocate the funds for a complete transformation.


4th-Party Risk: How Commercial Software Puts You At Risk

Unlike third-party providers, however, there are no contractual relationships between businesses and their fourth-party vendors. That means companies have little to no visibility into those vendors' operations, only blind spots that are fueling an even greater need to shift from trust-based to evidence-based approaches. That lack of visibility has severe consequences for enterprises and other end-user organizations. ... Illuminating 4th-party blind spots begins with mapping critical dependencies through direct vendors. As you go about this process, don't settle for static lists. Software supply chains are the most common attack vector, and every piece of software you receive contains evidence of its supply chain. This includes embedded libraries, development artifacts, and behavioral patterns. ... Businesses must also implement some broader frameworks that go beyond the traditional options, such as NIST CSF or ISO 27001, which provide a foundation but ultimately fall short because businesses lack control in their fourth-party relationships. This stems from the fact that no contractual relationships exist that far downstream, and without contractual obligations, a business cannot conduct risk assessments, demand compliance documentation, or launch an audit as it might with a third-party vendor. ... Also consider SLSA (Supply Chain Levels for Software Artifacts), which provides measurable security controls to prevent tampering and ensure integrity. For companies operating in regulated industries, consider aligning with emerging requirements.
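One evidence-based way to map dependencies through a direct vendor is to read the SBOM that ships with their software and flag every component whose supplier is not a contracted third party. The sketch below assumes a heavily simplified CycloneDX-shaped JSON document; the vendor names and fields are hypothetical, and a real SBOM carries far more structure:

```python
import json

# Minimal CycloneDX-style SBOM fragment (structure simplified for illustration).
sbom_json = """
{
  "components": [
    {"name": "vendor-app", "supplier": {"name": "DirectVendor Inc"}},
    {"name": "log4j-core", "version": "2.14.1", "supplier": {"name": "Apache"}},
    {"name": "openssl", "version": "1.1.1k", "supplier": {"name": "OpenSSL Project"}}
  ]
}
"""

KNOWN_THIRD_PARTIES = {"DirectVendor Inc"}  # vendors you hold contracts with

def fourth_party_components(sbom: dict) -> list[dict]:
    """Components whose supplier is not a contracted third party -
    i.e. the fourth-party dependencies you have no direct leverage over."""
    return [
        c for c in sbom.get("components", [])
        if c.get("supplier", {}).get("name") not in KNOWN_THIRD_PARTIES
    ]

sbom = json.loads(sbom_json)
for comp in fourth_party_components(sbom):
    print(comp["name"], comp.get("version", "?"))
```

Run continuously against each new release, this turns a static vendor list into a living map: the moment a direct vendor swaps in a new embedded library, it surfaces as a fourth-party component to assess.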


Geopatriation and sovereign cloud: how data returns to the source

The key to understanding a sovereign cloud, adds Google Cloud Spain’s national technology director Héctor Sánchez Montenegro, is that it’s not a one-size-fits-all concept. “Depending on the location, sector, or regulatory context, sovereignty has a different meaning for each customer,” he says. Google already offers sovereign clouds, whose guarantee of sovereignty isn’t based on a single product, but on a strategy that separates the technology from the operations. “We understand that sovereignty isn’t binary, but rather a spectrum of needs we guarantee through three levels of isolation and control,” he adds. ... One of the certainties of this sovereign cloud boom is it’s closely connected to the context in which organizations, companies, and other cloud end users operate. While digital sovereignty was less prevalent at the beginning of the century, it’s now become ubiquitous, especially as political decisions in various countries have solidified technology as a key geostrategic asset. “Data sovereignty is a fundamental part of digital sovereignty, to the point that in practice, it’s becoming a requirement for employment contracts,” says María Loza ... With the technological landscape becoming more uncertain and complex, the goal is to know and mitigate risks where possible, and create additional options. “We’re at a crucial moment,” Loza Correa points out. “Data is a key business asset that must be protected.”


Managing AI Risk in a Non-Deterministic World: A CTO’s Perspective

Drawing parallels to the early days of cloud computing, Chawla notes that while AI platforms will eventually rationalize around a smaller set of leaders, organizations cannot afford to wait for that clarity. “The smartest investments right now are fearlessly establishing good data infrastructure, sound fundamentals, and flexible architectures,” she explains. In a world where foundational models are broadly accessible, Chawla argues that differentiation shifts elsewhere. ... Beyond tooling, Chawla emphasizes operating principles that help organizations break silos. “Improve the quality at the source,” she says. “Bring DevOps principles into DataOps. Clean it up front, keep data where it is, and provide access where it needs to be.” ... Bias, hallucinations, and unintended propagation of sensitive data are no longer theoretical risks. Addressing them requires more than traditional security controls. “It’s layering additional controls,” Chawla says, “especially as we look at agentic AI and agentic ops.” ... Auditing and traceability are equally critical, especially as models are fine-tuned with proprietary data. “You don’t want to introduce new bias or model drift,” she explains. “Testing for bias is super important.” While regulatory environments differ across regions, Chawla stresses that existing requirements like GDPR, data sovereignty, PCI, and HIPAA still apply. AI does not replace those obligations; it intensifies them.


CVEs are set to top 50,000 this year, marking a record high – here’s how CISOs and security teams can prepare for a looming onslaught

"Much like a city planner considering population growth before commissioning new infrastructure, security teams benefit from understanding the likely volume and shape of vulnerabilities they will need to process," Leverett added. "The difference between preparing for 30,000 vulnerabilities and 100,000 is not merely operational, it’s strategic." While the figures may be jarring for business leaders, Kevin Knight, CEO of Talion, said it’s not quite a worst-case scenario. Indeed, it’s the impact of the vulnerabilities within their specific environments that business leaders and CISOs should be focusing on. ... Naturally, security teams could face higher workloads and will be contending with a more perilous threat landscape moving forward. Adding insult to injury, Knight noted that security teams are often brought in late during the procurement process - sometimes after contracts have been signed. In some cases, applications are also deployed without the CISO’s knowledge altogether, creating blind spots and increasing the risk that critical vulnerabilities are being missed. Meanwhile, poor third-party risk management means organizations can unknowingly inherit their suppliers’ vulnerabilities, effectively expanding their attack surface and putting their sensitive data at risk of being breached. "As CVE disclosures continue to rise, businesses must ensure the CISO is involved from the outset of technology decisions," he said. 
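Knight's point that impact within a specific environment matters more than raw CVE volume suggests a simple triage filter: discard what isn't deployed, then rank what remains by exploitation status and severity. The records, asset inventory, and two-key sort below are an illustrative sketch, not a real feed:

```python
# Hypothetical CVE records; in practice these come from an NVD or KEV feed.
cves = [
    {"id": "CVE-2026-0001", "product": "acme-vpn", "cvss": 9.8, "exploited": True},
    {"id": "CVE-2026-0002", "product": "legacy-crm", "cvss": 7.5, "exploited": False},
    {"id": "CVE-2026-0003", "product": "foo-db", "cvss": 9.9, "exploited": True},
]

DEPLOYED = {"acme-vpn", "legacy-crm"}  # asset inventory for this environment

def triage(cve_list: list[dict], deployed: set[str]) -> list[dict]:
    """Keep only CVEs affecting deployed products, then rank
    exploited-in-the-wild first and by CVSS within each group -
    impact in *your* environment, not raw disclosure volume."""
    relevant = [c for c in cve_list if c["product"] in deployed]
    return sorted(relevant, key=lambda c: (c["exploited"], c["cvss"]), reverse=True)

for c in triage(cves, DEPLOYED):
    print(c["id"], c["cvss"])
```

Note that the highest-scoring CVE in the feed drops out entirely because the affected product isn't in the environment - which is exactly the prioritization Knight describes. An accurate asset inventory is the prerequisite; without it, the filter silently creates the same blind spots as applications deployed without the CISO's knowledge.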


Data Privacy in the Age of AI

The first challenge stems from the fact that AI systems run on large volumes of customer data. This “naturally increases the risk of data being used in ways that go beyond what customers originally expected, or what regulations allow,” says Chiara Gelmini, financial services industry solutions director at Pegasystems. This is made trickier by the fact that some AI models can be “black boxes to a certain degree,” she says. “So it’s not always clear, internally or to customers, how data is used or how decisions are actually made,” she tells SC Media UK. ... AI is “fully inside” the existing data‑protection regime: the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, Gelmini explains. Under these current laws, if an AI system uses personal data, it must meet the same standards of lawfulness, transparency, data minimisation, accuracy, security and accountability as any other processing, she says. Meanwhile, organisations are expected to prove they have thought the area through, typically by carrying out a Data Protection Impact Assessment (DPIA) before deploying high‑risk AI. ... The growing use of AI can pose a risk, but only if it gets out of hand. As AI becomes easier to adopt and more widespread, the practical way to stay ahead of these risks is “strong AI governance,” says Gelmini. “Firms should build privacy in from the start, mask private data, lock down security, make models explainable, test for bias, and keep a close eye on how systems behave over time.”
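The "mask private data" step Gelmini recommends can be sketched as a substitution pass that replaces recognised identifiers with typed placeholders before text ever reaches an AI system, supporting data minimisation. The two regex patterns below are deliberately crude illustrations; production masking needs a vetted PII-detection library, not hand-rolled expressions:

```python
import re

# Illustrative patterns only - real PII detection is far harder than this.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{4,5}\s?\d{6}\b"),  # simplistic UK-style mobile number
}

def mask_pii(text: str) -> str:
    """Replace recognised identifiers with typed placeholders so the
    downstream model never sees the raw values."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

prompt = "Customer jane.doe@example.com called from 07911 123456 about her loan."
print(mask_pii(prompt))
```

Typed placeholders (rather than blanket redaction) preserve enough structure for the model to reason about the text, while the raw identifiers stay inside the organisation's boundary - one concrete way of building privacy in from the start.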