Daily Tech Digest - May 15, 2025


Quote for the day:

“Challenges are what make life interesting and overcoming them is what makes life meaningful.” -- Joshua J. Marine


How to use genAI for requirements gathering and agile user stories

The key to success is engaging end-users and stakeholders in developing the goals and requirements around features and user stories. ... GenAI should help agile teams incorporate more design thinking practices and increase feedback cycles. “GenAI tools are fundamentally shifting the role of product owners and business analysts by enabling them to prototype and iterate on requirements directly within their IDEs rapidly,” says Simon Margolis, Associate CTO at SADA. “This allows for more dynamic collaboration with stakeholders, as they can visualize and refine user stories and acceptance criteria in real time. Instead of being bogged down in documentation, they can focus on strategic alignment and faster delivery, with AI handling the technical translation.” ... “GenAI excels at aligning user stories and acceptance criteria with predefined specs and design guidelines, but the original spark of creativity still comes from humans,” says Ramprakash Ramamoorthy, director of AI research at ManageEngine. “Analysts and product owners should use genAI as a foundational tool rather than relying on it entirely, freeing themselves to explore new ideas and broaden their thinking. The real value lies in experts leveraging AI’s consistency to ground their work, freeing them to innovate and refine the subtleties that machines cannot grasp.”


5 Subtle Indicators Your Development Environment Is Under Siege

As security measures around production environments strengthen, which they have, attackers are shifting left—straight into the software development lifecycle (SDLC). These less-protected and complex environments have become prime targets, where gaps in security can expose sensitive data and derail operations if exploited. That’s why recognizing the warning signs of nefarious behavior is critical. But identification alone isn’t enough—security and development teams must work together to address these risks before attackers exploit them. ... Abnormal spikes in repository cloning activity may indicate potential data exfiltration from Software Configuration Management (SCM) tools. When an identity clones repositories at unexpected volumes or times outside normal usage patterns, it could signal an attempt to collect source code or sensitive project data for unauthorized use. ... While cloning is a normal part of development, a repository that is copied but shows no further activity may indicate an attempt to exfiltrate data rather than legitimate development work. Pull Request approvals from identities lacking repository activity history may indicate compromised accounts or an attempt to bypass code quality safeguards. When changes are approved by users without prior engagement in the repository, it could be a sign of malicious attempts to introduce harmful code or represent reviewers who may overlook critical security vulnerabilities.
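As a rough illustration of the first indicator, the sketch below flags identities whose clone volume over the last day far exceeds their own baseline, assuming the SCM audit log can be exported as (identity, repo, timestamp) events; the thresholds and the export format are illustrative assumptions, not recommendations.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_clone_spikes(events, baseline_days=30, spike_factor=5, min_clones=20):
    """Flag identities whose clone volume in the last 24 hours far exceeds their baseline.

    `events` is an iterable of (identity, repo, timestamp) tuples from a hypothetical
    SCM audit-log export. Returns (identity, recent_count, baseline_daily_avg) tuples
    for human review -- a triage signal, not an automated block.
    """
    now = datetime.utcnow()
    recent = defaultdict(int)    # clones in the last 24 hours, per identity
    baseline = defaultdict(int)  # clones in the preceding baseline window, per identity

    for identity, _repo, ts in events:
        age = now - ts
        if age <= timedelta(days=1):
            recent[identity] += 1
        elif age <= timedelta(days=1 + baseline_days):
            baseline[identity] += 1

    findings = []
    for identity, count in recent.items():
        daily_avg = baseline[identity] / baseline_days
        if count >= min_clones and count > spike_factor * max(daily_avg, 1):
            findings.append((identity, count, round(daily_avg, 2)))
    return findings
```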


Data, agents and governance: Why enterprise architecture needs a new playbook

The rapid evolution of AI and data-centric technologies is forcing organizations to rethink how they structure and govern their information assets. Enterprises are increasingly moving from domain-driven data architectures — where data is owned and managed by business domains — to AI/ML-centric data models that require large-scale, cross-domain integration. Questions arise about whether this transition is compatible with traditional EA practices. The answer: While there are tensions, the shift is not fundamentally at odds with EA but rather demands a significant transformation in how EA operates. ... Governance in an agentic architecture flips the script for EA by shifting focus to defining the domain authority of the agent to participate in an ecosystem. That encompasses the systems they can interact with, the commands they can execute, the other agents they can interact with, the cognitive models they rely on, and the goals that are set for them. Ensuring agents are good corporate citizens means enterprise architects must engage with business units to set the parameters for what an agent can and cannot do on behalf of the business. Further, the relationship and those parameters must be “tokenized” to authenticate the capacity to execute those actions.
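One way to make that domain authority concrete is to capture it as a declarative policy that enterprise architects and business units co-own, and that a token can later attest to. The sketch below is illustrative only; the field names and the runtime check are assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAuthority:
    """Illustrative policy for what an agent may do on behalf of the business.

    Mirrors the dimensions named above -- systems, commands, peer agents,
    cognitive models, and goals. All field names are hypothetical.
    """
    agent_id: str
    systems: list = field(default_factory=list)          # systems it may interact with
    commands: list = field(default_factory=list)         # commands it may execute
    peer_agents: list = field(default_factory=list)      # other agents it may call on
    cognitive_models: list = field(default_factory=list) # models it may rely on
    goals: list = field(default_factory=list)            # goals set for it by the business

    def may_execute(self, system: str, command: str) -> bool:
        # A token minted against this policy would attest to the same checks at runtime.
        return system in self.systems and command in self.commands

# Example: a claims-triage agent limited to read-style commands in two systems.
policy = AgentAuthority(
    agent_id="claims-triage-01",
    systems=["claims-db", "policy-crm"],
    commands=["read_claim", "summarize_claim"],
    goals=["reduce claims backlog"],
)
assert policy.may_execute("claims-db", "read_claim")
assert not policy.may_execute("claims-db", "approve_payment")
```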

California’s location data privacy bill aims to reshape digital consent

“We’re really trying to help regulate the use of your geolocation data,” says the bill’s author, Democratic Assemblymember Chris Ward, who represents California’s 78th district, which covers parts of San Diego and surrounding areas. “You should not be able to sell, rent, trade, or lease anybody’s location information to third parties, because nobody signed up for that.” Among types of personal information, location data is especially sensitive. It reveals where people live, work, worship, protest, and seek medical care. It can expose routines, relationships, and vulnerabilities. As stories continue to surface about apps selling location data to brokers, government workers, and even bounty hunters, the conversation has expanded. What was once a debate about privacy has increasingly become a concern over how the exposure of this data infringes upon fundamental civil liberties. “Geolocation is very revealing,” says Justin Brookman, the director of technology policy at Consumer Reports, which supported the legislation. “It tells a lot about you, and it also can be a public safety issue if it gets into the wrong person’s hands.” ... Equally troubling, Ward argues, is who benefits. The companies collecting and selling this data are driven by profit, not transparency. As scholar Shoshana Zuboff has argued, surveillance capitalism doesn’t thrive because users want personalized ads. 


Digital Transformation Expert Discusses Trends

From day one, I emphasise that digital transformation isn’t just about adopting new tools—it’s about aligning those tools with business objectives, improving internal processes, and responding to changing customer expectations. To bring this to life, I use a blended approach that combines theory with real-world practice. Students explore frameworks and models that explain how businesses adapt to technological change, and then apply these to real case studies from global companies, SMEs, and my own entrepreneurial experiences. These examples give them insight into how digital transformation plays out in areas like operations, marketing, and customer relationship management (CRM). Active learning is central to my teaching. I use group work, live problem-solving, digital tool demonstrations, and hands-on simulations to help students experience digital transformation in action. I also introduce them to established business platforms and emerging technologies, encouraging them to assess their value and strategic impact. Ultimately, I aim to create an environment where students don’t just learn about digital transformation—they think like digital leaders, able to question, analyse, and apply what they’ve learned in real organisational contexts.


Building cybersecurity culture in science-driven organizations

The perception of security as a barrier is a challenge faced by many organizations, especially in environments where innovation is prioritized. The solution lies in shifting the narrative: security teams are caregivers for the value created in the organization. Most scientists and executives already understand the consequences of a cyberattack—lost research, stolen intellectual property, and disrupted operations. We involve them in the process. When lab leaders feel that their input has shaped security protocols, they’re more likely to support and champion those initiatives. Co-creating solutions ensures that security controls are not only effective but also practical for the scientific workflow. In short, building trust, demonstrating empathy for their challenges, and proving the value of security through action are what ultimately win buy-in. ... Shadow IT is a reality in any organization, but it’s particularly prevalent in environments like ours, where creativity and experimentation often outpace formal approval processes. While it’s important to communicate the risks of shadow IT clearly, we also recognize that outright bans are rarely effective. Instead, we focus on enabling secure alternatives. In the broader organization, we use tools to detect and prevent shadow IT, combined with strict communication around approved solutions.


LastPass can now monitor employees' rogue reliance on shadow SaaS - including AI tools

With LastPass's browser extension for password management already well-positioned to observe -- and even restrict -- employee web usage, the security company has announced that it's diversifying into SaaS monitoring for small to midsize enterprises (SMEs). SaaS monitoring is part of a larger technology category known as SaaS Identity and Access Management, or SaaS IAM. As more employees are drawn to AI to improve productivity, the company is pitching an affordable solution to help SMEs contain the risks and costs associated with shadow SaaS: an umbrella term for rogue SaaS procurement that covers shadow IT and its latest variant -- shadow AI. ... LastPass sees the new capabilities aligning with an organization's business objectives in a variety of ways. "One could be compliance," MacLennan told ZDNET. "Another could be the organization's internal sense of risk and risk management. Another could be cost because we're surfacing apps by category, in which case you'll see the whole universe of duplicative apps in use." MacLennan also noted that the new offering makes it easy to reduce costs due to the over-provisioning of SaaS licenses. For example, an organization might be paying for 100 seats of a SaaS solution while the monitoring tool reveals that only 30 of those licenses are in active use.


Why ISO 42001 sets the standard for responsible AI governance

ISO 42001 is particularly relevant for organisations operating within layered supply chains, especially those building on cloud platforms. For these environments, where infrastructure, platform and software providers each play a role in delivering AI-powered services to end users, organisations must maintain a clear chain of responsibility and vendor due diligence. By defining roles across the shared responsibility model, ISO 42001 helps ensure that governance, compliance and risk management are consistent and transparent from the ground up. Doing so not only builds internal confidence but also enables partners and providers to demonstrate trustworthiness to customers across the value chain. As a result, trust management becomes a vital part of the picture by delivering an ongoing process of demonstrating transparency and control around the way organisations handle data, deploy technology, and meet regulatory expectations. Rather than treating compliance as a static goal, trust management introduces a more dynamic, ongoing approach to demonstrating how AI is governed across an organisation. By operationalising transparency, it becomes much easier to communicate security practices and explain decision-making processes to provide evidence of responsible development and deployment.


Beyond the office: Preparing for disasters in a remote work world

When disaster strikes, employees may be without electricity, internet, or cell service for days or weeks. They may have to evacuate their homes. They may be struggling with the loss of family members, friends, or neighbors. Just as organizations have disaster mitigation and recovery plans for main offices and data centers, they should be prepared to support remote employees in disaster situations they likely have never encountered before. Employers must counsel workers on what to do, provide additional resources, and above all, ensure that their mental health is attended to. ... Beyond cybersecurity risks, being forced to leave their home environment presents employees with another significant challenge: the potential loss of personal artifacts, from tax documents and family heirlooms to cherished photos. Lahiri refers to the process of safeguarding such items as “personal disaster recovery planning” and notes that this aspect of worker support is often overlooked. While companies have experience migrating servers from local offices to distributed teams, few have considered how to support employees on a personal level, he says. Lahiri urges IT teams to take a more empathetic approach and broaden their scope to include disaster recovery planning for employees’ home offices.


Beyond the Gang of Four: Practical Design Patterns for Modern AI Systems

Prompting might seem trivial at first. After all, you send free-form text to a model, so what could go wrong? However, how you phrase a prompt and what context you provide can drastically change your model's behavior, and there's no compiler to catch errors or a standard library of techniques. ... Few-Shot Prompting is one of the most straightforward yet powerful prompting approaches. Without examples, your model might generate inconsistent outputs, struggle with task ambiguity, or fail to meet your specific requirements. You can solve this problem by providing the model with a handful of examples (input-output pairs) in the prompt and then providing the actual input. You are essentially providing training data on the fly. This allows the model to generalize without re-training or fine-tuning. ... If you are a software developer trying to solve a complex algorithmic problem or a software architect trying to analyze complex system bottlenecks and vulnerabilities, you will probably brainstorm various ideas with your colleagues to understand their pros and cons, break down the problem into smaller tasks, and then solve it iteratively, rather than jumping to the solution right away. In Chain-of-Thought (CoT) prompting, you encourage the model to follow a very similar process and think aloud by breaking the problem down into a step-by-step process.
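A minimal sketch of the two patterns described above, written as plain prompt strings rather than against any particular SDK; the classification task, the review texts, and the commented-out client call are illustrative assumptions.

```python
# Few-shot prompting: show the model a handful of input-output pairs, then the real input.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: positive

Review: "It stopped charging after two weeks."
Sentiment: negative

Review: "Setup took five minutes and everything just worked."
Sentiment:"""

# Chain-of-thought prompting: ask the model to reason step by step before answering.
cot_prompt = (
    "A service receives 1,200 requests per minute and each request uses 40 ms of CPU time. "
    "Think through the problem step by step, then state how many CPU-seconds are consumed per minute."
)

# Either string would then be sent to whichever model client you use, e.g. (hypothetical):
# response = client.generate(prompt=few_shot_prompt)
```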

Daily Tech Digest - May 14, 2025


Quote for the day:

"Success is what happens after you have survived all of your mistakes." -- Anonymous


3 Stages of Building Self-Healing IT Systems With Multiagent AI

Multiagent AI systems can allow significant improvements to existing processes across the operations management lifecycle. From intelligent ticketing and triage to autonomous debugging and proactive infrastructure maintenance, these systems can pave the way for IT environments that are largely self-healing. ... When an incident is detected, AI agents can attempt to debug issues with known fixes using past incident information. When multiple agents are combined within a network, they can work out alternative solutions if the initial remediation effort doesn’t work, while communicating the ongoing process with engineers. Keeping a human in the loop (HITL) is vital to verifying the outputs of an AI model, but agents must be trusted to work autonomously within a system to identify fixes and then report these back to engineers. ... The most important step in creating a self-healing system is training AI agents to be able to learn from each incident, as well as from each other, to become truly autonomous. For this to happen, AI agents cannot be siloed into incident response. Instead, they must be incorporated into an organization’s wider system, communicate with third-party agents and allow them to draw correlations from each action taken to resolve each incident. In this way, each organization’s incident history becomes the training data for its AI agents, ensuring that the actions they take are organization-specific and relevant.
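The loop below is a schematic of that remediation flow, with a human-in-the-loop gate before any fix is applied; the agent, incident, and fix interfaces are hypothetical placeholders rather than a specific framework's API.

```python
def remediate(incident, agents, known_fixes, ask_engineer):
    """Attempt known fixes first, then agent-proposed alternatives, with a HITL gate.

    `agents` expose a propose_fix(incident) method, `known_fixes` maps incident kinds
    to candidate fixes, and `ask_engineer` asks a human to approve each step. All of
    these interfaces are hypothetical placeholders.
    """
    candidates = list(known_fixes.get(incident.kind, []))   # start from past-incident fixes
    for agent in agents:
        candidates.extend(agent.propose_fix(incident))      # agents add alternative remediations

    for fix in candidates:
        if not ask_engineer(f"Apply '{fix.name}' to incident {incident.id}?"):  # keep a human in the loop
            continue
        fix.apply()
        if incident.is_resolved():
            return fix   # record the outcome so it becomes training data for future incidents
    return None          # escalate: no candidate resolved the incident
```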


The three refactorings every developer needs most

If I had to rely on only one refactoring, it would be Extract Method, because it is the best weapon against creating a big ball of mud. The single best thing you can do for your code is to never let methods get bigger than 10 or 15 lines. The mess created when you have nested if statements with big chunks of code in between the curly braces is almost always ripe for extracting methods. One could even make the case that an if statement should have only a single method call within it. ... It’s a common motif that naming things is hard. It’s common because it is true. We all know it. We all struggle to name things well, and we all read legacy code with badly named variables, methods, and classes. Often, you name something and you know what the subtleties are, but the next person that comes along does not. Sometimes you name something, and it changes meaning as things develop. But let’s be honest, we are going too fast most of the time and as a result we name things badly. ... In other words, we pass a function result directly into another function as part of a boolean expression. This is… problematic. First, it’s hard to read. You have to stop and think about all the steps. Second, and more importantly, it is hard to debug. If you set a breakpoint on that line, it is hard to know where the code is going to go next.
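A before/after sketch of that last point in Python: extracting the buried function call into named variables. The function names are hypothetical stand-ins, and the same move reads the same way in curly-brace languages.

```python
# Before: a function result buried inside a boolean expression -- hard to read, hard to debug.
if is_eligible(get_account(user_id)) and not flagged_for_review(user_id):
    approve(user_id)

# After (Extract Variable): each step has a name and a place to set a breakpoint.
account = get_account(user_id)
eligible = is_eligible(account)
needs_review = flagged_for_review(user_id)
if eligible and not needs_review:
    approve(user_id)
```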


ENISA launches EU Vulnerability Database to strengthen cybersecurity under NIS2 Directive, boost cyber resilience

The EU Vulnerability Database is publicly accessible and serves various stakeholders, including the general public seeking information on vulnerabilities affecting IT products and services, suppliers of network and information systems, and organizations that rely on those systems and services. ... To meet the requirements of the NIS2 Directive, ENISA initiated a cooperation with different EU and international organisations, including MITRE’s CVE Programme. ENISA is in contact with MITRE to understand the impact and next steps following the announcement of the funding to the Common Vulnerabilities and Exposures Program. CVE data, data provided by Information and Communication Technology (ICT) vendors disclosing vulnerability information through advisories, and relevant information, such as CISA’s Known Exploited Vulnerability Catalogue, are automatically transferred into the EU Vulnerability Database. This will also be achieved with the support of member states, who established national Coordinated Vulnerability Disclosure (CVD) policies and designated one of their CSIRTs as the coordinator, ultimately making the EUVD a trusted source for enhanced situational awareness in the EU. 


Welcome to the age of paranoia as deepfakes and scams abound

Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off. ... Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their résumé, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details. Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.


CEOs Sound Alarm: C-Suite Behind in AI Savviness

According to the survey, CEOs now see upskilling internal teams as the cornerstone of AI strategy. The top two limiting factors impacting AI's deployment and use, they said, are the inability to hire adequate numbers of skilled people and to calculate value or outcomes. "CEOs have shifted their view of AI from just a tool to a transformative way of working," said Jennifer Carter, senior principal analyst at Gartner. Contrary to the CEOs' assessments in the Gartner survey, most CIOs view themselves as the key drivers and leaders of their organizations' AI strategies. According to a recent report by CIO.com, 80% of CIOs said they are responsible for researching and evaluating AI products, positioning them as "central figures in their organizations' AI strategies." As CEOs increasingly prioritize AI, customer experience and digital transformation, these agenda items are directly shaping the evolving role and responsibilities of the CIO. But 66% of CEOs say their business models are not fit for AI purposes. Billions continue to be spent on enterprisewide AI use cases but little has come in the way of returns. Gartner's forecast predicts a 76.4% surge in worldwide spending on gen AI in 2025, fueled by better foundational models and a global quest for AI-powered everything. But organizations are yet to see consistent results despite the surge in investment.


Dropping the SBOM, why software supply chains are too flaky

“Mounting software supply chain risk is driving organisations to take action. [There is a] 200% increase in organisations making software supply chain security a top priority and growing use of SBOMs,” said Josh Bressers, vice president of security at Anchore. ... “There’s a clear disconnect between security goals and real-world implementation. Since open source code is the backbone of today’s software supply chains, any weakness in dependencies or artifacts can create widespread risk. To effectively reduce these risks, security measures need to be built into the core of artifact management processes, ensuring constant and proactive protection,” said Douglas. If we take anything from these market analysis pieces, it may be that organisations struggle to balance the demands of delivering software at speed with addressing security vulnerabilities to a level which is commensurate with the composable interconnectedness of modern cloud-native applications in the Kubernetes universe. ... Alan Carson, Cloudsmith’s CSO and co-founder, remarked, “Without visibility, you can’t control your software supply chain… and without control, there’s no security. When we speak to enterprises, security is high up on their list of most urgent priorities. But security doesn’t have to come at the cost of speed. ...”


Does agentic AI spell doom for SaaS?

The reason agentic AI is perceived as a threat to SaaS and not traditional apps is that traditional apps have all but disappeared, replaced by on-demand versions of former client software. But it goes beyond that. AI is considered a potential threat to SaaS for several reasons, mostly because of how it changes who is in control and how software is used. Agentic AI changes how work gets done because agents act on behalf of users, performing tasks across software platforms. If users no longer need to open and use SaaS apps directly because the agents are doing it for them, those apps lose their engagement and perceived usefulness. That ultimately translates into lost revenue, since SaaS apps typically charge either per user or by usage. An advanced AI agent can automate the workflows of an entire department, which may be covered by multiple SaaS products. So instead of all those subscriptions, you just use an agent to do it all. That can lead to significant savings in software costs. On top of the cost savings are time savings. Jeremiah Stone, CTO with enterprise integration platform vendor SnapLogic, said agents have resulted in a 90% reduction in time for data entry and reporting into the company’s Salesforce system.


Ask a CIO Recruiter: Where Is the ‘I’ in the Modern CIO Role?

First, there are obviously huge opportunities AI can provide the business, whether it’s cost optimization or efficiencies, so there is a lot of pressure from boards and sometimes CEOs themselves saying ‘what are we doing in AI?’ The second side is that there are significant opportunities AI can enable the business in decision-making. The third leg is that AI is not fully leveraged today; it’s not in a very easy-to-use space. That is coming, and CIOs need to be able to prepare the organization for that change. CIOs need to prepare their teams, as well as business users, and say ‘hey, this is coming, we’ve already experimented with a few things. There are a lot of use cases applied in certain industries; how are we prepared for that?’ ... Just having that vision to see where technology is going and trying to stay ahead of it is important. Not necessarily chasing the shiny new toy, the new technology, but just being ahead of it is the most important skill set. Look around the corner and prepare the organization for the change that will come. Also, if you retrain some of the people, you have to be more analytical, more business minded. Those are good skills. That’s not easy to find. A lot of people [who] move into the CIO role are very technical, whether it is coding or heavily on the infrastructure side. That is a commodity today; you need to be beyond that.


Insider risk management needs a human strategy

A technical-only response to insider risk can miss the mark; we need to understand the human side. That means paying attention to patterns, motivations, and culture. Over-monitoring without context can drive good people away and increase risk instead of reducing it. When it comes to workplace monitoring, clarity and openness matter. “Transparency starts with intentional communication,” said Itai Schwartz, CTO of MIND. That means being upfront with employees: not just that monitoring is happening, but what’s being monitored, why it matters, and how it helps protect both the company and its people. According to Schwartz, organizations often gain employee support when they clearly connect monitoring to security, rather than surveillance. “Employees deserve to know that monitoring is about securing data – not surveilling individuals,” he said. If people can see how it benefits them and the business, they’re more likely to support it. Being specific is key. Schwartz advises clearly outlining what kinds of activities, data, or systems are being watched, and explaining how alerts are triggered. ... Ethical monitoring also means drawing boundaries. Schwartz emphasized the importance of proportionality: collecting only what’s relevant and necessary. “Allow employees to understand how their behavior impacts risk, and use that information to guide, not punish,” he said.


Sharing Intelligence Beyond CTI Teams, Across Wider Functions and Departments

As companies’ digital footprints expand exponentially, so too do their attack surfaces. And since most phishing attacks can be carried out by even the least sophisticated hackers due to the prevalence of phishing kits sold in cybercrime forums, it has never been harder for security teams to plug all the holes, let alone other departments who might be undertaking online initiatives which leave them vulnerable. CTI, digital brand protection and other cyber risk initiatives shouldn’t only be utilized by security and cyber teams. Think about legal teams, looking to protect IP and brand identities, marketing teams looking to drive website traffic or demand generation campaigns. They might need to implement digital brand protection to safeguard their organization’s online presence against threats like phishing websites, spoofed domains, malicious mobile apps, social engineering, and malware. In fact, deepfakes targeting customers and employees now rank as the most frequently observed threat by banks, according to Accenture’s Cyber Threat Intelligence Research. For example, there have even been instances where hackers are tricking large language models into creating malware that can be used to hack customers’ passwords.

Daily Tech Digest - May 13, 2025


Quote for the day:

"If you genuinely want something, don't wait for it -- teach yourself to be impatient." -- Gurbaksh Chahal



How to Move from Manual to Automated to Autonomous Testing

As great as test automation is, it would be a mistake to put little emphasis on or completely remove manual testing. Automated testing's strength is its ability to catch issues while scanning code. Conversely, a significant weakness is that it is not as reliable as manual testing in noticing unexpected issues that manifest themselves during usability tests. While developing and implementing automated tests, organizations should integrate manual testing into their overall quality assurance program. Even though manual testing may not initially benefit the bottom line, it definitely adds a level of protection against issues that could wreak havoc down the road, with potential damage in the areas of cost, quality, and reputation. ... The end goal is to have an autonomous testing program that has a clear focus on helping the organization achieve its desired business outcomes. There is a consistent theme in successfully developing and implementing automated testing programs: planning and patience. With the right strategy and a deliberate rollout, test automation opens the door to smoother operations and the ability to remain competitive and profitable in the ever-changing world of software development. To guarantee a successful implementation of automation practices, it is necessary to invest in training and creating best practices. 


The Hidden Dangers of Artifactory Tokens: What Every Organization Should Know

If tokens with read access are dangerous, those with write permissions are cybersecurity nightmares made flesh. They enable the most feared attack vector in modern software: supply chain poisoning. The playbook is elegant in its simplicity and devastating in its impact. Attackers identify frequently downloaded packages within your Artifactory instance, insert malicious code into these dependencies, then repackage and upload them as new versions. From there, they simply wait as unsuspecting users throughout your organization automatically upgrade to the compromised versions during routine updates. The cascading damage expands exponentially depending on which components get poisoned. Compromising build environments leads to persistent backdoors in all future software releases. Targeting developer tools gives attackers access to engineer workstations and credentials. ... The first line of defense must be preventing leaks before they happen. That means implementing secret detection tools that catch credentials before they're published to repositories and establishing monitoring systems that can identify exposed tokens on public forums, even from personal developer accounts. And following JFrog's evolving security guidance — such as moving away from deprecated API keys — ensures you're not using authentication methods with known weaknesses.
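A minimal sketch of the "catch credentials before they're published" idea as a pre-commit style scan; the two patterns are illustrative only, and a production setup would rely on a dedicated scanner with a far broader, provider-specific ruleset.

```python
import re
import sys

# Two illustrative patterns only -- real scanners ship hundreds of provider-specific rules.
SECRET_PATTERNS = {
    "generic API key": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_files(paths):
    findings = []
    for path in paths:
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                findings.append((path, line_no, name))
    return findings

if __name__ == "__main__":
    hits = scan_files(sys.argv[1:])
    for path, line_no, name in hits:
        print(f"{path}:{line_no}: possible {name}")
    sys.exit(1 if hits else 0)  # a non-zero exit blocks the commit when run as a pre-commit hook
```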


Is Model Context Protocol the New API?

With APIs, we learned that API design matters. Great APIs, like those from Stripe or Twilio, were designed for the developer. With MCP, design matters too. But who are we authoring for? You’re not authoring for a human; you’re authoring for a model that will pay close attention to every word you write. And it’s not just design: the operationalization of MCP is also important, and it’s another point of parallelism with the world of APIs. As we used to say at Apigee, there are good APIs and bad APIs. If your backend descriptions are domain-centric — as opposed to business or end-user centric — integration, adoption and developers’ overall ability to use your APIs will be impaired. A similar issue can arise with MCP. An AI might not recognize or use an MCP server’s tools if its description isn’t clear, action-oriented or AI-friendly. A final thing to note, which in many ways is very new to the AI world, is the fact that every action is “on the meter.” In the LLM world, everything turns into tokens, and tokens are dollars, as Nvidia CEO Jensen Huang reminded us in his GTC keynote this year. So, AI-native apps — and by extension the MCP servers that those apps connect to — need to pay attention to token optimization techniques necessary for cost optimization. There’s also a question of resource optimization outside of the token/GPU space.
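To make the "clear, action-oriented description" point concrete, here is a simplified contrast between two tool definitions in the general shape MCP servers expose (name, description, input schema); the tools and field contents are illustrative, not taken from any real server.

```python
# A vague, domain-centric description the model may never choose to use:
vague_tool = {
    "name": "q_invoke",
    "description": "Invokes the QRT subsystem endpoint.",
    "inputSchema": {"type": "object", "properties": {"p1": {"type": "string"}}},
}

# An action-oriented, business-centric description the model can reason about:
clear_tool = {
    "name": "get_open_invoices",
    "description": (
        "Return all unpaid invoices for a customer. Use this when the user asks about "
        "outstanding balances, overdue payments, or billing status."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string", "description": "CRM customer identifier"},
        },
        "required": ["customer_id"],
    },
}
```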


CISOs must speak business to earn executive trust

The key to building broader influence is translating security into business impact language. I’ve found success by guiding conversations around what executives and customers truly care about: business outcomes, not technical implementations. When I speak with the CEO or board members, I discuss how our security program protects revenue, ensures business continuity and enables growth. With many past breaches, organizations detected the threat but failed to take timely action, resulting in significant business impact. By emphasizing how our approach prevents these outcomes, I’m speaking their language. ... Successfully shifting a security organization from being perceived as the “department of no” to a strategic enabler requires a fundamental change in mindset, engagement model and communication style. It begins with aligning security goals to the broader business strategy, understanding what drives growth, customer trust and operational efficiency. Security leaders must engage cross-functionally early and often, embedding their teams within product development, IT and go-to-market functions to co-create secure solutions rather than imposing controls after the fact. This proactive, partnership-driven approach reduces friction and builds credibility.


Enterprise IAM could provide needed critical mass for reusable digital identity

Acquisitions, different business goals, and even rogue teams can prevent a single, unified platform from serving the whole organization. And then there are partnerships, employees contracted to customers, customer onboarding and a host of other situations that force identity information to move from an internal system to another one. “The result is we end up building difficult, complicated integrations that are hard to maintain,” Esplin says. Further, people want services that providers can only deliver by receiving trusted information, but people are hesitant to share their information. And then there are the attendant regulatory concerns, particularly where biometrics are involved. Intermediaries clearly have a big role to play. Some of those intermediaries may be AI agents, which can ease data sharing but do not address the central concern about how to limit information sharing while delivering trust. Esplin argues for verifiable credentials as the answer, with the signature of the issuer providing the trust and the consent-based sharing model of VCs satisfying users’ desire to limit data sharing. Because VCs are standardized, the need for complicated integrations is removed. Biometric templates are stored by the user, enabling strong binding without the data privacy concerns that come with legacy architectures.
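For readers unfamiliar with the format, this is a simplified sketch of what a W3C-style verifiable credential looks like; every value is made up, and a real credential's proof section would be produced by the issuer's signing key rather than typed by hand.

```python
verifiable_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "EmployeeCredential"],   # credential type is illustrative
    "issuer": "did:example:employer-123",                     # the signing party that anchors trust
    "issuanceDate": "2025-05-13T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-456",                       # the person the claims are about
        "role": "Contractor",
        "clearanceLevel": "standard",
    },
    "proof": {                                                # issuer's signature, verifiable without calling the issuer
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:employer-123#key-1",
        "proofValue": "z3MvG...",                             # truncated placeholder
    },
}
```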


Beyond speed: Measuring engineering success by impact, not velocity

From a planning and accountability perspective, velocity gives teams a clean way to measure output vs. effort. It can help them plan for sprints and prioritize long-term productivity targets. It can even help with accountability, allowing teams to rightsize their work and communicate it cross-departmentally. The issues begin when it is used as the sole metric of success for teams, as it fails to reveal the nuances necessary for high-level strategic thinking and positioning by leadership. It sets up a standard that over-emphasizes pure workload rather than productive effort towards organizational objectives. ... When leadership works with their engineering teams to find solutions to business challenges, they create a highly visible value stream between each individual developer and the customer at the end of the line. For engineering-forward organizations, developer experience and satisfaction is a top priority, so factors like transparency and recognition of work go a long way towards developer well-being. Perhaps most vital is for business and tech leaders to create roadmaps of success for engineers that clearly align with the goals of the overall business. LinearB cofounder and COO Lines acknowledges that these business goals can differ wildly between businesses: “For some of the leaders that I work with, real business impact might be as simple as, we got to get to production faster…”


Sakana introduces new AI architecture, ‘Continuous Thought Machines’ to make models reason with less guidance — like human brains

Sakana AI’s Continuous Thought Machine is not designed to chase leaderboard-topping benchmark scores, but its early results indicate that its biologically inspired design does not come at the cost of practical capability. On the widely used ImageNet-1K benchmark, the CTM achieved 72.47% top-1 and 89.89% top-5 accuracy. While this falls short of state-of-the-art transformer models like ViT or ConvNeXt, it remains competitive—especially considering that the CTM architecture is fundamentally different and was not optimized solely for performance. What stands out more are CTM’s behaviors in sequential and adaptive tasks. In maze-solving scenarios, the model produces step-by-step directional outputs from raw images—without using positional embeddings, which are typically essential in transformer models. Visual attention traces reveal that CTMs often attend to image regions in a human-like sequence, such as identifying facial features from eyes to nose to mouth. The model also exhibits strong calibration: its confidence estimates closely align with actual prediction accuracy. Unlike most models that require temperature scaling or post-hoc adjustments, CTMs improve calibration naturally by averaging predictions over time as their internal reasoning unfolds. 


How to build (real) cloud-native applications

Cloud-native applications are designed and built specifically to operate in cloud environments. It’s not about just “lifting and shifting” an existing application that runs on-premises and letting it run in the cloud. Unlike traditional monolithic applications, which are often tightly coupled, cloud-native applications are modular. A cloud-native application is not an application stack, but a decoupled application architecture. Perhaps the most atomic level of a cloud-native application is the container. A container could be a Docker container, though really any type of container that matches the Open Container Initiative (OCI) specifications works just as well. Often you’ll see the term microservices used to define cloud-native applications. Microservices are small, independent services that communicate over APIs—and they are typically deployed in containers. A microservices architecture allows for independent scaling in an elastic way that supports the way the cloud is supposed to work. While a container can run on all different types of host environments, the most common way that containers and microservices are deployed is inside of an orchestration platform. The most commonly deployed container orchestration platform today is the open source Kubernetes platform, which is supported on every major public cloud.


Responsible AI as a Business Necessity: Three Forces Driving Market Adoption

AI systems introduce operational, reputational, and regulatory risks that must be actively managed and mitigated. Organizations implementing automated risk management tools to monitor and mitigate these risks operate more efficiently and with greater resilience. The April 2024 RAND report, “The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed,” highlights that underinvestment in infrastructure and immature risk management are key contributors to AI project failures. ... Market adoption is the primary driver for AI companies, while organizations implementing AI solutions seek internal adoption to optimize operations. In both scenarios, trust is the critical factor. Companies that embed responsible AI principles into their business strategies differentiate themselves as trustworthy providers, gaining advantages in procurement processes where ethical considerations are increasingly influencing purchasing decisions. ... Stakeholders extend beyond regulatory bodies to include customers, employees, investors, and affected communities. Engaging these diverse perspectives throughout the AI lifecycle, from design and development to deployment and decommissioning, yields valuable insights that improve product-market fit while mitigating potential risks.


Leading high-performance engineering teams: Lessons from mission-critical systems

Conducting blameless post-mortems was imperative to focus on improving the systems without getting into blame avoidance or blame games. Building trust required consistency from me: admitting mistakes, getting feedback, going through exercises suggesting improvements, and responding in a constructive way. At the heart of this was creating the conditions for the team to feel safe taking interpersonal risks, so it was my role to steer conversation towards systemic factors that contributed to failures (“What process or procedure change could prevent this?”) and I was regularly looking for the opportunity to discuss, or later analyze, patterns across incidents so I could work towards higher order improvements. ... For teams just starting out, my advice is to take a staged approach. Pick one or two practices to begin with, develop a plan for how to evolve them, and define some metrics for the team to realize early value. Questions to ask yourself: How comfortable are team members sharing reliability concerns? Does your team look for ways to prevent incidents through your reviews or look for ways to blame others? How often does your team practice responding to failure? ... In my experience, leading top engineering teams requires a set of skills: building a strong technical culture, focusing on people, guiding teams through difficult times, and establishing durable practices.

Daily Tech Digest - May 12, 2025


Quote for the day:

"Our greatest fear should not be of failure but of succeeding at things in life that don't really matter." -- Francis Chan



The rise of vCISO as a viable cybersecurity career path

Companies that don’t have the means to hire a full-time CISO still face the same harsh realities their peers do — heightened compliance demands, escalating cyber incidents, and growing tech-related risks. A part-time security leader can help them assess their state of security and build out a program from scratch, or assist a full-time director-level security leader with a project. ... In some of these ongoing relationships this could be to fill the proverbial chair of the CISO, doing all the traditional work of the role on a part-time basis. This is the kind of arrangement most likely to be referred to as a fractional role. Other retainer arrangements may just be for an advisory position where the client is buying regular mindshare of the vCISO to supplement their tech team’s knowledge pool. They could be a strategic sounding board to the CIO or even a subject-matter expert to the director of security or newly installed CISO. But vCISOs can work on a project-by-project or hourly basis as well. “It’s really what works best for my potential client,” says Demoranville. “I don’t want to force them into a box. So, if a subscription model works or a retainer, cool. If they only want me here for a short engagement, maybe we’re trying to put in a compliance regimen for ISO 27001 or you need me to review NIST, that’s great too.”


Why Indian Banks Need a Sovereign Cloud Strategy

Enterprises need to not only implement better compliance strategies but also rethink the entire IT operating model. Managed sovereign cloud services can help enterprises address this need. ... The need for true sovereignty becomes crucial in a world where many global cloud providers, even when operating within Indian data centers, are subject to foreign laws such as the U.S. Clarifying Lawful Overseas Use of Data Act or the Foreign Intelligence Surveillance Act. These regulations can compel disclosure of Indian banking data to overseas governments, undermining trust and violating the spirit of data localization mandates. "When an Indian bank chooses a global cloud provider with U.S. exposure, they're essentially opening a backdoor for foreign jurisdictions to access sensitive Indian financial data," Rajgopal said. "Sovereignty is a strategic necessity." Managed sovereign clouds not only align with India's compliance frameworks but also reduce complexity by integrating regulatory controls directly into the cloud stack. Instead of treating compliance as an afterthought, it is incorporated in the architecture. ... "Banks today are not just managing money; they are managing trust, security and compliance at unprecedented levels. Sovereign cloud is no longer optional. It's the future of financial resilience," said Pai.


Study Suggests Quantum Entanglement May Rewrite the Rules of Gravity

Entanglement entropy measures the degree of quantum correlation between different regions of space and plays a key role in quantum information theory and quantum computing. Because entanglement captures how information is shared across spatial boundaries, it provides a natural bridge between quantum theory and the geometric fabric of spacetime. In conventional general relativity, the curvature of spacetime is determined by the energy and momentum of matter and radiation. The new framework adds another driver: the quantum information shared between fields. This extra term modifies Einstein’s equations and offers an explanation for some of gravity’s more elusive behaviors, including potential corrections to Newton’s gravitational constant. ... One of the more striking implications involves black hole thermodynamics. Traditional equations for black hole entropy and temperature rely on Newton’s constant being fixed. If gravity “runs” with energy scale — as the study proposes — then these thermodynamic quantities also shift. ... Ultimately, the study does not claim to resolve quantum gravity, but it does reframe the problem. By showing how entanglement entropy can be mathematically folded into Einstein’s equations, it opens a promising path that links spacetime to information — a concept familiar to quantum computer scientists and physicists alike.
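Schematically, the idea can be read against the standard Einstein field equations, which source curvature from the stress-energy of matter and radiation; the second line below is only an illustrative way of writing "an additional source term derived from entanglement entropy", not the paper's actual expression.

```latex
% Standard form: curvature on the left, matter and radiation stress-energy on the right
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}

% Schematic modification: an extra source term derived from the entanglement entropy S_{\mathrm{ent}}
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,\left( T_{\mu\nu} + T^{\mathrm{ent}}_{\mu\nu}\!\left[ S_{\mathrm{ent}} \right] \right)
```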


Maximising business impact: Developing mission-critical skills for organisational success

Often, L&D is perceived merely as an HR-led function tasked with building workforce capabilities. However, this narrow framing severely limits its potential impact. As Cathlea shared, “It’s time to educate leaders that L&D is not just a support role—it’s a business-critical responsibility that must be shared across the organisation. By understanding what success looks like through the eyes of different functions, L&D teams can design programmes that support those ambitions — and crucially, communicate value in language that business leaders understand.” The panel referenced a case from a tech retailer with over 150,000 employees, where the central L&D team worked to identify cross-cutting capability needs, such as communication, project management, and leadership, while empowering local departments to shape their training solutions. This balance of central coordination and local autonomy enabled the organisation to scale learning in a way that was both relevant and impactful. ... The shift towards skill-based development is also transforming how learning experiences are designed and delivered. What matters most is whether these learning moments are recognised, supported, and meaningfully connected to broader organisational goals.


What software developers need to know about cybersecurity

Training developers to write secure code shouldn’t be looked at as a one-time assignment. It requires a cultural shift. Start by making secure coding techniques the standard practice across your team. Two of the most critical (yet frequently overlooked) practices are input validation and input sanitization. Input validation ensures incoming data is appropriate and safe for its intended use, reducing the risk of logic errors and downstream failures. Input sanitization removes or neutralizes potentially malicious content—like script injections—to prevent exploits like cross-site scripting (XSS). ... Authentication and authorization aren’t just security check boxes—they define who can access what and how. This includes access to code bases, development tools, libraries, APIs, and other assets. ... APIs may be less visible, but they form the connective tissue of modern applications. APIs are now a primary attack vector, with API attacks growing 1,025% in 2024 alone. The top security risks? Broken authentication, broken authorization, and lax access controls. Make sure security is baked into API design from the start, not bolted on later. ... Application logging and monitoring are essential for detecting threats, ensuring compliance, and responding promptly to security incidents and policy violations. Logging is more than a check-the-box activity—for developers, logging can be a critical line of defense.
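A small sketch of the two practices called out above; the allow-list rule and the HTML-escaping helper are illustrative, and in production these checks would normally come from a vetted framework or library rather than hand-rolled code.

```python
import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")  # allow-list rule, not a block-list

def validate_username(raw: str) -> str:
    """Input validation: reject anything not appropriate for its intended use."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("username must be 3-32 characters: letters, digits, _ . -")
    return raw

def sanitize_for_html(raw: str) -> str:
    """Input sanitization: neutralize content before it reaches an HTML context (XSS)."""
    return html.escape(raw, quote=True)

print(validate_username("alice_dev"))                   # passes
print(sanitize_for_html("<script>alert(1)</script>"))   # &lt;script&gt;alert(1)&lt;/script&gt;
```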


Why security teams cannot rely solely on AI guardrails

The core issue is that most guardrails are implemented as standalone NLP classifiers—often lightweight models fine-tuned on curated datasets—while the LLMs they are meant to protect are trained on far broader, more diverse corpora. This leads to misalignment between what the guardrail flags and how the LLM interprets inputs. Our findings show that prompts obfuscated with Unicode, emojis, or adversarial perturbations can bypass the classifier, yet still be parsed and executed as intended by the LLM. This is particularly problematic when guardrails fail silently, allowing semantically intact adversarial inputs through. Even emerging LLM-based judges, while promising, are subject to similar limitations. Unless explicitly trained to detect adversarial manipulations and evaluated across a representative threat landscape, they can inherit the same blind spots. To address this, security teams should move beyond static classification and implement dynamic, feedback-based defenses. Guardrails should be tested in-system with the actual LLM and application interface in place. Runtime monitoring of both inputs and outputs is critical to detect behavioral deviations and emergent attack patterns. Additionally, incorporating adversarial training and continual red teaming into the development cycle helps expose and patch weaknesses before deployment. 
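One narrow example of moving beyond static classification: normalizing obfuscated input before it reaches the guardrail classifier, so that Unicode and look-alike tricks are classified in roughly the form the LLM will act on. This is a single preprocessing step, not a complete defense, and the homoglyph map is a tiny illustrative subset.

```python
import unicodedata

# Tiny illustrative map of Cyrillic look-alikes; real deployments use full confusables tables.
HOMOGLYPHS = {"а": "a", "е": "e", "о": "o", "і": "i", "ѕ": "s"}

def normalize_for_guardrail(prompt: str) -> str:
    """Reduce obfuscated input to a canonical form before the guardrail classifies it."""
    text = unicodedata.normalize("NFKC", prompt)             # fold width/compatibility variants
    text = "".join(HOMOGLYPHS.get(ch, ch) for ch in text)    # map common look-alike characters
    text = "".join(ch for ch in text if unicodedata.category(ch)[0] != "C")  # drop control/format chars
    return text

# Classify the normalized text, and log both forms so any silent divergence between
# what the guardrail saw and what the LLM acted on is visible at runtime.
```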


Finding the Right Architecture for AI-Powered ESG Analysis

Rather than choosing between competing approaches, we developed a hybrid architecture that leverages the strengths of both deterministic workflows and agentic AI: For report analysis: We implemented a structured workflow that removes the Intent Agent and Supervisor from the process, instead providing our own intention through a report workflow. This orchestrates the process using the uploaded sustainability file, synchronously chaining prompts and agents to obtain the company name and relevant materiality topics, then asynchronously producing a comprehensive analysis of environmental, social, and governance aspects. For interactive exploration: We maintained the conversational, agentic architecture as a core component of the solution. After reviewing the initial structured report, analysts can ask follow-up questions like, “How does this company’s emissions reduction claims compare to their industry peers?” ... By marrying these approaches, enterprise architects can build systems that maintain human oversight while leveraging AI to handle data-intensive tasks – keeping human analysts firmly in the driver’s seat with AI serving as powerful analytical tools rather than autonomous decision-makers. As we navigate the rapidly evolving landscape of AI implementation, this balanced approach offers a valuable pathway forward.
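A schematic of that hybrid split: a fixed, deterministic chain for report analysis and a conversational agent for analyst-driven follow-up. The helper, model, and agent interfaces are hypothetical placeholders for whichever prompt/agent framework is actually in use.

```python
def analyze_report(uploaded_file, llm, section_agents):
    """Deterministic path: no intent agent or supervisor, just a fixed chain of steps."""
    text = extract_text(uploaded_file)                      # hypothetical helper
    company = llm.prompt(f"Name the reporting company:\n{text[:4000]}")
    topics = llm.prompt(f"List the material ESG topics for {company}:\n{text[:4000]}")
    # The E, S and G analyses are independent, so they can run asynchronously/in parallel.
    return {
        section: agent.analyze(text, topics)
        for section, agent in section_agents.items()        # e.g. environmental, social, governance
    }

def explore(question, report, conversational_agent):
    """Agentic path: the analyst drives follow-up questions against the finished report."""
    return conversational_agent.ask(question, context=report)
```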


The Rise of xLMs: Why One-Size-Fits-All AI Models Are Fading

To reach its next evolution, the LLM market will follow all other widely implemented technologies and fragment into an “xLM” market of more specialized models, where the x stands for various models. Language models are being implemented in more places with application- and use case-specific demands, such as lower power or higher security and safety measures. Size is another factor, but we’ll also see varying functionality and models that are portable, remote, hybrid, and domain and region-specific. With this progression, greater versatility and diversity of use cases will emerge, with more options for pricing, security, and latency. ... We must rethink how AI models are trained to fully prepare for and embrace the xLM market. The future of more innovative AI models and the pursuit of artificial general intelligence hinge on advanced reasoning capabilities, but this necessitates restructuring data management practices. ... Preparing real-time data pipelines for the xLM age inherently increases pressure on data engineering resources, especially for organizations currently relying on static batch data uploads and fine-tuning. Historically, real-time accuracy has demanded specialized teams to complete regular batch uploads while maintaining data accuracy, which presents cost and resource barriers. 


Ernst & Young exec details the good, bad and future of genAI deployments

“There is a huge skills gap in data science in terms of the number of people that can do that well, and that is not changing. Everywhere else we can talk about what jobs are changing and where the future is. But AI scientists, data scientists, continue to be the top two in terms of what we’re looking for. I do think organizations are moving to partner more in terms of trying to leverage those skills gap….” The more specific the case for the use of AI, the more easily you can calculate the ROI. “Healthcare is going to be ripe for it. I’ve talked to a number of doctors who are leveraging the power of AI and just doing their documentation requirements, using it in patient booking systems, workflow management tools, supply chain analysis. There, there are clear productivity gains, and they will be different per sector. “Are we also far enough along to see productivity gains in R&D and pharmaceuticals? Yes, we are. Is it the Holy Grail? Not yet, but we are seeing gains and that’s where I think it gets more interesting. “Are we far enough along to have systems completely automated and we just work with AI and ask the little fancy box in front of us to print out the balance sheet and everything’s good? No, we’re a hell of a long way away from that.”


How Human-Machine Partnerships Are Evolving in 2025

“Soon, there will be no function that does not have AI as a fundamental ingredient. While it’s true that AI will replace some jobs, it will also create new ones and reduce the barrier of entry into many markets that have traditionally been closed to just a technical or specialized group,” says Bukhari. “AI becoming a part of day-to-day life will also force us to embrace our humanity more than ever before, as the soft skills AI can’t replace will become even more critical for success in the workplace and beyond.” ... CIOs and other executives must be data and AI literate, so they are better equipped to navigate complex regulations, lead teams through AI-driven transformations and ensure that AI implementations are aligned with business goals and values. Cross-functional collaboration is also critical. ... AI innovation is already outpacing organizational readiness, so continuous learning, proactive strategy alignment and iterative implementation approaches are important. CIOs must balance infrastructure investments, like GPU resource allocation, with flexibility in computing strategies to stay competitive without compromising financial stability. “As the enterprise landscape increasingly incorporates AI-driven processes, the C-suite must cultivate specific skills that will cascade effectively through their management structures and their entire human workforce,” says Miskawi. 


Daily Tech Digest - May 11, 2025


Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche



The Human-Centric Approach To Digital Transformation

Involving employees from the beginning of the transformation process is vital for fostering buy-in and reducing resistance. When employees feel they have a say in how new tools and processes will be implemented, they’re more likely to support them. In practice, early involvement can take many forms, including workshops, pilot programs, and regular feedback sessions. For instance, if a company is considering adopting a new project management tool, it can start by inviting employees to test various options, provide feedback, and voice their preferences. ... As companies increasingly adopt digital tools, the need for digital literacy grows. Employees who lack confidence or skills in using new technology are more likely to feel overwhelmed or resistant. Providing comprehensive training and support is essential to ensuring that all employees feel capable and empowered to leverage digital tools. Digital literacy training should cover the technical aspects of new tools and focus on their strategic benefits, helping employees see how these technologies align with broader company goals. ... The third pillar, adaptability, is crucial for sustaining digital transformation. In a human-centered approach, adaptability is encouraged and rewarded, creating a growth-oriented culture where employees feel safe to experiment, take risks, and share ideas. 


Forging OT Security Maturity: Building Cyber Resilience in EMEA Manufacturing

When it comes to OT security maturity, pragmatic measures that are easily implementable by resource-constrained SME manufacturers are the name of the game. Setting up an asset visibility program, network segmentation, and simple threat detection can attain significant value without requiring massive overhauls. Meanwhile, cultural alignment across IT and OT teams is essential. ... “To address evolving OT threats, organizations must build resilience from the ground up,” Mashirova told Industrial Cyber. “They should enhance incident response, invest in OT continuous monitoring, and promote cross-functional collaboration to improve operational resilience while ensuring business continuity and compliance in an increasingly hostile cyber environment.” ... “Manufacturers throughout the region are increasingly recognizing that cyber threats are rapidly shifting toward OT environments,” Claudio Sangaletti, OT leader at medmix, told Industrial Cyber. “In response, many companies are proactively developing and implementing comprehensive OT security programs. These initiatives aim not only to safeguard critical assets but also to establish robust business recovery plans to swiftly address and mitigate the impacts of potential attacks.”


Quantum Leap? Opinion Split Over Quantum Computing’s Medium-Term Impact

“While the actual computations are more efficient, the environment needed to keep quantum machines running, especially the cooling to near absolute zero, is extremely energy-intensive,” he says. When companies move their infrastructure to cloud platforms and transition key platforms like CRM, HCM, and Unified Comms Platform (UCP) to cloud-native versions, they can reduce the energy use associated with running large-scale physical servers 24/7. “If and when quantum computing becomes commercially viable at scale, cloud partners will likely absorb the cooling and energy overhead,” Johnson says. “That’s a win for sustainability and focus.” Alexander Hallowell, principal analyst at Omdia’s advanced computing division, says that unless one of the currently more “out there” technology options proves itself (e.g., photonics or something semiconductor-based), quantum computing is likely to remain infrastructure-intensive and environmentally fragile. “Data centers will need to provide careful isolation from environmental interference and new support services such as cryogenic cooling,” he says. He predicts the adoption of quantum computing within mainstream data center operations is at least five years out, possibly “quite a bit more.” 


Introduction to Observability

Observability has become a central concept in information technology, particularly in DevOps and system administration. Essentially, observability is the practice of inferring a system’s internal state by observing its outputs. This approach offers insight into how systems behave, enabling teams to troubleshoot problems, enhance performance, and ensure reliability. In today’s IT landscape, the complexity and scale of applications have grown significantly, and traditional monitoring techniques have struggled to keep up with the rise of technologies like microservices, containers, and serverless architectures. ... Transitioning from monitoring to observability marks a progression in how systems are managed and maintained. Although monitoring remains crucial for tracking metrics and reacting to alerts, observability provides the comprehensive perspective and in-depth analysis needed to understand and improve system performance. By combining both approaches, companies can run a more effective IT infrastructure. ... Observability depends on three elements to offer a full picture of system performance and behavior: logs, metrics, and traces. These components, commonly known as the “three pillars of observability,” work together to give teams the information they need to analyze and improve their systems.
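As a rough illustration of how the three pillars fit together for a single request, here is a minimal, standard-library-only Python sketch; real systems would emit these signals through an instrumentation framework such as OpenTelemetry, and every name below is illustrative rather than taken from the article.

```python
# Conceptual sketch of the "three pillars" for one request, using only the
# standard library. All names (checkout, handle_checkout, span) are invented.
import logging
import time
import uuid
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")          # pillar 1: logs (discrete events)
metrics = {"checkout.requests": 0}           # pillar 2: metrics (aggregated counts)


@contextmanager
def span(name: str, trace_id: str):          # pillar 3: traces (timed, correlated steps)
    start = time.perf_counter()
    try:
        yield
    finally:
        log.info("span=%s trace_id=%s duration_ms=%.1f",
                 name, trace_id, (time.perf_counter() - start) * 1000)


def handle_checkout(order_id: str) -> None:
    trace_id = uuid.uuid4().hex              # correlates all three signals for this request
    metrics["checkout.requests"] += 1
    with span("handle_checkout", trace_id):
        log.info("processing order %s trace_id=%s", order_id, trace_id)
        time.sleep(0.05)                     # stand-in for real work


handle_checkout("ord-42")
print(metrics)
```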


Cloud Strategy 2025: Repatriation Rises, Sustainability Matures, and Cost Management Tops Priorities

After more than twenty years of trial-and-error, the cloud has arrived at its steady state. Many organizations have seemingly settled on the cloud mix best suited to their business needs, embracing a hybrid strategy that utilizes at least one public and one private cloud. ... Sustainability is quickly moving from aspiration to expectation for businesses. ... Cost savings still takes the top spot for a majority of organizations, but notably, 31% now report equal prioritization between cost optimization and sustainability. The increased attention on sustainability comes as the internal and external regulatory pressures mount for technology firms to meet environmental requirements. There is also the reputational cost at play – scrutiny over sustainability efforts is on the rise from customers and employees alike. ... As organizations maintain a laser focus on cost management, FinOps has emerged as a viable solution for combating cost management challenges. A comprehensive FinOps infrastructure is a game-changer when it comes to an organization’s ability to wrangle overspending and maximize business value. Additionally, FinOps helps businesses activate on timely, data-driven insights, improving forecasting and encouraging cross-functional financial accountability.


Building Adaptive and Future-Ready Enterprise Security Architecture: A Conversation with Yusfarizal Yusoff

Securing Operational Technology (OT) environments in critical industries presents a unique set of challenges. Traditional IT security solutions are often not directly applicable to OT due to the distinctive nature of these environments, which involve legacy systems, proprietary protocols, and long lifecycle assets that may not have been designed with cybersecurity in mind. As these industries move toward greater digitisation and connectivity, OT systems become more vulnerable to cyberattacks. One major challenge is ensuring interoperability between IT and OT environments, especially when OT systems are often isolated and have been built to withstand physical and environmental stresses, rather than being hardened against cyber threats. Another issue is the lack of comprehensive security monitoring in many OT environments, which can leave blind spots for attackers to exploit. To address these challenges, security architects must focus on network segmentation to separate IT and OT environments, implement robust access controls, and introduce advanced anomaly detection systems tailored for OT networks. Furthermore, organisations must adopt specialised OT security tools capable of addressing the unique operational needs of industrial environments. 
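As a rough sketch of what “anomaly detection tailored for OT networks” can mean in practice, the hypothetical example below flags devices whose command mix or message rate drifts from a learned baseline; the device names, commands, and thresholds are invented for illustration, and real deployments rely on protocol-aware sensors rather than hand-rolled checks.

```python
# Minimal, hypothetical baseline check for OT traffic: alert when a device
# issues commands outside its profile or exceeds its normal message rate.
from collections import Counter

BASELINE = {
    "plc-01": {"allowed_cmds": {"read_coils", "read_registers"},
               "max_msgs_per_min": 120},
}


def check_window(device: str, commands: list[str]) -> list[str]:
    """Evaluate one minute of observed commands against the baseline."""
    profile = BASELINE.get(device)
    if profile is None:
        return [f"{device}: unknown device on OT segment"]
    alerts = []
    seen = Counter(commands)
    unexpected = set(seen) - profile["allowed_cmds"]
    if unexpected:
        alerts.append(f"{device}: unexpected commands {sorted(unexpected)}")
    if sum(seen.values()) > profile["max_msgs_per_min"]:
        alerts.append(f"{device}: message rate {sum(seen.values())}/min exceeds baseline")
    return alerts


print(check_window("plc-01", ["read_coils"] * 50 + ["write_register"] * 3))
```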


CDO and CAIO roles might have a built-in expiration date

“The CDO role is likely to be durable, much due to the long-term strategic value of data; however, it is likely to evolve to encompass more strategic business responsibility,” he says. “The CAIO, on the other hand, is likely to be subsumed into CTO or CDO roles as AI technology folds into core technologies and architectures standardize.” For now, both CAIOs and CDOs have responsibilities beyond championing the use of AI and good data governance, Stone adds. They will build the foundation for enterprise-wide benefits of AI and good data management. “As AI and data literacy take hold across the enterprise, CDOs and CAIOs will shift from internal change enablers and project champions to strategic leaders and organization-wide enablers,” he says. “They are, and will continue to grow more, responsible for setting standards, aligning AI with business goals, and ensuring secure, scalable operations.” Craig Martell, CAIO at data security and management vendor Cohesity, agrees that the CDO position may have a better long-term prognosis than the CAIO position. Good data governance and management will remain critical for many organizations well into the future, he says, and that job may not be easy to fold into the CIO’s responsibilities. “What the chief data officer does is different than what the CIO does,” says Martell.


Chaos Engineering with Gremlin and Chaos-as-a-Service: An Empirical Evaluation

As organizations increasingly adopt microservices and distributed architectures, the potential for unpredictable failures grows. Traditional testing methodologies often fail to capture the complexity and dynamism of live systems. Chaos engineering addresses this gap by introducing carefully planned disturbances to test system responses under duress. This paper explores how Gremlin can be used to perform such experiments on AWS EC2 instances, providing actionable insights into system vulnerabilities and recovery mechanisms. ... Chaos engineering originated at Netflix with the development of the Chaos Monkey tool, which randomly terminated instances in production to test system reliability. Since then, the practice has evolved with tools like Gremlin, LitmusChaos, and Chaos Toolkit offering more controlled and systematic approaches. Gremlin offers a SaaS-based chaos engineering platform with a focus on safety, control, and observability. ... Chaos engineering using Gremlin on EC2 has proven effective in validating the resilience of distributed systems. The experiments helped identify areas for improvement, including better configuration of health checks and fine-tuning auto-scaling thresholds. The blast radius concept ensured safe testing without risking the entire system.
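The paper centers on Gremlin, whose API is not reproduced here; as a conceptual sketch of the same blast-radius idea, the following hypothetical boto3 snippet stops one randomly chosen EC2 instance, but only among instances explicitly opted in via a tag (the tag name and region are assumptions, not details from the paper).

```python
# Not Gremlin's API: a hand-rolled sketch of a terminate-style experiment on
# EC2, with the blast radius limited to instances tagged for chaos testing.
import random

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption


def eligible_instances() -> list[str]:
    """Only instances opted in via a tag are inside the blast radius."""
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:chaos-opt-in", "Values": ["true"]},      # hypothetical tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    return [i["InstanceId"]
            for r in resp["Reservations"] for i in r["Instances"]]


def stop_one_instance(dry_run: bool = True) -> None:
    """Stop one random opted-in instance, mimicking an instance-failure experiment."""
    targets = eligible_instances()
    if not targets:
        print("no opted-in instances; nothing to do")
        return
    victim = random.choice(targets)
    if dry_run:
        print(f"[dry run] would stop {victim}")
        return
    ec2.stop_instances(InstanceIds=[victim])
    print(f"stopped {victim}")


if __name__ == "__main__":
    stop_one_instance(dry_run=True)
```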


How digital twins are reshaping clinical trials

While the term "digital twin" is often associated with synthetic control arms, Walsh stressed that the most powerful and regulatory-friendly application lies in randomized controlled trials (RCTs). In this context, digital twins do not replace human subjects but act as prognostic covariates, enhancing trial efficiency while preserving randomization and statistical rigor. "Digital twins make every patient more valuable," Walsh explained. "Applied correctly, this means that trials may be run with fewer participants to achieve the same quality of evidence." ... "Digital twins are one approach to enable highly efficient replication studies that can lower the resource burden compared to the original trial," Walsh clarified. "This can include supporting novel designs that replicate key results while also assessing additional clinical or biological questions of interest." In effect, this strategy allows for scientific reproducibility without repeating entire protocols, making it especially relevant in therapeutic areas with limited eligible patient populations or high participant burden. In early development -- particularly phase 1b and phase 2 -- digital twins can be used as synthetic controls in open-label or single-arm studies. This design is gaining traction among sponsors seeking to make faster go/no-go decisions while minimizing patient exposure to placebos or standard-of-care comparators.


The Great European Data Repatriation: Why Sovereignty Starts with Infrastructure

Data repatriation is not merely a reactive move driven by fear. It’s a conscious and strategic pivot. As one industry leader recently noted in Der Spiegel, “We’re receiving three times as many inquiries as usual.” The message is clear: European companies are actively evaluating alternatives to international cloud infrastructures—not out of nationalism, but out of necessity. The scale of this shift is hard to ignore. Recent reports have cited a 250% user growth on platforms offering sovereign hosting, and inquiries into EU-based alternatives have surged over a matter of months. ... Challenges remain: Migration is rarely a plug-and-play affair. As one European CEO emphasized to The Register, “Migration timelines tend to be measured in months or years.” Moreover, many European providers still lack the breadth of features offered by global cloud platforms, as a KPMG report for the Dutch government pointed out. Yet the direction is clear.  ... Europe’s data future is not about isolation, but balance. A hybrid approach—repatriating sensitive workloads while maintaining flexibility where needed—can offer both resilience and innovation. But this journey starts with one critical step: ensuring infrastructure aligns with European values, governance, and control.