Daily Tech Digest - September 01, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain


The AI-powered cyberattack era is here

In the deepfake era, the crime was unprecedented and exotic. In the genAI era, it’s a banality.
You just need a three-second recording of a person talking, according to McAfee experts. With that snippet, you can create convincing fake messages or phone calls. If someone sounds like a trusted person, people are ready to hand over their cash or secrets. In 2024, the company’s global study found one in four people had suffered or knew someone hit by an AI voice scam. ... One challenge in the field of AI-enabled attacks — which is to say, attacks that didn’t exist or weren’t possible before genAI — is how quickly everything changes. Take AI browsers, for example. This new category of web browser includes Perplexity Comet, Dia (by The Browser Company), Fellou, Opera Neon, Sigma AI Browser, Arc Max, Microsoft Edge Copilot, Brave Leo, Wave Browser Pro, SigmaOS, Opera Aria, Genspark AI Browser, Poly, Quetta Browser, Browserbase, Phew AI Tab, and the upcoming OpenAI browser. ... The truth is that most attacks are still the old-fashioned kind, performed without help from AI. And most still involve human error. So all the standard guidelines and best practices apply. Companies should update software regularly, require multifactor authentication for all logins, and give employees training about fake emails and malicious links. Outside experts should run penetration tests twice a year. Making regular offline backups can save thousands after AI-based ransomware attacks.


How to Make Data Work for What’s Next

Too often, companies begin by auditing the data they already have. A better question is, “What outcome are we trying to drive?” Whether it’s scaling operations, improving retention, or guiding smarter investments, the path forward starts with understanding where you want to go. ... Not everything needs to be measured. The goal is to curate the data, pulling in what’s most useful rather than everything that’s available. Focus on what’s going to help people make decisions in real time. Some metrics help you look ahead, while others explain what already happened. A good mix can be helpful, but only if it still aligns with the outcome you’re tracking. This shift can feel unfamiliar. Many teams are used to starting from their existing systems (what’s already tracked, what can be pulled from a dashboard) and working backward. But that often leads to noise or gaps. Managing too much data isn’t just overwhelming; it’s also costly. Teams spend time storing, maintaining, and cleaning data that often doesn’t lead to better decisions. ... Trust is built in small moments. When early reports reflect what people expect based on their lived experience, they begin to rely on the system. ... A stronger data culture isn’t just about systems. It’s about building skills and helping people see how their work connects to outcomes. When data reinforces what people already know and shows up in context—visually, interactively, and on time—it becomes a tool they trust, use, and want to leverage.


Cybercrime increasingly moving beyond financial gains

“We are very redundant when talking about cybercrime, because we always associate it with economic motivations,” says Hervé Lambert, global consumer operations manager at Panda Security. “But they are not the only reasons out there.” Lambert also points to political and military cyber espionage, “states or actors linked to different governments” that seek to infiltrate to obtain strategic information. It also includes cyberwarfare, “attacks designed to damage, disable, or render important systems useless. The purpose is not profit, but to win a war or facilitate sabotage.” ... “These very different motivations are not mutually exclusive, as they seek different objectives,” adds Alonso García. “We can find them as the sole motivation, or they complement each other, making cyberattacks more elaborate and complex to analyze.” In other words, a person or group may have political interests but demand a ransom to cover up their actions or seek funding; or, in a context of turmoil between countries, take advantage of the situation to launch attacks for profit. ... But the strategy to be followed will have to be reoriented or reinforced if, for example, we are working in a sector that is critical from a geopolitical point of view, in which, among other things, disinformation will have to be taken into account.


SBOMs and AI coding assistants: securing the new software supply chain

"The old software world is gone, giving way to a new set of truths being defined by AI. To navigate the changes, technical leaders should carry out rigorous validation on AI assistants. Managers should establish formal AI governance policies and invest in training for emerging technologies. Security professionals should update their threat models to include AI-specific risks and leverage SBOMs [Software Bill of Materials] as a strategic asset for risk management to achieve true scale application security." ... "Without SBOMs, we're flying blind. With them, we're finally turning the lights on in the supply chain cockpit," said Helen Oakley, Director of Secure Software Supply Chains and Secure Development at SAP. "AI coding assistants are like interns with rocket fuel. They accelerate everything, including errors, if you don't set boundaries." ... "For organizations adopting third-party AI tools, it's also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption. When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries, and applying security controls that account for external dependencies."
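As a concrete illustration of what SBOM visibility buys, here is a minimal Python sketch that parses a CycloneDX-style JSON SBOM and flags components with no pinned version. The embedded SBOM fragment is illustrative data, not the output of a real scan.

```python
import json

# A minimal CycloneDX-style SBOM fragment (illustrative data, not a real scan).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"name": "requests", "version": "2.31.0", "purl": "pkg:pypi/requests@2.31.0"},
    {"name": "leftpad-ai-helper", "version": ""}
  ]
}
"""

def unpinned_components(sbom: dict) -> list[str]:
    """Return names of components with no pinned version: blind spots in the supply chain."""
    return [c["name"] for c in sbom.get("components", []) if not c.get("version")]

sbom = json.loads(SBOM_JSON)
print(unpinned_components(sbom))  # components that need investigation
```

The same loop extends naturally to other "lights on" checks, such as flagging components without a `purl` or cross-referencing versions against a vulnerability feed.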


How Digital Twins Transform Drug Development Processes

A key technological advancement emerging from these hubs is the application of digital twins in pharmaceutical research. Initially used in engineering and manufacturing sectors, digital twins in the pharmaceutical industry are virtual models of human systems that replicate biological processes. These replicas are built using vast volumes of biological, clinical and genomic data, enabling researchers to test how different patient profiles might respond to specific drugs without exposing individuals to experimental therapies. The implications of this approach are transformative. Through digital twins, pharmaceutical scientists can simulate the progression of diseases, predict Adverse Drug Reactions (ADRs) and model patient diversity across age, gender, genetic traits and comorbidities. This ability to run in-silico trials, which are clinical trials conducted through virtual simulations, reduces the cost, duration and risk associated with traditional clinical testing. ... AI is transforming every clinical development phase worldwide, from trial design to execution and outcome analysis. According to industry estimates, AI is expected to support 60–70 per cent of clinical trials by 2030, potentially saving $20–30 billion annually. While digital twins represent just one facet of this broader AI integration, their capacity to virtually assess drug safety and efficacy could significantly accelerate the journey from discovery to patient delivery.
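To make the in-silico idea concrete, here is a deliberately simplified Python sketch: a one-compartment pharmacokinetic model run against two hypothetical virtual patient profiles. The parameter values and toxicity threshold are invented for illustration; real digital twins are built from far richer biological, clinical, and genomic data.

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualPatient:
    # Illustrative "digital twin" parameters; real twins carry far richer data.
    name: str
    volume_l: float        # volume of distribution (litres)
    clearance_l_per_h: float

def concentration(p: VirtualPatient, dose_mg: float, t_hours: float) -> float:
    """One-compartment model: C(t) = (dose / V) * exp(-(CL / V) * t)."""
    k = p.clearance_l_per_h / p.volume_l
    return (dose_mg / p.volume_l) * math.exp(-k * t_hours)

cohort = [
    VirtualPatient("young_adult", volume_l=42.0, clearance_l_per_h=6.0),
    VirtualPatient("elderly",     volume_l=36.0, clearance_l_per_h=3.0),
]
TOXIC_MG_PER_L = 3.0  # invented threshold for the example
for p in cohort:
    c6 = concentration(p, dose_mg=200.0, t_hours=6.0)
    print(f"{p.name}: C(6h) = {c6:.2f} mg/L, flagged={c6 > TOXIC_MG_PER_L}")
```

Even this toy run shows the point of simulating patient diversity: the slower-clearing profile crosses the threshold while the other does not, a difference a one-size-fits-all trial design could miss.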


Break the Link: The Fastest Way to Contain a Cyber-Attack

Hardware-enforced network isolation gives operators the ability to physically disconnect servers, storage and network segments on demand, using secure, out-of-band commands that sit entirely outside the attack surface. The simplicity here is the ultimate strength: if malware can’t make contact, it can’t cause damage. If a breach does happen? You can trigger isolation in milliseconds, stopping the spread before it locks systems, exfiltrates data or drains accounts. Unlike software-only isolation, which depends on the very systems it’s defending, hardware isolation can’t be tampered with remotely. No IP address, no exploitable code, just a clean physical break. ... Hardware isolation cuts the response to milliseconds, preserving both data integrity and regulatory compliance. It stops an incident at the source, shutting it down before operations are disrupted. The power of isolation is especially effective in high-stakes environments where speed and certainty matter. In colocation facilities, automated isolation prevents cross-tenant contamination by cutting off a compromised tenant before the threat can spread. At disaster recovery sites, it enables network segments to remain fully offline until they are needed, improving security and efficiency. In AI-heavy workloads, hardware isolation prevents model tampering and data exfiltration. In backup environments, selective disconnection ensures ransomware cannot encrypt or corrupt critical archives.
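The control flow described above (detect a breach, break the physical link for only the affected segment) can be sketched in a few lines of Python. The `OutOfBandRelay` class is a software mock standing in for a hardware switch reached over a dedicated out-of-band channel; the names are hypothetical.

```python
class OutOfBandRelay:
    """Mock of a hardware relay on a dedicated out-of-band channel.

    In a real deployment this would drive a physical switch over a separate
    management path; here it is simulated purely to show the control flow.
    """
    def __init__(self) -> None:
        self.connected = True

    def break_link(self) -> None:
        self.connected = False  # clean physical break: no IP, no exploitable code path

def contain(segment_relays: dict[str, OutOfBandRelay], compromised: str) -> None:
    """Isolate only the compromised segment, leaving the rest online."""
    segment_relays[compromised].break_link()

relays = {"tenant-a": OutOfBandRelay(), "tenant-b": OutOfBandRelay()}
contain(relays, "tenant-a")
print({name: r.connected for name, r in relays.items()})
```

The selectivity is the point: in a colocation scenario, `tenant-a` is cut off while `tenant-b` keeps running, which is exactly the cross-tenant containment the article describes.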


Prioritize these 4 processes to balance innovation and responsibility in banking model risk management

As AI/ML capabilities often require specialized software, datasets and computational tools, many financial institutions—especially smaller ones—turn to third-party vendors. While this can accelerate adoption, it also introduces critical vulnerabilities related to oversight, accountability and systemic dependence. Third-party models often come with limited visibility into how they were developed, what data was used and how they behave under stress. Smaller institutions may lack the bargaining power or technical resources to demand transparency or perform deep due diligence. This lack of insight can delay detection of errors, increase compliance risk and even result in operational disruptions. ... AI/ML models thrive on vast datasets. In banking, where customer data is highly sensitive and tightly regulated, this presents a critical dual-risk challenge: protecting privacy, and preventing or detecting hidden learning, where AI models may inadvertently infer protected or sensitive attributes. One risk is unauthorized or improper use of personal data during model training. Unintended inclusion of restricted datasets can lead to privacy breaches and violations of data protection laws such as the General Data Protection Regulation (GDPR). Another, more subtle, risk is the inadvertent encoding of sensitive attributes such as race or gender through proxy variables, even when such data is not explicitly used.
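The proxy-variable risk described above can be screened for with even simple statistics. The sketch below computes the Pearson correlation between each candidate feature and a protected attribute over a small synthetic dataset; the feature names, values, and the 0.8 cutoff are all invented for illustration, and real fairness reviews use domain-specific tests.

```python
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic records: 'protected' is the sensitive attribute (e.g. a demographic
# flag); 'zip_density' is constructed to track it closely, 'income' is not.
protected = [0, 0, 0, 1, 1, 1, 0, 1]
features = {
    "zip_density": [0.1, 0.2, 0.1, 0.9, 0.8, 0.95, 0.15, 0.85],
    "income":      [55, 70, 62, 58, 71, 66, 60, 64],
}

THRESHOLD = 0.8  # illustrative cutoff
proxies = [name for name, vals in features.items()
           if abs(pearson(vals, protected)) > THRESHOLD]
print(proxies)  # features that may be encoding the protected attribute
```

A flagged feature is not automatically disallowed, but it tells the model risk team exactly where to look before the model ever reaches production.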


Achieving a Secure Cloud with Restructured NHIs

At its core, NHI restructuring means redefining and aligning the non-human identities (NHIs) and linked secrets within your organization’s cloud infrastructure. The aim is a more effective, efficient, and secure system for monitoring and governing NHIs. The process includes a comprehensive review of the existing NHIs, secrets, and their permissions: which secrets are associated with which NHIs, who owns them, how they are used, and which vulnerabilities they may be exposed to. This review lays a strong foundation for a secure cloud environment that harnesses the power of NHI management. ... Why is restructuring NHIs not just a requirement but a strategic move for any digital enterprise? The answer lies in the weaknesses and vulnerabilities that can arise from poorly managed NHIs. Restructuring NHIs is not merely about enhancing cybersecurity; it is about developing a strategic advantage, recognizing that well-governed NHIs provide a compelling line of defense against potential security breaches. By properly managing and restructuring NHIs, organizations can build comprehensive and effective cyber defenses: anticipating potential threats, detecting vulnerabilities, and implementing proactive measures to mitigate risks.


Boards are being told to rethink their role in cybersecurity

The report describes how ransomware attacks have become more targeted and disruptive. Threat actors are no longer just encrypting files. They are exploiting identity systems, help desks, and cloud infrastructure. One example highlighted is the growing use of social engineering against help desk staff, where attackers impersonate employees and convince support teams to reset credentials or modify multifactor authentication settings. By doing so, they bypass technical defenses and gain control of accounts. The report emphasizes that boards should pay attention to how identity is protected inside their organizations. Security teams may face resistance when trying to roll out stronger protections such as phishing-resistant multifactor authentication. Boards, according to the report, are in a position to set the tone and ensure these measures are adopted. ... The third area of focus is how boards can support innovation while ensuring cybersecurity is not left behind. The report argues that strong cybersecurity practices can help a company stand out by building trust with customers and enabling faster adoption of new technology. Boards are urged to encourage a risk-first mindset when new products or services are developed. That means security should be considered early in the process rather than added later. 


How to Overcome Five Key GenAI Deployment Challenges

Data is the lifeblood of artificial intelligence. Fortunately, with generative AI, data does not have to be as perfect and pristine as traditional, transaction-based deterministic systems require. The key is ensuring AI has sufficient context from your business environment to deliver meaningful outputs – not perfect data, but the right data that’s relevant to the target use case. Don’t overcomplicate data preparation. Focus on giving AI systems the key information they need to create reliable and meaningful results. Partners can identify your most important data, help build a practical data foundation that balances quality and access, and guide you in adding more data as the project grows. ... AI initiatives face their heaviest technical challenges when they’re just being launched. From model updates to data inconsistencies, a reliable partner ensures smooth deployment by anticipating and addressing these hurdles. Once a project is off the ground, they actively monitor performance, troubleshooting issues such as model drift and mitigating data security and regulatory compliance challenges to keep the project on track. ... It’s not just technical issues that make GenAI hard. There’s also a human challenge. AI adoption requires buy-in from both business and IT leaders and support from actual end users.
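The "right data, not all data" point can be illustrated with a toy context selector: rank documents by naive keyword overlap with the target use case and keep only the top few. The document names and contents are invented, and real systems use embeddings and rerankers rather than word overlap.

```python
def select_context(documents: dict[str, str], query: str, k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query and keep the top k.

    A toy stand-in for real retrieval: the point is curating the *relevant*
    data for the use case, not shipping everything to the model.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

docs = {
    "returns_policy": "customers may return items within 30 days for a refund",
    "hr_handbook": "employees accrue vacation days each month",
    "shipping_faq": "items ship within 2 days and refund applies to return shipping",
}
print(select_context(docs, "how do customers return items for a refund"))
```

For a returns question, the selector keeps the returns policy and shipping FAQ and drops the HR handbook: less data, but the right data for the use case.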