Showing posts with label cyber crime. Show all posts
Showing posts with label cyber crime. Show all posts

Daily Tech Digest - September 11, 2025


Quote for the day:

"You live longer once you realize that any time spent being unhappy is wasted." -- Ruth E. Renkl



Six hard truths for software development bosses

Everyone behaves differently when the boss is around. Everyone. And you, as a boss, need to realize this. There are two things to realize here. Firstly, when you are present, people will change who they are and what they say. Secondly, you should consider that fact when deciding whether to be in the room. ... Bosses need to realize that what they say, even comments that you might think are flippant and not meant to be taken seriously, will be taken seriously. ... The other side of that coin is that your silence and non-action can have profound effects. Maybe you space out in a meeting and miss a question. The team might think you blew them off and left the great idea hanging. Maybe you forgot to answer an email. Maybe you had bigger fish to fry and you were a bit short and dismissive of an approach by a direct report. Small lapses can be easily misconstrued by your team. ... You are the boss. You have the power to promote, demote, and award raises and bonuses. These powers are important, and people will see you in that light. Even your best attempts at being cordial, friendly, and collegial will not overcome the slight apprehension your authority will engender. Your mood on any given day will be noticed and tracked. ... You can and should have input into technical decisions and design decisions, but your team will want to be the ones driving what direction things take and how things get done. 


AI prompt injection gets real — with macros the latest hidden threat

“Broadly speaking, this threat vector — ‘malicious prompts embedded in macros’ — is yet another prompt injection method,” Roberto Enea, lead data scientist at cybersecurity services firm Fortra, told CSO. “In this specific case, the injection is done inside document macros or VBA [Visual Basic for Applications] scripts and is aimed at AI systems that analyze files.” Enea added: “Typically, the end goal is to mislead the AI system into classifying malware as safe.” ... “Attackers could embed hidden instructions in common business files like emails or Word documents, and when Copilot processed the file, it executed those instructions automatically,” Quentin Rhoads-Herrera, VP of cybersecurity services at Stratascale, explained. In response to the vulnerability, Microsoft recommended patching, restricting Copilot access, stripping hidden metadata from shared files, and enabling its built-in AI security controls. ... “We’ve already seen proof-of-concept attacks where malicious prompts are hidden inside documents, macros, or configuration files to trick AI systems into exfiltrating data or executing unintended actions,” Stratascale’s Rhoads-Herrera commented. “Researchers have also demonstrated how LLMs can be misled through hidden instructions in code comments or metadata, showing the same principle at work.” Rhoads-Herrera added: “While some of these remain research-driven, the techniques are quickly moving into the hands of attackers who are skilled at weaponizing proof-of-concepts.”


Are you really ready for AI? Exposing shadow tools in your organisation

When an organisation doesn’t regulate an approved framework of AI tools in place, its employees will commonly turn to using these applications across everyday actions. By now, everyone is aware of the existence of generative AI assets, whether they are actively using them or not, but without a proper ruleset in place, everyday employee actions can quickly become security nightmares. This can be everything from employees pasting sensitive client information or proprietary code into public generative AI tools to developers downloading promising open-source models from unverified repositories. ... The root cause of turning to shadow AI isn’t malicious intent. Unlike cyber actors, aiming to disrupt and exploit business infrastructure weaknesses for a hefty payout, employees aren’t leaking data outside of your organisation intentionally. AI is simply an accessible, powerful tool that many find exciting. In the absence of clear policies, training and oversight, and the increased pressure of faster, greater delivery, people will naturally seek the most effective support to get the job done. ... Regardless, you cannot protect against what you can’t see. Tools like Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB), which detect unauthorised AI use, must be an essential part of your security monitoring toolkit. Ensuring these alerts connect directly to your SIEM and defining clear processes for escalation and correction are also key for maximum security.


How to error-proof your team’s emergency communications

Hierarchy paralysis occurs when critical information is withheld by junior staff due to the belief that speaking up may undermine the chain of command. Junior operators may notice an anomaly or suspect a procedure is incorrect, but often neglect to disclose their concerns until after a mistake has happened. They may assume their input will be dismissed or even met with backlash due to their position. In many cases, their default stance is to believe that senior staff are acting on insight that they themselves lack. CRM trains employees to follow a structured verbal escalation path during critical incidents. Similar to emergency operations procedures (EOPs), staff are taught to express their concerns using short, direct phrases. This approach helps newer employees focus on the issue itself rather than navigating the interaction’s social aspects — an area that can lead to cognitive overload or delayed action. In such scenarios, CRM recommends the “2-challenge rule”: team members should attempt to communicate an observed issue twice, and if the issue remains unaddressed, escalate it to upper management. ... Strengthening emergency protocols can help eliminate miscommunication between employees and departments. Owners and operators can adopt strategies from other mission-critical industries to reduce human error and improve team responsiveness. While interpersonal issues between departments and individuals in different roles are inevitable, tighter emergency procedures can ensure consistency and more predictable team behavior.


SpamGPT – AI-powered Attack Tool Used By Hackers For Massive Phishing Attack

SpamGPT’s dark-themed user interface provides a comprehensive dashboard for managing criminal campaigns. It includes modules for setting up SMTP/IMAP servers, testing email deliverability, and analyzing campaign results features typically found in Fortune 500 marketing tools but repurposed for cybercrime. The platform gives attackers real-time, agentless monitoring dashboards that provide immediate feedback on email delivery and engagement. ... Attackers no longer need strong writing skills; they can simply prompt the AI to create scam templates for them. The toolkit’s emphasis on scale is equally concerning, as it promises guaranteed inbox delivery to popular providers like Gmail, Outlook, and Microsoft 365 by abusing trusted cloud services such as Amazon AWS and SendGrid to mask its malicious traffic. ... What once required significant technical expertise can now be executed by a single operator with a ready-made toolkit. The rise of such AI-driven platforms signals a new evolution in cybercrime, where automation and intelligent content generation make attacks more scalable, convincing, and difficult to detect. To counter this emerging threat, organizations must harden their email defenses. Enforcing strong email authentication protocols such as DMARC, SPF, and DKIM is a critical first step to make domain spoofing more difficult. Furthermore, enterprises should deploy AI-powered email security solutions capable of detecting the subtle linguistic patterns and technical signatures of AI-generated phishing content.


How attackers weaponize communications networks

The most attractive targets for advanced threat actors are not endpoint devices or individual servers, but the foundational communications networks that connect everything. This includes telecommunications providers, ISPs, and the routing infrastructure that forms the internet’s backbone. These networks are a “target-rich environment” because compromising a single point of entry can grant access to a vast amount of data from a multitude of downstream targets. The primary motivation is overwhelmingly geopolitical. We’re seeing a trend of nation-state actors, such as those behind the Salt Typhoon campaign, moving beyond corporate espionage to a more strategic, long-term intelligence-gathering mission. ... Two recent trends are particularly telling and serve as major warning signs. The first is the sheer scale and persistence of these attacks. ... The second trend is the fusion of technical exploits with AI-powered social engineering. ... A key challenge is the lack of a standardized global approach. Differing regulations around data retention, privacy, and incident reporting can create a patchwork of security requirements that threat actors can easily exploit. For a global espionage campaign, a weak link in one country’s regulatory framework can compromise an entire international communications chain. The goal of international policy should be to establish a baseline of security that includes mandatory incident reporting, a unified approach to patching known vulnerabilities, and a focus on building a collective defense.


AI's free web scraping days may be over, thanks to this new licensing protocol

AI companies are capturing as much content as possible from websites while also extracting information. Now, several heavyweight publishers and tech companies -- Reddit, Yahoo, People, O'Reilly Media, Medium, and Ziff Davis (ZDNET's parent company) -- have developed a response: the Really Simple Licensing (RSL) standard. You can think of RSL as Really Simple Syndication's (RSS) younger, tougher brother. While RSS is about syndication, getting your words, stories, and videos out onto the wider web, RSL says: "If you're an AI crawler gobbling up my content, you don't just get to eat for free anymore." The idea behind RSL is brutally simple. Instead of the old robots.txt file -- which only said, "yes, you can crawl me," or "no, you can't," and which AI companies often ignore -- publishers can now add something new: machine-readable licensing terms. Want an attribution? You can demand it. Want payment every time an AI crawler ingests your work, or even every time it spits out an answer powered by your article? Yep, there's a tag for that too. ... It's a clever fix for a complex problem. As Tim O'Reilly, the O'Reilly Media CEO and one of the RSL initiative's high-profile backers, said: "RSS was critical to the internet's evolution…but today, as AI systems absorb and repurpose that same content without permission or compensation, the rules need to evolve. RSL is that evolution."


AI is changing the game for global trade: Nagendra Bandaru, Wipro

AI is revolutionising global supply chain and trade management by enabling businesses across industries to make real-time, intelligent decisions. This transformative shift is driven by the deployment of AI agents, which dynamically respond to changing tariff regimes, logistics constraints, and demand fluctuations. Moving beyond traditional static models, AI agents are helping create more adaptive and responsive supply chains. ... The strategic focus is also evolving. While cost optimisation remains important, AI is now being leveraged to de-risk operations, anticipate geopolitical disruptions, and ensure continuity. In essence, agentic AI is reshaping supply chains into predictive, adaptive ecosystems that align more closely with the complexities of global trade. ... The next frontier is going to be threefold: first, the rise of agentic AI at scale marks a shift from isolated use cases to enterprise-wide deployment of autonomous agents capable of managing end-to-end trade ecosystems; second, the development of sovereign and domain-specific language models is enabling lightweight, highly contextualised solutions that uphold data sovereignty while delivering robust, enterprise-grade outcomes; and third, the convergence of AI with emerging technologies—including blockchain for provenance and quantum computing for optimisation—is poised to redefine global trade dynamics.


5 challenges every multicloud strategy must address

Transferring AI data among various cloud services and providers also adds complexity — but also significant risks. “Tackling software sprawl, especially as organizations accelerate their adoption of AI, is a top action for CIOs and CTOs,” says Mindy Lieberman, CIO at database platform provider MongoDB. ... A multicloud environment can complicate the management of data sovereignty. Companies need to ensure that data remains in line with the laws and regulations of the specific geographic regions where it is stored and processed. ... Deploying even one cloud service can present cybersecurity risks for an enterprise, so having a strong security program in place is all the more vital for a multicloud environment. The risks stem from expanded attack surfaces, inconsistent security practices among service providers, increased complexity of the IT infrastructure, fragmented visibility, and other factors. IT needs to be able to manage user access to cloud services and detect threats across multiple environments — in many cases without even having a full inventory of cloud services. ... “With greater complexity comes more potential avenues of failure, but also more opportunities for customization and optimization,” Wall says. “Each cloud provider offers unique strengths and weaknesses, which means forward-thinking enterprises must know how to leverage the right services at the right time.”


What Makes Small Businesses’ Data Valuable to Cybercriminals?

Small businesses face unique challenges that make them particularly vulnerable. They often lack dedicated IT or cybersecurity teams, sophisticated systems, and enterprise-grade protections. Budget constraints mean many cannot afford enterprise-level cybersecurity solutions, creating easily exploitable gaps. Common issues include outdated software, reduced security measures, and unpatched systems, which weaken defenses and provide easy entry points for criminals. A significant vulnerability is the lack of employee cybersecurity awareness. ... Small businesses, just like large organizations, collect and store vast amounts of valuable data. Customer data represents a goldmine for cybercriminals, including first and last names, home and email addresses, phone numbers, financial information, and even medical information. Financial records are equally attractive targets, including business financial information, payment details, and credit/debit card payment data. Intellectual property and trade secrets represent valuable proprietary assets that can be sold to competitors or used for corporate espionage. ... Small businesses are undeniably attractive targets for cybercriminals, not because they are financial giants, but because they are perceived as easier to breach due to resource constraints and common vulnerabilities. Their data, from customer PII to financial records and intellectual property, is highly valuable for resale, fraud, and as gateways to larger targets.

Daily Tech Digest - September 10, 2025


Quote for the day:

"Don't be pushed around by the fears in your mind. Be led by the dreams in your heart." -- Roy T. Bennet



Identify and eliminate the silent killers of developer productivity

Code reviews are a critical part of the development lifecycle, designed to improve code quality, share knowledge, and catch bugs before they get to production. But they are a significant bottleneck when not handled with care. ... This isn’t just a matter of lost time; it’s a killer of flow. Developers are forced into a constant state of context switching, losing their focus and momentum. You need to establish clear expectations and protocols for code reviews. ... Poor documentation forces a constant stream of interruptions and meetings that pull senior developers away from their own work to answer questions. It’s a prime example of a process failure that creates a huge amount of hidden, unproductive work. Make documentation a first-class citizen in your development process. ... Then there’s the peer who, perhaps with good intentions, cuts corners. They deliver a feature that “looks like it works” for a project manager who is hungry for a win. The PM, not seeing the technical debt or the flawed logic, approves it and pushes for immediate deployment. This undermines the entire team, as it normalizes a low-quality standard and signals that bad behavior is rewarded. You must step in and resolve these interpersonal and process conflicts. Use one-on-one meetings to address these issues directly and set clear expectations. It’s your job to ensure that the team’s decisions are respected and that the quality bar is not lowered for the sake of speed.


Industry leaders urge strong strategies for post-quantum readiness

Questions remain about the readiness of cryptographic solutions to withstand future quantum attacks. Sinha addressed these concerns directly: "Post quantum cryptography is here. DigiCert has been working along with other cryptography experts. We've been collaborating with the National Institute of Standards and Technology, NIST. Last year...NIST had announced the first three post quantum cryptography algorithms. One for encryption and two for authentication. They are the FIPS 203, 204 and 205 standards." ... Panelists underscored the importance of cryptographic inventory. "Creating the cryptographic inventory is the step zero of beginning any migration. And the complexity of creating...the cryptographic inventory cannot be overstated. It's a...real hard task, but it's really essential. It's the step zero because the inventory gives you the roadmap. How do you begin the journey? How do you start prioritising your systems and your applications?" said Chauhan. Luke Valenta added, "A cryptographic inventory is never going to be complete. So it's all really about the...process, and, and journey of putting that together. At Cloudflare in our migration, we started this inventory and we used that to figure out what are the highest priority systems to transition to post quantum first." Reilly noted, "Just raising the awareness and visibility of all the places where an enterprise uses cryptography - it can be a shock when that depth and breadth of the required transformation becomes apparent..."


Tech Debt: Why Fixing the Foundation Comes Before Building the Castle

Tech debt is about everything that stems from unstable foundations. I had to learn this during our scaling journey. Early on, we made quick decisions to ship features fast. But as we grew, those shortcuts started choking our growth. Companies pay an additional 10 to 20 percent to address tech debt on top of the costs of any project, and we felt every percentage point. The real killer isn't just the extra time – it's the opportunity cost. While your team is fixing yesterday's shortcuts, your competitors are building tomorrow's features. Developers working on the right things can accelerate a company's move into new markets or product areas and help companies differentiate themselves at disproportionate rates. But there's a human cost too. Nobody likes working with a significant handicap and being unproductive day after day. ... Here's where most companies get it wrong. They think innovation means constantly adding new features, launching new products, exploring new markets. But true innovation requires a stable foundation. 30 percent of CIOs surveyed believe that more than 20 percent of their technical budget ostensibly dedicated to new products is diverted to resolving issues related to tech debt. You're essentially pouring money into a bucket with holes in it. I've learned that the most innovative companies aren't necessarily the ones building the flashiest features – they're the ones who've mastered the discipline of maintaining clean, stable systems that can support rapid innovation.


Regulatory bodies close in on AI chatbots as LLMs face greater scrutiny

As regulators roll out online safety laws designed to protect kids from harms associated with porn and social media, a new threat has crept up behind them that could overshadow both. AI chatbots – exemplified by OpenAI’s large language model, ChatGPT – have been around long enough to prove themselves popular, and risky. ... Inman-Grant says schools have “been reporting that 10- and 11-year-old children are spending up to six hours per day on AI companions.” Moreover, it’s not just that they’re befriending LLMs – it’s that they’re often friends with benefits, or “sexualized chatbots.” “We don’t need to see a body count to know that this is the right thing for the companies to do,” says the commissioner. “I don’t want to see Australian lives ruined or lost as a result of the industry’s insatiable need to move fast and break things.” ... Brazilian authorities are pressuring Meta to immediately remove AI chatbots that “simulate child profiles and engage in sexual conversations with users.” According to PPC Land, the bots in question are those created using Meta AI Studio, a tool for developing custom AI chatbots. In mid-August, Brazil’s Attorney General (AGU) issued an “extrajudicial notice” giving Meta 72 hours to remove the erotic kiddie chatbots. It references Article 217-A of Brazil’s Penal Code, which criminalizes sexual acts with minors under 14 years old.The AGU argues that this includes simulated sexual interactions with AI. Under Brazilian law, platforms are liable for harmful content hosted on their services.


The Value-Driven AI Roadmap

The use of value stream management helps organizations map their processes, identifying impediments to delivering software that has value, and using automation to collect metrics that give insights into those processes – and even anticipate where the next hurdles might pop up, Knight said. “I’m going to map the process out, look at where things are and say, hey, I could put an AI agent here, then create a program and a plan to do that in a technology roadmap to line up with it,” he explained. Technology roadmapping involves aligning AI – what the organization is using now and what its needs might be a few years down the road – with business value. Staying on top of technology involves changes being driven by the market, the level of capability maturity within the organization, and finding where the gaps in your technology exist. “Roadmapping is more about helping organizations line up the change of different technologies and how to roll that out,” he said. Finally, Knight pointed out, assessing the skills within your workforce, where training is needed, and how willing the workers are to change, is critical. “It’s about how people in the future, in organizations, will have AI agents that work for them. And you think about it having extra capabilities where I’m going to have this set of skills with these people, but I may have an agent that works for me,” Knight said. “Maybe that agent does paralegal work for me.


The Hidden Cost of Overuse and Misuse of Data Storage

At first glance, storing everything might not seem like a huge problem. But when you factor in rising energy prices and ballooning data volumes, the cracks in that strategy start to show. Over time, outdated storage practices, from legacy systems to underused cloud buckets, can become a surprisingly expensive problem. ... what often gets overlooked are the hidden costs: the backup of low-value data, the power consumption of idle systems, or the surprise charges that come from cloud services which are not being monitored properly. Then there’s the operational cost. Disorganised or poorly labelled data makes access slower and compliance tougher. It also increases security risks, especially if sensitive information is spread across uncontrolled environments. The longer these issues go unchecked, the more danger there is of a snowball effect. ... Cutting storage costs is an obvious benefit but it’s far from the only one. A smarter, edge-driven strategy helps businesses build a more efficient, resilient, and sustainable digital infrastructure ... By processing and filtering data locally, organisations reduce the energy demands of transmitting and storing large volumes centrally, supporting both carbon reduction targets and lower utility costs. As sustainability reporting becomes more critical, this can also help meet Scope 2 emissions goals.


9 cloud strategy questions every IT leader must answer

Cloud platforms are increasingly procured by non-IT teams. Establishing a unified decision framework that brings together expertise from across the enterprise to guide the cloud lifecycle, from selection to sunsetting, is key. Without this, “organizations face fragmented architectures, redundant tools, and compliance gaps,” says CIO Mentor’s Topinka ... Working with multiple cloud partners can offer negotiating leverage and access to best-of-breed services, but it also compounds complexity and requires a range of expertise. ... “The maturity and advancement of cloud solutions depend on the team’s culture and their ability to operate and innovate within the cloud,” Hackett Group’s Nathan adds. ... “Clear visibility into consumption patterns, resource allocation, and usage metrics is essential,” says Nathan, noting that cloud financial management practices help maintain accountability and prevent cost overruns, particularly in multicloud environments. Allocating cloud costs directly to business units or product teams also increases transparency and encourages more efficient use of cloud resources, according to Kocherlakota. ... Cloud adoption without attendant legacy modernization can backfire, S&P Global’s Kocherlakota says. “Simply using the cloud as a data center while maintaining legacy applications can lead to cost creep,” he says. “Investing in transforming legacy systems optimizes infrastructure and boosts efficiency.”


Has Cloud Security Reached Its Breaking Point?

The comfortable assumptions that have guided cloud security for the past decade are crumbling. Supply chain attacks cascade through thousands of projects simultaneously. ... The GitHub Actions compromise (CVE-2025-30066) represents an evolutionary leap in supply chain attacks. What started as a single compromised Personal Access Token cascaded through 23,000+ repositories by exploiting dependency chains. Attackers retroactively modified version tags and implemented memory dumping to extract AWS keys, GitHub tokens and RSA keys from CI/CD logs. ... 89 percent of enterprises run multi-cloud environments, but only 23 percent have full visibility across their infrastructure. This creates a perfect storm where 70 percent of attacks span three or more cloud surfaces simultaneously ... While experts predict quantum computers will break current encryption by 2027 to 2030, the 'harvest now, decrypt later' attacks are already underway. Only 24 percent of organizations have started post-quantum cryptography preparation, leaving millions of encrypted communications vulnerable to future decryption. ... The evidence is clear that incremental improvements cannot address the mathematical realities we face. Security already struggled to scale for cloud workloads without core organizational and process changes; with AI adoption accelerating, it is impossible unless enterprises address foundational gaps.


Probably Secure: A Look at the Security Concerns of Deterministic vs Probabilistic Systems

From a security standpoint, there are places where probability belongs, and places where it absolutely does not. Identity authentication, transaction authorization, cryptographic key validation, and agent permissions must be rooted in deterministic validation, not statistical confidence. Generative AI, while powerful, can easily mislead developers, suggesting insecure code, leaking secrets through logs, or introducing unsafe patterns without clear visibility. Even well-structured retrieval-augmented generation (RAG) systems have a fundamental limitation: you can’t “tune” them for security beyond scrutinizing all input and output, leaving room for mistakes that attackers can exploit. Your tooling needs to treat probabilistic intelligence as a supplement rather than a trust anchor, reinforcing every critical security decision with deterministic, provable checks. ... Probabilistic tools are powerful for risk detection, prioritization, and context enrichment. Generative AI may accelerate development, but without deterministic guardrails, it can also accelerate risk. Teams need to focus on closing this gap by combining the strengths of AI-driven detection with hardened, verifiable validation for every secret, token, and non-human identity. This layered model ensures that organizations can safely leverage AI-driven insights while preserving a foundation of cryptographic certainty.


What do cybercriminals know about the retail sector that we don’t?

“Stolen customer data is valuable to fraudsters. So, retail is particularly vulnerable because retailers store large quantities of consumer data.” With so much to lose, retailers should be taking more care to protect themselves, but that is no easy feat. The scale of their operations means their businesses have many moving parts. Their supply chains are long and complex, involving an intricate and ever-changing network of suppliers. ... While external cybersecurity advisors are often called in after a breach has occurred, it is also wise to have them on board as a pre-emptive measure, as Kirsten Whitfield, co-head of law firm Fieldfisher’s cyber breach team in London, explains “Get a forensics provider on board to help close down an incident, and engage them in advance, as they could stress test the systems against common attack vectors from their knowledge of hacking groups,” she says. “Even engage a professional ransomware negotiator who can profile attackers.” On the technical front, the biggest challenge is to keep pace with the growth in AI. Hackers are using it, so retailers need to invest in defensive AI to fight fire with fire. “Investing as regulators expect you to will not necessarily mean you are iron clad,” says Whitfield. “Hackers are increasingly sophisticated and use tools like AI, so it is a good idea to invest in it, too, though you don’t want to rush into buying AI that you think will protect you but has not been fully understood.”

Daily Tech Digest - September 01, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain


The AI-powered cyberattack era is here

In the deepfake era, the crime was unprecedented and exotic. In the genAI era, it’s a banality.
You just need a three-second recording of a person talking, according to McAfee experts. With that snippet, you can create convincing fake messages or phone calls. If someone sounds like a trusted person, people are ready to hand over their cash or secrets. In 2024, the company’s global study found one in four people had suffered or knew someone hit by an AI voice scam. ... One challenge in the field of AI-enabled attacks — which is to say, attacks that didn’t exist or weren’t possible before genAI — is how quickly everything changes. Take AI browsers, for example. This new category of web browser includes Perplexity Comet, Dia (by The Browser Company), Fellou, Opera Neon, Sigma AI Browser, Arc Max, Microsoft Edge Copilot, Brave Leo, Wave Browser Pro, SigmaOS, Opera Aria, Genspark AI Browser, Poly, Quetta Browser, Browserbase, Phew AI Tab, and the upcoming OpenAI browser. ... The truth is that most attacks are still the old-fashioned kind, performed without help from AI. And most still involve human error. So all the standard guidelines and best practices apply. Companies should update software regularly, require multifactor authentication for all logins, and give employees training about fake emails and malicious links. Outside experts should run penetration tests twice a year. Making regular offline backups can save thousands after AI-based ransomware attacks.


How to Make Data Work for What’s Next

Too often, companies begin by auditing the data they already have. A better question is, “What outcome are we trying to drive?” Whether it’s scaling operations, improving retention, or guiding smarter investments, the path forward starts with understanding where you want to go. ... Not everything needs to be measured. The goal is to curate the data, pulling in what’s most useful rather than everything that’s available. Focus on what’s going to help people make decisions in real time. Some metrics help you look ahead, while others explain what already happened. A good mix can be helpful, but only if it still aligns with the outcome you’re tracking. This shift can feel unfamiliar. Many teams are used to starting from their existing systems–what’s already tracked, what can be pulled from a dashboard–and working backward. But that often leads to noise or gaps. Managing too much data isn’t just overwhelming; it’s also costly. Teams spend time storing, maintaining, and cleaning data that often doesn’t lead to better decisions. ... Trust is built in small moments. When early reports reflect what people expect based on their lived experience, they begin to rely on the system. ... A stronger data culture isn’t just about systems. It’s about building skills and helping people see how their work connects to outcomes. When data reinforces what people already know and shows up in context—visually, interactively, and on time—it becomes a tool they trust, use, and want to leverage.


Cybercrime increasingly moving beyond financial gains

“We are very redundant when talking about cybercrime, because we always associate it with economic motivations,” says Hervé Lambert, global consumer operations manager at Panda Security. “But they are not the only reasons out there.” Lambert also refers to political and military cyber espionage, “states or actors linked to different governments” that seek to infiltrate to obtain strategic information. It also includes cyberwarfare, “attacks designed to do damage, disable, render important systems useless. There is no lucrative purpose, but to enhance or win a war or facilitate sabotage.” ... “These very different motivations are not mutually exclusive, as they seek different objectives,” adds Alonso García. “We can find them as the sole motivation or they complement each other, making cyberattacks more elaborate and complex to analyze.” In other words, a person or group may have political interests but ask for a ransom to cover up their actions or seek funding; or in a context of turmoil between countries, take advantage to launch attacks that seek to profit. ... But the strategy to be followed will have to be reoriented or reinforced if, for example, we are working in a critical sector from a geopolitical point of view, in which, among other things, disinformation will have to be taken into account. 
"The old software world is gone, giving way to a new set of truths being defined by AI. To navigate the changes, technical leaders should carry out rigorous validation on AI assistants. Managers should establish formal AI governance policies and invest in training for emerging technologies. Security professionals should update their threat models to include AI-specific risks and leverage SBOMs [Software Bill of Materials] as a strategic asset for risk management to achieve true scale application security." ... "Without SBOMs, we're flying blind. With them, we're finally turning the lights on in the supply chain cockpit," said Helen Oakley, Director of Secure Software Supply Chains and Secure Development at SAP. "AI coding assistants are like interns with rocket fuel. They accelerate everything, including errors, if you don't set boundaries." ... "For organizations adopting third-party AI tools, it's also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption. When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries, and applying security controls that account for external dependencies."


How Digital Twins Transform Drug Development Processes

A key technological advancement emerging from these hubs is the application of digital twins in pharmaceutical research. Initially used in engineering and manufacturing sectors, digital twins in the pharmaceutical industry are virtual models of human systems that replicate biological processes. These replicas are built using vast volumes of biological, clinical and genomic data, enabling researchers to test how different patient profiles might respond to specific drugs without exposing individuals to experimental therapies. The implications of this approach are transformative. Through digital twins, pharmaceutical scientists can simulate the progression of diseases, predict Adverse Drug Reactions (ADRs) and model patient diversity across age, gender, genetic traits and comorbidities. This ability to run in-silico trials, which are clinical trials conducted through virtual simulations, reduces the cost, duration and risk associated with traditional clinical testing. ... AI is transforming every clinical development phase worldwide, from trial design to execution and outcome analysis. According to industry estimates, AI is expected to support 60–70 per cent of clinical trials by 2030, potentially saving $20–30 billion annually. While digital twins represent just one facet of this broader AI integration, their capacity to virtually assess drug safety and efficacy could significantly accelerate the journey from discovery to patient delivery.


Break the Link: The Fastest Way to Contain a Cyber-Attack

Hardware-enforced network isolation gives operators the ability to physically disconnect servers, storage and network segments on demand, using secure, out-of-band commands that sit entirely outside the attack surface. The simplicity here is the ultimate strength: if malware can’t make contact, it can’t cause damage. If a breach does happen? You can trigger isolation in milliseconds, stopping the spread before it locks systems, exfiltrates data or drains accounts. Unlike software-only isolation, which depends on the very systems it’s defending, hardware isolation can’t be tampered with remotely. No IP address, no exploitable code, just a clean physical break. ... Hardware isolation cuts the response to milliseconds, preserving both data integrity and regulatory compliance. It stops an incident at the source, shutting it down before operations are disrupted. The power of isolation is especially effective in high-stakes environments where speed and certainty matter. In colocation facilities, automated isolation prevents cross-tenant contamination by cutting off a compromised tenant before the threat can spread. At disaster recovery sites, it enables network segments to remain fully offline until they are needed, improving security and efficiency. In AI-heavy workloads, hardware isolation prevents model tampering and data exfiltration. In backup environments, selective disconnection ensures ransomware cannot encrypt or corrupt critical archives.


Prioritize these 4 processes to balance innovation and responsibility in banking model risk management

As AI/ML capabilities often require specialized software, datasets and computational tools, many financial institution—especially smaller ones—turn to third-party vendors. While this can accelerate adoption, it also introduces critical vulnerabilities related to oversight, accountability and systemic dependence. Third-party models often come with limited visibility into how they were developed, what data was used and how they behave under stress. Smaller institutions may lack the bargaining power or technical resources to demand transparency or perform deep due diligence. This lack of insight can delay detection of errors, increase compliance risk and even result in operational disruptions. ... AI/ML models thrive on vast datasets. In banking, where customer data is highly sensitive and tightly regulated, this presents a critical dual-risk challenge: Protecting privacy and preventing/detecting hidden learning where AI models may inadvertently infer protected/sensitive attributes. One risk is unauthorized or improper use of personal data during model training. Unintended inclusion of restricted data sets can lead to privacy breaches and violations of data protection laws such as the General Data Protection Regulation (GDPR). Another, more subtle, risk is the inadvertent encoding of sensitive attributes such as race or gender through proxy variables, even when such data is not explicitly used.


Achieving a Secure Cloud with Restructured NHIs

At its core, NHIs restructuring involves the process of redefining and aligning the various identities and linked secrets within your organization’s cloud infrastructure. The aim is to have a more effective, efficient, and secure system capable of monitoring and governing NHIs. This restructuring process includes a comprehensive review of the existing NHIs, secrets, and their permissions. It also involves determining which secrets are associated with which NHIs, who owns them, how they are used, and which vulnerabilities they may be exposed to. By performing this activity, a strong foundation can be laid for establishing a secure cloud environment that harnesses the power of NHIs management. ... Why is the restructuring of NHIs not just a requirement but a strategic move for any digital enterprise? The answer lies in the potential weaknesses and vulnerabilities that can arise from poorly managed NHIs. Restructuring NHIs is not merely about enhancing cybersecurity but developing a strategic advantage. This requires realizing the significance of NHIs in providing a compelling line of defense against potential security breaches. By properly managing and restructuring NHIs, organizations can build comprehensive, effective, and potent cyber defenses. It enables them to anticipate potential threats, detect vulnerabilities, and implement proactive measures to mitigate risks.


Boards are being told to rethink their role in cybersecurity

The report describes how ransomware attacks have become more targeted and disruptive. Threat actors are no longer just encrypting files. They are exploiting identity systems, help desks, and cloud infrastructure. One example highlighted is the growing use of social engineering against help desk staff, where attackers impersonate employees and convince support teams to reset credentials or modify multifactor authentication settings. By doing so, they bypass technical defenses and gain control of accounts. The report emphasizes that boards should pay attention to how identity is protected inside their organizations. Security teams may face resistance when trying to roll out stronger protections such as phishing-resistant multifactor authentication. Boards, according to the report, are in a position to set the tone and ensure these measures are adopted. ... The third area of focus is how boards can support innovation while ensuring cybersecurity is not left behind. The report argues that strong cybersecurity practices can help a company stand out by building trust with customers and enabling faster adoption of new technology. Boards are urged to encourage a risk-first mindset when new products or services are developed. That means security should be considered early in the process rather than added later. 


How to Overcome Five Key GenAI Deployment Challenges

Data is the lifeblood of artificial intelligence. Fortunately, with generative AI, data does not have to be perfect and pristine compared to the requirements for traditional, transaction-based deterministic systems. The key is ensuring AI has sufficient context from your business environment to deliver meaningful outputs – not perfect data, but the right data that’s relevant to the target use case. Don’t make the mistake of making data preparation too complex. Focus on giving AI systems the key information they need to create reliable and meaningful results. Partners can find your most important data. They help build a practical data base that balances quality and access. They also guide you to add more data as the project grows. ... AI initiatives are often rife with the most technical challenges when they’re just being launched. From model updates to data inconsistencies, a reliable partner ensures smooth deployment by anticipating and addressing these hurdles. Once these projects have gotten off the ground, they actively monitor performance while troubleshooting issues like AI models drifting or mitigating data security and regulatory compliance challenges to keep the project on track. ... It’s not just technical issues that make GenAI hard. There’s also a human challenge. AI adoption requires buy-in among both business and IT leaders and support from actual end users. 

Daily Tech Digest - August 28, 2025


Quote for the day:

“Rarely have I seen a situation where doing less than the other guy is a good strategy.” -- Jimmy Spithill


Emerging Infrastructure Transformations in AI Adoption

Balanced scaling of infrastructure storage and compute clusters optimizes resource use in the face of emerging elastic use cases. Throughput, latency, scalability, and resiliency are key metrics for measuring storage performance. Scaling storage with demand for AI solutions without contributing to technical debt is a careful balance to contemplate for infrastructure transformations. ... Data governance in AI extends beyond traditional access control. ML workflows have additional governance tasks such as lineage tracking, role-based permissions for model modification, and policy enforcement over how data is labeled, versioned, and reused. This includes dataset documentation, drift tracking, and LLM-specific controls over prompt inputs and generated outputs. Governance frameworks that support continuous learning cycles are more valuable: Every inference and user correction can become training data. ... As models become more stateful and retain context over time, pipelines must support real-time, memory-intensive operations. Even Apache Spark documentation hints at future support for stateful algorithms (models that maintain internal memory of past interactions), reflecting a broader industry trend. AI workflows are moving toward stateful "agent" models that can handle ongoing, contextual tasks rather than stateless, single-pass processing.


The rise of the creative cybercriminal: Leveraging data visibility to combat them

In response to the evolving cyber threats faced by organisations and governments, a comprehensive approach that addresses both the human factor and their IT systems is essential. Employee training in cybersecurity best practices, such as adopting a zero-trust approach and maintaining heightened vigilance against potential threats, like social engineering attacks, are crucial. Similarly, cybersecurity analysts and Security Operations Centres (SOCs) play a pivotal role by utilising Security Information and Event Management (SIEM) solutions to continuously monitor IT systems, identifying potential threats, and accelerating their investigation and response times. Given that these tasks can be labor-intensive, integrating a modern SIEM solution that harnesses generative AI (GenAI) is essential. ... By integrating GenAI's data processing capabilities with an advanced search platform, cybersecurity teams can search at scale across vast amounts of data, including unstructured data. This approach supports critical functions such as monitoring, compliance, threat detection, prevention, and incident response. With full-stack observability, or in other words, complete visibility across every layer of their technology stack, security teams can gain access to content-aware insights, and the platform can swiftly flag any suspicious activity.


How to secure digital trust amid deepfakes and AI

To ensure resilience in the shifting cybersecurity landscape, organizations should proactively adopt a hybrid fraud-prevention approach, strategically integrating AI solutions with traditional security measures to build robust, layered defenses. Ultimately, a comprehensive, adaptive, and collaborative security framework is essential for enterprises to effectively safeguard against increasingly sophisticated cyberattacks – and there are several preemptive strategies organizations must leverage to counteract threats and strengthen their security posture. ... Fraudsters are adaptive, usually leveraging both advanced methods (deepfakes and synthetic identities) and simpler techniques (password spraying and phishing) to exploit vulnerabilities. By combining AI with tools like strong and continuous authentication, behavioral analytics, and ongoing user education, organizations can build a more resilient defense system. This hybrid approach ensures that no single point of failure exposes the entire system, and that both human and machine vulnerabilities are addressed. Recent threats rely on social engineering to obtain credentials, bypass authentication, and steal sensitive data, and it is evolving along with AI. Utilizing real-time verification techniques, such as liveness detection, can reliably distinguish between legitimate users and deepfake impersonators. 


Why Generative AI's Future Isn't in the Cloud

Instead of telling customers they needed to bring their data to the AI in the cloud, we decided to bring AI to the data where it's created or resides, locally on-premises or at the edge. We flipped the model by bringing intelligence to the edge, making it self-contained, secure and ready to operate with zero dependency on the cloud. That's not just a performance advantage in terms of latency, but in defense and sensitive use cases, it's a requirement. ... The cloud has driven incredible innovation, but it's created a monoculture in how we think about deploying AI. When your entire stack depends on centralized compute and constant connectivity, you're inherently vulnerable to outages, latency, bandwidth constraints, and, in defense scenarios, active adversary disruption. The blind spot is that this fragility is invisible until it fails, and by then the cost of that failure can be enormous. We're proving that edge-first AI isn't just a defense-sector niche, it's a resilience model every enterprise should be thinking about. ... The line between commercial and military use of AI is blurring fast. As a company operating in this space, how do you navigate the dual-use nature of your tech responsibly? We consider ourselves a dual-use defense technology company and we also have enterprise customers. Being dual use actually helps us build better products for the military because our products are also tested and validated by commercial customers and partners. 


Why DEI Won't Die: The Benefits of a Diverse IT Workforce

For technology teams, diversity is a strategic imperative that drives better business outcomes. In IT, diverse leadership teams generate 19% more revenue from innovation, solve complex problems faster, and design products that better serve global markets — driving stronger adoption, retention of top talent, and a sustained competitive edge. Zoya Schaller, director of cybersecurity compliance at Keeper Security, says that when a team brings together people with different life experiences, they naturally approach challenges from unique perspectives. ... Common missteps, according to Ellis, include over-focusing on meeting diversity hiring targets without addressing the retention, development, and advancement of underrepresented technologists. "Crafting overly broad or tokenistic job descriptions can fail to resonate with specific tech talent communities," she says. "Don't treat DEI as an HR-only initiative but rather embed it into engineering and leadership accountability." Schaller cautions that bias often shows up in subtle ways — how résumés are reviewed, who is selected for interviews, or even what it means to be a "culture fit." ... Leaders should be active champions of inclusivity, as it is an ongoing commitment that requires consistent action and reinforcement from the top.


The Future of Software Is Not Just Faster Code - It's Smarter Organizations

Using AI effectively doesn't just mean handing over tasks. It requires developers to work alongside AI tools in a more thoughtful way — understanding how to write structured prompts, evaluate AI-generated results and iterate them based on context. This partnership is being pushed even further with agentic AI. Agentic systems can break a goal into smaller steps, decide the best order to tackle them, tap into multiple tools or models, and adapt in real time without constant human direction. For developers, this means AI can do more than suggesting code. It can act like a junior teammate who can design, implement, test and refine features on its own. ... But while these tools are powerful, they're not foolproof. Like other AI applications, their value depends on how well they're implemented, tuned and interpreted. That's where AI-literate developers come in. It's not enough to simply plug in a tool and expect it to catch every threat. Developers need to understand how to fine-tune these systems to their specific environments — configuring scanning parameters to align with their architecture, training models to recognize application-specific risks and adjusting thresholds to reduce noise without missing critical issues. ... However, the real challenge isn't just finding AI talent, its reorganizing teams to get the most out of AI's capabilities. 


Industrial Copilots: From Assistants to Essential Team Members

Behind the scenes, industrial copilots are supported by a technical stack that includes predictive analytics, real-time data integration, and cross-platform interoperability. These assistants do more than just respond — they help automate code generation, validate engineering logic, and reduce the burden of repetitive tasks. In doing so, they enable faster deployment of production systems while improving the quality and efficiency of engineering work. Despite these advances, several challenges remain. Data remains the bedrock of effective copilots, yet many workers on the shop floor are still not accustomed to working with data directly. Upskilling and improving data literacy among frontline staff is critical. Additionally, industrial companies are learning that while not all problems need AI, AI absolutely needs high-quality data to function well. An important lesson shared during Siemens’ AI with Purpose Summit was the importance of a data classification framework. To ensure copilots have access to usable data without risking intellectual property or compliance violations, one company adopted a color-coded approach: white for synthetic data (freely usable), green for uncritical data (approval required), yellow for sensitive information, and red for internal IP (restricted to internal use only). 


Will the future be Consolidated Platforms or Expanding Niches?

Ramprakash Ramamoorthy believes enterprise SaaS is already making moves in consolidation. “The initial stage of a hype cycle includes features disguised as products and products disguised as companies. Well we are past that, many of these organizations that delivered a single product have to go through either vertical integration or sell out. In fact a lot of companies are mimicking those single-product features natively on large platforms.” Ramamoorthy says he also feels AI model providers will develop into enterprise SaaS organizations themselves as they continue to capture the value proposition of user data and usage signals for SaaS providers. This is why Zoho built their own AI backbone—to keep pace with competitive offerings and to maintain independence. On the subject of vibe-code and low-code tools, Ramamoorthy seems quite clear-eyed about their suitability for mass-market production. “Vibe-code can accelerate you from 0 to 1 faster, but particularly with the increase in governance and privacy, you need additional rigor. For example, in India, we have started to see compliance as a framework.” In terms of the best generative tools today, he observes “Anytime I see a UI or content generated by AI—I can immediately recognize the quality that is just not there yet.”


Beyond the Prompt: Building Trustworthy Agent Systems

While a basic LLM call responds statically to a single prompt, an agent system plans. It breaks down a high-level goal into subtasks, decides on tools or data needed, executes steps, evaluates outcomes, and iterates – potentially over long timeframes and with autonomy. This dynamism unlocks immense potential but can introduce new layers of complexity and security risk. ... Technology controls are vital but not comprehensive. That’s because the most sophisticated agent system can be undermined by human error or manipulation. This is where principles of human risk management become critical. Humans are often the weakest link. How does this play out with agents? Agents should operate with clear visibility. Log every step, every decision point, every data access. Build dashboards showing the agent’s “thought process” and actions. Enable safe interruption points. Humans must be able to audit, understand, and stop the agent when necessary. ... The allure of agentic AI is undeniable. The promise of automating complex workflows, unlocking insights, and boosting productivity is real. But realizing this potential without introducing unacceptable risk requires moving beyond experimentation into disciplined engineering. It means architecting systems with context, security, and human oversight at their core.


Where security, DevOps, and data science finally meet on AI strategy

The key is to define isolation requirements upfront and then optimize aggressively within those constraints. Make the business trade-offs explicit and measurable. When teams try to optimize first and secure second, they usually have to redo everything. However, when they establish their security boundaries, the optimization work becomes more focused and effective. ... The intersection with cost controls is immediate. You need visibility into whether your GPU resources are being utilized or just sitting idle. We’ve seen companies waste a significant portion of their budget on GPUs because they’ve never been appropriately monitored or because they are only utilized for short bursts, which makes it complex to optimize. ... Observability also helps you understand the difference between training workloads running on 100% utilization and inference workloads, where buffer capacity is needed for response times. ... From a security perspective, the very reason teams can get away with hoarding is the reason there may be security concerns. AI initiatives are often extremely high priority, where the ends justify the means. This often makes cost control an afterthought, and the same dynamic can also cause other enterprise controls to be more lax as innovation and time to market dominate.

Daily Tech Digest - August 01, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni


It’s time to sound the alarm on water sector cybersecurity

The U.S. Environmental Protection Agency (EPA) identified 97 drinking water systems serving approximately 26.6 million users as having either critical or high-risk cybersecurity vulnerabilities. Water utility leaders are especially worried about ransomware, malware, and phishing attacks. American Water, the largest water and wastewater utility company in the US, experienced a cybersecurity incident that forced the company to shut down some of its systems. That came shortly after a similar incident forced Arkansas City’s water treatment facility to temporarily switch to manual operations. These attacks are not limited to the US. Recently, UK-based Southern Water admitted that criminals had breached its IT systems. In Denmark, hackers targeted the consumer data services of water provider Fanø Vand, resulting in data theft and operational hijack. These incidents show that this is a global risk, and authorities believe they may be the work of foreign actors. ... The EU is taking a serious approach to cybersecurity, with stricter enforcement and long-term investment in essential services. Through the NIS2 Directive, member states are required to follow security standards, report incidents, and coordinate national oversight. These steps are designed to help utilities strengthen their defenses and improve resilience.


AI and the Democratization of Cybercrime

Cheap, off-the-shelf language models are erasing the technical hurdles. FraudGPT and WormGPT subscriptions start at roughly $200 per month, promising ‘undetectable’ malware, flawless spear-phishing prose, and step-by-step exploit guidance. An aspiring criminal no longer needs the technical knowledge to tweak GitHub proof-of-concepts. They paste a prompt such as ‘Write a PowerShell loader that evades EDR’ and receive usable code in seconds. ... Researchers pushed the envelope further with ReaperAI and AutoAttacker, proof-of-concept ‘agentic’ systems that chain LLM reasoning with vulnerability scanners and exploit libraries. In controlled tests, they breached outdated Web servers, deployed ransomware, and negotiated payment over Tor, without human input once launched. Fully automated cyberattacks are just around the corner. ... Core defensive practice now revolves around four themes. First, reducing the attack surface through relentless automated patching. Second, assuming breach via Zero-Trust segmentation and immutable off-line backups that neuter double-extortion leverage. Third, hardening identity with universal multi-factor authentication (MFA) and phishing-resistant authentication. Finally, exercising incident-response plans with table-top and red-team drills that mirror AI-assisted adversaries.


Digital Twins and AI: Powering the future of creativity at Nestlé

NVIDIA Omniverse on Azure allows for building and seamlessly integrating advanced simulation and generative AI into existing 3D workflows. This cloud-based platform includes APIs and services enabling developers to easily integrate OpenUSD, as well as other sensor and rendering applications. OpenUSD’s capabilities accelerate workflows, teams, and projects when creating 3D assets and environments for large-scale, AI-enabled virtual worlds. The Omniverse Development Workstation on Azure accelerates the process of building Omniverse apps and tools, removing the time and complexity of configuring individual software packages and GPU drivers. With NVIDIA Omniverse on Azure and OpenUSD, marketing teams can create ultra-realistic 3D product previews and environments so that customers can explore a retailer’s products in an engaging and informative way. The platform also can deliver immersive augmented and virtual reality experiences for customers, such as virtually test-driving a car or seeing how new furniture pieces would look in an existing space. For retailers, NVIDIA Omniverse can help create digital twins of stores or in-store displays to simulate and evaluate different layouts to optimize how customers interact with them. 


Why data deletion – not retention – is the next big cyber defence

Emerging data privacy regulations, coupled with escalating cybersecurity risks, are flipping the script. Organisations can no longer afford to treat deletion as an afterthought. From compliance violations to breach fallout, retaining data beyond its lifecycle has a real downside. Many organisations still don’t have a reliable, scalable way to delete data. Policies may exist on paper, but consistent execution across environments, from cloud storage to aging legacy systems, is rare. That gap is no longer sustainable. In fact, failing to delete data when legally required is quickly becoming a regulatory, security, and reputational risk. ... From a cybersecurity perspective, every byte of retained data is a potential breach exposure. In many recent cases, post-incident investigations have uncovered massive amounts of sensitive data that should have been deleted, turning routine breaches into high-stakes regulatory events. But beyond the legal risks, excess data carries hidden operational costs. ... Most CISOs, privacy officers, and IT leaders understand the risks. But deletion is difficult to operationalise. Data lives across multiple systems, formats, and departments. Some repositories are outdated or no longer supported. Others are siloed or partially controlled by third parties. And in many cases, existing tools lack the integration or governance controls needed to automate deletion at scale.


IT Strategies to Navigate the Ever-Changing Digital Workspace

IT teams need to look for flexible, agnostic workspace management solutions that can respond to whether endpoints are running Windows 11, MacOS, ChromeOS, virtual desktops, or cloud PCs. They want to future proof their endpoint investments, knowing that their workspace management must be highly adaptable as business requirements change. To support this disparate endpoint estate, DEX solutions have come to the forefront as they have evolved from a one-off tool for monitoring employee experience to an integrated platform by which administrators can manage endpoints, security tools, and performance remediation. ... In the composite environment IT has the challenge of securing workflows across the endpoint estate, regardless of delivery platform, and doing so without interfering with the employee experience. As the number of both installed and SaaS applications grows, IT teams can leverage automation to streamline patching and other security updates and to monitor SaaS credentials effectively. Automation becomes invaluable in operational efficiency across an increasingly complex application landscape. Another security challenge is the existence of ‘Shadow SaaS’ in which employees, like shadow IT/AI, use unsanctioned tools they believe will help productivity.


Who’s Really Behind the Mask? Combatting Identity Fraud

Effective identity investigations start with asking the right questions and not merely responding to alerts. Security teams need to look deeper: Is this login location normal for the user? Is the device consistent with their normal configuration? Is the action standard for their role? Are there anomalies between systems? These questions create necessary context, enabling defenders to differentiate between standard deviations and hostile activity. Without that investigative attitude, security teams might pursue false positives or overlook actual threats. By structuring identity events with focused, behavior-based questions, analysts can get to the heart of the activity and react with accuracy and confidence. ... Identity theft often hides in plain sight, flourishing in the ordinary gaps between expected and actual behavior. Its deception lies in normalcy, where activity at the surface appears authentic but deviates quietly from established patterns. That’s why trust in a multi-source approach to truth is essential. Connecting insights from network traffic, authentication logs, application access, email interactions, and external integrations can help teams build a context-aware, layered picture of every user. This blended view helps uncover subtle discrepancies, confirm anomalies, and shed light on threats that routine detection will otherwise overlook, minimizing false positives and revealing actual risks.


The hidden crisis behind AI’s promise: Why data quality became an afterthought

Addressing AI data quality requires more human involvement, not less. Organizations need data stewardship frameworks that include subject matter experts who understand not just technical data structures, but business context and implications. These data stewards can identify subtle but crucial distinctions that pure technical analysis might miss. In educational technology, for example, combining parents, teachers, and students into a single “users” category for analysis would produce meaningless insights. Someone with domain expertise knows these groups serve fundamentally different roles and should be analyzed separately. ... Despite the industry’s excitement about new AI model releases, a more disciplined approach focused on clearly defined use cases rather than maximum data exposure proves more effective. Instead of opting for more data to be shared with AI, sticking to the basics and thinking about product concepts produces better results. You don’t want to just throw a lot of good stuff in a can and assume that something good will happen. ... Future AI systems will need “data entitlement” capabilities that automatically understand and respect access controls and privacy requirements. This goes beyond current approaches that require manual configuration of data permissions for each AI application.


Agentic AI is reshaping the API landscape

With agentic AI, APIs evolve from passive endpoints into active dialogue partners. They need to handle more than single, fixed transactions. Instead, APIs must support iterative engagement, where agents adjust their calls based on prior results and current context. This leads to more flexible communication models. For instance, an agent might begin by querying one API to gather user data, process it internally, and then call another endpoint to trigger a workflow. APIs in such environments must be reliable, context-aware and be able to handle higher levels of interaction – including unexpected sequences of calls. One of the most powerful capabilities of agentic AI is its ability to coordinate complex workflows across multiple APIs. Agents can manage chains of requests, evaluate priorities, handle exceptions, and optimise processes in real time. ... Agentic AI is already setting the stage for more responsive, autonomous API ecosystems. Get ready for systems that can foresee workload shifts, self-tune performance, and coordinate across services without waiting for any command from a human. Soon, agentic AI will enable seamless collaboration between multiple AI systems—each managing its own workflow, yet contributing to larger, unified business goals. To support this evolution, APIs themselves must transform. 


Removing Technical Debt Supports Cybersecurity and Incident Response for SMBs

Technical debt is a business’s running tally of aging or defunct software and systems. While workarounds can keep the lights on, they come with risks. For instance, there are operational challenges and expenses associated with managing older systems. Additionally, necessary expenses can accumulate if technical debt is allowed to get out of control, ballooning the costs of a proper fix. While eliminating technical debt is challenging, it’s fundamentally an investment in a business’s future security. Excess technical debt doesn’t just lead to operational inefficiencies. It also creates cybersecurity weaknesses that inhibit threat detection and response. ... “As threats evolve, technical debt becomes a roadblock,” says Jeff Olson, director of software-defined WAN product and technical marketing at Aruba, a Hewlett Packard Enterprise company. “Security protocols and standards have advanced to address common threats, but if you have older technology, you’re at risk until you can upgrade your devices.” Upgrades can prove challenging, however. ... The first step to reducing technical debt is to act now, Olson says. “Sweating it out” for another two or three years will only make things worse. Waiting also stymies innovation, as reducing technical debt can help SMBs take advantage of advanced technologies such as artificial intelligence.


Third-party risk is everyone’s problem: What CISOs need to know now

The best CISOs now operate less like technical gatekeepers and more like orchestral conductors, aligning procurement, legal, finance, and operations around a shared expectation of risk awareness. ... The responsibility for managing third-party risk no longer rests solely on IT security teams. CISOs must transform their roles from technical protectors to strategic leaders who influence enterprise risk management at every level. This evolution involves:Embracing enterprise-wide collaboration: Effective management of third-party risk requires cooperation among diverse departments such as procurement, legal, finance, and operations. By collaborating across the organization, CISOs ensure that third-party risk management is comprehensive and proactive rather than reactive. Integrating risk management into governance frameworks: Third-party risk should be a top agenda item in board meetings and strategic planning sessions. CISOs need to work with senior leadership to embed vendor risk management into the organization’s overall risk landscape. Fostering transparency and accountability: Establishing clear reporting lines and protocols ensures that issues related to third-party risk are promptly escalated and addressed. Accountability should span every level of the organization to ensure effective risk management.

Daily Tech Digest - July 01, 2025


Quote for the day:

"Listen with curiosity, speak with honesty act with integrity." -- Roy T Bennett


CIOs rethink public cloud as AI strategies mature

Regulatory and compliance concerns are a big driver toward the private cloud or on-premises solutions, says Bastien Aerni, vice president of strategy and technology adoption at GTT. Many companies are shifting their sensitive workloads to private clouds as a piece of broader multicloud and hybrid strategies to support agentic AI and other complex AI initiatives, he adds. “Most of the time, AI is touching confidential data or business-critical data,” Aerni says. “Then the thinking about the architecture and what the workload should be public vs. private, or even on-prem, is becoming a true question.” The public cloud still provides maximum scalability for AI projects, and in recent years, CIOs have been persuaded by the number of extra capabilities available there, he says. “In some of the conversations I had with CIOs, let’s say five years ago, they were mentioning, ‘There are so many features, so many tools,’” Aerni adds. ... “The paradox is clear: AI workloads are driving both massive cloud growth and selective repatriation simultaneously, because the market is expanding so rapidly it’s accommodating multiple deployment models at once,” Kirschner says. “What we are seeing is the maturation from a naive ‘everything-to-the-cloud’ strategy toward intelligent, workload-specific decisions.”


India’s DPDP law puts HR under the microscope—Here’s why that’s a good thing

At first glance, DPDP appears to mirror other data privacy frameworks like GDPR or CCPA. There’s talk of consent, purpose limitation, secure storage, and rights of the data principal (i.e., the individual). But the Indian legislation’s implications ripple far beyond IT configurations or privacy policies. “Mention data protection, and it often gets handed off to the legal or IT teams,” says Gupta. “But that misses the point. Every team that touches personal data is responsible under this law.” For HR departments, this shift is seismic. Gupta underscores how HR sits atop a “goldmine” of personal information—addresses, Aadhaar numbers, medical history, performance reviews, family details, even biometric data in some cases. And this isn't limited to employees; applicants and former workers are also in scope. ... With India housing thousands of global capability centres and outsourcing hubs, DPDP challenges multinationals to look inward. The emphasis so far has been on protecting customer data under global laws like GDPR. But now, internal data practices—especially around employees—are under the scanner. “DPDP is turning the lens inward,” says Gupta. “If your GCC in India tightens data practices, it won’t make sense to be lax elsewhere.”


3 ways developers should rethink their data stack for GenAI success

Traditional data stacks optimized for analytics, for the most part, don’t naturally support the vector search and semantic retrieval patterns that GenAI applications require. Thus, real-time GenAI data architectures need native support for embedding generation and vector storage as first-class citizens. This could mean integrating data with vector databases like Pinecone, Weaviate, or Chroma as part of the core infrastructure. It may also mean searching for multi-modal databases that can support all of your required data types out of the box without needing a bunch of separate platforms. Regardless of the underlying infrastructure, plan for needing hybrid search capabilities that combine traditional keyword search with semantic similarity, and consider how you’ll handle embedding model updates and re-indexing. ... Maintaining data relationships and ensuring consistent access patterns across these different storage systems is the real challenge when working with these various data types. While some platforms are beginning to offer enhanced vector search capabilities that can work across different data types, most organizations still need to architect solutions that coordinate multiple storage systems. The key is to design these multi-modal capabilities into your data stack early, rather than trying to bolt them on later when your GenAI applications demand richer data integration. 


Cyber Hygiene Protecting Your Digital and Financial Health

Digital transformation has reshaped the commercial world, integrating technology into nearly every aspect of operations. That has brought incredible opportunities, but it has also opened doors to new threats. Cyber attacks are more frequent and sophisticated, with malevolent actors targeting everyone from individuals to major corporations and entire countries. It is no exaggeration to say that establishing, and maintaining, effective cyber hygiene has become indispensable. According to Microsoft’s 2023 Digital Defense Report, effective cyber hygiene could prevent 99% of cyber attacks. Yet cyber hygiene is not just about preventing attacks, it is also central to maintaining operational stability and resilience in the event of a cyber breach. In that event robust cyber hygiene can limit the operational, financial, and reputational impact of a cyber attack, thereby enhancing an entity’s overall risk profile. ... Even though it’s critical, data suggests that many organizations struggle to implement even basic cyber security measures effectively. For example, a 2024 survey by Extrahop, a Seattle-based cyber security services provider, found that over half of the respondents admitted to using at least one unsecured network protocol, making them susceptible to attacks.


Are Data Engineers Sleepwalking Towards AI Catastrophe?

Data engineers are already overworked. Weigel cited a study that indicated 80% of data engineering teams are already overloaded. But when you add AI and unstructured data to the mix, the workload issue becomes even more acute. Agentic AI provides a potential solution. It’s natural that overworked data engineering teams will turn to AI for help. There’s a bevy of providers building copilots and swarms of AI agents that, ostensibly, can build, deploy, monitor, and fix data pipelines when they break. We are already seeing agentic AI have real impacts on data engineering teams, as well as the downstream data analysts who ultimately are the ones requesting the data in the first place. ... Once human data engineers are out of the loop, bad things can start happening, Weigel said. They potentially face a situation where the volume of data requests–which originally were served by human data engineers but now are being served by AI agents–is beyond their capability to keep up. ... “We’re now back in the dark ages, where we were 10 years ago [when we wondered] why we need data warehouses,” he said. “I know that if person A, B, and C ask a question, and previously they wrote their own queries, they got different results. Right now, we ask the same agent the same question, and because they’re non-deterministic, they will actually create different queries every time you ask it. 


How cybercriminals are weaponizing AI and what CISOs should do about it

Security teams are using AI to keep up with the pace of AI-powered cybercrime, scanning large volumes of data to surface threats earlier. AI helps scan massive amounts of threat data, surface patterns, and prioritize investigations. For example, analysts used AI to uncover a threat actor’s alternate Telegram channels, saving significant manual effort. Another use case: linking sockpuppet accounts. By analyzing slang, emojis, and writing styles, AI can help uncover connections between fake personas, even when their names and avatars are different. AI also flags when a new tactic starts gaining traction on forums or social media. ... As more defenders turn to AI to make sense of vast amounts of threat data, it’s easy to assume that LLMs can handle everything on their own. But interpreting chatter from the underground is not something AI can do well without help. “This diffuse environment, rich in vernacular and slang, poses a hurdle for LLMs that are typically trained on more generic or public internet data,” Ian Gray, VP of Cyber Threat Intelligence at Flashpoint, told Help Net Security. The problem goes deeper than just slang. Threat actors often communicate across multiple niche platforms, each with its own shorthand and tone. 


How To Keep AI From Making Us Stupid

The allure of AI is undeniable. It drafts emails, summarizes lengthy reports, generates code snippets, and even whips up images faster than you can say “neural network.” This unprecedented convenience, however, carries a subtle but potent risk. A study from MIT has highlighted concerns that overuse of AI tools might be degrading our thinking capabilities. That degradation is the digital equivalent of using a GPS so much that you forget how to read a map. Suddenly, your internal compass points vaguely toward convenience and not much else. When we offload critical cognitive tasks entirely to AI, our muscles for those tasks can begin to atrophy, leading to cognitive offloading. ... Treat AI-generated content like a highly caffeinated first draft — full of energy but possibly a little messy and prone to making things up. Your job isn’t to simply hit “generate” and walk away, unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss. Or worse, your audience. Always, always, aggressively edit, proofread, and, most critically, fact-check every single output. ... The real risk isn’t AI taking over our jobs; it’s us letting AI take over our brains. To maintain your analytical edge, continuously challenge yourself. Practice skills that AI complements but doesn’t replace, such as critical thinking, complex problem-solving, nuanced synthesis, ethical judgment, and genuine human creativity.


Governance meets innovation: Protiviti’s strategy for secure, scalable growth in BFSI and beyond

In today’s BFSI landscape, technology alone is no longer a differentiator. True competitive advantage lies in the orchestration of innovation with governance. The deployment of AI in underwriting, the migration of customer data to the cloud, or the use of IoT in insurance all bring immense opportunity—but also profound risks. Without strong guardrails, these initiatives can expose firms to cyber threats, data sovereignty violations, and regulatory scrutiny. Innovation without governance is a gamble; governance without innovation is a graveyard. ... In cloud transformation projects, for instance, we work with clients to proactively assess data localisation risks, cloud governance maturity, and third-party exposures, ensuring resilience is designed from day one. As AI adoption scales across financial services, we bring deep expertise in Responsible AI governance. From ethical frameworks and model explainability to regulatory alignment with India’s DPDP Act and the EU AI Act, our solutions ensure that automated systems remain transparent, auditable, and trustworthy. Our AI risk models integrate regulatory logic into system design, bridging the gap between innovation and accountability.


Cybercriminals take malicious AI to the next level

Cybercriminals are tailoring AI models for specific fraud schemes, including generating phishing emails tailored by sector or language, as well as writing fake job posts, invoices, or verification prompts. “Some vendors even market these tools with tiered pricing, API access, and private key licensing, mirroring the [legitimate] SaaS economy,” Flashpoint researchers found. “This specialization leads to potentially greater success rates and automated complex attack stages,” Flashpoint’s Gray tells CSO. ... Cybercrime vendors are also lowering the barrier for creating synthetic video and voice, with deepfake as a service (DaaS) offerings ... “This ‘prompt engineering as a service’ (PEaaS) lowers the barrier for entry, allowing a wider range of actors to leverage sophisticated AI capabilities through pre-packaged malicious prompts,” Gray warns. “Together, these trends create an adaptive threat: tailored models become more potent when refined with illicit data, PEaaS expands the reach of threat actors, and the continuous refinement ensures constant evolution against defenses,” he says. ... Enterprises need to balance automation with expert analysis, separating hype from reality, and continuously adapt to the rapidly evolving threat landscape. “Defenders should start by viewing AI as an augmentation of human expertise, not a replacement,” Flashpoint’s Gray says. 


“DevOps is Dead? Long Live DevOps-Powered Platforms”

If DevOps and platform engineering needed a common enemy — or ally — to bond over, AI provided it. A panel featuring Nvidia, Google, Rootly and Thoughtworks explained how large language models are automating “the last mile” of toil, from incident response bots that reason over Grafana dashboards to code-gen pipelines that spit out compliant Terraform. ... The logic is straightforward: You can’t automate what you can’t see. For DevOps practitioners, high-fidelity telemetry is now table stakes — whether you’re feeding an agentic AI, debugging an ephemeral sandbox, or proving compliance to auditors. Expect platform blueprints to ship with observability baked in, not bolted on. Look at the badges behind every coffee urn and you’ll spot familiar DevOps and DevSecOps logos — GitHub Actions, Mezmo, Teleport, Cortex, Sedai, Tailscale. Many of these vendors cut their teeth in CI/CD, IaC, or shift-left security long before “platform engineering” was a LinkedIn hashtag. ... So why the funeral garb? My guess: A tongue-in-cheek jab at hype cycles. Just as “DevOps is dead” clickbait pushed us to sharpen our message, the sash was a reminder that real value — not buzzwords — keeps a movement alive. Judging by the hallway traffic and workshop queues, platform engineering is passing that test.