Daily Tech Digest - September 16, 2025


Quote for the day:

“Too many of us are not living our dreams because we are living our fears.” -- Les Brown



Your employees are feeling ‘OK’ – and that’s a serious problem

At first glance, OK doesn’t sound dangerous. Teams aren’t unhappy enough to trigger alarms, nor are they burning out; they keep delivering at an acceptable level. But ‘acceptable’ is not the same as ‘successful’. Teams stuck in OK lack the energy, creativity and ambition to truly thrive. They’re passable, not powerful – and that complacency can quietly erode performance. ... In fact, the lifetime value of a happy employee is more than twice that of an OK one. This is not soft sentiment – it’s hard economics. By contrast, OK teams bring hidden costs. They are about twice as likely to miss targets as happy teams and have 50% higher staff turnover. They are also less collaborative, less creative and less resilient when challenges arise. ... First, reframe happiness as a serious business metric. It’s not vague or fluffy. It’s measurable, trackable and improvable. It connects directly to performance, retention and, ultimately, profit. Second, focus on the drivers of happiness. I’ve identified five ways to develop happiness at work: connect, be fair, empower, challenge and inspire. ... Third, embed a rhythm of measure-meet-repeat. Measure: Use light-touch weekly pulses and deeper quarterly surveys to gather data; Meet: Bring teams together to discuss results, identify blockers and celebrate progress; and  Repeat: Build momentum with regular reflection and action. This rhythm transforms data into dialogue, which helps organisations to improve.


Are cloud providers neglecting security to chase AI?

an unsettling trend now challenges this narrative. Recent research, including the “State of Cloud and AI Security 2025” report conducted by the Cloud Security Alliance (CSA) in partnership with cybersecurity company Tenable, highlights that cloud security, once considered best in class, is becoming more fragmented and misaligned, leaving organizations vulnerable. The issue isn’t a lack of resources or funding—it’s an alarming shift in priorities by cloud providers. As investment and innovative energies focus more on artificial intelligence and hybrid cloud development, security efforts appear to be falling behind. ... The dangers of this complexity are made worse by what the report calls the weakest link in cloud security: identity and access management (IAM). Nearly 59% of respondents cited insecure identities and risky permissions as their main concerns, with excessive permissions and poor identity hygiene among the top reasons for breaches. ... Deprioritizing security in favor of AI products is a gamble cloud providers appear willing to take, but there are clear signs that enterprises might not follow them down this path forever. The CSA/Tenable report highlights that 31% of surveyed respondents believe their executive leadership fails to grasp the nuances of cloud security, and many have uncritically relied on native tools from cloud vendors without adding extra protections.


The Future of Global A.I.

The accelerating development and adoption of AI products, services and platforms present both challenges and opportunities for regions like the Middle East and North Africa (MENA) and India that have ambitions of integrating AI into their economies. Data presented in the report suggests that the mobile user bases in India and MENA are primed for AI products and services on mobile platforms. For the Middle East, AI is a crucial enabler of economic diversification beyond its hydrocarbon industries, whereas for India, AI can be transformative for its world-leading digital public infrastructure, public service delivery, and digital payments platforms.  ... The BOND report notes that the current wave of AI development and adoption is unprecedented when compared to previous technological waves. It uses OpenAI’s ChatGPT as a benchmark to showcase the explosive growth of user adoption as the platform achieved 1 million users within five days, 800 million weekly active users within 17 months, and registered 90 percent of its users from non-US geographies by its third year. ... In an era of increasing geopolitical competition, countries are supporting efforts to achieve digital sovereignty. The BOND report notes a growing interest in Sovereign AI projects, as demonstrated by NVIDIA’s partnerships in countries like France, Spain, Switzerland, Ecuador, Japan, Vietnam, and Singapore.


Zero Trust Is 15 Years Old — Why Full Adoption Is Worth the Struggle

Effective ZT will not eliminate all breaches – there are simply too many ways into a network – but it would certainly limit the effectiveness of stolen credentials and inhibit lateral movement by intruders, and malicious activity by insiders inside the enterprise network. “Here’s the part most people miss: Zero Trust is just as important for reducing insider risk as it is for keeping out external threats.,” comments Chad Cragle. “Zero Trust is just as important for reducing insider risk as it is for keeping out external threats.” ... Putting people first is good people management and good PR, but bad security. It gives too much leeway to three basic human characteristics: a propensity to trust on sight, a tendency to be lazy, and a deep rooted curiosity. We have a natural tendency to trust first and ask questions later; to skirt security controls when they are too intrusive and hinder our work, and we are naturally curious. ... Technology first is becoming more essential in the emerging world of AI-enhanced deepfakes. We can no longer rely on people being able to recognize people. We are easily fooled into believing this entity is the entity we know and trust. ... Getting the technology ready for ZT is also hard, partly because many applications were not built with ZT in mind. “Many older programs just don’t play nice with modern security,” comments J Stephen Kowski, “so businesses end up stuck between keeping things secure and not slowing down the way they work.”


Crafting an Effective AI Strategy for Your Organization: A Comprehensive Approach

Without a deliberate strategy, AI initiatives might remain small pilot projects that never scale, or they might stray from business needs. A well-crafted AI strategy acts as a compass to guide AI investments and projects. It helps answer critical questions upfront: Which problems are we trying to solve with AI? How do these tie to our business KPIs? Do we have the right data and infrastructure? By addressing these, the strategy ensures AI adoption is purposeful rather than purely experimental. Crucially, the strategy also weaves in ethical and regulatory considerations ... An AI CoE is a dedicated team or organizational unit that centralizes AI expertise and resources to support the entire company’s AI initiatives. Think of it as an in-house “AI SWAT team” that bridges the gap between high-level strategy and the technical execution of AI projects. ... As organizations deploy AI more widely, ethical, legal, and societal responsibilities become non-negotiable. Responsible AI is all about ensuring that systems are fair, transparent, safe, and aligned with human values. ... Many AI models, especially deep learning systems, are often criticized for being “black boxes”—making decisions that are difficult to interpret. Explainable AI (XAI) is about creating methods and tools to make these models transparent and their outputs understandable.


Building security that protects customers, not just auditors

Good engineering usually leads to strong security, and cautions against just going through the motions to meet compliance requirements. ... Sadly, threat actors don’t need to improve, most of the market is very far behind and old-school attacks like phishing still work easily. One trend we’re seeing in the last few years is a strong focus on crypto attacks, and on crypto exchanges. Even these usually involve classic techniques. Another are “SMS abuse” attacks, where attackers exploit endpoints that trigger sending sms messages, which they send to premium numbers they want to bump up. Many such attacks are only discovered when the bill from the SMS provider arrives. ... Current Security Information and Event Management (SIEM) vendors often offer stacks and pricing models that just don’t fit the sheer scale and speed of transactions. Sure, you can make them work, if you spend millions! ... If you just check boxes, you are not protecting your customers, you are just protecting your company from the auditor. Try to understand the rationale behind the control and implement it according to your company’s architecture. Think of it philosophically, would you be happy being a box-ticker or would you prefer to have impact? ... Your goal is to find a way to collaborate with your QSA, they can be true partners for driving positive change in the company. 


Enterprise-Grade Data Ethics: How to Implement Privacy, Policy, and Architecture

Embedding ethics and privacy into daily business operations involves practical, continuous steps integrated deeply into organizational processes. Core recommendations include developing clear and understandable data policies and making them accessible to all stakeholders, regularly training teams to maintain updated awareness of ethical data standards, building privacy considerations directly into system architecture from inception, and collaborating with legal and technical teams on application programming interfaces (APIs) and data models to incorporate explicit privacy rules. ... An enterprise architecture framework creates fundamental support by outlining precise methods for data storage, transfer, and access permissions. Organizations use new and emerging technologies alongside other comprehensive tools to establish systematic policies while implementing strong encryption and data masking approaches for secure data management. ... Executive leaders who dedicate themselves to ethical data handling create profound changes in corporate cultural values. Organizations can demonstrate their strategic dedication to data ethics through executive-level visibility of privacy and ethics system design oversight, combined with employee training investments and performance accountability systems.


CIOs are stressed — and more or less loving it

Not surprisingly AI has upped the ante for stress — or in Richard’s case, concern over the quick adoption of AI tools by end users who may or may not know what to do with them. “I would say that’s probably the thing I worry about the most. I don’t know that it stresses me out,” but he constantly thinks about what tools employees are using and how they are using them. “We don’t want to suck away all the productivity gains by limiting access to great tools, but at the same time, we don’t want to let people run wild with [personally identifiable information] or data” by tools not managed by IT. ... Even with all the pressures on CIOs today and the need to wear many hats, most say the job is still worth it. Pressure, it seems, is not always a bad thing. “I’m still in it, so it must be worth it,’’ Grinnell says. “CIOs have a certain personality; we know you’re not getting into the job and it’ll be smooth sailing. We have to solve a challenge — whatever the challenge is.” … It’s tiring, it’s stressful, but I get up energized every day to go tackle that. That’s who I am.” Driscoll says she likes pressure and finds her role “worth it more now than ever because the job of CIO and CTO has evolved to where the expectation is you will be responsible for the technology, but also be a core partner in where the business is going. For me, that ability to help drive business outcomes, and shape wherever we go as a company makes my job more exciting and worth it.”


How AI and Machine Learning Are Shaping the Fight Against Ransomware

Machine learning algorithms can recognise and understand complex patterns within data sets. Analysing historical information facilitates the identification of behavioural patterns associated with ransomware attacks, enabling strategies to be developed to prevent these attacks in the future. One of the best examples is the use of AI tools that have proven successful in detecting and protecting against cyber threats, including ransomware, by examining and analysing network traffic and user behaviour. ... When it comes to ransomware, speed is everything. As noted by IBM, AI-enabled systems allow organizations to respond to threats 85% faster than traditional methods. This rapid response reduces the damage caused by an attack while also delivering cost savings of unimaginable value to enterprises.  ... Machine learning algorithms are given information about a user’s network activity that is considered normal. Any subsequent actions are deemed abnormal if they involve changes to files and data that are out of the norm for the user. These activities are flagged so that they can be pursued further. This level of automation allows the detection of the presence of ransomware prior to encryption, allowing for timely user intervention. With ransomware pre-encryption detection algorithms, 999 out of 1000 threats can be accurately identified. CrowdStrike also claims to have captured remarkable behaviour-based ransomware detection accuracy.


Navigating the new frontier: Data sovereignty, AI and the role of global infrastructure

Data centers, once mere warehouses of information, are now the backbone of AI-driven economies. In an ever-expanding universe of digital information and content, data center operators are now faced with the daunting task of balancing operational efficiencies against the stringent need for regulatory compliance. As governments worldwide tighten regulations around data residency, cybersecurity, and AI governance, multinational companies face a complex challenge: how to maintain seamless operations while adhering to diverse and often conflicting legal frameworks. ... The integration of programmable infrastructure and cloud-Edge capabilities into cross-border networks and operations further enhances flexibility, allowing customers to localize data processing without duplicating costly physical assets. This hybrid model, underpinned by scalable, region-sensitive architecture, positions compliance as an intrinsic design principle rather than an afterthought. As data sovereignty laws proliferate, governments must support these efforts through fundamental research, clear regulatory frameworks, and partnerships with industry leaders to avoid a fragmented digital landscape that could stifle innovation. ... The convergence of data sovereignty, AI governance, and critical infrastructure security demands a new model of digital governance - one where compliance, innovation, and resilience are seamlessly integrated. 

Daily Tech Digest - September 15, 2025


Quote for the day:

“A leader takes people where they want to go. A great leader takes people where they don’t necessarily want to go, but ought to be.” -- Rosalynn Carter



MCP’s biggest security loophole is identity fragmentation

Almost every attack, excepting the odd zero-day exploit, begins with a mistake, like exposing a password or giving a junior employee access to privileged data. It’s why phishing via credentials abuse is such a common attack vector. It’s also why the risk of protocols being exploited to breach IT infrastructure doesn’t come from the protocol itself, but the identities interacting with the protocol. Any human or machine user reliant on static credentials or standing privileges is vulnerable to phishing. This makes any AI or protocol (MCP) interacting with that user vulnerable, too. This is MCP’s biggest blindspot. While MCP allows AI systems to request only relevant context from data repositories or tools, it doesn’t stop AI from surrendering sensitive data to identities that have been impersonated via stolen credentials. ... So, replace those standing secrets for agents with strong, ephemeral authentication, combined with just-in-time access. Speaking of access, the access controls of your chosen LLM should be tied to the same identity system as the rest of your company. Otherwise, there’s not much stopping it from disclosing sensitive data to the intern asking for the highest-paid employees. You need a single source of truth for identity and access that applies to all identities. Without that, it becomes impossible to enforce meaningful guardrails.


Is Software Engineering Dead?

Software engineering is the systematic application of engineering principles to the design, development, testing and maintenance of software systems. It involves structured processes, tools and methodologies to ensure software is reliable, scalable, and meets user requirements. ... Generative AI is transforming software engineering by allowing applications to interact intelligently and autonomously, similar to human interactions. More than 50% of software engineering teams will be actively building LLM-based features by 2027. “Successfully building LLM-based applications and agents requires software engineering leaders to rethink their strategies,” Herschmann says. “This means investing in upskilling, experimenting with GenAI outputs and implementing strong guardrails to manage risks.” ... The bottom line: In the age of GenAI, is software engineering dead? No. GenAI automates many coding tasks, but software engineering is much more than just writing code. It involves architecture, business grasp, cybersecurity and scalability by design, testing, maintenance and human-centered problem solving. GenAI can assist, but it doesn’t replace the need for engineers who understand context, constraints and consequences. Talent density—the concentration of highly skilled professionals within teams—has become a key differentiator for high-performing engineering organizations. 


Walmart's AI Gamble Is Rewriting the Rules of Retail

As part of its AI agents road map, Walmart introduced WIBEY, a developer-focused agent that serves as a unified entry point for intelligent action across Walmart systems. "Built on Element, WIBEY is not a dashboard or portal; it's an invocation layer that interprets developer intent and orchestrates execution across Walmart's agentic ecosystem. It abstracts complexity and connects systems through clean prompts, shared context and intelligent delegation," said Sravana Kumar Karnati ... Initially built for overnight stocking, Walmart's AI-powered workflow tool now guides associates on where to focus their efforts. Early results show that team leads and managers have cut shift planning time from 90 minutes to 30 minutes. The tool is currently being piloted for broader use across other shifts and locations. ... AI also powers Walmart's conversational shopping tools. Its AI-enabled search and chat interface lets customers ask natural language questions and receive tailored suggestions. The result: higher basket sizes and stronger customer retention. "Customers can use Walmart Voice Order, which enables them to pair their Walmart accounts to their smart speakers and mobile devices. By using base natural language understanding capabilities to understand queries and determine which actions are required, the systems can quickly identify the conversation's context and a customer's needs," said Anil Madan.


Bake Relentless Cybersecurity Into DevOps Without Slowing Releases

If we want teams to care about cybersecurity, we’ve got to measure it in engineering terms, not policy poetry. Let’s pick a few outcome metrics and wire them into the same dashboards we use for latency and errors. The simplest start is time-to-fix. Track median and p95 time to remediate critical vulns from first detection to merged fix; it’s concrete, actionable, and perfect for trend lines. We can pair that with exposure windows: how long a vulnerable artifact was actually running in production. ... “Shift left” can become “shift everything and burn the CPU.” Let’s be picky. The highest-return early checks are simple, fast, and close to developers’ daily flow: secrets detection, dependency scanning, and lightweight static analysis. Secrets first, because even one leak is too many. Then dependencies, because a surprising percent of our code’s risk hides in someone else’s library. And finally static checks that catch obvious footguns without drowning us in false positives. ... Least privilege isn’t a one-time ceremony; it’s a lifestyle backed by code. We write IAM in Terraform or CloudFormation, generate roles per workload, and avoid catch-all policies that feel like duct tape. The technique that works for us is “deny by default, allow the minimum, and tag everything.” Deny statements with conditions are great posture insurance. Scoped access with time-bound credentials ensures the keys we inevitably forget don’t outlive their usefulness.


Go big or go home: Should UK IT buyers favour US clouds or homegrown providers?

With many European companies seemingly pulling back from using overseas clouds, the UK’s reliance on them continues to grow, backed by government guidance – released at the start of 2025 – offering support to public sector organisations that want to host more of their workloads and applications in overseas clouds. In a nutshell, the guidance permits UK public sector organisations to use cloud services hosted outside the UK for “resilience, capacity and access to innovation reasons”, and further states that “non-UK services can be more cost-effective and sustainable” than homegrown ones. ... In the wake of this, the pool of UK-based cloud infrastructure providers that can offer genuine sovereign cloud services has all but dried up, as private and public sector organisations continue to increase their IT spend with US-based cloud firms. Evidence of this can be seen in figures released in late June 2025 by public sector IT market watcher Tussell in its Tech Titans report. The document details the UK public sector’s top 150 highest-earning technology suppliers, revealing that around a quarter of these companies are based in the US – although the majority are from the UK.  ... Another concern cited by customers, continues Michels, is whether the issuing of a US government order could result in them being shut off from using the services of their chosen cloud provider, as allegedly occurred during the aforementioned ICC case.


AI’s near shore: early productivity gains meet long-term uncertainty

The next five years, what we might call the "near shore," will not be defined by a single narrative. It is not going to be purely utopian or dystopian. It is a time where abundance and inequality will rise together, sometimes within the same household, perhaps even within the same moment. Early signs of abundance are becoming tangible. AI tutors help children struggling with algebra to grasp concepts. Real-time translation tools dissolve language barriers, enabling intercultural exchange and small businesses to reach global markets once out of reach. Legal research that once took days now takes minutes, reducing costs and making justice more accessible. In these ways, intelligence increasingly feels like a public utility. This will be more commonplace as AI becomes seamlessly integrated into daily life and nearly invisible. ... Leaders now will not be measured by how fluently they can invoke AI at a conference or in a press release. Instead, their leadership will be measured by whether they can build trust and coherence amid uncertainty. Real leadership now requires an uncommon combination of traits, starting with the ability to acknowledge both the promise and perils of AI. Speaking only of opportunity rings hollow to those facing displacement, while focusing only on disruption risks despair. Both are possible outcomes, perhaps in equal measure. 


Most enterprise AI use is invisible to security teams

“One of the biggest surprises was how much innovation was hiding inside already-sanctioned apps (SaaS and In-house apps). For example, a sales team discovered that uploading ZIP code demographic data into Salesforce Einstein boosted upsell conversion rates. Great for revenue, but it violated state insurance rules against discriminatory pricing. “On paper, Salesforce was an ‘approved’ platform. In practice, the embedded AI created regulatory risk the CISO never saw.” ... “We engineered our prompt detection model to run directly on laptops and browsers, without traffic leaving the device perimeter. The hard part was compressing detection into something lightweight enough that doesn’t hurt performance, while still rich enough to detect prompt interactions, not just app names. “Once we know an interaction is AI, our SaaS has risk and workflow-intelligence models that cluster prompt patterns instead of scanning for static keywords. That preserves privacy, minimizes latency, and lets us scale across thousands of endpoints without draining performance.” ... the focus is on giving CISOs and other leaders the information they need to make decisions. By seeing which tools are being used, companies can evaluate them for risk and decide which to approve or limit. For regulated industries like healthcare, Reese said distinguishing between safe and unsafe AI use requires going beyond app-level monitoring. 


Risks in data center lending: Development delays and SLA breaches

Two major risks dominate the landscape: development delays and operational performance failures. Construction delays can trigger tenant penalties or even lease terminations, while performance-related SLA breaches during operations can have the same outcome. These risks are magnified by common financing structures that use stabilized data centers as collateral for new developments. If one facility fails, the financial ripple effects can destabilize the entire loan portfolio. ... Data centers are infrastructure, not just real estate. Their value lies in consistent digital performance. Lenders must move beyond traditional underwriting and treat operational resilience as part of the credit analysis. Tier certifications, redundancy design (e.g., 2N), and operator track records should all be evaluated alongside tenant creditworthiness. Contracts must be examined for early termination rights, rent abatement clauses, and SLA enforcement mechanisms. And, critically, financial institutions need new tools to transfer these risks. SLA insurance is one such tool. Purpose-built to mirror contractual SLA terms, it provides automatic payouts when performance failures occur. For lenders, this kind of protection turns SLA exposure into a manageable, insurable risk rather than a hidden threat to cash flow and asset value. ... As data centers power the next generation of AI and cloud infrastructure, banks have a critical role to play in supporting their growth. 


Engineering India’s Global Edge: From Talent to Transformation

The word sustainability often drifts into the language of policy. For engineers, it is far more tangible. It is the watt saved in a cooling system, the recycled drop of water in a data center, the line of code that optimises energy draw. Across India, engineers are imbuing the blueprint with the motif of sustainability for designing power-efficient hardware, advancing renewable grids, and developing smarter water and waste solutions for our growing cities. These are not afterthoughts. They are choices made at the drawing board, long before a product is shipped or a system deployed. ... A self-reliant semiconductor ecosystem is not built overnight. It requires decades of accumulated expertise. But each package designed, each layout tested, each failure analysed is a step toward resilience. In this, Indian engineers are not just participants; they are custodians of a future where technology independence is inseparable from economic sovereignty. And as the “Make in India” initiative gathers momentum, engineers are uniquely positioned to transform this vision into world-class products and platforms. ... There is no paucity of opportunity. Global R&D partnerships are deepening. Government missions are laying a foundation for scale. Startups are challenging conventions in electric mobility, clean energy, and electronics. Domestic demand continues to surge. Yet the challenges are not trifling.


Balancing Workloads In AI Processor Designs

“It’s important to think about workloads on the system level,” Piry said. “In mobile, applications running in the background could affect how processes are run, requiring designers to consider branch prediction and prefetch learning rates. In cloud environments, cores may share code and memory mapping, impacting cache replacement policies. Even the software stack has implications for structure sizing and performance consistency. Processor developers also need to think about how features are used in real workloads. Different applications may use security features differently, depending on how they interact with other applications, how secure the coding is, and the level of overall security required. ... Companies with a solid understanding of the workload can then optimize their own designs because they know how a device will be used. This offers significant benefits over a generic solution. “The whole design arc is bent to service those much more narrowly understood needs, rather than having to work for any possible input, and that gives advantages right there,” said Marc Swinnen, product marketing manager at Ansys, now part of Synopsys. ... Similarly with AI, the key factors to consider are the data type and general use cases. “A vision-only NPU might do quite well with being primarily an INT8 machine (8 x 8 MACs),” said Quadric’s Roddy. 

Daily Tech Digest - September 14, 2025


Quote for the day:

"Courage doesn't mean you don't get afraid. Courage means you don't let fear stop you." -- Bethany Hamilton


The first three things you’ll want during a cyberattack

The first wave of panic a cyberattack comes from uncertainty. Is it ransomware? A phishing campaign? Insider misuse? Which systems are compromised? Which are still safe? Without clarity, you’re guessing. And in cybersecurity, guesswork can waste precious time or make the situation worse. ... Clarity transforms chaos into a manageable situation. With the right insights, you can quickly decide: What do we isolate? What do we preserve? What do we shut down right now? The MSPs and IT teams that weather attacks best are the ones who can answer those questions without delays. ... Think of it like firefighting: Clarity tells you where the flames are, but control enables you to prevent the blaze from consuming the entire building. This is also where effective incident response plans matter. It’s not enough to have the tools; you need predefined roles, playbooks and escalation paths so your team knows exactly how to assert control under pressure. Another essential in this scenario is having a technology stack with integrated solutions that are easy to manage. ... Even with visibility and containment, cyberattacks can leave damage behind. They can encrypt data and knock systems offline. Panicked clients demand answers. At this stage, what you’ll want most is a lifeline you can trust to bring everything back and get the organization up and running again.


Emotional Blueprinting: 6 Leadership Habits To See What Others Miss

Most organizations use tools like process mapping, journey mapping, and service blueprinting. All valuable. But often, these efforts center on what needs to happen operationally—steps, sequences, handoffs. Even journey maps that include emotional states tend to track generalized sentiment (“frustrated,” “confused”) at key stages. What’s often missing is an observational discipline that reveals emotional nuance in real time. ... People don’t just come to get things done. They come with emotional residue—worries, power dynamics, pride, shame, hope, exhaustion. And while you may capture some of this through traditional tools, observation fills in what the tools can’t name. ... Set aside assumptions and resist the urge to explain. Just watch. Let insight come without forcing interpretation. ... Focus on micro-emotions in the moment, then pull back to observe the emotional arc of a journey. ... Observe what happens in thresholds—hallways, entries, exits, loading screens. These in-between moments often hold the strongest emotional cues. ... Track how people react, not just what they do. Does their behavior show trust, ease, confusion, or hesitance? ... Trace where momentum builds—or breaks. Energy flow is often a more reliable signal than feedback forms.


Cloud security gaps widen as skills & identity risks persist

According to the report, today's IT environment is increasingly complicated. The data shows that 82% of surveyed organisations now operate hybrid environments, and 63% make use of multiple cloud providers. As the use of cloud services continues to expand, organisations are required to achieve unified security visibility and enforce consistent security policies across fragmented platforms. However, the research found that most organisations currently lack the necessary controls to manage this complexity. This deficiency is leading to blind spots that can be exploited by attackers. ... The research identifies identity management as the central vulnerability in current cloud security practices. A majority of respondents (59%) named insecure identities and permissions as their primary cloud security concern. ... "Identity has become the cloud's weakest link, but it's being managed with inconsistent controls and dangerous permissions. This isn't just a technical oversight; it's a systemic governance failure, compounded by a persistent expertise gap that stalls progress from the server room to the boardroom. Until organisations get back to basics, achieving unified visibility and enforcing rigorous identity governance, they will continue to be outmanoeuvred by attackers," said Liat Hayun, VP of Product and Research at Tenable.


Biometrics inspire trust, policy-makers invite backlash

The digital ID ambitions of the EU and World are bold, the adoption numbers still to come, they hope. Romania is reducing the number of electronic identity cards it is planning to issue for free by a million and a half following a cut to the project’s budget. It risks fines that eventually in theory could stretch into hundreds of millions of euros for missing the EU’s digital ID targets. World now gives fans of IDs issued by the private sector, iris biometrics, decentralized systems and blockchain technologies an opportunity to invest in them on the NASDAQ. ... An analysis of the Online Safety Act by the ITIF cautions that any attempt to protect children from online harms invites backlash if it blocks benign content, or if it isn’t crystal clear about the lines between harmful and legal content. Content that promotes self-harm is being made illegal in the UK under the OSA, shifting the responsibility of online platforms from age assurance to content moderation. By making the move under the OSA, new UK Tech Secretary Liz Kendall risks strengthening arguments that the government is surreptitiously increasing censorship.  Her predecessor Peter Kyle, having presided over the project so far, now gets to explain it to the American government as Trade Secretary. Domestically, more children than adults consider age checks effective, survey respondents tell Sumsub, but nearly half of UK consumers worry about the OSA leading to censorship.


How to make your people love change

The answer lies in a core need every person has: self-concordance. When change is aligned with a person’s aspirations, values, and purpose, they are more likely to embrace it. To make that happen, we need a mindset shift. This needs to happen at two levels. ... The first thing to consider is that we have to think of employees not as objects of change but as internal customers. Just like marketers try to study consumer behaviour and aspirations with deep granularity, we must try to understand employees in similar detail. And not just see them as professionals but as individuals. ... Second, it meets the employees where they are, instead of trying to push them towards an agenda. And third, and most importantly, it makes them not just invested in the change process but turns them into the change architects. What these architects will build may not be the same as what we want them to, but there will be some overlaps. And because we empowered them to do this, they become fellow travelers, and this creates a positive change momentum, which we can harvest to effect the changes we want as well. ... We worked with a client where there was a need to get out of excessively critical thinking—a practice that had kept them compliant and secure, but was now coming in the way of growth—and move towards a more positive culture. 


Cloud-Native Security in 2025: Why Runtime Visibility Must Take Center Stage

For years, cloud security has leaned heavily on preventative controls like code scanning, configuration checks, and compliance enforcement. While essential, these measures provide only part of the picture. They identify theoretical risks, but not whether those risks are active and exploitable in production. Runtime visibility fills that gap. By observing what workloads are actually running — and how they behave — security teams gain the highest fidelity signal for prioritizing threats. ... Modern enterprises face an avalanche of alerts across vulnerability scanners, cloud posture tools, and application security platforms. The volume isn't just overwhelming — it's unsustainable. Analysts often spend more time triaging alerts than actually fixing problems. To be effective, organizations must map vulnerabilities and misconfigurations to:The workloads that are actively running. The business applications they support. The teams responsible for fixing them. This alignment is critical for bridging the gap between security and development. Developers often see security findings as disruptive, low-context interruptions. ... Another challenge enterprises face is accountability. Security findings are only valuable if they reach the right owner with the right context. Yet in many organizations, vulnerabilities are reported without clarity about which team should fix them.


Want to get the most out of agentic AI? Get a good governance strategy in place

The core challenge for CIOs overseeing agentic AI deployments will lie in ensuring that agentic decisions remain coherent with enterprise-level intent, without requiring constant human arbitration. This demands new governance models that define strategic guardrails in machine-readable logic and enforce them dynamically across distributed agents. ... Agentic agents in the network, especially those retrained or fine-tuned locally, may fail to grasp the nuance embedded in these regulatory thresholds. Worse, their decisions might be logically correct yet legally indefensible. Enterprises risk finding themselves in court arguing the ethical judgment of an algorithm. The answer lies in hybrid intelligence: pairing agents’ speed with human interpretive oversight for edge cases, while developing agentic systems capable of learning the contours of ambiguity. ... Enterprises must build policy meshes that understand where an agent operates, which laws apply, and how consent and access should behave across borders. Without this, global companies risk creating algorithmic structures that are legal in no country at all. In regulated industries, ethical norms require human accountability. Yet agent-to-agent systems inherently reduce the role of the human operator. This may lead to catastrophic oversights, even if every agent performs within parameters.


The Critical Role of SBOMs (Software Bill of Materials) In Defending Medtech From Software Supply Chain Threats

One of the primary benefits of an SBOM is enhanced transparency and traceability. By maintaining an accurate and up-to-date inventory of all software components, organizations can trace the origin of each component and monitor any changes or updates. ... SBOMs play a vital role in vulnerability management. By knowing exactly what components are present in their software, organizations can quickly identify and address vulnerabilities as they are discovered. Automated tools can scan SBOMs against known vulnerability databases, alerting organizations to potential risks and enabling timely remediation. ... For medical device manufacturers, compliance with regulatory requirements is paramount. Regulatory bodies, such as the U.S. FDA (Federal Drug Administration) and the EMA (European Medicines Agency), have recognized the importance of SBOMs in ensuring the security and safety of medical devices. ... As part of this regulatory framework, the FDA emphasizes the importance of incorporating cybersecurity measures throughout the product lifecycle, from design and development to post-market surveillance. One of the critical components of this guidance is the inclusion of an SBOM in premarket submissions. The SBOM serves as a foundational element in identifying and managing cybersecurity risks. The FDA’s requirement for an SBOM is not just about listing software components; it’s about promoting a culture of transparency and accountability within the medical device industry.


Shedding light on Shadow AI: Turning Risk to Strategic Advantage

The fact that employees are adopting these tools on their own tells us something important: they are eager for greater efficiency, creativity, and autonomy. Shadow AI often emerges because enterprise tools lag what’s available in the consumer market, or because official processes can’t keep pace with employee needs. Much like the early days of shadow IT, this trend is a response to bottlenecks. People want to work smarter and faster, and AI offers a tempting shortcut. The instinct of many IT and security teams might be to clamp down, block access, issue warnings, and attempt to regain control. ... Employees using AI independently are effectively prototyping new workflows. The real question isn’t whether this should happen, but how organisations can learn from and build on these experiences. What tools are employees using? What are they trying to accomplish? What workarounds are they creating? This bottom-up intelligence can inform top-down strategies, helping IT teams better understand where existing solutions fall short and where there’s potential for innovation. Once shadow AI is recognised, IT teams can move from a reactive to a proactive stance, offering secure, compliant alternatives and frameworks that still allow for experimentation. This might include vetted AI platforms, sandbox environments, or policies that clarify appropriate use without stifling initiative.


Why Friction Should Be a Top Consideration for Your IT Team

Some friction can be good, such as access controls that may require users to take a few seconds to authenticate their identities but that help to secure sensitive data, or change management processes that enable new ways of doing business. By contrast, bad friction creates delays and stress without adding value. Users may experience bad friction in busywork that delivers little value to an organization, or in provisioning delays that slow down important projects. “You want to automate good friction wherever possible,” Waddell said. “You want to eliminate bad friction.” ... As organizations work to eliminate friction, they can explore new approaches in key areas. The use of platform engineering lessens friction in multiple ways, enabling organizations to reduce the time needed to bring new products and services to market. Further, it can help organizations take advantage of automation and standardization while also cutting operational overhead. Establishing cyber resilience is another important way to remove friction. Organizations certainly want to avoid the massive friction of a data breach, but they also want to ensure that they can minimize the impact of a breach and enable faster incident response and recovery. “AI threats will outpace our ability to detect them,” Waddell said. “As a result, resilience will matter more than prevention.”

Daily Tech Digest - September 13, 2025


Quote for the day:

"Small daily improvements over time lead to stunning results." -- Robin Sharma


When it comes to AI, bigger isn’t always better

Developers were already warming to small language models, but most of the discussion has focused on technical or security advantages. In reality, for many enterprise use cases, smaller, domain-specific models often deliver faster, more relevant results than general-purpose LLMs. Why? Because most business problems are narrow by nature. You don’t need a model that has read TS Eliot or that can plan your next holiday. You need a model that understands your lead times, logistics constraints, and supplier risk. ... Just like in e-commerce or IT architecture, organizations are increasingly finding success with best-of-breed strategies, using the right tool for the right job and connecting them through orchestrated workflows. I contend that AI follows a similar path, moving from proof-of-concept to practical value by embracing this modular, integrated approach. Plus, SLMs aren’t just cheaper than larger models, they can also outperform them. ... The strongest case for the future of generative AI? Focused small language models, continuously enriched by a living knowledge graph. Yes, SLMs are still early-stage. The tools are immature, infrastructure is catching up, and they don’t yet offer the plug-and-play simplicity of something like an OpenAI API. But momentum is building, particularly in regulated sectors like law enforcement where vendors with deep domain expertise are already driving meaningful automation with SLMs.


Building Sovereign Data‑Centre Infrastructure in India

Beyond regulatory drivers, domestic data centre capacity delivers critical performance and compliance advantages. Locating infrastructure closer to users through edge or regional facilities has evidently delivered substantial performance gains, with studies demonstrating latency reductions of more than 80 percent compared to centralised cloud models. This proximity directly translates into higher service quality, enabling faster digital payments, smoother video streaming, and more reliable enterprise cloud applications. Local hosting also strengthens resilience and simplifies compliance by reducing dependence on centralised infrastructure and obligations, such as rapid incident reporting under Section 70B of the Information Technology (Amendment) Act, 2008, that are easier to fulfil when infrastructure is located within the country. ... India’s data centre expansion is constrained by key challenges in permitting, power availability, water and cooling, equipment procurement, and skilled labour. Each of these bottlenecks has policy levers that can reduce risk, lower costs, and accelerate delivery. ... AI-heavy workloads are driving rack power densities to nearly three times those of traditional applications, sharply increasing cooling demand. This growth coincides with acute groundwater stress in many Indian cities, where freshwater use for industrial cooling is already constrained. 


How AI is helping one lawyer get kids out of jail faster

Anderson said his use of AI saves up to 94% of evidence review time for his juvenile clients age 12-18. Anderson can now prepare for a bail hearing in half an hour versus days. The time saved by using AI also results in thousands of dollars in time saved. While the tools for AI-based video analysis are many, Anderson uses Rev, a legal-tech AI tool that transcribes and indexes video evidence to quickly turn overwhelming footage into accurate, searchable information. ... “The biggest ROI is in critical, time-sensitive situations, like a bail hearing. If a DA sends me three hours of video right after my client is arrested, I can upload it to Rev and be ready to make a bail argument in half an hour. This could be the difference between my client being held in custody for a week versus getting them out that very day. The time I save allows me to focus on what I need to do to win a case, like coming up with a persuasive argument or doing research.” ... “We are absolutely at an inflection point. I believe AI is leveling the playing field for solo and small practices. In the past, all of the time-consuming tasks of preparing for trial, like transcribing and editing video, were done manually. Rev has made it so easy to do on the fly, by myself, that I don’t have to anticipate where an officer will stray in their testimony. I can just react in real time. This technology empowers a small practice to have the same capabilities as a large one, allowing me to focus on the work that matters most.”


AI-powered Pentesting Tool ‘Villager’ Combines Kali Linux Tools with DeepSeek AI for Automated Attacks

The emergence of Villager represents a significant shift in the cybersecurity landscape, with researchers warning it could follow the malicious use of Cobalt Strike, transforming from a legitimate red-team tool into a weapon of choice for malicious threat actors. Unlike traditional penetration testing frameworks that rely on scripted playbooks, Villager utilizes natural language processing to convert plain text commands into dynamic, AI-driven attack sequences. Villager operates as a Model Context Protocol (MCP) client, implementing a sophisticated distributed architecture that includes multiple service components designed for maximum automation and minimal detection. ... This tool’s most alarming feature is its ability to evade forensic detection. Containers are configured with a 24-hour self-destruct mechanism that automatically wipes activity logs and evidence, while randomized SSH ports make detection and forensic analysis significantly more challenging. This transient nature of attack containers, combined with AI-driven orchestration, creates substantial obstacles for incident response teams attempting to track malicious activity. ... Villager’s task-based command and control architecture enables complex, multi-stage attacks through its FastAPI interface operating on port 37695.


Cloud DLP Playbook: Stopping Data Leaks Before They Happen

To get started on a cloud DLP strategy, organizations must answer two key questions: Which users should be included in the scope?; and Which communication channels should the DLP system cover Addressing these questions can help organizations create a well-defined and actionable cloud DLP strategy that aligns with their broader security and compliance objectives. ... Unlike business users, engineers and administrators require elevated access and permissions to perform their jobs effectively. While they might operate under some of the same technical restrictions, they often have additional capabilities to exfiltrate files. ... While DLP tools serve as the critical last line of defense against active data exfiltration attempts, organizations should not rely only on these tools to prevent data breaches. Reducing the amount of sensitive data circulating within the network can significantly lower risks. ... Network DLP inspects traffic originating from laptops and servers, regardless of whether it comes from browsers, tools, applications, or command-line operations. It also monitors traffic from PaaS components and VMs, making it a versatile system for cloud environments. While network DLP requires all traffic to pass through a network component, such as a proxy, it is indispensable for monitoring data transfers originating from VMs and PaaS services.


Weighing the true cost of transformation

“Most costs aren’t IT costs, because digital transformation isn’t an IT project,” he says. “There’s the cost of cultural change in the people who will have to adopt the new technologies, and that’s where the greatest corporate effort is required.” Dimitri also highlights the learning curve costs. Initially, most people are naturally reluctant to change and inefficient with new technology. ... “Cultural transformation is the most significant and costly part of digital transformation because it’s essential to bring the entire company on board,” Dimitri says. ... Without a structured approach to change, even the best technological tools fail as resistance manifests itself in subtle delays, passive defaults, or a silent return to old processes. Change, therefore, must be guided, communicated, and cultivated. Skipping this step is one of the costliest mistakes a company can make in terms of unrealized value. Organizations must also cultivate a mindset that embraces experimentation, tolerates failure, and values ​​continuous learning. This has its own associated costs and often requires unlearning entrenched habits and stepping out of comfort zones. There are other implicit costs to consider, too, like the stress of learning a new system and the impact on staff morale. If not managed with empathy, digital transformation can lead to burnout and confusion, so ongoing support through a hyper-assistance phase is needed, especially during the first weeks following a major implementation.


5 Costly Customer Data Mistakes Businesses Will Make In 2025

As AI continues to reshape the business technology landscape, one thing remains unchanged: Customer data is the fuel that fires business engines in the drive for value and growth. Thanks to a new generation of automation and tools, it holds the key to personalization, super-charged customer experience, and next-level efficiency gains. ... In fact, low-quality customer data can actively degrade the performance of AI by causing “data cascades” where seemingly small errors are replicated over and over, leading to large errors further along the pipeline. That isn't the only problem. Storing and processing huge amounts of data—particularly sensitive customer data—is expensive, time-consuming and confers what can be onerous regulatory obligations. ... Synthetic customer data lets businesses test pricing strategies, marketing spend, and product features, as well as virtual behaviors like shopping cart abandonment, and real-world behaviors like footfall traffic around stores. Synthetic customer data is far less expensive to generate and not subject to any of the regulatory and privacy burdens that come with actual customer data. ... Most businesses are only scratching the surface of the value their customer data holds. For example, Nvidia reports that 90 percent of enterprise customer data can’t be tapped for value. Usually, this is because it’s unstructured, with mountains of data gathered from call recordings, video footage, social media posts, and many other sources.


Vibe coding is dead: Agentic swarm coding is the new enterprise moat

“Even Karpathy’s vibe coding term is legacy now. It’s outdated,” Val Bercovici, chief AI officer of WEKA, told me in a recent conversation. “It’s been superseded by this concept of agentic swarm coding, where multiple agents in coordination are delivering… very functional MVPs and version one apps.” And this comes from Bercovici, who carries some weight: He’s a long-time infrastructure veteran who served as a CTO at NetApp and was a founding board member of the Cloud Native Compute Foundation (CNCF), which stewards Kubernetes. The idea of swarms isn't entirely new — OpenAI's own agent SDK was originally called Swarm when it was first released as an experimental framework last year. But the capability of these swarms reached an inflection point this summer. ... Instead of one AI trying to do everything, agentic swarms assign roles. A "planner" agent breaks down the task, "coder" agents write the code, and a "critic" agent reviews the work. This mirrors a human software team and is the principle behind frameworks like Claude Flow, developed by Toronto-based Reuven Cohen. Bercovici described it as a system where "tens of instances of Claude code in parallel are being orchestrated to work on specifications, documentation... the full CICD DevOps life cycle." This is the engine behind the agentic swarm, condensing a month of teamwork into a single hour.


The Role of Human-in-the-Loop in AI-Driven Data Management

Human-in-the-loop (HITL) is no longer a niche safety net—it’s becoming a foundational strategy for operationalizing trust. Especially in healthcare and financial services, where data-driven decisions must comply with strict regulations and ethical expectations, keeping humans strategically involved in the pipeline is the only way to scale intelligence without surrendering accountability. ... The goal of HITL is not to slow systems down, but to apply human oversight where it is most impactful. Overuse can create workflow bottlenecks and increase operational overhead. But underuse can result in unchecked bias, regulatory breaches, or loss of public trust. Leading organizations are moving toward risk-based HITL frameworks that calibrate oversight based on the sensitivity of the data and the consequences of error. ... As AI systems become more agentic—capable of taking actions, not just making predictions—the role of human judgment becomes even more critical. HITL strategies must evolve beyond spot-checks or approvals. They need to be embedded in design, monitored continuously, and measured for efficacy. For data and compliance leaders, HITL isn’t a step backward from digital transformation. It provides a scalable approach to ensure that AI is deployed responsibly—especially in sectors where decisions carry long-term consequences.


AI vs Gen Z: How AI has changed the career pathway for junior developers

Ethical dilemmas aside, an overreliance on AI obviously causes an atrophy of skills for young thinkers. Why spend time reading your textbooks when you can get the answers right away? Why bother working through a particularly difficult homework problem when you can just dump it into an AI to give you the answer? To form the critical thinking skills necessary for not just a fruitful career, but a happy life, must include some of the discomfort that comes from not knowing. AI tools eliminate the discovery phase of learning—that precious, priceless part where you root around blindly until you finally understand. ... The truth is that AI has made much of what junior developers of the past did redundant. Gone are the days of needing junior developers to manually write code or debug, because now an already tenured developer can just ask their AI assistant to do it. There’s even some sentiment that AI has made junior developers less competent, and that they’ve lost some of the foundational skills that make for a successful entry-level employee. See above section on AI in school if you need a refresher on why this might be happening. ... More optimistic outlooks on the AI job market see this disruption as an opportunity for early career professionals to evolve their skillsets to better fit an AI-driven world. If I believe in nothing else, I believe in my generation’s ability to adapt, especially to technology.

Daily Tech Digest - September 11, 2025


Quote for the day:

"You live longer once you realize that any time spent being unhappy is wasted." -- Ruth E. Renkl



Six hard truths for software development bosses

Everyone behaves differently when the boss is around. Everyone. And you, as a boss, need to realize this. There are two things to realize here. Firstly, when you are present, people will change who they are and what they say. Secondly, you should consider that fact when deciding whether to be in the room. ... Bosses need to realize that what they say, even comments that you might think are flippant and not meant to be taken seriously, will be taken seriously. ... The other side of that coin is that your silence and non-action can have profound effects. Maybe you space out in a meeting and miss a question. The team might think you blew them off and left the great idea hanging. Maybe you forgot to answer an email. Maybe you had bigger fish to fry and you were a bit short and dismissive of an approach by a direct report. Small lapses can be easily misconstrued by your team. ... You are the boss. You have the power to promote, demote, and award raises and bonuses. These powers are important, and people will see you in that light. Even your best attempts at being cordial, friendly, and collegial will not overcome the slight apprehension your authority will engender. Your mood on any given day will be noticed and tracked. ... You can and should have input into technical decisions and design decisions, but your team will want to be the ones driving what direction things take and how things get done. 


AI prompt injection gets real — with macros the latest hidden threat

“Broadly speaking, this threat vector — ‘malicious prompts embedded in macros’ — is yet another prompt injection method,” Roberto Enea, lead data scientist at cybersecurity services firm Fortra, told CSO. “In this specific case, the injection is done inside document macros or VBA [Visual Basic for Applications] scripts and is aimed at AI systems that analyze files.” Enea added: “Typically, the end goal is to mislead the AI system into classifying malware as safe.” ... “Attackers could embed hidden instructions in common business files like emails or Word documents, and when Copilot processed the file, it executed those instructions automatically,” Quentin Rhoads-Herrera, VP of cybersecurity services at Stratascale, explained. In response to the vulnerability, Microsoft recommended patching, restricting Copilot access, stripping hidden metadata from shared files, and enabling its built-in AI security controls. ... “We’ve already seen proof-of-concept attacks where malicious prompts are hidden inside documents, macros, or configuration files to trick AI systems into exfiltrating data or executing unintended actions,” Stratascale’s Rhoads-Herrera commented. “Researchers have also demonstrated how LLMs can be misled through hidden instructions in code comments or metadata, showing the same principle at work.” Rhoads-Herrera added: “While some of these remain research-driven, the techniques are quickly moving into the hands of attackers who are skilled at weaponizing proof-of-concepts.”


Are you really ready for AI? Exposing shadow tools in your organisation

When an organisation doesn’t regulate an approved framework of AI tools in place, its employees will commonly turn to using these applications across everyday actions. By now, everyone is aware of the existence of generative AI assets, whether they are actively using them or not, but without a proper ruleset in place, everyday employee actions can quickly become security nightmares. This can be everything from employees pasting sensitive client information or proprietary code into public generative AI tools to developers downloading promising open-source models from unverified repositories. ... The root cause of turning to shadow AI isn’t malicious intent. Unlike cyber actors, aiming to disrupt and exploit business infrastructure weaknesses for a hefty payout, employees aren’t leaking data outside of your organisation intentionally. AI is simply an accessible, powerful tool that many find exciting. In the absence of clear policies, training and oversight, and the increased pressure of faster, greater delivery, people will naturally seek the most effective support to get the job done. ... Regardless, you cannot protect against what you can’t see. Tools like Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB), which detect unauthorised AI use, must be an essential part of your security monitoring toolkit. Ensuring these alerts connect directly to your SIEM and defining clear processes for escalation and correction are also key for maximum security.


How to error-proof your team’s emergency communications

Hierarchy paralysis occurs when critical information is withheld by junior staff due to the belief that speaking up may undermine the chain of command. Junior operators may notice an anomaly or suspect a procedure is incorrect, but often neglect to disclose their concerns until after a mistake has happened. They may assume their input will be dismissed or even met with backlash due to their position. In many cases, their default stance is to believe that senior staff are acting on insight that they themselves lack. CRM trains employees to follow a structured verbal escalation path during critical incidents. Similar to emergency operations procedures (EOPs), staff are taught to express their concerns using short, direct phrases. This approach helps newer employees focus on the issue itself rather than navigating the interaction’s social aspects — an area that can lead to cognitive overload or delayed action. In such scenarios, CRM recommends the “2-challenge rule”: team members should attempt to communicate an observed issue twice, and if the issue remains unaddressed, escalate it to upper management. ... Strengthening emergency protocols can help eliminate miscommunication between employees and departments. Owners and operators can adopt strategies from other mission-critical industries to reduce human error and improve team responsiveness. While interpersonal issues between departments and individuals in different roles are inevitable, tighter emergency procedures can ensure consistency and more predictable team behavior.


SpamGPT – AI-powered Attack Tool Used By Hackers For Massive Phishing Attack

SpamGPT’s dark-themed user interface provides a comprehensive dashboard for managing criminal campaigns. It includes modules for setting up SMTP/IMAP servers, testing email deliverability, and analyzing campaign results features typically found in Fortune 500 marketing tools but repurposed for cybercrime. The platform gives attackers real-time, agentless monitoring dashboards that provide immediate feedback on email delivery and engagement. ... Attackers no longer need strong writing skills; they can simply prompt the AI to create scam templates for them. The toolkit’s emphasis on scale is equally concerning, as it promises guaranteed inbox delivery to popular providers like Gmail, Outlook, and Microsoft 365 by abusing trusted cloud services such as Amazon AWS and SendGrid to mask its malicious traffic. ... What once required significant technical expertise can now be executed by a single operator with a ready-made toolkit. The rise of such AI-driven platforms signals a new evolution in cybercrime, where automation and intelligent content generation make attacks more scalable, convincing, and difficult to detect. To counter this emerging threat, organizations must harden their email defenses. Enforcing strong email authentication protocols such as DMARC, SPF, and DKIM is a critical first step to make domain spoofing more difficult. Furthermore, enterprises should deploy AI-powered email security solutions capable of detecting the subtle linguistic patterns and technical signatures of AI-generated phishing content.


How attackers weaponize communications networks

The most attractive targets for advanced threat actors are not endpoint devices or individual servers, but the foundational communications networks that connect everything. This includes telecommunications providers, ISPs, and the routing infrastructure that forms the internet’s backbone. These networks are a “target-rich environment” because compromising a single point of entry can grant access to a vast amount of data from a multitude of downstream targets. The primary motivation is overwhelmingly geopolitical. We’re seeing a trend of nation-state actors, such as those behind the Salt Typhoon campaign, moving beyond corporate espionage to a more strategic, long-term intelligence-gathering mission. ... Two recent trends are particularly telling and serve as major warning signs. The first is the sheer scale and persistence of these attacks. ... The second trend is the fusion of technical exploits with AI-powered social engineering. ... A key challenge is the lack of a standardized global approach. Differing regulations around data retention, privacy, and incident reporting can create a patchwork of security requirements that threat actors can easily exploit. For a global espionage campaign, a weak link in one country’s regulatory framework can compromise an entire international communications chain. The goal of international policy should be to establish a baseline of security that includes mandatory incident reporting, a unified approach to patching known vulnerabilities, and a focus on building a collective defense.


AI's free web scraping days may be over, thanks to this new licensing protocol

AI companies are capturing as much content as possible from websites while also extracting information. Now, several heavyweight publishers and tech companies -- Reddit, Yahoo, People, O'Reilly Media, Medium, and Ziff Davis (ZDNET's parent company) -- have developed a response: the Really Simple Licensing (RSL) standard. You can think of RSL as Really Simple Syndication's (RSS) younger, tougher brother. While RSS is about syndication, getting your words, stories, and videos out onto the wider web, RSL says: "If you're an AI crawler gobbling up my content, you don't just get to eat for free anymore." The idea behind RSL is brutally simple. Instead of the old robots.txt file -- which only said, "yes, you can crawl me," or "no, you can't," and which AI companies often ignore -- publishers can now add something new: machine-readable licensing terms. Want an attribution? You can demand it. Want payment every time an AI crawler ingests your work, or even every time it spits out an answer powered by your article? Yep, there's a tag for that too. ... It's a clever fix for a complex problem. As Tim O'Reilly, the O'Reilly Media CEO and one of the RSL initiative's high-profile backers, said: "RSS was critical to the internet's evolution…but today, as AI systems absorb and repurpose that same content without permission or compensation, the rules need to evolve. RSL is that evolution."


AI is changing the game for global trade: Nagendra Bandaru, Wipro

AI is revolutionising global supply chain and trade management by enabling businesses across industries to make real-time, intelligent decisions. This transformative shift is driven by the deployment of AI agents, which dynamically respond to changing tariff regimes, logistics constraints, and demand fluctuations. Moving beyond traditional static models, AI agents are helping create more adaptive and responsive supply chains. ... The strategic focus is also evolving. While cost optimisation remains important, AI is now being leveraged to de-risk operations, anticipate geopolitical disruptions, and ensure continuity. In essence, agentic AI is reshaping supply chains into predictive, adaptive ecosystems that align more closely with the complexities of global trade. ... The next frontier is going to be threefold: first, the rise of agentic AI at scale marks a shift from isolated use cases to enterprise-wide deployment of autonomous agents capable of managing end-to-end trade ecosystems; second, the development of sovereign and domain-specific language models is enabling lightweight, highly contextualised solutions that uphold data sovereignty while delivering robust, enterprise-grade outcomes; and third, the convergence of AI with emerging technologies—including blockchain for provenance and quantum computing for optimisation—is poised to redefine global trade dynamics.


5 challenges every multicloud strategy must address

Transferring AI data among various cloud services and providers also adds complexity — but also significant risks. “Tackling software sprawl, especially as organizations accelerate their adoption of AI, is a top action for CIOs and CTOs,” says Mindy Lieberman, CIO at database platform provider MongoDB. ... A multicloud environment can complicate the management of data sovereignty. Companies need to ensure that data remains in line with the laws and regulations of the specific geographic regions where it is stored and processed. ... Deploying even one cloud service can present cybersecurity risks for an enterprise, so having a strong security program in place is all the more vital for a multicloud environment. The risks stem from expanded attack surfaces, inconsistent security practices among service providers, increased complexity of the IT infrastructure, fragmented visibility, and other factors. IT needs to be able to manage user access to cloud services and detect threats across multiple environments — in many cases without even having a full inventory of cloud services. ... “With greater complexity comes more potential avenues of failure, but also more opportunities for customization and optimization,” Wall says. “Each cloud provider offers unique strengths and weaknesses, which means forward-thinking enterprises must know how to leverage the right services at the right time.”


What Makes Small Businesses’ Data Valuable to Cybercriminals?

Small businesses face unique challenges that make them particularly vulnerable. They often lack dedicated IT or cybersecurity teams, sophisticated systems, and enterprise-grade protections. Budget constraints mean many cannot afford enterprise-level cybersecurity solutions, creating easily exploitable gaps. Common issues include outdated software, reduced security measures, and unpatched systems, which weaken defenses and provide easy entry points for criminals. A significant vulnerability is the lack of employee cybersecurity awareness. ... Small businesses, just like large organizations, collect and store vast amounts of valuable data. Customer data represents a goldmine for cybercriminals, including first and last names, home and email addresses, phone numbers, financial information, and even medical information. Financial records are equally attractive targets, including business financial information, payment details, and credit/debit card payment data. Intellectual property and trade secrets represent valuable proprietary assets that can be sold to competitors or used for corporate espionage. ... Small businesses are undeniably attractive targets for cybercriminals, not because they are financial giants, but because they are perceived as easier to breach due to resource constraints and common vulnerabilities. Their data, from customer PII to financial records and intellectual property, is highly valuable for resale, fraud, and as gateways to larger targets.