Daily Tech Digest - September 15, 2025


Quote for the day:

“A leader takes people where they want to go. A great leader takes people where they don’t necessarily want to go, but ought to be.” -- Rosalynn Carter



MCP’s biggest security loophole is identity fragmentation

Almost every attack, excepting the odd zero-day exploit, begins with a mistake, like exposing a password or giving a junior employee access to privileged data. It’s why credential abuse is such a common phishing vector. It’s also why the risk of protocols being exploited to breach IT infrastructure doesn’t come from the protocol itself, but from the identities interacting with it. Any human or machine user reliant on static credentials or standing privileges is vulnerable to phishing. This makes any AI system or protocol (MCP included) interacting with that user vulnerable, too. This is MCP’s biggest blind spot. While MCP allows AI systems to request only relevant context from data repositories or tools, it doesn’t stop AI from surrendering sensitive data to identities that have been impersonated via stolen credentials. ... So, replace those standing secrets for agents with strong, ephemeral authentication combined with just-in-time access. Speaking of access, the access controls of your chosen LLM should be tied to the same identity system as the rest of your company. Otherwise, there’s not much stopping it from disclosing sensitive data to the intern asking for the highest-paid employees. You need a single source of truth for identity and access that applies to all identities. Without that, it becomes impossible to enforce meaningful guardrails.
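The "ephemeral authentication plus just-in-time access" pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the token store, scopes, and TTL are invented for the example.

```python
import time
import secrets

# In-memory token store for the sketch: token -> (identity, scope, expiry).
# A real deployment would use a hardened secrets service, not a dict.
TOKENS = {}

def issue_token(identity, scope, ttl_seconds=300):
    """Mint a short-lived token scoped to one task (just-in-time access)."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = (identity, scope, time.time() + ttl_seconds)
    return token

def authorize(token, requested_scope):
    """Reject unknown or expired tokens, and any scope not granted at mint time."""
    entry = TOKENS.get(token)
    if entry is None:
        return False
    identity, scope, expiry = entry
    if time.time() > expiry:
        del TOKENS[token]  # expired credentials cannot be replayed
        return False
    return requested_scope == scope

t = issue_token("agent-42", "crm:read")
assert authorize(t, "crm:read")        # allowed within TTL and scope
assert not authorize(t, "hr:salaries") # scope was never granted
```

Because the token expires in minutes and carries a single scope, a stolen credential buys an attacker far less than a standing API key would.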


Is Software Engineering Dead?

Software engineering is the systematic application of engineering principles to the design, development, testing and maintenance of software systems. It involves structured processes, tools and methodologies to ensure software is reliable, scalable, and aligned with user requirements. ... Generative AI is transforming software engineering by allowing applications to interact intelligently and autonomously, similar to human interactions. More than 50% of software engineering teams will be actively building LLM-based features by 2027. “Successfully building LLM-based applications and agents requires software engineering leaders to rethink their strategies,” Herschmann says. “This means investing in upskilling, experimenting with GenAI outputs and implementing strong guardrails to manage risks.” ... The bottom line: In the age of GenAI, is software engineering dead? No. GenAI automates many coding tasks, but software engineering is much more than just writing code. It involves architecture, business grasp, cybersecurity and scalability by design, testing, maintenance and human-centered problem solving. GenAI can assist, but it doesn’t replace the need for engineers who understand context, constraints and consequences. Talent density, the concentration of highly skilled professionals within teams, has become a key differentiator for high-performing engineering organizations.


Walmart's AI Gamble Is Rewriting the Rules of Retail

As part of its AI agents road map, Walmart introduced WIBEY, a developer-focused agent that serves as a unified entry point for intelligent action across Walmart systems. "Built on Element, WIBEY is not a dashboard or portal; it's an invocation layer that interprets developer intent and orchestrates execution across Walmart's agentic ecosystem. It abstracts complexity and connects systems through clean prompts, shared context and intelligent delegation," said Sravana Kumar Karnati ... Initially built for overnight stocking, Walmart's AI-powered workflow tool now guides associates on where to focus their efforts. Early results show that team leads and managers have cut shift planning time from 90 minutes to 30 minutes. The tool is currently being piloted for broader use across other shifts and locations. ... AI also powers Walmart's conversational shopping tools. Its AI-enabled search and chat interface lets customers ask natural language questions and receive tailored suggestions. The result: higher basket sizes and stronger customer retention. "Customers can use Walmart Voice Order, which enables them to pair their Walmart accounts to their smart speakers and mobile devices. By using base natural language understanding capabilities to understand queries and determine which actions are required, the systems can quickly identify the conversation's context and a customer's needs," said Anil Madan.


Bake Relentless Cybersecurity Into DevOps Without Slowing Releases

If we want teams to care about cybersecurity, we’ve got to measure it in engineering terms, not policy poetry. Let’s pick a few outcome metrics and wire them into the same dashboards we use for latency and errors. The simplest start is time-to-fix. Track median and p95 time to remediate critical vulns from first detection to merged fix; it’s concrete, actionable, and perfect for trend lines. We can pair that with exposure windows: how long a vulnerable artifact was actually running in production. ... “Shift left” can become “shift everything and burn the CPU.” Let’s be picky. The highest-return early checks are simple, fast, and close to developers’ daily flow: secrets detection, dependency scanning, and lightweight static analysis. Secrets first, because even one leak is too many. Then dependencies, because a surprising percent of our code’s risk hides in someone else’s library. And finally static checks that catch obvious footguns without drowning us in false positives. ... Least privilege isn’t a one-time ceremony; it’s a lifestyle backed by code. We write IAM in Terraform or CloudFormation, generate roles per workload, and avoid catch-all policies that feel like duct tape. The technique that works for us is “deny by default, allow the minimum, and tag everything.” Deny statements with conditions are great posture insurance. Scoped access with time-bound credentials ensures the keys we inevitably forget don’t outlive their usefulness.
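The time-to-fix metric described above (median and p95 hours from first detection to merged fix for critical vulnerabilities) is simple to compute. A hedged sketch follows; the record schema ("severity", "detected_at", "fixed_at") is an assumption, not any scanner's actual output format.

```python
import statistics
from datetime import datetime, timedelta

def time_to_fix_hours(records):
    """Median and p95 hours from detection to merged fix for critical vulns.
    Open (unfixed) findings are excluded; they belong on an exposure-window
    chart instead."""
    durations = sorted(
        (r["fixed_at"] - r["detected_at"]).total_seconds() / 3600
        for r in records
        if r["severity"] == "critical" and r.get("fixed_at")
    )
    if not durations:
        return None, None
    median = statistics.median(durations)
    # Nearest-rank p95: good enough for a trend line on a dashboard.
    p95 = durations[min(len(durations) - 1, int(0.95 * len(durations)))]
    return median, p95

base = datetime(2025, 1, 1)
sample = [
    {"severity": "critical", "detected_at": base,
     "fixed_at": base + timedelta(hours=h)}
    for h in range(1, 11)  # ten fixes taking 1..10 hours
]
print(time_to_fix_hours(sample))
```

Feeding these two numbers into the same dashboard as latency and error rates, as the excerpt suggests, makes security regressions visible in the same review ritual engineers already follow.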


Go big or go home: Should UK IT buyers favour US clouds or homegrown providers?

With many European companies seemingly pulling back from using overseas clouds, the UK’s reliance on them continues to grow, backed by government guidance – released at the start of 2025 – offering support to public sector organisations that want to host more of their workloads and applications in overseas clouds. In a nutshell, the guidance permits UK public sector organisations to use cloud services hosted outside the UK for “resilience, capacity and access to innovation reasons”, and further states that “non-UK services can be more cost-effective and sustainable” than homegrown ones. ... In the wake of this, the pool of UK-based cloud infrastructure providers that can offer genuine sovereign cloud services has all but dried up, as private and public sector organisations continue to increase their IT spend with US-based cloud firms. Evidence of this can be seen in figures released in late June 2025 by public sector IT market watcher Tussell in its Tech Titans report. The document details the UK public sector’s top 150 highest-earning technology suppliers, revealing that around a quarter of these companies are based in the US – although the majority are from the UK.  ... Another concern cited by customers, continues Michels, is whether the issuing of a US government order could result in them being shut off from using the services of their chosen cloud provider, as allegedly occurred during the aforementioned ICC case.


AI’s near shore: early productivity gains meet long-term uncertainty

The next five years, what we might call the "near shore," will not be defined by a single narrative. It is not going to be purely utopian or dystopian. It is a time when abundance and inequality will rise together, sometimes within the same household, perhaps even within the same moment. Early signs of abundance are becoming tangible. AI tutors help children struggling with algebra to grasp concepts. Real-time translation tools dissolve language barriers, enabling intercultural exchange and helping small businesses reach global markets once out of reach. Legal research that once took days now takes minutes, reducing costs and making justice more accessible. In these ways, intelligence increasingly feels like a public utility. This will be more commonplace as AI becomes seamlessly integrated into daily life and nearly invisible. ... Leaders now will not be measured by how fluently they can invoke AI at a conference or in a press release. Instead, their leadership will be measured by whether they can build trust and coherence amid uncertainty. Real leadership now requires an uncommon combination of traits, starting with the ability to acknowledge both the promise and perils of AI. Speaking only of opportunity rings hollow to those facing displacement, while focusing only on disruption risks despair. Both are possible outcomes, perhaps in equal measure.


Most enterprise AI use is invisible to security teams

“One of the biggest surprises was how much innovation was hiding inside already-sanctioned apps (SaaS and In-house apps). For example, a sales team discovered that uploading ZIP code demographic data into Salesforce Einstein boosted upsell conversion rates. Great for revenue, but it violated state insurance rules against discriminatory pricing. “On paper, Salesforce was an ‘approved’ platform. In practice, the embedded AI created regulatory risk the CISO never saw.” ... “We engineered our prompt detection model to run directly on laptops and browsers, without traffic leaving the device perimeter. The hard part was compressing detection into something lightweight enough that doesn’t hurt performance, while still rich enough to detect prompt interactions, not just app names. “Once we know an interaction is AI, our SaaS has risk and workflow-intelligence models that cluster prompt patterns instead of scanning for static keywords. That preserves privacy, minimizes latency, and lets us scale across thousands of endpoints without draining performance.” ... the focus is on giving CISOs and other leaders the information they need to make decisions. By seeing which tools are being used, companies can evaluate them for risk and decide which to approve or limit. For regulated industries like healthcare, Reese said distinguishing between safe and unsafe AI use requires going beyond app-level monitoring. 
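The idea of clustering prompt patterns rather than scanning for static keywords can be illustrated with a toy example. This is a hedged sketch of the general technique, not the vendor's model; the tokenization, similarity measure, and threshold are all invented for illustration.

```python
def tokens(prompt):
    """Naive tokenization; a production system would use embeddings instead."""
    return set(prompt.lower().split())

def jaccard(a, b):
    """Set-overlap similarity between two token sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_prompts(prompts, threshold=0.4):
    """Greedy single-pass clustering: join a prompt to the first cluster
    whose representative is similar enough, else start a new cluster.
    Groups similar interactions without ever matching fixed keywords."""
    clusters = []  # list of (representative_tokens, member_prompts)
    for p in prompts:
        t = tokens(p)
        for rep, members in clusters:
            if jaccard(rep, t) >= threshold:
                members.append(p)
                break
        else:
            clusters.append((t, [p]))
    return [members for _, members in clusters]
```

A single pass like this is cheap enough to run on an endpoint, which matches the excerpt's point about keeping detection inside the device perimeter.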


Risks in data center lending: Development delays and SLA breaches

Two major risks dominate the landscape: development delays and operational performance failures. Construction delays can trigger tenant penalties or even lease terminations, while performance-related SLA breaches during operations can have the same outcome. These risks are magnified by common financing structures that use stabilized data centers as collateral for new developments. If one facility fails, the financial ripple effects can destabilize the entire loan portfolio. ... Data centers are infrastructure, not just real estate. Their value lies in consistent digital performance. Lenders must move beyond traditional underwriting and treat operational resilience as part of the credit analysis. Tier certifications, redundancy design (e.g., 2N), and operator track records should all be evaluated alongside tenant creditworthiness. Contracts must be examined for early termination rights, rent abatement clauses, and SLA enforcement mechanisms. And, critically, financial institutions need new tools to transfer these risks. SLA insurance is one such tool. Purpose-built to mirror contractual SLA terms, it provides automatic payouts when performance failures occur. For lenders, this kind of protection turns SLA exposure into a manageable, insurable risk rather than a hidden threat to cash flow and asset value. ... As data centers power the next generation of AI and cloud infrastructure, banks have a critical role to play in supporting their growth. 
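The SLA-mirroring payout mechanism described above is, at its core, a deterministic function of measured uptime versus the contracted threshold. The sketch below uses the Uptime Institute's Tier III availability figure (99.982%) as the example SLA; the fee and payout tiers are hypothetical, invented purely for illustration.

```python
def sla_payout(measured_uptime_pct, sla_pct=99.982, monthly_fee=100_000):
    """Payout owed when measured uptime falls below the contracted SLA.
    The breach tiers and percentages below are illustrative assumptions,
    not terms from any real policy."""
    if measured_uptime_pct >= sla_pct:
        return 0
    shortfall = sla_pct - measured_uptime_pct  # percentage points below SLA
    if shortfall < 0.1:
        return int(monthly_fee * 0.10)  # minor breach: 10% service credit
    if shortfall < 0.5:
        return int(monthly_fee * 0.30)  # significant breach
    return int(monthly_fee * 1.00)      # severe breach: full month refunded
```

Because the trigger is mechanical (measured uptime below a contracted number), an insurer can pay out automatically, which is what turns SLA exposure into the "manageable, insurable risk" the excerpt describes.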


Engineering India’s Global Edge: From Talent to Transformation

The word sustainability often drifts into the language of policy. For engineers, it is far more tangible. It is the watt saved in a cooling system, the recycled drop of water in a data center, the line of code that optimises energy draw. Across India, engineers are imbuing the blueprint with the motif of sustainability for designing power-efficient hardware, advancing renewable grids, and developing smarter water and waste solutions for our growing cities. These are not afterthoughts. They are choices made at the drawing board, long before a product is shipped or a system deployed. ... A self-reliant semiconductor ecosystem is not built overnight. It requires decades of accumulated expertise. But each package designed, each layout tested, each failure analysed is a step toward resilience. In this, Indian engineers are not just participants; they are custodians of a future where technology independence is inseparable from economic sovereignty. And as the “Make in India” initiative gathers momentum, engineers are uniquely positioned to transform this vision into world-class products and platforms. ... There is no paucity of opportunity. Global R&D partnerships are deepening. Government missions are laying a foundation for scale. Startups are challenging conventions in electric mobility, clean energy, and electronics. Domestic demand continues to surge. Yet the challenges are not trifling.


Balancing Workloads In AI Processor Designs

“It’s important to think about workloads on the system level,” Piry said. “In mobile, applications running in the background could affect how processes are run, requiring designers to consider branch prediction and prefetch learning rates. In cloud environments, cores may share code and memory mapping, impacting cache replacement policies. Even the software stack has implications for structure sizing and performance consistency. Processor developers also need to think about how features are used in real workloads. Different applications may use security features differently, depending on how they interact with other applications, how secure the coding is, and the level of overall security required. ... Companies with a solid understanding of the workload can then optimize their own designs because they know how a device will be used. This offers significant benefits over a generic solution. “The whole design arc is bent to service those much more narrowly understood needs, rather than having to work for any possible input, and that gives advantages right there,” said Marc Swinnen, product marketing manager at Ansys, now part of Synopsys. ... Similarly with AI, the key factors to consider are the data type and general use cases. “A vision-only NPU might do quite well with being primarily an INT8 machine (8 x 8 MACs),” said Quadric’s Roddy. 
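The "INT8 machine (8 x 8 MACs)" mentioned above refers to multiply-accumulate units that take 8-bit operands but accumulate into a wider register so the running sum cannot overflow. A minimal pure-Python illustration of that datapath (not hardware RTL, and not Quadric's design):

```python
def int8_dot(a, b):
    """Dot product of two INT8 vectors with a wide accumulator.
    Each 8x8 product fits in 16 bits; summing many of them is why
    hardware MAC arrays carry a 32-bit accumulator."""
    assert all(-128 <= x <= 127 for x in a + b), "operands must fit in int8"
    acc = 0  # stands in for the int32 accumulator register
    for x, y in zip(a, b):
        acc += x * y
    return acc

print(int8_dot([1, 2, 3], [4, 5, 6]))  # 4 + 10 + 18 = 32
```

A vision-only NPU can lean on exactly this narrow datapath, which is the workload-driven specialization the excerpt argues for.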
