
Daily Tech Digest - January 02, 2026


Quote for the day:

“If your ship doesn’t come in, swim out to meet it!” -- Jonathan Winters



Delivering resilience and continuity for AI

Think of it as technical debt, suggests IDC group VP Daniel Saroff: most enterprises underestimate the strain AI puts on connectivity and compute. Siloed infrastructure won’t deliver what AI needs, and CIOs need to think about these and other things in a more integrated way to make AI successful. “You have to look at your GPU infrastructure, bandwidth, network availability, and connectivity between respective applications,” he says. “If you have environments not set up for highly transactional, GPU-intensive environments, you’re going to have a problem,” Saroff warns. “And having very fragmented infrastructure means you need to pull data and integrate multiple different systems, especially when you start to look at agentic AI.” ... Making AI scale will almost certainly mean taking a hard look at your data architecture. Every database is adding features for AI. And lakehouses promise you can bring operational data and analytics together without affecting the SLAs of production workloads. Or you can go further with data platforms like Azure Fabric that bring in streaming and time series data to use for AI applications. If you’ve already tried different approaches, you likely need to rearchitect your data layer to get away from the operational sprawl of fragmented microservices, where every data hand-off between separate vector stores, graph databases, and document silos introduces latency and governance gaps. Too many points of failure make it hard to deliver high-availability guarantees.


Technological Disruption: Strategic Inflection Points From 2026 - 2036

From a defensive standpoint, AI-driven security solutions will provide continuous surveillance, automated remediation, and predictive threat modeling at a scale unattainable by human analysts. Simultaneously, attackers will utilize AI to create polymorphic malware, execute influence operations, and exploit holes at machine speed. The outcome will be an environment where cyber war progresses more rapidly than conventional command-and-control systems can regulate. As we approach 2036, the primary concern will be AI governance rather than AI capacity. ... From 2026 to 2030, enterprises will increasingly recognize that cryptographic agility is vital. The move to post-quantum cryptography standards means that old systems, especially those in critical infrastructure, financial services, and government networks, need to be fully inventoried, evaluated, and upgraded. By the early 2030s, quantum innovation will transcend cryptography, impacting optimization, materials science, logistics, and national security applications. ... In the forthcoming decade, supply chain security will transition from compliance-based evaluations to ongoing risk intelligence. Transparency methods, including software bills of materials, hardware traceability, and real-time vendor risk assessment, will evolve into standard expectations rather than just best practices. Supply chain resilience will strategically impact national competitiveness.
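Cryptographic agility, as described above, means systems can swap algorithms without rewrites. A minimal sketch of the idea follows; the registry shape and the HMAC stand-in are illustrative only, and a real post-quantum migration would slot an actual scheme such as ML-DSA behind the same interface:

```python
import hashlib
import hmac

# Hypothetical algorithm registry: each entry maps a name to sign/verify
# callables, so retiring an algorithm is a configuration change, not a rewrite.
ALGORITHMS = {
    "hmac-sha256": {
        "sign": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
        "verify": lambda key, msg, sig: hmac.compare_digest(
            hmac.new(key, msg, hashlib.sha256).digest(), sig),
    },
    # "ml-dsa-65": {...}  # placeholder: a post-quantum scheme would slot in here
}

def sign(alg: str, key: bytes, msg: bytes) -> bytes:
    return ALGORITHMS[alg]["sign"](key, msg)

def verify(alg: str, key: bytes, msg: bytes, sig: bytes) -> bool:
    return ALGORITHMS[alg]["verify"](key, msg, sig)

tag = sign("hmac-sha256", b"secret", b"payload")
assert verify("hmac-sha256", b"secret", b"payload", tag)
```

Inventorying which code paths call the registry directly, rather than hard-coding an algorithm, is exactly the kind of assessment the transition to post-quantum standards requires.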


True agentic AI is years away - here's why and how we get there

We're not there yet. We're not even close. Today's bots are limited to chat interactions and often fail outside that narrow operating context. For example, what Microsoft calls an "agent" in the Microsoft 365 productivity suite, probably the best-known instance of an agent, is simply a way to automatically generate a Word document. Market data shows that agents haven't taken off. ... Simple automations can certainly bring about benefits, such as assisting a call center operator or rapidly handling numerous invoices. However, a growing body of scholarly and technical reports has highlighted the limitations of today's agents, which have failed to advance beyond these basic automations. ... Before agents can live up to the "fully autonomous code" hype of Microsoft and others, they must overcome two primary technological shortcomings. Ongoing research across the industry is focused on these two challenges: developing a reinforcement learning approach to designing agents, and re-engineering AI's use of memory -- not just memory chips such as DRAM, but the whole phenomenon of storing and retrieving information. Reinforcement learning, which has been around for decades, has demonstrated striking results in enabling AI to carry out tasks over a very long time horizon. ... On the horizon looms a significant shift in reinforcement learning itself, which could be a boon or could further complicate matters. Can AI do a better job of designing reinforcement learning than humans?
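The role reinforcement learning plays here can be illustrated with the classic tabular case: an agent learns, purely from trial, error, and delayed reward, a policy that reaches a goal several steps away. This toy corridor environment and its hyperparameters are invented for illustration; agent training at industrial scale uses far richer methods:

```python
import random

# Tabular Q-learning on a toy 5-state corridor: the agent starts at state 0
# and earns a reward only on reaching state 4, so it must learn a multi-step
# plan from delayed feedback alone.
N_STATES, ACTIONS = 5, [-1, +1]   # actions: move left / move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

random.seed(0)
for _ in range(500):              # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The learned greedy policy moves right from every interior state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
```

The "very long time horizon" results the article mentions come from scaling this same credit-assignment idea to sequences of thousands of actions.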


Why Developer Experience Matters More Than Ever in Banking

Effective AI assistance, in fact, meets developers where they are—or where they work. Some prefer a command-line interface, others live inside an IDE, and still others rely heavily on sample code and language-specific SDKs. A strong DX strategy supports all of these modes, using AI to surface accurate, context-aware guidance without forcing developers into a single workflow. When AI reinforces clarity, it becomes a force multiplier. ... As AI-assisted development becomes more common, the quality of documentation takes on new importance. Because it is no longer read only by humans, documentation increasingly serves as the knowledge base that enables AI agents that help developers search, generate, and validate code. When documentation is vague or poorly structured, it introduces confusion, often in ways that actively undermine developer confidence. ... In highly regulated environments, developers want, and expect, guardrails—but not at the expense of speed and consistency. One of the most effective ways to balance those demands is by codifying business rules and compliance requirements directly into the platform, rather than relying on manual, human-driven review at key milestones. Talluri describes this approach as “policy as code”: embedding rules, validations, and regional requirements into the system so developers receive immediate, actionable prompts and feedback as they work. ... The business case for exceptional developer experience rests on a simple truth: trust drives productivity.
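Talluri's "policy as code" approach can be sketched in a few lines. The rule names and thresholds below are invented for illustration, not drawn from the article; the point is that rules live in the platform and give immediate, named feedback instead of a manual review milestone:

```python
# Hypothetical codified compliance rules: each is a named predicate over a
# transaction, evaluated at call time.
POLICIES = [
    ("amount_limit",   lambda tx: tx["amount"] <= 10_000),
    ("region_allowed", lambda tx: tx["region"] in {"US", "EU", "UK"}),
    ("kyc_complete",   lambda tx: tx.get("kyc_verified", False)),
]

def check(tx: dict) -> list[str]:
    """Return the names of every policy the transaction violates."""
    return [name for name, rule in POLICIES if not rule(tx)]

ok = {"amount": 500, "region": "EU", "kyc_verified": True}
bad = {"amount": 50_000, "region": "RU"}
# check(ok) passes cleanly; check(bad) names each violated rule immediately
```

Because violations come back as actionable names rather than a rejected review, developers can fix issues as they work, which is the speed-with-guardrails balance the article describes.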


AI-powered testing for strategic leadership

“Nearly half of teams still release untested code due to time pressure, creating fragile systems and widening risk exposure. Legacy architectures further compound this, making modernisation difficult and slowing down automated validation,” he said. AI-generated code also introduces new vulnerabilities. Without strong validation pipelines, testing quickly becomes the bottleneck of transformation. Developers often view testing as tedious, and with modern codebases spanning multiple interconnected applications, the challenge intensifies. At the same time, misalignment between leadership and engineering teams leads to unclear priorities and rushed decisions. While the pace of development already feels fast, it is only set to accelerate. To overcome these barriers, CIOs can adopt model-based, codeless AI testing that reduces dependence on fragile code-level automation and cuts ongoing maintenance. This approach can reduce manual effort by 80%–90% and enable non-technical experts to participate through natural-language and visual test generation. For Wong, strong governance is vital. This entails domain-trained, testing-specific AI that avoids hallucinations and supports safe, transparent validation. Instead of becoming autonomous, AI can act as a co-pilot working alongside developers. “By aligning teams, modernising toolchains, and embedding guardrails, CIOs can shift from reactive firefighting to proactive, AI-driven quality engineering,” he said.
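The model-based testing idea can be made concrete with a small sketch. The state machine and event names below are invented: the application is described as a model, and abstract test cases are produced by walking it, so coverage grows by editing the model rather than maintaining code-level scripts:

```python
# Hypothetical application model: (state, event) -> next state.
MODEL = {
    ("logged_out", "login"):     "logged_in",
    ("logged_in",  "view_cart"): "cart",
    ("logged_in",  "logout"):    "logged_out",
    ("cart",       "checkout"):  "paid",
}

def generate_tests(start: str, depth: int) -> list[list[str]]:
    """Enumerate every valid event sequence up to `depth` steps long."""
    frontier, tests = [([], start)], []
    for _ in range(depth):
        nxt = []
        for path, state in frontier:
            for (s, event), target in MODEL.items():
                if s == state:
                    nxt.append((path + [event], target))
        tests += [p for p, _ in nxt]
        frontier = nxt
    return tests

tests = generate_tests("logged_out", 3)
# includes the purchase path ["login", "view_cart", "checkout"], among others
```

A non-technical expert who adds one `(state, event)` row to the model implicitly adds every test path that crosses it, which is where the claimed maintenance savings come from.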


The Architect’s Dilemma: Choose a Proven Path or Pave Your Own Way?

Platforms and frameworks are like paved roads that may help a team progress faster on their journey, with well-defined "exit ramps" or extension points where a team can extend the platform to meet their needs, but they come with side-effects that may make them undesirable. Teams need to decide when, if ever, they need to leave the path others have paved and find their own way by developing extensions to the platform or framework, or by developing new platforms or frameworks. The challenge teams face when they use platforms or frameworks as the basis for their software architectures is to choose the "paved road" (platform or framework) that gets them closest to their desired destination with minimal diversions or new construction. ... Many platform decisions are innocuous and can be accepted and ignored when they don’t affect the QARs that the team needs to meet. The only way to know whether the decisions are harmful is through experiments that expose when the platform is failing to meet the goals of the system. Since the decisions made by the platform developers are often undocumented and/or unknowable, it’s imperative that teams be able to test their system (including the platforms on which they are built) to make sure that their architectural goals (i.e. QARs) are being met. ... Using the "paved road" metaphor, the LLM provides a proven path but it does not take the team where they need to go. When this happens, they have no choice but to extend the platform (if they can), find a different platform, or build their own platform.
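Testing that a QAR still holds over an opaque platform can be sketched as an executable check, in the spirit of architectural "fitness functions" (that terminology is borrowed, not from the article). The 250 ms budget and the stubbed platform call are invented for illustration:

```python
import time

# QAR, encoded as data: "platform lookups complete within 250 ms".
LATENCY_BUDGET_S = 0.250

def platform_lookup(key: str) -> dict:
    """Stand-in for an opaque platform call whose internals are undocumented."""
    time.sleep(0.001)
    return {"key": key}

def latency_qar_holds(calls: int = 20) -> bool:
    """Probe the platform and report whether the worst observed call fits the budget."""
    worst = 0.0
    for i in range(calls):
        t0 = time.perf_counter()
        platform_lookup(f"k{i}")
        worst = max(worst, time.perf_counter() - t0)
    return worst <= LATENCY_BUDGET_S

assert latency_qar_holds()   # run in CI: fails when the platform drifts past the QAR
```

Run continuously, a check like this surfaces the undocumented platform decisions the article warns about as test failures rather than production surprises.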


Supply chains, AI, and the cloud: The biggest failures (and one success) of 2025

By compromising a single target with a large number of downstream users—say a cloud service or maintainers or developers of widely used open source or proprietary software—attackers can infect potentially millions of the target’s downstream users. ... Another significant security story cast both Meta and Yandex as the villains. Both companies were caught exploiting an Android weakness that allowed them to de-anonymize visitors so years of their browsing histories could be tracked. The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allowed Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. ... The outage with the biggest impact came in October, when a single point of failure inside Amazon’s sprawling network took out vital services worldwide. It lasted 15 hours and 32 minutes. The root cause that kicked off a chain of events was a bug in the software that monitors the stability of load balancers by, among other things, periodically creating new DNS configurations for endpoints within the Amazon Web Services network. A race condition—a type of bug that makes a process dependent on the timing or sequence of events that are variable and outside the developers’ control—caused a key component inside the network to experience “unusually high delays needing to retry its update on several of the DNS endpoint,” Amazon said in a post-mortem.
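The race-condition mechanism behind the AWS outage is the classic lost-update pattern. This deterministic sketch (unrelated to Amazon's actual code) forces the bad interleaving so the timing dependence is visible, then shows the usual fix:

```python
import threading

# Two workers each perform a read-modify-write on a shared counter. If the
# steps interleave as read, read, write, write, one update is lost -- the
# outcome depends on timing, which is the essence of a race condition.
counter = {"value": 0}

def unsafe_steps():
    v = counter["value"]        # read
    yield                       # a context switch may happen here
    counter["value"] = v + 1    # write

# Force the bad interleaving: both workers read before either writes.
a, b = unsafe_steps(), unsafe_steps()
next(a); next(b)                # both read value 0
for g in (a, b):
    try:
        next(g)                 # resumes past the yield; each writes 0 + 1
    except StopIteration:
        pass
lost_update = counter["value"]  # only 1: one increment was lost

# The fix: make the read-modify-write atomic with a lock.
counter["value"] = 0
lock = threading.Lock()
def safe_increment():
    with lock:
        counter["value"] += 1
for _ in range(2):
    safe_increment()
safe_update = counter["value"]  # 2, as intended
```

In the AWS incident the "counter" was a DNS configuration and the "workers" were redundant updaters, but the failure shape is the same: correctness depended on an ordering nobody controlled.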


The Evolving Cybersecurity Challenge for Critical Infrastructure

Convergence between OT, IT and the cloud is providing cybercriminal groups with the opportunity to target critical infrastructure. Operators and regulators are wrestling with new technology and new manufacturers outside the traditional OT/ICS supply chain. “With the geopolitical tensions and the way that the world will look in maybe a few years, they're starting to scratch their heads and think, ‘okay, is it secure? Is it safe? How was it developed? Is there any remote access? How is it being configured?’ There are things that are being done now, that will have an effect in a few years’ time,” cautioned Daniel dos Santos, head of security research at Forescout's Vedere Labs. Given the lifespans of operational technology, installing insecure equipment now can have long-term consequences. Meanwhile, CISOs face dealing with older hardware that was not designed for modern threats. Even where vendors release patches, CNI operators do not always apply them, either because of concerns about business interruption, or a lack of visibility. ... Threats to CNI are not likely to abate in 2026. Legislators are putting more emphasis on cyber resilience, and directives such as the EU’s Cyber Resilience Act will improve the security of connected devices. But these upgrades take time. “Threats from criminal groups continue to grow exponentially,” said Phil Tonkin, CTO at OT security specialists Dragos.


The changing role of the MSP: What does this mean for security?

MSPs hold a unique position within the IT ecosystem, as they are often responsible for managing and supporting the IT infrastructures, cloud services, and cybersecurity of many different organizations. These trusted partners often have privileged access to the inner workings of the organizations they support, including access to the critical systems, sensitive information, and intellectual property of their clients. ... Research shows that over half of MSP leaders globally believe that their customers are at more risk today than this time last year when it comes to cyber threats, with AI-based attack vectors, ransomware/malware, and insider threats the most commonly faced threats. As a result of this uptick in threats, more organizations than ever are leaning on MSPs for cyber support. In fact, in 2025, 84% of MSPs managed either their clients’ cyber infrastructure or their cyber and IT estates combined, up significantly from 64% the previous year. What this shows is that SMEs are realising they cannot handle cybersecurity alone and are turning to MSPs for additional help. Cybersecurity is no longer an optional extra or add-on; it’s becoming a core, expected service for MSPs. MSP leaders are transitioning from general IT support to becoming essential cybersecurity guardians. ... MSPs that adapt by investing in specialized cybersecurity expertise, advanced technologies, and a proactive security posture will thrive, becoming indispensable partners to businesses navigating the complex world of cyber risk.


What’s next for Azure containers?

Until now, even though Azure has had deep eBPF support, you’ve had to bring your own eBPF tools and manage them yourself, which does require expertise to run at scale. Not everyone is a Kubernetes platform engineer, and with tools like AKS providing a managed environment for cloud-native applications, having a managed eBPF environment is an important upgrade. The new Azure Managed Cilium tool provides a quick way of getting that benefit in your applications, using it for host routing and significantly reducing the overhead that comes with iptables-based networking. ... Declarative policies let Azure lock down container features to reduce the risk of compromised container images affecting other users. At the same time, it’s working to secure the underlying host OS, which for ACI is Linux. SELinux allows Microsoft to lock that image down, providing an immutable host OS. However, those SELinux policies don’t cross the boundary into containers, leaving their userspace vulnerable. ... Having a policy-driven approach to security helps quickly remediate issues. If, say, a common container layer has a vulnerability, you can build and verify a patch layer and deploy it quickly. There’s no need to patch everything in the container, only the relevant components. Microsoft has been doing this for OS features for some time now as part of its internal Project Copacetic, and it’s extending the process to common runtimes and libraries, building patches with updated packages for tools like Python.

Daily Tech Digest - July 11, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


Throwing AI at Developers Won’t Fix Their Problems

Organizations are spending too much time, money and energy focusing on the tools themselves. “Should we use OpenAI or Anthropic? Copilot or Cursor?” We see two broad patterns for how organizations approach AI tool adoption. The first is that leadership has a relationship with a certain vendor or just a personal preference, so they pick a tool and mandate it. This can work, but you’ll often get poor results — not because the tool is bad, but because the market is moving too fast for centralized teams to keep up. ... The second model, which generally works much better, is to allow early adopters to try new tools and find what works. This gives developers autonomy to improve their own workflows and reduces the need for a central team to test every new tool exhaustively. Comparing the tools by features or technology is less important every day. You’ll waste a lot of energy debating minor differences that won’t matter next year. Instead, focus on what problem you want to solve. Are you trying to improve testing? Code review? Documentation? Incident response? Figure out the goal first. Then see if an AI tool (or any tool) actually helps. If you don’t, you’ll just make DevEx worse: You’ll have a landscape of 100 tools nobody knows how to use, and you’ll deliver no real value.


Anatomy of a Scattered Spider attack: A growing ransomware threat evolves

Scattered Spider began its attack against the unnamed organization’s public-facing Oracle Cloud authentication portal, targeting its chief financial officer. Using personal details, such as the CFO’s date of birth and the last four digits of their Social Security number obtained from public sources and previous breaches, Scattered Spider impersonated the CFO in a call to the company’s help desk, tricking help desk staff into resetting the CFO’s registered device and credentials. ... The cybercriminals extracted more than 1,400 secrets by taking advantage of compromised admin accounts tied to the target’s CyberArk password vault and likely an automated script. Scattered Spider granted administrator roles to compromised user accounts before using tools, including ngrok, to maintain access on compromised virtual machines. ... Scattered Spider’s operations have become more aggressive and compressed. “Within hours of initial compromise — often via social engineering — they escalate privileges, move laterally, establish persistence, and begin reconnaissance across both cloud and on-prem environments,” Beek explained. “This speed and fluidity represent a significant escalation in operational maturity.” ... Defending effectively against Scattered Spider involves tackling both human and technical vulnerabilities, ReliaQuest researchers noted.


Data governance: The contract layer that makes agentic systems possible

Today, AI has changed everything. Lineage, access enforcement and cataloging must operate in real time and cover vastly more data types and sources. Models consume data continuously and make decisions instantly, raising the stakes for mistakes or gaps in oversight. What used to be a once-a-week check is now an always-on discipline. This transformation has turned data governance from a checklist into a living system that protects quality and trust at scale. ... One of the biggest misconceptions is that governance slows down innovation. In reality, good governance speeds it up. By clarifying ownership, policies and data quality from the start, teams avoid spending precious time reconciling mismatches and can focus on delivering AI that works as intended. A clear governance framework reduces unnecessary data copies, lowers regulatory risk and prevents AI from producing unpredictable results. Getting this right also requires a culture shift. Producers and consumers alike need to see themselves as co-stewards of shared data products. ... Enterprises deploying agentic AI cannot leave governance behind. These systems run continuously, make autonomous decisions and rely on accurate context to stay relevant. Governance must move from passive checks to an active, embedded foundation within both architecture and culture.
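The "contract layer" idea can be sketched concretely. The schema and field names below are invented for illustration: a data product declares a contract, and every record an agent consumes is validated at read time rather than audited after the fact:

```python
# Hypothetical data contract: field name -> required type. The `consent`
# field doubles as a governance flag: may this record feed a model?
CONTRACT = {
    "customer_id": str,
    "balance":     float,
    "consent":     bool,
}

def validate(record: dict) -> list[str]:
    """Return a description of every contract violation in the record."""
    errors = []
    for field, ftype in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    return errors

good = {"customer_id": "c-1", "balance": 12.5, "consent": True}
bad  = {"customer_id": 42, "balance": 12.5}
# validate(good) passes; validate(bad) flags the type error and the missing flag
```

Because the check runs on every hand-off, producers and consumers share one always-on definition of "good data", which is the shift from weekly checklist to living system the article describes.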


How CIOs Are Navigating Today’s Hyper Volatility

“When it comes to changing dynamics, [such as] AI and driving innovation, there are several things that people like me are dealing with right now. There is an impact on how you hire people, staffing, how to structure your organization,” says Johar. “There is an impact on risk. I’m also responsible within my organization for managing the risk of data, privacy and security, and AI is bringing a new dimension to that risk. It’s an opportunity, but it's also a risk. How you structure your organization, how you manage risk, how you drive transformation -- these things are all connected.” ... “[CIOs] are emerging as transformation leaders, so they need to understand how to navigate the culture change of an organization, the change in people in an organization. They must know how to tell stories so they can get the organization on board,” says Danielle Phaneuf, a partner, PwC cloud and digital strategy operating model leader. “Their mindset is different, so they're embracing the transformation with a product model that allows them to move faster [and] allows them to think long term. They’re building these new muscles around change leadership and engaging the business early, co-creating solutions, not thinking they must solve everything on their own, and doing that in an agile way.”


What Is AI Agent Washing And Why Is It A Risk To Businesses?

You’ve heard of greenwashing and AI-washing? Well, now it seems that the hype-merchants and bandwagon-jumpers with technology to sell have come up with a new (and perhaps predictably inevitable) scam. Analysts at Gartner say unscrupulous vendors are increasingly engaging in "agent washing": out of the “thousands” of supposedly agentic AI products tested, only 130 truly lived up to the claim. ... So, what’s the scam? Well, according to the report, agent washing involves passing off existing automation technology, including LLM-powered chatbots and robotic process automation, as agentic, when in reality it lacks those capabilities. ... Tools that claim to be agentic because they orchestrate and pull together multiple AI systems, such as marketing automation platforms and workflow automation tools, are stretching the term, too, unless they are also capable of autonomously coordinating the usage of those tools for long-term planning and decision-making. A few more hypothetical examples: While an AI chatbot-based system can write emails on command, an agentic system might write emails, identify the best recipients for marketing purposes, send the emails out, monitor responses, and then generate follow-up emails, tailored to individual responders.
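The distinction the article draws can be reduced to a stub sketch (all functions here stand in for model calls; nothing is a real product): a chatbot performs one step on command, while an agent runs a loop of acting, observing results, and deciding its own next step toward a goal:

```python
def draft_email(recipient: str) -> str:
    """Stub standing in for an LLM call."""
    return f"Hello {recipient}, ..."

def chatbot(recipient: str) -> str:
    # one step on command, then stop -- "agent washing" rebrands this as agentic
    return draft_email(recipient)

def agent(recipients, get_response):
    # loop: act, monitor the outcome, choose the next action unprompted
    log = []
    for r in recipients:                     # decides who to contact
        log.append(("sent", r))
        if get_response(r) == "interested":  # monitors responses
            log.append(("follow_up", r))     # tailored follow-up, unprompted
    return log

actions = agent(["ana", "bo"],
                lambda r: "interested" if r == "ana" else "no reply")
# the agent sent two emails and one tailored follow-up without further commands
```

The test Gartner's definition implies is the loop, not the model: remove `get_response` and the follow-up logic and what remains is an ordinary chatbot, however it is marketed.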


Agentic AI Architecture Framework for Enterprises

The critical decision point lies in understanding when predictability and control take precedence versus when flexibility and autonomous decision-making deliver greater value. This understanding leads to a fundamental principle: start with the simplest effective solution, adding complexity only when clear business value justifies the additional operational overhead and risk. ... Enterprise deployment of agentic AI creates an inherent tension between AI autonomy and organizational governance requirements. Our analysis of successful MVPs and ongoing production implementations across multiple industries reveals three distinct architectural tiers, each representing different trade-offs between capability and control while anticipating emerging regulatory frameworks like the EU AI Act and others to come. These tiers form a systematic maturity progression, so organizations can build competency and stakeholder trust incrementally before advancing to more sophisticated implementations. ... Our three-tier progression manifests differently across industries, reflecting unique regulatory environments, risk tolerances, customer expectations and operational requirements. Understanding these industry-specific approaches enables organizations to tailor their implementation strategies while maintaining systematic capability development.


Rewriting the rules of enterprise architecture with AI agents

In enterprise architecture, agentic AI systems can be deployed as digital “co-architects”, process optimizers, compliance monitors and scenario planners — each acting with a degree of independence and intelligence previously impossible. So why agentic AI and simulations for governance…and why now? Governance in enterprise architecture is about ensuring that IT systems, processes and data align with business goals, comply with regulations and adapt to change. ... These methods are increasingly inadequate in the face of real-time business dynamics. Agentic AI makes a new composability model achievable: governance that is continuous, adaptive and proactive. Agentic systems can monitor the enterprise landscape, simulate the impact of changes, enforce policies autonomously and even resolve conflicts or escalate issues when necessary. This results in governance that is both more robust and more responsive to business needs. Gartner’s research reinforces the impact of agency and simulations on enterprise architecture’s future. According to its Enterprise Architecture Services Predictions for 2025, 55% of EA teams will act as coordinators of autonomous governance automation by 2028 and shift from a direct oversight role to that of model curation and certification, agent simulations and oversight, and business outcome alignment with machine-led governance.


With tools like Alpha and Coherence, we’re turning risk management from reactive to real-time

Those days when it was more of a very reactive and process-heavy system, where you had to follow a set of dilutive processes all the time and react to risks being observed in the system, and then you had a standard operating procedure to deal with it step by step. Those days are behind us. That scenario was there for a number of decades. But with AI and intelligent-led solution capabilities transforming the landscape, it has become proactive and extremely real-time. So what we propose, we always have lived by our Digital Knowledge Operations framework. The three words in it: digital, knowledge, and operations. Digital makes you proactive because you’re building solutions not for today but for the future. You rely on knowledge, and you transform your operations. That’s our philosophy that unlocks this proactive ability of capturing the possibilities of risk in real time. That drove us to build something like Alpha. It’s essentially a very strong and effective transaction monitoring framework and tool that can detect a whole lot of false alerts with over 75% to 80% accuracy. Now, in risk management, what happens is that a lot of operational bandwidth, effort, and talent capability is lost in assessing all of these false positives that are generated because of risk management procedures. Most of them can be taken care of by a combination of machine learning, artificial intelligence, and some sort of robotics.
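The false-positive triage problem described here can be sketched with a toy scoring model. This is not the actual Alpha tool; the features, weights, and threshold are invented to show the shape of the approach, in which a model dismisses low-risk alerts so analyst bandwidth goes only to the rest:

```python
def alert_score(alert: dict) -> float:
    """Toy risk score: higher means more likely a genuine risk. Weights invented."""
    score = 0.0
    score += 0.5 if alert["amount"] > 10_000 else 0.0
    score += 0.3 if alert["country"] in {"high_risk_a", "high_risk_b"} else 0.0
    score += 0.2 if alert["new_counterparty"] else 0.0
    return score

def triage(alerts, dismiss_below=0.3):
    """Split alerts into those kept for human review and likely false positives."""
    keep, dismissed = [], []
    for a in alerts:
        (keep if alert_score(a) >= dismiss_below else dismissed).append(a)
    return keep, dismissed

alerts = [
    {"id": 1, "amount": 50, "country": "home", "new_counterparty": False},
    {"id": 2, "amount": 25_000, "country": "high_risk_a", "new_counterparty": True},
]
keep, dismissed = triage(alerts)
# alert 1 is auto-dismissed as a likely false positive; alert 2 goes to review
```

A production system would learn the weights from labeled historical alerts rather than hand-code them, but the operational win is the same one described in the interview: most false positives never consume analyst effort.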


Banking on Better Data: Why Financial Institutions Need an Agile Cloud Strategy

The urgency to migrate to the cloud is particularly pronounced in the banking sector, where legacy institutions are under mounting pressure to keep pace with digital-native competitors. These agile challengers can roll out new features in a matter of weeks, while traditional banks remain constrained by older mainframes. It is clear that the risk of standing still is no longer theoretical. Earlier this year, over 1.2 million UK customers experienced banking outages on pay day, a critical moment for both individuals and businesses. Several major retail banks reported widespread issues, including login failures and prolonged delays in customer service. Far from being one-off glitches, these disruptions point to a broader pattern of structural fragility rooted in outdated technology. Unlike legacy systems, cloud-native platforms are engineered for adaptability, resilience, and real-time performance, which are traits that traditional banking environments have been struggling to deliver. These failures weren’t just accidents; they were foreseeable outcomes of prolonged underinvestment in modernization. This reinforced a critical truth for traditional banks, which is that cloud transformation is no longer a future aspiration, but an immediate requirement to safeguard customer trust and remain viable in a rapidly evolving market.


Why knowledge is the ultimate weapon in the Information Age

To turn AI into an asset rather than a liability, organisations must rethink their approach to knowledge management. At its core, knowledge management is a learning cycle centred on people, with technology acting as a force multiplier, not a substitute for judgment. The objective is to establish a virtuous loop in which data is collected, validated, and transformed into actionable insight. The tighter and more disciplined this cycle, the higher the quality of the resulting knowledge. In practice, this means treating AI as just another tool in the toolkit. ... In an age of information warfare, perception is the battleground. To stay ahead, decision-makers must be trained not just in AI tools but in understanding their strengths, limitations, and potential biases, including their own. The ability to critically assess AI-generated content is essential, not optional. More than static planning, modern organisations need situational awareness and strategic agility, embedding AI within a human-centric knowledge strategy. We can shift the balance in the information war by curating trusted sources, rigorously verifying content, and sustaining a culture of learning. This new knowledge ecosystem embraces uncertainty, leverages AI wisely, and keeps cognitive bias in control, wielding knowledge as a disciplined and secure strategic asset.