Daily Tech Digest - July 17, 2025


Quote for the day:

"Accept responsibility for your life. Know that it is you who will get you where you want to go, no one else." -- Les Brown


AI That Thinks Like Us: New Model Predicts Human Decisions With Startling Accuracy

“We’ve created a tool that allows us to predict human behavior in any situation described in natural language – like a virtual laboratory,” says Marcel Binz, who is also the study’s lead author. Potential applications range from analyzing classic psychological experiments to simulating individual decision-making processes in clinical contexts – for example, in depression or anxiety disorders. The model opens up new perspectives in health research in particular – for example, by helping us understand how people with different psychological conditions make decisions. ... “We’re just getting started and already seeing enormous potential,” says institute director Eric Schulz. Ensuring that such systems remain transparent and controllable is key, Binz adds – for example, by using open, locally hosted models that safeguard full data sovereignty. ...  The researchers are convinced: “These models have the potential to fundamentally deepen our understanding of human cognition – provided we use them responsibly.” That this research is taking place at Helmholtz Munich rather than in the development departments of major tech companies is no coincidence. “We combine AI research with psychological theory – and with a clear ethical commitment,” says Binz. “In a public research environment, we have the freedom to pursue fundamental cognitive questions that are often not the focus in industry.” 


Collaboration is Key: How to Make Threat Intelligence Work for Your Organization

A challenge with joining threat intelligence sharing communities is that a lot of threat information is generated and needs to be shared daily. For already resource-stretched teams, it can be extra work to pull together and share threat intelligence reports while filtering through enormous volumes of information. Particularly for smaller organizations, it can be a bit like drinking from a firehose. In this context, an advanced threat intelligence platform (TIP) can be invaluable. A TIP has the capabilities to collect, filter, and prioritize data, helping security teams to cut through the noise and act on threat intelligence faster. TIPs can also enrich the data with additional context, such as threat actor TTPs (tactics, techniques and procedures), indicators of compromise (IOCs), and potential impact, making it easier to understand and respond to threats. Furthermore, an advanced TIP can automatically generate threat intelligence reports, ready to be securely shared within the organization’s threat intelligence sharing community. Secure threat intelligence sharing reduces risk, accelerates response and builds resilience across entire ecosystems. If you’re not already part of a trusted intelligence-sharing community, it is time to join. And if you are, do contribute your own valuable threat information. In cybersecurity, we’re only as strong as our weakest link and our most silent partner.
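As a rough illustration of the triage a TIP automates, here is a minimal Python sketch. The `Indicator` fields, thresholds, and sample values are invented for the example, not any vendor's schema: it deduplicates a raw feed, drops low-confidence entries, and surfaces the most severe indicators first.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str       # e.g. an IP, domain, or file hash
    kind: str        # "ip", "domain", "hash", ...
    confidence: int  # 0-100, as reported by the feed
    severity: int    # 1 (low) to 5 (critical)

def triage(feed, min_confidence=70):
    """Deduplicate a raw feed, keep only high-confidence IOCs,
    and order them so the most severe threats surface first."""
    seen, kept = set(), []
    for ioc in feed:
        if ioc.value in seen or ioc.confidence < min_confidence:
            continue
        seen.add(ioc.value)
        kept.append(ioc)
    return sorted(kept, key=lambda i: i.severity, reverse=True)

feed = [
    Indicator("198.51.100.7", "ip", confidence=95, severity=5),
    Indicator("bad.example.test", "domain", confidence=40, severity=3),  # low confidence: dropped
    Indicator("198.51.100.7", "ip", confidence=90, severity=5),          # duplicate: dropped
    Indicator("d41d8cd98f00b204e9800998ecf8427e", "hash", confidence=80, severity=2),
]
prioritized = triage(feed)
```

A real platform would, of course, pull from many feeds and enrich each indicator with TTP and impact context; the point here is only the filter-and-prioritize step that cuts the firehose down to something actionable.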


Google study shows LLMs abandon correct answers under pressure, threatening multi-turn AI systems

The researchers first examined how the visibility of the LLM’s own answer affected its tendency to change its answer. They observed that when the model could see its initial answer, it showed a reduced tendency to switch, compared to when the answer was hidden. This finding points to a specific cognitive bias. As the paper notes, “This effect – the tendency to stick with one’s initial choice to a greater extent when that choice was visible (as opposed to hidden) during the contemplation of final choice – is closely related to a phenomenon described in the study of human decision making, a choice-supportive bias.” ... “This finding demonstrates that the answering LLM appropriately integrates the direction of advice to modulate its change of mind rate,” the researchers write. However, they also discovered that the model is overly sensitive to contrary information and performs too large of a confidence update as a result. ... Fortunately, as the study also shows, we can manipulate an LLM’s memory to mitigate these unwanted biases in ways that are not possible with humans. Developers building multi-turn conversational agents can implement strategies to manage the AI’s context. For example, a long conversation can be periodically summarized, with key facts and decisions presented neutrally and stripped of which agent made which choice.
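The context-management mitigation described above can be sketched in a few lines. This is a hypothetical illustration of stripping attribution from a conversation summary, not the study's actual procedure; the dictionary keys and sample turns are invented.

```python
def summarize_context(turns):
    """Rebuild a long conversation as a neutral summary: keep the key
    decisions, drop which participant proposed them. Removing attribution
    means the model cannot anchor on (or abandon) 'its own' earlier answers."""
    decisions = []
    for turn in turns:
        if turn["decision"] not in decisions:  # keep each decision once
            decisions.append(turn["decision"])
    return {"role": "system",
            "content": "Decisions so far: " + "; ".join(decisions)}

history = [
    {"agent": "assistant", "decision": "answer is option B"},
    {"agent": "critic",    "decision": "consider option C"},
    {"agent": "assistant", "decision": "answer is option B"},  # repeated turn
]
context = summarize_context(history)
```

Feeding `context` back instead of the raw transcript gives the model the facts without the social pressure of seeing who said what.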


Building stronger engineering teams with aligned autonomy

Autonomy in the absence of organizational alignment can cause teams to drift in different directions, build redundant or conflicting systems, or optimize for local success at the cost of overall coherence. Large organizations with multiple engineering teams can be especially prone to these kinds of dysfunction. The promise of aligned autonomy is that it resolves this tension. It offers “freedom within a framework,” where engineers understand the why behind their work but have the space to figure out the how. Aligned autonomy builds trust, reduces friction, and accelerates delivery by shifting control from a top-down approach to a shared, mission-driven one. ... For engineering teams, their north star might be tied to business outcomes, such as enabling a frictionless customer onboarding experience, reducing infrastructure costs by 30%, or achieving 99.9% system uptime. ... Autonomy without feedback is a blindfolded sprint, and just as likely to end in disaster. Feedback loops create connections between independent team actions and organizational learning. They allow teams to evaluate whether their decisions are having the intended impact and to course-correct when needed. ... In an aligned autonomy model, teams should have the freedom to choose their own path — as long as everyone’s moving in the same direction. 


How To Build a Software Factory

Of the three components, process automation is likely to present the biggest hurdle. Many organizations are happy to implement continuous integration and stop there, but IT leaders should strive to go further, Reitzig says. One example is automating underlying infrastructure configuration. If developers don’t have to set up testing or production environments before deploying code, they get a lot of time back and don’t need to wait for resources to become available. Another is improving security. Though there’s value in continuous integration automatically checking in, reviewing and integrating code, stopping there can introduce vulnerabilities. “This is a system for moving defects into production faster, because configuration and testing are still done manually,” Reitzig says. “It takes too long, it’s error-prone, and the rework is a tax on productivity.” ... While the software factory standardizes much of the development process, it’s not monolithic. “You need different factories to segregate domains, regulations, geographic regions and the culture of what’s acceptable where,” Yates says. However, even within domains, software can serve vastly different purposes. For instance, human resources might seek to develop applications that approve timesheets or security clearances. Managing many software factories can pose challenges, and organizations would be wise to identify redundancies, Reitzig says. 


Why Scaling Makes Microservices Testing Exponentially Harder

You’ve got clean service boundaries and focused test suites, and each team can move independently. Testing a payment service? Spin up the service, mock the user service and you’re done. Simple. This early success creates a reasonable assumption that testing complexity will scale proportionally with the number of services and developers. After all, if each service can be tested in isolation and you’re growing your engineering team alongside your services, why wouldn’t the testing effort scale linearly? ... Mocking strategies that work beautifully at a small scale become maintenance disasters at a large scale. One API change can require updating dozens of mocks across different codebases, owned by different teams. ... Perhaps the most painful scaling challenge is what happens to shared staging environments. With a few services, staging works reasonably well. Multiple teams can coordinate deployments, and when something breaks, the culprit is usually obvious. But as you add services and teams, staging becomes either a traffic jam or a free-for-all — and both are disastrous. ... The teams that successfully scale microservices testing have figured out how to break this exponential curve. They’ve moved away from trying to duplicate production environments for testing and are instead focused on creating isolated slices of their production-like environment.
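One way to see why testing effort outruns headcount: the interactions that mocks must cover grow roughly with pairs of services, not with services themselves. A back-of-envelope sketch:

```python
def integration_pairs(n_services):
    """Distinct service-to-service interactions that tests (and their
    mocks) may need to cover: n choose 2."""
    return n_services * (n_services - 1) // 2

# Effort tracks interactions, which grow quadratically while the
# team grows linearly with the service count.
growth = {n: integration_pairs(n) for n in (5, 20, 50)}
```

Five services give 10 possible interactions; fifty give 1,225. Even if only a fraction of those pairs actually talk to each other, the gap between linear team growth and quadratic interaction growth is the curve the excerpt describes.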


India’s Digital Infrastructure Is Going Global. What Kind of Power Is It Building?

India’s digital transformation is often celebrated as a story of frugal innovation. DPI systems have allowed hundreds of millions to access ID, receive payments, and connect to state services. In a country of immense scale and complexity, this is an achievement. But these systems do more than deliver services; they configure how the state sees its citizens: through biometric records, financial transactions, health databases, and algorithmic scoring systems. ... India’s digital infrastructure is not only reshaping domestic governance, but is being actively exported abroad. From vaccine certification platforms in Sri Lanka and the Philippines to biometric identity systems in Ethiopia, elements of India Stack are being adopted across Asia and Africa. The Modular Open Source Identity Platform (MOSIP), developed in Bangalore, is now in use in more than twenty countries. Indeed, India is positioning itself as a provider of public infrastructure for the Global South, offering a postcolonial alternative to both Silicon Valley’s corporate-led ecosystems and China’s surveillance-oriented platforms. ... It would be a mistake to reduce India’s digital governance model to either a triumph of innovation or a tool of authoritarian control. The reality is more of a fragmented and improvisational technopolitics. These platforms operate across a range of sectors and are shaped by diverse actors including bureaucrats, NGOs, software engineers, and civil society activists.


Chris Wright: AI needs model, accelerator, and cloud flexibility

As the model ecosystem has exploded, platform providers face new complexity. Red Hat notes that only a few years ago, there were limited AI models available under open, user-friendly licenses. Most access was limited to major cloud platforms offering GPT-like models. Today, the situation has changed dramatically. “There’s a pretty good set of models that are either open source or have licenses that make them usable by users”, Wright explains. But supporting such diversity introduces engineering challenges. Different models require different model customization and inference optimizations, and platforms must balance performance with flexibility. ... The new inference capabilities, delivered with the launch of Red Hat AI Inference Server, enhance Red Hat’s broader AI vision. This spans multiple offerings: Red Hat OpenShift AI, Red Hat Enterprise Linux AI, and the aforementioned Red Hat AI Inference Server under the Red Hat AI umbrella. Alongside these are embedded AI capabilities across Red Hat’s hybrid cloud offerings with Red Hat Lightspeed. These are not simply single products but a portfolio that Red Hat can evolve based on customer and market demands. This modular approach allows enterprises to build, deploy, and maintain models based on their unique use cases across their infrastructure, from edge deployments to centralized cloud inference, while maintaining consistency in management and operations.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats as these approaches focus on infrastructure, neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. As such, cyber resilience needs more than a focus on recovery. It requires the ability to recover with data integrity intact and prevent the same vulnerabilities that caused the incident in the first place. ... Failover plans, which are common in disaster recovery, focus on restarting Virtual Machines (VMs) sequentially but lack comprehensive validation. Application-centric recovery runbooks, however, provide a step-by-step approach to help teams manage and operate technology infrastructure, applications and services. This is key to validating whether each service, dataset and dependency works correctly in a staged and sequenced approach. This is essential as businesses typically rely on numerous critical applications, requiring a more detailed and validated recovery process.
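The validate-as-you-go structure of an application-centric runbook might be sketched like this; the service names, dependency order, and checks are invented placeholders, not a real recovery plan.

```python
def run_recovery(runbook):
    """Execute runbook steps in dependency order, validating each
    service before the next restore begins; fail fast on a bad step."""
    restored = []
    for step in runbook:
        step["restore"]()
        if not step["validate"]():
            raise RuntimeError(f"validation failed at {step['name']}")
        restored.append(step["name"])
    return restored

state = {}  # stands in for the recovering environment
runbook = [
    {"name": "database",
     "restore": lambda: state.update(db=True),
     "validate": lambda: state.get("db", False)},
    {"name": "payments-api",  # depends on the database being back
     "restore": lambda: state.update(api=True),
     "validate": lambda: bool(state.get("db") and state.get("api"))},
]
order = run_recovery(runbook)
```

Contrast this with a failover plan that simply restarts VMs in sequence: here every step carries its own integrity check, so a poisoned or incomplete restore surfaces immediately instead of cascading into dependent services.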


Rethinking Distributed Computing for the AI Era

The problem becomes acute when we examine memory access patterns. Traditional distributed computing assumes computation can be co-located with data, minimizing network traffic—a principle that has guided system design since the early days of cluster computing. But transformer architectures require frequent synchronization of gradient updates across massive parameter spaces—sometimes hundreds of billions of parameters. The resulting communication overhead can dominate total training time, explaining why adding more GPUs often yields diminishing returns rather than the linear scaling expected from well-designed distributed systems. ... The most promising approaches involve cross-layer optimization, which traditional systems avoid when maintaining abstraction boundaries. For instance, modern GPUs support mixed-precision computation, but distributed systems rarely exploit this capability intelligently. Gradient updates might not require the same precision as forward passes, suggesting opportunities for precision-aware communication protocols that could reduce bandwidth requirements by 50% or more. ... These architectures often have non-uniform memory hierarchies and specialized interconnects that don’t map cleanly onto traditional distributed computing abstractions. 
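The bandwidth arithmetic behind that 50% figure is easy to check: casting 32-bit gradients to 16-bit before synchronization halves the bytes on the wire. A sketch with NumPy, using arbitrary layer shapes as an assumption:

```python
import numpy as np

def comm_bytes(grads, dtype):
    """Bytes shipped per synchronization step if gradients are cast to dtype."""
    return sum(g.size for g in grads) * np.dtype(dtype).itemsize

# Hypothetical per-layer gradients for a small model
grads = [np.random.randn(1024, 1024).astype(np.float32) for _ in range(4)]

full = comm_bytes(grads, np.float32)  # what a naive all-reduce ships
half = comm_bytes(grads, np.float16)  # precision-aware: cast before sending
savings = 1 - half / full             # exactly 0.5
```

At hundreds of billions of parameters the same ratio holds, which is why precision-aware communication protocols are attractive: the reduction applies to the synchronization traffic that, as noted above, can dominate total training time.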

Daily Tech Digest - July 16, 2025


Quote for the day:

"Whatever the mind of man can conceive and believe, it can achieve." -- Napoleon Hill


The Seventh Wave: How AI Will Change the Technology Industry

AI presents three threats to the software industry: Cheap code: TuringBots, using generative AI to create software, threatens the low-code/no-code players. Cheap replacement: Software systems, be they CRM or ERP, are structured databases – repositories for client records or financial records. Generative AI, coupled with agentic AI, holds out the promise of a new way to manage this data, opening the door to an enterprising generation of tech companies that will offer AI CRM, AI financials, AI database, AI logistics, etc. ... Better functionality: AI-native systems will continually learn and flex and adapt without millions of dollars of consulting and customization. They hold the promise of being up to date and always ready to take on new business problems and challenges without rebuilds. When the business and process changes, the tech will learn and change. ... On one hand, the legacy software systems that PwC, Deloitte, and others have implemented for decades and that comprise much of their expertise will be challenged in the short term and shrink in the long term. Simultaneously, there will be a massive demand for expertise in AI. Cognizant, Capgemini, and others will be called on to help companies implement AI computing systems and migrate away from legacy vendors. Forrester believes that the tech services sector will grow by 3.6% in 2025.


Software Security Imperative: Forging a Unified Standard of Care

The debate surrounding liability in the open source ecosystem requires careful consideration. Imposing direct liability on individual open source maintainers could stifle the very innovation that drives the industry forward. It risks dismantling the vast ecosystem that countless developers rely upon. ... The software bill of materials (SBOM) is rapidly transitioning from a nascent concept to an undeniable business necessity. As regulatory pressures intensify, driven by a growing awareness of software supply chain risks, a robust SBOM strategy is becoming critical for organizational survival in the tech landscape. But the value of SBOMs extends far beyond a single software development project. While often considered for open source software, an SBOM provides visibility across the entire software ecosystem. It illuminates components from third-party commercial software, helps manage data across merged projects and validates code from external contributors or subcontractors — any code integrated into a larger system. ... The path to a secure digital future requires commitment from all stakeholders. Technology companies must adopt comprehensive security practices, regulators must craft thoughtful policies that encourage innovation while holding organizations accountable and the broader ecosystem must support the collaborative development of practical and effective standards.


The 4 Types of Project Managers

The prophet type is all about taking risks and pushing boundaries. They don’t play by the rules; they make their own. And they’re not just thinking outside the box, they’re throwing the box away altogether. It’s like a rebel without a cause, except this rebel has a cause – growth. These visionaries thrive in ambiguity and uncertainty, seeing potential where others see only chaos or impossibility. They often face resistance from more conservative team members who prefer predictable outcomes and established processes. ... The gambler type is all about taking chances and making big bets. They’re not afraid to roll the dice and see what happens. And while they play by the rules of the game, they don’t have a good business case to back up their bets. It’s like convincing your boss to let you play video games all day because you just have a hunch it will improve your productivity. But don’t worry, the gambler type isn’t just blindly throwing money around. They seek to engage other members of the organization who are also up for a little risk-taking. ... The expert type is all about challenging the existing strategy by pursuing growth opportunities that lie outside the current strategy, but are backed up by solid quantitative evidence. They’re like the detectives of the business world, following the clues and gathering the evidence to make their case. And while the growth opportunities are well-supported and should be feasible, the challenge is getting other organizational members to listen to their advice.


OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they turn into actions. But the researchers warn this transparency is fragile and could vanish as AI technology advances. ... “AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments. ... When AI models misbehave — exploiting training flaws, manipulating data, or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts. Jakub Pachocki, OpenAI’s chief scientist and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness & interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.


Unmasking AsyncRAT: Navigating the labyrinth of forks

We believe that the groundwork for AsyncRAT was laid earlier by the Quasar RAT, which has been available on GitHub since 2015 and features a similar approach. Both are written in C#; however, their codebases differ fundamentally, suggesting that AsyncRAT was not a mere fork of Quasar, but a complete rewrite. A fork, in this context, is a personal copy of someone else’s repository that one can freely modify without affecting the original project. The main link that ties them together lies in the custom cryptography classes used to decrypt the malware configuration settings. ... Ever since it was released to the public, AsyncRAT has spawned a multitude of new forks that have built upon its foundation. ... It’s also worth noting that DcRat’s plugin base builds upon AsyncRAT and further extends its functionality. Among the added plugins are capabilities such as webcam access, microphone recording, Discord token theft, and “fun stuff”, a collection of plugins used for joke purposes like opening and closing the CD tray, blocking keyboard and mouse input, moving the mouse, turning off the monitor, etc. Notably, DcRat also introduces a simple ransomware plugin that uses the AES-256 cipher to encrypt files, with the decryption key distributed only once the plugin has been requested.


Repatriating AI workloads? A hefty data center retrofit awaits

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says. “As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.” Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack. “Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.” Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says. “By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”
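The rack-density jump translates directly into capacity: for a fixed power budget, ten times the per-rack draw means a tenth of the racks. A back-of-envelope sketch with hypothetical numbers (a 1 MW data hall, and the per-rack figures cited above):

```python
def racks_supported(total_kw, kw_per_rack):
    """Whole racks a facility's power budget can feed at a given density."""
    return total_kw // kw_per_rack

site_kw = 1000  # hypothetical 1 MW data hall
legacy = racks_supported(site_kw, 10)   # ~10 kW traditional racks
ai     = racks_supported(site_kw, 100)  # ~100 kW AI training racks
```

The same hall that fed 100 legacy racks feeds only 10 AI racks before the electrical backbone, let alone cooling, becomes the constraint, which is why cooling upgrades that free electrical capacity can effectively pay for the rest of the retrofit.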


Burnout, budgets and breaches – how can CISOs keep up?

As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty. This is also true for communication, especially post-breach. Organisations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. ... There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. ... To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment. But resilience isn’t built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. 


We Spend a Lot of Time Thinking Through the Worst - The Christ Hospital Health Network CISO

“We’ve spent a lot of time meeting with our business partners and talking through, ‘Hey, how would this specific part of the organization be able to run if this scenario happened?’” On top of internal preparations, Kobren shares that his team monitors incidents across the industry to draw lessons from real-world events. Given the unique threat landscape, he states, “We do spend a lot of time thinking through those scenarios because we know it’s one of the most attacked industries.” Moving forward, Kobren says that healthcare consistently ranks at the top when it comes to industries frequently targeted by cyberattacks. He elaborates that attackers have recognized the high impact of disrupting hospital services, making ransom demands more effective because organizations are desperate to restore operations. ... To strengthen identity security, Kobren follows a strong, centralized approach to access control. He mentions that the organization aims to manage “all access to all systems,” including remote and cloud-based applications. By integrating services with single sign-on (SSO), the team ensures control over user credentials: “We know that we are in control of your username and password.” This allows them to enforce password complexity, reset credentials when needed, and block accounts if security is compromised. Ultimately, Kobren states, “We want to be in control of as much of that process as possible” when it comes to identity management.


AI requires mature choices from companies

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies. The reverse is also important: being able to scale back. ... Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments. According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises. Embrace surprises, Zwakenberg explains. 


Prompt-Based DevOps and the Reimagined Terminal

New AI-driven CLI tools prove there's demand for something more intelligent in the command line, but most are limited — they're single-purpose apps tied to individual model providers instead of full environments. They are geared towards code generation, not infrastructure and production work. They hint at what's possible, but don't deliver the deeper integration AI-assisted development needs. That's not a flaw, it's an opportunity to rethink the terminal entirely. The terminal's core strengths — its imperative input and time-based log of actions — make it the perfect place to run not just commands, but launch agents. By evolving the terminal to accept natural language input, be more system-aware, and provide interactive feedback, we can boost productivity without sacrificing the control engineers rely on. ... With prompt-driven workflows, they don't have to switch between dashboards or copy-paste scripts from wikis because they simply describe what they want done, and an agent takes care of the rest. And because this is taking place in the terminal, the agent can use any CLI to gather and analyze information from across data sources. The result? Faster execution, more consistent results, and fewer mistakes. That doesn't mean engineers are sidelined. Instead, they're overseeing more projects at once. Their role shifts from doing every step to supervising workflows — monitoring agents, reviewing outputs, and stepping in when human judgment is needed.

Daily Tech Digest - July 15, 2025


Quote for the day:

“Rarely have I seen a situation where doing less than the other guy is a good strategy.” -- Jimmy Spithill


CyberArk: Rise in Machine Identities Poses New Risks

The CyberArk report outlines the substantial business consequences of failing to protect machine identities, leaving organizations vulnerable to costly outages and breaches. Seventy-two percent of organizations experienced at least one certificate-related outage over the past year - a sharp increase compared to prior years. Additionally, 50% reported security incidents or breaches stemming from compromised machine identities. Companies that have experienced non-human identity security breaches include xAI, Uber, Schneider Electric, Cloudflare and BeyondTrust, among others. "Machine identities of all kinds will continue to skyrocket over the next year, bringing not only greater complexity but also increased risks," said Kurt Sand, general manager of machine identity security at CyberArk. "Cybercriminals are increasingly targeting machine identities - from API keys to code-signing certificates - to exploit vulnerabilities, compromise systems and disrupt critical infrastructure, leaving even the most advanced businesses dangerously exposed." ... Fifty percent of security leaders reported security incidents or breaches linked to compromised machine identities in the previous year. These incidents led to delays in application launches for 51% of companies, customer-impacting outages for 44% and unauthorized access to sensitive systems for 43%.


What Can Businesses Do About Ethical Dilemmas Posed by AI?

Digital discrimination is a product of bias incorporated into AI algorithms and deployed at various levels of development and deployment. The biases mainly result from the data used to train the large language models (LLMs). If the data reflects previous inequities or underrepresents certain social groups, the algorithm has the potential to learn and perpetuate those inequities. Biases may occasionally culminate in contextual abuse when an algorithm is used beyond the environment or audience for which it was intended or trained. Such a mismatch may result in poor predictions, misclassifications, or unfair treatment of particular groups. Lack of monitoring and transparency merely adds to the problem. In the absence of oversight, biased results go undiscovered. ... Human-in-the-loop systems allow intervention in real time whenever AI acts unjustly or unexpectedly, thus minimizing potential harm and reinforcing trust. Human judgment makes choices more inclusive and socially sensitive by including cultural, emotional, or situational elements, which AI lacks. When humans remain in the loop of decision-making, accountability is shared and traceable. This removes ethical blind spots and holds users accountable for consequences.
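A human-in-the-loop gate of the kind described can be sketched in a few lines; the confidence threshold, labels, and callable standing in for the review queue are illustrative assumptions, not any production system's API.

```python
def decide(score, threshold=0.85, human_review=None):
    """Auto-approve only high-confidence model decisions; route the
    rest to a person. `human_review` stands in for the manual queue."""
    if score >= threshold:
        return {"decision": "auto-approve", "reviewed_by": "model"}
    verdict = human_review() if human_review else "escalated"
    return {"decision": verdict, "reviewed_by": "human"}

auto = decide(0.97)
manual = decide(0.60, human_review=lambda: "approve-with-conditions")
```

Because every low-confidence case carries a `reviewed_by` record, accountability stays traceable: each outcome names whether a model or a person made the call.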


Beyond the hype: AI disruption in India’s legal practice

The competitive dynamics are stark. When AI can complete a ten-hour task in two hours, firms face a pricing paradox: how to maintain profitability while passing efficiency gains on to clients? Traditional hourly billing models become unsustainable when the underlying time economics change dramatically. ... Effective AI integration hinges on a strong technological foundation, encompassing secure data architecture, advanced cybersecurity measures and seamless interoperability between new systems and existing platforms. SAM’s centralised Harvey AI approach and CAM’s multi-tool strategy both imply significant investment in these backend capabilities. ... Merely automating existing workflows fails to leverage AI’s transformative potential. To unlock AI’s full transformative value, firms must rethink their legal processes – streamlining tasks, reallocating human resources to higher-order functions and embedding AI at the core of decision-making processes and document production cycles. ... AI enables alternative service models that go beyond the billable hour. Firms that rethink how they price, say by offering subscription-based or outcome-driven services, and position themselves as strategic partners rather than task executors will be best positioned to capture long-term client value in an AI-first legal economy.


‘Chronodebt’: The lose/lose situation few CIOs can escape

One needn’t be an expert in the field of technical architecture to know that basing a capability as essential as air traffic control on such obviously obsolete technology is a bad idea. Someone should lose their job over this. And yet, nobody has lost their job over this, nor should they have. That’s because the root cause of the FAA’s woes — poor chronodebt management, in case you haven’t been paying attention — is a discipline that’s rarely tracked by reliable metrics and almost-as-rarely budgeted for. Metrics first: While the discipline of IT project estimation is far from reliable, it’s good enough to be useful in estimating chronodebt’s remediation costs — in the FAA’s case what it would have to spend to fix or replace its integrations and the integration platforms on which those integrations rely. That’s good enough, with no need for precision. Those running the FAA for all these years could, that is, estimate the cost of replacing the programs used to export and update its repositories, and replacing the 3 ½” diskettes and paper strips on which they rely. But, telling you what you already know, good business decisions are based not just on estimated costs, but on benefits netted against those costs. The problem with chronodebt is that there are no clear and obvious ways to quantify the benefits to be had by reducing it.


Can System Initiative fix devops?

System Initiative turns traditional devops on its head. It translates what would normally be infrastructure configuration code into data, creating digital twins that model the infrastructure. Actions like restarting servers or running complex deployments are expressed as functions, then chained together in a dynamic, graphical UI. A living diagram of your infrastructure refreshes with your changes. Digital twins allow the system to automatically infer workflows and changes of state. “We’re modeling the world as it is,” says Jacob. For example, when you connect a Docker container to a new Amazon Elastic Container Service instance, System Initiative recognizes the relationship and updates the model accordingly. Developers can turn workflows — like deploying a container on AWS — into reusable models with just a few clicks, improving speed. The GUI-driven platform auto-generates API calls to cloud infrastructure under the hood. ... An abstraction like System Initiative could embrace this flexibility while bringing uniformity to how infrastructure is modeled and operated across clouds. The multicloud implications are especially intriguing, given the rise in adoption of multiple clouds and the scarcity of strong cross-cloud management tools. A visual model of the environment makes it easier for devops teams to collaborate based on a shared understanding, says Jacob — removing bottlenecks, speeding feedback loops, and accelerating time to value.
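
The "infrastructure as data" idea described above can be made concrete with a small sketch. This is purely conceptual, assuming a toy data model: the component names, fields, and functions below are illustrative, not System Initiative's actual model or API.

```python
# Digital twins as plain data; actions as functions over that data.
infrastructure = {
    "web-container": {"type": "docker_container", "image": "web:1.0", "runs_on": None},
    "ecs-cluster": {"type": "ecs_service", "region": "us-east-1"},
}

def connect(model, child, parent):
    """Relate two twins; relationships let the system infer changes of state."""
    model[child]["runs_on"] = parent
    return model

def inferred_actions(model):
    # A real engine would infer full workflows; here we just detect
    # containers that gained a host and therefore need deployment.
    return [name for name, twin in model.items()
            if twin.get("runs_on") and twin["type"] == "docker_container"]

connect(infrastructure, "web-container", "ecs-cluster")
print(inferred_actions(infrastructure))  # ['web-container']
```

Because the twins are data rather than configuration code, connecting two of them is an ordinary data update, from which the needed workflow can be derived.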


An exodus evolves: The new digital infrastructure market

Regulatory pressures have crystallised around concerns over reliance on a small number of US-based cloud providers. With some hyperscalers openly admitting that they cannot guarantee data stays within a jurisdiction during transfer, other types of infrastructure make it easier to maintain compliance with UK and EU regulations. This is a clear strategy to avoid future financial and reputational damage. ... 2025 is a pivotal year for digital infrastructure. Public cloud will remain an essential part of the IT landscape. But the future of data strategy lies in making informed, strategic decisions, leveraging the right mix of infrastructure solutions for specific workloads and business needs. As part of our research, we assessed the shape of this hybrid market. ... With one eye to the future, UK-based cloud providers must be positioned as a strategic advantage, offering benefits such as data sovereignty, regulatory compliance, and reduced latency. Businesses will need to situate themselves ever more precisely on the spectrum of digital infrastructure. Their location will reflect how they embrace a hybrid model that balances public cloud, private cloud, colocation and on-premise options. This approach will not only optimise performance and costs but also provide long-term resilience in an evolving digital economy.


How Trump's Cyber Cuts Dismantle Federal Information Sharing

"The budget cuts, personnel reductions and other policy changes have decreased the volume and frequency of CISA's information sharing activities in both formal and informal channels," Daniel told ISMG. While sector-specific ISACs still share information, threat sharing efforts tied to federal funding - such as the Multi-State ISAC, which supports state and local governments - "have been negatively affected," he said. One former CISA staffer who recently accepted the administration's deferred resignation offer told ISMG the agency's information-sharing efforts "were among the first to take a hit" from the administration's cuts, with many feeling pressured into silence. ... Analysts have also warned that cuts to cyber staff across federal agencies and risks to initiatives including the National Vulnerability Database and Common Vulnerabilities and Exposures program could harm cybersecurity far beyond U.S. borders. The CVE program is dealing with backlogs and a recent threat to shut down funding over a federal contracting issue. Failure of the CVE Program "would have wide impacts on vulnerability management efficiency and effectiveness globally," said John Banghart, senior director for cybersecurity services at Venable and a key architect of the Obama administration's cybersecurity policy as a former director for federal cybersecurity for the National Security Council.


Securing vehicles as they become platforms for code and data

Recently, security researchers have demonstrated real-world attacks against connected cars, such as wireless brake manipulation on heavy trucks by spoofing J-bus diagnostic packets. Another very recent example is successful attacks against autonomous car LIDAR systems. As EVs and advanced cars become more pervasive across our society, we expect these types of attacks and methods to continue to grow in complexity, which makes a continuous, real-time approach to securing the entire ecosystem (from charger, to car, to driver) all the more important. ... Over-the-air (OTA) update hijacking is very real and often enabled by poor security design, such as lack of encryption, improper authentication between the car and backend, and lack of integrity or checksum validation. Attack vectors that the traditional computer industry has dealt with for years are now becoming a harsh reality in the automotive sector. Luckily, many of the same approaches used to mitigate these risks in IT can also apply here ... When we look at just the automobile, we have a variety of connected systems which typically all come from different manufacturers (Android Automotive, or QNX as examples), which increases the potential for supply chain abuse. We also have devices the driver introduces that interact with the car’s APIs, creating new entry points for attackers.


Strategizing with AI: How leaders can upgrade strategic planning with multi-agent platforms

Building resiliency and optionality into a strategic plan challenges humans’ cognitive (and financial) bandwidth. The seemingly endless array of future scenarios, coupled with our own human biases, conspires to anchor our understanding of the future in what we’ve seen in the past. Generative AI (GenAI) can help overcome this common organizational tendency toward entrenched thinking, and mitigate the challenges of being human, while exploiting LLMs’ creativity as well as their ability to mirror human behavioral patterns. ... In fact, our argument reflects our own experience using a multi-agent LLM simulation platform built by the BCG Henderson Institute. We’ve used this platform to mirror actual war games and scenario planning sessions we’ve led with clients in the past. As we’ve seen firsthand, what makes an LLM multi-agent simulation so powerful is the possibility of exploiting two unique features of GenAI—its anthropomorphism, or ability to mimic human behavior, and its stochasticity, or creativity. LLMs can role-play in remarkably human-like fashion: Research by Stanford and Google published earlier this year suggests that LLMs can simulate individual personalities closely enough to reproduce those individuals’ responses to certain types of surveys with 85% accuracy.


The Network Challenges of IoT Integration

IoT interoperability and compatible security protocols are a particular challenge. Although NIST and ISO, among other organizations, have issued IoT standards, smaller IoT manufacturers don't always have the resources to follow their guidance. This becomes a network problem because companies have to retool these IoT devices before they can be used on their enterprise networks. Moreover, because many IoT gadgets are delivered with default security settings that are easy to undo, each device has to be hand-configured to ensure it meets company security standards. To avoid potential interoperability pitfalls, network staff should evaluate prospective technology before anything is purchased. ... First, to achieve high QoS, every data pipeline on the network must be analyzed -- as well as every single system, application and network device. Once assessed, each component must be hand-calibrated to run at the highest performance levels possible. This is a detailed and specialized job. Most network staff don't have trained QoS technicians on board, so they must go externally for help. Second, which areas of the business get maximum QoS, and which don't? A medical clinic, for example, requires high QoS to support a telehealth application where doctors and patients communicate. 

Daily Tech Digest - July 12, 2025


Quote for the day:

"If you do what you’ve always done, you’ll get what you’ve always gotten." -- Tony Robbins


Why the Value of CVE Mitigation Outweighs the Costs

When it comes to CVEs and continuous monitoring, meeting compliance requirements can be daunting and confusing. Compliance isn’t achieved once; it is a continuous maintenance process. Compliance frameworks might require additional standards, such as Federal Information Processing Standards (FIPS), Federal Risk and Authorization Management Program (FedRAMP), Security Technical Implementation Guides (STIGs) and more, which add an extra layer of complexity and time. The findings are clear. Telecommunications and infrastructure companies reported an average of $3 million in new revenue annually by improving their container security enough to qualify for security-sensitive contracts. Healthcare organizations averaged $7.3 million in new revenue, often driven by unlocking expansion into compliance-heavy markets. ... The industry has long championed “shifting security left,” or embedding checks earlier in the pipeline to ensure security measures are incorporated throughout the entire software development life cycle. However, as CVE fatigue worsens, many teams are realizing they need to “start left.” That means: using hardened, minimal container images by default; automating CVE triage and patching through reproducible builds; and investing in secure-by-default infrastructure that makes vulnerability management invisible to most developers.
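
Automated CVE triage, as advocated above, can be sketched in a few lines: filter scanner findings down to the ones worth an engineer's time. This is a hypothetical sketch; the field names (`severity`, `fixed_version`) are illustrative and do not match any specific scanner's output format.

```python
def triage(findings, min_severity="HIGH"):
    """Keep findings at or above min_severity that already have a vendor fix."""
    order = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}
    floor = order[min_severity]
    actionable = [
        f for f in findings
        if order[f["severity"]] >= floor and f.get("fixed_version")
    ]
    # Highest severity first, so patch work starts with the worst CVEs.
    return sorted(actionable, key=lambda f: order[f["severity"]], reverse=True)

findings = [
    {"id": "CVE-2025-0001", "severity": "CRITICAL", "fixed_version": "1.2.3"},
    {"id": "CVE-2025-0002", "severity": "LOW", "fixed_version": "2.0.0"},
    {"id": "CVE-2025-0003", "severity": "HIGH", "fixed_version": None},  # no fix yet
]

print([f["id"] for f in triage(findings)])  # ['CVE-2025-0001']
```

The point of "starting left" is that this kind of filter runs in the build pipeline, so most developers never see the unfixable or low-priority noise at all.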


Generative AI: A Self-Study Roadmap

Building generative AI applications requires comfort with Python programming and basic machine learning concepts, but you don't need deep expertise in neural network architecture or advanced mathematics. Most generative AI work happens at the application layer, using APIs and frameworks rather than implementing algorithms from scratch. ... Modern generative AI development centers around foundation models accessed through APIs. This API-first approach offers several advantages: you get access to cutting-edge capabilities without managing infrastructure, you can experiment with different models quickly, and you can focus on application logic rather than model implementation. ... Generative AI applications require different API design patterns than traditional web services. Streaming responses improve user experience for long-form generation, allowing users to see content as it's generated. Async processing handles variable generation times without blocking other operations. ... While foundation models provide impressive capabilities out of the box, some applications benefit from customization to specific domains or tasks. Consider fine-tuning when you have high-quality, domain-specific data that foundation models don't handle well—specialized technical writing, industry-specific terminology, or unique output formats requiring consistent structure.
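
The streaming pattern mentioned above can be illustrated with a minimal sketch. `fake_model_stream` stands in for a real model API's streaming endpoint (an assumption for illustration, not any vendor's SDK); the essential shape is a generator consumed chunk by chunk.

```python
def fake_model_stream(prompt):
    # A real streaming API would yield tokens as they are generated,
    # rather than returning the full completion at once.
    for token in ["The", " answer", " is", " 42", "."]:
        yield token

def stream_to_user(prompt):
    pieces = []
    for chunk in fake_model_stream(prompt):
        pieces.append(chunk)  # in a UI, render each chunk immediately
    return "".join(pieces)

print(stream_to_user("What is the answer?"))  # The answer is 42.
```

The user sees the first token almost immediately instead of waiting for the whole response; for variable generation times, the same loop can be written with `async for` so other requests are not blocked.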


Announcing GenAI Processors: Build powerful and flexible Gemini applications

At its core, GenAI Processors treats all input and output as asynchronous streams of ProcessorParts (i.e., two-way aka bidirectional streaming). Think of it as standardized data parts (e.g., a chunk of audio, a text transcription, an image frame) flowing through your pipeline along with associated metadata. This stream-based API allows for seamless chaining and composition of different operations, from low-level data manipulation to high-level model calls. ... We anticipate a growing need for proactive LLM applications where responsiveness is critical. Even for non-streaming use cases, processing data as soon as it is available can significantly reduce latency and time to first token (TTFT), which is essential for building a good user experience. While many LLM APIs prioritize synchronous, simplified interfaces, GenAI Processors – by leveraging native Python features – offers a way to write responsive applications without making code more complex. ... GenAI Processors is currently in its early stages, and we believe it provides a solid foundation for tackling complex workflow and orchestration challenges in AI applications. While the Google GenAI SDK is available in multiple languages, GenAI Processors currently supports only Python.
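
The stream-of-parts idea can be sketched with plain asyncio. To be clear, this is a conceptual illustration of chaining async streams, not the GenAI Processors API itself; the processor names and part types are invented for the example.

```python
import asyncio

async def source(parts):
    # Emit a stream of parts (stand-ins for audio chunks, text, frames, ...).
    for p in parts:
        yield p

async def uppercase(stream):
    # A processor: consumes one async stream of parts, yields another.
    async for part in stream:
        yield part.upper()

async def tag(stream, label):
    # Processors compose by wrapping each other, forming a pipeline.
    async for part in stream:
        yield f"{label}:{part}"

async def run():
    pipeline = tag(uppercase(source(["audio", "text"])), "part")
    return [p async for p in pipeline]

result = asyncio.run(run())
print(result)  # ['part:AUDIO', 'part:TEXT']
```

Because every stage consumes and produces the same stream shape, downstream stages start working on the first part before upstream stages have finished, which is where the latency and TTFT gains come from.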


Scaling the 21st-century leadership factory

Identifying priority traits is critical; just as important, CEOs and their leadership teams must engage early and often with high-potential employees and unconventional thinkers in the organization, recognizing that innovation often comes from the edges of the business. Skip-level meetings are a powerful tool for this purpose. Most famously, Apple’s Steve Jobs would gather what he deemed the 100 most influential people at the company, including young engineers, to engage directly in strategy discussions—regardless of hierarchy or seniority. ... A culture of experimentation and learning is essential for leadership development—but it must be actively pursued. “Instillation of personal initiative, aggressiveness, and risk-taking doesn’t spring forward spontaneously,” General Jim Mattis explained in his 2019 book on leadership, Call Sign Chaos. “It must be cultivated for years and inculcated, even rewarded, in an organization’s culture. If the risk-takers are punished, then you will retain in your ranks only the risk averse,” he wrote. ... There are multiple ways to streamline decision-making, including redefining decision rights to focus on a handful of owners and distinguishing between different types of decisions, as not all choices are high stakes. 


Lessons learned from Siemens’ VMware licensing dispute

Siemens threatened to sue VMware if it didn’t provide ongoing support for the software and handed over a list of the software it was using that it wanted support for. Except that the list included software that it didn’t have any licenses for, perpetual or otherwise. Broadcom-owned VMware sued, Siemens countersued, and now the companies are battling over jurisdiction. Siemens wants the case to be heard in Germany, and VMware prefers the United States. Normally, if unlicensed copies of software are discovered during an audit, the customer pays the difference and maybe an additional penalty. After all, there are always minor mistakes. The vendors try to keep these costs at least somewhat reasonable, since at some point, customers will migrate from mission-critical software if the pain is high enough. ... For large companies, it can be hard to pivot quickly. Using open-source software can help reduce the risk of unexpected license changes, and, for many major tools there are third-party service providers that can offer ongoing support. Another option is SaaS software, Ringdahl says, because it does make license management a bit easier, since there’s usually transparency both for the customer and the vendor about how much usage the product is getting.


Microsoft says regulations and environmental issues are cramping its Euro expansion

One of the things that everyone needs to consider is how datacenter development in Europe is being enabled or impeded, Walsh said. "Because we have moratoriums coming at us. We have communities that don't want us there," she claimed, referring particularly to Ireland where local opposition to bit barns has been hardening because of the amount of electricity they consume and their environmental impact. Another area of discussion at the Datacloud keynote was the commercial models for acquiring datacenter capacity, which it was felt had become unfit for the new environment where large amounts are needed quickly. "From our perspective, time to market is essential. We've done a lot of leasing in the last two years, and that is all time for market pressure," Walsh said. "I also manage land acquisition and land development, which includes permitting. So the joy of doing that is that when my permits are late, I can lease so I can actually solve my own problems, which is amazing, but the way things are going, it's going to be very difficult to continue to lease the infrastructure using co-location style funding. It's just getting too big, and it's going to get harder and harder to get up the chain, for sure," she explained. ... "European regulations and planning are very slow, and things take 18 months longer than anywhere else," she told attendees at Bisnow's Datacenter Investment Conference and Expo (DICE) in Ireland.


350M Cars, 1B Devices Exposed to 1-Click Bluetooth RCE

The scope of affected systems is massive. The developer, OpenSynergy, proudly boasts on its homepage that Blue SDK — and RapidLaunch SDK, which is built on top of it and therefore also possibly vulnerable — has been shipped in 350 million cars. Those cars come from companies like Mercedes-Benz, Volkswagen, and Skoda, as well as a fourth known but unnamed company. Since Ford integrated Blue SDK into its Android-based in-vehicle infotainment (IVI) systems in November, Dark Reading has reached out to determine whether it too was exposed. ... Like any Bluetooth hack, the one major hurdle in actually exploiting these vulnerabilities is physical proximity. An attacker would likely have to position themselves within around 10 meters of a target device in order to pair with it, and the device would have to comply. Because Blue SDK is merely a framework, different devices might block pairing, limit the number of pairing requests an attacker could attempt, or at least require a click to accept a pairing. This is a point of contention between the researchers and Volkswagen. ... "Usually, in modern cars, an infotainment system can be turned on without activating the ignition. For example, in the Volkswagen ID.4 and Skoda Superb, it's not necessary," he says, though the case may vary vehicle to vehicle. 


Leaders will soon be managing AI agents – these are the skills they'll need, according to experts

An AI agent is essentially just "a piece of code", says Jarah Euston, CEO and Co-Founder of AI-powered labour platform WorkWhile, which connects frontline workers to shifts. "It may not have the same understanding, empathy, awareness of the politics of your organization, of the fears or concerns or ambitions of the people around that it is serving. "So managers have to be aware that the agent is only as good as how you've trained it. I don't think we're close yet to having agents that can operate without any human oversight. "As a manager, you want to leverage the AI to make you and your team more productive, but you constantly have to be checking, iterating and training your tools to get the most out of them."  ... Technological skills are expected to become increasingly vital over the next five years, outpacing the growth of all other skill categories. Leading the way are AI and big data, followed closely by networking, cybersecurity and overall technological literacy. The so-called 'soft skills' of creative thinking and resilience, flexibility and agility are also rising in importance, along with curiosity and lifelong learning. Empathy is one skill AI agents can't learn, says Women in Tech's Moore Aoki, and she believes this will advantage women.


Common Master Data Management (MDM) Pitfalls

In addition to failing to connect MDM’s value with business outcomes, “People start with MDM by jumping in with the technology,” Cooper said. “Then, they try to fit the people, processes, and master data into their selected technology.” Moreover, in the process of prioritizing technology first, organizations take for granted that they have good data quality, data that is clean and fit for purpose. Then, during a major initiative, such as migrating to a cloud environment, they discover their data is not so clean. ... Organizations fall into the pitfalls above and others because they try to do it alone, and most have never done MDM before. Instead, “Organizations have different capabilities with MDM,” said Cooper, “and you don’t know what you don’t know.” ... Connecting the MDM program to business objectives requires talking with the stakeholders across the organization, especially divisions with direct financial risks such as sales, marketing, procurement, and supply. Cooper said readers should learn the goals of each unit and how they measure success in growing revenue, reducing cost, mitigating risk, or operating more efficiently. ... Cooper advised focusing on data quality – e.g., through reference data – rather than technology. In the figure below, a company has data about a client, Emerson Electric, as shown on the left. 


Why Cloud Native Security Is More Complex Than You Think

Enterprise security tooling can help with more than just the monitoring of these vulnerabilities, though. Often, older vulnerabilities that have already been patched by the software vendor will come with “fix status” advice, where a specific package version is shown to the developer or analyst responsible for remediating the vulnerability. When they upgrade the current package to that later version, the vulnerability alert is resolved. To confuse things further, the applications running in containers or serverless functions also need to be checked for non-compliance. The warnings that security tooling may present when these applications are checked against recognised compliance standards, frameworks or benchmarks are wide and varied. For example, if a serverless function has overly permissive access to another cloud service and an attacker gets access to the serverless function’s code via a vulnerability, the attack’s blast radius could increase exponentially as a result. Or, compliance checks often reveal containers running with inappropriate network settings. ... At a high level, these components and, importantly, how they interact with each other are why applications running in the cloud require time, effort and specialist expertise to secure.
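
A compliance check like the overly-permissive-function example can be sketched as a small policy linter. The policy shape below mimics an AWS IAM statement, but this is an illustrative checker written for this article, not a real compliance tool.

```python
def overly_permissive(policy):
    """Return Allow statements that grant wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows a single string or a list for both fields.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

policy = {"Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},  # blast-radius risk
]}

print(len(overly_permissive(policy)))  # 1
```

A wildcard grant like the second statement is exactly what turns a single compromised function into a much larger incident, which is why benchmarks flag it.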

Daily Tech Digest - July 11, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you made them feel." -- Mary Kay Ash


Throwing AI at Developers Won’t Fix Their Problems

Organizations are spending too much time, money and energy focusing on the tools themselves. “Should we use OpenAI or Anthropic? Copilot or Cursor?” We see two broad patterns for how organizations approach AI tool adoption. The first is that leadership has a relationship with a certain vendor or just a personal preference, so they pick a tool and mandate it. This can work, but you’ll often get poor results — not because the tool is bad, but because the market is moving too fast for centralized teams to keep up. ... The second model, which generally works much better, is to allow early adopters to try new tools and find what works. This gives developers autonomy to improve their own workflows and reduces the need for a central team to test every new tool exhaustively. Comparing the tools by features or technology is less important every day. You’ll waste a lot of energy debating minor differences that won’t matter next year. Instead, focus on what problem you want to solve. Are you trying to improve testing? Code review? Documentation? Incident response? Figure out the goal first. Then see if an AI tool (or any tool) actually helps. If you don’t, you’ll just make DevEx worse: You’ll have a landscape of 100 tools nobody knows how to use, and you’ll deliver no real value.


Anatomy of a Scattered Spider attack: A growing ransomware threat evolves

Scattered Spider began its attack against the unnamed organization’s public-facing Oracle Cloud authentication portal, targeting its chief financial officer. Using personal details, such as the CFO’s date of birth and the last four digits of their Social Security number obtained from public sources and previous breaches, Scattered Spider impersonated the CFO in a call to the company’s help desk, tricking help desk staff into resetting the CFO’s registered device and credentials. ... The cybercriminals extracted more than 1,400 secrets by taking advantage of compromised admin accounts tied to the target’s CyberArk password vault and likely an automated script. Scattered Spider granted administrator roles to compromised user accounts before using tools, including ngrok, to maintain access on compromised virtual machines. ... Scattered Spider’s operations have become more aggressive and compressed. “Within hours of initial compromise — often via social engineering — they escalate privileges, move laterally, establish persistence, and begin reconnaissance across both cloud and on-prem environments,” Beek explained. “This speed and fluidity represent a significant escalation in operational maturity.” ... Defending effectively against Scattered Spider involves tackling both human and technical vulnerabilities, ReliaQuest researchers noted.


Data governance: The contract layer that makes agentic systems possible

Today, AI has changed everything. Lineage, access enforcement and cataloging must operate in real time and cover vastly more data types and sources. Models consume data continuously and make decisions instantly, raising the stakes for mistakes or gaps in oversight. What used to be a once-a-week check is now an always-on discipline. This transformation has turned data governance from a checklist into a living system that protects quality and trust at scale. ... One of the biggest misconceptions is that governance slows down innovation. In reality, good governance speeds it up. By clarifying ownership, policies and data quality from the start, teams avoid spending precious time reconciling mismatches and can focus on delivering AI that works as intended. A clear governance framework reduces unnecessary data copies, lowers regulatory risk and prevents AI from producing unpredictable results. Getting this right also requires a culture shift. Producers and consumers alike need to see themselves as co-stewards of shared data products. ... Enterprises deploying agentic AI cannot leave governance behind. These systems run continuously, make autonomous decisions and rely on accurate context to stay relevant. Governance must move from passive checks to an active, embedded foundation within both architecture and culture.


How CIOs Are Navigating Today’s Hyper Volatility

“When it comes to changing dynamics, [such as] AI and driving innovation, there are several things that people like me are dealing with right now. There is an impact on how you hire people, staffing, how to structure your organization,” says Johar. “There is an impact on risk. I’m also responsible within my organization for managing the risk of data, privacy and security, and AI is bringing a new dimension to that risk. It’s an opportunity, but it's also a risk. How you structure your organization, how you manage risk, how you drive transformation -- these things are all connected.” ... “[CIOs] are emerging as transformation leaders, so they need to understand how to navigate the culture change of an organization, the change in people in an organization. They must know how to tell stories so they can get the organization on board,” says Danielle Phaneuf, a partner, PwC cloud and digital strategy operating model leader. “Their mindset is different, so they're embracing the transformation with a product model that allows them to move faster [and] allows them to think long term. They’re building these new muscles around change leadership and engaging the business early, co-creating solutions, not thinking they must solve everything on their own, and doing that in an agile way.”


What Is AI Agent Washing And Why Is It A Risk To Businesses?

You’ve heard of greenwashing and AI-washing? Well, now it seems that the hype-merchants and bandwagon-jumpers with technology to sell have come up with a new (and perhaps predictably inevitable) scam. Analysts at Gartner say unscrupulous vendors are increasingly engaging in "agent washing" and say that out of “thousands” of supposedly agentic AI products tested, only 130 truly lived up to the claim. ... So, what’s the scam? Well, according to the report, agent washing involves passing off existing automation technology, including LLM-powered chatbots and robotic process automation, as agentic, when in reality it lacks those capabilities. ... Tools that claim to be agentic because they orchestrate and pull together multiple AI systems, such as marketing automation platforms and workflow automation tools, are stretching the term, too, unless they are also capable of autonomously coordinating the usage of those tools for long-term planning and decision-making. A few more hypothetical examples: While an AI chatbot-based system can write emails on command, an agentic system might write emails, identify the best recipients for marketing purposes, send the emails out, monitor responses, and then generate follow-up emails, tailored to individual responders.


Agentic AI Architecture Framework for Enterprises

The critical decision point lies in understanding when predictability and control take precedence versus when flexibility and autonomous decision-making deliver greater value. This understanding leads to a fundamental principle: start with the simplest effective solution, adding complexity only when clear business value justifies the additional operational overhead and risk. ... Enterprise deployment of agentic AI creates an inherent tension between AI autonomy and organizational governance requirements. Our analysis of successful MVPs and ongoing production implementations across multiple industries reveals three distinct architectural tiers, each representing different trade-offs between capability and control while anticipating emerging regulatory frameworks like the EU AI Act and others to come. These tiers form a systematic maturity progression, so organizations can build competency and stakeholder trust incrementally before advancing to more sophisticated implementations. ... Our three-tier progression manifests differently across industries, reflecting unique regulatory environments, risk tolerances, customer expectations and operational requirements. Understanding these industry-specific approaches enables organizations to tailor their implementation strategies while maintaining systematic capability development.


Rewriting the rules of enterprise architecture with AI agents

In enterprise architecture, agentic AI systems can be deployed as digital “co-architects”, process optimizers, compliance monitors and scenario planners — each acting with a degree of independence and intelligence previously impossible. So why agentic AI and simulations for governance…and why now? Governance in enterprise architecture is about ensuring that IT systems, processes and data align with business goals, comply with regulations and adapt to change. ... These methods are increasingly inadequate in the face of real-time business dynamics. Agentic AI makes a new model achievable: governance that is continuous, adaptive and proactive. Agentic systems can monitor the enterprise landscape, simulate the impact of changes, enforce policies autonomously and even resolve conflicts or escalate issues when necessary. This results in governance that is both more robust and more responsive to business needs. Gartner’s research reinforces the impact of agency and simulations on enterprise architecture’s future. According to its Enterprise Architecture Services Predictions for 2025, 55% of EA teams will act as coordinators of autonomous governance automation by 2028 and shift from a direct oversight role to that of model curation and certification, agent simulations and oversight, and business outcome alignment with machine-led governance.
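The monitor/simulate/enforce/escalate loop described above can be illustrated with a toy decision function. The policies, field names and thresholds below are invented for illustration; a real governance agent would run genuine impact simulations rather than a lookup.

```python
# Invented example of continuous governance: simulate a proposed change
# against policy, auto-remediate where the agent has authority, and
# escalate to humans where it does not.

POLICIES = {"max_latency_ms": 200, "data_region": "EU"}

def simulate_impact(change: dict) -> dict:
    # Stand-in for a real simulation: project the proposed values onto
    # the current policy baseline.
    return {k: change.get(k, v) for k, v in POLICIES.items()}

def govern(change: dict) -> str:
    projected = simulate_impact(change)
    if projected["data_region"] != POLICIES["data_region"]:
        return "escalate"           # data-residency breach: human decision
    if projected["max_latency_ms"] > POLICIES["max_latency_ms"]:
        return "auto-remediate"     # within the agent's delegated authority
    return "approve"
```

The design choice worth noting is the split between what the agent may fix autonomously and what it must hand back to people — the same oversight boundary Gartner expects EA teams to curate and certify.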


With tools like Alpha and Coherence, we’re turning risk management from reactive to real-time

The days when risk management was reactive and process-heavy, when you followed a rigid set of processes, reacted to risks as they were observed in the system, and then worked through a standard operating procedure step by step, are behind us. That scenario held for decades. But with AI and intelligence-led solution capabilities transforming the landscape, it has become proactive and real-time. What we propose follows our Digital Knowledge Operations framework. The three words in it: digital, knowledge, and operations. Digital makes you proactive because you’re building solutions not for today but for the future. You rely on knowledge, and you transform your operations. That’s the philosophy that unlocks this proactive ability to capture the possibilities of risk in real time. That drove us to build something like Alpha. It’s essentially a strong and effective transaction monitoring framework and tool that can detect false alerts with 75% to 80% accuracy. Now, in risk management, a lot of operational bandwidth, effort, and talent is lost in assessing all of the false positives that risk management procedures generate. Most of them can be taken care of by a combination of machine learning, artificial intelligence, and some sort of robotics.
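The false-positive triage the interviewee describes can be sketched as a score-and-route function. All features and thresholds below are invented; a tool like Alpha would use trained models rather than hand-written rules, but the operational payoff is the same: analysts only see the residue.

```python
# Toy sketch of ML-assisted alert triage: score how likely each alert is a
# false positive and auto-close the confident ones, so analyst bandwidth is
# spent only on the remainder. Features and thresholds are illustrative.

def false_positive_score(alert: dict) -> float:
    score = 0.0
    if alert["amount"] < 100:          # low-value transactions rarely escalate
        score += 0.5
    if alert["known_counterparty"]:    # long-standing relationship
        score += 0.3
    if not alert["cross_border"]:      # domestic payment
        score += 0.2
    return score

def triage(alerts: list[dict], threshold: float = 0.8):
    auto_closed, for_review = [], []
    for alert in alerts:
        bucket = (auto_closed
                  if false_positive_score(alert) >= threshold
                  else for_review)
        bucket.append(alert)
    return auto_closed, for_review
```

In practice the threshold encodes risk appetite: raising it auto-closes fewer alerts but lowers the chance of suppressing a genuine hit, which is why such systems are tuned with compliance teams rather than purely for accuracy.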


Banking on Better Data: Why Financial Institutions Need an Agile Cloud Strategy

The urgency to migrate to the cloud is particularly pronounced in the banking sector, where legacy institutions are under mounting pressure to keep pace with digital-native competitors. These agile challengers can roll out new features in a matter of weeks, while traditional banks remain constrained by older mainframes. It is clear that the risk of standing still is no longer theoretical. Earlier this year, over 1.2 million UK customers experienced banking outages on pay day, a critical moment for both individuals and businesses. Several major retail banks reported widespread issues, including login failures and prolonged delays in customer service. Far from being one-off glitches, these disruptions point to a broader pattern of structural fragility rooted in outdated technology. Unlike legacy systems, cloud-native platforms are engineered for adaptability, resilience, and real-time performance, traits that traditional banking environments have struggled to deliver. These failures weren’t just accidents; they were foreseeable outcomes of prolonged underinvestment in modernization. This reinforces a critical truth for traditional banks: cloud transformation is no longer a future aspiration, but an immediate requirement to safeguard customer trust and remain viable in a rapidly evolving market.


Why knowledge is the ultimate weapon in the Information Age

To turn AI into an asset rather than a liability, organisations must rethink their approach to knowledge management. At its core, knowledge management is a learning cycle centred on people, with technology acting as a force multiplier, not a substitute for judgment. The objective is to establish a virtuous loop in which data is collected, validated, and transformed into actionable insight. The tighter and more disciplined this cycle, the higher the quality of the resulting knowledge. In practice, this means treating AI as just another tool in the toolkit. ... In an age of information warfare, perception is the battleground. To stay ahead, decision-makers must be trained not just in AI tools but in understanding their strengths, limitations, and potential biases, including their own. The ability to critically assess AI-generated content is essential, not optional. More than static planning, modern organisations need situational awareness and strategic agility, embedding AI within a human-centric knowledge strategy. We can shift the balance in the information war by curating trusted sources, rigorously verifying content, and sustaining a culture of learning. This new knowledge ecosystem embraces uncertainty, leverages AI wisely, and keeps cognitive bias in check, wielding knowledge as a disciplined and secure strategic asset.