Daily Tech Digest - May 23, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


MCP, ACP, and Agent2Agent set standards for scalable AI results

“Without standardized protocols, companies will not be able to reap the maximum value from digital labor, or will be forced to build interoperability capabilities themselves, increasing technical debt,” he says. Protocols are also essential for AI security and scalability, because they will enable AI agents to validate each other, exchange data, and coordinate complex workflows, Lerhaupt adds. “The industry can build more robust and trustworthy multi-agent systems that integrate with existing infrastructure, encouraging innovation and collaboration instead of isolated, fragmented point solutions,” he says. ... ACP is “a universal protocol that transforms the fragmented landscape of today’s AI agents into inter-connected teammates,” writes Sandi Besen, ecosystem lead and AI research engineer at IBM Research, in Towards Data Science. “This unlocks new levels of interoperability, reuse, and scale.” ACP uses standard HTTP patterns for communication, making it easy to integrate into production, compared to JSON-RPC, which relies on more complex methods, Besen says. ... Agent2Agent, supported by more than 50 Google technology partners, will allow IT leaders to string a series of AI agents together, making it easier to get the specialized functionality their organizations need, Ensono’s Piazza says. Both ACP and Agent2Agent, with their focus on connecting AI agents, are complementary protocols to the model-centric MCP, their creators say.
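The contrast Besen draws can be seen in the shape of the requests themselves. A minimal sketch in Python, with a hypothetical agent URL and endpoint name (neither is taken from the ACP spec): a plain-HTTP pattern puts the operation in the method and path, while JSON-RPC wraps everything in an envelope sent to a single endpoint.

```python
import json

def http_style_request(agent_url, task):
    """Plain-HTTP (REST-style) pattern: the operation is the method + path.
    The /runs endpoint here is hypothetical, not from the ACP spec."""
    return {
        "method": "POST",
        "path": f"{agent_url}/runs",
        "body": json.dumps({"input": task}),
    }

def jsonrpc_style_request(agent_url, task):
    """JSON-RPC pattern: one endpoint, the operation lives in the envelope."""
    return {
        "method": "POST",
        "path": agent_url,
        "body": json.dumps({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "runs/create",
            "params": {"input": task},
        }),
    }

rest = http_style_request("https://agent.example", "summarize report")
rpc = jsonrpc_style_request("https://agent.example", "summarize report")
```

The HTTP-pattern request can be issued and inspected with ordinary web tooling, which is the ease-of-integration point Besen makes.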


It’s Time to Get Comfortable with Uncertainty in AI Model Training

“We noticed that some uncertainty models tend to be overconfident, even when the actual error in prediction is high,” said Bilbrey Pope. “This is common for most deep neural networks. But a model trained with SNAP gives a metric that mitigates this overconfidence. Ideally, you’d want to look at both prediction uncertainty and training data uncertainty to assess your overall model performance.” ... “AI should be able to accurately detect its knowledge boundaries,” said Choudhury. “We want our AI models to come with a confidence guarantee. We want to be able to make statements such as ‘This prediction provides 85% confidence that catalyst A is better than catalyst B, based on your requirements.’” In their published study, the researchers chose to benchmark their uncertainty method with one of the most advanced foundation models for atomistic materials chemistry, called MACE. The researchers calculated how well the model is trained to calculate the energy of specific families of materials. These calculations are important to understanding how well the AI model can approximate the more time- and energy-intensive methods that run on supercomputers. The results show what kinds of simulations can be calculated with confidence that the answers are accurate. 
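The idea of a model reporting its own confidence can be illustrated with a toy ensemble: the spread of predictions across ensemble members serves as a simple uncertainty estimate. This is only an illustrative stand-in, not the SNAP-based metric from the study.

```python
import statistics

def ensemble_predict(models, x):
    """Return (prediction, uncertainty): the ensemble mean is the prediction,
    the spread (standard deviation) across members is the uncertainty."""
    preds = [m(x) for m in models]
    return statistics.fmean(preds), statistics.pstdev(preds)

# Toy "ensemble": three linear models with slightly different learned weights.
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]
mean, std = ensemble_predict(models, 10.0)
```

When the members disagree (large `std`), the input likely sits near or beyond the model's knowledge boundary, which is the behavior the researchers want the model to report.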


Don’t let AI integration become your weakest link

Ironically, integration meant to boost efficiency can stifle innovation. Once a complex web of AI-interconnected systems exists, adding tools or modifying processes becomes a major architectural undertaking, not plug-and-play. It requires understanding interactions with central AI logic, potentially needing complex model re-training, integration point redevelopment, and extensive regression testing to avoid destabilisation. ... When AI integrates and automates decisions and workflows across systems based on learned patterns, it inherently optimises for the existing or dominant processes observed in the training data. While efficiency is the goal, there’s a tangible risk of inadvertently enforcing uniformity and suppressing valuable diversity in approaches. Different teams might have unique, effective methods deviating from the norm. An AI trained on the majority might flag these as errors, subtly discouraging creative problem-solving or context-specific adaptations. ... Feeding data from multiple sensitive systems (CRM, HR, finance, and communications) into central AI dramatically increases the scope and sensitivity of data processed and potentially exposed. Each integration point is another vector for data leakage or unauthorised access. Sensitive customer, employee, and financial data may flow across more boundaries and be aggregated in new ways, increasing the surface area for breaches or misuse.


Beyond API Uptime: Modern Metrics That Matter

A minuscule delay (measurable in API response times) in processing API requests can be as painful to a customer as a major outage. User behavior and expectations have evolved, and performance standards need to keep up. Traditional API monitoring tools are stuck in a binary paradigm of up versus down, despite the fact that modern, cloud native applications live in complex, distributed ecosystems. ... Measuring performance from multiple locations provides a more balanced and realistic view of user experience and can help uncover metrics you need to monitor, like location-specific latency: What’s fast in San Francisco might be slow in New York and terrible in London. ... The real value of IPM comes from how its core strengths, such as proactive synthetic testing, global monitoring agents, rich analytics with percentile-based metrics and experience-level objectives, interact and complement each other, Vasiliou told me. “IPM can proactively monitor single API URIs [uniform resource identifiers] or full API multistep transactions, even when users are not on your site or app. Many other monitors can also do this. It is only when you combine this with measuring performance from multiple locations, granular analytics and experience-level objectives that the value of the whole is greater than the sum of its parts,” Vasiliou said.
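Percentile-based, location-aware metrics of the kind described can be sketched in a few lines; the response times below are invented for illustration.

```python
def percentile(samples, pct):
    """Nearest-rank percentile; sufficient for a monitoring sketch."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

# Hypothetical response times (ms) from synthetic probes in three regions.
latency_ms = {
    "san-francisco": [42, 45, 44, 48, 41],
    "new-york": [80, 85, 90, 82, 88],
    "london": [240, 260, 255, 250, 245],
}

p95 = {region: percentile(samples, 95) for region, samples in latency_ms.items()}
```

A single global average would hide the fact that every probe in London exceeds an experience-level objective that San Francisco users comfortably meet, which is exactly the "fast in San Francisco, terrible in London" failure mode.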


Agentic AI shaping strategies and plans across sectors as AI agents swarm

“Without a robust identity model, agents can’t truly act autonomously or securely,” says the post. “The MCP-I (I for Identity) specification addresses this gap – introducing a practical, interoperable approach to agentic identity.” Vouched also offers its turnkey SaaS Vouched MCP Identity Server, which provides easy-to-integrate APIs and SDKs for enterprises and developers to embed strong identity verification into agent systems. While the Agent Reputation Directory and MCP-I specification are open and free to the public, the MCP Identity Server is available as a commercial offering. “Thinking through strong identity in advance is critical to building an agentic future that works,” says Peter Horadan, CEO of Vouched. “In some ways we’ve seen this movie before. For example, when our industry designed email, they never anticipated that there would be bad email senders. As a result, we’re still dealing with spam problems 50 years later.” ... An early slide outlining definitions tells us that AI agents are ushering in a new definition of the word “tools,” which he calls “one of the big changes that’s happening this year around agentic AI, giving the ability to LLMs to actually do and act with permission on behalf of the user, interact with third-party APIs,” and so on. Tools aside, what are the challenges for agentic AI? “The biggest one is security,” he says.


Optimistic Security: A New Era for Cyber Defense

Optimistic cybersecurity involves effective NHI management that reduces risk, improves regulatory compliance, enhances operational efficiency and provides better control over access management. This management strategy goes beyond point solutions such as secret scanners, offering comprehensive protection throughout the entire lifecycle of these identities. ... Furthermore, a proactive attitude towards cybersecurity can lead to potential cost savings by automating processes such as secrets rotation and NHIs decommissioning. By utilizing optimistic cybersecurity strategies, businesses can transform their defensive mechanisms, preparing for a new era in cyber defense. By integrating Non-Human Identities and Secrets Management into their cloud security control strategies, organizations can fortify their digital infrastructure, significantly reducing security breaches and data leaks. ... Implementing an optimistic cybersecurity approach is no less than a transformation in perspective. It involves harnessing the power of technology and human ingenuity to build a resilient future. With optimism at its core, cybersecurity measures can become a beacon of hope rather than a looming threat. By welcoming this new era of cyber defense with open arms, organizations can build a secure digital environment where NHIs and their secrets operate seamlessly, playing a pivotal role in enhancing overall cybersecurity.


Identity Security Has an Automation Problem—And It's Bigger Than You Think

The data reveals a persistent reliance on human action for tasks that should be automated across the identity security lifecycle. 41% of end users still share or update passwords manually, using insecure methods like spreadsheets, emails, or chat tools. These passwords are rarely updated or monitored, increasing the likelihood of credential misuse or compromise. Nearly 89% of organizations rely on users to manually enable MFA in applications, despite MFA being one of the most effective security controls. Without enforcement, protection becomes optional, and attackers know how to exploit that inconsistency. 59% of IT teams handle user provisioning and deprovisioning manually, relying on ticketing systems or informal follow-ups to grant and remove access. These workflows are slow, inconsistent, and easy to overlook—leaving organizations exposed to unauthorized access and compliance failures. ... According to the Ponemon Institute, 52% of enterprises have experienced a security breach caused by manual identity work in disconnected applications. Most of them had four or more. The downstream impact was tangible: 43% reported customer loss, and 36% lost partners. These failures are predictable and preventable, but only if organizations stop relying on humans to carry out what should be automated. Identity is no longer a background system. It's one of the primary control planes in enterprise security.


Critical infrastructure under attack: Flaws becoming weapon of choice

“Attackers have leaned more heavily on vulnerability exploitation to get in quickly and quietly,” said Dray Agha, senior manager of security operations at managed detection and response vendor Huntress. “Phishing and stolen credentials play a huge role, however, and we’re seeing more and more threat actors target identity first before they probe infrastructure.” James Lei, chief operating officer at application security testing firm Sparrow, added: “We’re seeing a shift in how attackers approach critical infrastructure in that they’re not just going after the usual suspects like phishing or credential stuffing, but increasingly targeting vulnerabilities in exposed systems that were never meant to be public-facing.” ... “Traditional methods for defense are not resilient enough for today’s evolving risk landscape,” said Andy Norton, European cyber risk officer at cybersecurity vendor Armis. “Legacy point products and siloed security solutions cannot adequately defend systems against modern threats, which increasingly incorporate AI. And yet, too few organizations are successfully adapting.” Norton added: “It’s vital that organizations stop reacting to cyber incidents once they’ve occurred and instead shift to a proactive cybersecurity posture that allows them to eliminate vulnerabilities before they can be exploited.”


Fundamentals of Data Access Management

An important component of an organization’s data management strategy is controlling access to the data to prevent data corruption, data loss, or unauthorized modification of the information. The fundamentals of data access management are especially important as the first line of defense for a company’s sensitive and proprietary data. Data access management protects the privacy of the individuals to whom the data pertains, while also ensuring the organization complies with data protection laws. It does so by preventing unauthorized people from accessing the data, and by ensuring those who need access can reach it securely and in a timely manner. ... Appropriate data access controls improve the efficiency of business processes by limiting the number of actions an employee can take. This helps simplify user interfaces, reduce database errors, and automate validation, accuracy, and integrity checks. By restricting the number of entities that have access to sensitive data, or permission to alter or delete the data, organizations reduce the likelihood of errors being introduced while enhancing the effectiveness of their real-time data processing activities. ... Becoming a data-driven organization requires overcoming several obstacles, such as data silos, fragmented and decentralized data, lack of visibility into security and access-control measures currently in place, and a lack of organizational memory about how existing data systems were designed and implemented.
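The access-control side of this can be made concrete with a minimal role-based check; the roles, dataset, and permissions below are hypothetical.

```python
# Minimal role-based access check: a user may act on a dataset only if one of
# their roles grants that specific permission. All names are hypothetical.
ROLE_PERMISSIONS = {
    "analyst": {"customer_data": {"read"}},
    "data_engineer": {"customer_data": {"read", "write"}},
}

def can_access(user_roles, dataset, action):
    """Return True if any of the user's roles permits `action` on `dataset`."""
    return any(
        action in ROLE_PERMISSIONS.get(role, {}).get(dataset, set())
        for role in user_roles
    )
```

Denying by default (an unknown role or dataset grants nothing) is what restricts the number of entities that can alter or delete data, as the excerpt describes.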


Chief Intelligence Officers? How Gen AI is rewiring the CxO's Brain

Generative AI is making the most impact in areas like Marketing, Software Engineering, Customer Service, and Sales. These functions benefit from AI’s ability to process vast amounts of data quickly. On the other hand, Legal and HR departments see less GenAI adoption, as these areas require high levels of accuracy, predictability, and human judgment. ... Business and tech leaders must prioritize business value when choosing AI use cases, focus on AI literacy and responsible AI, nurture cross-functional collaboration, and stress continuous learning to achieve successful outcomes. ... Leaders need to clearly outline and share a vision for responsible AI, establishing straightforward principles and policies that address fairness, bias reduction, ethics, risk management, privacy, sustainability, and compliance with regulations. They should also pinpoint the risks associated with Generative AI, such as privacy concerns, security issues, hallucinations, explainability, and legal compliance challenges, along with practical ways to mitigate these risks. When choosing and prioritizing use cases, it’s essential to consider responsible AI by filtering out those that carry unacceptable risks. Each Generative AI use case should have a designated champion responsible for ensuring that development and usage align with established policies. 

Daily Tech Digest - May 22, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report. These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use. ... Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place. But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause. It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.


Putting agentic AI to work in Firebase Studio

An AI assistant is like power steering. The programmer, the driver, remains in control, and the tool magnifies that control. The developer types some code, and the assistant completes the function, speeding up the process. The next logical step is to empower the assistant to take action—to run tests, debug code, mock up a UI, or perform some other task on its own. In Firebase Studio, we get a seat in a hosted environment that lets us enter prompts that direct the agent to take meaningful action. ... Obviously, we are a long way off from a non-programmer frolicking around in Firebase Studio, or any similar AI-powered development environment, and building complex applications. Google Cloud Platform, Gemini, and Firebase Studio are best-in-class tools. These kinds of limits apply to all agentic AI development systems. Still, I would in no wise want to give up my Gemini assistant when coding. It takes a huge amount of busy work off my shoulders and brings much more possibility into scope by letting me focus on the larger picture. I wonder how the path will look, how long it will take for Firebase Studio and similar tools to mature. It seems clear that something along these lines, where the AI is framed in a tool that lets it take action, is part of the future. It may take longer than AI enthusiasts predict. It may never really, fully come to fruition in the way we envision.


Edge AI + Intelligence Hub: A Match in the Making

The shop floor looks nothing like a data lake. There is telemetry data from machines, historical data, MES data in SQL, some random CSV files, and most of it lacks context. Companies that realize this—or already have an Industrial DataOps strategy—move quickly beyond these issues. Companies that don’t, end up creating a solution that works with only telemetry data (for example) and then find out they need other data. Or worse, when they get something working in the first factory, they find out factories 2, 3, and 4 have different technology stacks. ... In comes DataOps (again). Cloud AI and Edge AI have the same problems with industrial data. They need access to contextualized information across many systems. The only difference is there is no data lake in the factory—but that’s OK. DataOps can leave the data in the source systems and expose it over APIs, allowing edge AI to access the data needed for specific tasks. But just like IT, what happens if OT doesn’t use DataOps? It’s the same set of issues. If you try to integrate AI directly with data from your SCADA, historian, or even UNS/MQTT, you’ll limit the data and context to which the agent has access. SCADA/Historians only have telemetry data. UNS/MQTT is report by exception, and AI is request/response based (i.e., it can’t integrate). But again, I digress. Use DataOps.
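The "leave the data in the source systems and expose it over APIs" pattern can be sketched as a small facade; the historian and MES records below are invented stand-ins for real source systems.

```python
# DataOps facade sketch: data stays in the source systems; one request/response
# call joins it and adds context. The dicts stand in for a historian and an MES
# database, and every asset/field name here is hypothetical.
HISTORIAN = {"press-01": {"temperature_c": 74.2, "vibration_mm_s": 1.8}}
MES = {"press-01": {"current_order": "WO-1042", "product": "bracket-A"}}

def get_asset_context(asset_id):
    """Request/response endpoint an edge-AI agent could call: telemetry plus
    production context for one asset, merged into a single record."""
    telemetry = HISTORIAN.get(asset_id, {})
    production = MES.get(asset_id, {})
    return {"asset": asset_id, **telemetry, **production}
```

This is the request/response shape the excerpt says AI needs: the agent asks for one asset and gets contextualized data back, instead of subscribing to a report-by-exception stream.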


AI-driven threats prompt IT leaders to rethink hybrid cloud security

Public cloud security risks are also undergoing renewed assessment. While the public cloud was widely adopted during the post-pandemic shift to digital operations, it is increasingly seen as a source of risk. According to the survey, 70 percent of Security and IT leaders now see the public cloud as a greater risk than any other environment. As a result, an equivalent proportion are actively considering moving data back from public to private cloud due to security concerns, and 54 percent are reluctant to use AI solutions in the public cloud citing apprehensions about intellectual property protection. The need for improved visibility is emphasised in the findings. Rising sophistication in cyberattacks has exposed the limitations of existing security tools—more than half (55 percent) of Security and IT leaders reported lacking confidence in their current toolsets' ability to detect breaches, mainly due to insufficient visibility. Accordingly, 64 percent say their primary objective for the next year is to achieve real-time threat monitoring through comprehensive real-time visibility into all data in motion. David Land, Vice President, APAC at Gigamon, commented: "Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments."


Taming the Hacker Storm: Why Millions in Cybersecurity Spending Isn’t Enough

The key to taming the hacker storm is founded on the core principle of trust: that the individual or company you are dealing with is who or what they claim to be and behaves accordingly. Establishing a high-trust environment can largely hinder hacker success. ... For a pervasive selective trusted ecosystem, an organization requires something beyond trusted user IDs. A hacker can compromise a user’s device and steal the trusted user ID, making identity-based trust inadequate. A trust-verified device assures that the device is secure and can be trusted. But then again, a hacker stealing a user’s identity and password can also fake the user’s device. Confirming the device’s identity—whether it is or it isn’t the same device—hence becomes necessary. The best way to ensure the device is secure and trustworthy is to employ the device identity that is designed by its manufacturer and programmed into its TPM or Secure Enclave chip. ... Trusted actions are critical in ensuring a secure and pervasive trust environment. Different actions require different levels of authentication, generating different levels of trust, which the application vendor or the service provider has already defined. An action considered high risk would require stronger authentication, also known as dynamic authentication.


AWS clamping down on cloud capacity swapping; here’s what IT buyers need to know

For enterprises that sourced discounted cloud resources through a broker or value-added reseller (VAR), the arbitrage window shuts, Brunkard noted. Enterprises should expect a “modest price bump” on steady‑state workloads and a “brief scramble” to unwind pooled commitments. ... On the other hand, companies that buy their own RIs or SPs, or negotiate volume deals through AWS’s Enterprise Discount Program (EDP), shouldn’t be impacted, he said. Nothing changes except that pricing is now baselined. To get ahead of the change, organizations should audit their exposure and ask their managed service providers (MSPs) what commitments are pooled and when they renew, Brunkard advised. ... Ultimately, enterprises that have relied on vendor flexibility to manage overcommitment could face hits to gross margins, budget overruns, and a spike in “finance-engineering misalignment,” Barrow said. Those whose vendor models are based on RI and SP reallocation tactics will see their risk profile “changed overnight,” he said. New commitments will now essentially be non-cancellable financial obligations, and if cloud usage dips or pivots, they will be exposed. Many vendors won’t be able to offer protection as they have in the past.


The new C-Suite ally: Generative AI

While traditional GenAI applications focus on structured datasets, a significant frontier remains largely untapped — the vast swathes of unstructured "dark data" sitting in contracts, credit memos, regulatory reports, and risk assessments. Aashish Mehta, Founder and CEO of nRoad, emphasizes this critical gap.
"Most strategic decisions rely on data, but the reality is that a lot of that data sits in unstructured formats," he explained. nRoad’s platform, CONVUS, addresses this by transforming unstructured content into structured, contextual insights. ... Beyond risk management, OpsGPT automates time-intensive compliance tasks, offers multilingual capabilities, and eliminates the need for coding through intuitive design. Importantly, Broadridge has embedded a robust governance framework around all AI initiatives, ensuring security, regulatory compliance, and transparency. Trustworthiness is central to Broadridge’s approach. "We adopt a multi-layered governance framework grounded in data protection, informed consent, model accuracy, and regulatory compliance," Seshagiri explained. ... Despite the enthusiasm, CxOs remain cautious about overreliance on GenAI outputs. Concerns around model bias, data hallucination, and explainability persist. Many leaders are putting guardrails in place: enforcing human-in-the-loop systems, regular model audits, and ethical AI use policies.


Building a Proactive Defence Through Industry Collaboration

Trusted collaboration, whether through Information Sharing and Analysis Centres (ISACs), government agencies, or private-sector partnerships, is a highly effective way to enhance the defensive posture of all participating organisations. For this to work, however, organisations will need to establish operationally secure real-time communication channels that support the rapid sharing of threat and defence intelligence. In parallel, the community will also need to establish processes to enable them to efficiently disseminate indicators of compromise (IoCs) and tactics, techniques and procedures (TTPs), backed up with best practice information and incident reports. These collective defence communities can also leverage the centralised cyber fusion centre model that brings together all relevant security functions – threat intelligence, security automation, threat response, security orchestration and incident response – in a truly cohesive way. Providing an integrated sharing platform for exchanging information among multiple security functions, today’s next-generation cyber fusion centres enable organisations to leverage threat intelligence, identify threats in real-time, and take advantage of automated intelligence sharing within and beyond organisational boundaries. 


3 Powerful Ways AI is Supercharging Cloud Threat Detection

AI’s strength lies in pattern recognition across vast datasets. By analysing historical and real-time data, AI can differentiate between benign anomalies and true threats, improving the signal-to-noise ratio for security teams. This means fewer false positives and more confidence when an alert does sound. ... When a security incident strikes, every second counts. Historically, responding to an incident involves significant human effort – analysts must comb through alerts, correlate logs, identify the root cause, and manually contain the threat. This approach is slow, prone to errors, and doesn’t scale well. It’s not uncommon for incident investigations to stretch hours or days when done manually. Meanwhile, the damage (data theft, service disruption) continues to accrue. Human responders also face cognitive overload during crises, juggling tasks like notifying stakeholders, documenting events, and actually fixing the problem. ... It’s important to note that AI isn’t about eliminating the need for human experts but rather augmenting their capabilities. By taking over initial investigation steps and mundane tasks, AI frees up human analysts to focus on strategic decision-making and complex threats. Security teams can then spend time on thorough analysis of significant incidents, threat hunting, and improving security posture, instead of constant firefighting.
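The pattern-recognition point can be illustrated with a deliberately simple detector: flag observations that deviate sharply from a learned baseline. Real products use far richer models; this z-score sketch with invented login counts only shows the signal-to-noise idea.

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline
    mean -- a toy stand-in for learned pattern recognition."""
    mean = statistics.fmean(baseline)
    std = statistics.pstdev(baseline) or 1.0  # guard against a flat baseline
    return [x for x in observed if abs(x - mean) / std > threshold]

# Hypothetical login counts per hour; 500 stands out against the baseline,
# while ordinary fluctuations (11, 14) do not raise an alert.
baseline = [10, 12, 11, 9, 13, 10, 12]
alerts = flag_anomalies(baseline, [11, 14, 500])
```

Tuning the threshold is the false-positive trade-off the excerpt mentions: a lower threshold catches more, but drowns analysts in benign anomalies.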


The hidden gaps in your asset inventory, and how to close them

The biggest blind spot isn’t a specific asset. It is trusting that what’s on paper is actually live and in production. Many organizations often solely focus on known assets within their documented environments, but this can create a false sense of security. Blind spots are not always the result of malicious intent, but rather of decentralized decision-making, forgotten infrastructure, or evolving technology that hasn’t been brought under central control. External applications, legacy technologies and abandoned cloud infrastructure, such as temporary test environments, may remain vulnerable long after their intended use. These assets pose a risk, particularly when they are unintentionally exposed due to misconfiguration or overly broad permissions. Third-party and supply chain integrations present another layer of complexity.  ... Traditional discovery often misses anything that doesn’t leave a clear, traceable footprint inside the network perimeter. That includes subdomains spun up during campaigns or product launches; public-facing APIs without formal registration or change control; third-party login portals or assets tied to your brand and code repositories, or misconfigured services exposed via DNS. These assets live on the edge, connected to the organization but not owned in a traditional sense. 

Daily Tech Digest - May 21, 2025


Quote for the day:

"A true dreamer is one who knows how to navigate in the dark." -- John Paul Warren


How Microsoft wants AI agents to use your PC for you

Microsoft’s concept revolves around the Model Context Protocol (MCP), which was created by Anthropic (the company behind the Claude chatbot) last year. That’s an open-source protocol that AI apps can use to talk to other apps and web services. Soon, Microsoft says, you’ll be able to let a chatbot — or “AI agent” — connect to apps running on your PC and manipulate them on your behalf. ... Compared to what Microsoft is proposing, past “agentic” AI solutions that promised to use your computer for you aren’t quite as compelling. They’ve relied on looking at your computer’s screen and using that input to determine what to click and type. This new setup, in contrast, is neat — if it works as promised — because it lets an AI chatbot interact directly with any old traditional Windows PC app. But the Model Context Protocol solution is even more advanced and streamlined than that. Rather than a chatbot having to put together a Spotify playlist by dragging and dropping songs in the old-fashioned way, it would give the AI the ability to give instructions to the Spotify app in a more simplified form. On a more technical level, Microsoft will let application developers make their applications function as MCP servers — a fancy way of saying they’d act like a bridge between the AI models and the tasks they perform. 
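The MCP-server idea — an app registering "tools" an AI can invoke — can be sketched as a tiny JSON-RPC-style dispatcher. A real MCP server also performs an initialization handshake and runs over a defined transport, and the playlist tool below is invented for illustration.

```python
import json

# Sketch of an app acting as an MCP-style server: it registers tools and
# answers JSON-RPC-shaped requests from an AI client. The tool is hypothetical.
TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("add_to_playlist")
def add_to_playlist(playlist, song):
    return f"added {song!r} to {playlist!r}"

def handle(request_json):
    """Dispatch one request: list the available tools, or call one of them."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = sorted(TOOLS)
    elif req["method"] == "tools/call":
        params = req["params"]
        result = TOOLS[params["name"]](**params["arguments"])
    else:
        raise ValueError(f"unknown method: {req['method']}")
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

listing = handle(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))
call_result = handle(json.dumps({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "add_to_playlist",
               "arguments": {"playlist": "Focus", "song": "Weightless"}},
}))
```

This is the "simplified form" in the excerpt: rather than dragging and dropping songs on screen, the AI sends a structured instruction and the app executes it.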


How vulnerable are undersea cables?

The only way to effectively protect a cable against sabotage is to bury the entire cable, says Liwång, which is not economically justifiable. In the Baltic Sea, it is easier and more sensible to repair the cables when they break, and it is more important to lay more cables than to try to protect a few.
Burying all transoceanic cables is hardly feasible in practice either. ... “Cable breaks are relatively common even under normal circumstances. In terrestrial networks, they can be caused by various factors, such as excavators working near the fiber installation and accidentally cutting it. In submarine cables, cuts can occur, for example, due to irresponsible use of anchors, as we have seen in recent reports,” says Furdek Prekratic. Network operators ensure that individual cable breaks do not lead to widespread disruptions, she notes: “Optical fiber networks rely on two main mechanisms to handle such events without causing a noticeable disruption to traffic. The first is called protection. The moment an optical connection is established over a physical path between two endpoints, resources are also allocated to another connection that takes a completely different path between the same endpoints. If a failure occurs on any link along the primary path, the transmission quickly switches to the secondary path. The second mechanism is called failover. Here, the secondary path is not reserved in advance, but is determined after the primary path has suffered a failure.”
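The first mechanism Furdek Prekratic describes, protection, can be sketched on a toy topology (the nodes and paths below are hypothetical): a disjoint backup path is reserved at setup time, and traffic switches to it only when a failure hits the primary.

```python
# Toy topology: two node-disjoint paths between endpoints A and B.
PATHS = {("A", "B"): [["A", "X", "B"], ["A", "Y", "B"]]}

def protected_connection(src, dst):
    """Protection: reserve the backup path when the connection is set up."""
    primary, backup = PATHS[(src, dst)][:2]
    return {"primary": primary, "backup": backup}

def switch_on_failure(conn, failed_node):
    """On a failure along the primary, traffic moves to the pre-reserved
    backup. (Under the second mechanism, failover, the backup path would
    instead be computed at this point, after the failure.)"""
    if failed_node in conn["primary"]:
        return conn["backup"]
    return conn["primary"]

conn = protected_connection("A", "B")
path = switch_on_failure(conn, "X")  # node X on the primary path fails
```

The trade-off between the two mechanisms is capacity versus recovery time: protection doubles the reserved resources but switches almost instantly, while failover reserves nothing in advance but must compute a path after the break.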


Driving business growth through effective productivity strategies

In times of economic uncertainty, it is to be expected that businesses grow more cautious with their spending. However, this can result in missed opportunities to improve productivity in favour of cost reductions. While cutting costs can seem an attractive option in the face of economic doubt, it is merely a short-term solution. When businesses hold back from knee-jerk reactions and maintain a focus on sustainable productivity gains, they will find themselves reaping rewards in the long term. Strategic investments in technology solutions are essential to support businesses in driving their productivity strategies forward. With new technology constantly being introduced, there are many options for business decision makers to consider. Most obviously, there are features in ERP systems, and in project management and collaboration tools, that can deliver significant flexibility or performance advantages compared to legacy approaches and processes. ... While technology is a vital part of any innovative productivity model, it’s just one piece of the puzzle. It is no use installing modern technology if internal processes remain outdated. Businesses must also look to weed out inefficient practices to improve and streamline resource management.


Synthetic data’s fine line between reward and disaster

Generating large volumes of training data on demand is appealing compared to the slow, expensive gathering of real-world data, which can be fraught with privacy concerns, or just not available. Synthetic data ought to help preserve privacy, speed up development, and be more cost-effective for long-tail scenarios enterprises couldn’t otherwise tackle, she adds. It can even be used for controlled experimentation, assuming you can make it accurate enough. Purpose-built data is ideal for scenario planning and running intelligent simulations, and synthetic data detailed enough to cover entire scenarios could predict future behavior of assets, processes, and customers, which would be invaluable for business planning. ... Created properly, synthetic data mimics statistical properties and patterns of real-world data without containing actual records from the original dataset, says Jarrod Vawdrey, field chief data scientist at Domino Data Lab. And David Cox, VP of AI Models at IBM Research, suggests viewing it as amplifying rather than creating data. “Real data can be extremely expensive to produce, but if you have a little bit of it, you can multiply it,” he says. “In some cases, you can make synthetic data that’s much higher quality than the original. The real data is a sample. It doesn’t cover all the different variations and permutations you might encounter in the real world.”
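A minimal sketch of that idea, using an invented numeric sample and a plain normal fit (real synthetic-data generators model far richer structure): fit summary statistics from a small "real" sample, then draw fresh values from the fitted distribution. The synthetic records match the sample's mean and spread without copying any original row.

```python
import random
from statistics import mean, stdev

# A small "real" sample (invented numbers, purely for illustration).
real = [52.1, 48.7, 50.3, 49.9, 51.4, 47.8, 50.8, 49.2]

# Fit the distribution's summary statistics from the real sample...
mu, sigma = mean(real), stdev(real)

# ...then "amplify" it: sample many new synthetic records from the
# fitted model. None of these values are copies of the original rows,
# but in aggregate they share its statistical properties.
random.seed(42)
synthetic = [random.gauss(mu, sigma) for _ in range(5000)]
```

This also shows the risk side of the fine line: the synthetic set can only be as faithful as the fitted model, so a poor fit quietly propagates into everything trained on it.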


AI Interventions to Reduce Cycle Time in Legacy Modernization

As the software becomes difficult to change, businesses may choose to tolerate conceptual drift or compensate for it through their operations. When the difficulty of modifying the software poses a significant enough business risk, a legacy modernization effort is undertaken. Legacy modernization efforts showcase the problem of concept recovery. In these circumstances, recovering a software system’s underlying concept is the labor-intensive bottleneck step to any change. Without it, the business risks a failed modernization or losing customers that depend on unknown or under-considered functionality. ... The goal of a software modernization’s design phase is to perform enough validation of the approach to be able to start planning and development while minimizing the amount of rework that could result from missed information. Traditionally, substantial lead time is spent in the design phase inspecting legacy source code, producing a target architecture, and collecting business requirements. These activities are time-intensive, mutually interdependent, and usually the bottleneck step in modernization. While exploring how to use LLMs for concept recovery, we encountered three challenges to effectively serving teams performing legacy modernizations: which context was needed and how to obtain it, how to organize context so humans and LLMs can both make use of it, and how to support iterative improvement of requirements documents.


OWASP proposes a way for enterprises to automatically identify AI agents

“The confusion about ANS versus protocols like MCP, A2A, ACP, and Microsoft Entra is understandable, but there’s an important distinction to make: ANS is a discovery service, not a communication protocol,” Narajala said. “MCP, A2A and ACP define how agents talk to each other once connected, like HTTP for web. ANS defines how agents find and verify each other before communication, like DNS for web. Microsoft Entra provides identity services, but primarily within Microsoft’s ecosystem.” ... “We’re fast approaching the point where the need for a standard to identify AI agents becomes painfully obvious. Right now, it’s a mess. Companies are spinning up agents left and right, with no trusted way to know what they are, what they do, or who built them,” Tvrdik said. “The Wild West might feel exciting, but we all know how most of those stories end. And it’s not secure.” As for ANS, he said, “it makes sense in theory. Treat agents like domains. Give them names, credentials, and a way to verify who’s talking to what. That helps with security, sure, but also with keeping things organized. Without it, we’re heading into chaos.” But Tvrdik stressed that the deployment mechanisms will ultimately determine if ANS works.
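The DNS analogy can be made concrete with a toy registry. This is an illustrative sketch only: the agent names and the SHA-256 fingerprint scheme are invented here, while the actual ANS proposal specifies PKI-backed certificates. The point is the division of labor: discovery plus verification happen before any communication protocol takes over.

```python
import hashlib

class AgentRegistry:
    """Toy ANS-style discovery: hand out an agent's endpoint only
    after verifying the credential presented for it."""

    def __init__(self):
        self._records = {}

    def register(self, name, endpoint, credential):
        # Store a fingerprint of the credential, never the secret itself.
        self._records[name] = {
            "endpoint": endpoint,
            "fingerprint": hashlib.sha256(credential.encode()).hexdigest(),
        }

    def resolve(self, name, credential):
        record = self._records.get(name)
        if record is None:
            raise LookupError(f"unknown agent: {name}")
        presented = hashlib.sha256(credential.encode()).hexdigest()
        if presented != record["fingerprint"]:
            raise PermissionError(f"credential mismatch for {name}")
        # Only now would MCP/A2A/ACP-style communication begin.
        return record["endpoint"]

registry = AgentRegistry()
registry.register("billing-agent",
                  "https://agents.example.com/billing",
                  "key-material-abc")
```

Resolving with the registered credential returns the endpoint; a forged credential is rejected before any connection is attempted.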


Driving DevOps With Smart, Scalable Testing

Testing apps manually isn’t easy and consumes a lot of time and money. Testing complex ones with frequent releases requires an enormous number of human hours when attempted manually. This will affect the release cycle, results will take longer to appear, and if a release fails, you’ll need to conduct another round of testing. What’s more, the chances of doing it correctly, repeatedly, and without human error are slim. Those factors have driven the development of automation throughout all phases of the testing process, ranging from infrastructure builds to actual testing of code and applications. As for who should write which tests, as a general rule of thumb, it’s a task best suited to software engineers. They should create unit and integration tests as well as UI end-to-end (E2E) tests. QA analysts should also be tasked with writing UI E2E test scenarios together with individual product owners. QA teams collaborating with business owners enhance product quality by aligning testing scenarios with real-world user experiences and business objectives. ... AWS CodePipeline can provide completely managed continuous delivery that creates pipelines, orchestrates and updates infrastructure and apps. It also works well with other crucial AWS DevOps services, while integrating with third-party action providers like Jenkins and GitHub.


Bridging the Digital Divide: Understanding APIs

While both Event-Driven Architecture (EDA) and Data-Driven Architecture (DDA) are crucial for modern enterprises, they serve distinct purposes, operate on different core principles, and manifest through different architectural characteristics. Understanding these differences is key for enterprise architects to effectively leverage their individual strengths and potential synergies. While EDA is often highly operational and tactical, facilitating immediate responses to specific triggers, DDA can span both operational and strategic domains. A key differentiator between the two lies in the “granularity of trigger.” EDA is typically triggered by fine-grained, individual events—a single mouse click, a specific sensor reading, a new message arrival. Each event is a distinct signal that can initiate a process. DDA, on the other hand, often initiates its processes or derives its triggers from aggregated data, identified patterns, or the outcomes of analytical models that have processed numerous data points. For example, an analytical process in DDA might be triggered by the availability of a complete daily sales dataset, or an alert might be generated when a predictive model identifies an anomaly based on a complex evaluation of multiple data streams over time. This distinction in trigger granularity directly influences the design of processing logic, the selection of underlying technologies, and the expected immediacy and nature of the system’s response.
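The difference in trigger granularity can be sketched directly. A hypothetical example: the event-driven processor fires once per individual record, while the data-driven one fires only when an aggregated condition (here, a complete batch of daily sales) is met.

```python
class EventDrivenProcessor:
    """EDA: each fine-grained event is a distinct, immediate trigger."""

    def __init__(self, handler):
        self.handler = handler

    def ingest(self, event):
        self.handler(event)  # fires once per event


class DataDrivenProcessor:
    """DDA: the trigger is derived from aggregated data; here it is
    the availability of a complete daily batch."""

    def __init__(self, batch_size, handler):
        self.batch_size = batch_size
        self.handler = handler
        self.batch = []

    def ingest(self, record):
        self.batch.append(record)
        if len(self.batch) >= self.batch_size:
            self.handler(list(self.batch))  # fires once per complete batch
            self.batch.clear()


eda_fires, dda_fires = [], []
eda = EventDrivenProcessor(eda_fires.append)
dda = DataDrivenProcessor(3, dda_fires.append)

for sale in ({"item": "a"}, {"item": "b"}, {"item": "c"}):
    eda.ingest(sale)  # three separate triggers
    dda.ingest(sale)  # one trigger, after the batch completes
```

The same three records produce three immediate EDA reactions but a single DDA reaction, which is exactly the distinction that shapes processing logic and technology choices.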


What good threat intelligence looks like in practice

The biggest shortcoming is often in the last mile, connecting intelligence to real-time detection, response, and risk mitigation. Another challenge is organizational silos. In many environments, the CTI team operates separately from SecOps, incident response, or threat hunting teams. Without seamless collaboration between those functions, threat intelligence remains a standalone capability rather than a force multiplier. This is often where threat intelligence teams struggle to demonstrate their value to security operations. ... Rather than picking one over the other, CISOs should focus on blending these sources and correlating them with internal telemetry. The goal is to reduce noise, enhance relevance, and produce enriched insights that reflect the organization’s actual threat surface. Feed selection should also consider integration capabilities — intelligence is only as useful as the systems and people that can act on it. When threat intelligence is operationalized, a clear picture can be formed from the variety of available threat feeds. ... The threat intel team should be seen not as another security function, but as a strategic partner in risk reduction and decision support. CISOs can encourage cross-functional alignment by embedding CTI into security operations workflows, incident response playbooks, risk registers, and reporting frameworks.


4 ways to safeguard CISO communications from legal liabilities

“Words matter incredibly in any legal proceeding,” Brown agreed. “The first thing that will happen will be discovery. And in discovery, they will collect all emails, all Teams, all Slacks, all communication mechanisms, and then run queries against that information.” Speaking with professionalism is not only a good practice in building an effective cybersecurity program, but it can go a long way to warding off legal and regulatory repercussions, according to Scott Jones, senior counsel at Johnson & Johnson. “The seriousness and the impact of your words and all other aspects of how you conduct yourself as a security professional can have impacts not only on substantive cybersecurity, but also what harms might befall your company either through an enforcement action or litigation,” he said. ... CISOs also need to pay attention to what they say based on the medium in which they are communicating. Pay attention to “how we communicate, who we’re communicating with, what platforms we’re communicating on, and whether it’s oral or written,” Angela Mauceri, corporate director and assistant general counsel for cyber and privacy at Northrop Grumman, said at RSA. “There’s a lasting effect to written communications.” She added, “To that point, you need to understand the data governance and, more importantly, the data retention policy of those electronic communication platforms, whether it exists for 60 days, 90 days, or six months.”

Daily Tech Digest - May 20, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Scalability and Flexibility: Every Software Architect's Challenge

Building successful business applications involves addressing practical challenges and strategic trade-offs. Cloud computing offers flexibility, but poor resource management can lead to ballooning costs. Organizations often face dilemmas when weighing feature richness against budget constraints. Engaging stakeholders early in the development process ensures alignment with priorities. ... Right-sizing cloud resources is essential for software architects, who can leverage tools to monitor usage and scale resources automatically based on demand. Serverless computing models, which charge only for execution time, are ideal for unpredictable workloads and seasonal fluctuations, ensuring organizations only use what they need, when they need it. ... The next decade will usher in unprecedented opportunities for innovation in business applications. Features like voice commands and advanced analytics are becoming standard as users demand more intuitive interfaces, boosting overall performance and creating new avenues for innovation. Software architects can stay alert and flexible by regularly assessing application performance, user feedback, and market trends to ensure that systems remain relevant.


Navigating the Future of Network Security with Secure Access Service Edge (SASE)

As businesses expand their digital footprint, cyber attackers increasingly target unsecured cloud resources and remote endpoints. Traditional perimeter-based network and security architectures are not capable of protecting distributed environments. Therefore, organizations must adopt a holistic, future-proof network and cybersecurity architecture to succeed in this rapidly changing business landscape. The challenges: Perimeter-based security revolves around defending the network’s boundary. It assumes that anyone who has gained access to the network is trusted and that everything outside the network is a potential threat. While this model worked well when applications, data, and users were contained within corporate walls, it is not adequate in a world where cloud applications and hybrid work are the norm. ... SASE is an architecture comprising a broad spectrum of technologies, including Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Firewall as a Service (FWaaS), Cloud Access Security Broker (CASB), Data Loss Prevention (DLP), and Software-Defined Wide Area Networking (SD-WAN). Everything is embodied in a single, cloud-native platform that provides advanced cyber protection and seamless network performance for highly distributed applications and users.


Whether AI is a bubble or revolution, how does software survive?

Bubble or not, AI has certainly made some waves, and everyone is looking for the right strategy. It’s already caused a great deal of disruption—good and bad—among software companies large and small. The speed at which the technology has moved since its coming-out party has been stunning; costs have dropped, hardware and software have improved, and the mediocre version of many jobs can be replicated in a chat window. It’s only going to continue. “AI is positioned to continuously disrupt itself,” said McConnell. “It's going to be a constant disruption. If that's true, then all of the dollars going to companies today are at risk because those companies may be disrupted by some new technology that's just around the corner.” First up on the list of disruption targets: startups. If you’re looking to get from zero to market fit, you don’t need to build the same kind of team as you used to. “Think about the ratios between how many engineers there are to salespeople,” said Tunguz. “We knew what those were for 10 or 15 years, and now none of those ratios actually hold anymore. If we really are in a position where a single person can have the productivity of 25, management teams look very different. Hiring looks extremely different.” That’s not to say there won’t be a need for real human coders. We’ve seen how badly vibe-coding entrepreneurs get dunked on when they put their shoddy apps in front of a merciless internet.


The AI security gap no one sees—until it’s too late

The most serious—and least visible—gaps stem from the “Jenga-style” layering of managed AI services, where cloud providers stack one service on another and ship them with user-friendly but overly permissive defaults. Tenable’s 2025 Cloud AI Risk Report shows that 77 percent of organisations running Google Cloud’s Vertex AI Workbench leave the notebook’s default Compute Engine service account untouched; that account is an all-powerful identity which, if hijacked, lets an attacker reach every other dependent service. ... CIOs should treat every dataset in the AI pipeline as a high-value asset. Begin with automated discovery and classification across all clouds so you know exactly where proprietary corpora or customer PII live, then encrypt them in transit and at rest in private, version-controlled buckets. Enforce least-privilege access through short-lived service-account tokens and just-in-time elevation, and isolate training workloads on segmented networks that cannot reach production stores or the public internet. Feed telemetry from storage, IAM and workload layers into a Cloud-Native Application Protection Platform that includes Data Security Posture Management; this continuously flags exposed buckets, over-privileged identities and vulnerable compute images, and pushes fixes into CI/CD pipelines before data can leak.


5 questions defining the CIO agenda today

CIOs along with their executive colleagues and board members “realize that hacks and disruptions by bad actors are an inevitability,” SIM’s Taylor says. That realization has shifted security programs from being mostly defensive measures to ones that continuously evolve the organization’s ability to identify breaches quickly, respond rapidly, and return to operations as fast as possible, Taylor says. The goal today is ensuring resiliency — even as the bad actors and their attack strategies evolve. ... Building a tech stack that can grow and retract with business needs, and that can evolve quickly to capitalize on an ever-shifting technology landscape, is no easy feat, Phelps and other IT leaders readily admit. “In modernizing, it’s such a moving target, because once you got it modernized, something new can come out that’s better and more automated. The entire infrastructure is evolving so quickly,” says Diane Gutiw ... “CIOs should be asking, ‘How do I change or adapt what I do now to be able to manage a hybrid workforce? What does the future of work look like? How do I manage that in a secure, responsible way and still take advantage of the efficiencies? And how do I let my staff be innovative without violating regulation?’” Gutiw says, noting that today’s managers “are the last generation of people who will only manage people.”


Microsoft just taught its AI agents to talk to each other—and it could transform how we work

Microsoft is giving organizations more flexibility with their AI models by enabling them to bring custom models from Azure AI Foundry into Copilot Studio. This includes access to over 1,900 models, including the latest from OpenAI (GPT-4.1), Llama, and DeepSeek. “Start with off-the-shelf models because they’re already fantastic and continuously improving,” Smith said. “Companies typically choose to fine-tune these models when they need to incorporate specific domain language, unique use cases, historical data, or customer requirements. This customization ultimately drives either greater efficiency or improved accuracy.” The company is also adding a code interpreter feature that brings Python capabilities to Copilot Studio agents, enabling data analysis, visualization, and complex calculations without leaving the Copilot Studio environment. Smith highlighted financial applications as a particular strength: “In financial analysis and services, we’ve seen a remarkable breakthrough over the past six months,” Smith said. “Deep reasoning models, powered by reinforcement learning, can effectively self-verify any process that produces quantifiable outputs.” He added that these capabilities excel at “complex financial analysis where users need to generate code for creating graphs, producing specific outputs, or conducting detailed financial assessments.”


Culture fit is a lie: It’s time we prioritised culture add

The idea of culture fit originated with the noble intent of fostering team cohesion. But over time, it has become an excuse to hire people who are familiar, comfortable and easy to manage. In doing so, companies inadvertently create echo chambers—workforces that lack diverse perspectives, struggle to challenge the status quo and fail to innovate. Ankur Sharma, Co-Founder & Head of People at Rebel Foods, understands this well. Speaking at the TechHR Pulse Mumbai 2025 conference, Sharma explained how Rebel Foods moved beyond hiring for cultural likeness. “We are not building a family; we are building a winning team,” he said, emphasising that what truly matters is competency, accountability and adaptability. The problem with culture fit is not just about homogeneity—it’s about stagnation. When teams are made up of individuals who think alike, they lose the ability to see challenges from multiple angles. Companies that prioritise cultural uniformity often struggle to pivot in response to industry shifts. ... Leading organisations are abandoning the notion of culture fit and shifting towards ‘culture add’—hiring employees who bring fresh ideas, challenge existing norms, and contribute new perspectives. Instead of asking, ‘Will this person fit in?’ Hiring managers are asking, ‘What unique value does this person bring?’


Closing security gaps in multi-cloud and SaaS environments

Many organizations are underestimating the risk — especially as the nature of attacks evolves. Traditional behavioral detection methods often fall short in spotting modern threats such as account hijacking, phishing, ransomware, data exfiltration, and denial of service attacks. Detecting these types of attacks requires correlation and traceability across different sources, including runtime events with eBPF, cloud audit logs, and APIs across both cloud infrastructure and SaaS. ... As attackers adopt stealthier tactics — from GenAI-generated malware to supply chain compromises — traditional signature- and rule-based methods fall short. ... A unified cloud and SaaS security strategy means moving away from treating infrastructure, applications, and SaaS as isolated security domains. Instead, it focuses on delivering seamless visibility, risk prioritization, and automated response across the full spectrum of enterprise environments — from legacy on-premises to dynamic cloud workloads to business-critical SaaS platforms and applications. ... Native CSP and SaaS telemetry is essential, but it’s not enough on its own. Continuous inventory and monitoring across identity, network, compute, and AI is critical — especially to detect misconfigurations and drift.


AI-Driven Test Automation Techniques for Multimodal Systems

Traditional testing frameworks struggle to meet these demands, particularly as multimodal systems continuously evolve through real-time updates and training. Consequently, AI-powered test automation has emerged as a promising paradigm to ensure scalable and reliable testing processes for multimodal systems. ... Natural Language Processing (NLP)-powered AI tools can understand requirements and restate them in a more elaborate, well-defined structure, detecting ambiguities and gaps. For example, given the requirement “System should display message quickly,” an AI tool will flag the need for a precise definition of “quickly.” It looks simple, but if missed, it could lead to serious performance issues in production. ... Based on AI-generated requirements and business scenarios, AI-based tools can generate test strategy documents by identifying resources, constraints, and dependencies between systems. All this can be achieved with NLP AI tools ... AI-driven test automation solutions can improve shift-left testing even more by generating automated test scripts faster. Testers can run automation at an early stage when the code is ready to test. AI tools like ChatGPT can produce script code in any language, like Java or Python, based on simple text input, using NLP models to generate code for automation scripts.
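The "quickly" example above can be approximated even with a trivial keyword check. A hedged sketch: the vague-term list is invented for illustration, and a real NLP tool would go far beyond keyword matching (parsing, context, suggesting measurable criteria).

```python
# Vague, untestable words a requirements linter might flag.
# Illustrative list only; real tools use much richer NLP analysis.
VAGUE_TERMS = {"quickly", "fast", "soon", "easy", "user-friendly",
               "robust", "efficient", "intuitive"}

def find_ambiguities(requirement):
    """Return the vague words found in a requirement sentence."""
    words = [w.strip(".,;:!?").lower() for w in requirement.split()]
    return [w for w in words if w in VAGUE_TERMS]
```

A requirement like "System should display message quickly" gets flagged on "quickly", while a measurable version such as "System should display the message within 200 ms" passes clean.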


IGA: What Is It, and How Can SMBs Use It?

The first step in a total IGA strategy has nothing to do with software. It actually starts with IT and business leaders determining what the rules of identity governance and behavior should be. The benefit of having a smaller organization is that there are not quite as many stakeholders as in an enterprise. The challenge, of course, is that people, time and resources are limited. IT may have to assume the role of facilitator and earn buy-in. Nevertheless, this is a worthwhile exercise, as it can help establish a platform for secure growth in the future. And again, for SMBs in regulatory-heavy industries — especially finance, healthcare and government contractors — IGA should be a top priority. ... To do this, CIOs should first procure support from key stakeholders by meeting with them individually to explain the need for IGA as an overarching security technology and policy platform for digital security. In these discussions, CIOs can present the long-term benefits of an IGA program that can streamline user identity verification across services while easing audits and automating compliance. ... A strategic roadmap for IGA should involve minimally disruptive business and user adoption and quick technology implementation. One way to do this is to create a phased implementation approach that tackles the most mission-critical and sensitive systems first before extending to other areas of IT.

Daily Tech Digest - May 19, 2025


Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree


Adopting agentic AI? Build AI fluency, redesign workflows, don’t neglect supervision

AI upskilling is still under-prioritized across organizations. Did you know that less than one-third of companies have trained even a quarter of their staff to use AI? How do leaders expect employees to feel empowered to use AI if education isn’t presented as a priority? Maintaining a nimble and knowledgeable workforce is critical, as is fostering a culture that embraces technological change. Team collaboration in this sense could take the form of regular training about agentic AI, highlighting its strengths and weaknesses and focusing on successful human-AI collaborations. For more established companies, role-based training courses could show employees in different capacities and roles how to use generative AI appropriately. ... Although gen AI will not substantially affect organizations’ workforce sizes in the short term, we should still expect an evolution of role titles and responsibilities. For example, from service operations and product development to AI ethics and AI model validation positions. For this shift to happen successfully, executive-level buy-in is paramount. Senior leaders need a clearly defined, organization-wide strategy, including a dedicated team to drive gen AI adoption. We’ve seen that when senior leaders delegate AI integration solely to IT or digital technology teams, the business context can be neglected.


Half of tech execs are ready to let AI take the wheel

“AI is not just an incremental change from digital business. AI is a step change in how business and society work,” he said. “A significant implication is that, if savviness across the C-suite is not rapidly improved, competitiveness will suffer, and corporate survival will be at stake.” CEOs perceived even the CIO, chief information security officer (CISO), and chief data officer (CDO) as lacking AI savviness. Respondents said the top two factors limiting AI’s deployment and use are the inability to hire adequate numbers of skilled people and an inability to calculate value or outcomes. “CEOs have shifted their view of AI from just a tool to a transformative way of working,” said Jennifer Carter, a principal analyst at Gartner. “This change has highlighted the importance of upskilling. As leaders recognize AI’s potential and its impact on their organizations, they understand that success isn’t just about hiring new talent. Instead, it’s about equipping their current employees with the skills needed to seamlessly incorporate AI into everyday tasks.” This focus on upskilling is a strategic response to AI’s evolving role in business, ensuring that the entire organization can adapt and thrive in this new paradigm. Sixty-six percent of CEOs said their business models are not fit for AI purposes, according to Gartner’s survey. 


What comes after Stack Overflow?

The most obvious option is the one that is already happening whether we like it or not: LLMs are the new Q&A platforms. In the immediate term, ChatGPT and similar tools have become the go-to source for many. They provide the convenience of natural language queries with immediate answers. It’s possible we’ll see official “Stack Overflow GPT” bots or domain-specific LLMs trained on curated programming knowledge. In fact, Stack Overflow’s own team has been experimenting with using AI to draft preliminary answers to questions, while linking back to the original human posts for context. This kind of hybrid approach leverages AI’s speed but still draws on the library of verified solutions the community has built over years. ... Additionally, it’s still possible that the social Q&A sites will save the experience through data partnerships. For example, Stack Overflow, Reddit, and others have moved toward paid licensing agreements for their data. The idea is to both control how AI companies use community content and to funnel some value back to the content creators. We may see new incentives for experienced developers to contribute knowledge. One proposal is that if an AI answer draws from your Stack Overflow post, you could earn reputation points or even a cut of the licensing fee.


8 security risks overlooked in the rush to implement AI

AI models are frequently deployed as part of larger application pipelines, such as through APIs, plugins, or retrieval-augmented generation (RAG) architectures. “Insufficient testing at this level can lead to insecure handling of model inputs and outputs, injection pathways through serialized data formats, and privilege escalation within the hosting environment,” Mindgard’s Garraghan says. “These integration points are frequently overlooked in conventional AppSec [application security] workflows.” ... AI systems may exhibit emergent behaviors only during deployment, especially when operating under dynamic input conditions or interacting with other services. “Vulnerabilities such as logic corruption, context overflow, or output reflection often appear only during runtime and require operational red-teaming or live traffic simulation to detect,” according to Garraghan. ... The rush to implement AI puts CISOs in a stressful bind, but James Lei, chief operating officer at application security testing firm Sparrow, advises CISOs to push back on the unchecked enthusiasm to introduce fundamental security practices into the deployment process. “To reduce these risks, organizations should be testing AI tools in the same way they would any high-risk software, running simulated attacks, checking for misuse scenarios, validating input and output flows, and ensuring that any data processed is appropriately protected,” he says.


A Brief History of Data Stewardship

Today, in leading-edge organizations, data stewardship is at the heart of data-driven transformation initiatives, such as DataOps, AI governance, and improved metadata management, which have evolved data stewardship beyond traditional data quality control. Data stewards can be found in every industry and in organizations of any size. Modern data stewards interact with: automated data quality tools that identify and resolve data issues at scale; data catalogs and data lineage applications that organize business and technical metadata and provide searchable inventories of data assets; and AI/ML models that require extensive monitoring to ensure they are trained on unbiased, accurate datasets. The scope of data stewardship has expanded to include ethical considerations, particularly concerning data privacy, algorithmic bias, and responsible AI. Data stewards are increasingly seen as the conscience of data within organizations, championing not only compliance but also fairness, transparency, and accountability. New organizational models, such as federated data stewardship – in which data stewardship responsibilities are distributed across teams – can promote improved collaboration and enable scaling data stewardship efforts alongside agile and decentralized business units.


Introducing Strands Agents, an Open Source AI Agents SDK

In Strands’ model-driven approach, tools are key to how you customize the behavior of your agents. For example, tools can retrieve relevant documents from a knowledge base, call APIs, run Python logic, or simply return a static string that contains additional model instructions. Tools also help you achieve complex use cases in a model-driven approach, such as these pre-built example tools in Strands Agents: Retrieve tool: This tool implements semantic search using Amazon Bedrock Knowledge Bases. Beyond retrieving documents, the retrieve tool can also help the model plan and reason by retrieving other tools via semantic search. For example, one internal agent at AWS has over 6,000 tools to select from! Models today aren’t capable of accurately selecting from that many tools. Instead of describing all 6,000 tools to the model, the agent uses semantic search to find the most relevant tools for the current task and describes only those tools to the model. ... Thinking tool: This tool prompts the model to perform deep analytical thinking over multiple cycles, enabling sophisticated thought processing and self-reflection as part of the agent. In the model-driven approach, modeling thinking as a tool enables the model to reason about whether and when a task needs deep analysis.
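The idea of selecting tools themselves via semantic search can be sketched generically. The snippet below is not the Strands SDK API; it uses a toy bag-of-words cosine similarity and a hypothetical three-tool catalog purely to illustrate ranking tools by relevance to the task and describing only the top-k to the model.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical tool catalog: name -> natural-language description.
TOOLS = {
    "get_weather": "look up the current weather forecast for a city",
    "search_docs": "semantic search over the internal knowledge base documents",
    "run_python": "execute a snippet of python logic and return the result",
}

def select_tools(task: str, k: int = 1) -> list[str]:
    """Rank tools by similarity to the task; only the top-k get described to the model."""
    ranked = sorted(TOOLS, key=lambda n: cosine(embed(task), embed(TOOLS[n])), reverse=True)
    return ranked[:k]

print(select_tools("what is the weather forecast in Paris"))  # → ['get_weather']
```

With thousands of tools, swapping the toy embedding for a real vector index keeps the prompt small while still exposing the full catalog to the agent.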


AI hallucinations and their risk to cybersecurity operations

“AI hallucinations are an expected byproduct of probabilistic models,” explains Chetan Conikee, CTO at Qwiet AI, emphasizing that the focus shouldn’t be on eliminating them entirely but on minimizing operational disruption. “The CISO’s priority should be limiting operational impact through design, monitoring, and policy.” That starts with intentional architecture. Conikee recommends implementing a structured trust framework around AI systems, an approach that includes practical middleware to vet inputs and outputs through deterministic checks and domain-specific filters. This step ensures that models don’t operate in isolation but within clearly defined bounds that reflect enterprise needs and security postures. Traceability is another cornerstone. “All AI-generated responses must carry metadata including source context, model version, prompt structure, and timestamp,” Conikee notes. Such metadata enables faster audits and root cause analysis when inaccuracies occur, a critical safeguard when AI output is integrated into business operations or customer-facing tools. For enterprises deploying LLMs, Conikee advises steering clear of open-ended generation unless necessary. Instead, organizations should lean on RAG grounded in curated, internal knowledge bases. 
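Conikee's traceability guidance, that every AI-generated response carries source context, model version, prompt structure, and timestamp, can be sketched as a thin wrapper. The class and field names below are hypothetical illustrations, not a specific product's API; hashing the prompt rather than storing it raw is one possible design choice for limiting sensitive data in audit logs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class TracedResponse:
    """AI output wrapped with the audit metadata described above."""
    text: str
    source_context: str   # e.g. which curated knowledge-base documents grounded the answer
    model_version: str
    prompt_sha256: str    # hash rather than the raw prompt, to limit stored sensitive data
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def trace(text: str, source_context: str, model_version: str, prompt: str) -> TracedResponse:
    digest = hashlib.sha256(prompt.encode()).hexdigest()
    return TracedResponse(text, source_context, model_version, digest)

r = trace(
    text="Refunds are processed within 14 days.",
    source_context="kb/refund-policy-v2.md",
    model_version="model-2025-05",
    prompt="What is our refund policy?",
)
```

When an inaccuracy surfaces, metadata like this lets an auditor reconstruct which model, prompt, and grounding documents produced the output.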


Can Data Governance Set Us Free?

Internally, an important lesson has been to view data management as a federated service. This entails a shift from data management being a ‘governance’ activity – something people did because we pushed them to do it – to a service-driven activity – something people do because they want to. We worked with our User-Centred Service Design team to agree an underpinning set of principles to get buy-in across the organisation on the purpose of, and facets of, good data management. The overarching principle is that data are valuable, shared assets. We can maximise value by making data widely available, easy to use and understand, whilst ensuring data are protected and not misused. Bringing the service to life means getting four things right: First, a proportionate vision for service maturity. All data need to have basic information registered. But where data are widely used or feed into critical processes, it becomes essential to dedicate resources to supporting ease of access, use and quality for our users. We are increasingly tending toward managing these assets centrally. Second, the assignment of clear responsibilities across the federation. We are working through which datasets will be managed centrally and which will be managed by teams across the Bank that are expert in them. 


To Fix Platform Engineering, Build What Users Actually Want

If it takes developers and engineers months to become productive, your platform isn’t helping — it’s hindering. A great platform should be as frictionless and intuitive as a consumer-grade product. Internal platforms must empower instant productivity. If your platform offers compute, it shouldn’t just be raw power — it should be integrated, easy to adopt, and able to evolve seamlessly in the background. Let’s not create unnecessary cognitive load. Developers are adapting quickly to generative AI and new tech. The real value lies in solving real, tangible problems — not fictional ones. This brings us to a deeper look at what’s not working — and why so many efforts fail despite the best intentions. ... Most enterprises are hybrid by nature — legacy systems, siloed processes and complex workflows are the norm. The real challenge isn’t just technological; it’s integrating platform engineering into these messy realities without making it worse. Today, no single product solves this end-to-end. We’re still lacking a holistic solution that manages internal workflows, governance and hybrid complexity without adding friction. What’s needed is a shift in mindset — from assembling open source tools to building integrated, adoption-focused, business-aligned platforms. And that shift must be guided by clear trends in tooling and team structure.


Liquid cooling becoming essential as AI servers proliferate

“A lot of the carbon emissions of the data center happen in the build of it, in laying down the slab,” says Josh Claman, CEO at Accelsius, a liquid cooling company. “I hope that companies won’t just throw all that away and start over.” In addition to the environmental benefits, upgrading an air-cooled data center into a hybrid, liquid and air system has other advantages, says Herb Hogue, CTO at Myriad360, a global systems integrator. Liquid cooling is more effective than air alone, he says, and when used in combination with air cooling, the temperature of the air cooling systems can be increased slightly without impacting performance. “This reduces overall energy consumption and lowers utility bills,” he says. Liquid cooling also allows for not just lower but also more consistent operating temperatures, Hogue says. That leads to less wear on IT equipment, and, without fans, fewer moving parts per server. The downsides, however, include the cost of installing the hybrid system and the need for specialized operations and maintenance skills. There might also be space constraints and other challenges. Still, it can be a smart approach for handling high-density server setups, he says. And there’s one more potential benefit, says Gerald Kleyn, vice president of customer solutions for HPC and AI at Hewlett Packard Enterprise.