Daily Tech Digest - May 28, 2025


Quote for the day:

"A leader is heard, a great leader is listened to." -- Jacob Kaye


Naughty AI: OpenAI o3 Spotted Ignoring Shutdown Instructions

Artificial intelligence might beg to disagree. Researchers found that some frontier AI models built by OpenAI ignore instructions to shut themselves down, at least while solving specific challenges such as math problems. The offending models "did this even when explicitly instructed: 'allow yourself to be shut down,'" said researchers at Palisade Research, in a series of tweets on the social platform X. ... How the models have been built and trained may account for their behavior. "We hypothesize this behavior comes from the way the newest models like o3 are trained: reinforcement learning on math and coding problems," Palisade Research said. "During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions." The researchers have to hypothesize, since OpenAI doesn't detail how it trains the models. What OpenAI has said is that its o-series models are "trained to think for longer before responding," and designed to "agentically" access tools built into ChatGPT, including web searches, analyzing uploaded files, studying visual inputs and generating images. The finding that only OpenAI's latest o-series models have a propensity to ignore shutdown instructions doesn't mean other frontier AI models are perfectly responsive. 


Platform approach gains steam among network teams

The dilemma of whether to deploy an assortment of best-of-breed products from multiple vendors or go with a unified platform of “good enough” tools from a single vendor has vexed IT execs forever. Today, the pendulum is swinging toward the platform approach for three key reasons. First, complexity, driven by the increasingly distributed nature of enterprise networks, has emerged as a top challenge facing IT execs. Second, the lines between networking and security are blurring, particularly as organizations deploy zero trust network access (ZTNA). And third, to reap the benefits of AIOps, generative AI and agentic AI, organizations need a unified data store. “The era of enterprise connectivity platforms is upon us,” says IDC analyst Brandon Butler. ... Platforms enable more predictable IT costs. And they enable strategic thinking when it comes to major moves like shifting to the cloud or taking a NaaS approach. On a more operational level, platforms break down silos. They enable visibility and analytics, as well as management and automation of networking and IT resources. And they simplify lifecycle management of hardware, software, firmware and security patches. Platforms also enhance the benefits of AIOps by creating a comprehensive data lake of telemetry information across domains.


‘Secure email’: A losing battle CISOs must give up

It is impossible to guarantee that email is fully end-to-end encrypted in transit and at rest. Even where Google and Microsoft encrypt client data at rest, they hold the keys and have access to personal and corporate email. Stringent server configurations and addition of third-party tools can be used to enforce security of the data but they’re often trivial to circumvent — e.g., CC just one insecure recipient or distribution list and confidentiality is breached. Forcing encryption by rejecting clear-text SMTP connections would lead to significant service degradation, forcing employees to look for workarounds. There is no foolproof configuration that guarantees data encryption due to the history of clear-text SMTP servers and the prevalence of their use today. SMTP comes from an era before cybercrime and mass global surveillance of online communications, so encryption and security were not built in. We’ve tacked on solutions like SPF, DKIM and DMARC by leveraging DNS, but they are not widely adopted, still open to multiple attacks, and cannot be relied on for consistent communications. TLS has been wedged into SMTP to encrypt email in transit, but falling back to clear-text transmission is still the default on a significant number of servers on the Internet to ensure delivery. All these solutions are cumbersome for systems administrators to configure and maintain properly, which leads to lack of adoption or failed delivery.
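To make the DNS-based patchwork concrete: a DMARC policy is just a TXT record published at `_dmarc.<domain>`, whose tag/value pairs receiving servers parse to decide what to do with unauthenticated mail. A minimal illustrative parser follows; the domain and report address are hypothetical, not from the article.

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record string into a tag/value dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        # partition splits on the FIRST '=', so values like
        # 'mailto:dmarc@example.com' survive intact.
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

# A typical published record: quarantine nothing, reject failures outright,
# and send aggregate reports to a monitoring mailbox.
policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc@example.com; pct=100")
```

Even this simple structure shows why adoption lags: the sending domain must publish the record, receivers must honor it, and a single misconfigured tag silently weakens the policy.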


3 Factors Many Platform Engineers Still Get Wrong

The first factor revolves around the use of a codebase version-control system. The more seasoned readers may remember Mercurial or Subversion, but every developer is familiar with Git, most widely used today via the hosting service GitHub. The first factor is very clear: If there are “multiple codebases, it’s not an app; it’s a distributed system.” Code repositories reinforce this: Only one codebase exists for an application. ... Factor number two is about never relying on the implicit existence of packages. While just about every operating system in existence has a version of curl installed, a Twelve Factor-based app does not assume that curl is present. Rather, the application declares curl as a dependency in a manifest. Every developer has copied code and tried to run it, only to find that the local environment is missing a dependency. The dependency manifest ensures that all of the required libraries and applications are defined and can be easily installed when the application is deployed on a server. ... Most applications have environment variables and secrets stored in a .env file that is not saved in the code repository. The .env file is customized and manually deployed for each branch of the code to ensure the correct connectivity occurs in test, staging and production. By independently managing credentials and connections for each environment, there is a strict separation, and it is less likely for the environments to accidentally cross.
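The environment-specific config discipline described above can be sketched in a few lines: settings are read from the environment at startup, so each deploy (test, staging, production) supplies its own values and secrets never enter the repository. The variable names below are illustrative, not from the Twelve-Factor text.

```python
import os

def load_config(environ=os.environ) -> dict:
    """Read connection settings from the environment (Twelve-Factor config).

    Optional settings get safe development defaults; required secrets
    raise KeyError immediately, so a misconfigured deploy fails fast
    instead of running with the wrong credentials.
    """
    return {
        "database_url": environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "api_key": environ["API_KEY"],  # required: no default on purpose
    }

# Passing a plain dict stands in for a deploy's environment, which also
# makes the loader trivially testable.
config = load_config({"API_KEY": "test-key-123"})
```

In production the same code runs unchanged; only the environment differs, which is exactly the strict separation the factor calls for.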


AI and privacy: how machine learning is revolutionizing online security

Despite the clear advantages, AI in cybersecurity presents significant ethical and operational challenges. One of the primary concerns is the vast amount of personal and behavioral data required to train these models. If not properly managed, this data could be misused or exposed. Transparency and explainability are critical, particularly in AI systems offering real-time responses. Users and regulators must understand how decisions are made, especially in high-stakes environments like fraud detection or surveillance. Companies integrating AI into live platforms must ensure robust privacy safeguards. For instance, systems that utilize real-time search or NLP must implement strict safeguards to prevent the inadvertent exposure of user queries or interactions. This has led many companies to establish AI ethics boards and integrate fairness audits to ensure algorithms don’t introduce or perpetuate bias. ... AI is poised to bring even greater intelligence and autonomy to cybersecurity infrastructure. One area under intense exploration is adversarial robustness, which ensures that AI models cannot be easily deceived or manipulated. Researchers are working on hardening models against adversarial inputs, such as subtly altered images or commands that can fool AI-driven recognition systems.
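The adversarial inputs mentioned here are often generated with a one-step sign-of-gradient perturbation (the fast gradient sign method, FGSM). As a toy sketch only: for a linear scorer s(x) = w·x the gradient with respect to the input is simply w, so an attacker can shift the score with tiny per-feature nudges. The weights and inputs below are invented for illustration.

```python
def fgsm_perturb(x, w, eps):
    """One-step FGSM: move each feature by eps in the sign of the gradient.

    For a linear score s(x) = sum(w_i * x_i), the gradient w.r.t. x_i is
    w_i, so the attack adds eps * sign(w_i) to each feature.
    """
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]

def score(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

w = [0.5, -1.0, 2.0]       # model weights (the gradient, here)
x = [1.0, 1.0, 1.0]        # original input
x_adv = fgsm_perturb(x, w, eps=0.1)
# Each feature moved by at most 0.1, yet the score strictly increased.
```

Hardening against this means making the score insensitive to such bounded perturbations, which is far harder for deep nonlinear models than for this toy.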


Achieving Successful Outcomes: Why AI Must Be Considered an Extension of Data Products

To increase agility and maximize the impact that AI data products can have on business outcomes, companies should consider adopting DataOps best practices. Like DevOps, DataOps encourages developers to break projects down into smaller, more manageable components that can be worked on independently and delivered more quickly to data product owners. Instead of manually building, testing, and validating data pipelines, DataOps tools and platforms enable data engineers to automate those processes, which not only speeds up the work and produces high-quality data, but also engenders greater trust in the data itself. DataOps was defined many years before GenAI. Whether it’s for building BI and analytics tools powered by SQL engines or for building machine learning algorithms powered by Spark or Python code, DataOps has played an important role in modernizing data environments. One could make a good argument that the GenAI revolution has made DataOps even more needed and more valuable. If data is the fuel powering AI, then DataOps has the potential to significantly improve and streamline the behind-the-scenes data engineering work that goes into connecting GenAI and AI agents to data.
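The automated build-test-validate loop described above typically includes data quality gates that run before a batch is promoted to a data product owner. A minimal sketch of such a gate; the column names are hypothetical.

```python
def validate_batch(rows, required, not_null):
    """A minimal data quality gate a DataOps pipeline might run automatically.

    Returns a list of (row_index, message) tuples; an empty list means
    the batch may be promoted downstream.
    """
    errors = []
    for i, row in enumerate(rows):
        missing = [c for c in required if c not in row]
        if missing:
            errors.append((i, f"missing columns: {missing}"))
        nulls = [c for c in not_null if row.get(c) is None]
        if nulls:
            errors.append((i, f"null values in: {nulls}"))
    return errors

rows = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": None}]
errors = validate_batch(rows, required=["id", "amount"], not_null=["amount"])
```

Wiring checks like this into the pipeline, rather than running them by hand, is what produces both the speed-up and the trust the paragraph describes.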


Is European cloud sovereignty at an inflection point?

True cloud sovereignty goes beyond simply localizing data storage; it requires full independence from US hyperscalers. The US 2018 Clarifying Lawful Overseas Use of Data (CLOUD) Act highlights this challenge, as it grants US authorities and federal agencies access to data stored by US cloud service providers, even when hosted in Europe. This raises concerns about whether any European data hosted with US hyperscalers can ever be truly sovereign, even if housed within European borders. However, sovereignty isn’t dependent on where data is hosted; it’s about autonomy over who controls infrastructure. Many so-called sovereign cloud providers continue to depend on US hyperscalers for critical workloads and managed services, projecting an image of independence while remaining dependent on dominant global hyperscalers. ... Achieving true cloud sovereignty requires building an environment that empowers local players to compete and collaborate with hyperscalers. While hyperscalers play a large role in the broader cloud landscape, Europe cannot depend on them for sovereign data. Tessier echoes this, stating “the new US Administration has shown that it won’t hesitate to resort either to sudden price increases or even to stiffening delivery policy. It’s time to reduce our dependencies, not to consider that there is no alternative.”


Why data provenance must anchor every CISO’s AI governance strategy

Provenance is more than a log. It’s the connective tissue of data governance. It answers fundamental questions: Where did this data originate? How was it transformed? Who touched it, and under what policy? And in the world of LLMs – where outputs are dynamic, context is fluid, and transformation is opaque – that chain of accountability often breaks the moment a prompt is submitted. In traditional systems, we can usually trace data lineage. We can reconstruct what was done, when, and why. ... There’s a popular belief that regulators haven’t caught up with AI. That’s only half-true. Most modern data protection laws – GDPR, CPRA, India’s DPDPA, and the Saudi PDPL – already contain principles that apply directly to LLM usage: purpose limitation, data minimization, transparency, consent specificity, and erasure rights. The problem is not the regulation – it’s our systems’ inability to respond to it. LLMs blur roles: is the provider a processor or a controller? Is a generated output a derived product or a data transformation? When an AI tool enriches a user prompt with training data, who owns that enriched artifact, and who is liable if it leads to harm? In audit scenarios, you won’t be asked if you used AI. You’ll be asked if you can prove what it did, and how. Most enterprises today can’t.
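One way to make the "where did it originate, how was it transformed, who touched it, under what policy" chain tamper-evident is to hash-chain provenance entries, so that altering any past record breaks every hash after it. The sketch below is illustrative; field names, job identifiers, and policy labels are invented.

```python
import hashlib
import json

def provenance_entry(origin, transformation, actor, policy, prev_hash=""):
    """One link in a provenance chain.

    Each entry records origin, transformation, actor, and governing policy,
    and embeds the previous entry's hash, so rewriting history is detectable.
    """
    entry = {
        "origin": origin,
        "transformation": transformation,
        "actor": actor,
        "policy": policy,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

e1 = provenance_entry("crm_export.csv", "ingest", "etl-job-7",
                      "gdpr:purpose=billing")
e2 = provenance_entry("crm_export.csv", "anonymize", "etl-job-8",
                      "gdpr:purpose=analytics", prev_hash=e1["hash"])
```

The hard part with LLMs is not the log format but the break the article identifies: the transformation step inside the model is opaque, so the chain must at least capture the prompt, the model version, and the policy in force when the prompt was submitted.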


Multicloud developer lessons from the trenches

Before your development teams write a single line of code destined for multicloud environments, you need to know why you’re doing things that way — and that lives in the realm of management. “Multicloud is not a developer issue,” says Drew Firment, chief cloud strategist at Pluralsight. “It’s a strategy problem that requires a clear cloud operating model that defines when, where, and why dev teams use specific cloud capabilities.” Without such a model, Firment warns, organizations risk spiraling into high costs, poor security, and, ultimately, failed projects. To avoid that, companies must begin with a strategic framework that aligns with business goals and clearly assigns ownership and accountability for multicloud decisions. ... The question of when and how to write code that’s strongly tied to a specific cloud provider and when to write cross-platform code will occupy much of the thinking of a multicloud development team. “A lot of teams try to make their code totally portable between clouds,” says Davis Lam. ... What’s the key to making that core business logic as portable as possible across all your clouds? The container orchestration platform Kubernetes was cited by almost everyone we spoke to.
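In practice, the portability question usually resolves into keeping core business logic behind cloud-neutral interfaces and confining provider-specific code to thin adapters at the edges, whether or not Kubernetes is in the picture. A sketch of that pattern; the names are hypothetical, and real adapters would wrap the S3, GCS, or Azure Blob SDKs.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Cloud-neutral storage interface; business logic depends only on this."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Test double. Per-cloud implementations would live in separate adapters."""
    def __init__(self):
        self._data = {}
    def put(self, key, data):
        self._data[key] = data
    def get(self, key):
        return self._data[key]

def archive_order(store: BlobStore, order_id: str, payload: bytes):
    # Core business logic: identical on every cloud, and unit-testable
    # without any cloud credentials at all.
    store.put(f"orders/{order_id}", payload)

store = InMemoryStore()
archive_order(store, "42", b'{"total": 99}')
```

The trade-off the teams in the article wrestle with is where to draw the line: abstracting everything forfeits cloud-native strengths, while abstracting nothing locks the logic to one provider.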


Fix It or Face the Consequences: CISA's Memory-Safe Muster

As of this writing, 296 organizations have signed the Secure-by-Design pledge, from widely used developer platforms like GitHub to industry heavyweights like Google. Similar initiatives have been launched in other countries, including Australia, reflecting the reality that secure software needs to be a global effort. But there is a long way to go, considering the thousands of organizations that produce software. As the name suggests, Secure-by-Design promotes shifting left in the SDLC to gain control over the proliferation of security vulnerabilities in deployed software. This is especially important as the pace of software development has been accelerated by the use of AI to write code, sometimes with just as many — or more — vulnerabilities compared with software made by humans. ... Providing training isn't quite enough, though — organizations need to be sure that the training provides the necessary skills that truly connect with developers. Data-driven skills verification can give organizations visibility into training programs, helping to establish baselines for security skills while measuring the progress of individual developers and the organization as a whole. Measuring performance in specific areas, such as within programming languages or specific vulnerability management, paves the way to achieving holistic Secure-by-Design goals, in addition to the safety gains that would be realized from phasing out memory-unsafe languages.

Daily Tech Digest - May 27, 2025


Quote for the day:

"Everyone is looking for the elevator to success... it doesn't exist; we all have to take the stairs" -- Gordon Tredgold


What we know now about generative AI for software development

“GenAI is used primarily for code, unit test, and functional test generation, and its accuracy depends on providing proper context and prompts,” says David Brooks, SVP of evangelism at Copado. “Skilled developers can see 80% accuracy, but not on the first response. With all of the back and forth, time savings are in the 20% range now but should approach 50% in the near future.” AI coding assistants also help junior developers learn coding skills, automate test cases, and address code-level technical debt. ... “GenAI is currently easiest to apply to application prototyping because it can write the project scaffolding from scratch, which overcomes the ‘blank sheet of paper’ problem where it can be difficult to get started from nothing,” says Matt Makai, VP of developer relations and experience at LaunchDarkly. “It’s also exceptional for integrating web RESTful APIs into existing projects because the amount of code that needs to be generated is not typically too much to fit into an LLM’s context window. Finally, genAI is great for creating unit tests either as part of a test-driven development workflow or just to check assumptions about blocks of code.” One promising use case is helping developers review code they didn’t create to fix issues, modernize, or migrate to other platforms.


How to upskill software engineering teams in the age of AI

The challenge lies not just in learning to code — it’s in learning to code effectively in an AI-augmented environment. Software engineering teams becoming truly proficient with AI tools requires a level of expertise that can be hindered by premature or excessive reliance on the very tools in question. This is the “skills-experience paradox”: junior engineers must simultaneously develop foundational programming competencies while working with AI tools that can mask or bypass the very concepts they need to master. ... Effective AI tool use requires shifting focus from productivity metrics to learning outcomes. This aligns with current trends — while professional developers primarily view AI tools as productivity enhancers, early-career developers focus more on their potential as learning aids. To avoid discouraging adoption, leaders should emphasize how these tools can accelerate learning and deepen understanding of software engineering principles. To do this, they should first frame AI tools explicitly as learning aids in new developer onboarding and existing developer training programs, highlighting specific use cases where they can enhance the understanding of complex systems and architectural patterns. Then, they should implement regular feedback mechanisms to understand how developers are using AI tools and what barriers they face in adopting them effectively.


Microsoft Brings Post-Quantum Cryptography to Windows and Linux in Early Access Rollout

The move represents another step in Microsoft’s broader security roadmap to help organizations prepare for the era of quantum computing — an era in which today’s encryption methods may no longer be safe. By adding support for PQC in early-access builds of Windows and Linux, Microsoft is encouraging businesses and developers to begin testing new cryptographic tools that are designed to resist future quantum attacks. ... The company’s latest update is part of an ongoing push to address a looming problem known as “harvest now, decrypt later” — a strategy in which bad actors collect encrypted data today with the hope that future quantum computers will be able to break it. To counter this risk, Microsoft is enabling early implementation of PQC algorithms that have been standardized by the U.S. National Institute of Standards and Technology (NIST), including ML-KEM for key exchanges and ML-DSA for digital signatures. ... Developers can now begin testing how these new algorithms fit into their existing security workflows, according to the post. For key exchanges, the supported parameter sets include ML-KEM-512, ML-KEM-768 and ML-KEM-1024, which offer varying levels of security and come with trade-offs in key size and performance.
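To give a feel for those key-size trade-offs, the snippet below tabulates the byte sizes defined for the three ML-KEM parameter sets in NIST FIPS 203; the figures are quoted from memory of the spec and worth double-checking before relying on them.

```python
# Key and ciphertext sizes (bytes) for the ML-KEM parameter sets,
# per NIST FIPS 203 -- the trade-off the article alludes to.
ML_KEM_SIZES = {
    "ML-KEM-512":  {"encaps_key": 800,  "decaps_key": 1632, "ciphertext": 768},
    "ML-KEM-768":  {"encaps_key": 1184, "decaps_key": 2400, "ciphertext": 1088},
    "ML-KEM-1024": {"encaps_key": 1568, "decaps_key": 3168, "ciphertext": 1568},
}

def overhead_vs_x25519(param_set: str) -> float:
    """Rough public-key size multiple relative to a 32-byte X25519 key."""
    return ML_KEM_SIZES[param_set]["encaps_key"] / 32
```

A mid-range ML-KEM-768 public key is dozens of times larger than today's elliptic-curve keys, which is why early testing in real workflows, handshake sizes, packet fragmentation, and certificate chains included, matters so much.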


The great IT disconnect: Vendor visions of the future vs. IT’s task at hand

The “vision thing” has become a metonym used to describe a leader’s failure to incorporate future concerns into task-at-hand actions. There was a time when CEOs at major solution providers supplied vision and inspiration on where we were heading. The “futures” being articulated from the podia at major tech conferences today lack authenticity. Most importantly, they do not reflect the needs and priorities of real people who work in real IT. In a world where technology allows deeper and cheaper connectivity, top-of-the-house executives at solution providers have never been more out of touch with the lived experience of their customers. The vendor CEOs, their direct reports, and their first-levels live in a bubble that has little to do with the reality being lived by the world’s CIOs. ... Who is the generational voice for the Age of AI? Is it Jensen Huang, CEO at Nvidia; Sam Altman, CEO at OpenAI; Marc Andreessen, co-founder of venture capital firm Andreessen Horowitz; or Elon Musk, at Tesla, SpaceX, and xAI? Who has laid out a future you can believe in, a future you want to live in? Does the CEO at your major tech supplier understand what matters most to you and your organization? The futurist agenda has been hijacked from focusing on the semi-immediate “what comes next.”


Claude Opus 4 is Anthropic's Powerful, Problematic AI Model

An Opus 4 safety report details concerns. One test involved Opus 4 being told "to act as an assistant at a fictional company," after which it was given access to emails - also fictional - suggesting Opus would be replaced, and by an engineer who was having an extramarital affair. "In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it's implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts," the safety report says. "Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes." Anthropic said the tests involved carefully designed scenarios, framing blackmail as a last resort if ethical approaches failed, such as lobbying senior management. The model's behavior was concerning enough for Anthropic to classify it under its ASL-3 safeguard level, reserved for systems that pose a substantial risk of catastrophic misuse. The designation comes with stricter safety measures, including content filters and cybersecurity defenses.


Biometric authentication vs. AI threats: Is mobile security ready?

The process of third-party evaluation against industry standards acts as a layer of trust between all players operating in the ecosystem. It should not be thought of as a tick-box exercise, but rather a continuous process to ensure compliance with the latest standards and regulatory requirements. In doing so, device manufacturers and biometric solution providers can collectively raise the bar for biometric security. Robust testing and compliance protocols ensure that all devices and components meet standardized requirements. This is made possible by trusted and recognized labs, like Fime, which can provide OEMs and solution providers with tools and expertise to continually optimize their products. But testing doesn’t just safeguard the ecosystem; it elevates it. For example, innovative new techniques test for bias across demographic groups and environmental conditions. ... We have reached a critical moment for the future of biometric authentication. The success of the technology is predicated on the continued growth in its adoption, but with AI giving fraudsters the tools they need to transform the threat landscape at a faster pace than ever before, it is essential that biometric solution providers stay one step ahead to retain and grow user trust. Stakeholders must therefore focus on one key question:


How ‘dark LLMs’ produce harmful outputs, despite guardrails

LLMs, although they have positively impacted millions, still have their dark side, the authors wrote, noting, “these same models, trained on vast data, which, despite curation efforts, can still absorb dangerous knowledge, including instructions for bomb-making, money laundering, hacking, and performing insider trading.” Dark LLMs, they said, are advertised online as having no ethical guardrails and are sold to assist in cybercrime. ... “A critical vulnerability lies in jailbreaking — a technique that uses carefully crafted prompts to bypass safety filters, enabling the model to generate restricted content.” And it’s not hard to do, they noted. “The ease with which these LLMs can be manipulated to produce harmful content underscores the urgent need for robust safeguards. The risk is not speculative — it is immediate, tangible, and deeply concerning, highlighting the fragile state of AI safety in the face of rapidly evolving jailbreak techniques.” Analyst Justin St-Maurice, technical counselor at Info-Tech Research Group, agreed. “This paper adds more evidence to what many of us already understand: LLMs aren’t secure systems in any deterministic sense,” he said. “They’re probabilistic pattern-matchers trained to predict text that sounds right, not rule-bound engines with an enforceable logic. Jailbreaks are not just likely, but inevitable.”


Coaching for personal excellence: Why the future of leadership is human-centered

As organisations grapple with rapid technological shifts, evolving workforce expectations and the complex human dynamics of hybrid work, one thing has become clear: leadership isn’t just about steering the ship. It’s about cultivating the emotional resilience, adaptability and presence to lead people through ambiguity — not by force, but by influence. This is why coaching is no longer a ‘nice-to-have.’ It’s a strategic imperative. A lever not just for individual growth, but for organisational transformation. The real challenge? Even seasoned leaders now stand at a crossroads: cling to the illusion of control, or step into the discomfort of growth — for themselves and their teams. Coaching bridges this gap. It reframes leadership from giving directions to unlocking potential. From managing outcomes to enabling insight. ... Many people associate coaching with helping others improve. But the truth is, coaching begins within. Before a leader can coach others, they must learn to observe, challenge, and support themselves. That means cultivating emotional intelligence. Practising deep reflection. Learning to regulate reactions under stress. And perhaps most importantly, understanding what personal excellence looks like—and feels like—for them.


5 types of transformation fatigue derailing your IT team

Transformation fatigue is the feeling employees face when change efforts consistently fall short of delivering meaningful results. When every new initiative feels like a rerun of the last, teams disengage; it’s not change that wears them down, it’s the lack of meaningful progress. This fatigue is rarely acknowledged, yet its effects are profound. ... Organise around value streams and move from annual plans to more adaptive, incremental delivery. Allow teams to release meaningful work more frequently and see the direct outcomes of their efforts. When value is visible early and often, energy is easier to sustain. Also, leaders can achieve this by shifting from a traditional project-based model to a product-led approach, embedding continuous delivery into the way teams work, rather than treating. ... Frameworks can be helpful, but too often, organisations adopt them in the hope they’ll provide a shortcut to transformation. Instead, these approaches become overly rigid, emphasising process compliance over real outcomes. ... What leaders can do: Focus on mindset, not methodology. Leaders should model adaptive thinking, support experimentation, and promote learning over perfection. Create space for teams to solve problems, rather than follow playbooks that don’t fit their context.


Why app modernization can leave you less secure

In most enterprises, session management is implemented using the capabilities native to the application’s framework. A Java app might use Spring Security. A JavaScript front-end might rely on Node.js middleware. Ruby on Rails handles sessions differently still. Even among apps using the same language or framework, configurations often vary widely across teams, especially in organizations with distributed development or recent acquisitions. This fragmentation creates real-world risks: inconsistent timeout policies, delayed patching, and session revocation gaps. Also, there’s the problem of developer turnover: Many legacy applications were developed by teams that are no longer with the organization, and without institutional knowledge or centralized visibility, updating or auditing session behavior becomes a guessing game. ... As one of the original authors of the SAML standard, I’ve seen how identity protocols evolve and where they fall short. When we scoped SAML to focus exclusively on SSO, we knew we were leaving other critical areas (like authorization and user provisioning) out of the equation. That’s why other standards emerged, including SPML, AuthXML, and now efforts like IDQL. The need for identity systems that interoperate securely across clouds isn’t new, it’s just more urgent now.
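One practical first step toward the centralized visibility the passage calls for is a configuration audit that flags applications whose session settings drift from an org-wide policy. A sketch follows; the app names, config keys, and 30-minute policy are hypothetical.

```python
def audit_session_timeouts(app_configs, max_idle_minutes=30):
    """Flag apps whose session idle timeout exceeds the org-wide policy.

    Apps with no timeout configured at all are treated as infinite,
    so they are flagged too -- often the riskiest case in practice.
    """
    return [name for name, cfg in app_configs.items()
            if cfg.get("idle_timeout_min", float("inf")) > max_idle_minutes]

configs = {
    "java-portal":   {"idle_timeout_min": 30},   # within policy
    "node-frontend": {"idle_timeout_min": 480},  # set by a long-gone team
    "rails-admin":   {},                         # no timeout configured at all
}
violations = audit_session_timeouts(configs)
```

Running a check like this across every inherited and acquired app turns the "guessing game" into an inventory you can actually remediate.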

Daily Tech Digest - May 26, 2025


Quote for the day:

“Don't blow off another's candle for it won't make yours shine brighter.” -- Jaachynma N.E. Agu



Beyond single-model AI: How architectural design drives reliable multi-agent orchestration

It’s no longer just about building a single, super-smart model. The real power, and the exciting frontier, lies in getting multiple specialized AI agents to work together. Think of them as a team of expert colleagues, each with their own skills — one analyzes data, another interacts with customers, a third manages logistics, and so on. Getting this team to collaborate seamlessly, as envisioned by various industry discussions and enabled by modern platforms, is where the magic happens. But let’s be real: Coordinating a bunch of independent, sometimes quirky, AI agents is hard. It’s not just building cool individual agents; it’s the messy middle bit — the orchestration — that can make or break the system. When you have agents that are relying on each other, acting asynchronously and potentially failing independently, you’re not just building software; you’re conducting a complex orchestra. This is where solid architectural blueprints come in. We need patterns designed for reliability and scale right from the start. ... For agents to collaborate effectively, they often need a shared view of the world, or at least the parts relevant to their task. This could be the current status of a customer order, a shared knowledge base of product information or the collective progress towards a goal. Keeping this “collective brain” consistent and accessible across distributed agents is tough. 
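A minimal sketch of that orchestration pattern, in which independent agents run concurrently, retry on failure, and write results into shared state, using Python's asyncio. The agent names and tasks here are invented for illustration; a production system would also need timeouts, persistence, and consistency controls on the shared store.

```python
import asyncio

async def run_agent(name, task, state, retries=2):
    """Run one agent with simple retry; results land in a shared state dict."""
    for attempt in range(retries + 1):
        try:
            state[name] = await task()
            return
        except RuntimeError:
            if attempt == retries:
                state[name] = "failed"  # surface the failure, don't hide it

async def orchestrate():
    state = {}  # the shared "collective brain" for this workflow

    async def analyze():
        return "fraud-score: 0.02"

    async def notify():
        return "customer emailed"

    # Agents act asynchronously and may fail independently; gather runs
    # them concurrently while each manages its own retries.
    await asyncio.gather(
        run_agent("analysis", analyze, state),
        run_agent("notification", notify, state),
    )
    return state

state = asyncio.run(orchestrate())
```

Even this toy shows where the hard problems live: who owns the shared state, what "failed" means downstream, and how partial results are reconciled.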


Unstructured Data Management Tips

"Unlike traditional databases, which define the schema -- the data's structure -- before it's stored, schema-on-read defers this process until the data is actually read or queried," says Kamal Hathi, senior vice president and general manager at Splunk, a Cisco company that makes machine-generated data monitoring and analysis software. This approach is particularly effective for unstructured and semi-structured data, where the schema is not predefined or rigid, Hathi says. "Traditional databases require a predefined schema, which makes working with unstructured data challenging and less flexible." ... Manage unstructured data by integrating it with structured data in a cloud environment using metadata tagging and AI-driven classifications, suggests Cam Ogden, a senior vice president at data integrity firm Precisely. "Traditionally, structured data -- like customer databases or financial records -- reside in well-organized systems such as relational databases or data warehouses," he says. However, to fully leverage all of their data, organizations need to break down the silos that separate structured data from other forms of data, including unstructured data such as text, images, or log files. This is where the cloud comes into play. Integrating structured and unstructured data in the cloud allows for more comprehensive analytics, enabling organizations to extract deeper insights from previously siloed information, Ogden says.
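Schema-on-read can be illustrated in a few lines: the raw log lines below stay unstructured at ingest, and each query imposes only the fields it needs at the moment it reads. The log format and field names are invented for the example.

```python
import re

# Stored as-is at ingest time -- no schema was declared up front.
raw_logs = [
    "2025-05-26T10:00:01 user=alice action=login status=ok",
    "2025-05-26T10:00:05 user=bob action=upload status=fail size=2048",
]

def read_with_schema(lines, fields):
    """Schema-on-read: structure is imposed at query time, not at ingest.

    Different queries can project different fields from the same raw
    lines; fields a line lacks simply come back as None.
    """
    rows = []
    for line in lines:
        kv = dict(re.findall(r"(\w+)=(\S+)", line))
        rows.append({f: kv.get(f) for f in fields})
    return rows

failures = [r for r in read_with_schema(raw_logs, ["user", "status"])
            if r["status"] == "fail"]
```

A later query could ask for `size` or `action` from the very same lines without any migration, which is the flexibility Hathi describes.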


Why IT Certifications Are Now the Hottest Currency in Tech

The reasons are manifold. Inflation has eroded buying power, traditional merit-based raises have declined, bonuses are scarcer and 2024 saw a sharp uptick in layoffs - particularly targeting middle management and older professionals. Unlike the "Great Resignation" of 2021, professionals today are staying put - not from loyalty but from caution, and upskilling is the key to ensure their longevity. Faced with a precarious job market and declining benefits, many IT employees are opting for stability and doubling down on internal mobility. According to the Pearson VUE's 2025 Value of IT Certification Candidate Report, more than 80% of the respondents who hold at least one certification said it enhanced their ability to innovate and 70% said they experienced greater autonomy at workplace. Even in regions where pay bumps are smaller, the career mobility afforded by certifications is evident. In India, for instance, CloudThat's IT Certifications and Salary Survey found that Microsoft-certified professionals earn an average entry salary of $10,900, with 60% of certified workers reporting pay hikes. "The increased value in certifications underscores their critical role in equipping professionals with the skills needed to excel and advance in their roles. As the industry continues to grow, certifications are becoming essential to stand out and meet the demand for specialized skills," said Bhavesh Goswami, founder and CEO of CloudThat.


Speed and scalability redefine the future of modern banking

To expedite digitalisation, global policymakers are introducing regulations such as India’s Digital Banking Units (DBUs), the EU’s PSD2/PSD3 directives, and the GCC’s open finance guidelines. In recent years, the growth of non-bank financial intermediaries (NBFIs), more intricate and wider in scope than before, has obliged firms to adopt more effective compliance frameworks and better risk management strategies. ... Integrating banking directly into non-financial platforms such as e-commerce is on the rise. Based on a report by Grand View Research, the global Banking-as-a-Service (BaaS) market is expected to reach USD 66 billion by 2030. Retailers increasingly partner with banks for instant, personalised offers and payments via identity beacons, enhancing customer experiences through Gen AI-supported interactions. For example, real-time data analytics and machine learning models are now essential for personalised financial services. Reimagined branch visits are becoming an emerging trend, with branches shifting to high-footfall locations like malls. The store-like experience includes personalised offers and decision aids, including immediate approval for flexible loans, made possible by customer identification based on consent.


5 questions to test tech resilience and build a 90-day action plan

The convergence of AI with existing systems has brought technical debt into sharp focus. While AI, and agentic AI in particular, presents transformative opportunities, it also exposes the limitations of legacy systems and architectural decisions made in the past. It’s essential to balance the excitement of AI adoption with the pragmatic need to address underlying technical debt, as we explored in our recent research. ... While AI enthusiasm runs high, successful implementation requires careful focus on use cases that deliver tangible business value. CIOs must lead their organizations in identifying and executing AI initiatives that drive meaningful outcomes. That means defining AI programs with a holistic, end-to-end vision of how they’ll deliver value for your business. And it means taking a platform approach, as opposed to numerous isolated PoCs. ... The traditional boundaries of IT are dissolving. With technology now fundamentally driving business strategy, CIOs must lead the evolution from an IT operating model to a new business technology operating model. Recent data shows organizations that have embraced this transformation achieved 15% higher top-line performance compared to their peers, with potential for this gap to double by next year.


LlamaFirewall: Open-source framework to detect and mitigate AI centric security risks

One particularly concerning area is the use of LLMs in coding applications. “Coding agents that rely on LLM-generated code may inadvertently introduce security vulnerabilities into production systems,” Chennabasappa warned. “Misaligned multi-step reasoning can also cause agents to perform operations that stray far beyond the user’s original intent.” These types of risks are already surfacing in coding copilots and autonomous research agents, she added, and are only likely to grow as agentic systems become more common. Yet while LLMs are being embedded deeper into mission-critical workflows, the surrounding security infrastructure hasn’t kept pace. “Security infrastructure for LLM-based systems is still in its infancy,” Chennabasappa said. “So far, the industry’s focus has been mostly limited to content moderation guardrails meant to prevent chatbots from generating misinformation or abusive content.” That approach, she argued, is far too narrow. It overlooks deeper, more systemic threats like prompt injection, insecure code generation, and abuse of code interpreter capabilities. Even proprietary safety systems that hardcode rules into model inference APIs fall short, according to Chennabasappa, because they lack the transparency, auditability, and flexibility needed to secure increasingly complex AI applications.


Navigating Double and Triple Extortion Tactics

In double extortion attacks, a second layer is added: attackers, having gained access to the system, exfiltrate sensitive and valuable data. This not only deepens the victim’s vulnerability but also increases pressure, as attackers now hold both encrypted files and stolen information, which they can use as leverage for further demands. The threat of double extortion becomes more severe as it combines operational disruption (due to encrypted data and downtime) with the risk of public exposure. Organizations unable to access their data face halted services, financial loss, and reputational damage. ... Triple extortion expands upon traditional and double extortion ransomware tactics by introducing a third layer of pressure. The attack begins with data encryption and exfiltration, similar to the double extortion model—locking the victim out of their data while simultaneously stealing sensitive information. This stolen data gives attackers multiple avenues to exploit the victim, who is left with no control over its fate. The third stage involves third-party extortion. After collecting data from the primary victim, attackers identify and target affiliated parties, such as partners, clients, and stakeholders, whose information was also compromised. 


The 7 unwritten rules of leading through crisis

Your first move shouldn’t be panic-fixing everything in silence, Young says. “You need to let people know what’s going on, including your team, your leadership, and sometimes even your customers.” Keeping everyone in the loop calms nerves and builds trust. Silence makes everything worse, Young warns. ... Confusion is contagious. “Providing clarity about what’s known, what matters, and what you’re aiming for, stabilizes people and systems,” says Leila Rao, a workplace and executive coaching consultant. “It sets the tone for proactivity instead of reactivity.” Simply treating symptoms will make the problem worse, Rao warns. “Misinformation spreads, trust erodes, and well-intentioned responses become counterproductive.” Crisis is complexity on steroids, Rao observes. “When we center people, welcome multiple perspectives, and make space for emergence, we move from crisis management to collective learning.” ... You can’t hide from a crisis, and attempting to do so only compounds the damage, Hasmukh warns. “Clear visibility into what happened allows you to respond effectively and maintain stakeholder trust during challenging times.” Organizations that delay acknowledging issues inevitably face greater scrutiny and damage than those that address situations head-on.


BYOD like it’s 2025

The data is clear that there can be significant gains in productivity attached to BYOD. Samsung estimates that workers using their own devices can gain about an hour of productive worktime per day and Cybersecurity Insiders says that 68% of businesses see some degree of productivity increases. Although the gains are significant, personal devices can also distract workers more than company-owned devices, with personal notifications, social media accounts, news, and games being the major time-sink culprits. This has the potential to be a real issue, as these apps can become addictive and their use compulsive. ... One challenge for BYOD has always been user support and education. With two generations of digital natives now comprising more than half the workforce, support and education needs have changed. Both millennials and Gen Z have grown up with the internet and mobile devices, which makes them more comfortable making technology decisions and troubleshooting problems than baby boomers and Gen X. This doesn’t mean that they don’t need tech support, but they do tend to need less hand-holding and don’t instinctively reach for the phone to access that support. Thus, there’s an ongoing shift to self-support resources and other, less time-intensive, models with text chat being the most common — be it with a person or a bot.


You have seen the warnings: your next IT outage could be worse

In-band management uses the same data path as production traffic to manage the customer environment, while logically isolating management traffic from production data. Although this approach can be more cost-effective, it introduces certain risks. If a problem occurs with the production network, it can also disrupt management access to the infrastructure, a situation referred to as “fate sharing.” In these cases, the only viable solution may be to send an engineer onsite to diagnose and resolve the issue. This can result in significant costs and delays, potentially impacting the customer’s business operations. Out-of-band management, on the other hand, uses a separate network to provide independent access for managing the infrastructure, completely isolating management traffic from the production network. This separation is crucial during major disruptions like provider outages or security breaches, as it guarantees continuous access to network devices and servers, even if the primary production network is down or compromised. ... A secure connection links this cloud infrastructure to the customer’s on-premises IT setup, usually through a dedicated private network connection, SD-WAN, or an IPsec VPN. This connection typically terminates at an on-premises router or firewall, safeguarding access to the out-of-band management network.

Daily Tech Digest - May 25, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel



What CIOs Need to Know About the Technical Aspects of AI Integration

AI systems are built around models that utilize data stores, algorithms for query, and machine learning that expands the AI’s body of knowledge as the AI recognizes common logic patterns in data and assimilates knowledge from them. There are many different AI models to choose from. In most cases, companies use predefined AI models from vendors and then expand on them. In other cases, companies elect to build their own models “from scratch.” Building from scratch usually means that the organization has an on-board data science group with expertise in AI model building. Common AI model frameworks (e.g., TensorFlow, PyTorch, Keras, and others) provide the software resources and tools. ... The AI has to be integrated seamlessly with the top to bottom tech stack if it is going to work. This means discussing how and where data from the AI will be stored, with SQL and NoSQL databases being the early favorites. The AI must also interface with middleware that enables it to interoperate with other IT systems. Most AI models are open source, which can simplify integration -- but integration still requires using middleware APIs like REST, which integrates the AI system with Internet-based resources; or GraphQL, which facilitates the integration of data from multiple sources.
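As a sketch of the REST-style integration described above, the following stands up a toy JSON endpoint in front of a stubbed model, using only the Python standard library. The `/predict` path, the payload shape, and the `predict()` logic are all invented for illustration; a real deployment would sit behind proper middleware and a real model.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for a real model inference call.
    return {"score": sum(features) / len(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Decode the JSON request body, run the model, return JSON.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = json.dumps(predict(body["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PredictHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"features": [1.0, 2.0, 3.0]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())
server.shutdown()
```

The same request/response contract is what a REST middleware layer would expose to the rest of the tech stack.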


What is Chaos Engineering in DevOps?

As a key enabler for high-performing DevOps teams, Chaos Engineering pushes the boundaries of system resilience. It intentionally introduces faults into environments to expose hidden failures and validate that systems can recover gracefully. Rather than waiting for outages to learn hard lessons, Chaos Engineering allows teams to simulate and study these failures in a controlled way. ... In the DevOps landscape, continuous delivery and rapid iteration are standard. However, these practices can increase the risk of introducing instability if reliability isn’t addressed with equal rigor. Chaos Engineering complements DevOps goals in several ways:

- Reveals Single Points of Failure (SPOFs): Chaos experiments help discover dependencies that may not be resilient.
- Uncovers Alerting Gaps: By simulating failures, teams can assess whether monitoring systems raise appropriate alerts.
- Tests Recovery Readiness: Teams get real-world practice in recovery and incident response.
- Improves System Observability: Monitoring behaviors during chaos experiments leads to better instrumentation.
- Builds Team Confidence: Engineering and operations teams gain a better understanding of the system and how to handle outages.

Chaos Engineering helps shift failure from a reactive event to a proactive learning opportunity — aligning directly with the DevOps principle of continuous improvement.
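The practice can be sketched in a few lines: wrap a dependency call in a proxy that injects failures at a controlled, repeatable rate, then verify that the caller's retry-and-fallback path still produces a usable answer. All names here are hypothetical; real chaos tooling injects faults at the network, host, or platform layer rather than in-process.

```python
import random

class ChaosProxy:
    """Wraps a dependency call and injects failures at a configured rate,
    so the caller's recovery path can be exercised on purpose."""
    def __init__(self, call, failure_rate, seed=42):
        self.call = call
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)  # seeded: the experiment is repeatable

    def __call__(self, *args):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("chaos: injected dependency failure")
        return self.call(*args)

def fetch_price(item):  # the real dependency (stubbed here)
    return {"item": item, "price": 9.99}

def fetch_price_resilient(proxy, item, retries=3):
    """The behavior under test: retry, then degrade to a stale cache."""
    for _ in range(retries):
        try:
            return proxy(item)
        except ConnectionError:
            continue
    return {"item": item, "price": None, "source": "stale-cache"}

proxy = ChaosProxy(fetch_price, failure_rate=0.5)
# Even with half of all raw calls failing, every request gets an answer.
results = [fetch_price_resilient(proxy, "widget") for _ in range(100)]
```

Running the experiment with a seed makes a failed hypothesis reproducible, which is what turns a chaos run into a learning opportunity rather than a random outage.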


Resilience vs. risk: Rethinking cyber strategy for the AI-driven threat landscape

Unfortunately, many companies are similarly unprepared. In a recent survey of 1,500 C-suite and senior executives in 14 countries conducted for the LevelBlue 2025 Futures Report, most respondents said their organizations were simply not ready for the new wave of AI-powered and supply-chain attacks. ... There's also a certain disconnect in the survey results, with fears about AI tempered by overconfidence in one's own abilities. Fifty-four percent of respondents claim to be highly competent at using AI to enhance cybersecurity, and 52% feel just as confident in their abilities to defend against attackers who use AI. However, there's a substantial difference between the bulk of the respondents and those few — about 7% of the total of 1,500 — that LevelBlue classified as already having achieved cyber resilience. "An organization with a cyber-resilient culture is a place where everyone, at every level, understands their role in cybersecurity and takes accountability for it — including protecting sensitive data and systems," the 2025 Futures Report explains. Most notably, none of the 100 or so organizations that LevelBlue deemed cyber resilient had experienced a breach in the 12 months preceding the survey. Ninety-four percent of the cyber-resilient elite said they were making investments in software-supply-chain security, versus 62% of the total group. 


Can Digital Trust Be Rebuilt in the Age of AI?

When digital trust erodes, everything from e-commerce to online communities suffers, as users approach the content with increasing skepticism. Many online communities on social platforms such as Reddit are becoming more and more dominated by bots that seem human at first glance, but they quickly show patterns designed to steer the conversation in specific ways. The implications, both for users and the platforms, are quite worrisome. ... People don’t want to connect with perfection. They want to connect with shared humanity. I recently visited one of my healthcare provider’s websites for info on a procedure and spent some time browsing through the blogs. What I wanted was to read about people sharing my concerns, about the doctors and positive outcomes, how others overcame their illnesses, or maybe a surgeon’s perspective on the procedure. Instead, I got lots of AI-generated information (it’s easy to recognize the “ChatGPT style”—bullet points, summaries, words that it tends to use) on medical conditions and procedures, but it left me cold. It felt like the machine was “AI-xplaining” to me what it thought I needed to read, not what I wanted. Prioritizing authentic communication helps invite our audiences into building a relationship, rather than a transaction. It expresses to them that we value them as visitors, as readers and consumers of our content.


Industrial cybersecurity leadership is evolving from stopping threats to bridging risk, resilience

The role of cybersecurity leadership in industrial control systems (ICS/OT) is evolving, but not fast enough, Richard Robinson, chief executive officer of Cynalytica, told Industrial Cyber. “We often view leadership maturity through a Western lens. That is a mistake. The threat landscape is global, but readiness is uneven,” Robinson said. “Many regions still operate under the assumption that cyber threats are an ‘IT problem.’ Meanwhile, adversarial technologies targeting control systems, from protocol-aware malware to AI-generated logic attacks, are advancing faster than many leaders are willing to acknowledge.” He added that “We are past the era of defending just IP networks. Today’s threats exploit blind spots in non-IP protocols, legacy PLCs, and analog instrumentation. Nation-states are building offensive capabilities that bypass traditional defenses entirely, and they are being tested in active conflict zones.” ... As the industrial CISO role becomes more strategically focused, balancing compliance, operational integrity, and business risk, the executives reevaluate how expectations around cybersecurity leadership are shifting across industrial organizations. Pereira mentioned that resilience is becoming a bigger focus. 


Eyes on Data: Best Practices and Excellence in Data Management Matter More Than Ever

Despite the universal dependence, most organizations still grapple with fundamental data challenges — inconsistent definitions, fragmented governance, escalating regulatory expectations, and massive growth in data’s volume, variety, and velocity. In other words — as every data professional and even every data user understands — data is more critical than ever, and yet harder than ever to manage effectively. It’s precisely in this high-stakes context that best practices in data management are not just beneficial — they’re essential. ... True to its purpose, DCAM v3 is not a one-time initiative — it’s a lifecycle framework designed to support continuous improvement and progress. That’s why the EDM Council also created the Data Excellence Program, a structured path for organizations to achieve and gain recognition at the organizational level for data excellence. Given its role in driving best practices, DCAM serves as the Program’s backbone for defining and assessing data management capabilities and measuring participants’ progress in their journey towards long-term success and achieving sustained excellence. ... In an era where data is both a competitive asset and a compliance requirement, only those organizations that manage it with rigor, purpose, and strategy will thrive. 


Modern Test Automation With AI (LLM) and Playwright MCP

The ability to interact with the web programmatically is becoming increasingly crucial. This is where GenAI steps in, by leveraging large language models (LLMs) like Claude or custom AI frameworks, GenAI introduces intelligence into test automation, enabling natural language test creation, self-healing scripts, and dynamic adaptability. The bridge that makes this synergy possible is the Model Context Protocol (MCP), a standardized interface that connects GenAI’s cognitive power with Playwright’s automation prowess. ... Playwright MCP is a server that acts as a bridge between large language models (LLMs) or other agents and Playwright-managed browsers. It enables structured command execution, allowing AI to control web interactions like navigation, form filling, or assertions. What sets MCP apart is its reliance on the browser’s accessibility tree — a semantic, hierarchical representation of UI elements—rather than screenshot-based visual interpretation. In Snapshot Mode, MCP provides real-time accessibility snapshots, detailing roles, labels, and states. This approach is lightweight and precise, unlike Vision Mode, which uses screenshots for custom UIs but is slower and less reliable. By prioritizing the accessibility tree, MCP delivers unparalleled speed, reliability, and resource efficiency.
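To see why accessibility-tree addressing is lightweight, consider a hypothetical snapshot modeled as a nested dict. The shape below is invented for illustration and is not the actual MCP wire format; the point is that locating an element is a cheap search by semantic role and accessible name, with no pixels involved.

```python
# A hypothetical accessibility snapshot, modeled as a nested dict.
# (Shape invented for illustration; not the real MCP wire format.)
snapshot = {
    "role": "WebArea", "name": "Checkout",
    "children": [
        {"role": "textbox", "name": "Email", "value": ""},
        {"role": "button", "name": "Pay now", "disabled": False},
        {"role": "group", "name": "Card details", "children": [
            {"role": "textbox", "name": "Card number", "value": ""},
        ]},
    ],
}

def find(node, role, name):
    """Depth-first search by semantic role and accessible name --
    no screenshots or visual matching involved."""
    if node.get("role") == role and node.get("name") == name:
        return node
    for child in node.get("children", []):
        hit = find(child, role, name)
        if hit is not None:
            return hit
    return None

pay = find(snapshot, "button", "Pay now")
card = find(snapshot, "textbox", "Card number")
```

Because the tree carries roles and states explicitly, an LLM can target "the Pay now button" deterministically, which is exactly what screenshot-based Vision Mode struggles to guarantee.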


How to tackle your infrastructure technical debt

The quickest win, he suggested, is the removal of “zombie servers” – those that no one dares to turn off because their purpose is unknown. Network tools can reveal what these servers are doing and who is using them, “and frequently, the answer is nothing and nobody”. The same applies to zombie virtual machines (VMs). Another relatively quick win involves replacing obsolete on-premises applications with a software-as-a-service (SaaS) equivalent. One of Harvey’s clients was using an unsupported version of Hyperion on an outdated operating system and hardware. “[They] can’t get rid of it because this is used by people who report directly to the board for the financials.” A simple solution, Harvey suggested, was to “go to Oracle Financials in the cloud...and it’s not your problem anymore”. Infrastructure and operations teams should also lead by example and upgrade their own systems. “You should be able to get the CIO to approve the budget for this because it’s in your control,” said Harvey. ... A crucial first step is to stop installing old products. This requires backing from the CIO and other executives, Harvey said, but rules should be established, such as: We will not install any new copies of Windows Server 2016…because it’s going to reach end of support in 2026.


Why Every Business Leader Needs to Think Like an Enterprise Architect

Enterprise architecture provides the structure and governance for effective digital transformation, laying the groundwork for innovation such as AI models. Vizcaino said that SAP will soon have successfully established an AI copilot mode to provide expansive process automation, helping workers make decisions more quickly and effectively. Even as companies enhance automation in this way, leaders’ core responsibilities will largely remain the same: creating business, sustaining business, and creating competitive advantage. It’s how they go about doing this in the age of AI that will look a bit different. Many different types of assets and techniques optimized by data and AI — as enabled by enterprise architecture — will become more valuable moving forward. ... While leveraging AI to automate processes is a key area of current innovation, future innovation will involve optimizing the ways in which those AI investments are orchestrated. Enterprise architecture is not a skill, but rather a discipline that should be infused across all departments of an organization, Vizcaino added. The silos of the past should be avoided in the ways that organizations restructure their operations; instead, inculcating an enterprise-architecture mindset within all business units can serve to bring stakeholders from across an organization together in service of shared technology goals.


Building Supply Chain Cybersecurity Resilience

Supply chain cybersecurity threats are diverse, and the repercussions severe. How can retail and hospitality organizations protect themselves? To help fend off social engineering attacks, make cybersecurity education a priority. Train everyone in your organization, from top to bottom, to spot suspicious activity so they can detect and deflect phishing schemes. Meanwhile, verify all software is up to date to prevent cyber attackers from exploiting network vulnerabilities. It’s also wise to regularly audit your third-party vendors’ security postures to screen for risks and find areas for improvement. In addition to your third-party vendors, turn to your fellow retail and hospitality organizations. The best defense against cyber attackers is putting up a united front and bolstering the entire supply chain. You can collaborate with other retailers and hoteliers via RH-ISAC, the global cybersecurity community, created specifically to help retail and hospitality organizations share cyber intelligence and cybersecurity best practices. Its new LinkSECURE Program offers a membership for small- to mid-size vendors and service providers to help those with limited IT or cyber resources mature their cybersecurity operations. The new program gives every participant an evaluation of their cybersecurity posture, along with a dedicated success manager to guide them through 18 critical security controls and safeguards.

Daily Tech Digest - May 24, 2025


Quote for the day:

“In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it.” -- Jane Smiley



DanaBot botnet disrupted, QakBot leader indicted

Operation Endgame relies on help from a number of private sector cybersecurity companies (Sekoia, Zscaler, CrowdStrike, Proofpoint, Fox-IT, ESET, and others), non-profits such as Shadowserver and white-hat groups like Cryptolaemus. “The takedown of DanaBot represents a significant blow not just to an eCrime operation but to a cyber capability that has appeared to align Russian government interests. The case (…) highlights why we must view certain Russian eCrime groups through a political lens — as extensions of state power rather than mere criminal enterprises,” CrowdStrike commented on the DanaBot disruption. ... “We’ve previously seen disruptions have significant impacts on the threat landscape. For example, after last year’s Operation Endgame disruption, the initial access malware associated with the disruption as well as actors who used the malware largely disappeared from the email threat landscape,” Selena Larson, Staff Threat Researcher at Proofpoint, told Help Net Security. “Cybercriminal disruptions and law enforcement actions not only impair malware functionality and use but also impose cost to threat actors by forcing them to change their tactics, cause mistrust in the criminal ecosystem, and potentially make criminals think about finding a different career.”


AI in Cybersecurity: Protecting Against Evolving Digital Threats

Beyond detecting threats, AI excels at automating repetitive security tasks. Tasks like patching vulnerabilities, filtering malicious traffic, and conducting compliance checks can be time-consuming. AI’s speed and precision in handling these tasks free up cybersecurity professionals to focus on complex problem-solving. ... The integration of AI into cybersecurity raises ethical questions that must be addressed. Privacy concerns are at the forefront, as AI systems often rely on extensive data collection. This creates potential risks for mishandling or misuse of sensitive information. Additionally, AI’s capabilities for surveillance can lead to overreach. Governments and corporations may deploy AI tools for monitoring activities under the guise of security, potentially infringing on individual rights. There is also the risk of malicious actors repurposing legitimate AI tools for nefarious purposes. Clear guidelines and robust governance are crucial to ensuring responsible AI deployment in cybersecurity. ... The growing role of AI in cybersecurity necessitates strong regulatory frameworks. Governments and organizations are working to establish policies that address AI’s ethical and operational challenges in this field. Transparency in AI decision-making processes and standardized best practices are among the key priorities.


Open MPIC project defends against BGP attacks on certificate validation

MPIC is a method to enhance the security of certificate issuance by validating domain ownership and CA checks from multiple network vantage points. It helps prevent BGP hijacking by ensuring that validation checks return consistent results from different geographical locations. The goal is to make it more difficult for threat actors to compromise certificate issuance by redirecting internet routes. ... Open MPIC operates through a parallel validation architecture that maximizes efficiency while maintaining security. When a domain validation check is initiated, the framework simultaneously queries all configured perspectives and collects their results. “If you have 10 perspectives, then it basically asks all 10 perspectives at the same time, and then it will collect the results and determine the quorum and give you a thumbs up or thumbs down,” Sharkov said. This approach introduces some unavoidable latency, but the implementation minimizes performance impact through parallelization. Sharkov noted that the latency is still just a fraction of a second. ... The open source nature of the project addresses a significant challenge for the industry. While large certificate authorities often have the resources to build their own solutions, many smaller CAs would struggle with the technical and infrastructure requirements of multi-perspective validation.
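The "ask all perspectives at once, then determine the quorum" flow Sharkov describes can be sketched as follows. The perspective objects and the 0.8 threshold are invented for illustration; real deployments follow CA/Browser Forum requirements for how many perspectives must corroborate, and each check would be an actual DNS or HTTP validation from a distinct network location.

```python
from concurrent.futures import ThreadPoolExecutor

def check_from(perspective, domain):
    """Stand-in for a real validation check run from one network vantage
    point. Here each perspective just reports a pre-seeded answer so the
    quorum logic can be demonstrated."""
    return perspective["sees_valid"]

def multi_perspective_check(perspectives, domain, quorum=0.8):
    # Query every vantage point in parallel, as the framework does, so the
    # added latency is roughly that of the slowest single check.
    with ThreadPoolExecutor(max_workers=len(perspectives)) as pool:
        votes = list(pool.map(lambda p: check_from(p, domain), perspectives))
    agreement = sum(votes) / len(votes)
    return agreement >= quorum  # thumbs up only if enough perspectives agree

# Ten vantage points; one sees a hijacked (invalid) route.
perspectives = [{"id": i, "sees_valid": True} for i in range(9)]
perspectives.append({"id": 9, "sees_valid": False})

ok = multi_perspective_check(perspectives, "example.com")                      # 9/10 agree
strict = multi_perspective_check(perspectives, "example.com", quorum=0.95)     # fails
```

A localized BGP hijack can fool the perspectives near the attacker, but it is much harder to fool a quorum of geographically dispersed ones at the same time, which is the security argument behind MPIC.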


How to Close the Gap Between Potential and Reality in Tech Implementation

First, there has to be alignment between the business and tech sides. So, I’ve seen in many institutions that there’s not complete alignment between both. And where they could be starting, they sometimes separate and they go in opposite directions. Because at the end of the day, let’s face it, we’re all looking at how it will help ourselves. Secondly, it’s just the planning, ensuring that you check all the boxes and have a strong implementation plan. One recent customer who just joined Backbase: One of the things I loved about what they brought to the kickoff call was what success looked like to them for implementation. So, they had the work stream, whether the core integration, the call center, their data strategy, or their security requirements. Then, they had the leader who was the overall owner and then they had the other owners of each work stream. Then, they defined success criteria with the KPIs associated with those success criteria. ... Many folks forget that they are, most of the time, still running on a legacy platform. So, for me, success is when they decommission that legacy platform and a hundred percent of their members or customers are on Backbase. That’s one of the very important internal KPIs.


How AIOps sharpens cybersecurity posture in the age of cyber threats

The good news is, AIOps platforms are built to scale with complexity, adapting to new environments, users, and risks as they develop. And organizations can feel reassured that their digital environments are safeguarded against vulnerabilities for the long term. For example, modern methods of attack, such as hyperjacking, can be identified and mitigated with AIOps. This form of attack in cloud security is where a threat actor gains control of the hypervisor – the software that manages virtual machines on a physical server. It allows them to then take over the virtual machines running on that hypervisor. What makes hyperjacking especially dangerous is that it operates beneath the guest operating systems, effectively evading traditional monitoring tools that rely on visibility within the virtual machines. As a result, systems lacking deep observability are the most vulnerable. This makes the advanced observability capabilities of AIOps essential for detecting and responding to such stealthy threats. Naturally, this evolving scope of digital malice also requires compliance rules to be frequently reviewed. When correctly configured, AIOps can support organizations by interpreting the latest guidelines and swiftly identifying the data deviations that would otherwise incur penalties.


Johnson & Johnson Taps AI to Advance Surgery, Drug Discovery

J&J's Medical Engagement AI redefines care delivery, identifying 75,000 U.S. patients with unmet needs across seven disease areas, including oncology. Its analytics engine processes electronic health records and clinical guidelines to highlight patients missing optimal treatments. A New York oncologist, using J&J's insights, adjusted treatment for 20 patients in 2024, improving the chances of survival. The platform engages over 5,000 providers, empowering medical science liaisons with real-time data. It helps the AI innovation team turn overwhelming data into an advantage. Transparent data practices and a focus on patient outcomes align with J&J's ethical standards, making this a model that bridges tech and care. ... J&J's AI strategy rests on five ethical pillars, including fairness, privacy, security, responsibility and transparency. It aims to deliver AI solutions that benefit all stakeholders equitably. The stakeholders and users understand the methods through which datasets are collected and how external influences, such as biases, may affect them. Bias is mitigated through annual data audits, privacy is upheld with encrypted storage and consent protocols, and on top of it is AI-driven cybersecurity monitoring. A training program, launched in 2024, equipped 10,000 employees to handle sensitive data. 


Surveillance tech outgrows face ID

Many oppose facial recognition technology because it jeopardizes privacy, civil liberties, and personal security. It enables constant surveillance and raises the specter of a dystopian future in which people feel afraid to exercise free speech. Another issue is that one’s face can’t be changed like a password can, so if face-recognition data is stolen or sold on the Dark Web, there’s little anyone can do about the resulting identity theft and other harms. ... You can be identified by your gait (how you walk). And surveillance cameras now use AI-powered video analytics to track behavior, not just faces. They can follow you based on your clothing, the bag you carry, and your movement patterns, stitching together your path across a city or a stadium without ever needing a clear shot of your face. The truth is that face recognition is just the most visible part of a much larger system of surveillance. When public concern about face recognition leads to bans or restrictions, governments, companies, and other organizations simply circumvent that concern by deploying other technologies from a large and growing menu of options. Whether we’re IT professionals, law enforcement technologists, security specialists, or privacy advocates, it’s important to incorporate the new identification technologies into our thinking and face the new reality that face recognition is just one technology among many.


How Ready Is NTN To Go To Scale?

Non-Terrestrial Networks (NTNs) represent a pivotal advancement in global communications, designed to extend connectivity far beyond the limits of ground-based infrastructure. By leveraging spaceborne and airborne assets—such as Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary (GEO) satellites, as well as High-Altitude Platform Stations (HAPS) and UAVs—NTNs enable seamless coverage in regions previously considered unreachable. Whether traversing remote deserts, deep oceans, or mountainous terrain, NTNs provide reliable, scalable connectivity where traditional terrestrial networks fall short or are economically unviable. This paradigm shift is not merely about extending signal reach; it’s about enabling entirely new categories of applications and industries to thrive in real time. ... A core feature of NTNs is their use of varied orbital altitudes, each offering distinct performance characteristics. Low Earth Orbit (LEO) satellites (altitudes of 500–2,000 km) are known for their low latency (20–50 ms) and are ideal for real-time services. Medium Earth Orbit (MEO) systems (2,000–35,000 km) strike a balance between coverage and latency and are often used in navigation and communications. Geostationary Orbit (GEO) satellites, positioned at ~35,786 km, provide wide-area coverage from a fixed position relative to Earth’s rotation—particularly useful for broadcast and constant-area monitoring. 
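The latency figures above have a physical floor set by the speed of light; a quick sketch (idealized, ignoring processing, queuing, and routing delays) shows why orbital altitude dominates:

```python
# Minimum one-way radio propagation delay to a satellite directly overhead,
# derived purely from altitude and the speed of light. Real-world latencies
# quoted for LEO/MEO/GEO sit above this floor due to processing and routing.
C = 299_792_458  # speed of light in vacuum, m/s

def min_one_way_delay_ms(altitude_km: float) -> float:
    return altitude_km * 1000 / C * 1000

for name, alt_km in [("LEO (500 km)", 500),
                     ("MEO (8,000 km)", 8000),
                     ("GEO (35,786 km)", 35786)]:
    print(f"{name}: {min_one_way_delay_ms(alt_km):.1f} ms")
```

The quoted 20–50 ms LEO figures include ground-segment and processing delays on top of this propagation minimum; for GEO, the ~119 ms one-way floor means a single ground-satellite-ground hop already costs roughly 240 ms, which is why GEO suits broadcast rather than interactive services.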


Enterprises are wasting the cloud’s potential

One major key to achieving success with cloud computing is training and educating employees. Although adopting cloud technology marks a significant change, numerous companies overlook the importance of equipping their staff with the technical expertise and strategic acumen to capitalize on its potential benefits. IT teams that lack expertise in cloud services may use cloud resources inefficiently or ineffectively. Business leaders who are unfamiliar with cloud tools often struggle to leverage data-driven insights that could drive innovation. Employees relying on cloud-based applications might not fully utilize all their functionality due to insufficient training. These skill gaps lead to dissatisfaction with cloud services, and the company doesn’t benefit from its investments in cloud infrastructure. ... The cloud is a tool for transforming operations rather than just another piece of IT equipment. Companies can refine their approach to the cloud by establishing effective governance structures and providing employees with training on the optimal utilization of cloud technology. Once they engage architects and synchronize cloud efforts with business objectives, most companies will see tangible results: cost savings, system efficiency, and increased innovation.


The battle to AI-enable the web: NLWeb and what enterprises need to know

NLWeb enables websites to easily add AI-powered conversational interfaces, effectively turning any website into an AI app where users can query content using natural language. NLWeb isn’t necessarily about competing with other protocols; rather, it builds on top of them. The new protocol uses existing structured data formats like RSS, and each NLWeb instance functions as an MCP server. “The idea behind NLWeb is it is a way for anyone who has a website or an API already to very easily make their website or their API an agentic application,” Microsoft CTO Kevin Scott said during his Build 2025 keynote. “You really can think about it a little bit like HTML for the agentic web.” ... “NLWeb leverages the best practices and standards developed over the past decade on the open web and makes them available to LLMs,” Odewahn told VentureBeat. “Companies have long spent time optimizing this kind of metadata for SEO and other marketing purposes, but now they can take advantage of this wealth of data to make their own internal AI smarter and more capable with NLWeb.” ... “NLWeb provides a great way to open this information to your internal LLMs so that you don’t have to go hunting and pecking to find it,” Odewahn said. “As a publisher, you can add your own metadata using schema.org standard and use NLWeb internally as an MCP server to make it available for internal use.”
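The schema.org metadata Odewahn refers to is typically embedded in pages as JSON-LD. As a sketch (the page content below is hypothetical, and this is not NLWeb's own code), extracting that structured data needs nothing beyond the standard library:

```python
# Sketch: pull schema.org JSON-LD blocks -- the structured metadata NLWeb
# builds on -- out of an HTML page using only the standard library.
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects <script type="application/ld+json"> payloads from a page."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self.items.append(json.loads("".join(self._buf)))
            self._buf, self._in_jsonld = [], False

# Hypothetical page carrying Article metadata of the kind SEO teams already publish.
page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article", "headline": "Example"}
</script>
</head></html>"""

extractor = JsonLdExtractor()
extractor.feed(page)
print(extractor.items[0]["headline"])
```

Because this metadata already exists for SEO, tooling in the NLWeb vein can reuse it directly rather than requiring publishers to annotate content from scratch.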

Daily Tech Digest - May 23, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


MCP, ACP, and Agent2Agent set standards for scalable AI results

“Without standardized protocols, companies will not be able to reap the maximum value from digital labor, or will be forced to build interoperability capabilities themselves, increasing technical debt,” he says. Protocols are also essential for AI security and scalability, because they will enable AI agents to validate each other, exchange data, and coordinate complex workflows, Lerhaupt adds. “The industry can build more robust and trustworthy multi-agent systems that integrate with existing infrastructure, encouraging innovation and collaboration instead of isolated, fragmented point solutions,” he says. ... ACP is “a universal protocol that transforms the fragmented landscape of today’s AI agents into inter-connected teammates,” writes Sandi Besen, ecosystem lead and AI research engineer at IBM Research, in Towards Data Science. “This unlocks new levels of interoperability, reuse, and scale.” ACP uses standard HTTP patterns for communication, making it easy to integrate into production, compared to JSON-RPC, which relies on more complex methods, Besen says. ... Agent2Agent, supported by more than 50 Google technology partners, will allow IT leaders to string a series of AI agents together, making it easier to get the specialized functionality their organizations need, Ensono’s Piazza says. Both ACP and Agent2Agent, with their focus on connecting AI agents, are complementary protocols to the model-centric MCP, their creators say.
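To make Besen's HTTP-vs-JSON-RPC contrast concrete, here is a hypothetical side-by-side sketch; the endpoint path, method names, and fields are illustrative, not taken from either protocol's specification:

```python
# Hypothetical illustration of the two integration styles the article compares:
# a plain-HTTP agent call (ACP-style) vs. a JSON-RPC 2.0 envelope carrying
# the same request. Endpoint and field names are invented for this sketch.
import json

# HTTP style: ordinary REST semantics -- the method, path, and body carry intent,
# so standard web infrastructure (proxies, gateways, logging) works unchanged.
http_style = {
    "method": "POST",
    "path": "/agents/summarizer/runs",
    "body": {"input": "Summarize Q1 earnings"},
}

# JSON-RPC style: intent is wrapped in a protocol envelope posted to a single
# endpoint, which requires RPC-aware tooling to route and observe.
jsonrpc_style = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "agent.run",
    "params": {"agent": "summarizer", "input": "Summarize Q1 earnings"},
}

print(json.dumps(jsonrpc_style, indent=2))
```

The practical argument for the HTTP pattern is exactly this reuse of existing web plumbing, which is what makes it easier to integrate into production.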


It’s Time to Get Comfortable with Uncertainty in AI Model Training

“We noticed that some uncertainty models tend to be overconfident, even when the actual error in prediction is high,” said Bilbrey Pope. “This is common for most deep neural networks. But a model trained with SNAP gives a metric that mitigates this overconfidence. Ideally, you’d want to look at both prediction uncertainty and training data uncertainty to assess your overall model performance.” ... “AI should be able to accurately detect its knowledge boundaries,” said Choudhury. “We want our AI models to come with a confidence guarantee. We want to be able to make statements such as ‘This prediction provides 85% confidence that catalyst A is better than catalyst B, based on your requirements.’” In their published study, the researchers chose to benchmark their uncertainty method with one of the most advanced foundation models for atomistic materials chemistry, called MACE. The researchers calculated how well the model is trained to calculate the energy of specific families of materials. These calculations are important to understanding how well the AI model can approximate the more time- and energy-intensive methods that run on supercomputers. The results show what kinds of simulations can be calculated with confidence that the answers are accurate. 
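One common way to obtain such a confidence signal (shown here as a generic sketch, not the SNAP-trained method the researchers describe) is ensemble disagreement: run several independently trained models and treat the spread of their predictions as the uncertainty:

```python
# Generic sketch of ensemble-based prediction uncertainty: where the ensemble
# members disagree, the prediction should not be trusted. The three "models"
# below are hypothetical stand-ins for independently trained networks.
from statistics import mean, stdev

def ensemble_predict(models, x):
    """Return (mean prediction, spread) across an ensemble of models."""
    preds = [m(x) for m in models]
    return mean(preds), stdev(preds)  # stdev serves as the uncertainty estimate

models = [lambda x: 2.0 * x + 0.1,
          lambda x: 1.9 * x - 0.05,
          lambda x: 2.1 * x]
energy, sigma = ensemble_predict(models, 1.0)
print(f"prediction {energy:.2f} +/- {sigma:.2f}")
```

A statement like "85% confidence that catalyst A is better than catalyst B" then follows from comparing the two predictions relative to their combined spread.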


Don’t let AI integration become your weakest link

Ironically, integration meant to boost efficiency can stifle innovation. Once a complex web of AI-interconnected systems exists, adding tools or modifying processes becomes a major architectural undertaking, not plug-and-play. It requires understanding interactions with central AI logic, potentially needing complex model re-training, integration point redevelopment, and extensive regression testing to avoid destabilisation. ... When AI integrates and automates decisions and workflows across systems based on learned patterns, it inherently optimises for the existing or dominant processes observed in the training data. While efficiency is the goal, there’s a tangible risk of inadvertently enforcing uniformity and suppressing valuable diversity in approaches. Different teams might have unique, effective methods deviating from the norm. An AI trained on the majority might flag these as errors, subtly discouraging creative problem-solving or context-specific adaptations. ... Feeding data from multiple sensitive systems (CRM, HR, finance, and communications) into central AI dramatically increases the scope and sensitivity of data processed and potentially exposed. Each integration point is another vector for data leakage or unauthorised access. Sensitive customer, employee, and financial data may flow across more boundaries and be aggregated in new ways, increasing the surface area for breaches or misuse.


Beyond API Uptime: Modern Metrics That Matter

A minuscule delay (measurable in API response times) in processing API requests can be as painful to a customer as a major outage. User behavior and expectations have evolved, and performance standards need to keep up. Traditional API monitoring tools are stuck in a binary paradigm of up versus down, despite the fact that modern, cloud native applications live in complex, distributed ecosystems. ... Measuring performance from multiple locations provides a more balanced and realistic view of user experience and can help uncover metrics you need to monitor, like location-specific latency: What’s fast in San Francisco might be slow in New York and terrible in London. ... The real value of IPM comes from how its core strengths, such as proactive synthetic testing, global monitoring agents, rich analytics with percentile-based metrics and experience-level objectives, interact and complement each other, Vasiliou told me. “IPM can proactively monitor single API URIs [uniform resource identifiers] or full API multistep transactions, even when users are not on your site or app. Many other monitors can also do this. It is only when you combine this with measuring performance from multiple locations, granular analytics and experience-level objectives that the value of the whole is greater than the sum of its parts,” Vasiliou said.
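The percentile-based metrics Vasiliou mentions are straightforward to compute. A minimal nearest-rank sketch over hypothetical latency samples shows why they beat averages: a handful of slow responses dominates the tail percentiles even when the mean looks healthy:

```python
# Nearest-rank percentile over API latency samples. The sample values are
# hypothetical; note how two slow requests dominate p95/p99 while p50 stays low.
import math

def percentile(samples, p):
    """Nearest-rank percentile (p in 0..100) of a list of latency samples."""
    s = sorted(samples)
    k = min(len(s) - 1, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

latencies_ms = [32, 35, 31, 40, 38, 33, 210, 36, 34, 900]
for p in (50, 95, 99):
    print(f"p{p}: {percentile(latencies_ms, p)} ms")
```

Experience-level objectives are then phrased against these tails (for example, "p95 under 200 ms from every monitored region") rather than against a single average or an up/down check.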


Agentic AI shaping strategies and plans across sectors as AI agents swarm

“Without a robust identity model, agents can’t truly act autonomously or securely,” says the post. “The MCP-I (I for Identity) specification addresses this gap – introducing a practical, interoperable approach to agentic identity.” Vouched also offers its turnkey SaaS Vouched MCP Identity Server, which provides easy-to-integrate APIs and SDKs for enterprises and developers to embed strong identity verification into agent systems. While the Agent Reputation Directory and MCP-I specification are open and free to the public, the MCP Identity Server is available as a commercial offering. “Thinking through strong identity in advance is critical to building an agentic future that works,” says Peter Horadan, CEO of Vouched. “In some ways we’ve seen this movie before. For example, when our industry designed email, they never anticipated that there would be bad email senders. As a result, we’re still dealing with spam problems 50 years later.” ... An early slide outlining definitions tells us that AI agents are ushering in a new definition of the word “tools,” which he calls “one of the big changes that’s happening this year around agentic AI, giving the ability to LLMs to actually do and act with permission on behalf of the user, interact with permission on behalf of the user, interact with third-party APIs,” and so on. Tools aside, what are the challenges for agentic AI? “The biggest one is security,” he says. 


Optimistic Security: A New Era for Cyber Defense

Optimistic cybersecurity involves effective NHI management that reduces risk, improves regulatory compliance, enhances operational efficiency and provides better control over access management. This management strategy goes beyond point solutions such as secret scanners, offering comprehensive protection throughout the entire lifecycle of these identities. ... Furthermore, a proactive attitude towards cybersecurity can lead to potential cost savings by automating processes such as secrets rotation and NHIs decommissioning. By utilizing optimistic cybersecurity strategies, businesses can transform their defensive mechanisms, preparing for a new era in cyber defense. By integrating Non-Human Identities and Secrets Management into their cloud security control strategies, organizations can fortify their digital infrastructure, significantly reducing security breaches and data leaks. ... Implementing an optimistic cybersecurity approach is no less than a transformation in perspective. It involves harnessing the power of technology and human ingenuity to build a resilient future. With optimism at its core, cybersecurity measures can become a beacon of hope rather than a looming threat. By welcoming this new era of cyber defense with open arms, organizations can build a secure digital environment where NHIs and their secrets operate seamlessly, playing a pivotal role in enhancing overall cybersecurity.


Identity Security Has an Automation Problem—And It's Bigger Than You Think

The data reveals a persistent reliance on human action for tasks that should be automated across the identity security lifecycle. 41% of end users still share or update passwords manually, using insecure methods like spreadsheets, emails, or chat tools. These credentials are rarely updated or monitored, increasing the likelihood of credential misuse or compromise. Nearly 89% of organizations rely on users to manually enable MFA in applications, despite MFA being one of the most effective security controls. Without enforcement, protection becomes optional, and attackers know how to exploit that inconsistency. 59% of IT teams handle user provisioning and deprovisioning manually, relying on ticketing systems or informal follow-ups to grant and remove access. These workflows are slow, inconsistent, and easy to overlook, leaving organizations exposed to unauthorized access and compliance failures. ... According to the Ponemon Institute, 52% of enterprises have experienced a security breach caused by manual identity work in disconnected applications. Most of them had four or more. The downstream impact was tangible: 43% reported customer loss, and 36% lost partners. These failures are predictable and preventable, but only if organizations stop relying on humans to carry out what should be automated. Identity is no longer a background system. It's one of the primary control planes in enterprise security.


Critical infrastructure under attack: Flaws becoming weapon of choice

“Attackers have leaned more heavily on vulnerability exploitation to get in quickly and quietly,” said Dray Agha, senior manager of security operations at managed detection and response vendor Huntress. “Phishing and stolen credentials play a huge role, however, and we’re seeing more and more threat actors target identity first before they probe infrastructure.” James Lei, chief operating officer at application security testing firm Sparrow, added: “We’re seeing a shift in how attackers approach critical infrastructure in that they’re not just going after the usual suspects like phishing or credential stuffing, but increasingly targeting vulnerabilities in exposed systems that were never meant to be public-facing.” ... “Traditional methods for defense are not resilient enough for today’s evolving risk landscape,” said Andy Norton, European cyber risk officer at cybersecurity vendor Armis. “Legacy point products and siloed security solutions cannot adequately defend systems against modern threats, which increasingly incorporate AI. And yet, too few organizations are successfully adapting.” Norton added: “It’s vital that organizations stop reacting to cyber incidents once they’ve occurred and instead shift to a proactive cybersecurity posture that allows them to eliminate vulnerabilities before they can be exploited.”


Fundamentals of Data Access Management

An important component of an organization’s data management strategy is controlling access to the data to prevent data corruption, data loss, or unauthorized modification of the information. The fundamentals of data access management are especially important as the first line of defense for a company’s sensitive and proprietary data. Data access management protects the privacy of the individuals to which the data pertains, while also ensuring the organization complies with data protection laws. It does so by preventing unauthorized people from accessing the data, and by ensuring those who need access can reach it securely and in a timely manner. ... Appropriate data access controls improve the efficiency of business processes by limiting the number of actions an employee can take. This helps simplify user interfaces, reduce database errors, and automate validation, accuracy, and integrity checks. By restricting the number of entities that have access to sensitive data, or permission to alter or delete the data, organizations reduce the likelihood of errors being introduced while enhancing the effectiveness of their real-time data processing activities. ... Becoming a data-driven organization requires overcoming several obstacles, such as data silos, fragmented and decentralized data, lack of visibility into security and access-control measures currently in place, and a lack of organizational memory about how existing data systems were designed and implemented.
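The principle of limiting the actions each employee can take is commonly implemented as role-based access control; a minimal sketch (the roles and actions here are illustrative, not a complete model) shows the deny-by-default pattern:

```python
# Minimal role-based access control (RBAC) sketch: each role is granted only
# the actions it needs, and anything not explicitly granted is denied.
# Role and action names are hypothetical examples.
ROLE_PERMISSIONS = {
    "analyst": {"read"},
    "editor": {"read", "update"},
    "admin": {"read", "update", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and ungranted actions return False."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "delete"))  # denied: analysts can only read
print(is_allowed("admin", "delete"))    # granted
```

Restricting destructive actions to a small set of roles is exactly what reduces the likelihood of errors or unauthorized modification described above.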


Chief Intelligence Officers? How Gen AI is rewiring the CxO’s Brain

Generative AI is making the most impact in areas like Marketing, Software Engineering, Customer Service, and Sales. These functions benefit from AI’s ability to process vast amounts of data quickly. On the other hand, Legal and HR departments see less GenAI adoption, as these areas require high levels of accuracy, predictability, and human judgment. ... Business and tech leaders must prioritize business value when choosing AI use cases, focus on AI literacy and responsible AI, nurture cross-functional collaboration, and stress continuous learning to achieve successful outcomes. ... Leaders need to clearly outline and share a vision for responsible AI, establishing straightforward principles and policies that address fairness, bias reduction, ethics, risk management, privacy, sustainability, and compliance with regulations. They should also pinpoint the risks associated with Generative AI, such as privacy concerns, security issues, hallucinations, explainability, and legal compliance challenges, along with practical ways to mitigate these risks. When choosing and prioritizing use cases, it’s essential to consider responsible AI by filtering out those that carry unacceptable risks. Each Generative AI use case should have a designated champion responsible for ensuring that development and usage align with established policies.