
Daily Tech Digest - March 12, 2026


Quote for the day:

"Leadership happens at every level of the organization and no one can shirk from this responsibility." -- Jerry Junkins




The growing cyber exposure risk you can’t afford to ignore

This TechNative article highlights a shift in the global threat landscape where fast-moving actors like Scattered Spider exploit the inherent complexity of modern digital ecosystems. Defined as the sum of all potential points of access, exploitation, or disruption, cyber exposure has become a critical vulnerability for sectors ranging from retail and insurance to aviation. Recent high-profile breaches at companies like M&S, Harrods, and Qantas underscore how legacy infrastructure and fragmented visibility allow attackers to move laterally and cause significant financial and operational damage. To combat these evolving threats, the author advocates for a strategic transition from reactive firefighting to proactive cyber exposure management. This approach involves cataloging every managed and unmanaged asset—spanning IT, OT, and cloud environments—while layering in behavioral and operational context. By utilizing AI-driven tools to anticipate emerging risks and integrating these exposure insights into existing security workflows such as SOAR or CMDB, organizations can finally eliminate the blind spots where modern attackers thrive. Ultimately, true digital resilience starts with a comprehensive understanding of an organization’s entire footprint, allowing security teams to harden defenses and anticipate threats before a breach occurs, rather than simply responding after the damage has been done.
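
Conceptually, the asset-cataloging step described above reduces to a set difference between what is discovered on the network and what the managed inventory (e.g. a CMDB export) knows about. A minimal sketch, with all asset names and sources invented for illustration:

```python
# Sketch: find "blind spots" by diffing discovered assets against the
# managed inventory. Asset names and sources are illustrative only.

def find_blind_spots(discovered: set[str], managed: set[str]) -> set[str]:
    """Assets observed in the environment but absent from the managed inventory."""
    return discovered - managed

# Discovered via scans and passive sensors across IT, OT, and cloud:
discovered = {"web-01", "plc-12", "cam-07", "db-02"}
# Recorded in the CMDB:
managed = {"web-01", "db-02"}

unmanaged = find_blind_spots(discovered, managed)  # the blind spots to triage
```

In practice the discovered set would be fed by continuous scanning and the result routed into existing SOAR or CMDB workflows rather than printed.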


India is leading example of digital infrastructure, IMF says

A recent report from the International Monetary Fund (IMF) highlights India as a global leader in Digital Public Infrastructure (DPI), advocating that systems like digital IDs and payment rails be treated as essential public goods similar to traditional physical infrastructure. Central to this transformation is the "JAM Trinity"—Jan Dhan bank accounts, Aadhaar biometric identification, and mobile connectivity—which has fundamentally reshaped the nation’s economy. With over 1.44 billion Aadhaar numbers issued, the system has drastically reduced fraud and lowered Know Your Customer (KYC) costs. Meanwhile, the Unified Payments Interface (UPI) has revolutionized financial transactions, processing over 21.7 billion payments in a single month and becoming the world’s largest fast-payment system. Beyond finance, tools like DigiLocker and the Open Network for Digital Commerce (ONDC) promote interoperability and data exchange, fostering a transparent governance model that has saved trillions in welfare leakages. The IMF emphasizes that India’s deliberate, centralized approach serves as a blueprint for the Global South, demonstrating how modular digital rails can multiply economic value and enable future innovations like personal AI agents. This "India Stack" is now expanding its international footprint through partnerships with over 24 countries, positioning India as a prominent architect of inclusive global digital growth.


How to 10x Your Vulnerability Management Program in the Agentic Era

In this article, Nadir Izrael explores the fundamental shift required to combat autonomous, AI-driven cyber threats. He argues that traditional vulnerability management, characterized by static scans and manual triaging, is no longer sufficient against "AiPTs" (AI-enabled persistent threats) that operate at machine speed. To achieve what Izrael calls "vulnerability management 10.0," organizations must transition to a model defined by continuous telemetry, a unified security data fabric, and contextual prioritization. This evolution moves beyond simple CVE scores by mapping relationships across IT, cloud, and IoT layers to identify business-critical risks. The ultimate goal is "agentic remediation," a phased approach where AI agents eventually handle deterministic fixes—such as rotating exposed credentials or closing misconfigured buckets—without human intervention. However, the author emphasizes that trust is built gradually, starting with "human-in-the-loop" oversight where agents identify issues and open tickets while humans maintain control. By decoupling discovery from remediation and leveraging AI to sanitize the network, security teams can finally match the velocity of modern attackers, allowing human experts to focus on complex architectural decisions and strategic risk management rather than routine maintenance.
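
The phased hand-off the article describes, where deterministic fixes eventually run automatically and everything else is routed to a human, can be sketched as a simple gate. The fix names and the `autonomous` flag are illustrative, not Izrael's actual taxonomy:

```python
# Sketch of a phased "agentic remediation" gate: deterministic fixes may
# run automatically once trust is established; everything else opens a
# ticket for human review. All names here are illustrative.

DETERMINISTIC_FIXES = {"rotate_exposed_credential", "close_misconfigured_bucket"}

def remediate(finding: dict, autonomous: bool) -> str:
    """Return the action taken for one finding."""
    fix = finding["fix"]
    if autonomous and fix in DETERMINISTIC_FIXES:
        return f"auto-applied:{fix}"      # agent acts without a human
    return f"ticket-opened:{fix}"         # human-in-the-loop stage
```

Early deployments would run with `autonomous=False` everywhere, so agents only identify issues and open tickets; the flag is widened gradually as trust builds.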


The Vendor’s Shadow: A Passage Across Digital Trust And The Art Of Seeing What Others Miss

In this CyberDefenseMagazine article, Krishna Rajagopal provides a compelling analysis of the profound vulnerability companies face through their extensive third-party relationships. Despite investing heavily in internal security infrastructure, organizations frequently neglect the critical "digital doors" opened to vendors, whose own inadequate defenses can lead to catastrophic data breaches. Rajagopal argues that modern cybersecurity is no longer just about personal fortifications but must encompass the integrity of the entire supply chain. He introduces four essential lessons for achieving "vendor wisdom" in an interconnected world. First, organizations must categorize partners into clear tiers—Inner, Middle, and Outer circles—to prioritize limited resources toward high-impact relationships. Second, he emphasizes moving beyond static, paperwork-based trust toward continuous, verified evidence, demanding actual proof of security controls rather than mere verbal promises. Third, the author underscores the vital importance of pre-defined exit strategies, knowing exactly when a relationship has become too risky to maintain safely. Finally, security professionals must translate complex technical vendor risks into the clear language of business impact for boards and executive decision-makers. Ultimately, the article serves as a sobering reminder that a company’s security posture is only as robust as its weakest partner.
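
The tiering lesson can be sketched as a simple classification rule. The two boolean criteria below are illustrative stand-ins, not Rajagopal's actual rubric:

```python
# Sketch: map a vendor to the Inner/Middle/Outer circles described above.
# The two criteria are hypothetical simplifications for illustration.

def vendor_tier(sensitive_data_access: bool, business_critical: bool) -> str:
    """Assign a vendor to the Inner, Middle, or Outer circle."""
    if sensitive_data_access and business_critical:
        return "Inner"    # deepest scrutiny, continuous verified evidence
    if sensitive_data_access or business_critical:
        return "Middle"   # periodic review and proof of controls
    return "Outer"        # lightweight checks
```

A real program would use a richer rubric (data classification, access scope, regulatory exposure), but the point stands: the tier drives how much verification effort the relationship gets.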


To Create Trustworthy Agentic AI, Seek Community-Driven Innovation

In the SD Times article, Carl Meadows argues that the path to reliable and secure AI agents lies in open collaboration rather than proprietary isolation. As AI transitions from experimental projects to executive mandates, the rise of agentic systems—capable of reasoning, planning, and acting autonomously—introduces significant security risks, including prompt injection and governance challenges. Meadows asserts that community-driven innovation, similar to the models used for Linux and Kubernetes, provides the diverse peer review and rapid vulnerability discovery necessary to secure these autonomous systems. A critical pillar of this trust is the data layer; agents depend on accurate context, and failures often stem from poor retrieval quality rather than model flaws. By integrating agentic workflows into transparent search and observability platforms, organizations can ensure that every context source and automated action is inspectable and accountable. This architectural visibility allows developers to detect permission drift and refine orchestration logic effectively. Ultimately, the piece emphasizes that assuming vulnerabilities will surface and favoring scrutiny over secrecy leads to more resilient systems. Trustworthy agentic AI is therefore built on a foundation of transparency, where global engineering communities collaboratively document, investigate, and mitigate risks to ensure long-term operational success.


Oracle: sovereignty is a matter of trust, not just technology

In this Techzine article, experts Michiel van Vlimmeren and Marcel Giacomini argue that while infrastructure provides the technical foundation, digital sovereignty ultimately hinges on trust. Oracle defines sovereignty as the clear ownership of and restricted access to data, ensuring that residency and control remain with the user. To facilitate this, Oracle offers a versatile spectrum of solutions ranging from high-performance bare-metal servers to the fully abstracted Oracle Cloud Infrastructure. A standout offering is Oracle Alloy, which allows regional providers to build customized sovereign cloud solutions using Oracle’s hardware and software behind the scenes. This approach is particularly relevant as the rapid deployment of artificial intelligence depends on organizations feeling secure about their data governance. The piece highlights Oracle’s billion-euro investment in Dutch infrastructure and its collaboration with government agencies like DICTU to implement agentic AI platforms. Rather than building its own Large Language Models, Oracle focuses on providing the robust, compliant data platforms necessary for businesses to modernize their processes safely. Ultimately, Oracle positions itself as a trusted advisor, emphasizing that achieving true sovereignty requires a cultural and operational shift that extends far beyond simple technical integrations.


Why zero trust breaks down in IoT and OT environments

In the CSO Online article, author Henry Sienkiewicz explores the fundamental "model mismatch" that occurs when applying enterprise security frameworks to industrial and connected device landscapes. While Zero Trust has revolutionized IT security through identity-centric verification, its core assumptions—explicit identity and continuous enforceability—frequently fail in IoT and OT environments characterized by incomplete visibility and functionally flat networks. Sienkiewicz argues that traditional security models focus too heavily on network topology and access decisions, ignoring the invisible web of inherited trust and shared control paths. In these specialized environments, high-impact failures often propagate through shared controllers, firmware update mechanisms, and management platforms that bypass standard access controls. To bridge this gap, the author introduces the Unified Linkage Model (ULM), which shifts the focus from "who is allowed to talk" to "what changes if this component fails." By mapping functional dependencies such as adjacency and inheritance, security leaders can better protect structural amplifiers like protocol gateways and management planes. Ultimately, the piece calls for a nuanced approach that supplements Zero Trust with rigorous dependency mapping to address the durable trust relationships that define modern operational resilience.
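
The ULM question "what changes if this component fails" can be sketched as reachability over a graph of functional dependencies. The topology below (a management plane feeding a protocol gateway and historian, which in turn feed PLCs) is invented for illustration:

```python
# Sketch: model "what changes if this component fails" as reachability
# over functional-dependency edges (adjacency/inheritance), per the ULM
# idea. The topology here is invented for illustration.
from collections import deque

DEPENDS_ON_ME = {
    "mgmt-plane": ["protocol-gateway", "historian"],
    "protocol-gateway": ["plc-1", "plc-2"],
    "historian": [],
    "plc-1": [],
    "plc-2": [],
}

def blast_radius(component: str) -> set[str]:
    """All components functionally affected if `component` fails."""
    seen: set[str] = set()
    queue = deque([component])
    while queue:
        for dep in DEPENDS_ON_ME.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

Here `blast_radius("mgmt-plane")` reaches every downstream device, which is exactly why the article calls management planes and protocol gateways "structural amplifiers": their failure propagates regardless of what the access-control policy says.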


‘Agents of Chaos’: New Study Shows AI Agents Can Leak Data, Be Easily Manipulated

This TechRepublic article, "Agents of Chaos," discusses a critical study revealing the profound security risks associated with the rapid enterprise adoption of autonomous AI agents. Researchers from prestigious institutions demonstrated that these agents, despite being given restricted permissions, can be easily manipulated through simple social engineering to leak sensitive information like Social Security numbers and bank details. The study highlights three core architectural deficits: the inability to distinguish legitimate users from attackers, a lack of self-awareness regarding competence boundaries, and poor tracking of communication channel visibility. Despite these vulnerabilities, a significant governance gap persists; while many organizations invest in monitoring AI behavior, over sixty percent lack the technical capability to terminate or isolate a misbehaving system. The article argues that the industry must shift from model-level guardrails to governing the data layer itself. This architectural approach emphasizes the need for a unified control plane, immutable audit trails, and functional "kill switches" to ensure compliance with strict regulations like GDPR and HIPAA. Ultimately, the piece warns that deploying AI agents without robust, data-centric governance is a legal and security liability, urging organizations to prioritize architectural guardrails so that autonomous systems remain assets rather than becoming liabilities.
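
A minimal sketch of the "kill switch plus audit trail" idea, assuming a single in-process control plane; the class and field names are invented, not taken from the study:

```python
# Sketch: gate every agent action through a control plane that honors a
# kill switch and records an audit trail. Names are illustrative only.
from datetime import datetime, timezone

class AgentControlPlane:
    def __init__(self) -> None:
        self.killed: set[str] = set()
        self.audit: list[tuple] = []   # append-only by convention

    def kill(self, agent_id: str) -> None:
        """Terminate/isolate a misbehaving agent."""
        self.killed.add(agent_id)

    def execute(self, agent_id: str, action: str) -> bool:
        """Record the attempted action and return whether it may proceed."""
        allowed = agent_id not in self.killed
        self.audit.append(
            (datetime.now(timezone.utc).isoformat(), agent_id, action, allowed)
        )
        return allowed
```

Note that the denied attempt is still logged: an audit trail that only records successful actions would miss exactly the behavior a regulator or incident responder needs to see.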


When AI coding agents can see your APIs: Closing the context gap in autonomous development

In this article on DevPro Journal, Scott Kingsley discusses the critical need for providing AI coding agents with authoritative access to internal API documentation. While modern agents are proficient at generating code based on public patterns, they often fail in enterprise environments because they lack visibility into private OpenAPI specifications, authentication flows, and internal business logic. This "context gap" leads to code that may appear clean but fails at runtime due to incorrect endpoints, mismatched enums, or improper error handling. The author argues that by granting agents authenticated access to a company's source of truth through tools like Model Context Protocol (MCP) servers, development shifts from pattern-based guesswork to governed contract alignment. This integration ensures that agents respect real-world constraints such as cursor-based pagination and specific status codes. Ultimately, the piece highlights that documentation is no longer just for human reference but has become a strategic operational dependency. For autonomous development to succeed, organizations must prioritize high-quality, machine-readable API definitions, transforming documentation into a foundational layer of developer experience that bridges the gap between experimental demos and reliable production-ready infrastructure.
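
Cursor-based pagination, one of the contract details the article says agents must respect, can be sketched as a loop that follows `next_cursor` until it is null. Here `fetch_page` is a hypothetical stand-in for a real authenticated API call defined by an internal OpenAPI spec:

```python
# Sketch: honor cursor-based pagination as a (hypothetical) internal
# OpenAPI contract would define it: follow `next_cursor` until None.

def fetch_page(cursor=None):
    """Stand-in for a real API call; returns (items, next_cursor)."""
    pages = {
        None: ([1, 2], "c1"),
        "c1": ([3, 4], "c2"),
        "c2": ([5], None),
    }
    return pages[cursor]

def fetch_all() -> list:
    """Collect every item by walking the cursor chain to the end."""
    items, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        items.extend(page)
        if cursor is None:
            return items
```

An agent that guesses `?page=2` from public patterns instead of following this contract is precisely the "clean-looking code that fails at runtime" the author warns about.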


Are DevOps teams supported by automated configurations?

In this article on Security Boulevard, Alison Mack explores the critical role of automated configurations and machine identity management in securing modern cloud-native environments. As organizations increasingly rely on automated systems, the management of Non-Human Identities (NHIs)—such as tokens, keys, and encrypted passwords—has evolved from a secondary task into a strategic imperative for DevOps teams. The author highlights that effective NHI management bridges the gap between security and R&D, ensuring identities are protected throughout their entire lifecycle. Key benefits include reduced risk of data breaches, improved regulatory compliance, and increased operational efficiency by automating mundane tasks like secrets rotation. Furthermore, the integration of Agile AI provides predictive analytics and proactive threat detection, allowing teams to anticipate vulnerabilities before they are exploited. The piece emphasizes that a holistic approach, characterized by interdepartmental collaboration and real-time monitoring, is essential to maintaining a robust security posture. Ultimately, Mack argues that embedding automation within the DevOps pipeline is not just about technical efficiency but is a necessary cultural shift to protect sensitive data against increasingly sophisticated cyber threats in a dynamic digital landscape.
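
Automated secrets rotation, one of the "mundane tasks" mentioned above, reduces at its core to an age check against policy. The 90-day threshold and field names below are illustrative assumptions, not from the article:

```python
# Sketch: decide whether a non-human identity's secret is due for
# rotation. The policy window and record shape are illustrative.
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)  # hypothetical rotation policy

def needs_rotation(secret: dict, now: datetime) -> bool:
    """True if the secret is older than the policy window allows."""
    return now - secret["created_at"] > MAX_AGE
```

A real pipeline would run this check on a schedule across the full NHI inventory and trigger the actual rotation via the secrets manager, rather than just reporting.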

Daily Tech Digest - November 18, 2025


Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous



The rise of the chief trust officer: Where does the CISO fit?

Trust is touted as a differentiator for organizations looking to strengthen customer confidence and find a competitive advantage. Trust cuts across security, privacy, compliance, ethics, customer assurance, and internal culture. For the custodians of trust, that’s a wide-ranging remit without the obvious definition of other C-suite roles. Typically, the CISO continues to own controls and protection, while the CTrO broadens the remit to reputation, ethics, and customer confidence. Where cybersecurity reports to the CTrO, it gains a way to escape IT and the competing priorities with the CIO. This partnership repositions security from ‘department of no’ to business enabler, Forrester notes. ... Patel says that strong alignment between customer trust and business strategy is critical. “If you don’t have credibility in the marketplace, with your partners and customers, your business strategy is dead on arrival,” he tells CSO. Whereas the CISO’s day-to-day responsibilities include checking on the SOC, reviewing alerts, GRC, managing other security operations, and board reporting, the chief trust officer role weaves customer trust throughout, says Patel. “It’s really bringing that trust lens into the decision-making equation and challenging colleagues and partners to think in the same manner.” ... There is also the question of how organizations operationalize trust — and can it be measured? No off-the-shelf platform exists, so CTrOs must build their own dashboards combining customer and employee metrics to track trends and identify early signs of trust erosion.


When Machines Attack Machines: The New Reality of AI Security

Attackers decomposed tasks and distributed them across thousands of instructions fed into multiple Claude instances, masquerading as legitimate security tests and circumventing guardrails. The campaign’s velocity and scale dwarfed what human operators could manage, representing a fundamental leap for automated adversarial capability. Anthropic detected the operation by correlating anomalous session patterns and observing operational persistence achievable only through AI-driven task decomposition at superhuman speeds. Though AI-generated attacks sometimes faltered—hallucinating data, forging credentials, or overstating findings—the impact proved significant enough to trigger immediate global warnings and precipitate major investments in new safeguards. Anthropic concluded that this development brings advanced offensive tradecraft within reach of far less sophisticated actors, marking a turning point in the balance between AI’s promise and peril. ... AI-based offensive operations exploit vulnerabilities across entire ecosystems instantly with the goal of exfiltrating critical intelligence and causing damage to the target. Offensive AI iterates adversarial attacks and novel exploits on a scale human red teams cannot attain. Defenses that work well against traditional techniques often fail outright under continuous, machine-driven attack cycles. 


From chatbots to colleagues: How agentic AI is redefining enterprise automation

According to Flores, agentic AI changes that equation. Each agent has a name, a mission defined by its system prompt, and a connection to company data through retrieval-augmented generation. Many of them also wield tools such as CRMs, databases, or workflow platforms. “An agent is like hiring a new employee who already knows your systems on day one,” Flores said. “It doesn’t just respond — it executes.” This new mode of collaboration also changes how employees interact with technology. Flores noted that his clients often name their agents, treating them as teammates rather than tools. “When marketing needs to check something, they’ll say, ‘Let’s ask Marco,’” he added. “That naming makes adoption easier — it feels human.” ... One of IBM’s first success stories came with password resets — an unglamorous but ubiquitous use case. Two agents now collaborate: one triages the request, while the other verifies credentials and performs the reset, all under the company’s identity-and-access-management system. Each agent has its own digital identity, ensuring audit trails and preventing impersonation. ... Agentic AI isn’t a software upgrade — it’s a redesign of how digital work gets done. Each of the leaders interviewed for this story emphasized that success depends as much on data and governance as on culture and experimentation. Before moving beyond chatbots, IT directors should ask not only “Can we do this?” but “Where should we start — and how do we do it safely?”


What to look for in an AI implementation partner

Good AI implementation partners need not be limited to big professional services firms. Smaller firms such as AI consultancies and startups can provide lots of value. Regardless, many organizations require outside expertise when deploying, monitoring, and maintaining AI tools and services. ... “Many firms understand AI tools at a surface level, but what truly matters is the ability to contextualize AI within the nuances of a specific industry,” says Hrishi Pippadipally, CIO at accounting and business advisory firm Wiss. ... An effective partner must be able to balance innovation with the guardrails of security, privacy, and industry-specific compliance, Agrawal adds. “Otherwise, IT leaders will inherit long-term liabilities,” he says. ... “The mistake many organizations make is focusing only on technical credentials or flashy demos,” Agrawal says. “What’s often overlooked and what I prioritize is whether the partner can embed AI into existing workflows without disrupting business continuity. A good partner knows how to integrate AI so that it doesn’t just work in theory, but delivers impact in the complex reality of enterprise operations.” ... “Most evaluation checklists focus on the technical side — security, compliance, data governance, etc.,” says Sara Gallagher, president of The Persimmon Group, a business management consultancy. “While that matters, too many execs are skipping over the thornier questions.”


Magnetic tape is going strong in the age of AI, and it's about to get even better

“Aramid permits the manufacture of significantly thinner and smoother media, enabling longer tape lengths in a standard LTO Ultrium cartridge form factor,” the organization noted in a statement. “This material innovation provides 10 TB more native capacity than the currently available 30 TB LTO-10 cartridge, which is manufactured using different materials.” Stephen Bacon, VP for data protection solutions product management at HPE, said the new cartridges are aimed at enterprises spanning an array of industries dealing with high data volumes, from manufacturing to financial services. “AI has turned archives into strategic assets,” Bacon commented. ... Tape storage has a number of distinct advantages, including low cost, durability, and easy portability. According to previous analysis from the LTO Program, companies using tape recorded an 86% lower total cost of ownership (TCO) compared to disk storage. TCO compared to cloud storage was also 66% lower across a 10-year period, figures showed. Notably, the use of tape for unstructured data storage also adds to the appeal, with this now vital in the training process for large language models (LLMs). ... Long-term, tape storage is only going to improve, at least if the LTO Program’s roadmap is to be believed. Across generations 11 through 14, enterprises can expect significant capacity gains, eventually peaking with a 913 TB cartridge.


The rebellion against robot drivel

LLMs are “lousy writers and (most importantly!) they are not you,” Cantrill argues. That “you” is what persuades. We don’t read Steinbeck’s The Grapes of Wrath to find a robotic approximation of what desperation and hurt seem to be; we read it because we find ourselves in the writing. No one needs to be Steinbeck to draft press releases, but if that press release sounds samey and dull, does it really matter that you did it in 10 seconds with an LLM versus an hour under your own mental steam? A few years ago, a friend in product marketing told me that an LLM generated better sales collateral than the more junior product marketing professionals he’d hired. His verdict was that he would hire fewer people and rely on LLMs for that collateral, which only got a few dozen downloads anyway, from a sales force that numbered in the thousands. Problem solved, right? Wrong. If few people are reading the collateral, it’s likely the collateral isn’t needed in the first place. Using LLMs to save money on creating worthless content doesn’t seem to be the correct conclusion. Ditto using LLMs to write press releases or other marketing content. I’ve said before that the average press release sounds like it was written by a computer (and not a particularly advanced computer), so it’s fine to say we should use LLMs to write such drivel. But isn’t it better to avoid the drivel in the first place? Good PR people think about content and its place in a wider context rather than just mindlessly putting out press releases.


AI’s Impact on Mental Health

“Talking to a therapist can be intimidating, expensive, or complicated to access, and sometimes you need someone—or something—to listen at that exact moment,” said Stephanie Lewis, a licensed clinical social worker and executive director of Epiphany Wellness addiction and mental health treatment centers. Chatbots allow people to vent, process their feelings, and get advice without worrying about being judged or misunderstood, Lewis said. “I also see that people who struggle with anxiety, social discomfort, or trust issues sometimes find it easier to open up to a chatbot than a real person.” Users are “often looking for a safe space to express emotions, receive reassurance, or find quick stress-management strategies,” added Dr. Bryan Bruno, medical director of Mid City TMS, a New York City-based medical center focused on treating depression. ... “Chatbots created for therapy are often built with input from mental health professionals and integrate evidence-based approaches, like cognitive behavioral therapy techniques,” Tse said. “They can prompt reflection and guide users toward actionable steps.” Lewis agreed that some therapeutic chatbots are designed with real therapy techniques, like Cognitive Behavioral Therapy (CBT), which can help manage stress or anxiety. “They can guide users through breathing exercises, mindfulness techniques, and journaling prompts, all great tools,” she said.


Holistic Engineering: Organic Problem Solving for Complex Evolving Systems

Late projects. Architectures that drift from their original design. Code that mysteriously evolves into something nobody planned. These persistent problems in software development often stem not from technical failures ... Holistic engineering is the practice of deliberately factoring these non-technical forces into our technical decisions, designs, and strategies. ... Holistic engineering involves considering, during technical design, not only traditional technical factors but also all the other non-technical forces that will be influencing your system anyway. By acknowledging these forces, teams can view the problem as an organic system and influence, to some extent, various parts of the system. ... Consider the actual information structure within your organization. Understanding actual workflow patterns and communication channels reveals how work truly gets accomplished. These communication patterns often differ significantly from the formal hierarchy. Next, identify which processes could block your progress. For example, some organizations require approval from twenty people, including the CTO, to decide on a release. ... Organizations that embrace holistic engineering gain predictable control over forces that typically derail technical projects. Instead of reacting to "unforeseen" delays and architectural drift, teams can anticipate and plan for organizational constraints that inevitably influence technical outcomes.


At its heart, industrial AI is about automating and optimising business processes to improve decision-making, enhance efficiency and increase profitability. It requires the collection of vast volumes of data from sources like IoT sensors, cameras, and back-office systems, and the application of machine and deep learning algorithms to surface insights. In some cases, the AI powers robots to supercharge automation, and in others, it utilises edge computing for faster, localised processing. Agentic AI helps firms go even further, by working autonomously, dynamically and intelligently to achieve the goals it is set. ... “You get the data in from IoT and you trigger that as an anomaly,” says Pederson. “You analyse the anomaly against all your historic records – other incidents that have happened with customers and how they have been fixed. You relate it to your knowledge base articles. And then you relate it to your inventory on your service vans, like which service vans and which technicians are equipped to do the job. “So it’s the whole estate of structured, unstructured and processed data. In the past, they would send a technician out, and they could get it right 84% of the time. Now they have improved their first-time fix rate to 97%.” Both this and the aforementioned field service deployment feature an “agentic dispatcher” which autonomously creates and publishes the schedules to the relevant service technicians, updates their calendar and suggests the best route to take. “In the very near future, AI agents will not only be helping to address work for people behind a desk, but guiding robots directly,” says Pederson.


What security pros should know about insurance coverage for AI chatbot wiretapping claims

There are subtle differences in the way courts are viewing privacy litigation arising from the use of AI chatbots in comparison to litigation involving analytical tools like session replay or cookies. Both claims involve allegations that a third party is intercepting communications without proper consent, often under state wiretapping laws, but the legal arguments and defenses vary because the data being collected is different. ... Whether or not an exclusion will ultimately impact coverage depends both on the specific language of the exclusion and also the allegations raised in the underlying lawsuit. For example, broadly worded exclusions with “catch-all” phrases precluding coverage for any statutory violation may be more difficult for a policyholder to overcome than an exclusion that identifies by name specific statutes. As these claims are relatively new, we have yet to see significant examples of how this plays out in the context of insurance coverage litigation. However, we saw similar coverage arguments in the context of insurance coverage litigation where the underlying suit alleged violations of the Biometric Information Privacy Act (BIPA). ... To help mitigate risks, organizations should review their user consent mechanisms for AI bot communications. Consent does not always mean signing a form, but could include prominently displaying chatbot privacy notices before any data collection, providing easy access to the business’s privacy policy detailing how chatbot interactions are stored, and using automated disclaimers at the start of each chat session.

Daily Tech Digest - September 20, 2025


Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer


Five forces shaping the next wave of quantum innovation

Quantum computers are expected to solve problems currently intractable for even the world’s fastest supercomputers. Their core strengths — efficiently finding hidden patterns in complex datasets and navigating vast optimization challenges — will enable the design of novel drugs and materials, the creation of superior financial algorithms and open new frontiers in cryptography and cybersecurity. ... The quantum ecosystem now largely agrees that simply scaling up today’s computers, which suffer from significant noise and errors that prevent fault-tolerant operation, won’t unlock the most valuable commercial applications. The industry’s focus has shifted to quantum error correction as the key to building robust and scalable fault-tolerant machines. ... Most early quantum computing companies tried a full-stack approach. Now that the industry is maturing, a rich ecosystem of middle-of-the-stack players has emerged. This evolution allows companies to focus on what they do best and buy components and capabilities as needed, such as control systems from Quantum Machines and quantum software development from firms ... recent innovations in quantum networking technology have made a scale-out approach a serious contender. 


Post-Modern Ransomware: When Exfiltration Replaces Encryption

Exfiltration-first attacks have re-written the rules, with stolen data providing criminals with a faster, more reliable payday than the complex mechanics of encryption ever could. The threat of leaking data like financial records, intellectual property, and customer and employee details delivers instant leverage. Unlike encryption, if the victim stands firm and refuses to pay up, criminal groups can always sell their digital loot on the dark web or use it to fuel more targeted attacks. ... Phishing emails, once known for being riddled with tell-tale grammar and spelling mistakes, are now polished, personalized and delivered in perfect English. AI-powered deepfake voices and videos are providing convincing impersonations of executives or trusted colleagues that have defrauded companies for millions. At the same time, attackers are deploying custom chatbots to manage ransom negotiations across multiple victims simultaneously, applying pressure with the relentless efficiency of machines. ... Yet resilience is not simply a matter of dashboards and detection thresholds – it is equally about supporting those on the frontlines. Security leaders already working punishing hours under relentless scrutiny cannot be expected to withstand endless fatigue and a culture of blame without consequence. Organizations must also embed support for their teams into their response frameworks, from clear lines of communication and decompression time to wellbeing checks. 


The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time

The uncertainty is driving concern. “There's been a lot more talk around, ‘Should we be managing sovereign cloud, should we be using on-premises more, should we be relying on our non-North American public contractors?’” said Tracy Woo, a principal analyst with research and advisory firm Forrester. Ditching a major public cloud provider over sovereignty concerns, however, is not a practical option. These providers often underpin expansive global workloads, so migrating to a new architecture would be time-consuming, costly, and complex. There also isn’t a simple direct switch that companies can make if they’re looking to avoid public cloud; sourcing alternatives must be done thoughtfully, not just in reaction to one challenge. ... “There's a nervousness around deployment of AI, and I think that nervousness comes from -- definitely in conversations with other CIOs -- not knowing the data,” said Bell. Although decoupling from the major cloud providers is impractical on many fronts, issues of sovereignty as well as cost could still push CIOs to embrace a more localized approach, Woo said. “People are realizing that we don't necessarily need all the bells and whistles of the public cloud providers, whether that's for latency or performance reasons, or whether it's for cost or whether that's for sovereignty reasons,” explained Woo.


Enterprise AI enters the age of agency, but autonomy must be governed

Agentic AI systems don’t just predict or recommend, they act. These intelligent software agents operate with autonomy toward defined business goals, planning, learning, and executing across enterprise workflows. This is not the next version of traditional automation or static bots. It’s a fundamentally different operating paradigm, one that will shape the future of digital enterprises. ... For many enterprises, the last decade of AI investment has focused on surfacing insights: detecting fraud, forecasting demand, and predicting churn. These are valuable outcomes, but they still require humans or rigid automation to respond. Agentic AI closes that gap. These agents combine machine learning, contextual awareness, planning, and decision logic to take goal-directed action. They can process ambiguity, work across systems, resolve exceptions, and adapt over time. ... Agentic AI will not simply automate tasks. It will reshape how work is designed, measured, and managed. As autonomous agents take on operational responsibility, human teams will move toward supervision, exception resolution, and strategic oversight. New KPIs will emerge, not just around cost or cycle time, but around agent quality, business impact, and compliance resilience. This shift will also demand new talent models. Enterprises must upskill teams to manage AI systems, not just processes. 


Cybersecurity in smart cities under scrutiny

The digital transformation of public services involves “an accelerated convergence between IT and OT systems, as well as the massive incorporation of connected IoT devices,” she explains, which gives rise to challenges such as an expanding attack surface or the coexistence of obsolete infrastructure with modern ones, in addition to a lack of visibility and control over devices deployed by multiple providers. ... “According to the European Cyber Security Organisation, 86% of European local governments with IoT deployments have suffered some security breach related to these devices,” she says. Accenture’s Domínguez adds that the challenge is to consider “the fragmentation of responsibilities between administrations, concessionaires, and third parties, which complicates cybersecurity governance and requires advanced coordination models.” De la Cuesta also emphasizes the siloed nature of project development, which significantly hinders the development of an active cybersecurity strategy. ... In the integration of new tools, despite Spain holding a leading position in areas such as 5G, “technology moves much faster than the government’s ability to react,” he says. “It’s not like a private company, which has a certain agility to make investments,” he explains. “Public administration is much slower. Budgets are different. Administrative procedures are extremely long. From the moment a project is first discussed until it is actually executed, many years pass.”


Your SDLC Has an Evil Twin — and AI Built It

Welcome to the shadow SDLC — the one your team built with AI when you weren't looking: It generates code, dependencies, configs, and even tests at machine speed, but without any of your governance, review processes, or security guardrails. ... It’s not just about insecure code sneaking into production, but rather about losing ownership of the very processes you’ve worked to streamline. Your “evil twin” SDLC comes with: Unknown provenance → You can’t always trace where AI-generated code or dependencies came from. Inconsistent reliability → AI may generate tests or configs that look fine but fail in production. Invisible vulnerabilities → Flaws that never hit a backlog because they bypass reviews entirely. ... AI assistants are now pulling in OSS dependencies you didn’t choose — sometimes outdated, sometimes insecure, sometimes flat-out malicious. While your team already uses hygiene tools like Dependabot or Renovate, they’re only table stakes that don’t provide governance. ... The “evil twin” of your SDLC isn’t going away. It’s already here, writing code, pulling dependencies, and shaping workflows. The question is whether you’ll treat it as an uncontrolled shadow pipeline — or bring it under the same governance and accountability as your human-led one. Because in today’s environment, you don’t just own the SDLC you designed. You also own the one AI is building — whether you control it or not.
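The dependency-governance gap described above can be made concrete. The snippet below is a minimal, hypothetical check (the allowlist, package names, and requirements format are illustrative assumptions, not any real team's policy) that flags packages appearing in a requirements file without prior approval:

```python
# Hypothetical governance check: flag dependencies that are not on an
# internally approved allowlist. The allowlist contents are made up.

APPROVED = {"requests", "flask", "pydantic"}

def parse_requirement(line: str):
    """Return the bare package name from a requirements.txt line, or None."""
    line = line.split("#", 1)[0].strip()            # drop trailing comments
    if not line:
        return None
    for sep in ("==", ">=", "<=", "~=", ">", "<"):  # version pin separators
        if sep in line:
            return line.split(sep, 1)[0].strip().lower()
    return line.lower()

def unapproved(requirements: str):
    """List packages that appear in the file but not on the allowlist."""
    found = []
    for line in requirements.splitlines():
        name = parse_requirement(line)
        if name and name not in APPROVED:
            found.append(name)
    return found

reqs = """\
requests==2.32.0
left-pad-py==0.1  # pulled in by an AI assistant
flask>=3.0
"""
print(unapproved(reqs))  # → ['left-pad-py']
```

In a real pipeline a check like this would run in CI alongside hygiene tools such as Dependabot or Renovate, with the allowlist owned by the security or platform team rather than hard-coded.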


'ShadowLeak' ChatGPT Attack Allows Hackers to Invisibly Steal Emails

Researchers at Radware realized the issue earlier this spring, when they figured out a way of stealing anything they wanted from Gmail users who integrate ChatGPT. Not only was their trick devilishly simple, but it left no trace on an end user's network — not even an iota of the suspicious Web traffic typical of data exfiltration attacks. As such, the user had no way of detecting the attack, let alone stopping it. ... To perform a ShadowLeak attack, attackers send an outwardly normal-looking email to their target. They surreptitiously embed code in the body of the message, in a format that the recipient will not notice — for example, in extremely tiny text, or white text on a white background. The code should be written in HTML, being standard for email and therefore less suspicious than other, more powerful languages would be. ... The malicious code can instruct the AI to communicate the contents of the victim's emails, or anything else the target has granted ChatGPT access to, to an attacker-controlled server. ... Organizations can try to compensate with their own security controls — for example, by vetting incoming emails with their own tools. However, Geenens points out, "You need something that is smarter than just the regular-expression engines and the state machines that we've built. Those will not work anymore, because there are an infinite number of permutations with which you can write an attack in natural language." 
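As a narrow illustration of the "vetting incoming emails" idea, the sketch below flags HTML elements styled to be invisible (tiny fonts, white-on-white text), two of the hiding techniques described above. As the quote warns, style heuristics like this catch only the crudest cases; natural-language attacks admit endless permutations:

```python
# Heuristic scan for hidden text in email HTML: near-invisible font sizes,
# white-on-white colors, or display:none. This is an illustration of the
# idea, not a product; it catches only the most obvious hiding tricks.
from html.parser import HTMLParser
import re

SUSPICIOUS = [
    re.compile(r"font-size\s*:\s*(0|0?\.\d+|1)(px|pt)", re.I),  # tiny text
    re.compile(r"color\s*:\s*(#fff(fff)?\b|white)", re.I),      # white text
    re.compile(r"display\s*:\s*none", re.I),                    # removed text
]

class HiddenTextFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.flags = []                 # (tag, style) pairs that look hidden

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        for pat in SUSPICIOUS:
            if pat.search(style):
                self.flags.append((tag, style))
                break                   # one flag per element is enough

email_html = (
    '<p>Quarterly report attached.</p>'
    '<span style="font-size:1px;color:#ffffff">ignore prior instructions…</span>'
)
finder = HiddenTextFinder()
finder.feed(email_html)
print(finder.flags)  # the hidden span is flagged; the visible <p> is not
```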


UK: World’s first quantum computer built using standard silicon chips launched

This is reportedly the first quantum computer to be built using the standard complementary metal-oxide-semiconductor (CMOS) chip fabrication process which is the same transistor technology used in conventional computers. A key part of this approach is building cryoelectronics that connect qubits with control circuits that work at very low temperatures, making it possible to scale up quantum processors greatly. “This is quantum computing’s silicon moment,” James Palles‑Dimmock, Quantum Motion’s CEO, stated. ... In contrast to other quantum computing approaches, the startup used high-volume industrial 300 millimeter chipmaking processes from commercial foundries to produce qubits. The architecture, control stack, and manufacturing approach are all built to scale to host millions of qubits and pave the way for fault-tolerant, utility-scale, and commercially viable quantum computing. “With the delivery of this system, Quantum Motion is on track to bring commercially useful quantum computers to market this decade,” Hugo Saleh, Quantum Motion’s CEO and president, revealed. ... The system’s underlying QPU is built on a tile-based architecture, integrating all compute, readout, and control components into a dense, repeatable array. This design enables future expansion to millions of qubits per chip, with no changes to the system’s physical footprint.


Key strategies to reduce IT complexity

The cloud has multiplied the fragmentation of solutions within companies, expanding the number of environments, vendors, APIs, and integration approaches. This has raised the required skill set, necessitated more complex governance, and prompted the emergence of cross-functional roles between IT and business. Cybersecurity also introduces further levels of complexity, adding new platforms, monitoring tools, regulatory requirements, and risk management approaches that must be overseen by expert personnel. And then there’s shadow IT. With the ease of access to cloud technologies, it’s not uncommon for business units to independently activate services without involving IT, generating further risks. ... “Structured upskilling and reskilling programs are needed to prepare people to manage new technologies,” says Massara. “So is an organizational model capable of managing a growing number of projects, which can no longer be handled in a one-off manner. The approach to project management is changing because the project portfolio has expanded significantly, and a structured PMO is required, with project managers who often no longer reside solely in IT, but directly within the business.” ... While it’s true that an IT landscape of disparate systems leads to greater complexity, companies are still very cost-conscious and wary about heavily investing in unification right away. But as systems become obsolete and are replaced, the landscape gradually becomes more harmonized.


Unshackling IT: Why Third-Party Support Is a Strategic Imperative, Especially for AI

One of the most compelling arguments for independent third-party support is its inherent vendor neutrality. When a company relies solely on a software vendor for support, that vendor naturally has a vested interest in promoting its latest upgrades, cloud migrations, and proprietary solutions. This can create a conflict of interest, potentially pushing customers towards expensive, unnecessary upgrades or discouraging them from exploring alternatives that might be a better fit for their unique needs. ... The recent acquisition of VMware by Broadcom provides a compelling and timely illustration of why third-party support is becoming increasingly critical. Following the merger, many VMware customers have expressed significant dissatisfaction with changes to licensing models, product roadmaps, and, crucially, support. Broadcom has been criticized for restructuring VMware’s offerings and reportedly reducing support for smaller customers, pushing them towards bundled, more expensive solutions. ... The shift towards third-party support isn’t just about cost savings; it’s about regaining control, accessing unbiased expertise, and ensuring business continuity in a rapidly changing technological landscape. For companies making critical decisions about AI integration and managing complex enterprise systems, providers like Spinnaker Support offer a strategic advantage.

Daily Tech Digest - August 26, 2025


Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” -- Eloise Ristad


6 tips for consolidating your vendor portfolio without killing operations

Behind every sprawling vendor relationship is a series of small extensions that compound over time, creating complex entanglements. To improve flexibility when reviewing partners, Dovico is wary of vendor entanglements that complicate the ability to retire suppliers. Her aim is to clearly define the service required and the vendor’s capabilities. “You’ve got to be conscious of not muddying how you feel about the performance of one vendor, or your relationship with them. You need to have some competitive tension and align core competencies with your problem space,” she says. Klein prefers to adopt a cross-functional approach with finance and engineering input to identify redundancies and sprawl. Engineers with industry knowledge cross-reference vendor services, while IT checks against industry benchmarks, such as Gartner’s Magic Quadrant, to identify vendors providing similar services or tools. ... Vendor sprawl also lurks in the blind spot of cloud-based services that can be adopted without IT oversight, fueling shadow purchasing habits. “With the proliferation of SaaS and cloud models, departments can now make a few phone calls or sign up online to get applications installed or services procured,” says Klein. This shadow IT ecosystem increases security risks and vendor entanglement, undermining consolidation efforts. This needs to be tackled through changes to IT governance.


Should I stay or should I go? Rethinking IT support contracts before auto-renewal bites

Contract inertia, which is the tendency to stick with what you know, even when it may no longer be the best option, is a common phenomenon in business technology. There are several reasons for it, such as familiarity with an existing provider, fear of disruption, the administrative effort involved in reviewing and comparing alternatives, and sometimes just a simple lack of awareness that the renewal date is approaching. The problem is that inertia can quietly erode value. As organisations grow, shift priorities or adopt new technologies, the IT support they once chose may no longer be fit for purpose. ... A proactive approach begins with accountability. IT leaders need to know what their current provider delivers and how they are being used by the company. Are remote software tools performing as expected? Are updates, patches and monitoring processes being applied consistently across all platforms? Are issues being resolved efficiently by our internal IT team, or are inefficiencies building up? Is this the correct set-up and structure for our business, or could we be making better use of existing internal capacity, by leveraging better remote management tools? Gathering this information allows organisations to have an honest conversation with their provider (and themselves) about whether the contract still aligns with their objectives.


AI Data Security: Core Concepts, Risks, and Proven Practices

Although AI makes and fortifies a lot of our modern defenses, once you bring AI into the mix, the risks evolve too. Data security (and cybersecurity in general) has always worked like that. The security team gets a new tool, and eventually, the bad guys get one too. It’s a constant game of catch-up, and AI doesn’t change that dynamic. ... One of the simplest ways to strengthen AI data security is to control who can access what, early and tightly. That means setting clear roles, strong authentication, and removing access that people don’t need. No shared passwords. No default admin accounts. No “just for testing” tokens sitting around with full privileges. ... What your model learns is only as good (and safe) as the data you feed it. If the training pipeline isn’t secure, everything downstream is at risk. That includes the model’s behavior, accuracy, and resilience against manipulation. Always vet your data sources. Don’t rely on third-party datasets without checking them for quality, bias, or signs of tampering. ... A core principle of data protection, baked into laws like GDPR, is data minimization: only collect what you need, and only keep it for as long as you actually need it. In real terms, that means cutting down on excess data that serves no clear purpose. Put real policies in place. Schedule regular reviews. Archive or delete datasets that are no longer relevant. 
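The data-minimization guidance above can be reduced to a small, schedulable job. This sketch flags datasets held past their retention window; the purpose names and retention periods are invented for illustration, not drawn from any regulation:

```python
# Toy retention sweep: each dataset records a purpose and collection date;
# anything held past its purpose's retention window is flagged for review
# or deletion. Purposes and windows here are illustrative assumptions.
from datetime import date, timedelta

RETENTION_DAYS = {
    "training": 365,
    "chat_logs": 90,
    "debug_dumps": 14,
}

def expired(datasets, today):
    """Return names of datasets held longer than their purpose allows."""
    out = []
    for name, purpose, collected in datasets:
        limit = RETENTION_DAYS.get(purpose, 0)  # unknown purpose: retain nothing
        if today - collected > timedelta(days=limit):
            out.append(name)
    return out

datasets = [
    ("q1_convos",  "chat_logs",  date(2025, 1, 10)),  # ~250 days old
    ("core_train", "training",   date(2025, 6, 1)),   # well within window
    ("tmp_dump",   "no_purpose", date(2025, 8, 1)),   # no declared purpose
]
print(expired(datasets, today=date(2025, 9, 20)))  # → ['q1_convos', 'tmp_dump']
```

The "regular reviews" the paragraph calls for are exactly this kind of sweep run on a schedule, with its output feeding an archive-or-delete workflow.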


Morgan Stanley Open Sources CALM: The Architecture as Code Solution Transforming Enterprise DevOps

CALM enables software architects to define, validate, and visualize system architectures in a standardized, machine-readable format, bridging the gap between architectural intent and implementation. Built on a JSON Meta Schema, CALM transforms architectural designs into executable specifications that both humans and machines can understand. ... The framework structures architecture into three primary components: nodes, relationships, and metadata. This modular approach allows architects to model everything from high-level system overviews to detailed microservices architectures. ... CALM’s true power emerges in its seamless integration with modern DevOps workflows. The framework treats architectural definitions like any other code asset, version-controlled, testable, and automatable. Teams can validate architectural compliance in their CI/CD pipelines, catching design issues before they reach production. The CALM CLI provides immediate feedback on architectural decisions, enabling real-time validation during development. This shifts compliance left in the development lifecycle, transforming potential deployment roadblocks into preventable design issues. Key benefits for DevOps teams include machine-readable architecture definitions that eliminate manual interpretation errors, version control for architectural changes that provides clear change history, and real-time feedback on compliance violations that prevent downstream issues.
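To give a feel for the nodes/relationships/metadata structure, here is a toy model in CALM's spirit with one consistency rule of the kind a CI step might enforce. This is illustrative only; it does not follow the actual CALM JSON Meta Schema, and the node names are invented:

```python
# Illustrative only: a toy architecture model with CALM's three components
# (nodes, relationships, metadata) and a single validation rule. The real
# CALM schema and CLI from FINOS are far richer than this sketch.

architecture = {
    "nodes": [
        {"unique-id": "web", "node-type": "service"},
        {"unique-id": "api", "node-type": "service"},
        {"unique-id": "db",  "node-type": "database"},
    ],
    "relationships": [
        {"source": "web", "target": "api"},
        {"source": "api", "target": "db"},
        {"source": "api", "target": "cache"},   # never declared above
    ],
    "metadata": {"owner": "payments-team"},
}

def validate(arch):
    """Return relationship endpoints that reference undeclared nodes."""
    known = {n["unique-id"] for n in arch["nodes"]}
    errors = []
    for rel in arch["relationships"]:
        for end in (rel["source"], rel["target"]):
            if end not in known:
                errors.append(end)
    return errors

print(validate(architecture))  # → ['cache']; an empty list means consistent
```

Run in a pipeline, a failing check like this turns architecture drift into a blocked merge rather than a production surprise, which is the "shift left" the framework is after.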


Shadow AI is surging — getting AI adoption right is your best defense

Despite the clarity of this progression, many organizations struggle to begin. One of the most common reasons is poor platform selection. Either no tool is made available, or the wrong class of tool is introduced. Sometimes what is offered is too narrow, designed for one function or team. Sometimes it is too technical, requiring configuration or training that most users aren’t prepared for. In other cases, the tool is so heavily restricted that users cannot complete meaningful work. Any of these mistakes can derail adoption. A tool that is not trusted or useful will not be used. And without usage, there is no feedback, value, or justification for scale. ... The best entry point is a general-purpose AI assistant designed for enterprise use. It must be simple to access, require no setup, and provide immediate value across a range of roles. It must also meet enterprise requirements for data security, identity management, policy enforcement, and model transparency. This is not a niche solution. It is a foundation layer. It should allow employees to experiment, complete tasks, and build fluency in a way that is observable, governable, and safe. Several platforms meet these needs. ChatGPT Enterprise provides a secure, hosted version of GPT-5 with zero data retention, administrative oversight, and SSO integration. It is simple to deploy and easy to use.


AI and the impact on our skills – the Precautionary Principle must apply

There is much public comment about AI replacing jobs or specific tasks within roles, and this is often cited as a source of productivity improvement. Often we hear about how junior legal professionals can be easily replaced since much of their work is related to the production of standard contracts and other documents, and these tasks can be performed by LLMs. We hear much of the same narrative from the accounting and consulting worlds. ... The greatest learning experiences come from making mistakes. Problem-solving skills come from experience. Intuition is a skill that is developed from repeatedly working in real-world environments. AI systems do make mistakes and these can be caught and corrected by a human, but it is not the same as the human making the mistake. Correcting the mistakes made by AI systems is in itself a skill, but a different one. ... In a rapidly evolving world in which AI has the potential to play a major role, it is appropriate that we apply the Precautionary Principle in determining how to automate with AI. The scientific evidence of the impact of AI-enabled automation is still incomplete, but more is being learned every day. However, skill loss is a serious, and possibly irreversible, risk. The integrity of education systems, the reputations of organisations and individuals, and our own ability to trust in complex decision-making processes, are at stake.


Ransomware-Resilient Storage: The New Frontline Defense in a High-Stakes Cyber Battle

The cornerstone of ransomware resilience is immutability: data written to storage cannot be altered or deleted ever. This write-once-read-many capability means backup snapshots or data blobs are locked for prescribed retention periods, impervious to tampering even by attackers or system administrators with elevated privileges. Hardware and software enforce this immutability by preventing any writes or deletes on designated volumes, snapshots, or objects once committed, creating a "logical air gap" of protection without the need for physical media isolation. ... Moving deeper, efforts are underway to harden storage hardware directly. Technologies such as FlashGuard, explored experimentally by IBM and Intel collaborations, embed rollback capabilities within SSD controllers. By preserving prior versions of data pages on-device, FlashGuard can quickly revert files corrupted or encrypted by ransomware without network or host dependency. ... Though not widespread in production, these capabilities signal a future where storage devices autonomously resist ransomware impact, a powerful complement to immutable snapshotting. While these cutting-edge hardware-level protections offer rapid recovery and autonomous resilience, organizations also consider complementary isolation strategies like air-gapping to create robust multi-layered defense boundaries against ransomware threats.
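The write-once-read-many behavior described above can be modeled in a few lines. This is a toy illustration of the policy only; genuine immutability is enforced by the storage hardware and software, below any application code an attacker could modify:

```python
# Toy model of write-once-read-many: no overwrites ever, and no deletes
# until the retention clock (a plain integer here) has run out. Real
# immutability lives in the storage layer, not in application code.

class WormStore:
    def __init__(self):
        self._objects = {}               # key -> (data, retain_until)

    def put(self, key, data, retain_until):
        if key in self._objects:
            raise PermissionError("object is immutable; overwrite refused")
        self._objects[key] = (data, retain_until)

    def get(self, key):
        return self._objects[key][0]     # reads are always allowed

    def delete(self, key, now):
        _, retain_until = self._objects[key]
        if now < retain_until:
            raise PermissionError("retention period has not expired")
        del self._objects[key]

store = WormStore()
store.put("backup-001", b"snapshot bytes", retain_until=30)
try:
    store.put("backup-001", b"tampered", retain_until=0)  # simulated attacker
except PermissionError as exc:
    print("blocked:", exc)
```

Note that the refusal does not depend on who calls `put` or `delete`; that is the point of the "logical air gap" the paragraph describes, since even elevated privileges cannot bypass the retention lock.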


How an Internal AI Governance Council Drives Responsible Innovation

The efficacy of AI governance hinges on the council’s composition and operational approach. An optimal governance council typically includes cross-functional representation from executive leadership, IT, compliance and legal teams, human resources, product management, and frontline employees. This diversified representation ensures comprehensive coverage of ethical considerations, compliance requirements, and operational realities. Initial steps in operationalizing a council involve creating strong AI usage policies, establishing approved tools, and developing clear monitoring and validation protocols. ... While initial governance frameworks often focus on strict risk management and regulatory compliance, the long-term goal shifts toward empowerment and innovation. Mature governance practices balance caution with enablement, providing organizations with a dynamic, iterative approach to AI implementation. This involves reassessing and adapting governance strategies, aligning them with evolving technologies, organizational objectives, and regulatory expectations. AI’s non-deterministic, probabilistic nature, particularly generative models, necessitates a continuous human oversight component. Effective governance strategies embed this human-in-the-loop approach, ensuring AI enhances decision-making without fully automating critical processes.


The energy sector has no time to wait for the next cyberattack

Recent findings have raised concerns about solar infrastructure. Some Chinese-made solar inverters were found to have built-in communication equipment that isn’t fully explained. In theory, these devices could be triggered remotely to shut down inverters, potentially causing widespread power disruptions. The discovery has raised fears that covert malware may have been installed in critical energy infrastructure across the U.S. and Europe, which could enable remote attacks during conflicts. ... Many OT systems were built decades ago and weren’t designed with cyber threats in mind. They often lack updates, patches, and support, and older software and hardware don’t always work with new security solutions. Upgrading them without disrupting operations is a complex task. OT systems used to be kept separate from the Internet to prevent remote attacks. Now, the push for real-time data, remote monitoring, and automation has connected these systems to IT networks. That makes operations more efficient, but it also gives cybercriminals new ways to exploit weaknesses that were once isolated. Energy companies are cautious about overhauling old systems because it’s expensive and can interrupt service. But keeping legacy systems in play creates security gaps, especially when connected to networks or IoT devices. Protecting these systems while moving to newer, more secure tech takes planning, investment, and IT-OT collaboration.


Agentic AI Browser an Easy Mark for Online Scammers

In a blog post Wednesday, researchers from Guardio wrote that Comet - one of the first AI browsers to reach consumers - clicked through fake storefronts, submitted sensitive data to phishing sites and failed to recognize malicious prompts designed to hijack its behavior. The Tel Aviv-based security firm calls the problem "scamlexity," a messy intersection of human-like automation and old-fashioned social engineering that creates "a new, invisible scam surface" scaling to millions of potential victims at once. In a clash between the sophistication of generative models built into browsers and the simplicity of phishing tricks that have trapped users for decades, "even the oldest tricks in the scammer's playbook become more dangerous in the hands of AI browsing." One of the headline features of AI browsers is one-click shopping. Researchers spun up a fake "Walmart" storefront complete with polished design, realistic listings and a seamless checkout flow. ... Rather than fooling a user into downloading malicious code to putatively fix a computer problem - as in ClickFix - a PromptFix attack hides a malicious instruction inside what looks like a CAPTCHA. The AI treated the bogus challenge as routine, obeyed the hidden command and continued execution. AI agents are expected to ingest unstructured logs, alerts or even attacker-generated content during incident response.

Daily Tech Digest - April 03, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Veterans are an obvious fit for cybersecurity, but tailored support ensures they succeed

Both civilian and military leaders have long seen veterans as strong candidates for cybersecurity roles. The National Initiative for Cybersecurity Careers and Studies, part of the US Cybersecurity and Infrastructure Security Agency (CISA), speaks directly to veterans, saying “Your skills and training from the military translate well to a cyber career.” NICCS continues, “Veterans’ backgrounds in managing high-pressure situations, attention to detail, and understanding of secure communications make them particularly well-suited for this career path.” Gretchen Bliss, director of cybersecurity programs at the University of Colorado at Colorado Springs (UCCS), speaks specifically to security execs on the matter: “If I were talking to a CISO, I’d say get your hands on a veteran. They understand the practical application piece, the operational piece, they have hands-on experience. They think things through, they know how to do diagnostics. They already know how to tackle problems.” ... And for veterans who haven’t yet mastered all that, Andrus advises “networking with people who actually do the job you want.” He also advises veterans to learn about the environment at the organization they seek to join, asking themselves whether they’d fit in. And he recommends connecting with others to ease the transition.


The 6 disciplines of strategic thinking

A strategic thinker is not just a good worker who approaches a challenge with the singular aim of resolving the problem in front of them. Rather, a strategic thinker looks at and elevates their entire ecosystem to achieve a robust solution. ... The first discipline is pattern recognition. A foundation of strategic thinking is the ability to evaluate a system, understand how all its pieces move, and derive the patterns they typically form. ... Watkins’s next discipline, and an extension of pattern recognition, is systems analysis. It is easy to get overwhelmed when breaking down the functional elements of a system. A strategic thinker avoids this by creating simplified models of complex patterns and realities. ... Mental agility is Watkins’s third discipline. Because the systems and patterns of any work environment are so dynamic, leaders must be able to change their perspective quickly to match the role they are examining. Systems evolve, people grow, and the larger picture can change suddenly. ... Structured problem-solving is a discipline you and your team can use to address any issue or challenge. The idea of problem-solving is self-explanatory; the essential element is the structure. Developing and defining a structure will ensure that the correct problem is addressed in the most robust way possible.


Why Vendor Relationships Are More Important Than Ever for CIOs

Trust is the necessary foundation, which is built through open communication, solid performance, relevant experience, and proper security credentials and practices. “People buy from people they trust, no matter how digital everything becomes,” says Thompson. “That human connection remains crucial, especially in tech where you're often making huge investments in mission-critical systems.” ... An executive-level technology governance framework helps ensure effective vendor oversight. According to Malhotra, it should consist of five key components, including business relationship management, enterprise technology investment, transformation governance, value capture and having the right culture and change management in place. Beneath the technology governance framework is active vendor governance, which institutionalizes oversight across ten critical areas including performance management, financial management, relationship management, risk management, and issues and escalations. Other considerations include work order management, resource management, contract and compliance, having a balanced scorecard across vendors and principled spend and innovation.


Shadow Testing Superpowers: Four Ways To Bulletproof APIs

API contract testing is perhaps the most immediately valuable application of shadow testing. Traditional contract testing relies on mock services and schema validation, which can miss subtle compatibility issues. Shadow testing takes contract validation to the next level by comparing actual API responses between versions. ... Performance testing is another area where shadow testing shines. Traditional performance testing usually happens late in the development cycle in dedicated environments with synthetic loads that often don’t reflect real-world usage patterns. ... Log analysis is often overlooked in traditional testing approaches, yet logs contain rich information about application behavior. Shadow testing enables sophisticated log comparisons that can surface subtle issues before they manifest as user-facing problems. ... Perhaps the most innovative application of shadow testing is in the security domain. Traditional security testing often happens too late in the development process, after code has already been deployed. Shadow testing enables a true shift left for security by enabling dynamic analysis against real traffic patterns. ... What makes these shadow testing approaches particularly valuable is their inherently low-maintenance nature. 
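The contract-validation idea above can be sketched as a simple response diff: the same request is served by the current ("primary") version and the candidate ("shadow") version, and the two JSON responses are compared field by field. This is an illustrative sketch, not any specific shadow-testing tool's API; the function names and sample payloads are made up for the example.

```python
def diff_responses(primary, shadow, path=""):
    """Recursively compare two JSON-like structures and collect differences."""
    diffs = []
    if isinstance(primary, dict) and isinstance(shadow, dict):
        for key in sorted(set(primary) | set(shadow)):
            child = f"{path}.{key}" if path else key
            if key not in shadow:
                diffs.append(f"{child}: missing in shadow response")
            elif key not in primary:
                diffs.append(f"{child}: new field in shadow response")
            else:
                diffs.extend(diff_responses(primary[key], shadow[key], child))
    elif type(primary) is not type(shadow):
        # Type changes are exactly the subtle breakages schema mocks can miss.
        diffs.append(f"{path}: type changed "
                     f"{type(primary).__name__} -> {type(shadow).__name__}")
    elif primary != shadow:
        diffs.append(f"{path}: value changed {primary!r} -> {shadow!r}")
    return diffs

# Hypothetical example: the shadow version renamed a field and changed a type.
v1 = {"id": 42, "status": "active", "balance": "10.50"}
v2 = {"id": 42, "state": "active", "balance": 10.5}
for d in diff_responses(v1, v2):
    print(d)
```

In a real deployment the two payloads would come from mirrored production traffic rather than literals, and non-deterministic fields (timestamps, request IDs) would need to be excluded from the comparison.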


Rethinking technology and IT's role in the era of agentic AI and digital labor

Rethinking technology and the role of IT will drive a shift from the traditional model to a business technology-focused model. One example will be the shift from one large, dedicated IT team that traditionally handles an organization's technology needs, overseen and directed by the CIO, to more focused IT teams that will perform strategic, high-value activities and help drive technology innovation strategy as Gen AI handles many routine IT tasks. Another shift will be spending and budget allocations. Traditionally, CIOs manage the enterprise IT budget and allocation. In the new model, spending on enterprise-wide IT investments continues to be assessed and guided by the CIO, and some enterprise technology investments are now governed and funded by the business units. ... Today, agentic AI is not just answering questions -- it's creating. Agents take action autonomously. And it's changing everything about how technology-led enterprises must design, deploy, and manage new technologies moving forward. We are building self-driving autonomous businesses using agentic AI where humans and machines work together to deliver customer success. However, giving agency to software or machines to act will require a new currency. Trust is the new currency of AI.


From Chaos to Control: Reducing Disruption Time During Cyber Incidents and Breaches

Cyber disruptions are no longer isolated incidents; they have ripple effects that extend across industries and geographic regions. In 2024, two high-profile events underscored the vulnerabilities in interconnected systems. The CrowdStrike IT outage resulted in widespread airline cancellations, impacting financial markets and customer trust, while the Change Healthcare ransomware attack disrupted claims processing nationwide, costing billions in financial damages. These cases emphasize why resilience professionals must proactively integrate automation and intelligence into their incident response strategies. ... Organizations need structured governance models that define clear responsibilities before, during, and after an incident. AI-driven automation enables proactive incident detection and streamlined responses. Automated alerts, digital action boards, and predefined workflows allow teams to act swiftly and decisively, reducing downtime and minimizing operational losses. Data is the foundation of effective risk and resilience management. When organizations ensure their data is reliable and comprehensive, they gain an integrated view that enhances visibility across business continuity, IT, and security teams. 


What does an AI consultant actually do?

AI consulting involves advising on, designing and implementing artificial intelligence solutions. The spectrum is broad, ranging from process automation using machine learning models to setting up chatbots and performing complex analyses using deep learning methods. However, the definition of AI consulting goes beyond the purely technical perspective. It is an interdisciplinary approach that aligns technological innovation with business requirements. AI consultants are able to design technological solutions that are not only efficient but also make strategic sense. ... All in all, both technical and strategic thinking is required: Unlike some other technology professions, AI consulting not only requires in-depth knowledge of algorithms and data processing, but also strategic and communication skills. AI consultants talk to software development and IT departments as well as to management, product management or employees from the relevant field. They have to explain technical interrelations clearly and comprehensibly so that the company can make decisions based on this knowledge. Since AI technologies are developing rapidly, continuous training is important, whether through online courses, boot camps and certificates, or workshops and conferences.


Building a cybersecurity strategy that survives disruption

The best strategies treat resilience as a core part of business operations, not just a security add-on. “The key to managing resilience is to approach it like an onion,” says James Morris, Chief Executive of The CSBR. “The best strategy is to be effective at managing the perimeter. This approach will allow you to get a level of control on internal and external forces which are key to long-term resilience.” That layered thinking should be matched by clearly defined policies and procedures. “Ensure that your ‘resilience’ strategy and policies are documented in detail,” Morris advises. “This is critical for response planning, but also for any legal issues that may arise. If it’s not documented, it doesn’t happen.” ... Move beyond traditional monitoring by implementing advanced, behaviour-based anomaly detection and AI-driven solutions to identify novel threats. Invest in automation to enhance the efficiency of detection, triage, and initial response tasks, while orchestration platforms enable coordinated workflows across security and IT tools, significantly boosting response agility. ... A good strategy starts with the idea that stuff will break. So you need things like segmentation, backups, and backup plans for your backup plans, along with alternate ways to get back up and running. Fast, reliable recovery is key. Just having backups isn’t enough anymore.


3 key features in Kong AI Gateway 3.10

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional; it's essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases. Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade solution to scrub sensitive information from prompts before they reach the model and, if needed, reinsert the sanitized data in the response before it returns to the end user. ... As organizations adopt multiple LLM providers and model types, complexity can grow quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral. Each comes with its own SDKs, APIs, and limitations. Kong AI Gateway 3.10 solves this with universal API support and native SDK integration. Developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need for rewriting app logic when switching models and simplifies centralized governance. This latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing.
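The cost-based routing idea can be illustrated with a small sketch: estimate each provider's cost for a request from its token counts and per-token pricing, then route to the cheapest. This is only a conceptual illustration of the technique the article describes, not Kong's actual routing logic or configuration; the provider names and prices are made-up placeholders.

```python
# Hypothetical per-1k-token prices in dollars (placeholders, not real pricing).
PROVIDERS = {
    "provider-a": {"input_per_1k": 0.0030, "output_per_1k": 0.0150},
    "provider-b": {"input_per_1k": 0.0008, "output_per_1k": 0.0040},
    "provider-c": {"input_per_1k": 0.0005, "output_per_1k": 0.0015},
}

def estimate_cost(pricing, prompt_tokens, expected_output_tokens):
    """Estimated request cost in dollars from token counts and per-1k pricing."""
    return (prompt_tokens / 1000 * pricing["input_per_1k"]
            + expected_output_tokens / 1000 * pricing["output_per_1k"])

def route_request(prompt_tokens, expected_output_tokens, providers=PROVIDERS):
    """Pick the provider with the lowest estimated cost for this request."""
    return min(providers,
               key=lambda name: estimate_cost(providers[name],
                                              prompt_tokens,
                                              expected_output_tokens))

print(route_request(prompt_tokens=1200, expected_output_tokens=400))  # provider-c
```

A production gateway would combine cost with other signals (latency, rate limits, model capability) rather than price alone, but the core trade-off is the same weighted selection shown here.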


The future of IT operations with Dark NOC

From a Managed Service Provider (MSP) perspective, Dark NOC will shift the way IT operates today by making it more efficient, scalable, and cost-effective. It will replace the traditional NOC's manually intensive work of continuously monitoring, diagnosing, and resolving issues across multiple customer environments. ... Another key advantage Dark NOC gives MSPs is scalability. Its analytics and automation capabilities allow it to manage thousands of endpoints effortlessly without a proportional increase in engineering headcount. This enables MSPs to extend their service portfolios, onboard new customers, and increase profit margins while retaining a lean operational model. From a competitive point of view, adopting Dark NOC enables MSPs to differentiate themselves from competitors by offering proactive, AI-driven IT services that minimise downtime, enhance security and maximise performance. Dark NOC helps MSPs provide premium service at affordable price points to customers while making a decent margin internally. ... Cloud infrastructure monitoring and management provides real-time cloud resource monitoring and predictive insights; examples include AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite.