
Daily Tech Digest - February 28, 2026


Quote for the day:

"Stories are the single most powerful weapon in a leader's arsenal." -- Howard Gardner



AI ambitions collide with legacy integration problems

Many enterprises have moved beyond experimentation and are preparing for formal deployment. The survey found that 85% have begun adopting AI or expect to do so within the next 12 months. Respondents also reported efforts to formalise AI governance, reflecting greater attention to risk, accountability and oversight. ... Integration sits at the centre of that tension. AI initiatives often depend on clean data, consistent definitions and reliable access across multiple applications, requirements that legacy estates can complicate. The survey links these constraints to compliance risks, including data retention, access controls and auditability across connected systems. ... Security and privacy concerns featured prominently. Data privacy across systems was cited as a top risk by 49% of respondents, while 48% said they were concerned about third parties handling sensitive data. The results highlight the difficulty of managing information flows when AI systems interact with multiple internal applications and external providers. Governance approaches varied. Fewer than half (47%) said board-level reporting forms part of risk management for AI and related technology work, suggesting uneven executive oversight as AI moves into operational settings where incidents can carry regulatory and reputational consequences. ... Despite pressure to move quickly on AI initiatives, respondents said engineering quality remains a priority. 


Striking the Right Balance Between Automation and Manual Processes in IT

Rather than applying AI wherever possible and over-automating, leaders should identify the most beneficial uses of the technology and implement it in those areas first before expanding further. Automation is a powerful tool, but humans are the most powerful tool in the IT stack. Let’s discuss how today’s IT leaders can strike the right balance between automation and manual processes. ... Even with the many benefits of automation, human-led processes still reign supreme in certain areas. For example, optimal IT operations happen at the intersection of tools and teamwork. IT teams must still foster a collaborative culture, working with other departments to ensure cross-team visibility and alignment on business goals. While the latest AI technology can help in these efforts, ultimately, humans must do this collaborative work. Team dynamics can also be complex at times. Conflict resolution and major team decisions are not things that automation can solve. Moreover, if there is a critical system issue, DBAs must be able to work with IT leaders to resolve it and forge a path forward. Finally, manual processes are often necessitated by convoluted workflows. Many DBA teams have workflows in which every step is a set of if-then-else decisions, with each possible outcome cascading into further if-then decisions across multiple levels.


Translating data science capabilities into business ROI

The fundamental challenge in demonstrating data science ROI is that most analytics infrastructure feels optional until it becomes essential. During normal operations, executives tolerate delays in reporting and gaps in visibility. During a crisis, those same gaps become existential threats. ... The turning point came when I realized we weren’t facing a data problem or a technology problem. We were facing a decision-making problem. Our leadership needed to maintain operational stability for a multi-trillion-dollar asset manager during unprecedented disruption. Every day without visibility meant delayed decisions, missed opportunities, and compounding uncertainty. ... Speed-to-value often trumps technical sophistication. The COVID dashboard taught me this lesson definitively. We could have spent months building a comprehensive data warehouse with sophisticated ETL pipelines and machine learning-powered forecasting. Instead, we focused ruthlessly on the minimum viable solution that executives needed immediately. ... Strategic positioning creates a disproportionate impact. I served as strategic architect for a major product repositioning — a multi-million-dollar initiative essential for our competitive positioning. My data-backed strategies produced immediate, quantifiable market share gains and resulted in substantially larger deal sizes and accelerated acquisition rates that fundamentally altered our market position.


The reliability cost of default timeouts

Many widely used libraries and systems default to infinite or extremely large timeouts. In Java, common HTTP clients treat a timeout of zero as “wait indefinitely” unless explicitly configured. In Python, requests will wait indefinitely unless a timeout is set explicitly. The Fetch API does not define a built-in timeout at all. These defaults aren’t careless. They’re intentionally generic. Libraries optimize for the correctness of a single request because they can’t know what “too slow” means for your system. Survivability under partial failure is left to the application. ... Long timeouts can also mask deeper design problems. If a request regularly times out because it returns thousands of items, the issue isn’t the timeout itself. It’s missing pagination or poor request shaping. By optimizing for individual request success, teams unintentionally trade away system-level resilience. ... A timeout defines where a failure is allowed to stop. Without timeouts, a single slow dependency can quietly consume threads, connections and memory across the system. With well-chosen timeouts, slowness stays contained instead of spreading into a system-wide failure. ... A timeout is a decision about value. Past a certain point, waiting longer does not improve user experience. It increases the amount of wasted work a system performs after the user has already left. A timeout is also a decision about containment. Without bounded waits, partial failures turn into system-wide failures through resource exhaustion: blocked threads, saturated pools, growing queues and cascading latency.
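The containment argument above can be sketched with a bounded wait. This is a minimal stdlib illustration (the names `slow_dependency` and `call_with_timeout` are hypothetical, not from any library): past the timeout, the caller takes a fallback instead of tying up a thread.

```python
import concurrent.futures
import time

def slow_dependency() -> str:
    # Simulates a dependency that responds too slowly to be worth waiting for.
    time.sleep(0.5)
    return "payload"

def call_with_timeout(fn, timeout_s: float, fallback: str) -> str:
    """Bound the wait: past timeout_s, waiting longer only wastes work."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # Fail fast and stay contained instead of letting slowness spread.
        return fallback
    finally:
        pool.shutdown(wait=False)

result = call_with_timeout(slow_dependency, timeout_s=0.1, fallback="degraded response")
print(result)  # the caller moves on after 100 ms
```

With the `requests` library mentioned in the article, the same principle means always passing `timeout=` explicitly, e.g. `requests.get(url, timeout=(3.05, 10))` to bound connect and read time separately.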


From dashboards to decisions: How streaming data transforms vertical software

For years, the standard for vertical software has been the nightly sync. You collect data all day, run a massive batch job at 2:00 AM, and provide your customers with a clean report the next morning. In 2026, that delay is becoming a liability rather than a best practice. ... Data streaming isn’t just about moving bits faster; it’s about changing the fundamental value proposition of your application. Instead of being a system of record that tells a user what happened, your software becomes a system of agency that tells them what is happening right now. This shift requires a mental shift away from static databases toward event-driven architectures. You’re no longer just storing a “state” (like current inventory); you’re capturing every “event” (every scan, every sale, every sensor ping) that leads to that state. ... One of the biggest mistakes I see software leaders make is treating real-time data as a “table stakes” feature that they give away for free. Streaming infrastructure is expensive to run and even more expensive to maintain. If you bake these costs into your standard subscription without a clear monetization strategy, you’ll watch your gross margins shrink as your customers’ data volumes grow. ... When you process data at the edge, you’re also solving the “data gravity” problem. Sending thousands of high-frequency sensor pings from a factory floor to the cloud just to filter out the noise is a waste of bandwidth and money.
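The edge-filtering idea in the last paragraph can be sketched as a simple deadband filter — a hypothetical illustration, not production code: forward a sensor ping upstream only when it differs meaningfully from the last value sent.

```python
def edge_filter(readings, threshold):
    """Yield a reading only when it changes meaningfully from the last
    forwarded value, so noise is dropped at the edge, not in the cloud."""
    last = None
    for value in readings:
        if last is None or abs(value - last) >= threshold:
            last = value
            yield value

# Hypothetical temperature pings from a factory-floor sensor
pings = [20.0, 20.1, 20.0, 25.3, 25.4, 20.2]
print(list(edge_filter(pings, threshold=2.0)))  # [20.0, 25.3, 20.2]
```

Three events cross the wire instead of six; at thousands of pings per second the bandwidth saving is the whole point.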


MCP leaves much to be desired when it comes to data privacy and security

From a data privacy standpoint, one of the major issues is data leakage, while from a security perspective, there are several things that may cause issues, including prompt injections, difficulty in distinguishing between verified and unverified servers, and the fact that MCP servers sit below typical security controls. ... Fulkerson went on to say that runtime execution is another issue, and legacy tools for enforcing policies and privacy are static and don’t get enforced at runtime. When you’re dealing with non-deterministic systems, there needs to be a way to verifiably enforce policies at runtime execution because the blast radius of runtime data access has outgrown the protection mechanisms organizations have. He believes that confidential AI is the solution to these problems. Confidential AI builds on the properties of confidential computing, which involves using hardware that has an encrypted cache, allowing data and inference to be run inside an encrypted environment. While this helps prove that data is encrypted and nobody can see it, it doesn’t help with the governance challenge, which is where Fulkerson says confidential AI comes in. Confidential AI treats everything as a resource with its own set of policies that are cryptographically encoded. For example, you could limit an agent to only be able to talk to a specific agent, or only allow it to communicate with resources on a particular subnet.
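The subnet restriction Fulkerson describes can be illustrated with a minimal policy check (stdlib only; the policy shape is hypothetical, and the cryptographic encoding and attested runtime enforcement that confidential AI adds are out of scope here):

```python
import ipaddress

def allowed(agent_policy: dict, peer_ip: str) -> bool:
    """Permit communication only when the peer is on an allowed subnet."""
    peer = ipaddress.ip_address(peer_ip)
    return any(peer in ipaddress.ip_network(net)
               for net in agent_policy["allowed_subnets"])

# Hypothetical policy: this agent may only reach the 10.20.0.0/16 subnet.
policy = {"agent": "reporting-agent", "allowed_subnets": ["10.20.0.0/16"]}
print(allowed(policy, "10.20.3.7"))    # True  (inside the subnet)
print(allowed(policy, "192.168.1.5"))  # False (outside)
```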


3 Ways OT-IT Integration Helps Energy and Utilities Providers Modernize Grid Operations

Increasingly, energy providers are turning to digital twins to model and simulate critical infrastructure across generation, transmission and distribution environments. By feeding live telemetry from supervisory control and data acquisition systems, intelligent electronic devices and other OT assets into IT-based simulation platforms, utilities can create real-time digital replicas of substations, turbines, transformers and even entire grid segments. This enables teams to test load-balancing strategies, maintenance schedules or DER integrations without disrupting service. ... Private 5G networks offer a compelling alternative. Designed for high reliability and low latency, private 5G can operate effectively in interference-heavy environments such as substations or generation facilities. When paired with TSN, utilities can achieve deterministic, sub-millisecond communication between protection systems, controllers and analytics platforms. ... Federated machine learning allows utilities to train AI models locally at the edge — analyzing equipment performance, detecting anomalies and refining predictive maintenance strategies — without centralizing raw operational data. For industries such as energy and oil, remote sites can run local anomaly detection models tailored to site-specific conditions, while still sharing insights that strengthen enterprisewide safety and operational protocols.
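The federated-learning pattern described above can be sketched with the core federated-averaging (FedAvg) step: each site trains locally, and only model parameters — never raw operational data — leave the edge to be averaged. A toy illustration with hypothetical weights:

```python
def federated_average(site_weights):
    """FedAvg core step: element-wise average of parameter vectors trained
    locally at each site. Raw telemetry never leaves the remote site."""
    n = len(site_weights)
    return [sum(w[i] for w in site_weights) / n
            for i in range(len(site_weights[0]))]

# Hypothetical anomaly-model weights from three remote sites
site_a = [0.2, 1.0, -0.5]
site_b = [0.4, 0.8, -0.3]
site_c = [0.0, 1.2, -0.4]
print(federated_average([site_a, site_b, site_c]))
```

The averaged model is then pushed back to every site, so each one benefits from the others' experience without any site sharing its operational data.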


Even if AI demand fades, India need not worry - about data centres

AI pushes rack densities from ~5–10kW to 50–100kW+, making liquid cooling, greater power capacity, and purpose‑built ‘AI‑ready’ Data Centre campuses essential — whether for regional training clusters or dense inference. What makes a Data Centre AI-ready is the ability to support advanced cooling, predictable scalability and direct access to clouds, networks and partners in a sustainable manner. ... In India, enterprises are rapidly adopting hybrid and multi-cloud architectures as they modernise their digital infrastructure. Domestic enterprises, particularly in BFSI and broking, are moving away from in-house data centres toward third-party colocation facilities to gain scalability, efficient interconnection with their required ecosystem, operational efficiency and access to specialised talent. This shift is being further accelerated by distributed AI, hybrid multi-cloud architectures and a growing focus on sustainability. ... India’s Data Centre market is distinctive because of the scale of its digital consumption, combined with the early stage of ecosystem development. India generates a significant share of global data, yet its installed data centre capacity remains comparatively low, creating strong long-term growth potential. This growth is now being amplified by hyperscalers and AI-led demand. India aims to become a USD 1 T digital economy by 2028. It is already making significant progress, supported by the country’s thriving startup ecosystem, the third largest in the world, and initiatives like Startup India.


Surprise! The One Being Ripped Off by Your AI Agent Is You

It’s now happening all the time: in the sale of location data and browsing histories to brokers who assemble and sell our highly personal profiles, and in DOGE’s and other data grabs across the federal government, where housing, tax, and health information is being weaponized for immigration enforcement or misleading voter fraud “investigations.” With AI agents, it just gets worse. Data betrayal is an even more intimate act. Yet the people who granted OpenClaw access to their accounts were making a reasonable choice—to use a powerful tool on their behalf. ... The data aggregation capabilities of AI add another dimension of risk that rarely gets even a mention, but represents a change in scale that adds up to a sea change, making something marketed as “productivity” software a menacing vector for data weaponization. The same capabilities that make agents useful—synthesizing enormous amounts of information across sources and acting autonomously across platforms with persistence and memory—make them extraordinarily powerful instruments for state surveillance and targeted repression. An autocratic government could build dossiers on dissidents, journalists, or voters from financial records, social media, location data, and communications metadata, acting in real time: micro-targeting people with persuasion campaigns, swarming targets with coordinated social media attacks, engineering entrapment schemes, or flagging individuals based on patterns no court ever authorized.


What makes Non-Human Identities in AI secure

By aligning security goals with technological advancements, NHIs offer a tangible solution to the challenges posed by AI and cloud-based architectures. Forward-thinking organizations are leveraging this strategic advantage to stay ahead of potential threats, ensuring that their digital assets remain both protected and resilient. ... Can businesses effectively integrate Non-Human Identities across diverse sectors? As industries such as financial services, healthcare, and travel become increasingly dependent on digital transformation, the need for securing NHIs is paramount. Each sector presents unique challenges and requirements that necessitate tailored approaches to NHI management. In financial services, for example, the emphasis might be on protecting transactional data, while healthcare organizations focus on safeguarding patient information. Thus, versatile solutions that accommodate varying security demands while maintaining robust protection standards are essential. ... What greater role can NHIs play as emerging technologies unfold? The growing intersection of AI and IoT devices creates a complex web of interactions that requires robust security measures. Non-Human Identities provide a framework for securely managing the myriad connections and transactions occurring between devices. In IoT networks, NHIs authenticate and authorize communication between endpoints, thus safeguarding the integrity of both data and operations.

Daily Tech Digest - February 17, 2026


Quote for the day:

"If you want to become the best leader you can be, you need to pay the price of self-discipline." -- John C. Maxwell



6 reasons why autonomous enterprises are still more a vision than reality

"AI is the first technology that allows systems that can reason and learn to be integrated into real business processes," Vohra said. ... Autonomous organizations, he continued, "are built on human-AI agent collaboration, where AI handles speed and scale, leaving judgment and strategy up to humans." They are defined by "AI systems that go beyond just generating insights in silos, which is how most enterprises are currently leveraging AI," he added. Now, the momentum is toward "executing decisions across workflows with humans setting intent and guardrails." ... The survey highlighted that work is required to help develop agents. Only 3% of organizations -- and 10% of leaders -- are actively implementing agentic orchestration. "This limited adoption signals that orchestration is still an emerging discipline," the report stated. "The scarcity of orchestration is a litmus test for both internal capability and external strategic positioning. Successful orchestration requires integrating AI into workflows, systems, and decision loops with precision and accountability." ... Workforce capability gaps continue to be the most frequently cited organizational constraint to AI adoption, as reported by six in 10 executives -- yet only 45% say their organizations offer AI training for all employees. ... As AI takes on more execution and pattern recognition, human value increasingly shifts toward system design, integration, governance, and judgment -- areas where trust, context, and accountability still sit firmly with people.


Finding the key to the AI agent control plane

Agents change the physics of risk. As I’ve noted, an agent doesn’t just recommend code. It can run the migration, open the ticket, change the permission, send the email, or approve the refund. As such, risk shifts from legal liability to existential reality. If a large language model hallucinates, you get a bad paragraph. If an agent acts on a hallucination, you get a real-world change that someone has to undo. ... Every time an AI system makes a mistake that a human has to clean up, the real cost of that system goes up. The only way to lower that tax is to stop treating governance as a policy problem and start treating it as architecture. That means least privilege for agents, not just humans. It means separating “draft” from “send.” It means making “read-only” a first-class capability, not an afterthought. It means auditable action logs and reversible workflows. It means designing your agent system as if it will be attacked because it will be. ... Right now, permissions are a mess of vendor-specific toggles. One platform has its own way of scoping actions. Another bolts on an approval workflow. A third punts the problem to your identity and access management team. That fragmentation will slow adoption, not accelerate it. Enterprises can’t scale agents until they can express simple rules. We need to be able to say that an agent can read production data but not write to it. We need to say an agent can draft emails but not send them. We need to say an agent can provision infrastructure only inside a sandbox, with quotas, or that it must request human approval before any destructive action.
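The simple rules the author wants to express can be sketched as deny-by-default capability grants — a hypothetical illustration of the policy shape, not any vendor's API:

```python
# Hypothetical capability grants expressing the rules in the text.
GRANTS = {
    "data-agent":  {("read", "production-db")},   # read-only as a first-class grant
    "mail-agent":  {("draft", "email")},          # may draft, may never send
    "infra-agent": {("provision", "sandbox")},    # may never provision outside the sandbox
}

def permitted(agent: str, action: str, resource: str) -> bool:
    """Deny by default: an agent may act only on an explicit grant."""
    return (action, resource) in GRANTS.get(agent, set())

print(permitted("data-agent", "read", "production-db"))   # True
print(permitted("data-agent", "write", "production-db"))  # False
print(permitted("mail-agent", "send", "email"))           # False
```

The point of the sketch is the deny-by-default posture: anything not explicitly granted — including an unknown agent — is refused.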


PAM in Multi‑Cloud Infrastructure: Strategies for Effective Implementation

The "Identity Gap" has emerged as the leading cause of cloud security breaches. Traditional vault-based Privileged Access Management (PAM) solutions, designed for static server environments, are inadequate for today’s dynamic, API-driven cloud infrastructure. ... PAM has evolved from an optional security measure to an essential and fundamental requirement in multi-cloud environments. This shift is attributed to the increased complexity, decentralized structure, and rapid changes characteristic of modern cloud architectures. As organizations distribute workloads across AWS, Azure, Google Cloud, and on-premises systems, traditional security perimeters have become obsolete, positioning identity and privileged access as central elements of contemporary security strategies. ... Fragmented identity systems hinder multi‑cloud PAM. Centralizing identity and federating access resolves this, with a Unified Identity and Access Foundation managing all digital identities—human or machine—across the organization. This approach removes silos between on-premises, cloud, and legacy applications, providing a single control point for authentication, authorization, and lifecycle management. ... Cloud providers deliver robust IAM tools, but their features vary. A strong PAM approach aligns these tools using RBAC and ABAC. RBAC assigns permissions by job role for easy scaling, while ABAC uses user and environment attributes for tight security.
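The RBAC/ABAC distinction closing the summary can be sketched in a few lines — a hypothetical illustration, not any cloud provider's IAM API. RBAC grants by role; ABAC layers attribute conditions (device posture, on-call status) on top:

```python
# Hypothetical role-to-permission map (RBAC).
ROLE_PERMS = {"db-admin": {"rotate-credentials", "read-audit-log"}}

def rbac_allows(role: str, permission: str) -> bool:
    """RBAC: permission follows from the role alone — easy to scale."""
    return permission in ROLE_PERMS.get(role, set())

def abac_allows(role: str, permission: str, attrs: dict) -> bool:
    """ABAC layered on RBAC: same role, but the request must also come
    from a managed device while the user is on call."""
    return (rbac_allows(role, permission)
            and attrs.get("device_managed") is True
            and attrs.get("on_call") is True)

print(rbac_allows("db-admin", "rotate-credentials"))  # True: role is enough
print(abac_allows("db-admin", "rotate-credentials",
                  {"device_managed": True, "on_call": False}))  # False: attribute check fails
```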


Giving AI ‘hands’ in your SaaS stack

If an attacker manages to use an indirect prompt injection — hiding malicious instructions in a calendar invite or a web page the agent reads — that agent essentially becomes a confused deputy. It has the keys to the kingdom. It can delete opportunities, export customer lists or modify pricing configurations. ... For AI agents, this means we must treat them as non-human identities (NHIs) with the same or greater scrutiny than we apply to employees. ... The industry is coalescing around the model context protocol (MCP) as a standard for this layer. It provides a universal USB-C port for connecting AI models to your data sources. By using an MCP server as your gateway, you ensure the agent never sees the credentials or the full API surface area, only the tools you explicitly allow. ... We need to treat AI actions with the same reverence. My rule for autonomous agents is simple: If it can’t dry run, it doesn’t ship. Every state-changing tool exposed to an agent must support a dry_run=true mode. When the agent wants to update a record, it first calls the tool in dry-run mode. The system returns a diff — a preview of exactly what will change. This allows us to implement a human-in-the-loop approval gate for high-risk actions. The agent proposes the change, the human confirms it and only then is the live transaction executed. ... As CIOs and IT leaders, our job isn’t to say “no” to AI. It’s to build the invisible rails that allow the business to say “yes” safely. By focusing on gateways, identity and transactional safety, we can give AI the hands it needs to do real work, without losing our grip on the wheel.
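The dry-run rule can be sketched as follows — a hypothetical tool shape, not the author's actual implementation. The same tool either previews a diff or applies the change, and the agent only ever gets the preview until a human approves:

```python
def update_record(record: dict, changes: dict, dry_run: bool = True) -> dict:
    """A state-changing tool exposed to an agent: in dry-run mode it
    returns a diff and never mutates; only a live call applies changes."""
    diff = {k: (record.get(k), v) for k, v in changes.items() if record.get(k) != v}
    if dry_run:
        return {"diff": diff, "applied": False}
    record.update(changes)
    return {"diff": diff, "applied": True}

record = {"price": 100, "tier": "gold"}

preview = update_record(record, {"price": 90}, dry_run=True)  # agent proposes
print(preview["diff"])          # {'price': (100, 90)} — the human reviews this
assert record["price"] == 100   # nothing has changed yet

result = update_record(record, {"price": 90}, dry_run=False)  # human approved
print(record["price"])          # 90
```

In a real system the `dry_run=False` path would be callable only from the approval gate, never from the agent directly.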


AI-fuelled supply chain cyber attacks surge in Asia-Pacific

Exposed credentials, source code, API keys and internal communications can provide detailed insight into business processes, supplier relationships and technology stacks. When combined with brokered access, that information can support impersonation, targeted intrusion and fraud activity that blends in with legitimate use. One area of concern is open-source software distribution, where widely used libraries can spread malicious code at scale. ... The report points to AI-assisted phishing campaigns that target OAuth flows and other single sign-on mechanisms. These techniques can bypass multi-factor authentication where users approve malicious prompts or where tokens are stolen after login. ... "AI did not create supply chain attacks, it has made them cheaper, faster, and harder to detect," Mr Volkov added. "Unchecked trust in software and services is now a strategic liability." The report names a range of actors associated with supply-chain-focused activity, including Lazarus, Scattered Spider, HAFNIUM, DragonForce and 888, as well as campaigns linked to Shai-Hulud. It said these groups illustrate how criminal organisations and state-aligned operators are targeting similar platforms and integration layers. ... The report's focus on upstream compromise reflects a broader trend in cyber risk management, where organisations assess not only their own exposure but also the resilience of vendors and technology supply chains.


Automation cannot come at the cost of accountability; trust has to be embedded into the architecture

Visa is actively working with issuers, merchants, and payment aggregators to roll out authentication mechanisms based on global standards. “Consumers want payments to be invisible,” Chhabra adds. “They want to enjoy the shopping experience, not struggle through the payment process.” Tokenisation plays a critical role in enabling this vision. By replacing sensitive card details with unique digital tokens, Visa has created a secure foundation for tap-and-pay, in-app purchases, and cross-border transactions. In India alone, nearly half a billion cards have already been tokenised. “Once tokenisation is in place, device-based payments and seamless commerce become possible,” Chhabra explains. “It’s the bedrock of frictionless payments.” Fraud prevention, however, is no longer limited to card-based transactions. With real-time and account-to-account payments gaining momentum, Visa has expanded its scope through strategic acquisitions such as Featurespace. The UK-based firm specialises in behavioural analytics for real-time fraud detection, an area Chhabra describes as increasingly critical. “We don’t just want to detect fraud on the Visa network. We want to help prevent fraud across payment types and networks,” he says. Before deploying such capabilities in India, Visa conducts extensive back-testing using localised data and works closely with regulators. “Global intelligence is powerful, but it has to be adapted to local behaviour. You can’t simply overfit global models to India’s unique payment patterns.”


Most ransomware playbooks don't address machine credentials. Attackers know it.

The gap between ransomware threats and the defenses meant to stop them is getting worse, not better. Ivanti’s 2026 State of Cybersecurity Report found that the preparedness gap widened by an average of 10 points year over year across every threat category the firm tracks. ... The accompanying Ransomware Playbook Toolkit walks teams through four phases: containment, analysis, remediation, and recovery. The credential reset step instructs teams to ensure all affected user and device accounts are reset. Service accounts are absent. So are API keys, tokens, and certificates. The most widely used playbook framework in enterprise security stops at human and device credentials. The organizations following it inherit that blind spot without realizing it. ... “Although defenders are optimistic about the promise of AI in cybersecurity, Ivanti’s findings also show companies are falling further behind in terms of how well prepared they are to defend against a variety of threats,” said Daniel Spicer, Ivanti’s Chief Security Officer. “This is what I call the ‘Cybersecurity Readiness Deficit,’ a persistent, year-over-year widening imbalance in an organization’s ability to defend their data, people, and networks against the evolving threat landscape.” ... You can’t reset credentials that you don’t know exist. Service accounts, API keys, and tokens need ownership assignments mapped pre-incident. Discovering them mid-breach costs days.
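The pre-incident ownership mapping the article calls for can be sketched as a simple inventory lookup (hypothetical names throughout): if every machine credential already has an owner, reset scope during an incident is a query, not a days-long discovery exercise.

```python
# Hypothetical machine-credential inventory with ownership mapped pre-incident.
INVENTORY = [
    {"id": "svc-billing",   "kind": "service-account", "owner": "payments-team"},
    {"id": "ci-deploy-key", "kind": "api-key",         "owner": "platform-team"},
    {"id": "mtls-gateway",  "kind": "certificate",     "owner": "network-team"},
]

def reset_scope(affected_owners: set) -> list:
    """Everything the affected teams own must be rotated during the
    credential-reset phase — not just human and device accounts."""
    return [c["id"] for c in INVENTORY if c["owner"] in affected_owners]

print(reset_scope({"payments-team", "platform-team"}))
# ['svc-billing', 'ci-deploy-key']
```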


CISO Julie Chatman offers insights for you to take control of your security leadership role

In a few high-profile cases, security leaders have faced criminal charges for how they handled breach disclosures, and civil enforcement for how they reported risks to investors and regulators. The trend is toward holding CISOs personally accountable for governance and disclosure decisions. ... You’re seeing the rise of fractional CISOs, virtual CISOs, heads of IT security instead of full CISO titles. It’s a lot harder to hold a fractional CISO personally liable. This is relatively new. The liability conversation really intensified after some high-profile enforcement actions, and now we’re seeing the market respond. ... First, negotiate protection upfront. When you’re thinking about accepting a CISO role, explicitly ask about D&O insurance coverage. If the CISO is not considered a director or an officer of the company and can’t be given D&O coverage, will the company subsidize individual coverage? There are companies now selling CISO-specific policies. Make this part of your compensation negotiation. Second, do your job well but understand the paradox. Sometimes when you do your job properly, you’re labeled ‘the office of no,’ you’re seen as ‘difficult,’ and you last 18 months. It’s a catch-22. Real liability protection is changing how your organization thinks about risk ownership. Most organizations don’t have a unified view of risk or the vocabulary to discuss it properly. If you can advance that as a CISO, you can help the business understand that risk is theirs to accept, not yours.


The AI bubble will burst for firms that can’t get beyond demos and LLMs

Even though the discussion of a potential bubble is ubiquitous, what’s going on is more nuanced than simple boom-and-bust chatter, said Francisco Martin-Rayo, CEO of Helios AI. “What people are really debating is the gap between valuation and real-world impact. Many companies are labeled ‘AI-driven,’ but only a subset are delivering measurable value at scale,” Martin-Rayo said. Founders confuse fundraising with progress, which comes only when they are solving real problems for real clients, said Nacho De Marco, founder of BairesDev. “Fundraising gives you dopamine, but real progress comes from customers,” De Marco said. “The real value of a $1B valuation is customer validation.” ... The AI shakeout has already started, and the tenor at WEF “feels less like peak hype and more like the beginning of a sorting process,” Martin-Rayo said. ... Companies that survive the coming shakeout will be those willing to rebuild operations from the ground up rather than throwing AI into existing workflows, said Jinsook Han, chief agentic AI officer at Genpact. ”It’s not about just bolting some AI into your existing operation,” Han said. “You have to really build from ground up — it’s a complete operating model change.” Foundational models are becoming more mature and can do more of what startups sell. As a result, AI providers that don’t offer distinct value will have a tough time surviving, Han said.


What could make the EU Digital Identity Wallets fail?

Large-scale digital identity initiatives rarely fail because the technology does not work. They fail because adoption, incentives, trust, and accountability are underestimated. The EU Digital Identity Wallet could still fail, or partially fail, succeeding in some countries while struggling or stagnating in others. ... A realistic risk is fragmented success. Some member states are likely to deliver robust wallets on time. Others may launch late, with limited functionality, or without meaningful uptake. A smaller group may fail to deliver a convincing solution at all, at least in the first phase. From the perspective of users and service providers, this fragmentation already undermines cross-border usage. If wallets differ significantly in capabilities, attributes, and reliability across borders, the promise of a seamless European digital identity weakens. ... While EU Digital Identity Wallets offer significantly higher security than current solutions, they will not eliminate fraud entirely. There will still be cases of wallets issued to the wrong individual, phishing attempts, and wallet takeovers. If early fraud cases are poorly handled or publicly misunderstood, trust in the ecosystem could erode quickly. The wallet’s strong privacy architecture introduces real trade-offs. One uncomfortable but necessary question worth asking is: are we going too far with privacy? ... The EU Digital Identity Wallet will succeed only if policymakers, wallet providers, and service providers treat trust, economics, and usability as core design principles, not secondary concerns.

Daily Tech Digest - December 23, 2025


Quote for the day:

"What seems to us as bitter trials are often blessings in disguise." -- Oscar Wilde



The CIO Playbook: Reimagining Transformation in a Shifting Economy

The CIO has travelled from managing mainframes to managing meaning and purpose-driven transformation. And as AI becomes the nervous system of the enterprise, technology’s centre of gravity has shifted decisively to the boardroom. The basement may be gone, but its persona remains — a reminder that every evolution begins with resistance and is ultimately tamed by the quiet persistence of those who keep the systems running and the vision alive. Those who embraced progressive technology and blended business with innovation became leaders; the rest faded into also-rans. At the end of the day, the concern isn’t technology — it’s transformation capacity and the enterprise’s appetite to take risks, embrace change, and stay relevant. Organisations that lack this mindset will fail to evolve from traditional enterprises into intelligent, interactive digital ecosystems built for the AI age. The question remains: how do you paint the plane while flying it — and keep repainting it as customer needs, markets, and technologies shift mid-air? In this GenAI-driven era, the enterprise must think like software: in continuous integration, continuous delivery, and continuous learning. This isn’t about upgrading systems; it’s about rewiring strategy, culture, and leadership to respond in real time. We are at a defining inflection point. The time is now to connect the dots — to build an experience delivery matrix that not only works for your organisation but evolves with your customer.


Flexibility or Captivity? The Data Storage Decision Shaping Your AI Future

Enterprises today must walk a tightrope: on one side, harness the performance, trust, and synergies of long-standing storage vendor relationships; on the other, avoid entanglements that limit their ability to extract maximum value from their data, especially as AI makes rapid reuse of massive unstructured data sets a strategic necessity. ... Financial barriers also play a role. Opaque or punitive egress fees charged by many cloud providers can make it prohibitively expensive to move large volumes of data out of their environments. At the same time, workflows that depend on a vendor’s APIs, caching mechanisms, or specific interfaces can make even technically feasible migrations risky and disruptive. ... Budget and performance pressures add another layer of urgency. You can save tremendously by offloading cold data to lower-cost storage tiers. Yet if retrieving that data requires rehydration, metadata reconciliation, or funneling requests through proprietary gateways, the savings are quickly offset. Finally, the rapid evolution of technology means enterprises need flexibility to adopt new tools and services. Being locked into a single vendor makes it harder to pivot as the landscape changes. ... Longstanding vendor relationships often provide stability, support, and volume pricing discounts. Abandoning these partnerships entirely in the pursuit of perfect flexibility could undermine those benefits. The more pragmatic approach is to partner deeply while insisting on open standards and negotiating agreements that preserve data mobility.


Agentic AI already hinting at cybersecurity’s pending identity crisis

First, many of these efforts are effectively shadow IT, where a line of business (LOB) executive has authorized the proof of concept to see what these agents can do. In these cases, IT or cyber teams haven’t likely been involved, and so security hasn’t been a top priority for the POC. Second, many executives — including third-party business partners handling supply chain, distribution, or manufacturing — have historically cut corners for POCs because they are traditionally confined to sandboxes isolated from the enterprise’s live environments. But agentic systems don’t work that way. To test their capabilities, they typically need to be released into the general environment. The proper way to proceed is for every agent in your environment — whether IT authorized, LOB launched, or that of a third party — to be tracked and controlled by PKI identities from agentic authentication vendors. ... “Traditional authentication frameworks assume static identities and predictable request patterns. Autonomous agents create a new category of risk because they initiate actions independently, escalate behavior based on memory, and form new communication pathways on their own. The threat surface becomes dynamic, not static,” Khan says. “When agents update their own internal state, learn from prior interactions, or modify their role within a workflow, their identity from a security perspective changes over time. Most organizations are not prepared for agents whose capabilities and behavior evolve after authentication.”
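The tracking-and-control idea above can be sketched in a few lines. This is a hypothetical, simplified illustration (the class and agent names are invented): every agent is enrolled with its own credential before it may act, and unenrolled "shadow" agents are rejected. A real deployment would use PKI certificates from an agentic authentication vendor rather than shared-secret HMACs.

```python
import hashlib
import hmac
import secrets

class AgentRegistry:
    """Minimal sketch of per-agent identity: enroll, sign, verify."""

    def __init__(self):
        self._keys = {}  # agent_id -> secret signing key

    def enroll(self, agent_id: str) -> bytes:
        """Issue a per-agent credential when the agent is launched."""
        key = secrets.token_bytes(32)
        self._keys[agent_id] = key
        return key

    @staticmethod
    def sign_request(key: bytes, action: str) -> str:
        return hmac.new(key, action.encode(), hashlib.sha256).hexdigest()

    def verify(self, agent_id: str, action: str, signature: str) -> bool:
        """Reject actions from unknown agents or with bad signatures."""
        key = self._keys.get(agent_id)
        if key is None:
            return False  # shadow agent: never enrolled, never allowed
        expected = hmac.new(key, action.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

registry = AgentRegistry()
key = registry.enroll("lob-poc-agent-7")
sig = registry.sign_request(key, "read:crm/contacts")
assert registry.verify("lob-poc-agent-7", "read:crm/contacts", sig)
assert not registry.verify("unknown-agent", "read:crm/contacts", sig)
```

The point of the sketch is the control flow, not the crypto: whether the credential is an HMAC key or an X.509 certificate, no agent action proceeds unless its identity was issued and can be verified.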


Expanding Zero Trust to Critical Infrastructure: Meeting Evolving Threats and NERC CIP Standards

Previous compliance requirements have emphasized a perimeter defense model, leaving blind spots for any threats that happen to breach the perimeter. Zero Trust initiatives solve this by making accesses inside the perimeter visible and subjecting them to strong, identity-based policies. This proactive, Zero Trust-driven model naturally fulfills CIP-015-1 requirements, reducing or eliminating false positives compared to traditional threat detection methods. In fact, an organization with a mature Zero Trust posture should be able to operate normally, even if the network is compromised. This resilience is possible when critical assets—such as controls in electrical substations or business software in the data center—are properly shielded from the shared network. Zero Trust enforces access based on verified identity, role, and context. Every connection is authenticated, authorized, encrypted, and logged. ... In short, Zero Trust’s identity-centric enforcement ensures that unauthorized network activity is detected and blocked. Even if a hacker has network access, they won’t be able to leverage that access to exfiltrate data or attack other hosts. A Zero Trust-protected organization can operate normally, even if the network is compromised. ... Zero Trust doesn’t replace your perimeter but instead reinforces it. Rather than replacing existing network firewalls, Zero Trust can overlay existing security architectures, providing a comprehensive layer of defense through identity-based control and traffic visibility. 
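The "verified identity, role, and context" enforcement described above can be illustrated with a toy policy check (all names and fields here are invented for illustration): every request is evaluated against a resource policy plus device context, and every decision, allow or deny, is logged for audit.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str
    role: str
    resource: str
    context: dict  # e.g. device posture, location, time of day

# Hypothetical policy: resource -> roles allowed to reach it.
POLICY = {
    "substation-plc": {"ot-engineer"},
    "historian-db": {"ot-engineer", "analyst"},
}

def authorize(req: AccessRequest, audit_log: list) -> bool:
    """Allow only known resources, permitted roles, compliant devices."""
    allowed = (
        req.resource in POLICY
        and req.role in POLICY[req.resource]
        and req.context.get("device_compliant", False)
    )
    audit_log.append((req.identity, req.resource, allowed))  # log every decision
    return allowed

log = []
ok = authorize(AccessRequest("alice", "ot-engineer", "substation-plc",
                             {"device_compliant": True}), log)
blocked = authorize(AccessRequest("mallory", "analyst", "substation-plc",
                                  {"device_compliant": True}), log)
assert ok and not blocked and len(log) == 2
```

Note that the check never asks "is this request coming from inside the perimeter?" — that is the property that lets a Zero Trust organization keep operating even on a compromised network.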


Top 5 enterprise tech priorities for 2026

The first is that the top priority, cited by 211 of the enterprises, is to “deploy the hardware, software, data, and network tools needed to optimize AI project value.” ... “You can’t totally immunize yourself against a massive cloud or Internet problem,” say planners. Most cloud outages, they note, resolve in a maximum of a few hours, so you can let some applications ride things out. When you know the “what,” you can look at the “how.” Is multi-cloud the best approach, or can you build out some capacity in the data center? ... “We have too many things to buy and to manage,” one planner said. “Too many sources, too many technologies.” Nobody thinks they can do some massive fork-lift restructuring (there’s no budget), but they do believe that current projects can be aligned to a long-term simplification strategy. This, interestingly, is seen by over a hundred of the group as reducing the number of vendors. They think that “lock-in” is a small price to pay for greater efficiency and reduction in operations complexity, integration, and fault isolation. ... The biggest problem, these enterprises say, is that governance has tended to be applied to projects at the planning level, meaning that absent major projects, governance tended to limp along based on aging reviews. Enterprises note that, like AI, orderly expansions in how applications and data are used can introduce governance issues, just like changes in laws and regulations. 


Why flaky tests are increasing, and what you can do about it

One of the most persistent challenges is the lack of visibility into where flakiness originates. As build complexity and artifact sizes rise, false positives and flaky tests often rise in tandem. In many organizations, CI remains a black box stitched together from multiple tools. Failures may stem from unstable test code, misconfigured runners, dependency conflicts or resource contention, yet teams often lack the observability needed to pinpoint causes with confidence. Without clear visibility, debugging becomes guesswork and recurring failures become accepted as part of the process rather than issues to be resolved. The encouraging news is that high-performing teams are addressing this pattern directly. ... Better tooling alone will not solve the problem. Organizations need to adopt a mindset that treats CI like production infrastructure. That means defining performance and reliability targets for test suites, setting alerts when flakiness rises above a threshold and reviewing pipeline health alongside feature metrics. It also means creating clear ownership over CI configuration and test stability so that flaky behaviour is not allowed to accumulate unchecked. ... Flaky tests may feel like a quality issue, but they are also a performance problem and a cultural one. They shape how developers perceive the reliability of their tools. They influence how quickly teams can ship. Most importantly, they determine whether CI/CD remains a source of confidence or becomes a source of drag.
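The "alert when flakiness rises above a threshold" idea is simple to prototype. A minimal sketch, assuming you can export recent test outcomes from your CI system (the data shape and 5% threshold here are arbitrary assumptions): a test is treated as flaky when it shows mixed pass/fail outcomes, and it is flagged once its failure rate crosses the threshold.

```python
from collections import defaultdict

def flakiness_report(runs, threshold=0.05):
    """runs: list of (test_name, passed) tuples from recent CI builds.
    A test is 'flaky' here if it both passes and fails across runs;
    a test that always fails is a hard failure, not flakiness."""
    outcomes = defaultdict(list)
    for name, passed in runs:
        outcomes[name].append(passed)
    alerts = {}
    for name, results in outcomes.items():
        fail_rate = results.count(False) / len(results)
        is_flaky = 0 < fail_rate < 1  # mixed outcomes only
        if is_flaky and fail_rate >= threshold:
            alerts[name] = round(fail_rate, 3)
    return alerts

runs = [("test_login", True)] * 18 + [("test_login", False)] * 2 \
     + [("test_api", True)] * 20
assert flakiness_report(runs) == {"test_login": 0.1}
```

Feeding such a report into the same alerting channel as production SLO breaches is one concrete way to treat CI like production infrastructure.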


Stop letting ‘urgent’ derail delivery. Manage interruptions proactively

As engineers and managers, we all have been interrupted by those unplanned, time-sensitive requests (or tasks) that arrive outside normal planning cadences. An “urgent” Slack, a last-minute requirement or an exec ask is enough to nuke your standard agile rituals. Apart from randomizing your sprint, it causes thrash for existing projects and leads to developer burnout. ... Existing team-level mechanisms like mid-sprint checkpoints provide teams the opportunity to “course correct”; however, many external randomizations arrive with an immediacy. ... Even well-triaged items can spiral into open-ended investigations and implementations that the team cannot afford. How do we manage that? Time-box it. Just a simple “we’ll execute for two days, then regroup” goes a long way in avoiding rabbit-holes. The randomization is for the team to manage, not for an individual. Teams should plan for handoffs as a normal part of supporting randomizations. Handoffs prevent bottlenecks, reduce burnout and keep the rest of the team moving. ... In cases where there are disagreements on priority, teams should not delay asking for leadership help. ... Without making it a heavy lift, teams should capture and periodically review health metrics. For our team, % unplanned work, interrupts per sprint, mean time to triage and a periodic sentiment survey helped a lot. Teams should review these within their existing mechanisms (e.g., sprint retrospectives) for trend analysis and adjustments.
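The health metrics mentioned above are cheap to compute from a simple work-item log. A hedged sketch (the field names are invented, not from any particular tracker's API): given each item's planned/unplanned status and, for interrupts, how long triage took, it reports % unplanned work, interrupt count, and mean time to triage.

```python
def sprint_health(items):
    """items: list of dicts with 'planned' (bool) and 'triage_hours'
    (float for interrupts, None for planned work)."""
    interrupts = [i for i in items if not i["planned"]]
    pct_unplanned = len(interrupts) / len(items) * 100
    triage_times = [i["triage_hours"] for i in interrupts
                    if i["triage_hours"] is not None]
    mean_triage = sum(triage_times) / len(triage_times) if triage_times else 0.0
    return {
        "pct_unplanned": round(pct_unplanned, 1),
        "interrupts": len(interrupts),
        "mean_time_to_triage_h": round(mean_triage, 1),
    }

items = [
    {"planned": True, "triage_hours": None},
    {"planned": True, "triage_hours": None},
    {"planned": True, "triage_hours": None},
    {"planned": False, "triage_hours": 2.0},
    {"planned": False, "triage_hours": 4.0},
]
assert sprint_health(items) == {
    "pct_unplanned": 40.0, "interrupts": 2, "mean_time_to_triage_h": 3.0}
```

Reviewed each retrospective, a trend in these three numbers tells you whether interruptions are being managed or merely absorbed.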


How does Agentic AI enhance operational security

With Agentic AI, the deployment of automated security protocols becomes more contextual and responsive to immediate threats. The implementation of Agentic AI in cybersecurity environments involves continuous monitoring and assessment, ensuring that NHIs and their secrets remain fortified against evolving threats. ... Various industries have begun to recognize the strategic importance of integrating Agentic AI and NHI management into their security frameworks. Financial services, healthcare, travel, DevOps, and Security Operations Centers (SOC) have benefited from these technologies, especially those heavily reliant on cloud environments. In financial services, for instance, securing hybrid cloud environments is paramount to protecting sensitive client data. Healthcare institutions, with their vast troves of personal health information, have seen significant improvements in data protection through the use of these advanced cybersecurity measures. ... Agentic AI is reshaping how decisions are made in cybersecurity by offering algorithmic insights that enhance human judgment. Incorporating Agentic AI into cybersecurity operations provides the data-driven insights necessary for informed decision-making. Agentic AI’s capacity to process vast amounts of data at lightning speed means it can discern subtle signs of an impending threat long before a human analyst might notice. By providing detailed reports and forecasts, it offers decision-makers a 360-degree view of their security. 


AI-fuelled cyber onslaught to hit critical systems by 2026

"Historically, operational technology cyber security incidents were the domain of nation states, or sometimes the act of a disgruntled insider. But recently, we've seen year-on-year rises in operational technology ransomware from criminal groups as well and with hacktivists: All major threat actor categories have bridged the IT-OT gap. With that comes a shift from highly targeted, strategic campaigns to the types of opportunistic attacks CISA describes. These are the predators targeting the slowest gazelles, so to speak," said Dankaart. ... Australian policymakers are expected to revise cybersecurity legislation and regulations for critical sectors. Morris added that organisations are looking at overseas case studies to reduce fraud and infrastructure-level attacks. ... "The scam ecosystem will continue to be exposed globally, raising new awareness of the many aspects of these crimes, including payment processors, geographic distribution of call centres and connected financial crimes. ... "The solution will be to find the 'Goldilocks Spot' of high automation and human accountability, where AI aggregates related tasks, alerts and presents them as a single decision point for a human to make. Humans then make one accountable, auditable policy decision rather than hundreds to thousands of potentially inconsistent individual choices; maintaining human oversight while still leveraging AI's capacity for comprehensive, consistent work."


Rising Tides: When Cybersecurity Becomes Personal – Inside the Work of an OSINT Investigator

The upside of all the technology and access we have is also what creates much of the risk: the multitude of dangerous situations that Miller has seen and helped people out of in the most efficient and least disruptive ways possible. As a cyber community, we have to help by building ethics and integrity into our products so they can be used less maliciously in human cases, not simply data cases. ... When everything complicated is failing, go back to basics, and teach them over and over again, until the audience moves forward. I’ve spent a decade doing this and still share the same basic principles and safety measures. Technology changes, so do people, but sometimes the things they need the most are to be seen, heard and understood. This job is a lot of emotional support and working through the things where the client gets hung up making a decision, or moving forward. ...  The amount of energy and time devoted to cases has to have a balance. I say no to more cases than I say yes, simply because I don’t have the resources or time to do them. ... As the world changes, you have to adapt and shift your tactics, delivery, and capabilities to help more people. While people like to tussle over politics, I remind them, everything is political. It’s no different in community care, mutual aid, or non-profit work. If systems cannot or won’t support communities, you have a responsibility to help build parallel systems of care that can. This means not leaving anyone behind, not sacrificing one group over another.

Daily Tech Digest - November 22, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone



How CIOs can get a better handle on budgets as AI spend soars

Everyone wants to become AI-centric or AI-native, says West Monroe’s Greenstein. “But nobody has extra buckets of money to do this unless it’s existential to their company,” he says. So moving money from legacy projects to AI is a popular strategy. “It’s a shift of priorities within companies,” he says. “They look at their investments and ask how many are no longer needed because of AI, or how many can be done with AI. Plus, they’re putting pressure on vendors to drive down costs. They’re definitely squeezing existing suppliers.” Even large, tech-forward companies might have to do this kind of juggling. ... “AI is in a self-funding model at the moment,” he says. “We’re shifting investment from legacy technologies to AI.” ... Another challenge to budgeting is the demands that AI places on people, systems, and data. One of the most significant challenges to managing AI costs is talent, says Principal’s Arora. “Skill gaps and cross-team dependencies can slow deliveries and drive up costs,” he says. Then there’s the problem of evolving regulations, and the need to continuously adapt governance frameworks to stay resilient in the face of these changes. Organizations also often underestimate how much money will be needed to train employees, and to bring data and other foundational systems in line with what’s needed for AI. “Legacy environments add complexity and expense,” he adds. “These one-time costs are heavy but essential to avoid long-term inefficiencies.”


AI agent evaluation replaces data labeling as the critical path to production deployment

It's a fundamental shift in what enterprises need validated: not whether their model correctly classified an image, but whether their AI agent made good decisions across a complex, multi-step task involving reasoning, tool usage and code generation. If evaluation is just data labeling for AI outputs, then the shift from models to agents represents a step change in what needs to be labeled. Where traditional data labeling might involve marking images or categorizing text, agent evaluation requires judging multi-step reasoning chains, tool selection decisions and multi-modal outputs — all within a single interaction. "There is this very strong need for not just human in the loop anymore, but expert in the loop," Malyuk said. He pointed to high-stakes applications like healthcare and legal advice as examples where the cost of errors remains prohibitively high. ... The challenge with evaluating agents isn't just the volume of data, it's the complexity of what needs to be assessed. Agents don't produce simple text outputs; they generate reasoning chains, make tool selections, and produce artifacts across multiple modalities. ... While monitoring what AI systems do remains important, observability tools measure activity, not quality. Enterprises require dedicated evaluation infrastructure to assess outputs and drive improvement. These are distinct problems requiring different capabilities.
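The shift from labeling single outputs to judging whole traces can be made concrete with a small data model. This is an illustrative sketch only (the schema is an assumption, not any vendor's format): each step of an agent trace gets separate judgments on reasoning and tool selection, and low-scoring traces are escalated to an expert in the loop.

```python
from dataclasses import dataclass, field

@dataclass
class StepJudgment:
    step: int
    reasoning_ok: bool
    tool_choice_ok: bool
    notes: str = ""

@dataclass
class AgentEvaluation:
    trace_id: str
    steps: list = field(default_factory=list)
    needs_expert_review: bool = False

    def score(self) -> float:
        """Fraction of steps where both reasoning and tool choice passed."""
        if not self.steps:
            return 0.0
        good = sum(1 for s in self.steps if s.reasoning_ok and s.tool_choice_ok)
        return good / len(self.steps)

ev = AgentEvaluation("trace-42")
ev.steps.append(StepJudgment(1, True, True))
ev.steps.append(StepJudgment(2, True, False, "called search instead of DB"))
ev.needs_expert_review = ev.score() < 0.8  # low score -> expert in the loop
assert ev.score() == 0.5 and ev.needs_expert_review
```

Even this toy schema shows why the work is harder than classic labeling: a single interaction produces many judgments, and the escalation rule decides when human experts get involved.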


How IT leaders can build successful AI strategies — the VC view

It’s clear now that AI is transforming existing business structures, operational layers, organizational charts, and processes. “As a CIO, if you look at long term, you get better visibility of the outcomes of AI,” said Sandhya Venkatachalam, founder and partner at Axiom Partners. “Today, a lot of these net new capabilities are taking the form of AI performing the work or producing the outcomes that humans do, versus emulating or automating software tools,” Venkatachalam said. The shift will inevitably displace legacy systems and processes. She cited customer support as an early area ripe for upheaval. ... VCs typically don’t look at what buyers need right now; they look ahead. Similarly, IT leaders should look at how AI can transform their industry in the future. The real value of AI is in displacing legacy stacks and processes, and short wins or scattered AI initiatives mean nothing, Venkatachalam said. Adding AI to existing workflows — like building an internal large language model (LLM) — is often a waste. Enterprises are also wasting time building proprietary tools and infrastructures, which duplicates work already commoditized by big research labs, Venkatachalam said. ... AI strategies link IT directly to core products, which dictates market survival. IT decision-makers should align AI strategies to their vertical markets. Physical AI is considered the next big AI technology after agents in some areas. 


Could AI transparency backfire for businesses?

Work is underway to devise common ways to disclose the use of AI in content creation. The British Standards Institute’s (BSI) common standard (BS ISO/IEC 42001:2023) provides a framework for organisations to establish, implement, maintain, and continually improve an AI management system (AIMS), ensuring AI applications are developed and operated ethically, transparently, and in alignment with regulatory standards. It helps manage AI-specific risks such as bias and lack of transparency. Mark Thirwell, the BSI’s global digital director, says that such standards are critical for building trust in AI. For his part, Thirwell is mainly focused on improving the transparency of underlying training data over whether content is disclosed as AI-generated. “You wouldn’t buy a toaster if someone hadn’t checked it to make sure it wasn’t going to set the kitchen on fire,” he argues. Thirwell posits that common standards can, and must, interrogate the trustworthiness of AI. Does it do what it says it’s going to do? Does it do that every time? Does it not do anything else – as hallucination and misinformation become increasingly problematic? Does it keep your data secure? Does it have integrity? And unique to AI, is it ethical? “If it’s detecting cancers or sifting through CVs,” he says, “is there going to be a bias based on the data it holds?” This is where transparency of the underlying data becomes key. 


The Importance of Having and Maintaining a Data Asset List and how to create one

The explosive growth of structured and unstructured data has made it increasingly difficult for organizations to track what information they hold across networks, devices, SaaS applications, and cloud platforms. Without clear visibility, businesses face higher risks, including security gaps, audit failures, regulatory penalties, and rising storage costs. ... Before we get into how to build a data asset inventory, it’s important to understand why regulators now expect organizations to maintain one. The compliance landscape in 2025 is more demanding than ever, and nearly every major framework explicitly or implicitly requires data mapping and data inventory management. ... A data asset inventory is a structured, centralized record of all the data types and systems that power your organization. The goal is to gain full visibility into what data exists, where it’s stored, who manages it, and how it flows, while also capturing any compliance obligations tied to that data. ... Many organizations rely on third-party providers to manage or process sensitive data, which can improve efficiency but also introduce new risks. External partnerships expand your organization’s digital footprint, increase the potential attack surface, and add complexity to data governance. ... A data asset inventory isn’t a one-time task; it’s a living, evolving document. As your organization adopts new tools, expands into new markets, or grows its teams, your inventory should evolve to reflect these changes. 
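The structure described above maps naturally onto a simple record type. A minimal sketch, with fields drawn from the description (the exact field names and the two example assets are assumptions for illustration): one record per asset covering what exists, where it lives, who owns it, its classification, and its compliance obligations, exported to CSV so the inventory stays reviewable.

```python
import csv
import io
from dataclasses import dataclass, asdict

@dataclass
class DataAsset:
    name: str
    location: str          # system or platform holding the data
    owner: str             # accountable team or person
    classification: str    # e.g. public / internal / confidential
    compliance_tags: str   # e.g. "GDPR;HIPAA"
    third_party: bool      # processed by an external provider?

inventory = [
    DataAsset("customer_emails", "CRM SaaS", "marketing", "confidential",
              "GDPR", third_party=True),
    DataAsset("build_logs", "CI platform", "platform-eng", "internal",
              "", third_party=False),
]

# Export as CSV so the inventory can live in version control and be
# diffed as systems, markets, and teams change.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=asdict(inventory[0]).keys())
writer.writeheader()
for asset in inventory:
    writer.writerow(asdict(asset))
assert "customer_emails" in buf.getvalue()
```

The `third_party` flag is worth calling out: filtering on it gives an instant view of the expanded attack surface that external processors introduce.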


Building and Implementing Cyber Resilience Strategies

Currently, there is no unified standard for managing cyber resilience. Although many vendors offer their own solutions and some general standardization efforts are underway, a clear and consistent framework has yet to be established. As a result, organizations are forced to develop their own methods based on internal priorities and interpretations. The main challenge is that cyberattacks have become unavoidable and frequent. Traditional protective measures alone are no longer sufficient to fight modern threats. Another problem is the lack of coordination between IT, information security, and business units. ... In practice, however, its implementation largely depends on the organization’s maturity, scale, and specific infrastructure characteristics. The main difference lies in the level of detail: as a company grows, its infrastructure becomes more complex, the number of stakeholders increases, and each stage of analysis requires greater depth. In small organizations, identifying critical services is relatively quick, while in large enterprises, the process may involve analyzing hundreds of interconnected operations. Likewise, the scope of security measures varies—from basic hardening of key systems to multi-layered protection across distributed environments. At the same time, core principles such as threat analysis, incident response planning, and regular audits remain largely unchanged across all organizations.


Security researchers develop first-ever functional defense against cyberattacks on AI models

Researchers now warn that the most advanced of these attacks, called cryptanalytic extraction, can rebuild a model by asking it thousands of carefully chosen questions. Each answer helps reveal tiny clues about the model’s internal structure. Over time, those clues form a detailed map that exposes the model’s weights and biases. These attacks work surprisingly well when used on neural networks that rely on ReLU activation functions. Because these networks behave like piecewise linear systems, attackers can hunt for points where a neuron’s output flips between active and inactive and use those moments to uncover the neuron’s signature. ... Early methods could only recover partial information, but newer techniques can figure out both the size and the direction of the weights. Some even work using nothing more than the model’s predicted labels. All rely on the same core assumption. Neurons in a given layer behave differently enough that their signals can be separated. When that is true, the attack can cluster each neuron’s critical points and rebuild the entire network with surprising accuracy. ... The team tested this defense on neural networks that previous studies had broken in just a few hours. One of the clearest results comes from a model trained on the MNIST digit dataset with two small hidden layers. 
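The "points where a neuron's output flips" idea can be shown on a toy model. This is a deliberately tiny illustration of the principle, not the attack from the paper: for a single ReLU neuron with one input, the model is piecewise linear with a kink at the critical point, and a black-box attacker can locate that kink by binary-searching for where the local slope changes, recovering the ratio of bias to weight.

```python
def relu(z):
    return max(0.0, z)

def model(x, w=2.0, b=-3.0):
    """Toy one-neuron 'network'; kink (critical point) at x = -b/w = 1.5."""
    return relu(w * x + b)

def slope(f, x, eps=1e-6):
    """Finite-difference estimate of the local slope, query access only."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def find_critical_point(f, lo, hi, iters=60):
    """Binary search for the x where the slope departs from its value at lo."""
    s_lo = slope(f, lo)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if abs(slope(f, mid) - s_lo) < 1e-9:
            lo = mid  # still on the same linear piece as `lo`
        else:
            hi = mid
    return (lo + hi) / 2

x_star = find_critical_point(model, 0.0, 10.0)
assert abs(x_star - 1.5) < 1e-4  # recovered -b/w without seeing w or b
```

Cryptanalytic extraction scales this idea to many neurons and layers, clustering critical points per neuron to rebuild signatures, which is why the defense work the article describes targets exactly this separability assumption.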


Draft Trump executive order signals new battle ahead over state AI powers

By eliminating that federal framework, the Trump White House positions itself not simply as preempting state authority, but also as reversing its immediate federal predecessor’s regulatory approach. The draft EO further states that the U.S. must sustain AI leadership through a “balanced, minimal regulatory environment,” language that signals a clear ideological orientation against safety-first or rights-protective models of AI governance. The administration wants the Department of Justice to challenge state AI laws it views as obstructive; the Department of Commerce to catalogue and publicly criticize state statutes deemed “burdensome;” and agencies like the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) to establish national standards that would override state requirements. ... The move immediately raises questions not only about the future of AI governance but also about the structure of American federalism. For years, states have been the primary actors experimenting with AI regulation. They have advanced bills aimed at biometric privacy, algorithmic fairness, deepfake disclosure, automated decision-making transparency, and even restrictions on government use of facial recognition. These experiments, often more aggressive than anything contemplated in Congress, have become the country’s de facto laboratories of AI oversight. 


Engineering the Perfect Product Launch: Lessons from Prototype to Production

Rushing a product to market without a strong quality framework is a gamble most companies regret. Recalls, warranty claims and reputational damage cost far more than investing in quality upfront. The smarter approach is to build quality into the process from the start rather than bolting it on at the end. ... During the product rollout I supported, we built proactive quality checkpoints at every stage of assembly. This meant small defects were caught early, long before they reached final testing. In one instance, a supplier batch with a minor material inconsistency was identified at the first inspection step, preventing what could have been a costly recall. Conversely, I’ve also seen how skipping just one validation step resulted in weeks of rework.  ... When all three elements: Development, quality and ERP work in harmony, product launches move faster and run smoothly. Costs are kept in check because inefficiencies are addressed early. Time-to-market accelerates because bottlenecks are anticipated. Manufacturing excellence becomes the standard from the first unit shipped, not something achieved after painful trial and error. ... Engineering a product launch is about orchestrating dozens of small, interconnected decisions across design, quality and enterprise systems. The companies that consistently succeed treat the launch as an engineering challenge, not just a marketing deadline.


Organisations struggle with non-human identity risks & AI demands

Growth in digital identities, both human and non-human, continues to strain legacy identity and access management practices. This identity sprawl raises the risk of credential-based threats and increases the attack surface for cybercriminals. "With organizations struggling to govern an expanding mesh of digital identities across human, machine, and AI entities, over-permissioned roles, shadow identities, and disconnected IAM systems will continue to expose organizations to credential-based attacks and lateral movement. AI will also reshape traditional social engineering: synthetic voices, deepfakes, and adaptive phishing will erode the reliability of static authentication, forcing organizations to adopt continuous and context-aware verification as the new baseline," said Benoit Grange ... "The NIS2 directive has ushered in stricter cybersecurity measures and reporting for a wider range of critical infrastructure and essential services across the European Union. For industries newly brought under this directive, including manufacturing, logistics and certain digital services, 2026 will bring new growing pains. The sectors, many long accustomed to minimal compliance oversight, now face strict governance and reporting requirements. In contrast, mature sectors like finance and healthcare will adapt more smoothly. The disparity will expose structural weaknesses in organizations unfamiliar with continuous compliance, making them attractive targets for attackers exploiting regulatory confusion," said Niels Fenger.

Daily Tech Digest - October 08, 2025


Quote for the day:

"Life is what happens to you while you’re busy making other plans." -- John Lennon



Network digital twin technology faces headwinds

Just like Google Maps is able to overlay information, such as driving directions, traffic alerts or locations of gas stations or restaurants, digital twin technology enables network teams to overlay information, such as a software upgrade, a change to firewalls rules, new versions of network operating systems, vendor or tool consolidation, or network changes triggered by mergers and acquisitions. Network teams can then run the model, evaluate different approaches, make adjustments, and conduct validation and assurance to make sure any rollout accomplishes its goals and doesn’t cause any problems, explains Maccioni ... “Configuration errors are a major cause of network incidents resulting in downtime,” says Zimmerman. “Enterprise networks, as part of a modern change management process, should use digital twin tools to model and test network functionality business rules and policies. This approach will ensure that network capabilities won’t fall short in the age of vendor-driven agile development and updates to operating systems, firmware or functionality.” ... Another valuable use case is testing failover scenarios, says Wheeler. Network engineers can design a topology that has alternative traffic paths in case a network component fails, but there’s really no way to stress test the architecture under real world conditions. He says that in one digital twin customer engagement “they found failure scenarios that they never knew existed.”
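The failover use case can be illustrated at toy scale. This is a deliberately simplified sketch of the idea (a real digital twin models device configs, protocols, and traffic, not just connectivity): represent the topology as a graph, simulate component failures, and check whether traffic still has a path.

```python
from collections import deque

def reachable(adjacency, src, dst, failed=frozenset()):
    """BFS over the topology, skipping failed nodes."""
    if src in failed or dst in failed:
        return False
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adjacency.get(node, []):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical topology: an edge router with two redundant core paths.
topology = {
    "edge-1": ["core-a", "core-b"],
    "core-a": ["dc"],
    "core-b": ["dc"],
}
assert reachable(topology, "edge-1", "dc")                     # normal path
assert reachable(topology, "edge-1", "dc", failed={"core-a"})  # failover holds
assert not reachable(topology, "edge-1", "dc",
                     failed={"core-a", "core-b"})              # hidden failure mode
```

Exhaustively iterating over failure combinations like the last case is how a twin surfaces "failure scenarios they never knew existed" without touching the production network.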


Autonomous AI hacking and the future of cybersecurity

The cyberattack/cyberdefense balance has long skewed towards the attackers; these developments threaten to tip the scales completely. We’re potentially looking at a singularity event for cyber attackers. Key parts of the attack chain are becoming automated and integrated: persistence, obfuscation, command-and-control, and endpoint evasion. Vulnerability research could potentially be carried out during operations instead of months in advance. The most skilled will likely retain an edge for now. But AI agents don’t have to be better at a human task in order to be useful. They just have to excel in one of four dimensions: speed, scale, scope, or sophistication. But there is every indication that they will eventually excel at all four. By reducing the skill, cost, and time required to find and exploit flaws, AI can turn rare expertise into commodity capabilities and gives average criminals an outsized advantage. ... If enterprises adopt AI-powered security the way they adopted continuous integration/continuous delivery (CI/CD), several paths open up. AI vulnerability discovery could become a built-in stage in delivery pipelines. We can envision a world where AI vulnerability discovery becomes an integral part of the software development process, where vulnerabilities are automatically patched even before reaching production — a shift we might call continuous discovery/continuous repair (CD/CR).


AI inference: reshaping the enterprise IT landscape across industries

AI inference is a complex operation that transforms intricate models into actionable agents. This process is essential for making real-time decisions, which can significantly improve user experiences. ... As AI systems handle more sensitive information, data security and private AI become a key part of effective inference processes. In cloud and Edge computing environments, where data often moves between multiple networks and devices, ensuring the confidentiality of user information is paramount. Private AI limits queries and requests to a company's internal database, SharePoint, API, or other private sources. It prevents unauthorized access and ensures that sensitive information remains confidential even when processed in the cloud or at the Edge. ... For AI to be truly transformative, low latency is a necessity, ensuring that real-time responses are both swift and seamless. In the realm of AI chatbots, for instance, the difference between a seamless conversation and a frustrating user experience often comes down to the speed of the AI’s response. Users expect immediate and accurate replies, and any delay can lead to a loss of engagement and trust. By minimising latency, AI chatbots can provide a more natural and fluid interaction, enhancing user satisfaction and driving better outcomes. ... By reducing the distance data must travel, Edge computing significantly reduces latency, enabling faster and more reliable AI inference.
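The "private AI" constraint described above, limiting queries to approved internal sources, can be sketched as a simple allowlist check at the retrieval layer. The source names and connector logic here are hypothetical, a minimal sketch rather than a real implementation.

```python
# Minimal sketch of a "private AI" source allowlist: inference-time retrieval
# is only honored for approved internal data sources. All names hypothetical.
APPROVED_SOURCES = {"internal_db", "sharepoint", "internal_api"}

def retrieve(source: str, query: str) -> str:
    """Fetch context for an AI request, refusing non-approved sources."""
    if source not in APPROVED_SOURCES:
        raise PermissionError(f"source '{source}' is not an approved private source")
    # ... dispatch to the internal connector for `source` would go here ...
    return f"results for '{query}' from {source}"

print(retrieve("internal_db", "Q3 revenue"))
```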


Smarter Systems, Safer Data: How to Outsmart Threat Actors

One of the clearest signs that a cybersecurity strategy is outdated is a lack of control and visibility over who can access what data, and on which systems. Many organizations still rely on fragmented identity management systems or grant broad access to database administrators. Others have yet to implement basic protections such as multi-factor authentication. ... Security concerns are commonly cited as a top barrier to innovation. This is why many organizations struggle to adopt artificial intelligence, migrate to the cloud, or share data externally or even internally. The only way to break this impasse is to start treating security as an enabler. Think about it this way: when done right, security is the key element that allows data to be moved, analyzed and shared. For example, if data is de-identified through encryption or tokenization to preserve privacy, then in the event of a breach it remains useless to attackers. ... What’s been key for the organizations that succeed in managing data risk while simultaneously unlocking value is a mindset shift. They stop seeing security as a roadblock and start seeing it as a foundation for growth. As an example, a large financial institution client has built an AI-powered solution for anti-money laundering. By protecting incoming data before it enters their system, they ensure that no sensitive data is fed to their algorithms, and thus the risk of a privacy breach, even an incidental one, is essentially nil.
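The de-identification idea can be sketched with a toy tokenization vault: sensitive values are swapped for random tokens before data enters analytics systems, and the mapping lives elsewhere, so breached records alone are useless. This is an illustration of the concept, not a production scheme.

```python
import secrets

# Toy tokenization vault. The analytics pipeline only ever sees tokens;
# re-identification requires access to the separately-stored mapping.
class TokenVault:
    def __init__(self):
        self._forward = {}   # value -> token
        self._reverse = {}   # token -> value

    def tokenize(self, value: str) -> str:
        if value not in self._forward:
            token = "tok_" + secrets.token_hex(8)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
record = {"name": vault.tokenize("Alice Smith"), "amount": 120.50}
# A breach that exposes `record` reveals nothing without the vault.
print(record)
```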


AI could prove CIOs’ worst tech debt yet

AI tools can be used to clean up old code and trim down bloated software, thus reducing one major form of tech debt. In September, for example, Microsoft announced a new suite of autonomous AI agents designed to automatically modernize legacy Java and .NET applications. At the same time, IT leaders see the potential for AI to add to their tech debt, with too many AI projects relying on models or agents that are expensive to deploy and maintain, and with AI coding assistants generating more lines of software than may be necessary. ... Endless AI pilot projects create their own form of tech debt as well, says Ryan Achterberg, CTO at IT consulting firm Resultant. This “pilot paralysis,” in which organizations launch dozens of proofs of concepts that never scale, can drain IT resources, he says. “Every experiment carries an ongoing cost,” Achterberg says. “Even if a model is never scaled, it leaves behind artifacts that require upkeep and security oversight.” Part of the problem is that AI data foundations are still shaky, even as AI ambition remains high, he adds. ... In addition to tech debt from too many AI pilot projects, coding assistants can create their own problems without proper oversight, adds Jaideep Vijay Dhok, COO for technology at digital engineering provider Persistent Systems. In some cases, AI coding assistants will generate more lines of software than a developer asked for, he says.


Hackers Exploit RMM Tools to Deploy Malware

RMM platforms typically operate with elevated permissions across endpoints. Once compromised, they offer adversaries a ready-made channel for privilege escalation, lateral movement and payload delivery, including ransomware ... Threat actors frequently repurpose legitimate RMM tools or hijack valid credentials, allowing malicious activity to blend seamlessly with routine administrative tasks. This tactic complicates detection and response, especially in environments lacking behavioral baselining. ... "This is a typical living-off-the-land attack used by many adversaries considering the success and ease of execution. Typically, such software are whitelisted in most of the controls to avoid blocking and noise, due to which its activities are not monitored much," Varkey said. "Like in most adversarial acts, getting access to the software is their initial step, so if access is limited to specific people with multifactor authorization and audited periodically, unauthorized access can be limited..." ... "Treat RMM seriously. Assume compromise is possible and build defenses around prevention, detection and rapid response. Start with a full audit of your RMM deployment - map every agent, session and integration to identify shadow access points: asset management is key and a good RMM solution should be able to assist here. Layered controls are key - think defense-in-depth tailored to RMM's remote nature," Beuchelt said.


From Data to Doing: Agentic AI Will Revolutionize the Enterprise

Where do organizations see the greatest opportunities for agentic AI? The answer is: everywhere. Survey results show that business leaders view agentic AI as equally relevant to productivity gains, better decision-making, and enhanced customer experiences. When asked to rank potential benefits, improving customer experience and personalization emerge as the top priority, followed closely by sharper decision-making and increased efficiency. What's telling is what landed at the bottom of the list. Few organizations currently view market and business expansion as critical. This suggests that, at least in the near term, agentic AI will be applied less as a driver of bold new growth and more as a catalyst for improving and extending existing operations. ... Agentic AI is not simply the next technology wave -- it is the next great inflection point for enterprise software. Just as client–server, the Internet, and the cloud radically redefined industry leaders, agentic AI will determine which vendors and enterprises can adapt quickly enough to thrive. The lesson is clear: organizations that treat data as a strategic asset, modernize their platforms, and embed intelligence into their workflows will not only move faster but also serve customers better. The rest risk being left behind -- just as the mainframe giants once were.


Is That Your Boss or a Deepfake on the Other Side of That Video Call?

Sophisticated deepfake technology had perfectly replicated not just the appearance but the mannerisms and decision-making patterns of the company’s executives. The real managers were elsewhere, unaware their digital twins were orchestrating one of the largest deepfake heists in corporate history. This reflects a terrifying trend of AI fraud that is shaking the financial services industry. Deepfake-enabled attacks have grown by an alarming 1,740% in just one year, representing one of the fastest-growing AI-powered threats. More than half of businesses in the U.S. and U.K. have been targeted by deepfake-powered financial scams, with 43% falling victim. ... The deepfake threat extends far beyond immediate financial losses. Each successful attack erodes the foundation of digital communication itself. When employees can no longer trust that their CEO is real during a video call, the entire remote work infrastructure becomes suspect, particularly for financial institutions, which deal in the currency of trust. ... Financial services companies must implement comprehensive AI governance frameworks, continuous monitoring systems, and robust incident response plans to address these evolving threats while maintaining operational efficiency and customer trust. These systems and protocols must extend not only within their front office but to their back office, including vendor management and third-party suppliers who manage their data.


Rethinking AI security architectures beyond Earth

The researchers outline three architectures: centralized, distributed, and federated. In a centralized model, the heavy lifting happens on Earth. Satellites send telemetry data to a large AI system, which analyzes it and sends back security updates. Training is fast because powerful ground-based resources are available, but the response to threats is slower due to long transmission times. In a distributed model, satellites still rely on the ground for training but perform inference locally. This setup reduces delay when responding to a threat, though smaller onboard systems can limit model accuracy. Federated learning goes a step further. Satellites train and infer on their own data without sending it to Earth. They share only model updates with other satellites and ground stations. This keeps latency low and improves privacy, but synchronizing models across a large constellation can be difficult. ... Byrne pointed out that while space-based architectures vary in resilience, recovery often depends on shared fundamentals. “Most systems across all segments will need to be restored from secure backups,” he said. “One architectural enhancement to help reduce recovery time is the implementation of distributed Inter-Satellite Links. These links enable faster propagation of recovery updates between satellites, minimizing latency and accelerating system-wide restoration.”
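The federated pattern described above, where each satellite trains on its own telemetry and shares only model updates, reduces at its core to federated averaging. A minimal sketch, with plain lists standing in for model weights and made-up gradients for illustration:

```python
# Minimal federated-averaging sketch: each "satellite" takes a local training
# step and shares only its weight vector; the coordinator averages them.
def local_update(weights, local_gradient, lr=0.1):
    """One gradient step on local data; raw telemetry never leaves the node."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(updates):
    """Average the weight vectors shared by all satellites."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_weights = [0.0, 0.0]
# Each satellite computes an update from its own data (gradients invented
# here purely for the example).
sat_updates = [
    local_update(global_weights, [1.0, -2.0]),
    local_update(global_weights, [3.0, 0.0]),
]
global_weights = federated_average(sat_updates)
print(global_weights)  # averaged model; no raw data was exchanged
```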


Who Governs Your NHIs? The Challenge of Defining Ownership in Modern Enterprise IT

What we should actually mean by ownership is the person who can answer the basic questions about why this NHI exists, what access it has, how often credentials should be rotated, whether it's being used in a way that could introduce new risks, and whether the credentials have been properly stored or have been leaked. ... Instead of focusing solely on assigning human ownership, we should be working to ensure that the questions we would ask the owner are easily answerable by our tools. This approach makes answers persistent and usable by multiple teams over time and provides consistency across the organization. It does not rely on specific individuals being eternally available or up to speed on how the NHI they created is being used. Ultimately, it scales better than human-dependent processes. Just as governing an application and all of the NHIs involved is almost never going to be the responsibility of one person, the ideal scenario where a single person can outright own an NHI and be responsible for every aspect is going to be a rare situation. ... The conversation about ownership often gets stuck on blame. Let's reframe it around assurance. Let's ensure that if a secret exists, no matter where or how it is stored, governance questions can be answered quickly and consistently.
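One way to make those governance questions "answerable by tools" is to attach a machine-readable record to each NHI at creation time. A hypothetical sketch, with illustrative field names:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical governance record for a non-human identity (NHI). It answers
# the ownership questions from the text without depending on one person.
@dataclass
class NHIRecord:
    name: str
    purpose: str          # why this NHI exists
    scopes: list          # what access it has
    rotation_days: int    # how often credentials should rotate
    last_rotated: date
    storage: str          # where the secret is properly stored

    def rotation_overdue(self, today: date) -> bool:
        return today - self.last_rotated > timedelta(days=self.rotation_days)

svc = NHIRecord(
    name="billing-export-bot",
    purpose="nightly export of invoices to the data warehouse",
    scopes=["invoices:read", "warehouse:write"],
    rotation_days=90,
    last_rotated=date(2025, 10, 1),
    storage="vault://secrets/billing-export-bot",
)
print(svc.rotation_overdue(date(2026, 2, 28)))  # True: past the 90-day window
```

A record like this lets any team answer the rotation and access questions consistently, long after the NHI's creator has moved on.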