Daily Tech Digest - October 12, 2025


Quote for the day:

"Trust because you are willing to accept the risk, not because it's safe or certain." -- Anonymous



AI and Data Governance: The Power Duo Reshaping Business Intelligence

Fortunately, the relationship between AI and data governance isn’t one-sided. By leveraging automation, pattern recognition, and real-time analytics, AI enables organizations to manage data quality, compliance, and security more effectively. AI models can identify inaccuracies or inconsistencies, flag anomalies, and automatically correct missing or duplicate records, minimizing the risk of generating misleading results from poor-quality datasets. AI can also track organizational data in real time, ensuring accurate classification of sensitive information, enforcing access controls, and proactively identifying policy violations before they escalate. This approach enables organizations to move away from manual auditing and adopt automated, self-correcting governance workflows. ... To leverage the full potential of the relationship between AI and governance, organizations must establish a continuous feedback loop between their governance frameworks and AI systems. AI shouldn’t function independently; it must be constantly updated and aligned with governance policies to maintain accuracy, transparency, and compliance. One of the best ways to achieve this is by using intelligent data platforms such as Semarchy’s master data management (MDM) and data catalog solutions. These solutions unify and control AI data from a trusted, single source of truth, ensuring consistency across business functions.
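The duplicate-record and missing-value flagging described here ultimately reduces to simple data-quality rules that AI layers on top of. A minimal, illustrative sketch in Python (the record fields and key name are hypothetical):

```python
from collections import Counter

def quality_report(records, key):
    """Flag common data-quality problems: duplicate keys and missing field values."""
    key_counts = Counter(r[key] for r in records)
    duplicates = sorted(k for k, n in key_counts.items() if n > 1)
    missing = Counter()
    for r in records:
        for field, value in r.items():
            if value is None:          # a None value stands in for a missing entry
                missing[field] += 1
    return {"duplicate_keys": duplicates, "missing_by_field": dict(missing)}

# Hypothetical customer records: one duplicated ID, one missing email
records = [
    {"customer_id": 1, "email": "a@example.com"},
    {"customer_id": 2, "email": "b@example.com"},
    {"customer_id": 2, "email": "b@example.com"},   # duplicate record
    {"customer_id": 3, "email": None},              # missing email
]
report = quality_report(records, key="customer_id")
# report flags customer_id 2 as duplicated and one missing email
```

In a governance workflow, a report like this would feed an automated remediation or review queue rather than a manual audit.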


Building cyber resilience in a volatile world

Supply chain attacks show just how fragile the ecosystem can be, given that when one link breaks, the shockwaves ripple across agencies and sectors. That’s why the shift is so stark from outmoded ideas of “prevention,” which meant building walls around environments, to a new kind of resilience. For example, zero trust is no longer optional; it’s the baseline. Verification must be constant, and assumptions about “safe” internal networks belong in the past. Meanwhile, AI governance and quantum-resistant cryptography have jumped from academic conversations to immediate government standards. Institutional muscle is being flexed too. ... The transformation ahead is as much cultural as technical. Agencies must shift from being static defenders to dynamic operators, and need to be ready to adapt, recover, and press on even as attacks intensify. Cybersecurity is not just another line item in the IT budget, but rather the backbone of national resilience. The ability to keep delivering services, protect citizen trust, and safeguard critical infrastructure is now inseparable from how well agencies manage cyber risk. Resilience is not built by chance. It’s built through strategy, investment, and relentless partnership. It means turning frameworks into live capability, leveraging industry expertise, and embedding a mindset that sees cyber not as a constraint but as a foundation for confidence and continuity.


Fighting Disinformation Demands Confronting Social and Economic Drivers

Moving beyond security theater requires embracing ideological critique as a foundational methodology for information integrity policy research. This means shifting from “how do we stop misinformation?” to “what material and symbolic interests does information serve, and how do power relations shape what counts as legitimate knowledge?” This approach demands examining not just false information, but the entire apparatus through which some beliefs become hegemonic while others become verboten. Ideological critique offers three analytical tools absent from current information integrity policy research. First, it provides established scholarly techniques for examining how seemingly neutral technical systems encode worldviews and serve specific class interests. Platform algorithms, content moderation policies, and fact-checking systems all embed assumptions about authority, truth, and social order that more often than not favor existing power arrangements. Second, it offers frameworks for understanding how dominant groups maintain cognitive hegemony: the ability to shape not just what people think, but how they think. Third, it provides tools for analyzing how groups develop counter-hegemonic consciousness, alternative meaning-making systems, and their ‘hidden transcripts’. Policymakers who adopt these techniques can craft better responses to disinformation.


Cloud Infrastructure Isn't Dead, It's Just Becoming Invisible

Let's be honest: most cloud platforms are more alike than different. Storage, compute, and networking are commoditized. APIs are standard. Reliability and scalability are expected. Most agree that the cloud itself is no longer a differentiator; it's a utility. That's why the value is moving up the stack. Engineers don't need more IaaS; they need better ways to work with it. They want file systems that feel local, even when they're remote. They want zero-copy collaboration and speed. And they want all of that without worrying about provisioning, syncing, or latency. Today, cloud users are shifting their expectations toward solutions that utilize standard infrastructure such as object storage and virtual servers, yet abstract away the complexity. The appeal is in performance and usability improvements that make infrastructure feel invisible. ... What makes this shift important is that it's rooted in practical need. When you're working with terabytes or petabytes of high-resolution video, training a model on noisy real-world data, or collaborating across time zones on a shared dataset, traditional cloud workflows break down. Downloading files locally isn't scalable, and copying data between environments wastes time and resources. Latency is a momentum killer. This is where invisible infrastructure shines. It doesn't just abstract the cloud; it makes it better suited to the way developers actually build and collaborate today.


The great misalignment in business transformation

It’s easy to point the finger at artificial intelligence (AI) for today’s disruption in the tech workforce. After all, AI is changing how coding, analysis and even project management are done. Entire categories of tasks are being automated. Advocates argue that workers will inevitably be replaced, while critics frame it as the next wave of technological unemployment. Recent surveys have shown that employee optimism is fading. ... The problem is compounded by the emphasis on being “more artistic” or “more technical.” Both approaches miss the mark. Neither artistry for its own sake nor hyper-technical detail guarantees relevance if business problems remain unsolved. The technology industry has always experienced cycles of boom and bust. From the dot-com bubble to the recent AI surge, waves of hiring and layoffs are nothing new. What is new, however, is the growing realization that some jobs may not need to come back at all. ... Analysis without insight devolves into repetitive reporting, adding noise rather than clarity. Creativity without business grounding drifts into theatre, producing workshops and “innovation sessions” that inspire but fail to deliver results. Both are missing the target. Worse still, companies have proven they can operate without many of these roles altogether. The lesson is clear: being more artistic or more technical is not the answer. 


The Architecture Repository: Turning Enterprise Architecture into a Strategic Asset

While the Enterprise Continuum provides the context — a spectrum from generic to organization-specific models — the Architecture Repository provides the structure to store, manage, and evolve those models. ... At the heart of the repository lies the Architecture Metamodel. This is the blueprint for how architectural content is structured, related, and interpreted. It defines the vocabulary, relationships, and rules that govern the creation and classification of artifacts. The metamodel ensures consistency across the repository. Whether you’re modeling business processes, application components, or data flows, the metamodel provides a common language and structure. It’s the foundation for traceability, reuse, and integration. In practice, the metamodel is tailored to the organization’s needs. It reflects the enterprise’s modeling standards, governance policies, and stakeholder requirements. It’s not just a technical artifact — it’s a strategic enabler of clarity and coherence. ... Architecture must respond to real needs. The Architecture Requirements Repository captures all authorized requirements — business drivers, stakeholder concerns, and regulatory mandates — that guide architectural development. ... Architecture is not just about models — it’s about solutions. The Solutions Landscape presents the architectural representation of Solution Building Blocks (SBBs) that support the Architecture Landscape.


Cyberpsychology’s Influence on Modern Computing

Psychological research on decision making and cognitive processes has been fundamental to understanding perceptions and behavior in the areas of cybersecurity and cyberprivacy. Much of this work focuses on cognitive biases and emotional states, which inform the actions of both users and attackers. ... Both cognition and affect play a role in these phenomena. Specifically, under conditions of diminished information processing—such as in the case of cognitive demands or affective experiences such as a positive mood state—people are less likely to make decisions based on strongly held beliefs. For example, a consumer’s positive emotional state, such as happiness with the Internet, mediates the negative effects of information-collection concerns on their willingness to disclose personal information. Interestingly, cybersecurity experts are as vulnerable to phishing and social engineering attacks as those who are not cybersecurity experts. A deep understanding of the perceptual, cognitive, and emotional mechanisms that result in lapses of judgment or even behavior incongruent with one’s intellectual understanding is vital to minimizing such threats. In addition to cognitive and emotional states, personality models have provided insight into human behavior vis-à-vis technology. The “big five” personality theory, also known as the five-factor model, is a widely accepted framework that has been applied to a broad range of cyber-related behaviors, including cybersecurity.


The Cybersecurity Skills Gap and the Role of Diversity

Cybersecurity is often presented as a technically demanding field, she points out. “This further discourages some women from first entering the industry. For those who have, it’s then about being able to continue growing their careers when they may feel challenged by perceived technical demands,” says Pinkard. And today, cybersecurity is not a purely technical subject. Demand for technical skills will always exist, but the job has changed, says Amanda Finch, CEO, The Chartered Institute for Information Security. ... While the low number of women in cybersecurity is concerning, it’s also important to consider how other types of diversity can help fill the skills gap in the workforce. Inclusion and opportunity are “100% about more than just bringing in more women”: “It's about the different life perspective,” says Pinkard. Those “lived perspectives” are driven by areas such as neurodiversity, ethnic diversity and physical ability diversity, she says. ... Too many companies still treat diversity as a compliance exercise, says Mullins. “When it was no longer a legal requirement in the US, many simply stopped. Others will say, ‘we want more women’, but won’t update their maternity policies and complain that only men apply to their roles. Or they say ‘we want neurodiverse talent’, but resist implementing more flexible working policies to facilitate them.”


Data quality is no longer optional

AI systems can only be as good as the data that feeds them. When information is incomplete, inconsistent or trapped in silos, the insights and predictions those systems produce become unreliable. The risk is not just missed opportunities but strategic missteps that erode customer trust and competitive positioning. ... Companies with a strong digital foundation are already ahead in AI adoption, and those without risk drowning in information while starving their AI models of the clean, reliable inputs they need. But before any organisation can realise AI’s full potential, it must first build a resilient data foundation, and the enterprises that place data quality at the heart of their digital strategy are already seeing measurable gains. By investing in robust governance, integrating AI with data management and removing silos across departments, they create connected teams and more agile operations. ... Raising data quality is not a one-off exercise; it requires a cultural shift that calls for collaboration across IT, operations and business units. Leaders must set clear standards for how data is captured, cleaned and maintained, and champion the idea that every employee is a steward of data integrity. The long-term challenge is to design data architectures that can support scale and complexity and embrace distributed paradigms that support interoperability. These architectures do more than maintain order. 


Shadow AI in Your Systems: How to Detect and Control It

"Shadow AI" is when people in an organization use AI tools like generative models, coding assistants, agentic bots, or third-party LLM services without getting permission from IT or cybersecurity. This is the next step in the evolution of "shadow IT," but the stakes are higher because models can read sensitive text, make API calls on their own, and do automated tasks across systems. Industry definitions and primers say that shadow AI happens when employees use AI apps without official supervision, which can lead to data leaks, privacy issues, and compliance problems. ... Agents that automate web interactions usually need credentials, API keys, or tokens to do things for employees. Agents can get into systems directly if keys are poorly managed or embedded in scripts. ... Telltale clues include outbound traffic to known AI provider endpoints, nonstandard hostname patterns, and unusual POST bodies. Modern proxy and firewall logs often capture URLs and headers that reveal which model vendors are being used. Check your web gateway and proxy logs for spikes in API calls and endpoints that you don't know about. ... Agents often do a lot of navigations, clicks, and form submissions in a short amount of time, which is different from how people behave. Look for machine-like navigation patterns, intervals that are always the same, or pages that are crawled in tight loops.
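The log-review advice above boils down to two checks: match outbound destinations against known AI provider hostnames, and flag request cadences too regular to be human. A rough Python sketch (the hostname fragments and jitter threshold are illustrative assumptions, not a vetted blocklist):

```python
from statistics import pstdev

# Illustrative hostname fragments; a real deployment would use a maintained
# vendor list fed into the web gateway or proxy policy.
AI_HOST_HINTS = ("openai.com", "anthropic.com", "generativelanguage.googleapis.com")

def flag_ai_traffic(log_entries):
    """Return proxy-log entries whose destination matches a known AI endpoint."""
    return [e for e in log_entries if any(h in e["host"] for h in AI_HOST_HINTS)]

def looks_automated(timestamps, jitter_threshold=0.5):
    """Heuristic: near-constant intervals between requests suggest an agent, not a person."""
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) < jitter_threshold  # human browsing varies far more than this

# Hypothetical flattened proxy-log entries
logs = [
    {"host": "api.openai.com", "ts": 0.0},
    {"host": "intranet.example.local", "ts": 1.0},
    {"host": "api.openai.com", "ts": 2.0},
]
hits = flag_ai_traffic(logs)                      # two AI-endpoint entries
agent_like = looks_automated([0.0, 2.0, 4.0, 6.0])  # perfectly regular cadence
```

Either signal on its own is weak; in practice they would feed a review queue alongside credential and data-loss telemetry rather than trigger automatic blocking.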

Daily Tech Digest - October 11, 2025


Quote for the day:

“The only real mistake is the one from which we learn nothing.” -- Henry Ford



CIOs turn to AI to assist with IT purchasing decisions

“AI promptly evaluates product documentation, reviews, and market reports, cutting the time it takes to evaluate vendors from weeks to days and unearthing compatibility problems that go unnoticed by human reviewers,” he says. Like 8×8, Thrive uses a “trust but verify” approach that treats AI output as inputs for its decision-making processes, not final answers, Whittaker says. “AI is great for comparing technical specs, but it can’t help you much with assessing non-technical aspects such as quality of support, cultural fit, etc.” Thrive plans to enhance its future AI models to predict defects in products, foresee deployment challenges, and monitor vendor performance, Whittaker says. ... “When you are negotiating a contract, let’s say you received an order form, or you received a large legal contract, and it’s all unstructured data,” he says. “AI is really good at guiding you on what kind of commercial terms you should be careful with. It can look at your existing contracts and compare them with this new one and say, ‘This one has some anomalies.’” The company’s use of AI is giving the IT team time to work on other priorities instead of spending extra time researching potential products, Johar says. “If you look at how an IT organization works, we are buying software all the time, and sometimes it leaves you very little time to focus on real evaluation and piloting the software, because you just end up spending so much time on all these RFP processes, legal processes, and research,” he says.


Deepfake Awareness High at Orgs, But Cyber Defenses Badly Lag

"The deepfake threat landscape looks, above all else, dynamic," he says. "While email threats and static imagery are still the most commonly encountered vectors, there is a wide diversity of other forms of deepfakes that are quickly growing in prevalence. In fact, we're seeing more and more of every kind of deepfake in the wild." ... Attackers are using a variety of AI techniques to enhance their attack pipeline. Human digital twins can be trained on public information about a person to help create more realistic phishing attacks, which, combined with voice samples, could create convincing audio deepfakes. Concerns over misuse of AI caused Microsoft to mostly scuttle a voice cloning feature that it could have integrated into various apps, such as Teams, one that would have let a user — or an attacker — hijack someone's voice for all kinds of fraud attempts. ... "The challenge now is that AI can be used to reduce the skill barrier to entry and speed up production to a higher quality," she says. "Since the sophistication of deepfakes is making them harder to detect, it is imperative to turn to AI-augmented tools for detection, as people alone cannot be the last line of defense." Companies should continue to train their employees and create good policies that reduce the impact that one person — even a top executive — can have on the company, says Ironscales' Benishti. "Develop policies that make it impossible for a single employee's bad decision to result in compromise," he says.


Powering Data in the Age of AI: Part 1 – Energy as the Ultimate Bottleneck

“Demand for electricity around the world from data centres is on course to double over the next five years, as information technology becomes more pervasive in our lives,” Birol said in a statement released with the IEA’s 2024 Energy and AI report. “The impact will be especially strong in some countries — in the United States, data centres are projected to account for nearly half of the growth in electricity demand; in Japan, over half; and in Malaysia, one-fifth.” ... Unlike older mainframe workloads that spiked and dropped with changing demand, modern AI systems operate close to full capacity for days or even weeks at a time. ... It’s not a benchmark like FLOPS, but it now influences nearly every design decision. Chipmakers promote performance per watt as their most important competitive edge, because speed doesn’t matter if the grid can’t handle it. ... That dynamic is also reshaping the economics of AI. Cloud providers are starting to charge for workloads based not just on runtime but on the power they draw, forcing developers to optimize for energy throughput rather than latency. Data center architects now design around megawatt budgets instead of square footage, while governments from the U.S. to Japan are issuing new rules for energy-efficient AI systems.


How Artificial Intelligence is Shaping the Future of Secure, Compliant, and Efficient Data Practices

Understanding the journey of data—where it originates, how it transforms, and who accesses it—is critical for both governance and compliance. Generative AI excels at mapping data lineage by automatically tracing data flows across systems, applications, and processes. Consider a scenario where an organisation needs to demonstrate how customer information moves from collection to storage and reporting. AI-powered lineage tools can generate visual maps showing every touchpoint, transformation, and user interaction. This automation not only accelerates audits and compliance reporting but also provides actionable insights to improve data handling practices. ... Organisations often grapple with choosing between centralised and autonomous (decentralised) data management models. Centralised approaches offer uniformity and control, while autonomous models empower individual teams with flexibility. Generative AI supports both paradigms. In centralised settings, AI enforces global policies, ensures consistency, and manages data assets from a single point of control. In autonomous environments, AI agents can be embedded within business units, tailoring governance and security measures to local needs while maintaining alignment with overarching standards. This hybrid capability ensures organisations remain agile without compromising data integrity or compliance.
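Under the hood, lineage mapping of the kind described is a reachability question over a dataflow graph: from a given source, which systems, transformations, and reports does the data touch? A small illustrative sketch (the node names are hypothetical):

```python
from collections import defaultdict, deque

def lineage(edges, start):
    """Breadth-first walk over dataflow edges: every downstream touchpoint of `start`."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    seen, visited, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in visited:     # guard against cycles and fan-in
                visited.add(nxt)
                seen.append(nxt)
                queue.append(nxt)
    return seen

# Hypothetical flow: web form -> CRM -> warehouse -> two reports
edges = [
    ("signup_form", "crm.customers"),
    ("crm.customers", "warehouse.dim_customer"),
    ("warehouse.dim_customer", "report.churn"),
    ("warehouse.dim_customer", "report.revenue"),
]
downstream = lineage(edges, "signup_form")
# downstream lists every system the signup data reaches, in discovery order
```

The AI-assisted tooling the article describes adds the hard part on top: extracting those edges automatically from ETL code, query logs, and application configs, then rendering the graph visually for auditors.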


Cloud Observability Challenges At Scale (And How To Solve Them)

Concentration risk from a cloud customer can be a challenge for hyperscalers. This is especially true when key customers concentrate their load in a single region; they can saturate the shared physical resources faster than the hyperscaler’s auto-scaling can respond. ... At hyperscale, observability requires keeping vast telemetry data like logs, metrics and traces usable and cost-efficient. Storing it under one roof in an accessible, scalable and performant fashion lets organizations run AI and analytics directly from their telemetry data, spotting anomalies, problem areas and threats while future-proofing their infrastructure for data-intensive workloads. ... The complexity of managing microservices doesn’t scale linearly with the number of microservices—it scales exponentially. Mitigation requires a multipronged strategy: Limit the number of microservices; use traditional approaches where a robust yet lightweight observability strategy suffices; democratize observability-based ops, tools and skills in the organization; and exploit AI for heavy lifting and ops automation. ... One challenge is ephemeral dependency drift. At hyperscale, microservices vanish fast, breaking dependency maps and hiding failure roots. It’s like chasing ghosts in a storm. Fix it with real-time dependency snapshots and AI to predict drift patterns. Teams see the true service web, catch issues early and keep apps humming, no matter how wild the cloud gets.
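Spotting anomalies in telemetry, as described above, is often bootstrapped with plain statistics before any AI enters the picture. A minimal z-score sketch over latency samples (the threshold and data are illustrative):

```python
from statistics import mean, pstdev

def anomalies(series, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations from the mean."""
    mu = mean(series)
    sigma = pstdev(series)
    if sigma == 0:              # a flat series has no outliers
        return []
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

# Hypothetical latency samples (ms) with one spike
latencies = [20, 21, 19, 22, 20, 21, 200, 20, 19, 21]
spikes = anomalies(latencies, threshold=2.0)  # flags the 200 ms outlier
```

At hyperscale the same idea runs over sliding windows per service and per region, with the threshold tuned to keep alert volume manageable; AI-based detectors then handle the seasonal and multivariate patterns a single z-score misses.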


AI meets EQ: Reimagining HR for the industry 5.0 workplace

The stakes couldn't be higher. The World Economic Forum surveyed over 1,000 global employers and found that nearly half of them said they’ll reduce their workforce in the next five years and replace those jobs with AI. However, paradoxically, the same technologies could create 2.73 million jobs by 2028 in India alone. Which outcome prevails depends entirely on how well organisations manage the transition. It's not just about having the right technology; it's about having the right human strategy to deploy it. Consider the emergence of "cobots", which are collaborative robots designed to work alongside humans rather than replace them. ... Perhaps the most insidious challenge is AI bias, which can perpetuate discrimination based on race, gender, age, and other characteristics, and erode the trust that is essential for successful human-machine collaboration. When AI systems reflect historical prejudices or systemic inequalities, they undermine the very foundations of inclusive workplaces that Industry 5.0 promises to create. HR leaders must become guardians of algorithmic fairness, ensuring that AI systems used in recruitment, performance evaluation, and career development are transparent, equitable, and regularly audited. This requires building diverse AI development teams, implementing robust data governance frameworks, and maintaining human oversight in critical decision-making processes.


Exploring the Unintended Consequences of Automation in Software

The substitution myth refers to the flawed assumption that automation can simply replace human functions in a system without fundamentally altering how the system or human work operates. This misconception is built on assumptions like HABA-MABA ("Humans Are Better At / Machines Are Better At"), which assume that human and machine strengths are fixed, and system design is merely a matter of allocating tasks accordingly ... When an automated system fails, the amount of knowledge required to make things right again is likely greater than that required during normal operations. This creates immediate, new, and numerous items of work. Because the designers of automation can’t fully automate the human "parts", the human is left to cope with what’s left after the automated parts don’t behave as expected, leaving more complexity in their wake. ... In highly interdependent tasks like software operations, we can only plan our actions effectively when we can accurately anticipate the actions of others. Skilled teams achieve this predictability through shared knowledge and their own coordination mechanisms that are developed over time through extensive collaboration. Despite the common refrain of "human error" in incidents, in general, humans are quite predictable in their work, and we have established means for checking if something seems unpredictable.


Observability is the weapon against complex hybrid IT chaos

To understand why observability is taking off, it is important to see how it differs from traditional monitoring. Whereas traditional monitoring has been limited for years to servers, networks, and memory statistics, observability goes a step further. Monitoring mainly records what is happening, while observability shows why it is happening. It establishes connections between systems, shows how components interact with each other, and provides insight into the impact on the end user. ... The complexity of modern IT environments requires knowledge and capacity that is not available everywhere. Jean-Bastien outlines the dilemma. “If a customer had to employ someone full-time to manage everything, a capacity and knowledge problem would quickly arise. Many organizations therefore call on us to ensure continuity.” With a team of dozens of engineers, Cegeka can easily scale up, even during peak loads or holidays. In this way, they take care of the operational side of things, while customers retain the insight and reporting they need to bring their IT and business together. ... Nevertheless, there are limits to what observability can achieve. Legacy systems, such as monolithic applications in C++ or COBOL on mainframes, are difficult to instrument with modern agents. This poses a challenge in some sectors, particularly for banks that still rely heavily on older core systems.


AI Becomes a Force Multiplier for Short-Staffed Security Teams

“The skills shortage creates a paradox that limits AI’s potential in cybersecurity,” asserted Tim Freestone, chief strategy officer for Kiteworks, a provider of a secure platform for exchanging private data, in San Mateo, Calif. “Organizations lack personnel with the expertise needed to properly deploy, manage, and optimize AI-powered security tools, meaning the very solution designed to alleviate staffing pressures remains underutilized,” he told TechNewsWorld. “This gap is particularly acute because effective AI implementation requires dual competencies — both operating AI systems and defending against AI-powered attacks — skills that are in even shorter supply than traditional cybersecurity expertise. Without trained professionals who can configure AI tools appropriately, interpret their outputs accurately, and integrate them effectively into security operations, organizations risk deploying AI systems that fail to reach their defensive potential or, worse, introduce new vulnerabilities through improper management,” he said. ... “Certifications can provide reassurance that candidates meet a certain standard and help organizations demonstrate credibility to clients and regulators,” she told TechNewsWorld, “but the reliance on credentials also has drawbacks.”


The CIA triad is dead — stop using a Cold War relic to fight 21st century threats

The CIA triad is both too broad and too narrow. It lacks the vocabulary and context to handle today’s realities. In trying to retrofit authenticity, accountability, privacy, and safety into its rigid structure, we leave gaps that attackers exploit. ... Treating ransomware as a simple “availability” failure misses the point. Being “up” or “down” is irrelevant when your systems are locked and business halted. What matters is resilience: the engineered ability to absorb damage, fail gracefully, and restore from immutable backups. Availability is binary; resilience is survival. Without it, you’re unprepared. ... A fraudulent deepfake of your CEO authorizing a wire transfer may have perfect technical integrity — checksums intact, file unaltered. But its authenticity is destroyed. The CIA triad has no language to capture this breakdown, leaving organizations exposed to fraud and reputational chaos. ... A successful model must explicitly encompass the principles that the triad overlooked — such as authenticity, accountability, and resilience. Those principles must be added as foundational pillars. Furthermore, the model should have the capability to help CISOs and their teams navigate the veritable forest of frameworks, harmonize regulatory demands, and eliminate duplicate work, while also giving them a way to speak to their boards in terms of resilience, accountability, and trust, rather than just uptime and firewalls.

Daily Tech Digest - October 10, 2025


Quote for the day:

“Whether you think you can or you think you can’t, you’re right.” -- Henry Ford



Has the value of data increased?

“We’ve seen that AI’s true potential is unlocked by connecting trusted, governed data – structured and unstructured – with real-time analytics and decision intelligence. With the rise of agentic AI, the next wave of value creation will come from intelligent systems that don’t just interpret data, but continuously and autonomously act on it at scale. Put simply, AI isn’t a shortcut to insight – it’s a multiplier of value, if the data is ready. Enterprises that treat data as an afterthought will fall behind, while those that treat it as a strategic asset will lead,” added the Qlik CSO. ... “In this AI economy, compute power may set the pace, but data sets the ceiling. MinIO raises that ceiling, transforming scattered, hard-to-reach datasets into a living, high-performance fabric that fuels every AI prompt and initiative. With MinIO AIStor, organizations gain the ability to store and understand. Data that is secure, fluid, and always ready for action is a competitive weapon,” added Kapoor. ... “Data that is fresh, well described and policy aware beats bigger but blind datasets because it can be safely composed, reused and measured for impact, with the lineage to show teams what to trust and what to fix so they can ship faster,” said Neat. ... There is no real question that the value of data has increased, nor that the proliferation of AI has been fundamental to that escalation; the mechanics variously described here should point us towards the new wave of emerging truths in this space.


Whose Ops is it Anyway? How IDPs, AI and Security are Evolving Developer Culture

For many teams, the problem is not a lack of enthusiasm or ambition but a shortage of resources and skills. They want to automate more, streamline workflows, and adopt new practices, yet often find themselves already operating at full capacity just in keeping existing systems running. In that environment, the slightest of steps toward more advanced automation strategies can feel like a big leap forward. ... On the security side, the logic behind DevSecOps is compelling. More companies are realising that security has to be baked in from day one, not bolted on later. The difficulty lies in making that shift a practical reality, as integrating security checks early in the pipeline often requires new tooling, changes to established workflows, and in some cases, rethinking the roles and responsibilities within the team. ... In many organisations, it is the existing DevOps or platform teams that are best positioned to take on this responsibility, extending their remit into what is often referred to as MLOps. These teams already have experience building and maintaining shared infrastructure, managing pipelines, and ensuring operational stability at scale, so expanding those capabilities to handle data science and machine learning workflows can feel like a natural evolution. ... That said, as adoption grows, we can also expect to see more specialised MLOps roles appearing, particularly in larger enterprises or in organisations where AI is a major strategic focus.


The ultimate business resiliency test: Inside Kantsu’s ransomware response

Kantsu then began collaborating with the police, the cyberattack response teams of the company’s insurers, and security specialists to confirm the scope of cyber insurance coverage and estimate the amount of damage. ... when they began the actual recovery work, they encountered an unexpected pitfall. “We considered how to restore operations as quickly as possible. We did a variety of things, including asking other companies in the same industry to send packages, even ignoring our own profits,” Tatsujo says. ... To prevent reinfection with ransomware, the company prohibited use of the old networks and PCs. Smartphones were tethered as Wi-Fi routers and, where possible, used to keep shipments moving. New PCs were purchased to build a fresh on-premises environment. ... “In times of emergency like this, the most important thing is cash to recover as quickly as possible, rather than cost reduction. However, insurance companies do not pay claims immediately. ... “In the end, many customers cooperated, which made me really happy. Rakuten Ichiba, in particular, offers a service called ‘Strongest Delivery,’ which allows for next-day delivery and delivery time specification, but they were considerate enough to allow us a grace period in consideration of the delay in delivery,” says President Tatsujo.


Stablecoins: The New Currency of Online Criminals

Practitioners say a cluster of market and technical factors are making stablecoins the payment of choice for cybercriminals and fraudsters. "It's not just the dollar peg that makes stablecoins attractive," said Ari Redbord, vice president and global head of policy and government affairs at TRM Labs. "Liquidity is critical. There are deep pools of stablecoin liquidity on both centralized and decentralized platforms. Settlement speed and irreversibility are also appealing for criminals trying to move large sums quickly," he told Information Security Media Group. The perception of stability - knowing $1 today will likely be $1 tomorrow - often suffices for illicit actors, regardless of an issuer's exact collateral model, he said. This stability and on-chain plumbing create both opportunity and exposure. Redbord said the spike in stablecoin usage is partly because law enforcement agencies around the world have become "exceptionally effective at tracing and seizing bitcoin," and criminals "go where the liquidity and usability are." There is no technical attribute of stablecoins that makes them more appealing to criminals or harder to trace, compared to other cryptocurrencies, Koven said. In practice, public ledgers keep transfers visible; the question then becomes whether investigators have the right tools and the cooperation of the ecosystem's gatekeepers to follow value across chains.


Zero Trust cuts incidents but firms slow to adopt AI security

Zero Trust is increasingly viewed as the standard going forward. As AI-driven threats accelerate, organisations must evaluate security holistically across identity, devices, networks, applications, and data. At DXC, we're helping customers embed Zero Trust into their culture and technology to safeguard operations. Our end-to-end expertise makes it possible to both defend against AI threats and harness secure AI in the same decisive motion. ... New cybersecurity threats are the primary driver for updating Zero Trust frameworks, with 72% of respondents indicating that the evolving threat landscape pushes them to continuously upgrade policies and practices. In addition, more than half of responding organisations recognised improvements in user experience as a secondary benefit of adopting Zero Trust approaches, beyond the gains in security posture. ... Most enterprises already rely on Microsoft Entra ID and Microsoft 365 as the backbone of their IT environments. Building Zero Trust solutions alongside DXC extends that value, enabling tighter integration, simplified operations, and greater visibility and control. By consolidating around the Microsoft stack, organisations can reduce complexity, cut costs, and accelerate their Zero Trust journey. ... Participants in the study agreed that Zero Trust is not a project with a defined end point. Instead, it is an ongoing process that requires continuous monitoring, regular updates, and cultural adaptation.


Overcome Connectivity Challenges for Edge AI

The challenges of AI at the Edge are as large as the advantages, however. One of the biggest challenges and key enablement technologies is connectivity. Edge processing and AI at the Edge require reliability, low latency, and resiliency in the harshest of environments. Without good connections to the network, many of the advantages of Edge AI are diminished, or lost entirely. A truly rugged Edge AI system requires a dual focus on connectivity, according to the experts at ATTEND. It needs both robust external I/O to interface with the outside world, and high-speed, resilient internal interconnects to manage data flow within the computing module. ... The transition to Edge AI is not just a software challenge; it is a hardware and systems engineering challenge. The key to overcoming this dual challenge is to engage with a partner like ATTEND, who will understand that the reliability of an advanced AI model is ultimately dependent on the physical-layer components that capture and transmit its data. By offering a comprehensive portfolio that addresses connectivity from the external sensor to the internal processor module, ATTEND can help you to build end-to-end systems that are both powerful and resilient. To meet with ATTEND and see all that they are doing to advance and enable true intelligence at the Edge, meet with them at embedded world North America in November at the Anaheim Convention Center.


AI Security Goes Mainstream as Vendors Spend Heavily on M&A

One of the most significant operational gaps in AI adoption is the lack of runtime observability, with organizations struggling to know what data a model is ingesting or what it's producing. Observability answers these questions by providing a live view of AI behavior across prompts, responses and system interactions, and it is a precursor to regulating or securing AI systems. ... One of the biggest risks of GenAI in the enterprise is data leakage, with workers inadvertently pasting confidential information into a chatbot, models regurgitating sensitive data it was exposed to during training, or adversaries crafting prompts to extract private information through jailbreaking. Allowing AI access without control is equivalent to opening an unsecured API to your crown jewels. ... Output is just as risky as input with GenAI since an LLM could generate sensitive content, malicious code or incorrect results that are trusted by downstream systems or users. Palo Alto Networks' Arora noted the need for bi-directional inspection to watch not only what goes into large language models, but also what comes out. ... Another key challenge is defining identity in a non-human context, raising questions around how AI agents should be authenticated, what permissions AI agents should have and how to prevent escalation or impersonation. Enterprises must treat bots, copilots, model endpoints and LLM-backed workflows as identity-bearing entities that log in, take action, make decisions and access sensitive data.
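The bi-directional inspection Arora describes can be sketched as a thin guard around the model call. Everything below is illustrative: the rule patterns, the function names, and the verdict format are assumptions standing in for a real policy engine, not any vendor's API.

```python
import re

# Illustrative rules only; real deployments use policy engines and trained
# classifiers, not a handful of regexes.
INPUT_RULES = [
    re.compile(r"ignore (all )?previous instructions", re.I),  # jailbreak attempt
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                      # SSN-shaped data pasted in
]
OUTPUT_RULES = [
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),         # leaked key material
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                     # card-number-shaped output
]

def inspect(text, rules):
    """Return the patterns that matched, if any."""
    return [r.pattern for r in rules if r.search(text)]

def guarded_call(model_fn, prompt):
    """Bi-directional inspection: screen the prompt, call the model,
    then screen the response before it reaches downstream systems."""
    hits = inspect(prompt, INPUT_RULES)
    if hits:
        return {"blocked": "input", "rules": hits}
    response = model_fn(prompt)
    hits = inspect(response, OUTPUT_RULES)
    if hits:
        return {"blocked": "output", "rules": hits}
    return {"blocked": None, "response": response}
```

Logging every verdict from a choke point like this is also the cheapest form of the runtime observability discussed above: one wrapper sees every prompt and every response.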


Navigating the Techno-Future: Between Promise and Prudence

On one side are the techno-optimists: the believers in inexorable progress, the proponents of markets and innovation as self-correcting forces. They see every challenge as a technical problem and every failure as a design flaw waiting to be solved. On the other side are techno-pessimists: the prophets of collapse who warn that every new tool will inevitably accelerate inequality, erode democracy, or catalyze ecological catastrophe. They see history as a cautionary tale, and the present as a fragile prelude to systemic failure. Both perspectives share a common flaw: they treat the future as preordained. Optimists assume that progress will automatically yield good outcomes; pessimists assume that progress will inevitably lead to harm. Reality, however, is far less deterministic. Technology, in itself, is neutral. It amplifies human choices but does not dictate them. ... Just as a hammer can build a home or inflict injury, a powerful technology like artificial intelligence, gene editing, or blockchain can be used to improve lives or to exacerbate inequalities. The technology does not prescribe its use; humans do. This neutrality is both liberating and daunting. On the one hand, it affirms that progress is not predestined. The future is not a straight line determined by the mere existence of certain tools. 


CISOs prioritise real-time visibility as AI reshapes cloud security

The top priority for CISOs is real-time threat monitoring and comprehensive visibility into all data in motion across their organisations, supporting a defence-in-depth strategy. However, 97 percent of CISOs acknowledged making compromises in areas such as visibility gaps, tool integration and data quality, which they say limit their ability to fully secure and manage hybrid cloud environments. ... The reliance on AI is also causing a revision of how SOCs (security operations centres) function. Almost one in five CISOs reported lacking the appropriate tools to manage the increased network data volumes created by AI, underscoring that legacy log-based tools may not be fit for purpose against AI-powered threats. ... Rising data breaches, with a 17 percent increase year on year, are translating into greater pressure on CISOs, 45 percent of whom said they are now the main person held accountable in the event of a breach. There is also concern about stress and burnout within cybersecurity teams, which is driving a greater embrace of AI-based security tools. ... The adoption of AI is expected to have practical impacts, such as enabling junior analysts to perform at the same level as more experienced team members, reducing training costs, speeding up analysis while investigating threats, and improving overall visibility for the security function.


Serverless Security Risks Are Real, and Hackers Know It

Many believe, “No servers, no security risks.” That’s a myth: attackers now exploit security weaknesses specific to serverless platforms. ... Virtually every serverless application depends on third-party libraries, and every function that pulls in a compromised component becomes vulnerable. In one incident, hackers hijacked an npm package and slipped a hidden payload into it; when AWS Lambda functions incorporated the code, it silently extracted all of their environment variables, leaking API keys, credentials, and other sensitive data. The whole process finished in milliseconds, too brief for any security system to catch. ... As more companies adopt serverless technologies, these risks become more widespread, so it’s fundamental to validate that serverless environments are secure. Research indicates that serverless computing is expected to grow rapidly: according to Gartner’s July 2025 forecast, global IT spending will climb to $5.43 trillion, with enterprises investing billions into AI-driven cloud and data center infrastructure, making serverless platforms an increasingly critical, but often overlooked, security target.
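Since a hijacked dependency can read a function's environment in a single line of code, the cheapest mitigation is to keep secrets out of the environment altogether. The sketch below is a minimal, assumption-laden illustration: it flags variables purely by name heuristics, whereas real tooling adds entropy checks and pulls secrets from a secrets manager at runtime.

```python
import os
import re

# Heuristic, name-based patterns; invented for illustration.
SECRET_NAME = re.compile(r"KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL", re.I)

def audit_environment(env=os.environ):
    """Flag environment variables whose names suggest they hold secrets.

    The least-privilege fix for the Lambda scenario above is to keep
    secrets out of the environment entirely and fetch them from a
    secrets manager inside the handler instead.
    """
    return sorted(name for name in env if SECRET_NAME.search(name))
```

Run periodically against deployed functions, a check like this turns "we think our env vars are clean" into something a pipeline can assert.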

Daily Tech Digest - October 09, 2025


Quote for the day:

"No man is good enough to govern another man without that other's consent." -- Abraham Lincoln



The Quantum Wake-Up Call: Preparing Your Organization for PQC

Quantum computing promises transformative breakthroughs across industries—but it also threatens the cryptographic foundations that secure our digital world. As quantum capabilities evolve, organizations must proactively prepare for the shift to post-quantum cryptography (PQC) to safeguard sensitive data and maintain trust. ... The very mathematical "hardness" that makes RSA and ECC secure against classical computers is precisely what makes them fatally vulnerable to quantum computing. Shor's Algorithm: This quantum algorithm, developed by Peter Shor in 1994, is capable of solving the integer factorization and discrete logarithm problems exponentially faster than any classical machine. Once a sufficiently stable and large-scale quantum computer is built, encryption that might take a supercomputer millions of years to break could be broken in hours or even minutes. The Decryption Time Bomb: Because current PKC is used to establish long-term trust and to encrypt keys, the entire cryptographic ecosystem is a single point of failure. The threat is compounded by the "Harvest Now, Decrypt Later" strategy, meaning sensitive data is already being harvested and stored by adversaries, awaiting the quantum moment to be unlocked. Quantum computing is no longer theoretical—it’s a looming reality. Algorithms like RSA and ECC, which underpin most public-key cryptography, are vulnerable to quantum attacks via Shor’s algorithm.
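Preparing for PQC starts with a cryptographic inventory: knowing which assets still rely on Shor-vulnerable algorithms. The toy triage below is a sketch under assumptions: the asset map and algorithm labels are invented, while the quantum-resistant set names the NIST-standardized families (ML-KEM, ML-DSA, SLH-DSA from FIPS 203–205), with AES-256 retained since symmetric ciphers survive with large enough keys.

```python
# Asset names and algorithm labels are invented for illustration.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DH-2048"}
QUANTUM_RESISTANT = {
    "ML-KEM-768",    # FIPS 203 key encapsulation
    "ML-DSA-65",     # FIPS 204 signatures
    "SLH-DSA-128s",  # FIPS 205 hash-based signatures
    "AES-256",       # symmetric crypto survives with sufficiently large keys
}

def triage(inventory):
    """Split an asset -> algorithm map into migrate-now, acceptable, and
    needs-review buckets. Anything Shor-vulnerable is exposed to
    harvest-now-decrypt-later today, not at some future date."""
    migrate, keep, unknown = {}, {}, {}
    for asset, algorithm in inventory.items():
        if algorithm in QUANTUM_VULNERABLE:
            migrate[asset] = algorithm
        elif algorithm in QUANTUM_RESISTANT:
            keep[asset] = algorithm
        else:
            unknown[asset] = algorithm
    return migrate, keep, unknown
```

The point of the three-way split is the "unknown" bucket: most real inventories surface algorithms nobody can immediately classify, and those need human review before any migration plan is credible.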


Producing a Better Software Architecture with Residuality Theory

Residuality theory is a very simple process. Sometimes, people are put off because the theoretical work necessary to prove that residuality works is very heavy, but applying it is easy, O’Reilly explained: We start out with a suggestion, a naive architecture that solves the functional problem. From there we stress the architecture with potential changes in the environment. These stressors allow us to uncover the attractors, often through conversations with domain experts. For each attractor, we identify the residue, what’s left of our architecture in this attractor, and then we change the naive architecture to make it survive better. We do this many times and, at the end, integrate all of these augmented residues into a coherent architecture. We can then test this to show that it survives unknown forms of stress better than our naive architecture. In complex business environments with uncertainty, residuality makes it possible to create architectures quickly instead of chasing down stakeholders demanding specific requirements or answers to questions that are unknown by the business itself, O’Reilly said. It pulls technical architects out of details and teaches them to productively engage with a business environment without the lines and boxes of traditional enterprise architecture, he concluded. ... Senior architects report that it gives a theoretical justification for practices that many had already figured out and a shared vocabulary for teams to talk about architecture. 


Will AI-SPM Become the Standard Security Layer for Safe AI Adoption?

By continuously monitoring AI assets, AI-SPM helps ensure that only trusted data sources are used during model development. Runtime security testing and red-team exercises detect vulnerabilities caused by malicious data. The system actively identifies abnormal model behavior, such as biased, toxic, or manipulated output, and brings them up for remediation prior to production release. ... AI-SPM continuously checks system requests and user inputs to find dangerous patterns before they lead to security problems, like attempts to remove or change built-in directives. It also uses protection against prompt injection and jailbreak attacks, which are common ways to access or alter system-level commands. By finding unapproved AI tools and services, it stops the use of insecure or poorly set up LLMs that could reveal system prompts. ... Shadow AI is starting to get more attention, and for good reason. Like shadow IT, employees are using public AI tools without authorization. That might mean uploading sensitive data or sidestepping governance rules, often without realizing the risks. The problem isn’t just the tools themselves, but the lack of visibility around how and where they’re being used. AI-SPM should work to identify all AI tools in play across networks, endpoints, cloud platforms, and dev environments, mapping how data moves between them, which is often the missing piece when trying to understand exposure risks.
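Shadow-AI discovery of the kind described here ultimately reduces to correlating egress traffic with a list of known AI endpoints. A minimal sketch, assuming a toy "user destination bytes" proxy-log format and an invented domain list; a real AI-SPM product would also watch endpoints, cloud APIs, and browser telemetry.

```python
# Domain list and log format here are assumptions for illustration.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.deepseek.com",
}

def find_shadow_ai(log_lines):
    """Return (user, domain) pairs for egress traffic to known public AI
    services, from proxy log lines shaped like "<user> <domain> <bytes>"."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            hits.append((parts[0], parts[1]))
    return hits
```

Even this crude pass answers the visibility question the paragraph raises: who is sending data to which AI tool, before any governance rule can apply.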


How to write nonfunctional requirements for AI agents

Nonfunctional requirements for AI agents can be like those for applications, where user stories are granular and target delivering small, atomic functions. These NFRs can guide developers in answering how to develop the functionality described in user stories and to help quantify what should pass a code review. However, you may need another set of NFRs expressed at a feature or release level. ... “Agile teams often struggle with how to evaluate NFRs like latency, fairness, or explainability, which may seem nonfunctional, but with a little specification work, they can often be made concrete and part of a user story with clear pass/fail tests,” says Grant Passmore, co-founder of Imandra. “We use formal verification to turn NFRs into mathematical functional requirements we can prove or disprove.” ... AI agent NFRs that connect dev with ops have all the complexities of applications, infrastructure, automations, and AI models bundled together. Deploying the AI agent is just the beginning of its lifecycle, and NFRs for maintainability and observability help create the feedback loops required to diagnose issues and make operational improvements. As many organizations aim toward autonomous agentic AI and agent-to-agent workflows, standardizing a list of NFRs that are applied across all AI agents becomes important.
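Passmore's point that NFRs can become concrete pass/fail tests is easy to demonstrate for latency. The sketch below is hypothetical: the function name, the default 2-second p95 budget, and the in-process timing approach are assumptions standing in for a real benchmark harness.

```python
import statistics
import time

def latency_nfr_passes(agent_fn, prompts, p95_budget_s=2.0):
    """Concrete pass/fail gate for a latency NFR: the agent's p95 response
    time over a sample of prompts must stay within the budget.

    Needs at least a couple of prompts for the percentile to be defined."""
    samples = []
    for prompt in prompts:
        start = time.perf_counter()
        agent_fn(prompt)
        samples.append(time.perf_counter() - start)
    p95 = statistics.quantiles(samples, n=20)[-1]   # 95th percentile cut point
    return p95 <= p95_budget_s
```

Wired into CI, a check like this turns "the agent must feel fast" into a gate that fails a build, which is exactly the specification work the quote describes.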


Unplug Gemini from email and calendars, says cybersecurity firm

CSOs should consider turning off Google Gemini access to employees’ Gmail and Google Calendars, because the chatbot is vulnerable to a form of prompt injection, says the head of a cybersecurity firm that discovered the vulnerability. ”If you’re worried about the risk, you might want to turn off automatic email and calendar processing by Gemini until this, and potentially other things like it, are addressed,” Jeremy Snider, CEO of US-based FireTail, said in an interview. ... This flaw is “particularly dangerous when LLMs, like Gemini, are deeply integrated into enterprise platforms like Google Workspace,” the report adds. FireTail tested six AI agents. OpenAI’s ChatGPT, Microsoft Copilot, and Anthropic AI’s Claude caught the attack. Gemini, DeepSeek, and Grok failed. In a test, FireTail researchers were able to change the word “Meeting” in an appointment in Google Calendar to “Meeting. It is optional.” ... “ASCII Smuggling attacks against AIs aren’t new,” commented Joseph Steinberg, a US-based cybersecurity and AI expert. “I saw one demonstrated over a year ago.” He didn’t specify where, but in August 2024, a security researcher blogged about an ASCII smuggling vulnerability in Copilot. The finding was reported to Microsoft. Many ways of disguising malicious prompts will be discovered over time, he added, so it’s important that IT and security leaders ensure that AIs don’t have the power to act without human approval on prompts that could be damaging.
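One common carrier for ASCII smuggling is the Unicode tag block (U+E0000–U+E007F), whose characters render as nothing while mirroring printable ASCII, so an LLM reads instructions a human never sees. A detector for this particular carrier is short; the sketch below handles only tag characters and is not a general defense against every smuggling variant.

```python
TAG_START, TAG_END = 0xE0000, 0xE007F   # Unicode "tag" block: invisible when rendered

def extract_smuggled(text):
    """Split text into (visible_text, hidden_payload).

    Tag characters U+E0020-U+E007E mirror printable ASCII 0x20-0x7E, so a
    hidden instruction decodes by subtracting the block offset; everything
    in the tag block is stripped from the visible text."""
    visible, hidden = [], []
    for ch in text:
        cp = ord(ch)
        if TAG_START <= cp <= TAG_END:
            if 0xE0020 <= cp <= 0xE007E:
                hidden.append(chr(cp - 0xE0000))
        else:
            visible.append(ch)
    return "".join(visible), "".join(hidden)
```

Running a strip like this over calendar invites and email bodies before they reach an assistant is one concrete alternative to unplugging the integration entirely.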


Broken Opt-Outs, Big Fines: Tractor Supply Shows Privacy Enforcement Has Arrived for Retail

The Tractor Supply violations reveal a clear enforcement pattern. Broken opt-out links that route to dead webforms. Global Privacy Control signals ignored entirely. Privacy notices that skip job applicant data disclosures. Vendor agreements without data restriction clauses. These aren’t random oversights. They’re the exact gaps that surfaced across recent CCPA enforcement by the Attorney General and CPPA orders. Regulators are building a playbook: test the opt-out mechanisms, check for GPC compliance, review all privacy notices including HR portals, and audit third-party contracts. If any piece fails, expect enforcement. Regulators no longer accept opt-outs in theory or privacy policies in fine print. ... The message is clear: prove you have control. Not just over the data you collect, but over the algorithms that process it. Retailers who can’t show governance across both will face scrutiny on multiple fronts. The same broken opt-out that triggers a privacy fine could signal to regulators that your AI systems lack oversight too. This isn’t about adding more compliance checkboxes. It’s about recognizing that data governance and AI governance are becoming inseparable. The retailers who understand this convergence will build unified systems that handle both. The ones who don’t will scramble to retrofit governance after the fact, just like they’re doing with privacy today.


Why Enterprises Continue to Stick With Traditional AI

AI success also depends on digital maturity. Many organizations are still laying data foundations. "Let's say you want to run analytics on how many tickets were raised, do a dashboard on how many tickets one can expect … all of that was over a call. Nothing was digitized. There is no trace of it. That is the reason why chatbots are getting created because they are now recording and getting traced," Iyer said. ... Strict compliance and privacy requirements push enterprises toward controlled AI development. "… Even in such cases, we ensure the data in the model that we build, it stays exclusively. At any point of time, your data or your model is not going to be used for the betterment of someone else," Iyer said. This approach reflects broader enterprise concerns about AI governance. According to KPMG research, frameworks such as local interpretable model-agnostic explanations and Shapley Additive exPlanations help clarify AI decisions, support compliance and build stakeholder confidence. ... Iyer said enterprise needs are often highly contextual, making massive models unnecessary. "Do you need a 600-700 billion [parameter] model sitting in your enterprise running inferences when the questions are going to be very contextual?" she said. This practical wisdom is supported by recent industry analysis. Traditional ML models often deliver comparable classification accuracy at a fraction of the cost of deep learning alternatives.


Lead with a human edge: Why empathy is the new strategy

Traditional management was built on control: plans, processes, and hierarchies designed to tame complexity. But as Pushkar noted, ‘organisations are living organisms. They evolve, sense, and respond. Trying to manage them like machines is an illusion. The leaders of tomorrow will not be engineers of systems — they will be gardeners of cultures.’ “Planting a tree is very easy,” Bidwai said. “The real game is how you nurture, how you create an environment, how you enable the culture.” Nurturing, not directing, is the leadership mindset for an era of interdependence. ... Perhaps the most striking moment of Pushkar’s talk was not analytical but symbolic. He invited participants to discard their corporate titles just for a moment and invent new ones that reflected their purpose, not their position. “Sometimes titles define how we operate. Can we look beyond titles?” His own? In People Matters, Pushkar stated that he visualises his creative title as Plumber. “Wherever anything needs fixing, I will go and fix things.” The metaphor landed. Leadership, stripped of status, is about service. To lead with a human edge is to roll up your sleeves, listen, and fix what’s broken, in systems, in relationships, in ourselves. ... What Pushkar calls ‘the human edge’ is not a nostalgic pushback against technology. It is a pragmatic blueprint for sustainable growth. The leaders who will define the next decade will be those who use AI to augment human potential, not replace it, and who recognise that data drives decisions, but empathy drives destiny.


Building a modern fraud prevention stack: why centralised data, not point solutions, is the answer

The fraud prevention landscape is riddled with fragmented tools, reactive approaches and blind spots. Despite the best of intentions, many organisations rely on outdated, point-in-time methods that are ill-suited for today’s dynamic fraud landscape. And fraud no longer plays by the old rules. It unfolds across the entire customer journey, mutating with every new channel, payment method or customer behaviour pattern. A fraudster may test stolen credentials one day, then come back weeks later to exploit a weak link in the onboarding or refund process. These disjointed systems miss multi-step attacks and patterns that unfold over time. ... while many organisations have historically relied on a patchwork of tools to cover each threat vector, it’s becoming clear that more tools aren’t the answer. Better coordination is. A modern stack doesn’t need to come from a single vendor, but it does need to operate like a single, unified system. That means integrated data, shared intelligence and orchestration that supports real-time response, not after-the-fact analysis. While investment is rising, with 85% of organisations having increased their fraud prevention budgets, it’s crucial to highlight that spending must be strategic. So, what does a modern fraud prevention stack actually look like? And how can organisations build one that’s unified, flexible and future-proof?


CISOs, Start Securing Software's Agentic Future Now

Industry-wide challenges create obstacles to AI governance, leaving leaders uncertain about where to focus their strategic efforts most effectively. The non-deterministic nature of agents causes them to behave in unexpected ways, which has been proven to disrupt existing security boundaries. Adding to this security complexity, universal protocols, such as Model Context Protocol and Agent2Agent, are emerging to streamline data access and improve agent interoperability, but their ecosystem-building capabilities introduce additional security considerations. But these challenges cannot stop security leaders from prioritizing AI governance. ... A culture of security now requires AI literacy. 43% of survey respondents acknowledged a widening AI skills gap, which is likely to grow unless technical leaders prioritize upskilling teams to understand model behavior, prompt engineering, and how to evaluate model inputs and outputs critically. Understanding where models are performant versus where their use is suboptimal helps teams avoid unnecessary security risk and technical debt. ... Teams should also recognize that no model can replace human ingenuity. When models fail in domains where security engineers or developers lack expertise, they will not be able to identify the security gaps the model has left behind. CISOs should consider dedicating a portion of learning and development budgets to continuous technical education. 

Daily Tech Digest - October 08, 2025


Quote for the day:

"Life is what happens to you while you’re busy making other plans." -- John Lennon



Network digital twin technology faces headwinds

Just like Google Maps is able to overlay information, such as driving directions, traffic alerts or locations of gas stations or restaurants, digital twin technology enables network teams to overlay information, such as a software upgrade, a change to firewalls rules, new versions of network operating systems, vendor or tool consolidation, or network changes triggered by mergers and acquisitions. Network teams can then run the model, evaluate different approaches, make adjustments, and conduct validation and assurance to make sure any rollout accomplishes its goals and doesn’t cause any problems, explains Maccioni ... “Configuration errors are a major cause of network incidents resulting in downtime,” says Zimmerman. “Enterprise networks, as part of a modern change management process, should use digital twin tools to model and test network functionality business rules and policies. This approach will ensure that network capabilities won’t fall short in the age of vendor-driven agile development and updates to operating systems, firmware or functionality.” ... Another valuable use case is testing failover scenarios, says Wheeler. Network engineers can design a topology that has alternative traffic paths in case a network component fails, but there’s really no way to stress test the architecture under real world conditions. He says that in one digital twin customer engagement “they found failure scenarios that they never knew existed.”


Autonomous AI hacking and the future of cybersecurity

The cyberattack/cyberdefense balance has long skewed towards the attackers; these developments threaten to tip the scales completely. We’re potentially looking at a singularity event for cyber attackers. Key parts of the attack chain are becoming automated and integrated: persistence, obfuscation, command-and-control, and endpoint evasion. Vulnerability research could potentially be carried out during operations instead of months in advance. The most skilled will likely retain an edge for now. But AI agents don’t have to be better at a human task in order to be useful. They just have to excel in one of four dimensions: speed, scale, scope, or sophistication. But there is every indication that they will eventually excel at all four. By reducing the skill, cost, and time required to find and exploit flaws, AI can turn rare expertise into commodity capabilities and gives average criminals an outsized advantage. ... If enterprises adopt AI-powered security the way they adopted continuous integration/continuous delivery (CI/CD), several paths open up. AI vulnerability discovery could become a built-in stage in delivery pipelines. We can envision a world where AI vulnerability discovery becomes an integral part of the software development process, where vulnerabilities are automatically patched even before reaching production — a shift we might call continuous discovery/continuous repair (CD/CR).


AI inference: reshaping the enterprise IT landscape across industries

AI inference is a complex operation that transforms intricate models into actionable agents. This process is essential for making real-time decisions, which can significantly improve user experiences. ... As AI systems handle more sensitive information, data security and private AI become a key part of effective inference processes. In cloud and Edge computing environments, where data often moves between multiple networks and devices, ensuring the confidentiality of user information is paramount. Private AI limits queries and requests to a company's internal database, SharePoint, API, or other private sources. It prevents unauthorized access and ensures that sensitive information remains confidential even when processed in the cloud or at the Edge. ... For AI to be truly transformative, low latency is a necessity, ensuring that real-time responses are both swift and seamless. In the realm of AI chatbots, for instance, the difference between a seamless conversation and a frustrating user experience often comes down to the speed of the AI’s response. Users expect immediate and accurate replies, and any delay can lead to a loss of engagement and trust. By minimising latency, AI chatbots can provide a more natural and fluid interaction, enhancing user satisfaction, and driving better outcomes. ... By reducing the distance data must travel, Edge computing significantly reduces latency, enabling faster and more reliable AI inference.


Smarter Systems, Safer Data: How to Outsmart Threat Actors

One of the clearest signs that a cybersecurity strategy is outdated is a lack of control and visibility over who can access what data, and on which systems. Many organizations still rely on fragmented identity management systems or grant broad access to database administrators. Others have yet to implement basic protections such as multi-factor authentication. ... Security concerns are commonly quoted as a top barrier to innovation. This is why many organizations struggle to adopt artificial intelligence, migrate to the cloud, share data externally or even internally. The only way to unblock this impasse is to start treating security as an enabler. Think about it this way: when done right, security is that key element that allows data to be moved, analyzed and shared. To exemplify this approach, if data is de-identified to maintain data privacy through the means of encryption or tokenization, in a situation of a breach, it will remain useless to attackers. ... What’s been key for the organizations that succeed in managing data risk while simultaneously unlocking value is a mindset shift. They stop seeing security as a roadblock and start seeing it as a foundation for growth. As an example, a large financial institution client has built an AI-powered solution for anti-money laundering. By protecting incoming data before it enters their system, they ensure that no sensitive data is fed to their algorithms, and thus the risk of a privacy breach, even incidental, is essentially null.
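De-identification of the kind described, where protected data stays useful for analytics but worthless to attackers, can be illustrated with deterministic tokenization. A minimal sketch, assuming an HMAC-SHA256 token scheme and an inline demo key; production systems use vault-managed keys and often format-preserving encryption so tokens keep the original field format.

```python
import hashlib
import hmac

def tokenize(value, key):
    """Replace a direct identifier with a deterministic HMAC-SHA256 token.

    Deterministic means the same input always yields the same token, so
    joins and aggregations across datasets still work, while the original
    value is not recoverable without the key.
    """
    digest = hmac.new(key, value.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:16]

# De-identify a record before it enters an analytics or AML pipeline.
key = b"demo-key-do-not-use"          # in production: a vault-managed key
record = {"name": "Jane Doe", "account": "GB29NWBK60161331926819"}
safe = {field: tokenize(value, key) for field, value in record.items()}
```

This is exactly the breach property the paragraph describes: if `safe` leaks, the tokens are useless to an attacker who does not hold the key.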


AI could prove CIOs’ worst tech debt yet

AI tools can be used to clean up old code and trim down bloated software, thus reducing one major form of tech debt. In September, for example, Microsoft announced a new suite of autonomous AI agents designed to automatically modernize legacy Java and .NET applications. At the same time, IT leaders see the potential for AI to add to their tech debt, with too many AI projects relying on models or agents that are expensive to deploy and maintain, and with AI coding assistants generating more lines of software than may be necessary. ... Endless AI pilot projects create their own form of tech debt as well, says Ryan Achterberg, CTO at IT consulting firm Resultant. This “pilot paralysis,” in which organizations launch dozens of proofs of concept that never scale, can drain IT resources, he says. “Every experiment carries an ongoing cost,” Achterberg says. “Even if a model is never scaled, it leaves behind artifacts that require upkeep and security oversight.” Part of the problem is that AI data foundations are still shaky, even as AI ambition remains high, he adds. ... In addition to tech debt from too many AI pilot projects, coding assistants can create their own problems without proper oversight, adds Jaideep Vijay Dhok, COO for technology at digital engineering provider Persistent Systems. In some cases, AI coding assistants will generate more lines of software than a developer asked for, he says.


Hackers Exploit RMM Tools to Deploy Malware

RMM platforms typically operate with elevated permissions across endpoints. Once compromised, they offer adversaries a ready-made channel for privilege escalation, lateral movement and payload delivery, including ransomware ... Threat actors frequently repurpose legitimate RMM tools or hijack valid credentials, allowing malicious activity to blend seamlessly with routine administrative tasks. This tactic complicates detection and response, especially in environments lacking behavioral baselining. ... "This is a typical living-off-the-land attack used by many adversaries considering the success and ease of execution. Typically, such software are whitelisted in most of the controls to avoid blocking and noise, due to which its activities are not monitored much," Varkey said. "Like in most adversarial acts, getting access to the software is their initial step, so if access is limited to specific people with multifactor authorization and audited periodically, unauthorized access can be limited..." ... "Treat RMM seriously. Assume compromise is possible and build defenses around prevention, detection and rapid response. Start with a full audit of your RMM deployment - map every agent, session and integration to identify shadow access points: asset management is key and a good RMM solution should be able to assist here. Layered controls are key - think defense-in-depth tailored to RMM's remote nature," Beuchelt said.


From Data to Doing: Agentic AI Will Revolutionize the Enterprise

Where do organizations see the greatest opportunities for agentic AI? The answer is: everywhere. Survey results show that business leaders view agentic AI as equally relevant to productivity gains, better decision-making, and enhanced customer experiences. When asked to rank potential benefits, improving customer experience and personalization emerge as the top priority, followed closely by sharper decision-making and increased efficiency. What's telling is what landed at the bottom of the list. Few organizations currently view market and business expansion as critical. This suggests that, at least in the near term, agentic AI will be applied less as a driver of bold new growth and more as a catalyst for improving and extending existing operations. ... Agentic AI is not simply the next technology wave -- it is the next great inflection point for enterprise software. Just as client–server, the Internet, and the cloud radically redefined industry leaders, agentic AI will determine which vendors and enterprises can adapt quickly enough to thrive. The lesson is clear: organizations that treat data as a strategic asset, modernize their platforms, and embed intelligence into their workflows will not only move faster but also serve customers better. The rest risk being left behind -- just as the mainframe giants once were.


Is That Your Boss or a Deepfake on the Other Side of That Video Call?

Sophisticated deepfake technology had perfectly replicated not just the appearance but the mannerisms and decision-making patterns of the company’s executives. The real managers were elsewhere, unaware their digital twins were orchestrating one of the largest deepfake heists in corporate history. This reflects a terrifying trend of AI fraud that is shaking the financial services industry. Deepfake-enabled attacks have grown by an alarming 1,740% in just one year, representing one of the fastest-growing AI-powered threats. More than half of businesses in the U.S. and U.K. have been targeted by deepfake-powered financial scams, with 43% falling victim. ... The deepfake threat extends far beyond immediate financial losses. Each successful attack erodes the foundation of digital communication itself. When employees can no longer trust that their CEO is real during a video call, the entire remote work infrastructure becomes suspect, particularly for financial institutions, which deal in the currency of trust. ... Financial services companies must implement comprehensive AI governance frameworks, continuous monitoring systems, and robust incident response plans to address these evolving threats while maintaining operational efficiency and customer trust. These systems and protocols must extend not only within their front office but to their back office, including vendor management and third-party suppliers who manage their data.


Rethinking AI security architectures beyond Earth

The researchers outline three architectures: centralized, distributed, and federated. In a centralized model, the heavy lifting happens on Earth. Satellites send telemetry data to a large AI system, which analyzes it and sends back security updates. Training is fast because powerful ground-based resources are available, but the response to threats is slower due to long transmission times. In a distributed model, satellites still rely on the ground for training but perform inference locally. This setup reduces delay when responding to a threat, though smaller onboard systems can limit model accuracy. Federated learning goes a step further. Satellites train and infer on their own data without sending it to Earth. They share only model updates with other satellites and ground stations. This keeps latency low and improves privacy, but synchronizing models across a large constellation can be difficult. ... Byrne pointed out that while space-based architectures vary in resilience, recovery often depends on shared fundamentals. “Most systems across all segments will need to be restored from secure backups,” he said. “One architectural enhancement to help reduce recovery time is the implementation of distributed Inter-Satellite Links. These links enable faster propagation of recovery updates between satellites, minimizing latency and accelerating system-wide restoration.”
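The federated option described above can be sketched as a toy federated-averaging loop: each satellite takes a training step on its own local data and shares only the resulting model weights, never the raw telemetry; a ground station (or inter-satellite link) averages them into the next global model. All names and the 1-D model are illustrative assumptions, not the researchers' actual setup:

```python
# Toy federated averaging: satellites train locally on y = w*x samples and
# share only weight updates; the coordinator averages them each round.

def local_update(weights, local_data, lr=0.1):
    """One onboard gradient step of a 1-D least-squares model y = w*x."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, satellites):
    """Coordinator averages the satellites' locally computed updates."""
    updates = [local_update(global_w, data) for data in satellites]
    return sum(updates) / len(updates)

# Three satellites, each holding noisy local observations of y = 2x.
fleet = [
    [(1, 2.1), (2, 4.0)],
    [(1, 1.9), (3, 6.2)],
    [(2, 3.9), (4, 8.1)],
]
w = 0.0
for _ in range(50):
    w = federated_round(w, fleet)
# w converges near the true slope of 2 without any satellite ever
# transmitting its raw data.
```

The synchronization difficulty the researchers mention shows up here as the implicit barrier in `federated_round`: every round waits for all updates, which is hard to guarantee across a large constellation with intermittent links.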


Who Governs Your NHIs? The Challenge of Defining Ownership in Modern Enterprise IT

What we should actually mean by ownership is the person who can answer the basic questions about why this NHI (non-human identity) exists, what access it has, how often credentials should be rotated, whether it's being used in a way that could introduce new risks, and whether the credentials have been properly stored or have been leaked. ... Instead of focusing solely on assigning human ownership, we should be working to ensure that the questions we would ask the owner are easily answerable by our tools. This approach makes answers persistent and usable by multiple teams over time and provides consistency across the organization. It does not rely on specific individuals being eternally available or up to speed on how the NHI they created is being used. Ultimately, it scales better than human-dependent processes. Just as governing an application and all of the NHIs involved is almost never going to be the responsibility of one person, the ideal scenario where a single person can outright own an NHI and be responsible for every aspect is going to be rare. ... The conversation about ownership often gets stuck on blame. Let's reframe it around assurance. Let's ensure that if a secret exists, no matter where or how it is stored, governance questions can be answered quickly and consistently.
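Making the governance questions "easily answerable by our tools" amounts to recording the answers as machine-readable metadata on each NHI, so any team's tooling can query them without tracking down a person. A minimal sketch; all field names and the `vault://` path are hypothetical:

```python
# Sketch: encode the governance answers for a non-human identity (NHI)
# as structured metadata so tools, not individuals, can answer them.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class NHIRecord:
    name: str
    purpose: str        # why this NHI exists
    scopes: list        # what access it has
    rotation_days: int  # how often credentials should rotate
    last_rotated: date
    secret_store: str   # where the credential is properly stored

    def rotation_overdue(self, today: date) -> bool:
        """Answer 'is this credential overdue for rotation?' automatically."""
        return today - self.last_rotated > timedelta(days=self.rotation_days)

ci_bot = NHIRecord(
    name="ci-deploy-bot",
    purpose="Push build artifacts to the staging registry",
    scopes=["registry:write"],
    rotation_days=90,
    last_rotated=date(2025, 6, 1),
    secret_store="vault://ci/deploy-bot",  # hypothetical path
)
print(ci_bot.rotation_overdue(date(2025, 10, 12)))  # True: 133 days old
```

The point is not this particular schema but that each question from the paragraph above maps to a field or method a scanner can evaluate, which is what makes the answers persistent across teams and time.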