Daily Tech Digest - October 11, 2025


Quote for the day:

“The only real mistake is the one from which we learn nothing.” -- Henry Ford



CIOs turn to AI to assist with IT purchasing decisions

“AI promptly evaluates product documentation, reviews, and market reports, cutting the time it takes to evaluate vendors from weeks to days and unearthing compatibility problems that go unnoticed by human reviewers,” he says. Like 8×8, Thrive uses a “trust but verify” approach that treats AI output as inputs for its decision-making processes, not final answers, Whittaker says. “AI is great for comparing technical specs, but it can’t help you much with assessing non-technical aspects such as quality of support, cultural fit, etc.” Thrive plans to enhance its future AI models to predict defects in products, foresee deployment challenges, and monitor vendor performance, Whittaker says. ... “When you are negotiating a contract, let’s say you received an order form, or you received a large legal contract, and it’s all unstructured data,” he says. “AI is really good at guiding you on what kind of commercial terms you should be careful with. It can look at your existing contracts and compare them with this new one and say, ‘This one has some anomalies.’” The company’s use of AI is giving the IT team time to work on other priorities instead of spending extra time researching potential products, Johar says. “If you look at how an IT organization works, we are buying software all the time, and sometimes it leaves you very little time to focus on real evaluation and piloting the software, because you just end up spending so much time on all these RFP processes, legal processes, and research,” he says.


Deepfake Awareness High at Orgs, But Cyber Defenses Badly Lag

"The deepfake threat landscape looks, above all else, dynamic," he says. "While email threats and static imagery are still the most commonly encountered vectors, there is a wide diversity of other forms of deepfakes that are quickly growing in prevalence. In fact, we're seeing more and more of every kind of deepfake in the wild." ... Attackers are using a variety of AI techniques to enhance their attack pipeline. Human digital twins can be trained on public information about a person to help create more realistic phishing attacks, which, combined with voice samples, could create convincing audio deepfakes. Concerns over misuse of AI caused Microsoft to mostly scuttle a voice cloning technology feature that it could have integrated into various apps, such as Teams, and allow a user — or an attacker — to hijack someone's voice for all kinds of fraud attempts. ... "The challenge now is that AI can be used to reduce the skill barrier to entry and speed up production to a higher quality," she says. "Since the sophistication of deepfakes are getting harder to detect, it is imperative to turn to AI-augmented tools for detection, as people alone cannot be the last line of defense." Companies should continue to train their employees and create good policies that reduce the impact that one person — even a top executive — can have for the company, says Ironscales' Benishti. "Develop policies that make it impossible for a single employee's bad decision to result in compromise," he says.


Powering Data in the Age of AI: Part 1 – Energy as the Ultimate Bottleneck

“Demand for electricity around the world from data centres is on course to double over the next five years, as information technology becomes more pervasive in our lives,” Birol said in a statement released with the IEA’s 2024 Energy and AI report. “The impact will be especially strong in some countries — in the United States, data centres are projected to account for nearly half of the growth in electricity demand; in Japan, over half; and in Malaysia, one-fifth.” ... Unlike older mainframe workloads that spiked and dropped with changing demand, modern AI systems operate close to full capacity for days or even weeks at a time. ... It’s not a benchmark like FLOPS, but it now influences nearly every design decision. Chipmakers promote performance per watt as their most important competitive edge, because speed doesn’t matter if the grid can’t handle it. ... That dynamic is also reshaping the economics of AI. Cloud providers are starting to charge for workloads based not just on runtime but on the power they draw, forcing developers to optimize for energy throughput rather than latency. Data center architects now design around megawatt budgets instead of square footage, while governments from the U.S. to Japan are issuing new rules for energy-efficient AI systems.


How Artificial Intelligence is Shaping the Future of Secure, Compliant, and Efficient Data Practices

Understanding the journey of data—where it originates, how it transforms, and who accesses it—is critical for both governance and compliance. Generative AI excels at mapping data lineage by automatically tracing data flows across systems, applications, and processes. Consider a scenario where an organisation needs to demonstrate how customer information moves from collection to storage and reporting. AI-powered lineage tools can generate visual maps showing every touchpoint, transformation, and user interaction. This automation not only accelerates audits and compliance reporting but also provides actionable insights to improve data handling practices. ... Organisations often grapple with choosing between centralised and autonomous (decentralised) data management models. Centralised approaches offer uniformity and control, while autonomous models empower individual teams with flexibility. Generative AI supports both paradigms. In centralised settings, AI enforces global policies, ensures consistency, and manages data assets from a single point of control. In autonomous environments, AI agents can be embedded within business units, tailoring governance and security measures to local needs while maintaining alignment with overarching standards. This hybrid capability ensures organisations remain agile without compromising data integrity or compliance.


Cloud Observability Challenges At Scale (And How To Solve Them)

Concentration risk from a cloud customer can be a challenge for hyperscalers. This is especially true when key customers concentrate their load in a single region; they can saturate the shared physical resources faster than the hyperscaler’s auto-scaling can respond. ... At hyperscale, observability requires keeping vast telemetry data like logs, metrics and traces usable and cost-efficient. Storing it under one roof in an accessible, scalable and performant fashion lets organizations run AI and analytics directly from their telemetry data, spotting anomalies, problem areas and threats while future-proofing their infrastructure for data-intensive workloads. ... The complexity of managing microservices doesn’t scale linearly with the number of microservices—it scales exponentially. Mitigation requires a multipronged strategy: Limit the number of microservices; use traditional approaches where a sufficient observability strategy should be robust, yet lightweight; democratize observability-based ops, tools and skills in the organization; and exploit AI for heavy lifting and ops automation. ... One challenge is ephemeral dependency drift. At hyperscale, microservices vanish fast, breaking dependency maps and hiding failure roots. It’s like chasing ghosts in a storm. Fix it with real-time dependency snapshots and AI to predict drift patterns. Teams see the true service web, catch issues early and keep apps humming, no matter how wild the cloud gets.


AI meets EQ: Reimagining HR for the industry 5.0 workplace

The stakes couldn't be higher. The World Economic Forum surveyed over 1,000 global employers and found that nearly half of them said they’ll reduce their workforce in the next five years and replace those jobs with AI. However, paradoxically, the same technologies could create 2.73 million jobs by 2028 in India alone. It depends entirely on how well organisations manage the transition. It's not just about having the right technology; it's about having the right human strategy to deploy it. Consider the emergence of "cobots", which are collaborative robots designed to work alongside humans rather than replace them. ... Perhaps the most insidious challenge is AI bias, which can perpetuate discrimination based on race, gender, age, etc. and erode the trust that is essential for successful human-machine collaboration. When AI systems reflect historical prejudices or systemic inequalities, they undermine the very foundations of inclusive workplaces that Industry 5.0 promises to create. HR leaders must become guardians of algorithmic fairness, ensuring that AI systems used in recruitment, performance evaluation, and career development are transparent, equitable, and regularly audited. This requires building diverse AI development teams, implementing robust data governance frameworks, and maintaining human oversight in critical decision-making processes.


Exploring the Unintended Consequences of Automation in Software

The substitution myth refers to the flawed assumption that automation can simply replace human functions in a system without fundamentally altering how the system or human work operates. This misconception is built on assumptions like HABA-MABA ("Humans Are Better At / Machines Are Better At"), which assume that human and machine strengths are fixed, and system design is merely a matter of allocating tasks accordingly ... When an automated system fails, the amount of knowledge required to make things right again is likely greater than that required during normal operations. This creates immediate, new, and numerous items of work. Because the designers of automation can’t fully automate the human "parts", the human is left to cope with what’s left after the automated parts don’t behave as expected, leaving more complexity in their wake. ... In highly interdependent tasks like software operations, we can only plan our actions effectively when we can accurately anticipate the actions of others. Skilled teams achieve this predictability through shared knowledge and their own coordination mechanisms that are developed over time through extensive collaboration. Despite the common refrain of "human error" in incidents, in general, humans are quite predictable in their work, and we have established means for checking if something seems unpredictable


Observability is the weapon against complex hybrid IT chaos

To understand why observability is taking off, it is important to see the difference with traditional monitoring. Whereas traditional monitoring has been limited for years to servers, networks, and memory statistics, observability goes a step further. Monitoring mainly records what is happening, while observability shows why it is happening. It establishes connections between systems, shows how components interact with each other, and provides insight into the impact on the end user. ... The complexity of modern IT environments requires knowledge and capacity that is not available everywhere. Jean-Bastien outlines the dilemma. “If a customer had to employ someone full-time to manage everything, a capacity and knowledge problem would quickly arise. Many organizations therefore call on us to ensure continuity.” With a team of dozens of engineers, Cegeka can easily scale up, even during peak loads or holidays. In this way, they take care of the operational side of things, while customers retain the insight and reporting they need to bring their IT and business together. ... Nevertheless, there are limits to what observability can achieve. Legacy systems, such as monolithic applications in C++ or COBOL on mainframes, are difficult to instrument with modern agents. This poses a challenge in some sectors, particularly for banks that still rely heavily on older core systems.


AI Becomes a Force Multiplier for Short-Staffed Security Teams

“The skills shortage creates a paradox that limits AI’s potential in cybersecurity,” asserted Tim Freestone, chief strategy officer for Kiteworks, a provider of a secure platform for exchanging private data, in San Mateo, Calif. “Organizations lack personnel with the expertise needed to properly deploy, manage, and optimize AI-powered security tools, meaning the very solution designed to alleviate staffing pressures remains underutilized,” he told TechNewsWorld. “This gap is particularly acute because effective AI implementation requires dual competencies — both operating AI systems and defending against AI-powered attacks — skills that are in even shorter supply than traditional cybersecurity expertise. “Without trained professionals who can configure AI tools appropriately, interpret their outputs accurately, and integrate them effectively into security operations, organizations risk deploying AI systems that fail to reach their defensive potential or, worse, introduce new vulnerabilities through improper management,” he said. ... “Certifications can provide reassurance that candidates meet a certain standard and help organizations demonstrate credibility to clients and regulators,” she told TechNewsWorld, “but the reliance on credentials also has drawbacks. 


The CIA triad is dead — stop using a Cold War relic to fight 21st century threats

The CIA triad is both too broad and too narrow. It lacks the vocabulary and context to handle today’s realities. In trying to retrofit authenticity, accountability, privacy, and safety into its rigid structure, we leave gaps that attackers exploit. ... Treating ransomware as a simple “availability” failure misses the point. Being “up” or “down” is irrelevant when your systems are locked and business halted. What matters is resilience: the engineered ability to absorb damage, fail gracefully, and restore from immutable backups. Availability is binary; resilience is survival. Without it, you’re unprepared. ... A fraudulent deepfake of your CEO authorizing a wire transfer may have perfect technical integrity — checksums intact, file unaltered. But its authenticity is destroyed. The CIA triad has no language to capture this breakdown, leaving organizations exposed to fraud and reputational chaos. ... A successful model must explicitly encompass the principles that the triad overlooked — such are authenticity, accountability, and resilience. Those principles must be added as foundational pillars. Furthermore, the model should have the capability to help CISOs and their teams navigate the veritable forest of frameworks, harmonize regulatory demands, and eliminate duplicate work, while also giving them a way to speak to their boards in terms of resilience, accountability, and trust, rather than just uptime and firewalls.

Daily Tech Digest - October 10, 2025


Quote for the day:

“Whether you think you can or you think you can’t, you’re right.” -- Henry Ford



Has the value of data increased?

“We’ve seen that AI’s true potential is unlocked by connecting trusted, governed data – structured and unstructured – with real-time analytics and decision intelligence. With the rise of agentic AI, the next wave of value creation will come from intelligent systems that don’t just interpret data, but continuously and autonomously act on it at scale. Put simply, AI isn’t a shortcut to insight – it’s a multiplier of value, if the data is ready. Enterprises that treat data as an afterthought will fall behind, while those that treat it as a strategic asset will lead,” added the Qlik CSO. ... “In this AI economy, compute power may set the pace, but data sets the ceiling. MinIO raises that ceiling, transforming scattered, hard-to-reach datasets into a living, high-performance fabric that fuels every AI prompt and initiative. With MinIO AIStor, organizations gain the ability to store and understand. Data that is secure, fluid, and always ready for action is a competitive weapon,” added Kapoor. ... “Data that is fresh, well described and policy aware beats bigger but blind datasets because it can be safely composed, reused and measured for impact, with the lineage to show teams what to trust and what to fix so they can ship faster,” said Neat. ... While there is no question, really, of whether the value of data has increased and, further, whether the proliferation of AI has been fundamental to that value escalation, the mechanics as variously described here should point us towards the new wave of emerging truths in this space.


Whose Ops is it Anyway? How IDPs, AI and Security are Evolving Developer Culture

For many teams, the problem is not a lack of enthusiasm or ambition but a shortage of resources and skills. They want to automate more, streamline workflows, and adopt new practices, yet often find themselves already operating at full capacity just in keeping existing systems running. In that environment, the slightest of steps toward more advanced automation strategies can feel like a big leap forward. ... On the security side, the logic behind DevSecOps is compelling. More companies are realising that security has to be baked in from day one, not bolted on later. The difficulty lies in making that shift a practical reality, as integrating security checks early in the pipeline often requires new tooling, changes to established workflows, and in some cases, rethinking the roles and responsibilities within the team. ... In many organisations, it is the existing DevOps or platform teams that are best positioned to take on this responsibility, extending their remit into what is often referred to as MLOps. These teams already have experience building and maintaining shared infrastructure, managing pipelines, and ensuring operational stability at scale, so expanding those capabilities to handle data science and machine learning workflows can feel like a natural evolution. ... That said, as adoption grows, we can also expect to see more specialised MLOps roles appearing, particularly in larger enterprises or in organisations where AI is a major strategic focus.


The ultimate business resiliency test: Inside Kantsu’s ransomware response

Kantsu then began collaborating with the police, the cyberattack response teams of the company’s insurers, and security specialists to confirm the scope of cyber insurance coverage and estimate the amount of damage. ... when they began the actual recovery work, they encountered an unexpected pitfall. “We considered how to restore operations as quickly as possible. We did a variety of things, including asking other companies in the same industry to send packages, even ignoring our own profits,” Tatsujo says. ... To prevent reinfection with ransomware, the company prohibited use of old networks and PCs. Tethering was used, with smartphones as Wi-Fi routers. Where possible, this was used to facilitate shipping. New PCs were purchased to create an on-premises environment. ... “In times of emergency like this, the most important thing is cash to recover as quickly as possible, rather than cost reduction. However, insurance companies do not pay claims immediately. ... “In the end, many customers cooperated, which made me really happy. Rakuten Ichiba, in particular, offers a service called ‘Strongest Delivery,’ which allows for next-day delivery and delivery time specification, but they were considerate enough to allow us a grace period in consideration of the delay in delivery,” says President Tatsujo.


Stablecoins: The New Currency of Online Criminals

Practitioners say a cluster of market and technical factors are making stablecoins the payment of choice for cybercriminals and fraudsters. "It's not just the dollar peg that makes stablecoins attractive," said Ari Redbord, vice president and global head of policy and government affairs at TRM Labs. "Liquidity is critical. There are deep pools of stablecoin liquidity on both centralized and decentralized platforms. Settlement speed and irreversibility are also appealing for criminals trying to move large sums quickly," he told Information Security Media Group. The perception of stability - knowing $1 today will likely be $1 tomorrow - often suffices for illicit actors, regardless of an issuer's exact collateral model, he said. This stability and on-chain plumbing create both opportunity and exposure. Redbord said the spike in stablecoin usage is partly because law enforcement agencies around the world have become "exceptionally effective at tracing and seizing bitcoin," and criminals "go where the liquidity and usability are." There is no technical attribute of stablecoins that makes them more appealing to criminals or harder to trace, compared to other cryptocurrencies, Koven said. In practice, public ledgers keep transfers visible; the question then becomes whether investigators have the right tools and the cooperation of the ecosystem's gatekeepers to follow value across chains.


Zero Trust cuts incidents but firms slow to adopt AI security

Zero Trust is increasingly viewed as the standard going forward. As AI-driven threats accelerate, organisations must evaluate security holistically across identity, devices, networks, applications, and data. At DXC, we're helping customers embed Zero Trust into their culture and technology to safeguard operations. Our end-to-end expertise makes it possible to both defend against AI threats and harness secure AI in the same decisive motion. ... New cybersecurity threats are the primary driver for updating Zero Trust frameworks, with 72% of respondents indicating that the evolving threat landscape pushes them to continuously upgrade policies and practices. In addition, more than half of responding organisations recognised improvements in user experience as a secondary benefit of adopting Zero Trust approaches, beyond the gains in security posture. ... Most enterprises already rely on Microsoft Entra ID and Microsoft 365 as the backbone of their IT environments. Building Zero Trust solutions alongside DXC extends that value, enabling tighter integration, simplified operations, and greater visibility and control. By consolidating around the Microsoft stack, organisations can reduce complexity, cut costs, and accelerate their Zero Trust journey. ... Participants in the study agreed that Zero Trust is not a project with a defined end point. Instead, it is an ongoing process that requires continuous monitoring, regular updates, and cultural adaptation.


Overcome Connectivity Challenges for Edge AI

The challenges of AI at the Edge are as large as the advantages, however. One of the biggest challenges and key enablement technologies is connectivity. Edge processing and AI at the Edge require reliability, low latency, and resiliency in the harshest of environments. Without good connections to the network, many of the advantages of Edge AI are diminished, or lost entirely. A truly rugged Edge AI system requires a dual focus on connectivity, according to the experts at ATTEND. It needs both robust external I/O to interface with the outside world, and high-speed, resilient internal interconnects to manage data flow within the computing module. ... The transition to Edge AI is not just a software challenge; it is a hardware and systems engineering challenge. The key to overcoming this dual challenge is to engage with a partner like ATTEND, who will understand that the reliability of an advanced AI model is ultimately dependent on the physical-layer components that capture and transmit its data. By offering a comprehensive portfolio that addresses connectivity from the external sensor to the internal processor module, ATTEND can help you to build end-to-end systems that are both powerful and resilient. To meet with ATTEND and see all that they are doing to advance and enable true intelligence at the Edge, meet with them at embedded world North America in November at the Anaheim Convention Center.


AI Security Goes Mainstream as Vendors Spend Heavily on M&A

One of the most significant operational gaps in AI adoption is the lack of runtime observability, with organizations struggling to know what data a model is ingesting or what it's producing. Observability answers these questions by providing a live view of AI behavior across prompts, responses and system interactions, and it is a precursor to regulating or securing AI systems. ... One of the biggest risks of GenAI in the enterprise is data leakage, with workers inadvertently pasting confidential information into a chatbot, models regurgitating sensitive data it was exposed to during training, or adversaries crafting prompts to extract private information through jailbreaking. Allowing AI access without control is equivalent to opening an unsecured API to your crown jewels. ... Output is just as risky as input with GenAI since an LLM could generate sensitive content, malicious code or incorrect results that are trusted by downstream systems or users. Palo Alto Networks' Arora noted the need for bi-directional inspection to watch not only what goes into large language models, but also what comes out. ... Another key challenge is defining identity in a non-human context, raising questions around how AI agents should be authenticated, what permissions AI agents should have and how to prevent escalation or impersonation. Enterprises must treat bots, copilots, model endpoints and LLM-backed workflows as identity-bearing entities that log in, take action, make decisions and access sensitive data.


Navigating the Techno-Future: Between Promise and Prudence

On one side are the techno-optimists: the believers in inexorable progress, the proponents of markets and innovation as self-correcting forces. They see every challenge as a technical problem and every failure as a design flaw waiting to be solved. On the other side are techno-pessimists: the prophets of collapse who warn that every new tool will inevitably accelerate inequality, erode democracy, or catalyze ecological catastrophe. They see history as a cautionary tale, and the present as a fragile prelude to systemic failure. Both perspectives share a common flaw: they treat the future as preordained. Optimists assume that progress will automatically yield good outcomes; pessimists assume that progress will inevitably lead to harm. Reality, however, is far less deterministic. Technology, in itself, is neutral. It amplifies human choices but does not dictate them. ... Just as a hammer can build a home or inflict injury, a powerful technology like artificial intelligence, gene editing, or blockchain can be used to improve lives or to exacerbate inequalities. The technology does not prescribe its use; humans do. This neutrality is both liberating and daunting. On the one hand, it affirms that progress is not predestined. The future is not a straight line determined by the mere existence of certain tools. 


CISOs prioritise real-time visibility as AI reshapes cloud security

The top priority for CISOs is real-time threat monitoring and comprehensive visibility into all data in motion across their organisations, supporting a defence-in-depth strategy. However, 97 percent of CISOs acknowledged making compromises in areas such as visibility gaps, tool integration and data quality, which they say limit their ability to fully secure and manage hybrid cloud environments. ... The reliance on AI is also causing a revision of how SOCs (security operations centres) function. Almost one in five CISOs reported lacking the appropriate tools to manage the increased network data volumes created by AI, underscoring that legacy log-based tools may not be fit for purpose against AI-powered threats. ... Rising data breaches, with a 17 percent increase year on year, are translating into greater pressure on CISOs, 45 percent of whom said they are now the main person held accountable in the event of a breach. There is also concern about stress and burnout within cybersecurity teams, which is driving a greater embrace of AI-based security tools. ... The adoption of AI is expected to have practical impacts, such as enabling junior analysts to perform at the same level as more experienced team members, reducing training costs, speeding up analysis while investigating threats, and improving overall visibility for the security function.


Serverless Security Risks Are Real, and Hackers Know It

Many believe, “No servers, no security risks.” That’s a myth. Nowadays, attackers take advantage of the specific security weaknesses found in serverless platforms. ... All serverless applications need third-party libraries for operation. Each function that depends on the compromised component becomes vulnerable to attack. An npm package experienced a hijack attack when hackers inserted a secret entry into its system. The incorporation of code by AWS Lambda resulted in the silent extraction of all environment variables. The unauthorized loss of API keys, credentials, and sensitive data, together with all other valuable information. The process finished in milliseconds, which was too brief for any security system to identify. ... As more companies are adopting serverless technologies, security risks become more widespread. So, it’s fundamental to validate that serverless environments are secure. Let’s explore the facts. Research dictates that serverless computing is expected to grow rapidly. According to Gartner’s July 2025 forecast, global IT spending will climb to $5.43 trillion, with enterprises investing billions into AI-driven cloud and data center infrastructure, making serverless platforms an increasingly critical, but often overlooked, security target.

Daily Tech Digest - October 09, 2025


Quote for the day:

"No man is good enough to govern another man without that other's consent." -- Abraham Lincoln



The Quantum Wake-Up Call: Preparing Your Organization for PQC

Quantum computing promises transformative breakthroughs across industries—but it also threatens the cryptographic foundations that secure our digital world. As quantum capabilities evolve, organizations must proactively prepare for the shift to post-quantum cryptography (PQC) to safeguard sensitive data and maintain trust. ... The very mathematical "hardness" that makes RSA and ECC secure against classical computers is precisely what makes them fatally vulnerable to quantum computing. Shor's Algorithm: This quantum algorithm, developed by Peter Shor in 1994, is capable of solving the integer factorization and discrete logarithm problems exponentially faster than any classical machine. Once a sufficiently stable and large-scale quantum computer is built, an encryption that might take a supercomputer millions of years to break could be broken in hours or even minutes. The Decryption Time Bomb: Because current PKC is used to establish long-term trust and to encrypt keys, the entire cryptographic ecosystem is a single point of failure. The threat is compounded by the "Harvest Now, Decrypt Later" strategy, meaning sensitive data is already being harvested and stored by adversaries, awaiting the quantum moment to be unlocked. Quantum computing is no longer theoretical—it’s a looming reality. Algorithms like RSA and ECC, which underpin most public-key cryptography, are vulnerable to quantum attacks via Shor’s algorithm.


Producing a Better Software Architecture with Residuality Theory

Residuality theory is a very simple process. Sometimes, people are put off because the theoretical work necessary to prove that residuality works is very heavy, but applying it is easy, O’Reilly explained: We start out with a suggestion, a naive architecture that solves the functional problem. From there we stress the architecture with potential changes in the environment. These stressors allow us to uncover the attractors, often through conversations with domain experts. For each attractor, we identify the residue, what’s left of our architecture in this attractor, and then we change the naive architecture to make it survive better. We do this many times and, at the end, integrate all of these augmented residues into a coherent architecture. We can then test this to show that it survives unknown forms of stress better than our naive architecture. In complex business environments with uncertainty, residuality makes it possible to create architectures quickly instead of chasing down stakeholders demanding specific requirements or answers to questions that are unknown by the business itself, O’Reilly said. It pulls technical architects out of details and teaches them to productively engage with a business environment without the lines and boxes of traditional enterprise architecture, he concluded. ... Senior architects report that it gives a theoretical justification for practices that many had already figured out and a shared vocabulary for teams to talk about architecture. 


Will AI-SPM Become the Standard Security Layer for Safe AI Adoption?

By continuously monitoring AI assets, AI-SPM helps ensure that only trusted data sources are used during model development. Runtime security testing and red-team exercises detect vulnerabilities caused by malicious data. The system actively identifies abnormal model behavior, such as biased, toxic, or manipulated output, and brings them up for remediation prior to production release. ... AI-SPM continuously checks system requests and user inputs to find dangerous patterns before they lead to security problems, like attempts to remove or change built-in directives. It also uses protection against prompt injection and jailbreak attacks, which are common ways to access or alter system-level commands. By finding unapproved AI tools and services, it stops the use of insecure or poorly set up LLMs that could reveal system prompts. ... Shadow AI is starting to get more attention, and for good reason. Like shadow IT, employees are using public AI tools without authorization. That might mean uploading sensitive data or sidestepping governance rules, often without realizing the risks. The problem isn’t just the tools themselves, but the lack of visibility around how and where they’re being used. AI-SPM should work to identify all AI tools in play across networks, endpoints, cloud platforms, and dev environments, mapping how data moves between them, which is often the missing piece when trying to understand exposure risks.


How to write nonfunctional requirements for AI agents

Nonfunctional requirements for AI agents can be like those for applications, where user stories are granular and target delivering small, atomic functions. These NFRs can guide developers in answering how to develop the functionality described in user stories and to help quantify what should pass a code review. However, you may need another set of NFRs expressed at a feature or release level. ... “Agile teams often struggle with how to evaluate NFRs like latency, fairness, or explainability, which may seem nonfunctional, but with a little specification work, they can often be made concrete and part of a user story with clear pass/fail tests,” says Grant Passmore, co-founder of Imandra. “We use formal verification to turn NFRs into mathematical functional requirements we can prove or disprove.” ... AI agent NFRs that connect dev with ops have all the complexities of applications, infrastructure, automations, and AI models bundled together. Deploying the AI agent is just the beginning of its lifecycle, and NFRs for maintainability and observability help create the feedback loops required to diagnose issues and make operational improvements. As many organizations aim toward autonomous agentic AI and agent-to-agent workflows, standardizing a list of NFRs that are applied across all AI agents becomes important.


Unplug Gemini from email and calendars, says cybersecurity firm

CSOs should consider turning off Google Gemini access to employees’ Gmail and Google Calendars, because the chatbot is vulnerable to a form of prompt injection, says the head of a cybersecurity firm that discovered the vulnerability. ”If you’re worried about the risk, you might want to turn off automatic email and calendar processing by Gemini until this, and potentially other things like it, are addressed,” Jeremy Snider, CEO of US-based FireTail, said in an interview. ... This flaw is “particularly dangerous when LLMs, like Gemini, are deeply integrated into enterprise platforms like Google Workspace,” the report adds. FireTail tested six AI agents. OpenAI’s ChatGPT, Microsoft Copilot, and Anthropic AI’s Claude caught the attack. Gemini, DeepSeek, and Grok failed. In a test, FireTail researchers were able to change the word “Meeting” in an appointment in Google Calendar to “Meeting. It is optional.” ... “ASCII Smuggling attacks against AIs aren’t new,” commented Joseph Steinberg, a US-based cybersecurity and AI expert. “I saw one demonstrated over a year ago.” He didn’t specify where, but in August 2024, a security researcher blogged about an ASCII smuggling vulnerability in Copilot. The finding was reported to Microsoft. Many ways of disguising malicious prompts will be discovered over time, he added, so it’s important that IT and security leaders ensure that AIs don’t have the power to act without human approval on prompts that could be damaging.


Broken Opt-Outs, Big Fines: Tractor Supply Shows Privacy Enforcement Has Arrived for Retail

The Tractor Supply violations reveal a clear enforcement pattern. Broken opt-out links that route to dead webforms. Global Privacy Control signals ignored entirely. Privacy notices that skip job applicant data disclosures. Vendor agreements without data restriction clauses. These aren’t random oversights. They’re the exact gaps that surfaced across recent CCPA enforcement by the Attorney General and CPPA orders. Regulators are building a playbook: test the opt-out mechanisms, check for GPC compliance, review all privacy notices including HR portals, and audit third-party contracts. If any piece fails, expect enforcement. Regulators no longer accept opt-outs in theory or privacy policies in fine print. ... The message is clear: prove you have control. Not just over the data you collect, but over the algorithms that process it. Retailers who can’t show governance across both will face scrutiny on multiple fronts. The same broken opt-out that triggers a privacy fine could signal to regulators that your AI systems lack oversight too. This isn’t about adding more compliance checkboxes. It’s about recognizing that data governance and AI governance are becoming inseparable. The retailers who understand this convergence will build unified systems that handle both. The ones who don’t will scramble to retrofit governance after the fact, just like they’re doing with privacy today.


Why Enterprises Continue to Stick With Traditional AI

AI success also depends on digital maturity. Many organizations are still laying data foundations. "Let's say you want to run analytics on how many tickets were raised, do a dashboard on how many tickets one can expect … all of that was over a call. Nothing was digitized. There is no trace of it. That is the reason why chatbots are getting created because they are now recording and getting traced," Iyer said. ... Strict compliance and privacy requirements push enterprises toward controlled AI development. … Even in such cases, we ensure the data in the model that we build, it stays exclusively. At any point of time, your data or your model is not going to be used for the betterment of someone else," Iyer said. This approach reflects broader enterprise concerns about AI governance. According to KPMG research, frameworks such as local interpretable model-agnostic explanations and Shapley Additive exPlanations help clarify AI decisions, support compliance and build stakeholder confidence. ... Iyer said enterprise needs are often highly contextual, making massive models unnecessary. "Do you need a 600-700 billion [parameter] model sitting in your enterprise running inferences when the questions are going to be very contextual?" she said. This practical wisdom is supported by recent industry analysis. Traditional ML models often produce classification accuracy at a fraction of the cost compared to deep learning alternatives. 


Lead with a human edge: Why empathy is the new strategy

Traditional management was built on control: plans, processes, and hierarchies designed to tame complexity. But as Pushkar noted, ‘organisations are living organisms. They evolve, sense, and respond. Trying to manage them like machines is an illusion. The leaders of tomorrow will not be engineers of systems — they will be gardeners of cultures.’ “Planting a tree is very easy,” Bidwai said. “The real game is how you nurture, how you create an environment, how you enable the culture.” Nurturing, not directing, is the leadership mindset for an era of interdependence. ... Perhaps the most striking moment of Pushkar’s talk was not analytical but symbolic. He invited participants to discard their corporate titles just for a moment and invent new ones that reflected their purpose, not their position. “Sometimes titles define how we operate. Can we look beyond titles?” His own? In People Matters, Pushkar stated that he visualises his creative title as Plumber. “Wherever anything needs fixing, I will go and fix things.” The metaphor landed. Leadership, stripped of status, is about service. To lead with a human edge is to roll up your sleeves, listen, and fix what’s broken, in systems, in relationships, in ourselves. ... What Pushkar calls ‘the human edge’ is not a nostalgic pushback against technology. It is a pragmatic blueprint for sustainable growth. The leaders who will define the next decade will be those who use AI to augment human potential, not replace it those who recognise that data drives decisions, but empathy drives destiny.


Building a modern fraud prevention stack: why centralised data, not point solutions, is the answer

The fraud prevention landscape is riddled with fragmented tools, reactive approaches and blind spots. Despite the best of intentions, many organisations rely on outdated, point-in-time methods that are ill-suited for today’s dynamic fraud landscape. And fraud no longer plays by the old rules. It unfolds across the entire customer journey, mutating with every new channel, payment method or customer behaviour pattern. A fraudster may test stolen credentials one day, then come back weeks later to exploit a weak link in the onboarding or refund process. These disjointed systems miss multi-step attacks and patterns that unfold over time. ... while many organisations have historically relied on a patchwork of tools to cover each threat vector, it’s becoming clear that more tools aren’t the answer. Better coordination is. A modern stack doesn’t need to come from a single vendor, but it does need to operate like a single, unified system. That means integrated data, shared intelligence and orchestration that supports real-time response, not after-the-fact analysis. While investment is rising, with 85% of organisations having increased their fraud prevention budgets, it’s crucial to highlight that spending must be strategic. So, what does a modern fraud prevention stack actually look like? And how can organisations build one that’s unified, flexible and future-proof?


CISOs, Start Securing Software's Agentic Future Now

Industry-wide challenges create obstacles to AI governance, leaving leaders uncertain about where to focus their strategic efforts most effectively. The non-deterministic nature of agents causes them to behave in unexpected ways, which has been proven to disrupt existing security boundaries. Adding to this security complexity, universal protocols, such as Model Context Protocol and Agent2Agent, are emerging to streamline data access and improve agent interoperability, but their ecosystem-building capabilities introduce additional security considerations. But these challenges cannot stop security leaders from prioritizing AI governance. ... A culture of security now requires AI literacy. 43% of survey respondents acknowledged a widening AI skills gap, which is likely to grow unless technical leaders prioritize upskilling teams to understand model behavior, prompt engineering, and how to evaluate model inputs and outputs critically. Understanding where models are performant versus where their use is suboptimal helps teams avoid unnecessary security risk and technical debt. ... Teams should also recognize that no model can replace human ingenuity. When models fail in domains where security engineers or developers lack expertise, they will not be able to identify the security gaps the model has left behind. CISOs should consider dedicating a portion of learning and development budgets to continuous technical education. 

Daily Tech Digest - October 08, 2025


Quote for the day:

"Life is what happens to you while you’re busy making other plans." -- John Lennon



Network digital twin technology faces headwinds

Just like Google Maps is able to overlay information, such as driving directions, traffic alerts or locations of gas stations or restaurants, digital twin technology enables network teams to overlay information, such as a software upgrade, a change to firewalls rules, new versions of network operating systems, vendor or tool consolidation, or network changes triggered by mergers and acquisitions. Network teams can then run the model, evaluate different approaches, make adjustments, and conduct validation and assurance to make sure any rollout accomplishes its goals and doesn’t cause any problems, explains Maccioni ... “Configuration errors are a major cause of network incidents resulting in downtime,” says Zimmerman. “Enterprise networks, as part of a modern change management process, should use digital twin tools to model and test network functionality business rules and policies. This approach will ensure that network capabilities won’t fall short in the age of vendor-driven agile development and updates to operating systems, firmware or functionality.” ... Another valuable use case is testing failover scenarios, says Wheeler. Network engineers can design a topology that has alternative traffic paths in case a network component fails, but there’s really no way to stress test the architecture under real world conditions. He says that in one digital twin customer engagement “they found failure scenarios that they never knew existed.”


Autonomous AI hacking and the future of cybersecurity

The cyberattack/cyberdefense balance has long skewed towards the attackers; these developments threaten to tip the scales completely. We’re potentially looking at a singularity event for cyber attackers. Key parts of the attack chain are becoming automated and integrated: persistence, obfuscation, command-and-control, and endpoint evasion. Vulnerability research could potentially be carried out during operations instead of months in advance. The most skilled will likely retain an edge for now. But AI agents don’t have to be better at a human task in order to be useful. They just have to excel in one of four dimensions: speed, scale, scope, or sophistication. But there is every indication that they will eventually excel at all four. By reducing the skill, cost, and time required to find and exploit flaws, AI can turn rare expertise into commodity capabilities and gives average criminals an outsized advantage. ... If enterprises adopt AI-powered security the way they adopted continuous integration/continuous delivery (CI/CD), several paths open up. AI vulnerability discovery could become a built-in stage in delivery pipelines. We can envision a world where AI vulnerability discovery becomes an integral part of the software development process, where vulnerabilities are automatically patched even before reaching production — a shift we might call continuous discovery/continuous repair (CD/CR).


AI inference: reshaping the enterprise IT landscape across industries

AI inference is a complex operation that transforms intricate models into actionable agents. This process is essential for making real-time decisions, which can significantly improve user experiences. ... As AI systems handle more sensitive information, data security and private AI become a key part of effective inference processes. In cloud and Edge computing environments, where data often moves between multiple networks and devices, ensuring the confidentiality of user information is paramount. Private AI limits queries and requests to a company's internal database, SharePoint, API, or other private sources. It prevents unauthorized access and ensures that sensitive information remains confidential even when processed in the cloud or at the Edge. ... For AI to be truly transformative, low latency is a necessity, ensuring that real-time responses are both swift and seamless. In the realm of AI chatbots, for instance, the difference between a seamless conversation and a frustrating user experience often comes down to the speed of the AI’s response. Users expect immediate and accurate replies, and any delay can lead to a loss of engagement and trust. By minimising latency, AI chatbots can provide a more natural and fluid interaction, enhancing user satisfaction, and driving better outcomes. ... By reducing the distance data must travel, Edge computing significantly reduces latency, enabling faster and more reliable AI inference.


Smarter Systems, Safer Data: How to Outsmart Threat Actors

One of the clearest signs that a cybersecurity strategy is outdated is a lack of control and visibility over who can access what data, and on which systems. Many organizations still rely on fragmented identity management systems or grant broad access to database administrators. Others have yet to implement basic protections such as multi-factor authentication. ... Security concerns are commonly quoted as a top barrier to innovation. This is why many organizations struggle to adopt artificial intelligence, migrate to the cloud, share data externally or even internally. The only way to unblock this impasse is to start treating security as an enabler. Think about it this way: when done right, security is that key element that allows data to be moved, analyzed and shared. To exemplify this approach, if data is de-identified to maintain data privacy through the means of encryption or tokenization, in a situation of a breach, it will remain useless to attackers. ... What’s been key for the organizations that succeed in managing data risk while simultaneously unlocking value is a mindset shift. They stop seeing security as a roadblock and start seeing it as a foundation for growth. As an example, a large financial institution client has built an AI-powered solution for anti-money laundering. By protecting incoming data before it enters their system, they ensure that no sensitive data is fed to their algorithms, and thus the risk of a privacy breach, even incidental, is essentially null.


AI could prove CIOs’ worst tech debt yet

AI tools can be used to clean up old code and trim down bloated software, thus reducing one major form of tech debt. In September, for example, Microsoft announced a new suite of autonomous AI agents designed to automatically modernize legacy Java and .NET applications. At the same time, IT leaders see the potential for AI to add to their tech debt, with too many AI projects relying on models or agents that can be expensive to deploy and maintain and AI coding assistants generating more lines of software than may be necessary. ... Endless AI pilot projects create their own form of tech debt as well, says Ryan Achterberg, CTO at IT consulting firm Resultant. This “pilot paralysis,” in which organizations launch dozens of proofs of concepts that never scale, can drain IT resources, he says. “Every experiment carries an ongoing cost,” Achterberg says. “Even if a model is never scaled, it leaves behind artifacts that require upkeep and security oversight.” Part of the problem is that AI data foundations are still shaky, even as AI ambition remains high, he adds. ... In addition to tech debt from too many AI pilot projects, coding assistants can create their own problems without proper oversight, adds Jaideep Vijay Dhok, COO for technology at digital engineering provider Persistent Systems. In some cases, AI coding assistants will generate more lines of software than a developer asked for, he says. 


Hackers Exploit RMM Tools to Deploy Malware

RMM platforms typically operate with elevated permissions across endpoints. Once compromised, they offer adversaries a ready-made channel for privilege escalation, lateral movement and payload delivery, including ransomware ... Threat actors frequently repurpose legitimate RMM tools or hijack valid credentials, allowing malicious activity to blend seamlessly with routine administrative tasks. This tactic complicates detection and response, especially in environments lacking behavioral baselining. ... "This is a typical living-off-the-land attack used by many adversaries considering the success and ease of execution. Typically, such software are whitelisted in most of the controls to avoid blocking and noise, due to which its activities are not monitored much," Varkey said. "Like in most adversarial acts, getting access to the software is their initial step, so if access is limited to specific people with multifactor authorization and audited periodically, unauthorized access can be limited. .." ... "Treat RMM seriously. Assume compromise is possible and build defenses around prevention, detection and rapid response. Start with a full audit of your RMM deployment - map every agent, session and integration to identify shadow access points: asset management is key and a good RMM solution should be able to assist here. Layered controls are key - think defense-in-depth tailored to RMM's remote nature," Beuchelt said.


From Data to Doing: Agentic AI Will Revolutionize the Enterprise

Where do organizations see the greatest opportunities for agentic AI? The answer is: everywhere. Survey results show that business leaders view agentic AI as equally relevant to productivity gains, better decision-making, and enhanced customer experiences. When asked to rank potential benefits, improving customer experience and personalization emerge as the top priority, followed closely by sharper decision-making and increased efficiency. What's telling is what landed at the bottom of the list. Few organizations currently view market and business expansion as critical. This suggests that, at least in the near term, agentic AI will be applied less as a driver of bold new growth and more as a catalyst for improving and extending existing operations. ... Agentic AI is not simply the next technology wave -- it is the next great inflection point for enterprise software. Just as client–server, the Internet, and the cloud radically redefined industry leaders, agentic AI will determine which vendors and enterprises can adapt quickly enough to thrive. The lesson is clear: organizations that treat data as a strategic asset, modernize their platforms, and embed intelligence into their workflows will not only move faster but also serve customers better. The rest risk being left behind -- just as the mainframe giants once were.


Is That Your Boss or a Deepfake on the Other Side of That Video Call?

Sophisticated deepfake technology had perfectly replicated not just the appearance but the mannerisms and decision-making patterns of the company’s executives. The real managers were elsewhere, unaware their digital twins were orchestrating one of the largest deepfake heists in corporate history. This reflects a terrifying trend of AI fraud that is shaking the financial services industry. Deepfake-enabled attacks have grown by an alarming 1,740% in just one year, representing one of the fastest-growing AI-powered threats. More than half of businesses in the U.S. and U.K. have been targeted by deepfake-powered financial scams, with 43% falling victim. ... The deepfake threat extends far beyond immediate financial losses. Each successful attack erodes the foundation of digital communication itself. When employees can no longer trust that their CEO is real during a video call, the entire remote work infrastructure becomes suspect in particular for financial institutions, which deal in the currency of trust. ... Financial services companies must implement comprehensive AI governance frameworks, continuous monitoring systems, and robust incident response plans to address these evolving threats while maintaining operational efficiency and customer trust. These systems and protocols must extend not only within their front office but to their back office, including vendor management and third-party suppliers who manage their data.


Rethinking AI security architectures beyond Earth

The researchers outline three architectures: centralized, distributed, and federated. In a centralized model, the heavy lifting happens on Earth. Satellites send telemetry data to a large AI system, which analyzes it and sends back security updates. Training is fast because powerful ground-based resources are available, but the response to threats is slower due to long transmission times. In a distributed model, satellites still rely on the ground for training but perform inference locally. This setup reduces delay when responding to a threat, though smaller onboard systems can limit model accuracy. Federated learning goes a step further. Satellites train and infer on their own data without sending it to Earth. They share only model updates with other satellites and ground stations. This keeps latency low and improves privacy, but synchronizing models across a large constellation can be difficult. ... Byrne pointed out that while space-based architectures vary in resilience, recovery often depends on shared fundamentals. “Most systems across all segments will need to be restored from secure backups,” he said. “One architectural enhancement to help reduce recovery time is the implementation of distributed Inter-Satellite Links. These links enable faster propagation of recovery updates between satellites, minimizing latency and accelerating system-wide restoration.”


Who Governs Your NHIs? The Challenge of Defining Ownership in Modern Enterprise IT

What we should actually mean by ownership is the person who can answer the basic questions about why this NHI exists, what access it has, how often credentials should be rotated, whether it's being used in a way that could introduce new risks, and whether the credentials have been properly stored or have been leaked. ... Instead of focusing solely on assigning human ownership, we should be working to ensure that the questions we would ask the owner are easily answerable by our tools. This approach makes answers persistent and usable by multiple teams over time and provides consistency across the organization. It does not rely on specific individuals being eternally available or up to speed on how the NHI they created is being used. Ultimately, it scales better than human-dependent processes. Just as governing an application and all of the NHIs involved is almost never going to be the responsibility of one person, the ideal scenario where a single person can outright own an NHI and be responsible for every aspect is going to be a rare situation. ... The conversation about ownership often gets stuck on blame. Let's reframe it around assurance. Let's ensure that if a secret exists, no matter where or how it is stored, governance questions can be answered quickly and consistently.

Daily Tech Digest - October 07, 2025


Quote for the day:

"There is only one success – to be able to spend your life in your own way." -- Christopher Morley



5 Critical Questions For Adopting an AI Security Solution

An AI-SPM solution must be capable of seamless AI model discovery, creating a centralized inventory for complete visibility into deployed models and associated resources. This helps organizations monitor model usage, ensure policy compliance, and proactively address any potential security vulnerabilities. By maintaining a detailed overview of models across environments, businesses can proactively mitigate risks, protect sensitive data, and optimize AI operations. ... An effective AI-SPM solution must tackle risks that are specific to AI systems. For instance, it should protect training data used in machine learning workflows, ensure that datasets remain compliant under privacy regulations, and identify anomalies or malicious activities that might compromise AI model integrity. Make sure to ask whether the solution includes built-in features to secure every stage of your AI lifecycle—from data ingestion to deployment. ... When evaluating an AI-SPM solution, ensure that it automatically maps your data and AI workflows to governance and compliance requirements. It should be capable of detecting non-compliant data and providing robust reporting features to enable audit readiness. Additionally, features like automated policy enforcement and real-time compliance monitoring are critical to keeping up with regulatory changes and preventing hefty fines or reputational damage.


The architecture of lies: Bot farms are running the disinformation war

As bots become more common and harder to tell from real users, people start to lose confidence in what they see online. This creates the liars dividend, where even authentic content is questioned simply because everyone knows fakes are out there. If any critical voice or inconvenient fact can be dismissed as just a bot or a deepfake, democratic debate takes a hit. AI-driven bots can also create the illusion of consensus. By making a hashtag or viewpoint trend, they create the impression that everyone is talking about it, or that an extreme position enjoys broader support than it appears to have.  ... It’s still an open question how well online platforms stop malicious, bot-driven content, even though they are the ones responsible for policing their own networks. Harmful AI bots continue to get through the defenses of major social media platforms. Even though most have rules against automated manipulation, enforcement is weak and bots exploit the gaps to spread disinformation. Current detection systems and policies aren’t keeping up, and platforms will need stronger measures to address the problem. ... The EU and the US are both moving to address bot-driven disinformation. In the EU, the Digital Services Act obliges large online platforms to assess and mitigate systemic risks such as manipulation, and to provide vetted researchers with access to platform data.


Is the CISO chair becoming a revolving door?

“A CISO is interacting with a lot of interfaces, and you need to have soft skills and communicate well with others. In many cases, you need to drive others to take action, and that’s super tedious. It’s very difficult to keep doing it over time,” Geiger Maor says. “In many cases, you’re in direct conflict with company goals and your goals. You’re like a salmon fish going upstream against everybody else. This makes it very difficult to keep a long tenure.” ... That constant exposure to risk and blame is another reason some CISOs hesitate to take the role in the first place, according to Rona Spiegel, senior manager, security and trust, mergers and acquisitions at Autodesk and former cloud governance leader at Wells Fargo and Cisco. “The bad guys, especially now with AI and automation, they’re getting more sophisticated, and they only have to be right once, but the CISO has to be right all day every day. They only have to be wrong once, and they get blamed … you’re an operational cost centre no matter what because you’re not bringing in revenue, so if something goes wrong … all roads lead to the CISO,” Spiegel says. ... Chapman is also seeing a rise in fractional CISOs, brought in part-time to set up frameworks or oversee specific projects. “It really comes down to the individual,” he says. “Some want that top seat, speaking to the board, communicating risk. But I am also seeing some say, ‘It doesn’t have to be a CISO role.’”


RPA versus hyperautomation: Understanding accuracy (performance) benchmarks in practice

RPA is like that reliable coworker who never complains and does exactly what you ask. It loves repetitive, predictable tasks such as copying and pasting data, moving files between systems or generating standard reports. When everything goes according to plan, RPA is perfect. ... Hyperautomation is the next-level upgrade. It combines RPA with AI, natural language processing (NLP), intelligent document processing (IDP), process mining and workflow orchestration. In simple terms, it doesn’t just follow rules. It learns, adapts and keeps things moving even when the world throws curveballs. With hyperautomation, processes that would have stopped RPA cold continue without a hitch. ... RPA and Hyperautomation are not rivals. They are more like teammates with different strengths. RPA shines when tasks are stable and repetitive, quietly doing its job without fuss. Hyperautomation brings in intelligence, flexibility and the ability to handle entire processes from start to finish. When applied thoughtfully, hyperautomation cuts down on manual corrections, handles exceptions smoothly and delivers value at scale. All this happens without the IT team needing to hire extra coffee runners to fix errors or babysit the robots. The real goal is to build automation that works at the process level, adapts to change and keeps running even when things go off script.


The pros and cons of AI coding in the IT industry

Although now being used by the majority of programmers, AI tools were not universally welcomed upon their launch, and it has taken time to move beyond the initial doubts and suspicion surrounding generative AI. It’s important to note that risks remain when using AI-generated code, which organizations will have to mitigate. “Integrating AI into our coding processes was initially met with skepticism, both within our organization and across the industry,” Jain explains. “Concerns included AI's ability to comprehend complex codebases, the potential for generating buggy code, adherence to company standards, and issues surrounding code and data privacy.” However, since the launch of the first generative AI tools at the end of 2022, Jain says that the rapid evolution of AI technology’s implementation has alleviated many concerns, with features such as codebase indexing and secure training protocols addressing major concerns. “These advancements have enabled AI tools to understand code context, follow company standards, and maintain robust security measures,” Jain tells ITPro. Nevertheless, security and accountability are also major factors for any IT company to consider when looking to use AI as part of the development process, and research continues to show glaring vulnerabilities in AI code. There are certain steps that simply can’t be replaced by AI.


Why AI Is Forcing an Invisible Shift in Risk Management

Without the need for complex, technical coding knowledge, there are increasingly more departments within a business capable of driving and contributing to the development lifecycle, forcing a shift from centralized innovation to development that is fractalized across the entire organization. This shift has been revolutionary, driving more lucrative development by empowering technical teams and business leaders to align on goals and work hand-in-hand. Still, this transition has changed the organization’s relationship with risk. ... In the age of distributed application building, organizations have to raise more questions as it relates to governance and risk, which can mean many different things depending on where the technology sits in the business. Is the application going to be customer-facing? How sensitive is the data? How should it be stored? What are some other privacy considerations? These are all questions businesses must ask in the age of fractured development — and the answers will vary from case to case. ... The shift to decentralized development is not the first change technology has seen, and it’s certainly not the last. The key to staying ahead of the curve is paying attention to the invisible shifts that come with these disruptions, such as the changes that have recently come with the adoption of AI and low code. As these technologies reimagine the typical risk management and compliance model, it’s important for businesses to come to terms with adaptive governance and react as such.


How cross-functional teams rewrite the rules of IT collaboration

When done right, IT isn’t just an optional part of cross-functional collaboration, it’s an integral part of what makes collaboration possible. “There’s a lot of overlap now between IT, sales, finance and regulatory compliance,” says George Dimov, managing owner of Dimov Tax. ... What happens when IT plays a key role in breaking down barriers? First, getting IT involved in cross-functional teams means IT is at the table from day one. Rather than having an environment where a department requests a report or tool from IT after the fact, or has it digitize information later on, IT is present in all meetings. As more organizations recognize the inherent importance of digital transformation, the need for IT expertise — including perspectives from individuals with different types of IT experience — becomes more pronounced. It’s up to the CIO to provide the cross-functional leadership that ensures IT is involved in such efforts from the start. ... Even in situations when IT isn’t directly involved in day-to-day collaboration, it can still play a valuable role by providing technology resources that aid and facilitate collaboration. Ideally, IT should be part of the solution to eliminate barriers, whether that’s through digital sharing tools, reporting mechanisms, or something else. IT can and should be at the forefront of enabling cross-functional collaboration between teams and departments.


Service-as-software: The new control plane for business

Historically, enterprises ran on islands of automation — enterprise resource planning for the back office and, later, a proliferation of apps. Customer relationship management was the first to introduce a new operating model and a new business model. Today, the enterprise itself must begin to operate like a software company. That requires harmonizing those islands into a single unified layer where data and application logic collapse into an integrated System of Intelligence. Agents rely on this harmonized context to make decisions and, when needed, invoke legacy applications to execute workflows. Operating this way also demands a new operations model: a build-to-order assembly line for knowledge work that blends the customization of consulting with the efficiency of high-volume fulfillment. Humans supervise agents, and in doing so progressively encode their expertise into the system. ... The important point to remember is that islands of automation impede management’s core function – planning, resource allocation and orchestration with full visibility across levels of detail and business domains. Data lakes do not solve this by themselves; each star schema is another island. Near-term, organizations can start small and let agents interrogate a single domain (for example, the sales cube) and take limited actions by calling systems of record via MCP servers, for example, viewing a customer’s complaints and initiating a return authorization.


Companies are making the same mistake with AI that Tesla made with robots

Shai Ahrony, CEO of marketing agency Reboot Online, calls this phenomenon the "AI aftershock." "Companies that rushed to cut jobs in the name of AI savings are now facing massive, and often unexpected costs," he told ZDNET. "We've seen customers share examples of AI-generated errors -- like chatbots giving wrong answers, marketing emails misfiring, or content that misrepresents the brand -- and they notice when the human touch is missing." ... Some companies have already learned painful lessons about AI's shortcomings and adjusted course accordingly. In one early example from last year, McDonald's announced that it was retiring an automated order-taking technology that it had developed in partnership with IBM after the AI-powered system's mishaps went viral across social media. ... McDonalds' and Klarna's decisions to backtrack on AI in favor of humans is reminiscent of a similar about-face from Tesla. In 2018, after Tesla failed to meet production quotas for its Model 3, CEO Elon Musk admitted in a tweet that the electric vehicle company's reliance upon "excessive automation…was a mistake." "Humans are underrated," he added. Businesses aggressively pushing to deploy AI-powered customer service initiatives in the present could come to a similar conclusion: that even though the technology helps to cut spending and boost efficiency in some domains, it isn't able to completely replicate the human touch.


How Can the Usage of AI Help Boost DevOps Pipelines

In recent times, AI is playing a key role in CI/CD by using machine learning algorithms and intelligent automation to detect errors proactively, optimize resource usage and faster release cycles. With AI, CI/CD pipelines can learn, adapt and optimize themselves, redefining software development from start to finish. By combining AI and DevOps, you can eliminate silos, recover faster from outages and open up new business revenue streams. Today’s businesses are increasingly leveraging artificial intelligence capabilities throughout their DevOps pipelines to make their CI/CD pipelines intelligent, thereby enabling them to predict problems faster, optimize the pipelines if needed, and recover from failures without the need for any human intervention. ... When you adopt AI into the DevOps practices in your organization, you are applying specific technologies to automate, optimize, and enhance each stage of the software development lifecycle – coding, testing, deployment, and monitoring. Today’s organizations are using AI in their DevOps pipelines to drive innovation, enabling teams to work seamlessly and achieve rapid development and deployment cycles. ... AI can help in DevSecOps in ways such as automating security testing, automating threat detection, and streamlining incident response. You can use AI-powered tools to scan your application source code for security vulnerabilities, automate software patches, automate incident responses, and monitor in real-time to identify anomalies.