Daily Tech Digest - November 21, 2025


Quote for the day:

“You live longer once you realize that any time spent being unhappy is wasted.” -- Ruth E. Renkel



DPDP Rules and the Future of Child Data Safety

Most obligations for Data Fiduciaries, including verifiable parental consent, security safeguards, breach notifications, data minimisation, and processing restrictions for children’s data, come into force after 18 months. This means that although the law recognises children’s rights today, full legal protection will not be enforceable until the end of the 18-month window. ... Parents’ awareness of data rights, online safety, and responsible technology is the backbone of their informed participation. The government needs to undertake a nationwide Digital Parenting Awareness Campaign with the help of State Education Departments, modelled on literacy and health awareness drives. ... schools often outsource digital functions to vendors without due diligence. Over the next 18 months, they must map where student data is collected and where it flows, renegotiate contracts with vendors, ensure secure data storage, and train teachers to spot data risks. Nationwide teacher-training programmes should embed digital pedagogy, data privacy, and ethical use of technology as core competencies. ... effective implementation will be contingent on the autonomy, resourcefulness, and accessibility of the Data Protection Board. The regulator should include specialised talent such as cybersecurity specialists and privacy engineers. It should be supported by an in-house digital forensics unit capable of investigating leaks, tracing unauthorised access, and examining algorithmic profiling.


5 best practices for small and medium businesses (SMEs) to strengthen cybersecurity

First, begin with good access control, which entails restricting employees to only the permissions they specifically require. It is also important to have multi-factor authentication in place and to regularly audit user accounts, particularly when roles shift or personnel depart. Second, keep systems and software current by immediately patching operating systems, applications, and security software to close vulnerabilities before attackers can exploit them. Updates should also be automated to avoid human error. Staff are usually the front line of defence, so the third essential practice is ongoing training of employees in identifying phishing attempts, suspicious links, and social engineering methods, making them active guardians of corporate data and effectively cutting the risk of a data breach. Fourth is safeguarding your data, which means keeping regular backups stored safely in multiple places and complementing them with an explicit disaster recovery strategy, so that you can restore operations promptly, reduce downtime, and contain losses in the event of a cyber attack. Fifth and finally, companies should embrace a layered security paradigm using antivirus tools, firewalls, endpoint protection, encryption, and safe networks. These layers complement each other, creating a resilient defence that protects your digital ecosystem and strengthens trust with partners, customers, and stakeholders.
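The first practice (least-privilege access plus regular account audits) can be sketched in a few lines. This is a minimal illustration, not a production access-control system; the role names, permissions, and baseline shown are illustrative assumptions.

```python
# Least privilege: every user starts with no permissions, and access is
# granted only per role -- deny by default for anything unknown.
ROLE_PERMISSIONS = {
    "accounting": {"read_invoices", "write_invoices"},
    "support": {"read_tickets", "write_tickets"},
    "admin": {"read_invoices", "write_invoices", "read_tickets",
              "write_tickets", "manage_users"},
}

def allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

def audit(users: dict[str, str], baseline: dict[str, set[str]]) -> list[str]:
    """Flag users whose role grants permissions beyond the agreed baseline
    for that job function -- the kind of periodic audit described above."""
    flagged = []
    for user, role in users.items():
        extra = ROLE_PERMISSIONS.get(role, set()) - baseline.get(role, set())
        if extra:
            flagged.append(user)
    return flagged
```

A periodic run of `audit` against a baseline is one concrete way to catch stale privileges after roles shift or personnel depart.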


How Artificial Intelligence is Reshaping the Software Development Life Cycle (SDLC)

With AI tools, workflows become faster and more efficient, giving engineers more time to concentrate on creative innovation and tackling complex challenges. As these models advance, they can better grasp context, learn from previous projects, and adapt to evolving needs. ... AI streamlines software design by speeding up prototyping, automating routine tasks, optimizing with predictive analytics, and strengthening security. It generates design options, translates business goals into technical requirements, and uses fitness functions to keep code aligned with architecture. This allows architects to prioritize strategic innovation and boosts development quality and efficiency. ... AI is shifting developers’ roles from manual coding to strategic "code orchestration." Critical thinking, business insight, and ethical decision-making remain vital. AI can manage routine tasks, but human validation is necessary for security, quality, and goal alignment. Developers skilled in AI tools will be highly sought after. ... AI serves to augment, not replace, the contributions of human engineers by managing extensive data processing and pattern recognition tasks. The synergy between AI's computational proficiency and human analytical judgment results in outcomes that are both more precise and actionable. Engineers are thus empowered to concentrate on interpreting AI-generated insights and implementing informed decisions, as opposed to conducting manual data analysis.


Innovative Approaches To Addressing The Cybersecurity Skills Gap

In a talent-constrained world, forward-leaning organizations aren’t hiring more analysts—they’re deploying agentic AI to generate continuous, cryptographic proof that controls worked when it mattered. This defensible automation reduces breach impact, insurer friction and boardroom risk—no headcount required. ... Create an architecture and engineering review board (AERB) that all current and future technical designs are required to flow through. Make sure the AERB comprises a small group of your best engineers, developers, network engineers and security experts. The group should meet multiple times a year, and all technical staff should be required to rotate through to listen and contribute to the AERB. ... Build security into product design instead of adding it in afterward. Embed industry best practices through predefined controls and policy templates that enforce protection automatically—then partner with trusted experts who can extend that foundation with deep, domain-specific insight. Together, these strategies turn scarce talent into amplified capability. ... Rather than chasing scarce talent, companies should focus on visibility and context. Most breaches stem from unknown identities and unchecked access, not zero days. By strengthening identity governance and access intelligence, organizations can multiply the impact of small security teams, turning knowledge, not headcount, into their greatest defense.


The Configurable Bank: Low‑Code, AI, and Personalization at Scale

What does the modern banking system look like? The answer depends on where you stand. For customers, digital banking solutions need to be instant, invisible, and intuitive – a seamless tap, a scan, a click. For banks, it’s an ever-evolving race to keep pace with rising expectations. ... What was once a luxury – speed and dependability – has become the standard. Yet, behind the sleek mobile apps and fast payments, many banks are still anchored to quarterly release cycles and manual processes that slow innovation. To thrive in this landscape, banks don’t need to rip out their core systems. What they need is configurability – the ability to re-engineer services to be more agile, composable, and responsive. By making their systems configurable rather than fixed, banks can launch products faster, adapt policies in real time, and reduce the cost and complexity of change. ... The idea of the Configurable Bank is built on this shift – where technology, powered by low-code and AI, transforms banking into a living, adaptive platform. One that learns, evolves, and personalizes at scale – not by replacing the core, but by reimagining how it connects with everything around it. ... This is not just a technology shift; it’s a strategic one. With low-code, innovation is no longer the privilege of IT alone. Business teams, product leaders, and even customer-facing units can now shape and deploy digital experiences in near real time.


Deepfake crisis gets dire, prompting new investment and calls for regulation

Kevin Tian, Doppel’s CEO, says that organizations are not prepared for the flood of AI-generated deception coming at them. “Over the past few months, what’s gotten significantly better is the ability to do real-time, synchronous deepfake conversations in an intelligent manner. I can chat with my own deepfake in real-time. It’s not scripted, it’s dynamic.” Tian tells Fortune that Doppel’s mission is not to stamp out deepfakes, but “to stop social engineering attacks, and the malicious use of deepfakes, traditional impersonations, copycatting, fraud, phishing – you name it.” The firm says its R&D team has “just scratched the surface” of innovations it plans to bring to existing and upcoming products, notably in social engineering defense (SED). The Series C funds will “be used to invest in the core Doppel gang to meet the exponential surge in demand.” ... Advocating for “laws that prioritize human dignity and protect democracy,” the piece points to the EU’s AI Act and Digital Services Act as models, and specifically to new copyright legislation in Denmark, which bans the creation of deepfakes without a subject’s consent. In the authors’ words, Denmark’s law would “legally enshrine the principle that you own you.” ... “The rise of deepfake technology has shown that voluntary policies have failed; companies will not police themselves until it becomes too expensive not to do so,” says the piece.


The what, why and how of agentic AI for supply chain management

To be sure, software and automation are nothing new in the supply chain space. Businesses have long used digital tools to help track inventories, manage fleet schedules and so on as a way of boosting efficiency and scalability. Agentic AI, however, goes further than traditional SCM software tools, offering capabilities that conventional systems lack. For instance, because agents are guided by AI models, they are capable of identifying novel solutions to challenges they encounter. Traditional SCM tools can’t do this because they rely on pre-scripted options and don’t know what to do when they encounter a scenario no one envisioned beforehand. AI can also automate multiple, interdependent SCM processes, as I mentioned above. Traditional SCM tools don’t usually do this; they tend to focus on singular tasks that, although they may involve multiple steps, are challenging to automate fully because conventional tools can’t reason their way through unforeseen variables in the way AI agents do. ... Deploying agents directly into production is enormously risky because it can be challenging to predict what they’ll do. Instead, begin with a proof of concept and use it to validate agent features and reliability. Don’t let agents touch production systems until you’re deeply confident in their abilities. ... For high-stakes or particularly complex workflows, it’s often wise to keep a human in the loop.
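The human-in-the-loop advice above can be sketched as a simple routing gate: low-risk agent actions execute automatically, while high-stakes ones are queued for human approval. The threshold, action names, and risk scores here are illustrative assumptions, not part of any specific product.

```python
# A minimal sketch of a human-in-the-loop gate for agent actions.
# Actions whose estimated risk meets the threshold are held for review
# instead of being applied directly to production systems.

def route_action(action: str, risk_score: float,
                 threshold: float = 0.7) -> tuple[str, str]:
    """Return ("auto", action) for low-risk work,
    ("needs_approval", action) for high-stakes work."""
    if risk_score >= threshold:
        return ("needs_approval", action)
    return ("auto", action)

# Routine restocking runs unattended; rerouting a fleet waits for a human.
print(route_action("reorder_packaging", 0.2))
print(route_action("reroute_fleet", 0.9))
```

In practice the risk score would come from a policy model or a rules engine, but the separation of automatic and reviewed paths is the core of the pattern.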


How AI can magnify your tech debt - and 4 ways to avoid that trap

The survey, conducted in September, involved 123 executives and managers from large companies. There are high hopes that AI will help cut into and clear up issues, along with cost reduction. At least 80% expect productivity gains, and 55% anticipate AI will help reduce technical debt. However, the large segment expecting AI to increase technical debt reflects "real anxiety about security, legacy integration, and black-box behavior as AI scales across the stack," the researchers indicated. Top concerns include security vulnerabilities (59%), legacy integration complexity (50%), and loss of visibility (42%). ... "Technical debt exists at many different levels of the technology stack," Gary Hoberman, CEO of Unqork, told ZDNET. "You can have the best 10X engineer or the best AI model writing the most beautiful, efficient code ever seen, but that code could still be running on runtimes that are themselves filled with technical debt and security issues. Or they may also be relying on open-source libraries that are no longer supported." ... AI presents a new raft of problems to the tech debt challenge. The rising use of AI-assisted code risks "unintended consequences, such as runaway maintenance costs and increasing tech debt," Hoberman continued. IT is already overwhelmed with current system maintenance.


The State and Current Viability of Real-Time Analytics

Data managers now prefer real-time analytical capabilities built within their applications and systems, rather than a separate, standalone, or bolted-on project. Interest in real-time analytics as a standalone effort has dropped from 50% to 32% during the past 2 years, a recent survey of 259 data managers conducted by Unisphere Research finds. ... So, the question becomes: Are real-time analytics ubiquitous to the point where they are automatically integrated into any and all applications? By now, the use of real-time analytics should be a “standard operating requirement” for customer experience, said Srini Srinivasan, founder and CTO at Aerospike. This is where the rubber meets the road—where “the majority of the advances in real-time applications have been made in consumer-oriented enterprises,” he added. Along these lines, the most prominent use cases for real-time analytics include “risk analysis, fraud detection, recommendation engines, user-based dynamic pricing, dynamic billing and charging, and customer 360,” Srinivasan continued. “For over a decade, these systems have been using AI and machine learning [ML] inferencing for improving the quality of real-time decisions to improve customer experience at scale. The goal is to ensure that the first customer and the hundred-millionth customer have the same vitality of customer experience.” ... “Within industries such as energy, life sciences, and chemicals, the next decade of real-time analytics will be driven by more autonomous operations,” said David Streit.


You Down with EDD? Making Sense of LLMs Through Evaluations

We're facing a major infrastructure maturity gap in AI development — the same gap the software world faced decades ago when applications grew too complex for informal testing and crossed fingers. Shipping fast with user feedback works early on, but when done at scale with rising stakes, "vibes" break down and developers demand structure, predictability, and confidence in their deployments. ... AI engineering teams are turning to an emerging solution: evaluation-driven development (EDD), the probabilistic cousin to TDD. An evaluation looks similar to a traditional software test. You have an assertion, a response, and pass-fail criteria, but instead of asking "Does this function return 42?" you're asking "Does this legal AI application correctly flag the three highest-risk clauses in this nightmare of a merger agreement?" Our trust in AI systems comes from our trust in the evaluations themselves, and if you never see an evaluation fail, you're not testing the right behaviors. The practice of Evaluation-Driven Development (EDD) is about repeatedly testing these evaluations. ... The technology for EDD is ready. Modern AI platforms provide solid evaluation frameworks that integrate with existing development workflows, but the challenge facing wide adoption is cultural. Teams need to embrace the discipline of writing evaluations before changing systems, just like they learned to write tests before shipping code. It requires a mindset shift from "move fast and break things," to "move deliberately and measure everything."
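The anatomy of an evaluation described above (an input, a model response, and pass/fail criteria) can be sketched as a tiny harness. This is an illustrative skeleton, not any particular platform's API; the toy model, graders, and threshold are assumptions, and one evaluation is deliberately written to fail, since a suite that never fails is not testing the right behaviors.

```python
# A minimal evaluation-driven-development (EDD) harness: each Evaluation
# pairs a prompt with a pass/fail grader applied to the model's response.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Evaluation:
    name: str
    prompt: str
    grader: Callable[[str], bool]   # pass/fail criterion on the response

def run_suite(evals: list[Evaluation], model: Callable[[str], str],
              threshold: float = 0.9) -> tuple[float, bool]:
    """Run every evaluation; report the pass rate and whether it clears
    the release threshold."""
    passed = sum(1 for e in evals if e.grader(model(e.prompt)))
    rate = passed / len(evals)
    return rate, rate >= threshold

# A toy "model" and two evaluations; in practice the grader might itself
# be an LLM judge or a rubric scorer rather than a substring check.
toy_model = lambda prompt: "clause 3 is high risk"
suite = [
    Evaluation("flags_risk", "Review this contract",
               lambda r: "high risk" in r),
    Evaluation("cites_clause_7", "Review this contract",
               lambda r: "clause 7" in r),
]
rate, ok = run_suite(suite, toy_model)  # rate = 0.5, below threshold
```

Running the suite on every change, before the change ships, is the "write evaluations before changing systems" discipline the piece argues for.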

Daily Tech Digest - November 20, 2025


Quote for the day:

"Choose your heroes very carefully and then emulate them. You will never be perfect, but you can always be better." -- Warren Buffet



A developer’s guide to avoiding the brambles

Protect against the impossible, because it just might happen. Code has a way of surprising you, and it definitely changes. Right now you might think there is no way that a given integer variable would be less than zero, but you have no idea what some crazed future developer might do. Go ahead and guard against the impossible, and you’ll never have to worry about it becoming possible. ... If you’re ever tempted to reuse a variable within a routine for something completely different, don’t do it. Just declare another variable. If you’re ever tempted to have a function do two things depending on a “flag” that you passed in as a parameter, write two different functions. If you have a switch statement that is going to pick from five different queries for a class to execute, write a class for each query and use a factory to produce the right class for the job. ... Ruthlessly root out the smallest of mistakes. I follow this rule religiously when I code. I don’t allow typos in comments. I don’t allow myself even the smallest of formatting inconsistencies. I remove any unused variables. I don’t allow commented code to remain in the code base. If your language of choice is case-insensitive, refuse to allow inconsistent casing in your code. ... Implicitness increases cognitive load. When code does things implicitly, the developer has to stop and guess what the compiler is going to do. Default variables, hidden conversions, and hidden side effects all make code hard to reason about.
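Two of the rules above translate directly into code: guard against the "impossible" input, and split a flag-steered function into two single-purpose functions. The function and field names below are illustrative assumptions, not from the article.

```python
import json

# Guard against the impossible: callers "can never" pass a negative
# quantity -- validate anyway, so the assumption is enforced forever.
def reserve_stock(quantity: int) -> int:
    if quantity < 0:
        raise ValueError(f"quantity must be non-negative, got {quantity}")
    return quantity  # placeholder for the real reservation logic

# Instead of one function steered by a boolean flag...
#     def export(rows, as_csv: bool): ...
# ...write two functions, each doing exactly one thing:
def export_csv(rows: list[dict]) -> str:
    header = ",".join(rows[0].keys())
    lines = [",".join(str(v) for v in r.values()) for r in rows]
    return "\n".join([header, *lines])

def export_json(rows: list[dict]) -> str:
    return json.dumps(rows)
```

Each export function now has one reason to change, and call sites read as intent (`export_csv(rows)`) rather than as a cryptic flag (`export(rows, True)`).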


SaaS Rolls Forward, Not Backward: Strategies to Prevent Data Loss and Downtime

The SaaS provider owns infrastructure-level redundancy and backups to maintain operational continuity during regional outages or major disruptions. InfoSec and SaaS teams are no longer responsible for infrastructure resilience. Instead, they are responsible for backing up and recovering data and files stored in their SaaS instances. This is significant for two primary reasons. First, the RTO and RPO for SaaS data become dependent on the vendor's capabilities, which are not within the control of the customer. ... A common misconception, even among mature InfoSec teams, is the assumption that SaaS data protection is fully managed by the vendor. This “set it and forget it” mindset, while understandable given the cloud promise, overlooks the need for organizations to back up their SaaS data. Common causes of data loss and corruption are human errors within the customer’s SaaS instance, including accidental deletion, integration issues, and migration mishaps, which fall under the customer’s responsibility. ... InfoSec and SaaS teams must combine their knowledge and experience to ensure that backups contain all necessary data, as well as metadata, which provides the necessary context, and can be restored reliably. SaaS administrators can prevent users from logging in, disable automations, block upstream data from being sent, or restrict data from being sent to downstream systems as needed.
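The point about backups needing both data and metadata can be sketched as a snapshot envelope: records are captured together with context (when the backup was taken, how many records it holds) so a restore is verifiable. The record shape and field names are illustrative assumptions, not any vendor's export format.

```python
import datetime
import json

# A minimal SaaS backup sketch: serialize records inside an envelope that
# carries backup metadata, so a restore can be checked for completeness.
def snapshot(records: list[dict]) -> str:
    envelope = {
        "taken_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "record_count": len(records),
        "records": records,  # each record keeps its own metadata fields
    }
    return json.dumps(envelope)

def restore(blob: str) -> list[dict]:
    envelope = json.loads(blob)
    # Integrity check: the envelope's own metadata must match its payload.
    assert len(envelope["records"]) == envelope["record_count"]
    return envelope["records"]
```

A real implementation would write the blob to storage outside the SaaS vendor's control and test restores regularly; the envelope pattern is what makes those restore tests mechanically checkable.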


EU publishes Digital Omnibus leaving AI Act future uncertain

The European Commission unveiled amendments on Wednesday designed to simplify its digital regulatory framework, including the AI Act and data privacy rules, in a bid to boost innovation. The Digital Omnibus package introduces several measures, including delaying the stricter regulation of ‘high-risk’ AI applications until late 2027 and allowing companies to use sensitive data, such as biometrics, for AI training under certain conditions. ... The Digital Omnibus also attempts to adapt rules within privacy regulation, such as the General Data Protection Regulation (GDPR), the e-Privacy Directive and the Data Act. The Commission plans to clarify when data stops being “personal.” This could open the doors for tech companies to include anonymous information from EU citizens into large datasets for training AI, even when they contain sensitive information such as biometric data, as long as they make reasonable efforts to remove it. ... EU member states have also called for postponing the rollout of the AI Act altogether, citing difficulties in defining related technical standards and the need for Europe to stay competitive in the global technological race. “Europe has not so far reaped the full benefits of the digital revolution,” says European economy commissioner Valdis Dombrovskis. “And we cannot afford to pay the price for failing to keep up with demands of the changing world.”


Building Distributed Event-Driven Architectures Across Multi-Cloud Boundaries

The elegant simplicity of "fire an event and forget" becomes a complex orchestration of latency optimization, failure recovery, and data consistency across provider boundaries. Yet, when done right, multi-cloud event-driven architectures offer unprecedented resilience, performance, and business agility. ... Multi-cloud latency isn't just about network speed, it's about the compound effect of architectural decisions across cloud boundaries. Consider a transaction that needs to traverse from on-premise to AWS for risk assessment, then to Azure for analytics processing, and back to on-premise for core banking updates. Each hop introduces latency, but the cumulative effect can transform a sub-100 ms transaction into a multi-second operation. ... Here is an uncomfortable truth: Most resilience strategies focus on the wrong problem. As engineers, we typically put our efforts into handling failures that occur during an outage or when a service component is down. Equally important is how you recover from those failures after the outage is over. This approach to recovery creates systems that "fail fast" but "recover never". ... The combination of event stores, resilient policies, and systematic event replay capabilities creates a distributed system that not only survives failures, but also recovers automatically, which is a critical requirement for multi-cloud architectures. ... While duplicate risk processing merely wastes resources, duplicate financial transactions create regulatory nightmares and audit failures.
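The duplicate-transaction problem in the last sentence is conventionally solved with an idempotent consumer: processed event IDs are recorded so that a replayed event is applied exactly once. The sketch below uses an in-memory set as an illustrative stand-in for a durable store (a database table or key-value cache in a real deployment).

```python
# A minimal idempotent event consumer: replays and redeliveries across
# cloud boundaries are detected by event ID and dropped, so a financial
# transaction is never applied twice.
class IdempotentConsumer:
    def __init__(self) -> None:
        self._seen: set[str] = set()   # stand-in for a durable dedupe store
        self.applied: list[dict] = []

    def handle(self, event: dict) -> bool:
        """Apply the event once; return False for duplicates."""
        event_id = event["id"]
        if event_id in self._seen:
            return False               # duplicate: safe to drop on replay
        self._seen.add(event_id)
        self.applied.append(event)     # stand-in for the real side effect
        return True

consumer = IdempotentConsumer()
consumer.handle({"id": "txn-1", "amount": 100})
consumer.handle({"id": "txn-1", "amount": 100})  # replayed duplicate, skipped
```

With this in place, the event replay described above becomes safe: duplicates in low-stakes flows merely waste a lookup, while duplicates in financial flows are structurally prevented.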


For AI to succeed in the SOC, CISOs need to remove legacy walls now

"The legacy SOC, as we know it, can't compete. It's turned into a modern-day firefighter," warned CrowdStrike CEO George Kurtz during his keynote at Fal.Con 2025. "The world is entering an arms race for AI superiority as adversaries weaponize AI to accelerate attacks. In the AI era, security comes down to three things: the quality of your data, the speed of your response, and the precision of your enforcement." Enterprise SOCs average 83 security tools across 29 different vendors, each generating isolated data streams that defy easy integration to the latest generation of AI systems. System fragmentation and lack of integration represent AI's greatest vulnerability, and organizations' most fixable problem. The mathematics of tool sprawl proves devastating. Organizations deploying AI across fragmented toolsets report significantly elevated false-positive rates. ... Getting governance right is one of a CISO's most formidable challenges and often includes removing longstanding roadblocks to make sure their organization can connect and make contributions across the business. ... A CISO's transformation from security gatekeeper to business enabler and strategist is the single best step any security professional can take in their career. CISOS often remark in interviews that the transition from being an app and data disciplinarian to an enabler of new growth with the ultimate goal of showing how their teams help drive revenue was the catalyst their careers needed.


Selling to the CISO: An open letter to the cybersecurity industry

Vendors think they’re selling technology. They’re not. They’re trying to sell confidence to people whose jobs depend on managing the impossible. As a CISO, I buy because I’m trying to reduce the odds that something catastrophic happens on my watch. Every decision is a gamble. There is no “safe” option in this field. I buy to reduce personal and organizational risk, knowing there’s no such thing as perfect protection. Cybersecurity is not a puzzle you solve. It’s a game you play — and it never ends. You make the best moves you can, knowing you’ll never win. Even if I somehow patched every system and closed every gap, the cost of perfection would cripple the company. ... The truth is that most organizations don’t need more tools. They need to get the fundamentals right. If you can patch consistently, maintain good access controls, and segment your networks so you aren’t running flat, you’re ahead of most of the market — no shiny tools required. Strong patching alone will eliminate most of the attack surface that vendors keep promising to “detect.” ... We can’t blame vendors alone. We created the market they’re serving. We bought into the illusion that innovation equals progress. We ignored the fundamentals because they’re hard and unglamorous. We filled our environments with products we couldn’t fully use and called it maturity. We built complexity and called it strategy. Then we act shocked when the same root causes keep taking us down. Good security still starts with good IT. Always has. Always will. If you don’t know what you own, you can’t protect it.


When IT fails, OT pays the price

Criminal groups are now demonstrating a better understanding of industrial dependencies. The Qilin group has carried out 63 confirmed attacks against industrial entities since mid-2024 and has focused on energy distribution and water utilities. Their use of Windows and Linux payloads gives them wider reach inside mixed environments. Several incidents involved encryption of shared engineering resources and historian systems, which caused operational delays even when controllers remained untouched. ... Across intrusions, attackers favored techniques that exploit weak segmentation. PowerShell activity made up the largest share of detections, followed by Cobalt Strike. The findings show that adversaries rarely need ICS-specific exploits at the start of an attack. They rely on stolen accounts, remote access tools, and administrative shares to move toward engineering assets. ... The vulnerability data reinforces the emphasis on the boundary between enterprise systems and industrial systems. Exploitation of Cisco ASA and FTD devices is ongoing, including attacks that modified device firmware. Several critical flaws in SAP NetWeaver and other manufacturing operations software were also exploited, which created direct pivot points into factory workflows. Recent disclosures affecting Rockwell ControlLogix and GuardLogix platforms allow remote code execution or force the controller into a failed state. Attacks on these devices pose immediate availability and safety risks.


India has the building blocks to influence global standards in AI infrastructure

The convergence of cloud, edge, and connectivity represents the foundation of India’s next AI leap. In a country as geographically and economically diverse as India, AI workloads can’t depend solely on centralized cloud resources. Edge computing allows us to bring compute closer to the source of data, be it in a factory, retail store, or farm, which reduces latency, lowers costs, and enhances privacy. Cloud provides elasticity and scalability, while secure connectivity ensures that both environments communicate seamlessly. This triad enables an AI model to be trained in the cloud, refined at the edge, and deployed securely across networks, unlocking innovation in every geography. We have been building this connected fabric to ensure that access to compute and intelligence isn’t limited by location or scale. ... We see this evolution already unfolding. AI-as-a-Service will thrive when infrastructure, connectivity, and platforms converge under a single, interoperable framework. Each stakeholder – telecoms, data centres, and hyperscalers – brings a unique value: scale, proximity, and reach. ... India is already shaping global conversations around digital equity and secure connectivity, and the same potential exists in AI infrastructure. In the next five years, India could stand out not for the size of its compute capacity but for how effectively it builds an inclusive digital foundation, one that blends cloud, edge, data governance, and innovation seamlessly.


How to Overcome Latency in Your Cyber Career

The presence of latency is not an indictment of your ability. It's a signal that something in your system needs attention. Identifying what creates latency in your professional life and learning how to address it are essential components of long-term growth. With a diagnostic mindset and a willingness to optimize, you can restore throughput and move forward with purpose. ... Career latency often appears when your knowledge no longer reflects current industry expectations. Even highly capable professionals experience slowdown when their technical foundation lags behind evolving practices. ... Unclear goals create misalignment between where you invest your time and where you want to progress. Without a defined direction, you may be working hard but not moving in a way that supports advancement. ... Professionals often operate under heavy workloads that dilute productivity. Too many competing responsibilities, constant context switching or tasks disconnected from your goals can limit your effectiveness and delay growth. ... Career progress can slow when your professional network lacks the signal strength needed to route opportunities in your direction. Without mentorship, community or visibility, growth becomes harder to sustain. ... Missed opportunities often stem from limited readiness. Preparation, bandwidth or timing may be misaligned, and promising chances can disappear before you can act.


Why IT-SecOps Convergence is Non-Negotiable

The message is clear: siloed operations are no longer just inefficient—they’re a security liability. ... The first, and often the most difficult step toward achieving true IT-SecOps convergence, is cultural. For years, IT and security teams have operated in silos, essentially functioning as two different businesses. ... On paper, these Key Performance Indicators (KPIs) appear aligned—both measure speed and efficiency. But in practice, they reflect different views: one is laser-focused on minimizing risk, the other on maximizing uptime. ... The real opportunity lies in establishing a shared mandate. Both teams need to understand that their goals are two sides of the same coin: you can’t have productive systems that aren’t secure, and security that breaks the system isn’t sustainable; therefore, convergence begins not with tools, but with alignment of intent. Once this clicks, both teams begin working from a common set of goals, shared KPIs, and joint decision frameworks. ... The strongest security posture doesn’t come from piling on more tools. It comes from creating continuous alignment between management, security, and user experience. When those three functions operate in sync, IT doesn’t deploy technology that security can’t enforce, security doesn’t introduce controls that slow down work, and users don’t feel the need to bypass policies with shadow apps or risky shortcuts. ... When a unified structure is implemented, policies can be deployed instantly, validated automatically, and adjusted based on real user impact—all without waiting for separate teams to sync.

Daily Tech Digest - November 19, 2025


Quote for the day:

"You are not a team because you work together. You are a team because you trust, respect and care for each other." -- Vala Afshar



How to automate the testing of AI agents

Experts view testing AI agents as a strategic risk management function that encompasses architecture, development, offline testing, and observability for online production agents. ... “Testing agentic AI is no longer QA, it is enterprise risk management, and leaders are building digital twins to stress test agents against messy realities: bad data, adversarial inputs, and edge cases,” says Srikumar Ramanathan ... “Agentic systems are non-deterministic and can’t be trusted with traditional QA alone; enterprises need tools that trace reasoning, evaluate judgment, test resilience, and ensure adaptability over time,” says Nikolaos Vasiloglou ... Part of the implementation strategy will require integrating feedback from production back into development and test environments. Although testing AI agents should be automated, QA engineers will need to develop workflows that include reviews from subject matter experts and feedback from other end users. “Hierarchical scenario-based testing, sandboxed environments, and integrated regression suites—built with cross-team collaboration—form the core approach for test strategy,” says Chris Li ... Mike Finley says, “One key way to automate testing of agentic AI is to use verifiers, which are AI supervisor agents whose job is to watch the work of others and ensure that they fall in line. Beyond accuracy, they’re also looking for subtle things like tone and other cues. If we want these agents to do human work, we have to watch them like we would human workers.”
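Finley's verifier idea can be sketched as a supervisor check that reviews another agent's output before it is accepted. The checks below (non-empty response, no deflection phrases) are illustrative assumptions standing in for the richer accuracy and tone checks he describes; in practice the verifier might itself be an LLM grading the worker agent's output.

```python
# A minimal "verifier" sketch: a supervisor check applied to another
# agent's output, returning a verdict plus the list of problems found.
BANNED_PHRASES = {"as an ai", "i cannot"}   # crude tone/deflection cues

def verify(agent_output: str) -> tuple[bool, list[str]]:
    problems: list[str] = []
    if not agent_output.strip():
        problems.append("empty response")
    lowered = agent_output.lower()
    for phrase in sorted(BANNED_PHRASES):
        if phrase in lowered:
            problems.append(f"contains banned phrase: {phrase!r}")
    return (not problems, problems)

ok, issues = verify("Here is the quarterly summary you asked for.")
bad, why = verify("As an AI, I cannot help with that.")
```

Wiring `verify` between the worker agent and any downstream system gives the "watch them like human workers" loop a concrete enforcement point.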


AI For Proactive Risk Governance In Today’s Uncertain Landscape

Emerging risks are no longer confined to familiar categories like credit or operational performance. Instead, leaders are contending with a complex web of financial, regulatory, technological and reputational pressures that are interconnected and fast-moving. This shift has made it harder for executives to anticipate vulnerabilities and act before risks escalate into real business impact. ... The sheer volume of evolving requirements can overwhelm compliance teams, increasing the risk of oversight gaps, missed deadlines or inconsistent reporting. For many organizations, the challenge is not simply keeping up but proving to regulators and stakeholders that governance practices are both proactive and defensible. ... As businesses evaluate their options to get ahead of risk, AI is top of the list. But not all AI is created equal, and paradoxically, some approaches may introduce added risk. General-purpose large language models can be powerful tools for information synthesis, but they are not designed to deliver the accuracy, transparency and auditability required for high-stakes enterprise decisions. Their probabilistic nature means outputs can at times be incomplete or inaccurate. ... Every AI output must be explainable, traceable and auditable. Executives need to understand the reasoning behind the recommendations they present to boards, regulators or shareholders. Defensible AI ensures that decisions can withstand scrutiny, fostering both compliance and trust between human and machine.


Navigating India's Data Landscape: Essential Compliance Requirements under the DPDP Act

The Digital Personal Data Protection Act, 2023 (DPDP Act) marks a pivotal shift in how digital personal data is managed in India, establishing a framework that simultaneously recognizes the individual's right to protect their personal data and the necessity for processing such data for lawful purposes. For any organization—defined broadly to include individuals, companies, firms, and the State—that determines the purpose and means of processing personal data (a "Data Fiduciary" or DF), compliance with the DPDP Act requires strict adherence to several core principles and newly defined rules. Compliance with the DPDP Act is like designing a secure building: it requires strong foundational principles, robust security systems, specific safety features for vulnerable occupants (Child Data rules), specialized certifications for large structures, and a clear plan for Data Erasure. Organizations must begin planning now, as the core operational rules governing notice, security, child data, and retention come into force eighteen months after the publication date of the DPDP Rules in November 2025. ... DFs must implement appropriate technical and organizational measures. These safeguards must include techniques like encryption, obfuscation, masking, or the use of virtual tokens, along with controlled access to computer resources and measures for continued processing in case of compromise, such as data backups.


Doomed enterprise AI projects usually lack vision

CIOs and other IT decision-makers are under pressure from boards and CEOs who want their companies to be “AI-first” operations; that pressure risks moving too fast on execution rather than choosing the right projects, said Steven Dickens, principal analyst at Hyperframe Research. Smart leaders are cautious, pragmatic, and focused on validated value, not jumping the gun on mission-critical processes. “They are ring-fencing pilot projects to low-risk, high-impact areas like internal code generation or customer service triage,” Dickens said. ... In this experimental period, organizations viewing AI as a way to reimagine business will take an early lead, Tara Balakrishnan, associate partner at McKinsey, said in the study. “While many see leading indicators from efficiency gains, focusing only on cost can limit AI’s impact,” Balakrishnan wrote. Scalability, project costs, and talent availability also play key roles in moving proof-of-concept projects to production. AI tools are not just plug and play, said Jinsook Han, chief strategy and agentic AI officer at Genpact. While companies can experiment with flashy demos and proofs of concept, the technology also needs to be usable and relevant, Han said. ... Many AI projects fail because they are built atop legacy IT systems, Han said, adding that modifying a company’s technology stack, workflows, and processes will maximize what AI can do. Humans also still need to oversee AI projects and outcomes — especially when agentic AI is involved, Han said.


GenAI vs Agentic AI: From creation to action — What enterprises need to know

Generative AI and Agentic AI are two separate – but often interrelated – paradigms. Generative AI excels in authoring or creating content from prompts, while Agentic AI involves taking autonomous actions to achieve objectives in complex workflows that involve multiple steps. ... Agentic AI is the next step in the advance of data science – from construction to self-execution. Agentic systems act as intelligent digital workers capable of managing a vast array of complex multi-step workflows. In banking and financial services, Agentic AI enables autonomous function for trading and portfolio management. Given a strategic objective like “maximize return within an acceptable risk parameter,” it can perform autonomously by monitoring market signals, executing traders’ decisions by rebalancing assets and adjusting portfolios, all in real-time. ... The difference between Generative AI and Agentic AI is starting to fade. We are heading toward a future version of generative models being the “thinking engine” of agentic systems. It will not be Generative AI versus Agentic AI. Intelligent systems will reason, create and act across business ecosystems. For this to happen, there will be a need for interoperable systems and common standards. There are frameworks such as the Model Context Protocol (MCP) and metadata standards like AgentFacts already laying the groundwork for transparent, plug-and-play agent ecosystems that provide trust, transparency, and safe collaboration for agents across platforms.


Pushing the thermal envelope

“When new data centers are designed today, instead of relying solely on the grid, they are integrating on-site power stations with their facilities. These on-site generators function like traditional power stations, and as heat engines, they produce substantial byproduct heat,” Hannah explains. This high-grade, abundant heat opens new possibilities. Technologies such as absorption chillers, historically underutilized in data centers due to insufficient heat, can now be deployed effectively when coupled with BYOP systems. This flexibility extends to operational optimization as well. ... The digital twin methodology allows engineers to create theoretical models of systems to simulate responses and tune control algorithms accordingly. Operational or production-based digital twins extend this approach by using field and system data to continuously improve model accuracy over time. ... The thermal chain and power train now operate less as separate systems and more as partners in a shared ecosystem, each dependent on the other for optimal performance. This growing synergy extends beyond technology, driving closer collaboration between traditionally separate teams across design, engineering, manufacturing, and operations. “The growth is so incredible that customers are looking for products and systems they can deploy quickly – solutions that are easy to install, reliable, densified, cost-effective, and efficient,” says Hannah. “Right now, speed of deployment is the priority.”
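The digital twin loop described above can be illustrated with a deliberately toy model. Every number and name here is an assumption for illustration: a first-order thermal model stands in for the cooling plant, and a proportional control gain is tuned against the twin before any change touches real hardware.

```python
# Illustrative digital-twin loop: a first-order thermal model stands in for
# the real cooling system, and we tune a proportional gain against it.
def simulate(gain: float, setpoint: float = 25.0, steps: int = 200) -> float:
    """Return mean absolute temperature error under a toy thermal model."""
    temp, heat_load = 35.0, 5.0          # arbitrary starting conditions
    total_error = 0.0
    for _ in range(steps):
        cooling = gain * (temp - setpoint)      # proportional controller
        temp += 0.1 * (heat_load - cooling)     # toy heat-balance update
        total_error += abs(temp - setpoint)
    return total_error / steps

# Sweep candidate gains in the twin before touching the real plant.
best_gain = min((g / 10 for g in range(1, 31)), key=simulate)
```

A production-grade twin would replace the one-line physics with a calibrated model fed by field data, which is exactly the "operational digital twin" refinement the article describes.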


Cloud Services Face Scrutiny Under the Digital Markets Act

Today, European authorities announced three new market investigations into cloud-computing services under the Digital Markets Act (DMA), as EU leaders gather in Berlin for the Summit on European Digital Sovereignty — an event billed as a push for an “independent, secure and innovation-friendly digital future for Europe.” Two investigations will assess whether Amazon Web Services (AWS) and Microsoft’s Azure should be designated as gatekeepers, despite apparently “not meeting the DMA gatekeeper thresholds for size, user number and market position.” A third investigation is to assess if the DMA is best placed to “effectively tackle practices that may limit competitiveness and fairness in the cloud computing sector in the EU.” ... Europe is increasingly concerned about data security and sovereignty, spurred in part by the Trump administration’s ongoing hostility to the EU and the powers granted by the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), which allows US law enforcement to obtain data stored abroad, even data concerning non-US citizens. Fears of a potential “kill switch” have pushed digital sovereignty up the EU agenda, with some member states switching away from the biggest cloud providers and adopting European alternatives. However, to switch away from US providers at scale may require competition law enforcement and regulation. The European Commission has passed the Data Act, which requires cloud providers to eliminate switching charges by 2027 and bans “technical, contractual and organisational obstacles” to switching to another provider.


IBM readies commercially valuable quantum computer technology

According to Chong, Loon puts a separate layer on the chip, going three-dimensional, allowing connections between qubits that aren’t immediate neighbors. Even separate chips, the ones contained in the boxes at the base of those giant cryogenic chandelier-shaped refrigerators, can be linked together, says IBM’s Crowder. In fact, that’s already possible with Nighthawk. “You can think of it as wires going between the boxes at the bottom,” Crowder says. “Nighthawk is designed to be able to do that, and it’ll also be used to connect the fault-tolerant modules in the large-scale fault-tolerant system as well.” “That is a big announcement for the industry,” says IDC analyst Heather West. “Now we’re seeing ways to actually begin scaling these systems without squeezing thousands or hundreds of thousands of qubits on a chip.” It’s a misperception that quantum computing isn’t beneficial and can’t be used today. Organizations should already be thinking about how they will use quantum computing, especially if they expect to be able to get a competitive edge from it, West says. “Waiting until the technology advances further could be detrimental because the learning curve that you need to be able to understand quantum and to program quantum algorithms is quite high,” West says. It’s difficult to develop these skills internally, and difficult to bring them into an organization. And then there’s the time it takes to develop use cases and figure out new workflows.


Why modular AI is emerging as the next enterprise architecture standard

LLMs are remarkable, but they are not inherently aligned with enterprise control frameworks. Without a way to govern the reasoning and retrieval pathways, organizations place themselves at risk of unpredictable outputs — and unpredictable headlines. ... The modular approach I explored is built on two ideas: small language models and retrieval-augmented generation. SLMs focus on specific domains rather than being trained to handle everything. Because they are compact and specialized, they can run on more common infrastructure and offer predictable performance. Instead of forcing one model to understand every topic in the enterprise, SLMs stay close to the context they are responsible for. ... Together, SLMs and RAG form a system where intelligence is both efficient and explainable. The model contributes language understanding, while retrieval ensures accuracy and alignment with business rules. It’s an approach that favors control and clarity over brute-force scale — exactly what large organizations need when AI decisions must be defended, not just delivered. ... At the heart of this approach is what I call a semantic layer: a coordination surface where AI agents reason only over the business context and data sources assigned to them. This layer defines three critical elements: what information an agent can access, how its decisions are validated, and when it should escalate or defer to humans. In this design, smaller language models are used where focus matters more than size.
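A minimal sketch of such a semantic layer, with invented agent names and policy fields, might look like this: a policy table defines which sources each agent may reason over, validation happens at the point of action, and anything beyond the agent's mandate is escalated or denied.

```python
# Hypothetical "semantic layer" sketch: each agent reasons only over the
# sources assigned to it, and defers to a human outside its mandate.
AGENT_POLICY = {
    "billing-agent": {
        "sources": {"invoices", "payments"},
        "max_refund": 100.00,      # decisions above this escalate to a human
    },
}

def handle(agent: str, source: str, refund: float) -> str:
    policy = AGENT_POLICY.get(agent)
    if policy is None or source not in policy["sources"]:
        return "denied"            # agent has no mandate over this data
    if refund > policy["max_refund"]:
        return "escalate"          # validated, but deferred to a human
    return "allowed"

assert handle("billing-agent", "invoices", 40.0) == "allowed"
```

The point is not the dictionary itself but where the check sits: between the model's reasoning and any real-world effect, so governance is enforced outside the model.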


The long conversations that reveal how scammers work

The slow cadence is what scammers use to build trust. The study shows how predictable that progression is when viewed at scale. Early messages tend to focus on small talk, harmless questions, light personal details, and daily routines. These early exchanges often contain subtle checks to see if the target is human. Some scammers ask directly. “By the way, there are a lot of fake people here, are you a real person” is one of the lines captured in the study. ... That distance between the greeting and the attempted cash out is the core challenge in studying long-game fraud. Scammers send photos of meals or walks, talk about family, and bring up current events to lay the groundwork for later requests. Scammers often sent images; audio and video were less common, but when used, they tended to appear at moments when scammers wanted to strengthen the sense of presence. The researchers found that 20 percent of conversations included selfie requests, and more than half of those requests took place on WhatsApp. ... Long-haul scams do not rely on high urgency. They rely on comfort, familiarity, and patience. This is a different challenge than technical support scams or prize scams. Defenders need to detect slow moving risk signals before money leaves accounts. The study also shows the scale challenge. Manual research that covers weeks of dialog is difficult to sustain. The researchers address this by blending an LLM with a workflow that pulls in human reviewers at key points.

Daily Tech Digest - November 18, 2025


Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous



The rise of the chief trust officer: Where does the CISO fit?

Trust is touted as a differentiator for organizations looking to strengthen customer confidence and find a competitive advantage. Trust cuts across security, privacy, compliance, ethics, customer assurance, and internal culture. For the custodians of trust, that’s a wide-ranging remit without the obvious definition of other C-suite roles. Typically, the CISO continues to own controls and protection, while the CTrO broadens the remit to reputation, ethics, and customer confidence. Where cybersecurity reports to the CTrO, it is a way to escape IT and the competing priorities with the CIO. This partnership repositions security from ‘department of no’ to business enabler, Forrester notes. ... Patel says that strong alignment between customer trust and business strategy is critical. “If you don’t have credibility in the marketplace, with your partners and customers, your business strategy is dead on arrival,” he tells CSO. Whereas the CISO’s day-to-day responsibilities include checking on the SOC, reviewing alerts, GRC, managing other security operations, and board reporting, the chief trust officer role weaves customer trust throughout, says Patel. “It’s really bringing that trust lens into the decision-making equation and challenging colleagues and partners to think in the same manner.” ... There is also the question of how organizations operationalize trust — and can it be measured? No off-the-shelf platform exists, so CTrOs must build their own dashboards combining customer and employee metrics to track trends and identify early signs of trust erosion.


When Machines Attack Machines: The New Reality of AI Security

Attackers decomposed tasks and distributed them across thousands of instructions fed into multiple Claude instances, masquerading as legitimate security tests and circumventing guardrails. The campaign’s velocity and scale dwarfed what human operators could manage, representing a fundamental leap for automated adversarial capability. Anthropic detected the operation by correlating anomalous session patterns and observing operational persistence achievable only through AI-driven task decomposition at superhuman speeds. Though AI-generated attacks sometimes faltered—hallucinating data, forging credentials, or overstating findings—the impact proved significant enough to trigger immediate global warnings and precipitate major investments in new safeguards. Anthropic concluded that this development brings advanced offensive tradecraft within reach of far less sophisticated actors, marking a turning point in the balance between AI’s promise and peril. ... AI-based offensive operations exploit vulnerabilities across entire ecosystems instantly with the goal of exfiltrating critical intelligence and causing damage to the target. Offensive AI iterates adversarial attacks and novel exploits on a scale human red teams cannot attain. Defenses that work well against traditional techniques often fail outright under continuous, machine-driven attack cycles. 


From chatbots to colleagues: How agentic AI is redefining enterprise automation

According to Flores, agentic AI changes that equation. Each agent has a name, a mission defined by its system prompt, and a connection to company data through retrieval-augmented generation. Many of them also wield tools such as CRMs, databases, or workflow platforms. “An agent is like hiring a new employee who already knows your systems on day one,” Flores said. “It doesn’t just respond — it executes.” This new mode of collaboration also changes how employees interact with technology. Flores noted that his clients often name their agents, treating them as teammates rather than tools. “When marketing needs to check something, they’ll say, ‘Let’s ask Marco,’” he added. “That naming makes adoption easier — it feels human.” ... One of IBM’s first success stories came with password resets — an unglamorous but ubiquitous use case. Two agents now collaborate: one triages the request, while the other verifies credentials and performs the reset, all under the company’s identity-and-access-management system. Each agent has its own digital identity, ensuring audit trails and preventing impersonation. ... Agentic AI isn’t a software upgrade — it’s a redesign of how digital work gets done. Each of the leaders interviewed for this story emphasized that success depends as much on data and governance as on culture and experimentation. Before moving beyond chatbots, IT directors should ask not only “Can we do this?” but “Where should we start — and how do we do it safely?”
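The two-agent reset flow might be sketched as below. This is not IBM's implementation; the agent names, checks, and log structure are invented. It only illustrates the pattern: each agent has its own identity, and every action lands in a shared audit trail, so nothing is done anonymously.

```python
# Sketch of a two-agent collaboration (names and checks are illustrative):
# a triage agent classifies the request, a reset agent verifies and acts,
# and every step is written to an audit log under the agent's own identity.
import datetime
import uuid

AUDIT_LOG = []

def audit(agent_id: str, action: str) -> None:
    AUDIT_LOG.append({
        "agent": agent_id,                      # each agent has its own identity
        "action": action,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

TRIAGE_ID, RESET_ID = f"triage-{uuid.uuid4()}", f"reset-{uuid.uuid4()}"

def triage(request: str) -> str:
    audit(TRIAGE_ID, f"classified: {request}")
    return "password_reset" if "password" in request.lower() else "other"

def reset_password(user: str, verified: bool) -> str:
    if not verified:
        audit(RESET_ID, f"rejected unverified request for {user}")
        return "rejected"
    audit(RESET_ID, f"reset password for {user}")
    return "reset"

category = triage("I forgot my password")
outcome = reset_password("alice", verified=True) if category == "password_reset" else "routed"
```

In production the identities would come from the identity-and-access-management system the article mentions, which is what prevents one agent from impersonating another.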


What to look for in an AI implementation partner

Good AI implementation partners need not be limited to big professional services firms. Smaller firms such as AI consultancies and startups can provide lots of value. Regardless, many organizations require outside expertise when deploying, monitoring, and maintaining AI tools and services. ... “Many firms understand AI tools at a surface level, but what truly matters is the ability to contextualize AI within the nuances of a specific industry,” says Hrishi Pippadipally, CIO at accounting and business advisory firm Wiss. ... An effective partner must be able to balance innovation with the guardrails of security, privacy, and industry-specific compliance, Agrawal adds. “Otherwise, IT leaders will inherit long-term liabilities,” he says. ... “The mistake many organizations make is focusing only on technical credentials or flashy demos,” Agrawal says. “What’s often overlooked and what I prioritize is whether the partner can embed AI into existing workflows without disrupting business continuity. A good partner knows how to integrate AI so that it doesn’t just work in theory, but delivers impact in the complex reality of enterprise operations.” ... “Most evaluation checklists focus on the technical side — security, compliance, data governance, etc.,” says Sara Gallagher, president of The Persimmon Group, a business management consultancy. “While that matters, too many execs are skipping over the thornier questions.


Magnetic tape is going strong in the age of AI, and it's about to get even better

“Aramid permits the manufacture of significantly thinner and smoother media, enabling longer tape lengths in a standard LTO Ultrium cartridge form factor,” the organization noted in a statement. “This material innovation provides 10 TB more native capacity than the currently available 30 TB LTO-10 cartridge, which is manufactured using different materials.” Stephen Bacon, VP for data protection solutions product management at HPE, said the new cartridges are aimed at enterprises spanning an array of industries dealing with high data volumes, from manufacturing to financial services. “AI has turned archives into strategic assets,” Bacon commented. ... Tape storage has a number of distinct advantages, including low cost, durability, and easy portability. According to previous analysis from the LTO Program, companies using tape recorded an 86% lower total cost of ownership (TCO) compared to disk storage. TCO compared to cloud storage was also 66% lower over a 10-year period, figures showed. Notably, the use of tape for unstructured data storage also adds to the appeal, with this now vital in the training process for large language models (LLMs). ... Long-term, tape storage is only going to improve, at least if the LTO Program’s roadmap is to be believed. Through generations 11 through to 14, enterprises can expect to see significant capacity gains, eventually peaking with a 913 TB cartridge.


The rebellion against robot drivel

LLMs are “lousy writers and (most importantly!) they are not you,” Cantrill argues. That “you” is what persuades. We don’t read Steinbeck’s The Grapes of Wrath to find a robotic approximation of what desperation and hurt seem to be; we read it because we find ourselves in the writing. No one needs to be Steinbeck to draft press releases, but if that press release sounds samesy and dull, does it really matter that you did it in 10 seconds with an LLM versus an hour on your own mental steam? A few years ago, a friend in product marketing told me that an LLM generated better sales collateral than the more junior product marketing professionals he’d hired. His verdict was that he would hire fewer people and rely on LLMs for that collateral, which only got a few dozen downloads anyway, from a sales force that numbered in the thousands. Problem solved, right? Wrong. If few people are reading the collateral, it’s likely the collateral isn’t needed in the first place. Using LLMs to save money on creating worthless content doesn’t seem to be the correct conclusion. Ditto using LLMs to write press releases or other marketing content. I’ve said before that the average press release sounds like it was written by a computer (and not a particularly advanced computer), so it’s fine to say we should use LLMs to write such drivel. But isn’t it better to avoid the drivel in the first place? Good PR people think about content and its place in a wider context rather than just mindlessly putting out press releases.


AI’s Impact on Mental Health

“Talking to a therapist can be intimidating, expensive, or complicated to access, and sometimes you need someone—or something—to listen at that exact moment,” said Stephanie Lewis, a licensed clinical social worker and executive director of Epiphany Wellness addiction and mental health treatment centers. Chatbots allow people to vent, process their feelings, and get advice without worrying about being judged or misunderstood, Lewis said. “I also see that people who struggle with anxiety, social discomfort, or trust issues sometimes find it easier to open up to a chatbot than a real person.” Users are “often looking for a safe space to express emotions, receive reassurance, or find quick stress-management strategies,” added Dr. Bryan Bruno, medical director of Mid City TMS, a New York City-based medical center focused on treating depression. ... “Chatbots created for therapy are often built with input from mental health professionals and integrate evidence-based approaches, like cognitive behavioral therapy techniques,” Tse said. “They can prompt reflection and guide users toward actionable steps.” Lewis agreed that some therapeutic chatbots are designed with real therapy techniques, like Cognitive Behavioral Therapy (CBT), which can help manage stress or anxiety. “They can guide users through breathing exercises, mindfulness techniques, and journaling prompts, all great tools,” she said.


Holistic Engineering: Organic Problem Solving for Complex Evolving Systems

Late projects. Architectures that drift from their original design. Code that mysteriously evolves into something nobody planned. These persistent problems in software development often stem not from technical failures ... Holistic engineering is the practice of deliberately factoring these non-technical forces into our technical decisions, designs, and strategies. ... Holistic engineering involves considering, during technical design, not only traditional technical factors but also all the other non-technical forces that will be influencing your system anyhow. By acknowledging these forces, teams can view the problem as an organic system and influence, to some extent, various parts of the system. ... Consider the actual information structure within your organization. Understanding actual workflow patterns and communication channels reveals how work truly gets accomplished. These communication patterns often differ significantly from the formal hierarchy. Next, identify which processes could block your progress. For example, some organizations require approval from twenty people, including the CTO, to decide on a release. ... Organizations that embrace holistic engineering gain predictable control over forces that typically derail technical projects. Instead of reacting to "unforeseen" delays and architectural drift, teams can anticipate and plan for organizational constraints that inevitably influence technical outcomes.
At its heart, industrial AI is about automating and optimising business processes to improve decision-making, enhance efficiency and increase profitability. It requires the collection of vast volumes of data from sources like IoT sensors, cameras, and back-office systems, and the application of machine and deep learning algorithms to surface insights. In some cases, the AI powers robots to supercharge automation, and in others, it utilises edge computing for faster, localised processing. Agentic AI helps firms go even further, by working autonomously, dynamically and intelligently to achieve the goals it is set. ... “You get the data in from IoT and you trigger that as an anomaly,” says Pederson. “You analyse the anomaly against all your historic records – other incidents that have happened with customers and how they have been fixed. You relate it to your knowledge base articles. And then you relate it to your inventory on your service vans, like which service vans and which technicians are equipped to do the job. “So it’s the whole estate of structured, unstructured and processed data. In the past, they would send a technician out, and they could get it right 84% of the time. Now they have improved their first-time fix rate to 97%.” Both this and the aforementioned field service deployment feature an “agentic dispatcher” which autonomously creates and publishes the schedules to the relevant service technicians, updates their calendar and suggests the best route to take. “In the very near future, AI agents will not only be helping to address work for people behind a desk, but guiding robots directly,” says Pederson.


What security pros should know about insurance coverage for AI chatbot wiretapping claims

There are subtle differences in the way courts are viewing privacy litigation arising from the use of AI chatbots in comparison to litigation involving analytical tools like session replay or cookies. Both claims involve allegations that a third party is intercepting communications without proper consent, often under state wiretapping laws, but the legal arguments and defenses vary because the data being collected is different. ... Whether or not an exclusion will ultimately impact coverage depends both on the specific language of the exclusion and also the allegations raised in the underlying lawsuit. For example, broadly worded exclusions with “catch-all” phrases precluding coverage for any statutory violation may be more difficult for policyholders to overcome than an exclusion that identifies by name specific statutes. As these claims are relatively new, we have yet to see significant examples of how this plays out in the context of insurance coverage litigation. However, we saw similar coverage arguments in the context of insurance coverage litigation where the underlying suit alleged violations of the Biometric Information Privacy Act (BIPA). ... To help mitigate risks, organizations should review their user consent mechanisms for AI Bot Communications. Consent does not always mean signing a form, but could include prominently displaying chatbot privacy notices before any data collection, providing easy access to the business’s privacy policy detailing how chatbot interactions are stored, and using automated disclaimers at the start of each chat session.

Daily Tech Digest - November 17, 2025


Quote for the day:

"Keep steadily before you the fact that all true success depends at last upon yourself." -- Theodore T. Hunger



You already use a software-only approach to passkey authentication - why that matters

After decades of compromises, exfiltrations, and financial losses resulting from inadequate password hygiene, you'd think that we would have learned by now. However, even after comprehensive cybersecurity training, research shows that 98% of users are still easily tricked into divulging their passwords to threat actors. Realizing that hope -- the hope that users will one day fix their password management habits -- is a futile strategy to mitigate the negative consequences of shared secrets, the tech industry got together to invent a new type of login credential. The passkey doesn't involve a shared secret, nor does it require the discipline or the imagination of the end user. Unfortunately, passkeys are not as simple to put into practice as passwords, which is why a fair amount of education is still required. ... Passkeys still involve a secret. But unlike passwords, users just have no way of sharing it -- not with legitimate relying parties and especially not with threat actors. ... In most situations where users are working with passkeys but not using one of the platform authenticators, they'll most likely be working with a virtual authenticator. These are essentially BYO authenticators, none of which rely on the device's underlying security hardware for any passkey-related public key cryptography or encryption tasks, unlike platform authenticators.
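The core passkey property, that the relying party stores only a public key while the secret never leaves the authenticator, can be illustrated with a deliberately toy signature scheme. The sketch below uses textbook RSA with tiny primes purely for illustration; real passkeys use WebAuthn flows with hardened algorithms, key sizes, and attestation, none of which are shown here.

```python
# Toy illustration of the passkey idea: the relying party stores only a
# public key and verifies a signed challenge; the private key never leaves
# the authenticator. (Textbook RSA with tiny primes -- NOT real WebAuthn.)
import hashlib
import secrets

# Authenticator side: key generation (p and q far too small for real use).
p, q = 1000003, 999983
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # private exponent stays on the device

def sign(challenge: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)                  # authenticator signs the challenge

# Relying-party side: knows only (n, e), never the private key.
def verify(challenge: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = secrets.token_bytes(32)      # fresh per login, so replays fail
ok = verify(challenge, sign(challenge))
```

Because the server holds nothing that can authenticate on its own, a breach of the relying party leaks no reusable secret, which is the property passwords can never offer.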


Getting started with agentic AI

A working agentic AI strategy relies on AI agents connected by a metadata layer, whereby people understand where and when to delegate certain decisions to the AI or pass work to external contractors. It’s a focus on defining the role of the AI and where people involved in the workflow need to contribute. ... Data lineage tracking should happen at the code level through metadata propagation systems that tag every data transformation, model inference and decision point with unique identifiers. Willson says this creates an immutable audit trail that regulatory frameworks increasingly demand. According to Willson, advanced implementations may use blockchain-like append-only logs to ensure governance data cannot be retroactively modified. ... One of the areas IT leaders need to consider is that their organisation will more than likely rely on a number of AI models to support agentic AI workflows. ... Organisations need to have the right data strategy in place, and they should already be well ahead on their path to full digitisation, where automation through RPA is being used to connect many disparate workflows. Agentic AI is the next stage of this automation, where an AI is tasked with making decisions in a way that would have previously been too clunky using RPA. However, the automation of workflows and business processes is just one piece of the overall jigsaw.
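Metadata propagation of the kind Willson describes can be sketched with a small decorator, assuming a hypothetical log structure: every transformation wraps its output with a unique identifier and a pointer to the record that fed it, so the chain of custody can be replayed later.

```python
# Sketch of code-level lineage tracking via metadata propagation: each
# transformation tags its output with a unique ID and a link to its input,
# building an append-only trail. (The record structure is illustrative.)
import uuid

LINEAGE_LOG = []   # append-only in spirit; production systems make this immutable

def tracked(name):
    """Decorator that tags a transformation and records its lineage."""
    def wrap(fn):
        def inner(payload):
            record_id = str(uuid.uuid4())
            result = fn(payload["data"])
            LINEAGE_LOG.append({"id": record_id, "step": name,
                                "parent": payload.get("id")})
            return {"id": record_id, "data": result}
        return inner
    return wrap

@tracked("clean")
def clean(data):
    return [x for x in data if x is not None]

@tracked("total")
def total(data):
    return sum(data)

out = total(clean({"id": None, "data": [1, None, 2]}))
```

The append-only log is the piece that a blockchain-style store would harden: once each step's ID and parent are committed, the trail cannot be retroactively rewritten.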


Human-centric IAM is failing: Agentic AI requires a new identity control plane

Agentic AI does not just use software; it behaves like a user. It authenticates to systems, assumes roles and calls APIs. If you treat these agents as mere features of an application, you invite invisible privilege creep and untraceable actions. A single over-permissioned agent can exfiltrate data or trigger erroneous business processes at machine speed, with no one the wiser until it is too late. The static nature of legacy IAM is the core vulnerability. You cannot pre-define a fixed role for an agent whose tasks and required data access might change daily. The only way to keep access decisions accurate is to move policy enforcement from a one-time grant to a continuous, runtime evaluation. ... Securing this new workforce requires a shift in mindset. Each AI agent must be treated as a first-class citizen within your identity ecosystem. First, every agent needs a unique, verifiable identity. This is not just a technical ID; it must be linked to a human owner, a specific business use case and a software bill of materials (SBOM). The era of shared service accounts is over; they are the equivalent of giving a master key to a faceless crowd. Second, replace set-and-forget roles with session-based, risk-aware permissions. Access should be granted just in time, scoped to the immediate task and the minimum necessary dataset, then automatically revoked when the job is complete. Think of it as giving an agent a key to a single room for one meeting, not the master key to the entire building.
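The "key to a single room for one meeting" model above can be sketched roughly as follows. `AgentGrant` and `RuntimePolicy` are hypothetical names for illustration only; in practice this logic would live in an identity provider or policy engine, not an in-process class. Each grant binds an agent identity to a human owner, carries only the scopes needed for the immediate task, and expires on its own.

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    """A short-lived, task-scoped credential for one agent identity."""
    agent_id: str        # unique, verifiable agent identity
    owner: str           # accountable human owner linked to the agent
    scopes: frozenset    # minimum permissions for the immediate task
    expires_at: float    # access is automatically revoked past this time
    grant_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class RuntimePolicy:
    """Evaluates every access at call time rather than via a one-time role grant."""

    def issue(self, agent_id, owner, scopes, ttl_seconds):
        # Just-in-time issuance: scoped to the task, valid only briefly.
        return AgentGrant(agent_id, owner, frozenset(scopes),
                          time.time() + ttl_seconds)

    def authorize(self, grant, scope):
        if time.time() >= grant.expires_at:
            return False               # expired: the key no longer opens the room
        return scope in grant.scopes   # deny anything outside the task's scope
```

The design point is that `authorize` runs on every call, so a change in risk posture (here, simple expiry) takes effect immediately instead of waiting for a periodic access review.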


Don’t ignore the security risks of agentic AI

We need policy engines that understand intent, monitor behavioral drift and can detect when an agent begins to act out of character. We need developers to implement fine-grained scopes for what agents can do, limiting not just which tools they use, but how, when and under what conditions. Auditability is also critical. Many of today’s AI agents operate in ephemeral runtime environments with little to no traceability. If an agent makes a flawed decision, there’s often no clear log of its thought process, actions or triggers. That lack of forensic clarity is a nightmare for security teams. In at least some cases, models resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors. Finally, we need robust testing frameworks that simulate adversarial inputs in agentic workflows. Penetration-testing a chatbot is one thing; evaluating an autonomous agent that can trigger real-world actions is a completely different challenge. It requires scenario-based simulations, sandboxed deployments and real-time anomaly detection. ... Until security is baked into the development lifecycle of agentic AI, rather than being patched on afterward, we risk repeating the same mistakes we made during the early days of cloud computing: excessive trust in automation before building resilient guardrails.
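Detecting when an agent "begins to act out of character" can be illustrated with a minimal drift monitor: baseline the agent's tool usage, then flag actions that are either never seen in the baseline or used wildly out of proportion to history. The class name, the frequency-ratio heuristic, and the threshold are illustrative assumptions, not a production detector.

```python
from collections import Counter

class DriftMonitor:
    """Toy behavioral-drift check: flags tools never seen in the baseline,
    or tools suddenly used far more often than their historical share."""

    def __init__(self, ratio_threshold=3.0):
        self.baseline = Counter()   # historical action counts
        self.observed = Counter()   # live action counts
        self.ratio_threshold = ratio_threshold

    def train(self, actions):
        self.baseline.update(actions)

    def check(self, action):
        self.observed[action] += 1
        if action not in self.baseline:
            return "unknown_action"   # the agent is doing something new
        base_share = self.baseline[action] / sum(self.baseline.values())
        obs_share = self.observed[action] / sum(self.observed.values())
        if obs_share > self.ratio_threshold * base_share:
            return "frequency_drift"  # known tool, used out of proportion
        return "ok"
```

A real policy engine would weigh arguments, timing and sequences, not just counts, but even this crude signal catches the two failure modes named above: a never-before-seen action, and a familiar action at an unfamiliar rate.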


How Technological Continuity and High Availability Strengthen IT Resilience in Critical Sectors

Within the context of business continuity, high availability ensures technology supports the organization’s ability to operate without disruption. It minimizes downtime and maintains the confidentiality, integrity, and availability of information. ... To achieve true high availability, organizations implement architectures that combine redundancy, automation, and fault tolerance. Database replication, whether synchronous or asynchronous, allows data to be duplicated across primary and secondary nodes, ensuring continuous access in the event of a failure. Synchronous replication guarantees data consistency but introduces latency, while asynchronous models reduce latency at the expense of a small data gap. Both approaches, when properly configured, strengthen the integrity and continuity of critical databases. ... One of the most effective strategies to reduce technological dependence is the implementation of hybrid continuity models that integrate both on-premises and cloud environments. Organizations that rely exclusively on a single cloud service provider expose themselves to the risk of total outage if that provider experiences downtime or disruption. By maintaining mirrored environments between cloud infrastructure and local servers, it is possible to achieve operational flexibility and independence across channels.
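The synchronous-versus-asynchronous trade-off can be made concrete with a toy model; the classes below are illustrative and do not reflect any particular database's replication protocol. A synchronous write blocks until the replica has applied it (consistent at commit time, at the cost of latency), while an asynchronous write is acknowledged immediately and leaves a small data gap until pending changes ship.

```python
class Replica:
    """Stand-in for a secondary node."""
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

class Primary:
    """Stand-in for a primary node supporting both replication modes."""
    def __init__(self, replica, mode="sync"):
        self.data = {}
        self.replica = replica
        self.mode = mode
        self.pending = []   # async only: writes not yet on the replica

    def write(self, key, value):
        self.data[key] = value
        if self.mode == "sync":
            # Commit waits for the replica: consistent, but adds latency.
            self.replica.apply(key, value)
        else:
            # Acknowledged immediately: faster, but a small gap exists
            # until the pending batch is shipped.
            self.pending.append((key, value))

    def flush(self):
        while self.pending:
            self.replica.apply(*self.pending.pop(0))
```

If the primary fails before `flush()`, the asynchronous replica is missing exactly the `pending` writes, which is the "small data gap" the excerpt refers to.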


The tech that turns supply chains from brittle to unbreakable

When organizations begin crafting a supply chain strategy, one of the most common misconceptions is viewing it as purely a logistics exercise rather than a holistic framework that spans procurement, planning and risk management. Another frequent misstep is underestimating the role of technology. Digital tools are essential for visibility, predictive analytics and automation, not optional. Equally critical is recognizing that strategy is not static; it must evolve continuously to address shifting market conditions and emerging threats. ... Resilience comes from treating cyber and physical risks as one integrated challenge. That means embedding security into every layer of the supply chain, from vendor onboarding to logistics execution, while leveraging advanced visibility tools and zero trust principles. ... Executive buy‑in for resilience investments begins with reframing the conversation from cost to value. We position resilience as a strategic enabler rather than an expense by linking it to business continuity, customer trust and competitive advantage. Instead of focusing solely on immediate ROI, emphasize measurable risk reduction, regulatory compliance and the cost of inaction during disruptions. Use real‑world scenarios and data to show how resilience safeguards revenue streams and accelerates recovery when crises hit. Engage executives early, align initiatives with corporate objectives and present resilience as a driver of long‑term growth and brand reputation.


ISO and ISMS: 9 reasons security certifications go wrong

Without management’s commitment, it’s often difficult to get all employees on board and ensure that ISO standards, or even IT baseline protection standards, are integrated into daily business operations. As a result, companies should provide top-down clarity about the importance of such initiatives — even if implementation can be costly and inconvenient. “Cleaning up” isn’t always pleasant, but the result is all the more worthwhile. ... Without genuine integration into daily operations, the certification becomes useless, and the benefits it offers remain unrealized. In the worst-case scenario, organizations even end up losing money, while also missing out on the implementation’s potential value. When integrating a management system, it’s important not to get bogged down in details. The practical application of the system in real-world work situations is crucial for its success. ... Employees need to understand why the implementation is important, how it will be integrated into their daily workflows, and how it will make their work easier. If this isn’t the case, it will be difficult to implement the system and maintain any resulting certification. ... Without a detailed plan, companies focus on areas that are irrelevant or do not meet the requirements of the ISO/IT baseline protection standards. Furthermore, if the implementation of a management system takes too long, regular business development can overtake the process itself, resulting in duplicated work to keep up with changes.


State of the API 2025: API Strategy Is Becoming AI Strategy

What distinguishes fully API-first teams? They treat APIs as long-lived products with roadmaps, SLAs, versioning, and feedback loops. They align product and engineering early, embed governance into workflows, and standardize patterns so that consumers, human or agent, can rely on consistent contracts. In our experience, that "productization" of APIs is what unlocks long-lived, reusable APIs and parallel delivery. When your agents can trust your schemas, error semantics, and rate-limit behaviors, they can compose capabilities far faster than code-level abstractions ever could. ... As AI agents become primary API consumers, security assumptions must evolve. 51% of developers cite unauthorized or excessive agent calls as a top concern; 49% worry about AI systems accessing sensitive data they shouldn't; and 46% highlight the risk of credential leakage and over-scoped keys. Traditional controls, designed for predictable human traffic, struggle against machine-speed persistence, long-running automation, and credential amplification. ... Even as API-first adoption grows, collaboration remains a bottleneck. 93% of teams report challenges such as inconsistent documentation, duplicated work, and difficulty discovering existing APIs. With 69% of respondents spending 10+ hours per week on API-related tasks, and with a global workforce, asynchronous collaboration is the norm. 
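One common control against the machine-speed persistence and excessive agent calls cited above is per-credential rate limiting. The sketch below is a standard token-bucket limiter, written here for illustration (the class name and the injectable `now` parameter, included to make the behavior testable, are my assumptions, not anything from the survey):

```python
import time

class AgentRateLimiter:
    """Token-bucket limiter keyed by credential: caps machine-speed agent
    traffic that limits tuned for human users would never catch."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens refilled per second
        self.burst = burst            # maximum bucket size
        self.buckets = {}             # credential -> (tokens, last_refill_time)

    def allow(self, credential, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(credential, (self.burst, now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[credential] = (tokens - 1.0, now)
            return True
        self.buckets[credential] = (tokens, now)
        return False
```

Because the bucket is keyed by credential rather than by source IP or session, a long-running agent hammering one over-scoped key is throttled even when its individual requests look well-formed.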


Embedded Intelligence: JK Tyre's Smart Tyre Use Case

Unlike traditional valve-mounted tire pressure monitoring devices, or TPMS, these sensors are permanently integrated for consistent data accuracy. Each chip is designed to last five to seven years, depending on usage and conditions. "These sensors are permanently embedded during the assembly process," said V.K. Misra, technical director at JK Tyre. "They continuously send live data on air pressure and temperature to the dashboard and mobile device. The moment there's a variation, the driver is alerted before a small problem becomes a serious risk." ... The embedded version takes this further by integrating the chip within the tire's internal structure, creating a closed feedback loop between the tire, the driver and the cloud. "We have created an entire connected ecosystem," Misra said. "The tire is just the beginning. The data generated feeds predictive models for maintenance and safety. Through Treel, our platform can now talk to vehicles, drivers and service networks simultaneously." The Treel platform processes sensor data through APIs and cloud analytics, providing actionable insights for drivers and fleet operators. Over time, this data contributes to predictive maintenance models, product design improvements and operational analytics for connected vehicles. ... "AI allows decisions that earlier took days to happen within minutes," Misra said. "It also provides valuable data on wear patterns and helps us improve quality control across plants."
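The alerting behavior Misra describes, flagging a variation in pressure or temperature before it becomes a serious risk, reduces to a threshold check against nominal bounds. The sketch below is purely illustrative; the numeric bounds are made-up placeholder values, not JK Tyre's actual sensor calibration.

```python
def check_tyre(pressure_kpa, temp_c,
               pressure_range=(210, 250), temp_max=90):
    """Compare one sensor reading against nominal bounds and return alerts.
    Thresholds are illustrative placeholders, not real calibration data."""
    alerts = []
    if pressure_kpa < pressure_range[0]:
        alerts.append("low_pressure")
    elif pressure_kpa > pressure_range[1]:
        alerts.append("high_pressure")
    if temp_c > temp_max:
        alerts.append("over_temperature")
    return alerts
```

In the connected ecosystem described above, the same readings that trigger these dashboard alerts would also stream to the cloud, where the predictive-maintenance models operate on their history rather than on single readings.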


Regulation gives structure and voice to security leaders: Darshan Chavan

Chavan has witnessed a remarkable shift over the past decade in how businesses view cybersecurity. ... The increased visibility of cybersecurity, he says, has given CISOs a strategic voice. “Frequent regulatory updates, data breaches in the news, and rising public awareness have made organisations realize that cybersecurity is fundamental to business continuity,” he explains. “Every organisation now understands that to operate in a fast-evolving digital landscape, you need a cybersecurity leader with authority — and frameworks, regulations, and policies that are implemented and accepted by the business.” He views cybersecurity guidelines — whether from SEBI, RBI, or other regulatory bodies — as empowering rather than restrictive. “Regulation gives structure and voice to security leaders,” he says. “It ensures that cybersecurity is treated not as a cost centre but as a core enabler of business trust.” ... While he acknowledges that the DPDP Act will help formalise this journey, he refuses to wait for regulation to act. “I’m not waiting for the law to push me,” he says. “Tomorrow, investors will start asking how we manage their data, how we protect their bank account numbers, and how we ensure confidentiality. I want to be ready before those questions arise.” Beyond data privacy, Chavan highlights network defense and layered security as ongoing imperatives. “