Daily Tech Digest - October 07, 2025


Quote for the day:

"There is only one success – to be able to spend your life in your own way." -- Christopher Morley



5 Critical Questions For Adopting an AI Security Solution

An AI-SPM solution must be capable of seamless AI model discovery, creating a centralized inventory for complete visibility into deployed models and associated resources. This helps organizations monitor model usage, ensure policy compliance, and proactively address any potential security vulnerabilities. By maintaining a detailed overview of models across environments, businesses can proactively mitigate risks, protect sensitive data, and optimize AI operations. ... An effective AI-SPM solution must tackle risks that are specific to AI systems. For instance, it should protect training data used in machine learning workflows, ensure that datasets remain compliant under privacy regulations, and identify anomalies or malicious activities that might compromise AI model integrity. Make sure to ask whether the solution includes built-in features to secure every stage of your AI lifecycle—from data ingestion to deployment. ... When evaluating an AI-SPM solution, ensure that it automatically maps your data and AI workflows to governance and compliance requirements. It should be capable of detecting non-compliant data and providing robust reporting features to enable audit readiness. Additionally, features like automated policy enforcement and real-time compliance monitoring are critical to keeping up with regulatory changes and preventing hefty fines or reputational damage.
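The centralized-inventory-plus-policy idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's AI-SPM API: the model names, record fields, and the policy rule are all invented for the example.

```python
# Hypothetical sketch of a centralized AI model inventory with a simple
# policy check: every deployed model must declare an owner and a data
# classification, and models touching PII must have had a privacy review.

def find_policy_violations(inventory):
    """Return (model name, reason) pairs that violate the invented policy."""
    violations = []
    for model in inventory:
        missing = [f for f in ("owner", "data_classification") if not model.get(f)]
        if missing:
            violations.append((model["name"], f"missing fields: {missing}"))
        elif model["data_classification"] == "pii" and not model.get("privacy_reviewed"):
            violations.append((model["name"], "PII model lacks privacy review"))
    return violations

# Invented example records standing in for discovered models.
inventory = [
    {"name": "churn-predictor", "owner": "ml-team", "data_classification": "internal"},
    {"name": "support-chatbot", "owner": "cx-team", "data_classification": "pii"},
    {"name": "shadow-model", "owner": None, "data_classification": "internal"},
]

for name, reason in find_policy_violations(inventory):
    print(f"{name}: {reason}")
```

The point of the sketch is the shape of the check, not the specific rules: once discovery feeds a single inventory, compliance becomes a query over it rather than a manual audit.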


The architecture of lies: Bot farms are running the disinformation war

As bots become more common and harder to tell from real users, people start to lose confidence in what they see online. This creates the liar's dividend, where even authentic content is questioned simply because everyone knows fakes are out there. If any critical voice or inconvenient fact can be dismissed as just a bot or a deepfake, democratic debate takes a hit. AI-driven bots can also create the illusion of consensus. By making a hashtag or viewpoint trend, they create the impression that everyone is talking about it, or that an extreme position enjoys broader support than it actually has. ... It’s still an open question how well online platforms stop malicious, bot-driven content, even though they are the ones responsible for policing their own networks. Harmful AI bots continue to get through the defenses of major social media platforms. Even though most have rules against automated manipulation, enforcement is weak and bots exploit the gaps to spread disinformation. Current detection systems and policies aren’t keeping up, and platforms will need stronger measures to address the problem. ... The EU and the US are both moving to address bot-driven disinformation. In the EU, the Digital Services Act obliges large online platforms to assess and mitigate systemic risks such as manipulation, and to provide vetted researchers with access to platform data.


Is the CISO chair becoming a revolving door?

“A CISO is interacting with a lot of interfaces, and you need to have soft skills and communicate well with others. In many cases, you need to drive others to take action, and that’s super tedious. It’s very difficult to keep doing it over time,” Geiger Maor says. “In many cases, you’re in direct conflict with company goals and your goals. You’re like a salmon fish going upstream against everybody else. This makes it very difficult to keep a long tenure.” ... That constant exposure to risk and blame is another reason some CISOs hesitate to take the role in the first place, according to Rona Spiegel, senior manager, security and trust, mergers and acquisitions at Autodesk and former cloud governance leader at Wells Fargo and Cisco. “The bad guys, especially now with AI and automation, they’re getting more sophisticated, and they only have to be right once, but the CISO has to be right all day every day. They only have to be wrong once, and they get blamed … you’re an operational cost centre no matter what because you’re not bringing in revenue, so if something goes wrong … all roads lead to the CISO,” Spiegel says. ... Chapman is also seeing a rise in fractional CISOs, brought in part-time to set up frameworks or oversee specific projects. “It really comes down to the individual,” he says. “Some want that top seat, speaking to the board, communicating risk. But I am also seeing some say, ‘It doesn’t have to be a CISO role.’”


RPA versus hyperautomation: Understanding accuracy (performance) benchmarks in practice

RPA is like that reliable coworker who never complains and does exactly what you ask. It loves repetitive, predictable tasks such as copying and pasting data, moving files between systems or generating standard reports. When everything goes according to plan, RPA is perfect. ... Hyperautomation is the next-level upgrade. It combines RPA with AI, natural language processing (NLP), intelligent document processing (IDP), process mining and workflow orchestration. In simple terms, it doesn’t just follow rules. It learns, adapts and keeps things moving even when the world throws curveballs. With hyperautomation, processes that would have stopped RPA cold continue without a hitch. ... RPA and hyperautomation are not rivals. They are more like teammates with different strengths. RPA shines when tasks are stable and repetitive, quietly doing its job without fuss. Hyperautomation brings in intelligence, flexibility and the ability to handle entire processes from start to finish. When applied thoughtfully, hyperautomation cuts down on manual corrections, handles exceptions smoothly and delivers value at scale. All this happens without the IT team needing to hire extra coffee runners to fix errors or babysit the robots. The real goal is to build automation that works at the process level, adapts to change and keeps running even when things go off script.


The pros and cons of AI coding in the IT industry

Although now being used by the majority of programmers, AI tools were not universally welcomed upon their launch, and it has taken time to move beyond the initial doubts and suspicion surrounding generative AI. It’s important to note that risks remain when using AI-generated code, which organizations will have to mitigate. “Integrating AI into our coding processes was initially met with skepticism, both within our organization and across the industry,” Jain explains. “Concerns included AI's ability to comprehend complex codebases, the potential for generating buggy code, adherence to company standards, and issues surrounding code and data privacy.” However, since the launch of the first generative AI tools at the end of 2022, Jain says that the rapid evolution of AI technology has alleviated many concerns, with features such as codebase indexing and secure training protocols addressing the major objections. “These advancements have enabled AI tools to understand code context, follow company standards, and maintain robust security measures,” Jain tells ITPro. Nevertheless, security and accountability are also major factors for any IT company to consider when looking to use AI as part of the development process, and research continues to show glaring vulnerabilities in AI code. There are certain steps that simply can’t be replaced by AI.


Why AI Is Forcing an Invisible Shift in Risk Management

Without the need for complex, technical coding knowledge, there are increasingly more departments within a business capable of driving and contributing to the development lifecycle, forcing a shift from centralized innovation to development that is fractalized across the entire organization. This shift has been revolutionary, driving more lucrative development by empowering technical teams and business leaders to align on goals and work hand-in-hand. Still, this transition has changed the organization’s relationship with risk. ... In the age of distributed application building, organizations have to raise more questions as it relates to governance and risk, which can mean many different things depending on where the technology sits in the business. Is the application going to be customer-facing? How sensitive is the data? How should it be stored? What are some other privacy considerations? These are all questions businesses must ask in the age of fractured development — and the answers will vary from case to case. ... The shift to decentralized development is not the first change technology has seen, and it’s certainly not the last. The key to staying ahead of the curve is paying attention to the invisible shifts that come with these disruptions, such as the changes that have recently come with the adoption of AI and low code. As these technologies reimagine the typical risk management and compliance model, it’s important for businesses to come to terms with adaptive governance and react as such.


How cross-functional teams rewrite the rules of IT collaboration

When done right, IT isn’t just an optional part of cross-functional collaboration, it’s an integral part of what makes collaboration possible. “There’s a lot of overlap now between IT, sales, finance and regulatory compliance,” says George Dimov, managing owner of Dimov Tax. ... What happens when IT plays a key role in breaking down barriers? First, getting IT involved in cross-functional teams means IT is at the table from day one. Rather than having an environment where a department requests a report or tool from IT after the fact, or has it digitize information later on, IT is present in all meetings. As more organizations recognize the inherent importance of digital transformation, the need for IT expertise — including perspectives from individuals with different types of IT experience — becomes more pronounced. It’s up to the CIO to provide the cross-functional leadership that ensures IT is involved in such efforts from the start. ... Even in situations when IT isn’t directly involved in day-to-day collaboration, it can still play a valuable role by providing technology resources that aid and facilitate collaboration. Ideally, IT should be part of the solution to eliminate barriers, whether that’s through digital sharing tools, reporting mechanisms, or something else. IT can and should be at the forefront of enabling cross-functional collaboration between teams and departments.


Service-as-software: The new control plane for business

Historically, enterprises ran on islands of automation — enterprise resource planning for the back office and, later, a proliferation of apps. Customer relationship management was the first to introduce a new operating model and a new business model. Today, the enterprise itself must begin to operate like a software company. That requires harmonizing those islands into a single unified layer where data and application logic collapse into an integrated System of Intelligence. Agents rely on this harmonized context to make decisions and, when needed, invoke legacy applications to execute workflows. Operating this way also demands a new operations model: a build-to-order assembly line for knowledge work that blends the customization of consulting with the efficiency of high-volume fulfillment. Humans supervise agents, and in doing so progressively encode their expertise into the system. ... The important point to remember is that islands of automation impede management’s core function – planning, resource allocation and orchestration with full visibility across levels of detail and business domains. Data lakes do not solve this by themselves; each star schema is another island. Near-term, organizations can start small and let agents interrogate a single domain (for example, the sales cube) and take limited actions by calling systems of record via MCP servers, for example, viewing a customer’s complaints and initiating a return authorization.


Companies are making the same mistake with AI that Tesla made with robots

Shai Ahrony, CEO of marketing agency Reboot Online, calls this phenomenon the "AI aftershock." "Companies that rushed to cut jobs in the name of AI savings are now facing massive, and often unexpected costs," he told ZDNET. "We've seen customers share examples of AI-generated errors -- like chatbots giving wrong answers, marketing emails misfiring, or content that misrepresents the brand -- and they notice when the human touch is missing." ... Some companies have already learned painful lessons about AI's shortcomings and adjusted course accordingly. In one early example from last year, McDonald's announced that it was retiring an automated order-taking technology that it had developed in partnership with IBM after the AI-powered system's mishaps went viral across social media. ... McDonald's and Klarna's decisions to backtrack on AI in favor of humans are reminiscent of a similar about-face from Tesla. In 2018, after Tesla failed to meet production quotas for its Model 3, CEO Elon Musk admitted in a tweet that the electric vehicle company's reliance upon "excessive automation…was a mistake." "Humans are underrated," he added. Businesses aggressively pushing to deploy AI-powered customer service initiatives in the present could come to a similar conclusion: that even though the technology helps to cut spending and boost efficiency in some domains, it isn't able to completely replicate the human touch.


How Can the Usage of AI Help Boost DevOps Pipelines

In recent times, AI is playing a key role in CI/CD by using machine learning algorithms and intelligent automation to detect errors proactively, optimize resource usage and accelerate release cycles. With AI, CI/CD pipelines can learn, adapt and optimize themselves, redefining software development from start to finish. By combining AI and DevOps, you can eliminate silos, recover faster from outages and open up new business revenue streams. Today’s businesses are increasingly leveraging artificial intelligence capabilities throughout their DevOps pipelines to make their CI/CD pipelines intelligent, thereby enabling them to predict problems faster, optimize the pipelines if needed, and recover from failures without the need for any human intervention. ... When you adopt AI into the DevOps practices in your organization, you are applying specific technologies to automate, optimize, and enhance each stage of the software development lifecycle – coding, testing, deployment, and monitoring. Today’s organizations are using AI in their DevOps pipelines to drive innovation, enabling teams to work seamlessly and achieve rapid development and deployment cycles. ... AI can help in DevSecOps in ways such as automating security testing, automating threat detection, and streamlining incident response. You can use AI-powered tools to scan your application source code for security vulnerabilities, automate software patches, automate incident responses, and monitor in real-time to identify anomalies.
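One concrete flavor of "detect errors proactively" is anomaly detection on pipeline metrics. The sketch below uses a plain statistical outlier test on build durations as a stand-in for the ML-based detection an AI-assisted pipeline might run; the threshold and the duration data are illustrative, not taken from any real system.

```python
from statistics import mean, stdev

def flag_anomalous_builds(durations, threshold=2.0):
    """Flag indices of build durations more than `threshold` standard
    deviations from the mean of the history -- a crude stand-in for the
    learned anomaly detection described in the article."""
    if len(durations) < 3:
        return []  # not enough history to establish a baseline
    mu, sigma = mean(durations), stdev(durations)
    if sigma == 0:
        return []
    return [i for i, d in enumerate(durations) if abs(d - mu) / sigma > threshold]

# Minutes per build; the final run is a clear outlier worth investigating
# before it turns into a failed release.
history = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 45.0]
print(flag_anomalous_builds(history))
```

In practice the same pattern applies to test flakiness rates, artifact sizes, or deployment rollback counts: establish a baseline, score deviations, and surface them before a human would notice.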

Daily Tech Digest - October 06, 2025


Quote for the day:

"Success seems to be connected with action. Successful people keep moving. They make mistakes but they don’t quit." -- Conrad Hilton


Beyond Von Neumann: Toward a unified deterministic architecture

In large AI workloads, datasets often cannot fit into caches, and the processor must pull them directly from DRAM or HBM. Accesses can take hundreds of cycles, leaving functional units idle and burning energy. Traditional pipelines stall on every dependency, magnifying the performance gap between theoretical and delivered throughput. Deterministic Execution addresses these challenges in three important ways. First, it provides a unified architecture in which general-purpose processing and AI acceleration coexist on a single chip, eliminating the overhead of switching between units. Second, it delivers predictable performance through cycle-accurate execution, making it ideal for latency-sensitive applications such as large language model (LLM) inference, fraud detection and industrial automation. Finally, it reduces power consumption and physical footprint by simplifying control logic, which in turn translates to a smaller die area and lower energy use. ... For enterprises deploying AI at scale, architectural efficiency translates directly into competitive advantage. Predictable, latency-free execution simplifies capacity planning for LLM inference clusters, ensuring consistent response times even under peak loads. Lower power consumption and reduced silicon footprint cut operational expenses, especially in large data centers where cooling and energy costs dominate budgets.


Invest in quantum adoption now to be a winner in the quantum revolution

History shows that transformative compute paradigms require years of preparation before delivering real returns. Graphics processing units (GPUs), for example, took more than a decade of groundwork before fueling the AI revolution that now powers almost every sector of the economy. Organizations that invested early positioned themselves to capture this growth, while those who waited paid more, were caught flat-footed, and lost ground to competitors. Quantum will follow the same trajectory. ... Investing in readiness today reduces both risk and cost. By spreading integration work over time, organizations avoid the disruption and price premium of a sudden adoption push once the full enterprise value of quantum computing is achieved. Budget holders know that rushed, unplanned programs often exceed forecasts and erode margins. Smaller projects with clear deliverables can be managed within existing budgets and allow lessons to be learned incrementally, lowering both financial exposure and operational risk. For decision-makers, this creates a predictable investment profile rather than a costly “big bang” rollout. Early engagement also builds skills at a fraction of the future cost. Recruiting or retraining talent under pressure once the market overheats will be significantly more expensive. 


What an IT career will look like in 5 years — and how to thrive through the changes

Success in the near future will depend less on narrow expertise — mastering a specific technology stack for example — and more on evaluating, adapting, and applying the right tools to solve organizational problems. “People shift into cloud, security, data, or AI work depending on business need,” says Chris Camacho, COO and co-founder at Abstract Security. “Titles matter less than visible proof-of-work — small wins shared internally or publicly. Pick a lane and go deep, then layer AI expertise on top. And show your work — on GitHub, LinkedIn, wherever recruiters can see results.” Justina Nixon-Saintil, global chief impact officer at IBM, says success in the future will favor those who are adaptable and use AI to amplify creativity rather than replace it. “Technology roles are evolving from traditional tasks into more dynamic, interdisciplinary pathways that blend technical expertise with strategic thinking,” Nixon-Saintil says. “Those who can navigate the ethical challenges of AI and technology will succeed, leveraging innovation responsibly to solve complex problems and anticipate evolving business needs. You’ll not only future-proof your career but also unlock new opportunities for growth and innovation.” Beth Scagnoli, vice president of product management of Redpoint Global, agrees the successful pro of the near future will easily move between related but traditionally separate IT domains, such as system architecture and development.


Using AI as a Therapist? Why Professionals Say You Should Think Again

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of what is "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal. One advantage of AI chatbots in providing support and connection is that they're always ready to engage with you. That can be a downside in some cases, where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently.  ... While chatbots are great at holding a conversation -- they almost never get tired of talking to you -- that's not what makes a therapist a therapist. They lack important context or specific protocols around different therapeutic approaches, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study alongside experts from Minnesota, Stanford and Texas. "To a large extent it seems like we are trying to solve the many problems that therapy has with the wrong tool," Agnew told me. "At the end of the day, AI in the foreseeable future just isn't going to be able to be embodied, be within the community, do the many tasks that comprise therapy that aren't texting or speaking."


CISOs rethink the security organization for the AI era

“Organizations that have invested in security over time are seeing efficiencies by layering AI-driven tools into their workflows,” Oleksak says. “But those who haven’t taken security seriously are still stuck with the same exposures they’ve always had. AI doesn’t magically catch them up.” In fact, because attackers are using AI to make phishing, scanning, and deepfakes cheaper and faster, Oleksak adds, the gap between mature and unprepared organizations is widening. ... “We’re now embedding cybersecurity into AI initiatives from the start, working closely across teams to ensure innovation is both safe and ethical,” she stresses. “Our commitment to responsible AI means every solution is designed with transparency, fairness, and accountability in mind.” Jason Lander, senior vice president of product management at Aya Healthcare, who manages security for the organization, is also seeing a change in the dynamics between cybersecurity and IT. “AI is noticeably reshaping how security and IT departments collaborate, streamline workflows, blend responsibilities, make decisions and redefine trust dynamics,” he says. ... “IT’s focus is on speed, efficiency, and enabling the business, while the CISO’s focus is on protecting the business. That distinction is often misunderstood,” he maintains. “As AI introduces powerful new risks, from deepfakes and AI-driven phishing to employees unintentionally exposing sensitive IP through AI queries, only the CISO is positioned to anticipate and mitigate these threats.”


Why Secure Data Migration is the Next Big Boardroom Priority

Industries with the highest dependency on sensitive data are leading the way in secure migration. Financial services, with their heavy regulatory responsibilities and high stakes for customer trust, are among the most proactive industries when it comes to secure data migration. Banks moving from legacy mainframes to cloud-native platforms know that a single misstep could cascade into systemic risk. Healthcare, another high-stakes sector, faces similar urgency. ... Technology hyperscalers such as Microsoft, Google, and Amazon Web Services (AWS) play a dual role: enablers of secure migration and, simultaneously, critical dependencies for enterprises. This reliance brings resilience but also concentration risk. Many CIOs remain concerned about vendor lock-in, even as few alternatives exist at a comparable scale. Enterprises must therefore ensure secure migration while also diversifying their strategy to avoid overreliance on a single ecosystem. ... The shift is clear: secure data migration is no longer an IT department problem. It is a board-level agenda item, shaping strategy and shareholder value. As per the latest findings, 82% of CISOs now report directly to the CEO, underscoring their elevated importance. The World Economic Forum has gone further, warning in its 2025 Global Risks Report that data migration failures represent an underappreciated threat to global business resilience.


How self-learning AI agents will reshape operational workflows

Experience-based training for AI agents offers strong potential because it allows agents to act autonomously in real-world situations, guided by rewards that emerge from the environment. In the context of operations management, this means agents can learn from past incidents, events, customer tickets, application and infrastructure metrics and logs, as well as any other metrics made available to them. While modern-day hype cycles demand rapid results, much of the promise of AI agents lies in how they will improve operations management over time. Given enough time and training data, the AI agent will be able to plan actions and predict their consequences in the environment—i.e., predict the reward—much better than a human. ... Experience-based learning in this context requires human engineers to conduct post-incident reviews to understand an incident and establish actions to prevent that incident from recurring. However, in many cases, the learnings from a post-incident review are siloed to individual teams and not shared with the wider organization. ... Given that organizations do not consistently conduct post-incident learning reviews or share their findings across the wider organization, operations management is ripe for “agentification” powered by self-learning agents. Instead of burdening busy human engineers with post-incident reviews, AI agents can conduct these reviews and then apply this valuable experience-based training data. 
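The reward-driven learning loop described above can be illustrated with a toy example: an agent keeps a running value estimate for each remediation action and updates it from observed incident outcomes. This is an invented sketch of the general idea (an incremental average, as used in basic reinforcement learning), not the system the article describes; the action names and reward values are made up.

```python
class RemediationAgent:
    """Toy experience-based learner: tracks the average reward each
    remediation action has earned across incidents, and recommends the
    best-known action."""

    def __init__(self):
        self.value = {}   # action -> estimated reward
        self.count = {}   # action -> number of observations

    def record(self, action, reward):
        # Incremental average: V <- V + (r - V) / n
        n = self.count.get(action, 0) + 1
        v = self.value.get(action, 0.0)
        self.count[action] = n
        self.value[action] = v + (reward - v) / n

    def best_action(self):
        return max(self.value, key=self.value.get)

agent = RemediationAgent()
# Rewards from (hypothetical) post-incident reviews: 1.0 = resolved cleanly.
for action, reward in [("restart_service", 0.2), ("rollback_deploy", 0.9),
                       ("restart_service", 0.4), ("rollback_deploy", 1.0)]:
    agent.record(action, reward)

print(agent.best_action())  # preference learned from accumulated experience
```

The article's point maps onto the `record` step: if post-incident reviews stay siloed, the agent (or team) never gets the reward signal, and the value estimates never improve.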


The DPDPA’s impact on law firms

Most of the personal data processing in HR departments is for purposes related to employment. The DPDPA does provide an exemption from obtaining consent for employment purposes under Sec. 7(i) ... However, a reading of this Section would indicate that this exemption is applicable only to current employees and it excludes all processing which happens post-employment or pre-employment. In some instances, where an employee or intern voluntarily emails their resume to HR departments and the HR departments do not consider the application or take any action on the resume received through email, DPDPA compliance obligations will not kick in, as the DPDPA does not apply to personal data provided voluntarily by a data principal. But HR departments will need to be vigilant about data collected through designated online portals available on their websites, as in such a case, they can be said to be actively inviting applications unlike the former scenario wherein a candidate is voluntarily sharing their data. ... Under Section 3 of the DPDPA, any foreign entity offering services to individuals in India falls within the law’s extra-territorial scope. ... Several law firms in India have shown significant efforts in enhancing operational standards to ensure that client and partner data is handled safely. Several law firms have implemented standards like ISO 27001, which improves information security, risk management and compliance with regulations.



Is quantum computing poised for another breakthrough?

“Almost all of us in the quantum computing field are absolutely convinced,” Kulkarni said. “But even the skeptics who always thought this was something of the future and never really going to materialize, I think, can concur with us that this is going to happen.” ... Quantum processors currently provide physicists and other scientists with the tools to do big research projects that simply aren’t realistic with other computers. That’s the main use of the technology for now, Boixo said, but as things continue to move forward, the pool of who will use quantum computers will grow. Of course, it’s not just scientists trying to uncover the limits of quantum technology who are using the computers. Marc Lijour, a researcher with the Institute of Electrical and Electronics Engineers, told IT Brew that attackers are interested in how quantum computers can potentially crack encryption much faster than traditional computers. They’re probably already playing with the technology, and waiting until the computers are widely available. “Attackers…are downloading everything they can at the moment and storing it, basically copying the internet and anything they can so they can open it later [using quantum technology],” Lijour said. That’s still a ways in the future. Boixo estimated chaining together 50–100 logical qubits is about five or so years away. With a number of firms looking at developing the next level of quantum computing, it’s a race. 


CISO Spotlights Cybersecurity Challenges in Education Following Kido Breach

Budget is certainly going to be a challenge for all, but more so for state-funded schools and organizations. We do see that as being a challenge everywhere, they have limited resources. The overwhelming feedback is that they just don't have any money to spend, and it's perceived, therefore, that they can't deploy the security controls that they need. That's a big thing, but I think an even bigger issue is the lack of expertise and time. On lots of occasions you’ll discover institutions where there just aren’t experts on the ground who can manage these cybersecurity risks. They often lean on IT service providers and assume that they’re doing something about cybersecurity, whereas that is not necessarily the case. Budget, expertise and time are big constraints, and I think those issues are causing so many schools to be vulnerable. ... There are plenty of things that can be done with little or no cost. Reviewing all the users, identifying who’s got access and making sure MFA is turned on doesn't carry a significant cost beyond somebody taking the time to do it. That’s going to have a material impact on their posture. Most schools will have an awareness training program, but it's probably a tick box exercise where somebody has to do the course when they join and that’s it. Assigning one person to really own and champion that program could make a material difference to peoples’ awareness.
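The low-cost access review described above is, at its core, a filter over the user directory. The sketch below shows the shape of that check; the user records are invented, and a real review would pull accounts from the school's identity provider rather than a hard-coded list.

```python
# Minimal sketch of the zero-budget access review: list active accounts
# that still lack MFA. Records here are invented for illustration.

def needs_attention(users):
    """Active accounts without MFA, sorted for a review checklist."""
    return sorted(u["name"] for u in users if u["active"] and not u["mfa_enabled"])

users = [
    {"name": "head_of_it", "active": True, "mfa_enabled": True},
    {"name": "office_admin", "active": True, "mfa_enabled": False},
    {"name": "former_staff", "active": False, "mfa_enabled": False},
]
print(needs_attention(users))
```

Note that `former_staff` is excluded by the `active` filter but is itself a finding: deactivated accounts that were never removed are part of the same review.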

Daily Tech Digest - October 04, 2025


Quote for the day:

“What seems to us as bitter trials are often blessings in disguise.” -- Oscar Wilde



Autonomous Agents – Redefining Trust and Governance in AI-Driven Software

Agents are no longer confined to code generation. They automate tasks across the full lifecycle: from coding and testing to packaging, deploying, and monitoring. This shift reflects a move from static pipelines to dynamic orchestration. A new developer persona is emerging: the Agentic Engineer. These professionals are not traditional coders or ML practitioners. They are system designers: strategic architects of intelligent delivery systems, fluent in feedback loops, agent behavior, and orchestration across environments. ... To scale agentic AI safely, enterprises must build more than pipelines – they must build platforms of accountability. This requires a System of Record for AI Agents: a unified, persistent layer that treats agents as first-class citizens in the software supply chain. This system must also serve as the foundation for regulatory compliance. As AI regulations evolve globally – covering everything from automated decision-making to data residency and sovereignty – enterprises must ensure that every agent action, dataset, and interaction complies with relevant laws. A well-architected System of Record doesn’t just track activity; it injects governance and compliance into the core of agent workflows, ensuring that AI operates within legal and ethical boundaries from the start.


New AI training method creates powerful software agents with just 78 examples

The problem is that current training frameworks assume that higher agentic intelligence requires a lot of data, as has been shown in the classic scaling laws of language modeling. The researchers argue that this approach leads to increasingly complex training pipelines and substantial resource requirements. Moreover, in many areas, data is scarce, hard to obtain, and expensive to curate. However, research in other domains suggests that you don’t necessarily require more data to achieve training objectives in LLM training. ... The LIMI framework demonstrates that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Key to the framework is a pipeline for collecting high-quality demonstrations of agentic tasks. Each demonstration consists of two parts: a query and a trajectory. A query is a natural language request from a user, such as a software development requirement or a scientific research goal. ... “This discovery fundamentally reshapes how we develop autonomous AI systems, suggesting that mastering agency requires understanding its essence, not scaling training data,” the researchers write. “As industries transition from thinking AI to working AI, LIMI provides a paradigm for sustainable cultivation of truly agentic intelligence.”
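The query-plus-trajectory structure described above can be pictured as a small data structure. The field names and example content below are guesses based on the article's description, not LIMI's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Demonstration:
    """One curated training example: a natural-language query plus the
    full trajectory of agent steps that fulfilled it. Field names are
    illustrative, not LIMI's actual format."""
    query: str
    trajectory: list = field(default_factory=list)  # ordered agent steps

demo = Demonstration(
    query="Add retry logic to the payment client and verify with tests",
    trajectory=[
        {"role": "agent", "action": "read_file", "target": "payment_client.py"},
        {"role": "agent", "action": "edit_file", "target": "payment_client.py"},
        {"role": "agent", "action": "run_tests", "result": "passed"},
    ],
)
print(len(demo.trajectory))
```

The framework's claim, then, is that a training set of only 78 such pairs, if each trajectory is a complete and high-quality record of autonomous behavior, can outperform much larger but less curated corpora.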


CISOs advised to rethink vulnerability management as exploits sharply rise

The widening gap between exposure and response makes it impractical for security teams to rely on traditional approaches. The countermeasure is not “patch everything faster,” but “patch smarter” by taking advantage of security intelligence, according to Lefkowitz. Enterprises should evolve beyond reactive patch cycles and embrace risk-based, intelligence-led vulnerability remediation. “That means prioritizing vulnerabilities that are remotely exploitable, actively exploited in the wild, or tied to active adversary campaigns while factoring in business context and likely attacker behaviors,” Lefkowitz says. ... Yüceel adds: “A risk-based approach helps organizations focus on the threats that will most likely affect their infrastructure and operations. This means organizations should prioritize vulnerabilities that can be considered exploitable, while de-prioritizing vulnerabilities that can be effectively mitigated or defended against, even if their CVSS score is rated critical.” ... “Smart organizations are layering CVE data with real-time threat intelligence to create more nuanced and actionable security strategies,” Rana says. Instead of abandoning these trusted sources, effective teams are getting better at using them as part of a broader intelligence picture that helps them stay ahead of the threats that actually matter to their specific environment.
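A risk-based prioritization like the one Lefkowitz and Yüceel describe can be sketched as a scoring function that rewards active exploitation and business criticality while discounting mitigated findings. The weights below are arbitrary placeholders, not an industry standard:

```python
def risk_score(vuln):
    """Toy risk-based priority score; weights are illustrative only."""
    score = 0.0
    if vuln.get("actively_exploited"):
        score += 5.0                     # known exploitation dominates
    if vuln.get("remotely_exploitable"):
        score += 3.0
    if vuln.get("mitigated"):
        score -= 4.0                     # compensating control in place
    # Business context: asset criticality on a 0-1 scale.
    score += 2.0 * vuln.get("asset_criticality", 0)
    return score

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "mitigated": True},
    {"id": "CVE-B", "cvss": 7.5, "actively_exploited": True,
     "remotely_exploitable": True, "asset_criticality": 1.0},
]
ranked = sorted(vulns, key=risk_score, reverse=True)
# The critical-CVSS but mitigated CVE-A ranks below the exploited CVE-B.
```

This captures the article's point: a critical CVSS score alone does not decide priority once exploitability and mitigations are factored in.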


Modernizing Security and Resilience for AI Threats

For IT leaders, there may be concerns about the complexity and the risks of downtime and data loss. Operational leaders typically think of the impacts it will have on staffing demands and disruptions to business continuity. And it’s easy for security and compliance leaders to be worried about meeting regulatory standards without exposing the company’s data to new attacks. Most importantly, executive leadership tends to hesitate over total investment costs and potential disruption to innovation and revenue growth. While each leader may have valid concerns, the risk of inaction is much greater. ... Fortunately, modernization doesn’t mean you need to take on a massive overhaul of your organization’s operations. Modernizing in place is an alternative solution that can be a sustainable, incremental strategy that improves stability, security, and performance without putting mission-critical systems at risk. When leaders can align on business continuity needs and concerns, they can develop low-risk approaches that still move operations forward while achieving long-term organizational goals. ... A modernization journey can take many forms. From updates to your on-prem system or migrating to a hybrid-cloud environment, modernization is a strategic initiative that can improve and bolster your company’s strength against potential data breaches.


Navigating AI Frontier — Role of Quality Engineering in GenAI

In the GenAI era, the role of Quality Engineering (QE) is under the spotlight like never before. Some whisper that QE may soon be obsolete. After all, if developer agents can code autonomously, why not let GenAI-powered QE agents generate test cases from user stories, synthesize test data, and automate regression suites with near-perfect precision? Playwright and its peers are already showing glimpses of this future. In corporate corridors, by the water coolers, and in smoke breaks, the question lingers: Are we witnessing the sunset of QE as a discipline? The reality, however, is far more nuanced. QE is not disappearing; it is being reshaped, redefined, and elevated to meet the demands of an AI-driven world. ... If test scripts pose one challenge, test data is an even trickier frontier. For testers, data that mirrors production is a blessing; data that strays too far is a nightmare. Left to itself, a large language model will naturally try to generate test data that looks very close to production. That may be convenient, but here’s the real question: can it stand up to compliance scrutiny? ... What we’ve explored so far only scratches the surface of why LLMs cannot and should not be seen as replacements for Quality Engineering. Yes, they can accelerate certain tasks, but they also expose blind spots, compliance risks, and the limits of context-free automation.


Are Unified Networks Key to Cyber Resilience?

Fragmentation usually stems from a mix of issues. It can start with well-meaning decisions to buy tools for specific problems. Over time, this creates siloed data, consoles and teams, and it can take a lot of additional work to manage all the information coming from different sources. Ironically, instead of improving security, it can introduce new risks. Another factor is the misalignment of business processes as needs change. As business needs evolve and grow, the pressure to address specific requirements can drive IT and security processes in different directions. And finally, there is shadow IT, where employees attach new devices and applications to the network that haven’t been approved. If IT and security teams can’t keep pace with business initiatives, other teams across the organisation may seek to find their own solutions, sometimes bypassing official processes and adding to fragmentation. ... The bigger issue is that security teams risk becoming the ‘department of no’ instead of business enablers. A unified approach can help address this. By consolidating networking, security and observability into one unified platform, organisations have a single source of truth for managing network security. They can even automate reporting in some platforms, eliminating hours of manual work. With a single view of the entire network instead of putting together puzzle pieces from various applications, security teams see the big picture instantly, allowing them to prioritise what matters, respond faster and avoid burnout.


How CIOs Balance Emerging Technology and Technical Debt

"Technical debt isn't just an IT problem -- it's an innovation roadblock." Briggs pointed to Deloitte data showing 70% of technology leaders cite technical debt as their number one productivity drain. His advice? Take inventory before you innovate. "Know what's working versus what's just barely hanging on, because adding AI to broken processes doesn't fix them, it just breaks them faster," he said. ... "Everything kind of boils down to how the organizations are structured, how your teams are structured, what the goals are per team and what you're delivering," Caiafa said. At SS&C, some teams focus solely on maintaining legacy systems, while others support the integration of newer technologies. But, Caiafa said, the dual structure doesn't eliminate the challenge: Technical debt still accumulates as newer technologies are adopted. He advised CIOs to stay disciplined about prioritizing value. At SS&C, the approach is straightforward: "If it's not going to help us or make a material impact on what we're doing day to day, then it's not going to be an area of focus," he said. ... "Technical debt isn't just legacy code -- it's the accumulation of decisions made without long-term clarity," he said. Profico urged CIOs to embed architectural thinking into every IT initiative, align with business strategy and adopt new technologies in an incremental manner -- while avoiding "the urge to over-index on shiny tools."


For Banks and Credit Unions, AI Can Be Risky. But What’s Riskier? Falling Behind.

"Over the past 18 months, I have not encountered a single financial services organization that said ‘we don’t need to do anything'" when it comes to AI, said Ray Barata, Director of CX Strategy at TTEC Digital, a global customer experience technology and services company. That said, though many banks and credit unions are highly motivated, and some may have the beginnings of a strategy in mind, they are frozen in place. Conditioned by decades of "garbage-in-garbage-out" data-integration horror stories, these institutions’ leaders have come to believe they must wait until their data architectures are deemed "ready" — a state that never arrives. Meanwhile, compliance and security concerns add more friction. And doubts over return on investment complete the picture. ... Barata emphasized the critical role "sandboxing" plays in the low-risk / high-impact approach — setting up a controlled test environment that mirrors the real conditions operating within the institution, but walled off from its operating environment. This enables experimentation within guardrails. Referring to TTEC Digital’s Sandcastle CX approach, he described this as "building an entire ecosystem in which we can measure performance of individual platform components and data sets" — so that sensitive information stays protected while teams trial AI safely and prove value before scaling.


What is vector search and when should you use it?

Vector search uses specialized language models (not the large LLMs such as ChatGPT, but targeted embedding models) to convert text into numerical representations, known as vectors, which capture the meaning of the text. This enables search engines to make connections between different terminologies. If you search for “car,” the system can also find documents that mention “vehicle” or “motor vehicle,” even if those exact terms do not appear. ... If semantic meaning is crucial, vector search can be a good solution. This is the case when users search for the same information using different words, or when a better search query can lead to increased revenue. A large e-commerce platform could potentially achieve 1 or 2 percent more revenue by applying vector search. The application of vector search is therefore immediately measurable. ... Vector search does add extra complexity. Documents or texts must be divided into chunks, then run through embedding models, and finally indexed efficiently. Elastic uses HNSW (Hierarchical Navigable Small World) indexing for this. To keep things from getting too complex, Elastic has chosen to integrate it into its existing search solution. It is an additional data type that can be stored in a column alongside existing data. This also makes hybrid search much easier. However, this is not so simple with every vector search provider.
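The similarity matching described above can be illustrated with cosine similarity over toy vectors. Real embedding models produce hundreds or thousands of dimensions; the 3-d values here are made up purely to show the mechanism:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 3-d embeddings for three documents.
docs = {
    "car for sale":        [0.9, 0.1, 0.0],
    "motor vehicle parts": [0.8, 0.2, 0.1],
    "banana bread recipe": [0.0, 0.1, 0.9],
}
query_vec = [0.85, 0.15, 0.05]   # pretend embedding of the query "vehicle"

# Rank documents by semantic closeness to the query.
best = max(docs, key=lambda d: cosine(docs[d], query_vec))
```

At production scale a linear scan like this is too slow, which is why engines such as Elastic use approximate-nearest-neighbor structures like HNSW instead of comparing the query against every stored vector.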


Digital friction is where most AI initiatives fail

While the link between digital maturity and AI outcomes plays out across the enterprise, it is clearest in employee-facing use cases. Many AI tools being introduced into the workplace are designed to assist with routine tasks, surface relevant knowledge, or to summarise documents and automate repetitive workflows. ... With DEX maturity, organisations begin to change how they understand and deliver technology. Early efforts often focus narrowly on devices or support tickets. More mature organisations shift their focus toward employees, designing services around user personas, mapping full task journeys across tools and monitoring how those journeys perform in real time. Telemetry moves beyond technical diagnostics, becoming a strategic input for decision-making, investment planning and continuous improvement. Experience data becomes a foundation for IT operations and transformation. ... Where maturity is lacking, AI tends to be misapplied. Automation is aimed at the wrong processes. Recommendations appear in the wrong context. Systems respond to incomplete or misleading signals. The result is friction, not transformation. Organisations that have meaningful visibility into how work actually happens, and where it slows down, can identify where AI would make a measurable difference.

Daily Tech Digest - October 03, 2025


Quote for the day:

"Success is the progressive realization of a worthy goal or ideal." -- Earl Nightingale



AI And The End Of Progress? Why Innovation May Be More Fragile Than We Think

“If progress was inevitable, the first industrial revolution would have happened a lot earlier,” he explained in our recent conversation. “And if progress was inevitable, most countries around the world would be rich and prosperous today.” Many societies have seen periods of intense innovation followed by stagnation or collapse. Ancient cities such as Ephesus once thrived and then disappeared. The Soviet Union industrialized rapidly but failed to keep up when the computer era began. ... Artificial intelligence sits squarely at the center of this fragile transition. Early breakthroughs, from transformers to generative AI, came from open experimentation in universities and small labs. ... Many organizations are using AI primarily for process automation and cost-cutting. Frey believes this will not deliver transformative growth. “If AI means we do email and spreadsheets a bit more efficiently and ease the way we book travel, the transformation is not going to be on par with electricity or the internal combustion engine,” he said. True prosperity comes from creating new industries and doing previously inconceivable things. ... “If you want to thrive as a business in the AI revolution, you need to give people at low levels of the organization more decision-making autonomy to actually implement the improvements they are finding for themselves,” he said.


Why every manager should have trauma literacy

Trauma literacy is the ability to recognize that unhealed past experiences show up in daily behavior and to respond in ways that foster safety and resilience. You don’t need to know someone’s history to be mindful of trauma’s effects. You just need to assume that trauma exists, and that it may be shaping how people show up at work. ... Managers are trained in financial strategy, forecasting, and performance management. But few are trained to recognize the external manifestations of what I felt back in that tech office: the racing heart, the sense of dread, and the silent withdrawal. Most workers are taught to push harder instead of pausing to hold space for emotions. Emotions are messy, and it often feels safer to stick with technical tasks and leave feelings unaddressed. ... Once someone shares something vulnerable, don’t rush to fix it or dismiss it. Just reflect it back: “Thanks for sharing that, I hear you,” or “That makes a lot of sense.” From there, you might ask, “Is there anything you need from me today?” or “Would it help to adjust your workload this week?” ... Trauma literacy isn’t a one-off conversation; it’s a culture. Build in rituals for reflection, adjust workloads proactively, and allocate time and resources toward psychological safety. When resilience is designed into structures, managers don’t have to rely on intuition alone.


Botnets are getting smarter and more dangerous

They don’t stop at automation. Natural language processing can be used to generate convincing phishing emails at scale. Reinforcement learning lets malware adjust strategies based on firewall responses. Image recognition can help bots evade visual CAPTCHAs. These capabilities give attackers a terrifying new playbook, one that relies less on scale and more on sophistication. What makes this trend especially insidious is that botnets can now be smaller and stealthier than ever. Instead of infecting millions of devices to overwhelm a system, an AI-driven botnet might only need a few thousand nodes to carry out highly targeted, surgical operations. That makes detection harder, attribution fuzzier and mitigation more complex. ... A compromised software development kit or node package manager can serve as a delivery mechanism for an AI-powered botnet, enabling it to infiltrate thousands of businesses in a single attack. From there, the botnet doesn’t just wait for instructions; it scouts, learns and adapts. IoT devices remain another massive vulnerability. ... The regulatory angle is becoming more critical as well. As botnet sophistication grows, governments and commercial organizations are being forced to reconsider their cybercrime frameworks. The blurred line between AI research and weaponization is becoming a legal gray zone. Will training a model to bypass CAPTCHA become criminalized? What about selling an AI model that can autonomously scan for zero-day exploits?


From Spend to Strategy: A CISO's View

Company executives view cybersecurity as a core business risk, but CISOs must communicate that risk the way other risk functions do: through heat maps. These heat maps communicate the likelihood of a security incident impacting what matters most to the business - which includes key business capabilities, critical systems and services, and core locations or facilities - and the materiality of such an impact. Using these heat maps, CISOs can and should show the progress made in terms of reducing incident likelihood and impact, the progress expected to be made over the coming reporting period, and gaps that require additional funding to reduce corresponding risks to an acceptable level. From a security spend perspective, this means explaining to leadership how the function will deliver better business outcomes, not only with more budget but also with reallocated funding that can help create better ROI. CISOs must be prepared to answer inbound questions, such as: Haven't we already invested in this? What are you able to deliver with 20% more budget for these new capabilities that you weren't able to deliver before? Highly technical metrics like vulnerability counts, which have no direct correlation to business risk, must be avoided at all costs. It's about helping executives understand the progress being made and soon to be made, along with gaps tied to reducing risk related to what the business cares about most.
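A likelihood-by-impact heat map reduces, at its simplest, to a banding function over two scores. The 1-5 scales and the thresholds below are illustrative choices, not a recognized standard:

```python
def heat_cell(likelihood, impact):
    """Map 1-5 likelihood and impact scores to a heat-map band.

    Thresholds are arbitrary for illustration; real risk programs
    calibrate bands against the organization's risk appetite."""
    score = likelihood * impact          # 1..25
    if score >= 15:
        return "red"
    if score >= 8:
        return "amber"
    return "green"

# e.g. an actively exploited flaw on a core payment system:
band = heat_cell(likelihood=4, impact=5)   # "red"
```

Tracking how cells move from red toward green across reporting periods is the kind of progress narrative the article suggests CISOs bring to the board.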


The Future of Data Center Security: What Businesses Must Know

Unlike in the past, when cyberattacks mainly targeted networks, today’s hackers combine online attacks with physical sabotage in what is known as the “dual-attack model.” For example, while a cybercriminal tries to breach a network firewall, another may attempt to disable equipment physically inside the data center building. This coordinated attack can cause far-reaching damage. ... Alongside security, power management is a top priority. Indian data centers face rising energy demands. Reports show rack power consumption is climbing steadily, especially for AI workloads. Mumbai and Hyderabad, leading India’s AI data center growth, are investing in advanced cooling technologies and reliable backup energy systems to ensure smooth operations and prevent downtime. Failures in cooling or power systems can cause major outages that result in millions in losses.  ... Cybersecurity experts also warn that more attacks today are concealed within encrypted network traffic, bypassing traditional firewalls. To counter this, Indian data centers are adopting tools that decrypt, inspect, and then re-encrypt data communications in real time. ... Indian companies must act decisively to implement next-generation security measures. Those that do will benefit from uninterrupted operations, stronger compliance, and gain a competitive edge in an increasingly digital economy.


4 ways to use time to level up your security monitoring

Most security events start small. You notice a few unusual logins, a traffic spike or abnormal activities in a certain system. Where raw log pipelines add parsing or enrichment delays before data is ready for analysis, time series data arrives consistently structured and ready for immediate querying. This makes it easier to establish behavioral baselines and even apply statistical models like rolling averages and standard deviations to detect anomalies quickly. ... Detection is only half the battle. Time series systems handle low-latency ingest, allowing alerts and triggers to be fired in real time as new data points arrive. When a device needs to be quarantined, access tokens revoked or an attacker’s behavior spun up into a forensics workflow to prevent lateral movement, the response can happen in real time. Because most SaaS log platforms batch and index events before they are fully queryable, SIEM-driven responses can lag by minutes, depending on configuration and data volume. Time series systems process data points in real time, reducing that lag. ... SIEMs remain indispensable, and logs are foundational for investigations and compliance. High-precision time series, continuously ingested and analyzed, enables faster detection, longer retention and real-time response. All without the cost and performance tradeoffs of relying on logs alone.
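The rolling-average and standard-deviation baseline mentioned above can be sketched in a few lines. The window size, warm-up length and threshold multiplier are arbitrary choices for illustration:

```python
import statistics
from collections import deque

class AnomalyDetector:
    """Flags points more than k standard deviations from the rolling mean."""
    def __init__(self, window=20, k=3.0):
        self.history = deque(maxlen=window)   # rolling window of recent values
        self.k = k

    def observe(self, value):
        if len(self.history) >= 5:            # need a minimal baseline first
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid div-by-zero
            anomalous = abs(value - mean) > self.k * stdev
        else:
            anomalous = False
        self.history.append(value)
        return anomalous

det = AnomalyDetector(window=10, k=3.0)
baseline = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6]   # e.g. failed logins per minute
flags = [det.observe(v) for v in baseline]   # normal traffic: no flags
spike = det.observe(60)                      # sudden spike: flagged
```

Because each point is scored as it arrives, the alert fires on ingest rather than after a batch-indexing delay, which is the latency advantage the article describes.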


The Leadership Style That’s Winning in the AI Era

Technology can generate ideas and reinforce existing thinking, but it cannot replace authentic human connection. Quiet leaders understand this instinctively: They build credibility through genuine relationships, not algorithms. These leaders share a common set of principles and practices that guide how they work and show up for their teams ... Respect grows when leaders admit their limitations, take responsibility for mistakes and remain grounded. Employees appreciate leaders who share when they don’t have all the answers and ask others to contribute to solutions. This kind of openness increases their credibility and influence. ... The best leaders treat all conversations as learning opportunities. A curious leader doesn’t jump to conclusions or cut discussions short. They ask thoughtful questions and listen actively, signaling to their teams that their input matters. This kind of curiosity encourages innovation and creates space for better ideas to surface. ... Rather than seeking credit, quiet leaders focus on building organizations that thrive beyond any one individual. They delegate, ensuring that their team can take real ownership of projects and celebrate success together. ... Leaders who engage in the day-to-day work of the business gain credibility and insight. Whether it’s walking the production floor or sitting on customer service calls, this engagement deepens the understanding of the business, the customer experience and the challenges team members face.


How autonomous businesses succeed by engaging with the world

Autonomous machines are designed from the outside in, while conventional machines are designed from the inside out. We are witnessing a fundamental shift in how successful systems are designed, and agentic AI sits at the heart of this revolution. Today, businesses are being designed more and more to resemble machines. ... For companies becoming autonomous machines, this outside-in orientation has profound implications for how they think about customers, markets, and value creation. Traditional companies are often internally focused. They design products based on their capabilities, organize around their processes, and optimize for efficiency. Customers are external entities who hopefully will want what the company produces. The company's internal logic, its org chart, processes, and systems become the center of attention, with customers orbiting around these internal priorities. ... Autonomous companies must be world-oriented rather than center-oriented. Customers represent the primary external environment they need to understand and respond to, but they're not a center to be served; they're part of a dynamic world to be engaged with. Just as a Tesla can't function without sophisticated environmental sensing, an autonomous company can't function without a deep, real-time understanding of customer needs, behaviors, and changing requirements.


Indian factories and automation: The ‘everything bagel’ is here

True competitiveness in manufacturing now hinges on integrating automation right from the design stage, not just on the assembly floor, indicates Krishnamoorthy. “By connecting CAD environments with robot-friendly jigs, manufacturers can reduce programming times by 30 per cent, speeding up product launches and boosting agility in responding to market demands.” You can now walk around a plant inside your computer, thanks to the power of modelling technology. ... As attractive and revolutionary as this advent of automation is, some gaps remain to be addressed: labor replacement, robot taxes, turbulence in brownfield facilities, and accidents caused by automation changing so much in the factories. Dai avers that automation may displace low-skill jobs but will address labor shortages. As for robot taxes, he feels they will become the norm in the long term amid the rise of robotics, balancing innovation and social disruption. “Robotics governance is becoming increasingly critical to ensure security, privacy, ethics, and regulatory compliance.” ... “The future of robotics in manufacturing is about more than efficiency gains—it is about reshaping industrial culture, building resilience, and redefining global competitiveness. India, with its rapid adoption and supportive ecosystem, is not just catching up but positioning itself as a potential leader in this next era of intelligent manufacturing,” says Krishnamoorthy.


Old-school engineering lessons for AI app developers

Models keep getting smarter; apps keep breaking in the same places. The gap between demo and durable product remains the place where most engineering happens. How are development teams breaking the impasse? By getting back to basics. ... “When data agents fail, they often fail silently—giving confident-sounding answers that are wrong, and it can be hard to figure out what caused the failure.” He emphasizes systematic evaluation and observability for each step an agent takes, not just end-to-end accuracy. ... The teams that win treat knowledge as a product. They build structured corpora, sometimes using agents to lift entities and relations into a lightweight graph. They grade their RAG systems like a search engine: on freshness, coverage, and hit rate against a golden set of questions. ... As Valdarrama quips, “Letting AI write all of my code is like paying a sommelier to drink all of my wine.” In other words, use the machine to accelerate code you’d be willing to own; don’t outsource judgment. In practice, this means developers must tighten the loop between AI-suggested diffs and their CI and enforce tests on any AI-generated changes, blocking merges on red builds. ... And then there’s security, which in the age of generative AI has taken on a surreal new dimension. The same guardrails we put on AI-generated code must be applied to user input, because every prompt should be treated as potentially hostile.
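Grading a RAG system "like a search engine" on hit rate against a golden set might look like the minimal sketch below. The toy keyword retriever stands in for a real embedding-based one, and the corpus and questions are invented:

```python
def hit_rate(retriever, golden):
    """Fraction of golden questions whose expected doc appears in the results."""
    hits = sum(
        expected in retriever(question)
        for question, expected in golden
    )
    return hits / len(golden)

# Toy corpus; word overlap stands in for semantic retrieval.
corpus = {
    "doc1": "reset password account",
    "doc2": "invoice billing refund",
}

def retriever(query):
    words = query.split()
    return [doc for doc, text in corpus.items()
            if any(w in text.split() for w in words)]

golden = [
    ("how to reset my password", "doc1"),
    ("refund an invoice", "doc2"),
]
rate = hit_rate(retriever, golden)
```

Tracking this number over time, alongside freshness and coverage metrics, turns "the RAG seems worse lately" into a regression you can catch in CI.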

Daily Tech Digest - October 02, 2025


Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer


AI cost overruns are adding up — with major implications for CIOs

Many organizations appear to be “flying blind” while deploying AI, adds John Pettit, CTO at Google Workspace professional services firm Promevo. If a CIO-led AI project misses budget by a huge margin, it reflects on the CIO’s credibility, he adds. “Trust is your most important currency when leading projects and organizations,” he says. “If your AI initiative costs 50% more than forecast, the CFO and board will hesitate before approving the next one.” ... Beyond creating distrust in IT leadership, missed cost estimates also hurt the company’s bottom line, notes Farai Alleyne, SVP of IT operations at accounts payable software vendor Billtrust. “It is not just an IT spending issue, but it could materialize into an overall business financials issue,” he says. ... Enterprise leaders often assume AI coding assistants or no-code/low-code tools can take care of most of the software development needed to roll out a new AI tool. These tools can be used to create small prototypes, but for enterprise-grade integrations or multi-agent systems, the complexity creates additional costs, he says. ... In addition, organizations often underestimate the cost of operating an AI project, he says. Token usage for vectorization and LLM calls can cost tens of thousands of dollars per month, but hosting your own models isn’t cheap, either, with on-premises infrastructure costs potentially running into the thousands of dollars per month.
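A back-of-envelope token cost model shows how quickly usage reaches the "tens of thousands per month" the article mentions. The per-1k-token prices and traffic figures below are placeholders, not any vendor's actual rates:

```python
def monthly_llm_cost(requests_per_day, tokens_in, tokens_out,
                     price_in_per_1k=0.003, price_out_per_1k=0.015):
    """Back-of-envelope monthly LLM API cost; prices are placeholders."""
    per_request = (tokens_in / 1000) * price_in_per_1k \
                + (tokens_out / 1000) * price_out_per_1k
    return requests_per_day * 30 * per_request

# e.g. a RAG service: 50k requests/day, 4k prompt tokens + 500 completion tokens
cost = monthly_llm_cost(50_000, 4_000, 500)   # ~$29k/month at these rates
```

Running this kind of estimate before the project starts, with the CFO in the room, is one concrete way to avoid the 50%-over-forecast surprise described above.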


AI-Powered Digital Transformation: A C-Suite Blueprint For The Future Of Business

At its core, digital transformation is a strategic endeavor, not a technological one. To succeed, it should be at the forefront of the organizational strategy. This means moving beyond simply automating existing processes and instead asking how AI enables new ways of creating value. The shift is from operational efficiency to business model innovation. ... True digital leaders possess a visionary mindset and the critical competencies to guide their teams through change. They must be more than tech-savvy; they must be emotionally intelligent and capable of inspiring trust. This demands an intentional effort to develop leaders who can bridge the gap between deep business acumen and digital fluency. ... With the strategic, cultural and data foundations in place, organizations can focus on building a scalable and secure digital infrastructure. This may involve adopting cloud computing to provide flexible resources needed for big data processing and AI model deployment. It can also mean investing in a range of complementary technologies that, when integrated, create a cohesive and intelligent ecosystem. ... Digital transformation is a complex, continuous journey, not a single destination. This framework provides a blueprint, but its success requires leadership. The challenge is not technological; it's a test of leadership, culture and strategic foresight.


Why Automation Fails Without the Right QA Mindset

Automation alone doesn’t guarantee quality — it is only as effective as the tests it is scripted to run. If the requirements are misunderstood, automated tests may pass while critical issues remain undetected. I have seen failures where teams relied solely on automation without involving proper QA practices, leading to tests that validated incorrect behavior. Automation frequently fails to detect new or unexpected issues introduced by system upgrades. It often misses critical problems such as faulty data mapping, incomplete user interface (UI) testing and gaps in test coverage due to outdated scripts. Lack of adaptability is another common obstacle that I’ve repeatedly seen undermine automation testing efforts. When UI elements are tightly coupled, even minor changes can disrupt test cases. With the right QA mindset, this challenge is anticipated — promoting modular, maintainable automation strategies capable of adapting to frequent UI and logic changes. Automation lacks the critical analysis required to validate business logic and perform true end-to-end testing. From my experience, the human QA mindset proved essential during the testing of a mortgage loan calculation system. While automation handled standard calculations and data validation, it could not assess whether the logic aligned with real-world lending rules.


Stop Feeding AI Junk: A Systematic Approach to Unstructured Data Ingestion

Worse, bad data reduces accuracy. Poor quality data not only adds noise, but it also leads to incorrect outputs that can erode trust in AI systems. The result is a double penalty: wasted money and poor performance. Enterprises must therefore treat data ingestion as a discipline in its own right, especially for unstructured data. Many current ingestion methods are blunt instruments. They connect to a data source and pull in everything, or they rely on copy-and-sync pipelines that treat all data as equal. These methods may be convenient, but they lack the intelligence to separate useful information from irrelevant clutter. Such approaches create bloated AI pipelines that are expensive to maintain and impossible to fine-tune. ... Once data is classified, the next step is to curate it. Not all data is equal. Some information may be outdated, irrelevant, or contradictory. Curating data means deliberately filtering for quality and relevance before ingestion. This ensures that only useful content is fed to AI systems, saving compute cycles and improving accuracy. This also ensures that RAG and LLM solutions can utilize their context windows on tokens for relevant data and not get cluttered up with irrelevant junk. ... Generic ingestion pipelines often lump all data into a central bucket. A better approach is to segment data based on specific AI use cases. 
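The curation step described above might be sketched as a pre-ingestion filter that drops stale, trivial, or restricted documents before they reach the embedding pipeline. The thresholds and field names here are invented for illustration:

```python
from datetime import date

def curate(docs, max_age_days=365, min_length=50):
    """Illustrative pre-ingestion filter for an AI pipeline."""
    kept = []
    for doc in docs:
        age = (date.today() - doc["modified"]).days
        if age > max_age_days:
            continue                 # stale: likely outdated guidance
        if len(doc["text"]) < min_length:
            continue                 # too short to carry useful signal
        if doc.get("classification") == "restricted":
            continue                 # compliance: never ingest
        kept.append(doc)
    return kept

docs = [
    {"text": "x" * 200, "modified": date.today(), "classification": "public"},
    {"text": "short",   "modified": date.today(), "classification": "public"},
    {"text": "y" * 200, "modified": date(2015, 1, 1), "classification": "public"},
]
kept = curate(docs)   # only the first document survives
```

Filtering before ingestion, rather than after, is what saves the compute cycles and context-window tokens the article refers to.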


Five critical API security flaws developers must avoid

Developers might assume that if an API endpoint isn’t publicly advertised, it’s inherently secure, a dangerous myth known as “security by obscurity.” This mistake manifests in a few critical ways: developers may use easily guessable API keys or leave critical endpoints entirely unprotected, allowing anyone to access them without proving their identity. ... You must treat all incoming data as untrusted, meaning all input must be validated on the server-side. Your developers should implement comprehensive server-side checks for data types, formats, lengths, and expected values. Instead of trying to block everything that is bad, it is more secure to define precisely what is allowed. Finally, before displaying or using any data that comes back from the API, ensure it is properly sanitized and escaped to prevent injection attacks from reaching end-users. ... Your teams must adhere to the “only what’s necessary” principle by designing API responses to return only the absolute minimum data required by the consuming application. For production environments, configure systems to suppress detailed error messages and stack traces, replacing them with generic errors while logging the specifics internally for your team. ... Your security strategy must incorporate rate limiting to apply strict controls on the number of requests a client can make within a given timeframe, whether tracked by IP address, authenticated user, or API key.
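The "define precisely what is allowed" advice above translates directly into allowlist validation: enumerate the acceptable fields and formats, and reject everything else. The following is a minimal sketch under assumed field names and rules (the endpoint, the regex patterns, and the role names are hypothetical, not from the article).

```python
import re

# Hypothetical allowlist for a user-creation endpoint: every permitted field
# has an explicit pattern; anything outside these rules is rejected.
RULES = {
    "username": re.compile(r"^[a-z0-9_]{3,20}$"),
    "email":    re.compile(r"^[^@\s]+@[^@\s]+\.[a-z]{2,}$"),
    "role":     re.compile(r"^(viewer|editor)$"),  # "admin" is never client-settable
}

def validate(payload: dict) -> list[str]:
    """Server-side check. Returns a list of errors; an empty list means the input passes."""
    errors = [f"unexpected field: {k}" for k in payload if k not in RULES]
    for field, pattern in RULES.items():
        value = payload.get(field)
        if not isinstance(value, str) or not pattern.match(value):
            errors.append(f"invalid or missing field: {field}")
    return errors
```

Note that unknown fields are rejected outright, which is what blocks mass-assignment tricks like a client quietly adding `"role": "admin"` or extra properties the endpoint never asked for.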


Disaster recovery and business continuity: How to create an effective plan

If your disaster recovery and business continuity plan has been gathering dust on the shelf, it’s time for a full rebuild from the ground up. Key components include strategies such as minimum viable business (MVB); emerging technologies such as AI and generative AI; and tactical processes and approaches such as integrated threat hunting, automated data discovery and classification, continuous backups, immutable data, and gamified tabletop testing exercises. Backup-as-a-service (BaaS) and disaster recovery-as-a-service (DRaaS) are also becoming more popular, as enterprises look to take advantage of the scalability, cloud storage options, and ease-of-use associated with the “as-a-service” model. ... Accenture’s Whelan says that rather than try to restore the entire business in the event of a disaster, a better approach might be to create a skeletal replica of the business, an MVB, that can be spun up immediately to keep mission-critical processes going while traditional backup and recovery efforts are under way. ... The two additional elements are: one offline, immutable, or air-gapped backup that will enable organizations to get back on their feet in the event of a ransomware attack, and a goal of zero errors. Immutable data is “the gold standard,” Whelan says, but there are complexities associated with proper implementation.


Building Intelligence into the Database Layer

At the core of this evolution is the simple architectural idea of the database as an active intelligence engine. Rather than simply recording and serving historical data, an intelligent database interprets incoming signals, transforms them in real time, and triggers meaningful actions directly from within the database layer. From a developer’s perspective, it still looks like a database, but under the hood, it’s something more: a programmable, event-driven system designed to act on high-velocity data streams with high precision in real time. ... Built-in processing engines unlock features like anomaly detection, forecasting, downsampling, and alerting in true real time. These embedded engines enable real-time computation directly inside the database. Instead of moving data to external systems for analysis or automation, developers can run logic where the data already lives. ... Active intelligence doesn’t just enable faster reactions; it opens the door to proactive strategies. By continuously analyzing streaming data and comparing it to historical trends, systems can anticipate issues before they escalate. For example, gradual changes in sensor behavior can signal the early stages of a failure, giving teams time to intervene. ... Developers need more than storage and query capabilities; they need tools that think. Embedding intelligence into the database layer represents a shift toward active infrastructure: systems that monitor, analyze, and respond at the edge, in the cloud, and across distributed environments.
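To make the "run logic where the data lives" idea concrete, here is a sketch of the kind of rolling-statistics anomaly check an embedded engine might evaluate against each incoming point, instead of shipping the stream to an external system. This is an illustration of the general technique (a rolling z-score), not any particular database's feature; the window size and threshold are assumptions.

```python
import math
from collections import deque

class StreamAnomalyDetector:
    """Flags points that deviate sharply from the recent rolling window —
    the sort of per-point logic an in-database engine runs next to the data."""

    def __init__(self, window=50, threshold=3.0, warmup=10):
        self.values = deque(maxlen=window)   # recent history only; bounded memory
        self.threshold = threshold
        self.warmup = warmup                 # don't flag until stats are meaningful

    def observe(self, x):
        flagged = False
        if len(self.values) >= self.warmup:
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > self.threshold:
                flagged = True               # trigger an alert or a downstream action
        self.values.append(x)
        return flagged
```

Because the detector keeps only a bounded window, it suits high-velocity streams: each point costs O(window) work and the state never grows, which is why this style of logic can live inside the database layer rather than beside it.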


AI Cybersecurity Arms Race: Are Companies Ready?

Security operations centers were already overwhelmed before AI became mainstream. Human analysts, drowning in alerts, can’t possibly match the velocity of machine-generated threats. Detection tools, built on static signatures and rules, simply can’t keep up with attacks that mutate continuously. The vendor landscape isn’t much more reassuring. Every security company now claims its product is “AI-powered,” but too many of these features are black boxes, immature, or little more than marketing gloss. ... That doesn’t mean defenders are standing still. AI is beginning to reshape cybersecurity on the defensive side, too, and the potential is enormous. Anomaly detection, fueled by machine learning, is allowing organizations to spot unusual behavior across networks, endpoints, and cloud environments far faster than humans ever could. In security operations centers, agentic AI assistants are beginning to triage alerts, summarize incidents, and even kick off automated remediation workflows. ... The AI arms race isn’t something the CISO can handle alone; it belongs squarely in the boardroom. The challenge isn’t just technical — it’s strategic. Budgets must be allocated in ways that balance proven defenses with emerging AI tools that may not be perfect but are rapidly becoming necessary. Security teams must be retrained and upskilled to govern, tune, and trust AI systems. Policies need to evolve to address new risks such as AI model poisoning or unintended bias.


Agentic AI needs stronger digital certificates

The consensus among practitioners is that existing technologies can handle agentic AI – if, that is, organisations apply them correctly from the start. “Agentic AI fits into well-understood security best practices and paradigms, like zero trust,” Wetmore emphasises. “We have the technology available to us – the protocols and interfaces and infrastructure – to do this well, to automate provisioning of strong identities, to enforce policy, to validate least privilege access.” The key is approaching AI agents with security-by-design principles rather than bolting on protection as an afterthought. Sebastian Weir, executive partner and AI Practice Leader at IBM UK&I, sees this shift happening in his client conversations. ... Perhaps the most critical insight from security practitioners is that managing agentic AI isn’t primarily about new technology – it’s about governance and orchestration. The same platforms and protocols that enable modern DevOps and microservices can support AI agents, but only with proper oversight. “Your ability to scale is about how you create repeatable, controllable patterns in delivery,” Weir explains. “That’s where capabilities like orchestration frameworks come in – to create that common plane of provisioning agents anywhere in any platform and then governance layers to provide auditability and control.”


Learning from the Inevitable

Currently, too many organizations follow a “nuke and pave” approach to IR, opting to just reimage computers because they don’t have the people to properly extract the wisdom from an incident. In the short term, this is faster and cheaper but has a detrimental impact on protecting against future threats. When you refuse to learn from past mistakes, you are more prone to repeating them. Conversely, organizations may turn to outsourcing. Experts in managed security services and IR have realized consulting gives them a broader reach and impact over the problem — but none of these are long-term solutions. This kind of short-sighted IR creates a false sense of security. Organizations are solving the problem for the time being, but what about the future? Data breaches are going to happen, and reliance on reactive problem-solving creates a flimsy IR program that leaves an organization vulnerable to threats. ... Knowledge-sharing is the best way to go about this. Sharing key learnings from previous attacks is how these teams can grow and prevent future disasters. The problem is that while plenty of engineers agree they learn the most when something “breaks” and that incidents are a treasure trove of knowledge for security teams, these conversations are often restricted to need-to-know channels. Openness about incidents is the only way to really teach teams how to address them.