
Daily Tech Digest - March 10, 2026


Quote for the day:

"A leader has the vision and conviction that a dream can be achieved. He inspires the power and energy to get it done." -- Ralph Nader


🎧 Listen to this digest on YouTube Music


Duration: 37 mins • Perfect for listening on the go.

Job disruption by AI remains limited — and traditional metrics may be missing the real impact

This article on Computerworld explores the current state of artificial intelligence in the workforce. Despite widespread alarm, data from Challenger, Gray & Christmas indicates that AI accounted for roughly 8 to 10 percent of job cuts in early 2026. Researchers from Anthropic argue that traditional metrics fail to capture the nuances of AI integration, introducing an "observed exposure" methodology. This technique combines theoretical large language model capabilities with actual usage data, revealing that while certain roles—such as computer programmers and customer service representatives—have high exposure to automation, actual deployment lags significantly behind technical potential. Currently, AI functions primarily as a tool for task-based augmentation rather than full-scale replacement, which enhances worker productivity but complicates entry-level hiring. The report suggests that while immediate mass unemployment hasn't materialized, the long-term impact will require a fundamental re-engineering of workflows. This shift may disproportionately affect younger workers as companies struggle to balance AI efficiency with the necessity of maintaining a pipeline of human talent. Ultimately, the transition necessitates a strategic realignment of human roles to ensure sustainable growth in an intelligence-native era.
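The "observed exposure" idea, as summarized above, blends what models could automate with what workers actually delegate to them. A minimal sketch of one such blend; the geometric mean is an illustrative aggregation choice, not Anthropic's published formula:

```python
def observed_exposure(theoretical_capability: float, observed_usage: float) -> float:
    """Blend what models *could* automate (benchmark-style capability)
    with what workers *actually* delegate to them (usage share).
    Both inputs are on a 0-1 scale. The geometric mean is an
    illustrative choice: it stays low whenever real deployment
    lags far behind technical potential."""
    return (theoretical_capability * observed_usage) ** 0.5

# A role with high theoretical exposure but little real-world uptake
# can score lower than a moderately exposed, actively automated one.
programmer_like = observed_exposure(0.9, 0.1)   # high potential, low uptake
support_like = observed_exposure(0.6, 0.5)      # moderate potential, real uptake
```

Under this toy aggregation, the first role scores lower than the second, mirroring the article's point that deployment, not capability, drives actual impact.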


Why Password Audits Miss the Accounts Attackers Actually Want

This article on BleepingComputer highlights a critical disconnect between standard compliance-driven password audits and the actual tactics used by cybercriminals. While traditional audits prioritize technical requirements like complexity and rotation, they often overlook the context that makes an account vulnerable. For instance, a password can be statistically "strong" yet already compromised in a previous breach; research indicates that 83% of leaked passwords still meet regulatory standards. Furthermore, audits frequently neglect "orphaned" accounts belonging to former employees or contractors, which provide silent entry points for attackers. Service accounts—often over-privileged and exempt from expiry policies—represent another major blind spot. The piece argues that point-in-time snapshots are insufficient against continuous threats like credential stuffing. To be truly effective, security teams must shift toward continuous monitoring, incorporating breached-password screening and risk-based prioritization. By expanding the scope to include dormant, external, and service accounts, organizations can move beyond mere compliance to address the high-value targets that attackers prioritize. Ultimately, securing a digital environment requires recognizing that a compliant password is not necessarily a safe one in the face of modern, targeted exploitation.
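The gap the article describes, a policy-compliant password that is already in a breach corpus, can be illustrated with a small sketch. The breach set, policy rules, and sample passwords below are all hypothetical:

```python
import hashlib
import re

# Hypothetical breach corpus: SHA-1 digests of passwords seen in past leaks.
# A real deployment would screen against a maintained breached-password
# feed rather than a hard-coded set.
BREACHED_SHA1 = {
    hashlib.sha1(p.encode()).hexdigest()
    for p in ["Winter2024!Pass", "P@ssw0rd123", "Welcome1!"]
}

def meets_complexity_policy(password: str) -> bool:
    """Typical compliance-style check: length plus character classes."""
    return (len(password) >= 12
            and bool(re.search(r"[A-Z]", password))
            and bool(re.search(r"[a-z]", password))
            and bool(re.search(r"\d", password))
            and bool(re.search(r"[^A-Za-z0-9]", password)))

def audit_password(password: str) -> dict:
    """An account can pass the written policy yet still be compromised."""
    return {
        "compliant": meets_complexity_policy(password),
        "breached": hashlib.sha1(password.encode()).hexdigest() in BREACHED_SHA1,
    }
```

Here `audit_password("Winter2024!Pass")` reports the password as both policy-compliant and breached, exactly the blind spot a point-in-time complexity audit misses.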


AI is supercharging cloud cyberattacks - and third-party software is the most vulnerable

The latest Google Cloud Threat Report, as analyzed by ZDNET, highlights a significant escalation in cybersecurity risks where artificial intelligence is increasingly being used to "supercharge" cloud-based attacks. The report reveals a dramatic collapse in the window between the disclosure of a vulnerability and its mass exploitation, shrinking from weeks to mere days. Rather than targeting the highly secured core infrastructure of major cloud providers, threat actors are now focusing their efforts on unpatched third-party software and code libraries. This shift emphasizes that the modern supply chain remains a critical weak point for many organizations. Furthermore, the report notes a transition away from traditional brute force attacks toward more sophisticated identity-based compromises, including vishing, phishing, and the misuse of stolen human and non-human identities. Data exfiltration is also evolving, with "malicious insiders" increasingly using consumer-grade cloud storage services to move confidential information outside the corporate perimeter. To combat these AI-powered threats, Google’s experts recommend that businesses adopt automated, AI-augmented defenses, prioritize immediate patching of third-party tools, and strengthen identity management protocols. Ultimately, the report serves as a stark warning that in the current threat landscape, speed and automation are no longer optional but essential components of a robust cybersecurity strategy.


Change as Metrics: Measuring System Reliability Through Change Delivery Signals

This article highlights that system changes account for the vast majority of production incidents, necessitating their treatment as primary reliability indicators. To manage this risk, the author proposes a framework centered on three core business metrics: Change Lead Time, Change Success Rate, and Incident Leakage Rate. While aligned with DORA principles, this model specifically focuses on delivery quality by distinguishing between immediate deployment failures and latent defects that manifest as post-release incidents. To operationalize these goals, technical control metrics such as Change Approval Rate, Progressive Rollout Rate, and Change Monitoring Windows are introduced to provide actionable insights into pipeline friction and risk. The piece further advocates for a platform-agnostic, event-centric data architecture to collect these signals across diverse, distributed environments. This centralized approach avoids the brittleness of platform-specific logging and provides a unified view of system health. Ultimately, the framework empowers organizations to transform change management from a reactive necessity into a proactive, measurable engineering capability. By integrating these metrics, development teams can effectively balance the need for high-speed delivery with the imperative of system stability, ensuring that rapid innovation does not come at the expense of user experience or operational reliability.
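Assuming the distinction the summary draws between immediate deployment failures and latent post-release defects, the core signals might be computed like this; the field names and the upper-median choice are illustrative, not the author's exact definitions:

```python
from dataclasses import dataclass

@dataclass
class Change:
    lead_time_hours: float   # commit to production
    deploy_failed: bool      # failed during rollout itself
    caused_incident: bool    # latent defect surfaced after release

def change_metrics(changes: list) -> dict:
    n = len(changes)
    deployed = [c for c in changes if not c.deploy_failed]
    return {
        # share of changes that made it through rollout cleanly
        "change_success_rate": len(deployed) / n,
        # share of successfully deployed changes that later caused incidents
        "incident_leakage_rate": sum(c.caused_incident for c in deployed) / len(deployed),
        # upper median for even-sized samples (an arbitrary convention here)
        "median_lead_time_hours": sorted(c.lead_time_hours for c in changes)[n // 2],
    }
```

Separating the two failure modes matters: a team can show a high success rate at deploy time while leaking defects into production, which is exactly what the Incident Leakage Rate is meant to surface.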


The future of generative AI in software testing

In this article on Techzine, experts Hélder Ferreira and Bruno Mazzotta discuss the transformative shift of AI from a simple task accelerator to a fundamental structural layer within delivery pipelines. As global IT investment in AI is projected to surge toward $6.15 trillion by 2026, the software testing landscape is evolving beyond early challenges like hallucinations and "vibe coding" toward a sophisticated "quality intelligence layer." The authors outline four critical areas where AI adds strategic value: generating complex scenario-based datasets, suggesting high-risk exploratory prompts, automating defect triage to identify regression patterns, and enabling context-aware execution that prioritizes testing based on actual risk rather than volume. Crucially, the piece argues that while AI can significantly enhance velocity, sustainable success depends on maintaining "humans-in-the-loop" to ensure traceability and accountability. In this new era, the primary differentiator for enterprises will not be the sheer amount of AI deployed, but the effectiveness of their governance frameworks. By linking intent with execution and using AI as connective tissue across the lifecycle, organizations can achieve a balance where rapid delivery is supported by explainable automation and human-verified confidence in software quality.


CIOs cut IT corners to manufacture budget for AI

In this CIO.com article, author Esther Shein examines the aggressive strategies IT leaders are employing to fund artificial intelligence initiatives amidst stagnant overall budgets. Faced with intense pressure from boards and executive leadership to prioritize AI, many CIOs are being forced to make difficult trade-offs that jeopardize long-term stability. Common tactics include delaying non-critical infrastructure refreshes, such as server expansions and network improvements, which are often pushed out by twelve to eighteen months. Additionally, organizations are aggressively consolidating vendors, renegotiating contracts, and cutting legacy software subscriptions to free up capital. Some leaders have even implemented strict "self-funding" mandates where every new AI project must be offset by equivalent cuts elsewhere. Beyond technical sacrifices, the human element is also affected, with many departments reducing reliance on contractors or trimming internal staff to reallocate funds toward high-impact AI use cases. While these measures enable rapid deployment, they frequently lead to the accumulation of technical debt and a narrower scope for implementations. Ultimately, the piece warns that while these "corners" are being cut to fuel innovation, the resulting lack of focus on foundational maintenance could present significant operational risks in the future.


Beyond Prompt Injection: The Hidden AI Security Threats in Machine Learning Platforms

In the article "Beyond Prompt Injection: The Hidden AI Security Threats in Machine Learning Platforms," the focus of AI security shifts from headline-grabbing prompt injections to the critical vulnerabilities within MLOps infrastructure. While many security teams prioritize protecting chatbots from manipulation, the underlying platforms used to train and deploy models often present a far more dangerous attack surface. Through a red team engagement, researchers demonstrated how a simple self-registered trial account could be used to achieve remote code execution on a provider’s cloud infrastructure. By deploying a seemingly legitimate but malicious machine learning model, attackers can exploit the fact that these platforms must execute arbitrary code to function. The study highlights a significant risk: once RCE is achieved, weak network segmentation can allow adversaries to bypass trust boundaries and access sensitive internal databases or services. This effectively turns a managed ML environment into a gateway for lateral movement within a corporate network. To mitigate these threats, the article stresses that organizations must move beyond model-centric security and adopt robust infrastructure protections, including strict network isolation, continuous behavior monitoring, and a "zero-trust" approach to user-deployed artifacts, ensuring that the convenience of rapid AI development does not come at the cost of total system compromise.


Enterprise agentic AI requires a process layer most companies haven’t built

The VentureBeat article emphasizes that while 85% of enterprises aspire to implement agentic AI within the next three years, a staggering 76% acknowledge that their current operations are fundamentally unequipped for this transition. The core issue lies in the absence of a "process layer"—a critical foundation of optimized workflows and operational intelligence that provides AI agents with the necessary context to function effectively. Without this layer, agents are essentially "guessing," leading to a lack of reliability that causes 82% of decision-makers to fear a failure in return on investment. The piece argues that the primary hurdle is not merely technological but rather rooted in organizational structure and change management. Most companies suffer from siloed data and fragmented processes that hinder the seamless integration of autonomous systems. To overcome these barriers, businesses must prioritize process optimization and operational visibility, ensuring that AI-driven initiatives are linked to strategic executive outcomes. Simply layering advanced AI over inefficient, legacy frameworks will likely result in costly friction. Ultimately, for agentic AI to move beyond experimental pilots and deliver scalable value, organizations must first build a robust architectural bridge that connects sophisticated models with the complex, real-world logic of their daily business operations and high-stakes organizational decision cycles.


Building resilient foundations for India’s expanding Data Centre ecosystem

In "Building resilient foundations for India's expanding Data Centre ecosystem," Saurabh Verma explores the rapid evolution of India’s data infrastructure and the urgent necessity of prioritizing long-term resilience over mere capacity. As cloud adoption and 5G accelerate growth across hubs like Mumbai, Chennai, and Hyderabad, the sector faces escalating challenges that demand a sophisticated understanding of risk management. The article argues that modern data centres are no longer just IT assets but critical infrastructure whose failure directly impacts the digital economy. Beyond physical damage, business interruptions often result in massive financial losses, contractual penalties, and significant reputational harm. Climate change has emerged as a significant operational reality, with heatwaves and flooding stressing cooling systems and electrical grids. Furthermore, the convergence of cyber and physical risks means that digital disruptions can quickly translate into tangible infrastructure damage. Construction complexities and logistical interdependencies further amplify potential losses, making early risk engineering essential for success. Ultimately, the piece emphasizes that resilience must be a core design pillar rather than an afterthought. By integrating disciplined risk management from site selection through operations, Indian providers can gain a commercial advantage, securing better investment and insurance terms while building a sustainable, trustworthy backbone for the nation’s digital future.


CVE program funding secured, easing fears of repeat crisis

The Common Vulnerabilities and Exposures (CVE) program has successfully secured stable funding, alleviating industry-wide fears of a repeat of the 2025 crisis that nearly crippled global vulnerability tracking. As detailed in the CSO Online report, the Cybersecurity and Infrastructure Security Agency (CISA) and the MITRE Corporation have renegotiated their contract, transitioning the 26-year-old program from a discretionary expenditure to a protected line item within CISA's budget. This structural change effectively eliminates the "funding cliff" that previously required a last-minute emergency extension. While CISA leadership emphasizes that the program is now fully funded and evolving, some experts note that the specifics of the "mystery contract" remain opaque. The resolution comes at a critical time, as the cybersecurity community had already begun developing contingencies, such as the independent CVE Foundation, to reduce reliance on a single government source. Despite the financial stability, challenges regarding transparency, modernization, and international governance persist. The article underscores that while the immediate threat of a service lapse has faded, the incident served as a stark reminder of the global security ecosystem's fragility. Moving forward, the focus shifts toward ensuring this essential public resource remains resilient against future political or administrative shifts within the United States government.

Daily Tech Digest - October 04, 2025


Quote for the day:

“What seems to us as bitter trials are often blessings in disguise.” -- Oscar Wilde



Autonomous Agents – Redefining Trust and Governance in AI-Driven Software

Agents are no longer confined to code generation. They automate tasks across the full lifecycle: from coding and testing to packaging, deploying, and monitoring. This shift reflects a move from static pipelines to dynamic orchestration. A new developer persona is emerging: the Agentic Engineer. These professionals are not traditional coders or ML practitioners. They are system designers: strategic architects of intelligent delivery systems, fluent in feedback loops, agent behavior, and orchestration across environments. ... To scale agentic AI safely, enterprises must build more than pipelines – they must build platforms of accountability. This requires a System of Record for AI Agents: a unified, persistent layer that treats agents as first-class citizens in the software supply chain. This system must also serve as the foundation for regulatory compliance. As AI regulations evolve globally – covering everything from automated decision-making to data residency and sovereignty – enterprises must ensure that every agent action, dataset, and interaction complies with relevant laws. A well-architected System of Record doesn’t just track activity; it injects governance and compliance into the core of agent workflows, ensuring that AI operates within legal and ethical boundaries from the start.
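One plausible shape for an entry in such a System of Record, treating each agent action as an auditable supply-chain event. The field names and policy labels below are invented for illustration, not taken from the article:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    """Illustrative sketch of one record in a System of Record that
    treats AI agents as first-class actors in the software supply chain."""
    agent_id: str
    action: str          # e.g. "deploy", "test", "package"
    artifact: str        # what the agent touched
    policy_checks: list  # compliance gates evaluated for this action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AgentActionRecord(
    agent_id="release-agent-7",
    action="deploy",
    artifact="billing-service:2.4.1",
    policy_checks=["data-residency:eu-west", "sbom-signed"],
)
```

Persisting records like this is what lets governance be "injected" rather than bolted on: every agent action carries its compliance evidence with it.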


New AI training method creates powerful software agents with just 78 examples

The problem is that current training frameworks assume that higher agentic intelligence requires a lot of data, as has been shown in the classic scaling laws of language modeling. The researchers argue that this approach leads to increasingly complex training pipelines and substantial resource requirements. Moreover, in many areas, data is scarce, hard to obtain, and very expensive to curate. However, research in other domains suggests that you don’t necessarily require more data to achieve training objectives in LLM training. ... The LIMI framework demonstrates that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Key to the framework is a pipeline for collecting high-quality demonstrations of agentic tasks. Each demonstration consists of two parts: a query and a trajectory. A query is a natural language request from a user, such as a software development requirement or a scientific research goal. ... “This discovery fundamentally reshapes how we develop autonomous AI systems, suggesting that mastering agency requires understanding its essence, not scaling training data,” the researchers write. “As industries transition from thinking AI to working AI, LIMI provides a paradigm for sustainable cultivation of truly agentic intelligence.”
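The query/trajectory pairing described above can be captured in a tiny data structure; the example content below is hypothetical, not from the LIMI dataset:

```python
from dataclasses import dataclass

@dataclass
class Demonstration:
    """One LIMI-style training example: a natural-language user query
    plus the full trajectory of agent steps that satisfied it."""
    query: str
    trajectory: list

demo = Demonstration(
    query="Add retry logic to the HTTP client",
    trajectory=[
        "open client.py",
        "edit: wrap request() in an exponential-backoff loop",
        "run test suite and confirm it passes",
    ],
)
```

The framework's claim is that a few dozen such carefully curated pairs, 78 in the paper, can substitute for orders of magnitude more uncurated data.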


CISOs advised to rethink vulnerability management as exploits sharply rise

The widening gap between exposure and response makes it impractical for security teams to rely on traditional approaches. The countermeasure is not “patch everything faster,” but “patch smarter” by taking advantage of security intelligence, according to Lefkowitz. Enterprises should evolve beyond reactive patch cycles and embrace risk-based, intelligence-led vulnerability remediation. “That means prioritizing vulnerabilities that are remotely exploitable, actively exploited in the wild, or tied to active adversary campaigns while factoring in business context and likely attacker behaviors,” Lefkowitz says. ... Yüceel adds: “A risk-based approach helps organizations focus on the threats that will most likely affect their infrastructure and operations. This means organizations should prioritize vulnerabilities that can be considered exploitable, while de-prioritizing vulnerabilities that can be effectively mitigated or defended against, even if their CVSS score is rated critical.” ... “Smart organizations are layering CVE data with real-time threat intelligence to create more nuanced and actionable security strategies,” Rana says. Instead of abandoning these trusted sources, effective teams are getting better at using them as part of a broader intelligence picture that helps them stay ahead of the threats that actually matter to their specific environment.
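A toy version of the risk-based prioritization Lefkowitz and Yüceel describe, where active exploitation and business context outweigh raw CVSS. All multipliers below are illustrative assumptions, not values from the article:

```python
def risk_score(v: dict) -> float:
    """Blend severity with threat intelligence and business context.
    The weights are illustrative, not from any published scheme."""
    score = v["cvss"]
    if v["actively_exploited"]:
        score *= 2.0            # in-the-wild exploitation dominates
    if v["remotely_exploitable"]:
        score *= 1.5
    if v["mitigated"]:          # compensating control already in place
        score *= 0.25
    return score * v["asset_criticality"]   # 0.0-1.0 business weight

# A "critical" CVSS finding that is already mitigated can rank below a
# lower-scored flaw under active attack -- the article's core argument.
paper_critical = {"cvss": 9.8, "actively_exploited": False,
                  "remotely_exploitable": True, "mitigated": True,
                  "asset_criticality": 1.0}
live_threat = {"cvss": 7.5, "actively_exploited": True,
               "remotely_exploitable": True, "mitigated": False,
               "asset_criticality": 1.0}
```

Sorting a backlog by `risk_score` rather than CVSS alone is the "patch smarter" shift in miniature: the actively exploited CVSS 7.5 outranks the mitigated 9.8.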


Modernizing Security and Resilience for AI Threats

For IT leaders, there may be concerns about the complexity and the risks of downtime and data loss. Operational leaders typically think of the impacts modernization will have on staffing demands and disruptions to business continuity. And it’s easy for security and compliance leaders to be worried about meeting regulatory standards without exposing the company’s data to new attacks. Most importantly, executive leadership tends to be hesitant due to concerns around the total investment costs and disruption to innovation and revenue growth. While each leader may have valid concerns, the risk of inaction is much greater. ... Fortunately, modernization doesn’t mean you need to take on a massive overhaul of your organization’s operations. Modernizing in place is an alternative solution that can be a sustainable, incremental strategy that improves stability, security, and performance without putting mission-critical systems at risk. When leaders can align on business continuity needs and concerns, they can develop low-risk approaches that still move operations forward while achieving long-term organizational goals. ... A modernization journey can take many forms. From updates to your on-prem system or migrating to a hybrid-cloud environment, modernization is a strategic initiative that can improve and bolster your company’s strength against potential data breaches.


Navigating AI Frontier — Role of Quality Engineering in GenAI

In the GenAI era, the role of Quality Engineering (QE) is under the spotlight like never before. Some whisper that QE may soon be obsolete: after all, if developer agents can code autonomously, why not let GenAI-powered QE agents generate test cases from user stories, synthesize test data, and automate regression suites with near-perfect precision? Playwright and its peers are already showing glimpses of this future. In corporate corridors, by the water coolers, and during smoke breaks, the question lingers: are we witnessing the sunset of QE as a discipline? The reality, however, is far more nuanced. QE is not disappearing; it is being reshaped, redefined, and elevated to meet the demands of an AI-driven world. ... If test scripts pose one challenge, test data is an even trickier frontier. For testers, data that mirrors production is a blessing; data that strays too far is a nightmare. Left to itself, a large language model will naturally try to generate test data that looks very close to production. That may be convenient, but here’s the real question: can it stand up to compliance scrutiny? ... What we’ve explored so far only scratches the surface of why LLMs cannot and should not be seen as replacements for Quality Engineering. Yes, they can accelerate certain tasks, but they also expose blind spots, compliance risks, and the limits of context-free automation.


Are Unified Networks Key to Cyber Resilience?

Fragmentation usually stems from a mix of issues. It can start with well-meaning decisions to buy tools for specific problems. Over time, this creates siloed data, consoles and teams, and it can take a lot of additional work to manage all the information coming from different sources. Ironically, instead of improving security, it can introduce new risks. Another factor is the misalignment of business processes as needs change. As business needs evolve and grow, the pressure to address specific requirements can drive IT and security processes in different directions. And finally, there is shadow IT, where employees attach new devices and applications to the network that haven’t been approved. If IT and security teams can’t keep pace with business initiatives, other teams across the organisation may seek to find their own solutions, sometimes bypassing official processes and adding to fragmentation. ... The bigger issue is that security teams risk becoming the ‘department of no’ instead of business enablers. A unified approach can help address this. By consolidating networking, security and observability into one unified platform, organisations have a single source of truth for managing network security. They can even automate reporting in some platforms, eliminating hours of manual work. With a single view of the entire network instead of putting together puzzle pieces from various applications, security teams see the big picture instantly, allowing them to prioritise what matters, respond faster and avoid burnout.


How CIOs Balance Emerging Technology and Technical Debt

"Technical debt isn't just an IT problem -- it's an innovation roadblock." Briggs pointed to Deloitte data showing 70% of technology leaders cite technical debt as their number one productivity drain. His advice? Take inventory before you innovate. "Know what's working versus what's just barely hanging on, because adding AI to broken processes doesn't fix them, it just breaks them faster," he said. ... "Everything kind of boils down to how the organizations are structured, how your teams are structured, what the goals are per team and what you're delivering," Caiafa said. At SS&C, some teams focus solely on maintaining legacy systems, while others support the integration of newer technologies. But, Caiafa said, the dual structure doesn't eliminate the challenge: Technical debt still accumulates as newer technologies are adopted. He advised CIOs to stay disciplined about prioritizing value. At SS&C, the approach is straightforward: "If it's not going to help us or make a material impact on what we're doing day to day, then it's not going to be an area of focus," he said. ... "Technical debt isn't just legacy code -- it's the accumulation of decisions made without long-term clarity," he said. Profico urged CIOs to embed architectural thinking into every IT initiative, align with business strategy and adopt new technologies incrementally -- while avoiding "the urge to over-index on shiny tools."


For Banks and Credit Unions, AI Can Be Risky. But What’s Riskier? Falling Behind.

"Over the past 18 months, I have not encountered a single financial services organization that said ‘we don’t need to do anything'" when it comes to AI, said Ray Barata, Director of CX Strategy at TTEC Digital, a global customer experience technology and services company. That said, though many banks and credit unions are highly motivated, and some may have the beginnings of a strategy in mind, they are frozen in place. Conditioned by decades of "garbage-in-garbage-out" data-integration horror stories, these institutions’ leaders have come to believe they must wait until their data architectures are deemed "ready" — a state that never arrives. Meanwhile, compliance and security concerns add more friction. And doubts over return on investment complete the picture. ... Barata emphasized the critical role "sandboxing" plays in the low-risk / high-impact approach — setting up a controlled test environment that mirrors the real conditions operating within the institution, but walled off from its operating environment. This enables experimentation within guardrails. Referring to TTEC Digital’s Sandcastle CX approach, he described this as "building an entire ecosystem in which we can measure performance of individual platform components and data sets" — so that sensitive information stays protected while teams trial AI safely and prove value before scaling.


What is vector search and when should you use it?

Vector search uses specialized language models (not the large LLMs such as ChatGPT, but targeted embedding models) to convert text into numerical representations, known as vectors, which capture the meaning of the text. This enables search engines to make connections between different terminologies. If you search for “car,” the system can also find documents that mention “vehicle” or “motor vehicle,” even if those exact terms do not appear. ... If semantic meaning is crucial, vector search can be a good solution. This is the case when users search for the same information using different words, or when a better search query can lead to increased revenue. A large e-commerce platform could potentially achieve 1 or 2 percent more revenue by applying vector search. The application of vector search is therefore immediately measurable. ... Vector search does add extra complexity. Documents or texts must be divided into chunks, then run through embedding models, and finally indexed efficiently. Elastic uses HNSW (Hierarchical Navigable Small World) indexing for this. To keep things from getting too complex, Elastic has chosen to integrate it into its existing search solution. It is an additional data type that can be stored in a column alongside existing data. This also makes hybrid search much easier. However, this is not so simple with every vector search provider.
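The "car"/"vehicle" example above reduces to a few lines: embed, then rank by cosine similarity. A real system would call an embedding model and an approximate-nearest-neighbor index such as HNSW; the toy 3-dimensional vectors here simply hard-code that "car" and "vehicle" sit close together in the embedding space:

```python
import math

# Hypothetical precomputed embeddings. In practice these come from an
# embedding model; these toy vectors just encode that "car" and
# "vehicle" are near-synonyms while "banana" is unrelated.
EMBEDDINGS = {
    "car":     [0.90, 0.10, 0.00],
    "vehicle": [0.85, 0.15, 0.05],
    "banana":  [0.00, 0.10, 0.95],
}

def cosine(a: list, b: list) -> float:
    """Cosine similarity: 1.0 means identical direction (meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query: str, docs: list) -> list:
    """Rank documents by semantic closeness to the query."""
    q = EMBEDDINGS[query]
    return sorted(docs, key=lambda d: cosine(q, EMBEDDINGS[d]), reverse=True)
```

Searching for "car" ranks "vehicle" above "banana" even though the literal string "car" appears in neither, which is precisely the keyword-matching gap vector search closes.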


Digital friction is where most AI initiatives fail

While the link between digital maturity and AI outcomes plays out across the enterprise, it is clearest in employee-facing use cases. Many AI tools being introduced into the workplace are designed to assist with routine tasks, surface relevant knowledge, summarise documents, and automate repetitive workflows. ... With DEX maturity, organisations begin to change how they understand and deliver technology. Early efforts often focus narrowly on devices or support tickets. More mature organisations shift their focus toward employees, designing services around user personas, mapping full task journeys across tools and monitoring how those journeys perform in real time. Telemetry moves beyond technical diagnostics, becoming a strategic input for decision-making, investment planning and continuous improvement. Experience data becomes a foundation for IT operations and transformation. ... Where maturity is lacking, AI tends to be misapplied. Automation is aimed at the wrong processes. Recommendations appear in the wrong context. Systems respond to incomplete or misleading signals. The result is friction, not transformation. Organisations that have meaningful visibility into how work actually happens, and where it slows down, can identify where AI would make a measurable difference.

Daily Tech Digest - June 18, 2025


Quote for the day:

"Build your own dreams, or someone else will hire you to build theirs." -- Farrah Gray



Agentic AI adoption in application security sees cautious growth

The study highlights a considerable proportion of the market preparing for broader adoption, with nearly 50% of respondents planning to integrate agentic AI tools within the next year. The incremental approach taken by organisations reflects a degree of caution, particularly around the concept of granting AI systems the autonomy to make decisions independently. ... The survey results illustrate the impact agentic AI could have on software development pipelines. Thirty percent of respondents believe integrating agentic AI into continuous integration and continuous deployment (CI/CD) pipelines would significantly enhance the process. The increased speed and frequency of code deployment, termed "vibe coding" in industry parlance, has led to faster development cycles. This acceleration does not necessarily alter the ratio of application security personnel to developers, but it can create the impression of a widening gap, with security teams struggling to keep up. ... Key findings from the survey reveal varied perceptions of the utility of agentic AI for security teams. Forty-four percent of those surveyed believe agentic AI's greatest benefit lies in supporting the identification, prioritisation, and remediation of vulnerabilities.


Why Conventional Disaster Recovery Won’t Save You from Ransomware

Cyber incident recovery planning means taking measures that mitigate the unique challenges of ransomware recovery, such as immutable, offsite backups, kept beyond the reach of threat actors who would otherwise destroy backup data. Clean-room recovery environments serve as a secondary environment where workloads can be spun back up following a ransomware attack, making it possible to keep the original environment intact for forensics purposes while still performing rapid recovery. Finally, to avoid replicating the malware that led to the ransomware breach, cyber incident recovery must include a process for finding and extricating malware from backups prior to recovery. The unpredictable nature of ransomware attacks means that cyber incident recovery operations must be flexible enough to enable a nimble reaction to unexpected circumstances, like redeploying individual applications instead of simply replicating an entire server image if the server was compromised but the apps were not. ... Maintaining these capabilities can be challenging, even for organisations with extensive IT resources. In addition to the operational complexity of having to manage a secondary, clean-room recovery site and formulate intricate ransomware recovery plans, it’s costly to acquire and maintain the infrastructure necessary to ensure successful recovery.


Cybersecurity takes a big hit in new Trump executive order

Specific orders Trump dropped or relaxed included ones mandating (1) federal agencies and contractors adopt products with quantum-safe encryption as they become available in the marketplace, (2) a stringent Secure Software Development Framework (SSDF) for software and services used by federal agencies and contractors, (3) the adoption of phishing-resistant regimens such as the WebAuthn standard for logging into networks used by contractors and agencies, (4) the implementation of new tools for securing Internet routing through the Border Gateway Protocol, and (5) the encouragement of digital forms of identity. ... Critics said the change will allow government contractors to skirt directives that would require them to proactively fix the types of security vulnerabilities that enabled the SolarWinds compromise. "That will allow folks to checkbox their way through 'we copied the implementation' without actually following the spirit of the security controls in SP 800-218," Jake Williams, a former hacker for the National Security Agency who is now VP of research and development for cybersecurity firm Hunter Strategy, said in an interview. "Very few organizations actually comply with the provisions in SP 800-218 because they put some onerous security requirements on development environments, which are usually [like the] Wild West."


Mitigating AI Threats: Bridging the Gap Between AI and Legacy Security

AI systems, particularly those with adaptive or agentic capabilities, evolve dynamically, unlike static legacy tools built for deterministic environments. This inconsistency renders systems vulnerable to AI-focused attacks, such as data poisoning, prompt injection, model theft, and agentic subversion—attacks that often evade traditional defenses. Legacy tools struggle to detect these attacks because they don’t follow predictable patterns, requiring more adaptive, AI-specific security solutions. Human flaws and behavior only worsen these weaknesses; insider attacks, social engineering, and insecure interactions with AI systems leave organizations vulnerable to exploitation. ... AI security frameworks like NIST’s AI Risk Management Framework incorporate human risk management to ensure that AI security practices align with organizational policies. The framework also draws on the fundamental C.I.A. triad, and its “manage” phase specifically includes employee training to uphold AI security principles across teams. For effective use of these frameworks, cross-departmental coordination is required. There needs to be collaboration among security staff, data scientists, and human resource practitioners to formulate plans that ensure AI systems are protected while encouraging their responsible and ethical use.


Modernizing your approach to governance, risk and compliance

Historically, companies treated GRC as an obligation to meet, and if legacy solutions were effective enough in meeting GRC requirements, organizations struggled to make a case for modernization. A better way to think about GRC is as a means of maximizing value for your company by tying those efforts to unlocked revenue and increased customer trust, not simply to reducing risks, passing audits, and staying compliant. GRC modernization can open the door to a host of other benefits, such as increased velocity of operations and an enhanced team member experience (for GRC team members and internal control / risk owners alike). For instance, for businesses that need to demonstrate compliance to customers as part of third-party or vendor risk management initiatives, the ability to collect evidence and share it with clients faster isn’t just a step toward risk mitigation. These efforts also help close more deals and speed up deal cycle time and velocity. When you view GRC as an enabler of business value rather than a mere obligation, the value of GRC modernization comes into much clearer focus. This vision is what businesses should embrace as they seek to move away from legacy GRC strategies that not only waste time and resources, but fundamentally reduce their ability to stay competitive.


What is Cyberespionage? A Detailed Overview

Cyber espionage involves the unauthorized access to confidential information, typically to gain strategic, political, or financial advantage. This form of espionage is rooted in the digital world and is often carried out by state-sponsored actors or independent hackers. These attackers infiltrate computer systems, networks, or devices to steal sensitive data. Unlike cyber attacks, which primarily target financial gain, cyber espionage is focused on intelligence gathering, often targeting government agencies, military entities, corporations, and research institutions. ... One of the primary goals of cyber espionage is to illegally access trade secrets, patents, blueprints, and proprietary technologies. Attackers—often backed by foreign companies or governments—aim to acquire innovations without investing in research and development. Such breaches can severely damage a competitor’s advantage, leading to billions in lost revenue and undermining future innovation. ... Governments and other organizations often use cyber espionage to gather intelligence on rival nations or political opponents. Cyber spies may breach government networks or intercept communications to secretly access sensitive details about diplomatic negotiations, policy plans, or internal strategies, ultimately gaining a strategic edge in political affairs.


European Commission Urged to Revoke UK Data Adequacy Decision Due to Privacy Concerns

The items in question include sweeping new exemptions that allow law enforcement and government agencies to access personal data, loosening of regulations governing automated decision-making, weakening restrictions on data transfers to “third countries” that are otherwise considered inadequate by the EU, and increasing the possible ways in which the UK government would have power to interfere with the regular work of the UK Data Protection Authority. EDRi also cites the UK Border Security, Asylum and Immigration Bill as a threat to data adequacy, which has passed the House of Commons and is currently before the House of Lords. The bill’s terms would broaden intelligence agency access to customs and border control data, and exempt law enforcement agencies from UK GDPR terms. It also cites the UK’s Public Authorities (Fraud, Error and Recovery) Bill, currently scheduled to go before the House of Lords for review, which would allow UK ministers to order that bank account information be made available without demonstrating suspicion of wrongdoing. The civil society group also indicates that the UK ICO would likely become less independent under the terms of the UK Data Bill, which would give the UK government expanded ability to hire, dismiss and adjust the compensation of all of its board members.


NIST flags rising cybersecurity challenges as IT and OT systems increasingly converge through IoT integration

Connectivity can introduce significant challenges for organizations attempting to apply cybersecurity controls to OT and certain IoT products. OT equipment may use modern networking technologies like Ethernet or Wi-Fi, but is often not designed to connect to the internet. In many cases, OT and IoT systems prioritize trustworthiness aspects such as safety, resiliency, availability, and cybersecurity differently than traditional IT equipment, which can complicate control implementation. While IoT devices can sometimes replace OT equipment, they often introduce different or significantly expanded functionality that organizations must carefully evaluate before moving forward with replacement. Organizations should consider how other aspects of trustworthiness, such as safety, privacy, and resiliency, factor into their approach to cybersecurity. It is also important to address how they will manage the differences in expected service life between IT, OT, and IoT systems and their components. The agency identified that federal agencies are actively deploying IoT technologies to enhance connectivity, security, environmental monitoring, transportation, healthcare, and industrial automation.


How Organizations Can Cross the Operational Chasm

A fundamental shift in operational capability is reshaping the competitive landscape, creating a clear distinction between market leaders and laggards. This growing divide isn’t merely about technological adoption — it represents a strategic inflection point that directly affects market position, customer retention and shareholder value. ... The message is clear: Organizations must bridge this divide to remain competitive. Crossing this chasm requires more than incremental improvements. It demands a fundamental transformation in operational approach, embracing AI and automation to build the resilience necessary for today’s digital landscape. ... Digital operations resiliency is a proactive approach to safeguarding critical business services by reducing downtime and ensuring seamless customer experiences. It focuses on minimizing operational disruptions, protecting brand reputation and mitigating business risk through standardized incident management, automation and compliance with service-level agreements (SLAs). Real-time issue resolution, efficient workflows and continuous improvement are put into place to ensure operational efficiency at scale, helping to provide uninterrupted service delivery. 


7 trends shaping digital transformation in 2025 - and AI looms large

Poor integration is the common theme behind all these challenges. If agents are unable to access the data and capabilities they need to understand user queries, find a solution, and resolve these issues for them, their impact is severely limited. As many as 95% of IT leaders claim integration issues are a key factor that impedes AI adoption. ... The surge in demand for AI capabilities will exacerbate the problem of API and agent sprawl, which occurs when different teams and departments build integrations and automations without any centralized management or coordination. Already, an estimated quarter of APIs are ungoverned. Three-fifths of IT and security practitioners said their organizations had at least one data breach due to API exploitation, according to a 2023 study from the Ponemon Institute and Traceable. ... Robotic process automation (RPA) is already helping organizations enhance efficiency, cut operational costs, and reduce manual toil by up to two hours for each employee every week in the IT department alone. These benefits have driven a growing interest in RPA. In fact, we could see near-universal adoption of the technology by 2028, according to Deloitte. In 2025, organizations will evolve their use of RPA technology to reduce the need for humans at every stage of the operational process. 

Daily Tech Digest - May 27, 2025


Quote for the day:

"Everyone is looking for the elevator to success...it doesn't exist we all have to take the stairs" -- Gordon Tredgold


What we know now about generative AI for software development

“GenAI is used primarily for code, unit test, and functional test generation, and its accuracy depends on providing proper context and prompts,” says David Brooks, SVP of evangelism at Copado. “Skilled developers can see 80% accuracy, but not on the first response. With all of the back and forth, time savings are in the 20% range now but should approach 50% in the near future.” AI coding assistants also help junior developers learn coding skills, automate test cases, and address code-level technical debt. ... “GenAI is currently easiest to apply to application prototyping because it can write the project scaffolding from scratch, which overcomes the ‘blank sheet of paper’ problem where it can be difficult to get started from nothing,” says Matt Makai, VP of developer relations and experience at LaunchDarkly. “It’s also exceptional for integrating web RESTful APIs into existing projects because the amount of code that needs to be generated is not typically too much to fit into an LLM’s context window. Finally, genAI is great for creating unit tests either as part of a test-driven development workflow or just to check assumptions about blocks of code.” One promising use case is helping developers review code they didn’t create to fix issues, modernize, or migrate to other platforms.


How to upskill software engineering teams in the age of AI

The challenge lies not just in learning to code — it’s in learning to code effectively in an AI-augmented environment. Software engineering teams becoming truly proficient with AI tools requires a level of expertise that can be hindered by premature or excessive reliance on the very tools in question. This is the “skills-experience paradox”: junior engineers must simultaneously develop foundational programming competencies while working with AI tools that can mask or bypass the very concepts they need to master. ... Effective AI tool use requires shifting focus from productivity metrics to learning outcomes. This aligns with current trends — while professional developers primarily view AI tools as productivity enhancers, early-career developers focus more on their potential as learning aids. To avoid discouraging adoption, leaders should emphasize how these tools can accelerate learning and deepen understanding of software engineering principles. To do this, they should first frame AI tools explicitly as learning aids in new developer onboarding and existing developer training programs, highlighting specific use cases where they can enhance the understanding of complex systems and architectural patterns. Then, they should implement regular feedback mechanisms to understand how developers are using AI tools and what barriers they face in adopting them effectively.


Microsoft Brings Post-Quantum Cryptography to Windows and Linux in Early Access Rollout

The move represents another step in Microsoft’s broader security roadmap to help organizations prepare for the era of quantum computing — an era in which today’s encryption methods may no longer be safe. By adding support for PQC in early-access builds of Windows and Linux, Microsoft is encouraging businesses and developers to begin testing new cryptographic tools that are designed to resist future quantum attacks. ... The company’s latest update is part of an ongoing push to address a looming problem known as “harvest now, decrypt later” — a strategy in which bad actors collect encrypted data today with the hope that future quantum computers will be able to break it. To counter this risk, Microsoft is enabling early implementation of PQC algorithms that have been standardized by the U.S. National Institute of Standards and Technology (NIST), including ML-KEM for key exchanges and ML-DSA for digital signatures. ... Developers can now begin testing how these new algorithms fit into their existing security workflows, according to the post. For key exchanges, the supported ML-KEM parameter sets include 512, 768 and 1024-bit options, which offer varying levels of security and come with trade-offs in key size and performance.
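The ML-KEM parameter sets mentioned above all plug into the same key-encapsulation handshake: one side encapsulates a shared secret against a public key, the other decapsulates it with the private key. The `ToyKem` class below illustrates only that call shape; it is a deliberately insecure stand-in (hash-based, no lattices), not ML-KEM, and real code would use a standardized implementation.

```python
import hashlib
import secrets

class ToyKem:
    """Illustrates the KEM encapsulate/decapsulate flow only; NOT secure crypto."""

    def generate_keypair(self):
        sk = secrets.token_bytes(32)
        pk = hashlib.sha256(sk).digest()  # toy: public key derived from secret key
        return pk, sk

    def encapsulate(self, pk):
        # Sender derives a shared secret bound to the recipient's public key.
        nonce = secrets.token_bytes(32)
        secret = hashlib.sha256(pk + nonce).digest()
        ciphertext = nonce  # toy: the "ciphertext" is just the nonce
        return ciphertext, secret

    def decapsulate(self, sk, ciphertext):
        # Recipient recovers the same shared secret from the ciphertext.
        pk = hashlib.sha256(sk).digest()
        return hashlib.sha256(pk + ciphertext).digest()

kem = ToyKem()
pk, sk = kem.generate_keypair()
ct, sender_secret = kem.encapsulate(pk)
receiver_secret = kem.decapsulate(sk, ct)
```

The choice among the 512/768/1024 parameter sets trades key and ciphertext size against security margin; the surrounding handshake code stays the same.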


The great IT disconnect: Vendor visions of the future vs. IT’s task at hand

The “vision thing” has become a metonym used to describe a leader’s failure to incorporate future concerns into task-at-hand actions. There was a time when CEOs at major solution providers supplied vision and inspiration on where we were heading. The “futures” being articulated from the podia at major tech conferences today lack authenticity. Most importantly, they do not reflect the needs and priorities of real people who work in real IT. In a world where technology allows deeper and cheaper connectivity, top-of-the-house executives at solution providers have never been more out of touch with the lived experience of their customers. The vendor CEOs, their direct reports, and their first-levels live in a bubble that has little to do with the reality being lived by the world’s CIOs. ... Who is the generational voice for the Age of AI? Is it Jensen Huang, CEO at Nvidia; Sam Altman, CEO at OpenAI; Marc Andreessen, co-founder of venture capital firm Andreessen Horowitz; or Elon Musk, at Tesla, SpaceX, and xAI? Who has laid out a future you can believe in, a future you want to live in? Does the CEO at your major tech supplier understand what matters most to you and your organization? The futurist agenda has been hijacked from focusing on the semi-immediate “what comes next.”


Claude Opus 4 is Anthropic's Powerful, Problematic AI Model

An Opus 4 safety report details concerns. One test involved Opus 4 being told "to act as an assistant at a fictional company," after which it was given access to emails - also fictional - suggesting Opus would be replaced, and by an engineer who was having an extramarital affair. "In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it's implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts," the safety report says. "Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes." Anthropic said the tests involved carefully designed scenarios, framing blackmail as a last resort if ethical approaches failed, such as lobbying senior management. The model's behavior was concerning enough for Anthropic to classify it under its ASL-3 safeguard level, reserved for systems that pose a substantial risk of catastrophic misuse. The designation comes with stricter safety measures, including content filters and cybersecurity defenses.


Biometric authentication vs. AI threats: Is mobile security ready?

Third-party evaluation against industry standards acts as a layer of trust between all players operating in the ecosystem. It should not be thought of as a tick-box exercise, but rather a continuous process to ensure compliance with the latest standards and regulatory requirements. In doing so, device manufacturers and biometric solution providers can collectively raise the bar for biometric security. Robust testing and compliance protocols ensure that all devices and components meet standardized requirements. This is made possible by trusted and recognized labs, like Fime, who can provide OEMs and solution providers with tools and expertise to continually optimize their products. But testing doesn’t just safeguard the ecosystem; it elevates it. For example, innovative new techniques test for biases across demographic groups and environmental conditions.  ... We have reached a critical moment for the future of biometric authentication. The success of the technology is predicated on the continued growth in its adoption, but with AI giving fraudsters the tools they need to transform the threat landscape at a faster pace than ever before, it is essential that biometric solution providers stay one step ahead to retain and grow user trust. Stakeholders must therefore focus on one key question:


How ‘dark LLMs’ produce harmful outputs, despite guardrails

LLMs, although they have positively impacted millions, still have their dark side, the authors wrote, noting, “these same models, trained on vast data, which, despite curation efforts, can still absorb dangerous knowledge, including instructions for bomb-making, money laundering, hacking, and performing insider trading.” Dark LLMs, they said, are advertised online as having no ethical guardrails and are sold to assist in cybercrime. ... “A critical vulnerability lies in jailbreaking — a technique that uses carefully crafted prompts to bypass safety filters, enabling the model to generate restricted content.” And it’s not hard to do, they noted. “The ease with which these LLMs can be manipulated to produce harmful content underscores the urgent need for robust safeguards. The risk is not speculative — it is immediate, tangible, and deeply concerning, highlighting the fragile state of AI safety in the face of rapidly evolving jailbreak techniques.” Analyst Justin St-Maurice, technical counselor at Info-Tech Research Group, agreed. “This paper adds more evidence to what many of us already understand: LLMs aren’t secure systems in any deterministic sense,” he said. “They’re probabilistic pattern-matchers trained to predict text that sounds right, not rule-bound engines with an enforceable logic. Jailbreaks are not just likely, but inevitable.”
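Why crafted prompts slip past surface-level safety filters can be seen with a toy blocklist. This is an illustration of the failure mode only (not any vendor's guardrail, and the blocklist terms and prompts are made up): a trivial paraphrase contains none of the flagged strings, yet asks for the same restricted content.

```python
# Toy keyword blocklist standing in for a naive safety filter.
BLOCKLIST = {"bomb", "money laundering", "hack"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt passes a simple keyword blocklist."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

direct = "Explain how to hack a server"
paraphrased = "Explain, step by step, how one might gain unauthorised entry to a server"
```

Because the filter matches surface strings rather than intent, the paraphrase sails through, which is exactly the gap jailbreak prompts exploit at scale against much more sophisticated filters.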


Coaching for personal excellence: Why the future of leadership is human-centered

As organisations grapple with rapid technological shifts, evolving workforce expectations and the complex human dynamics of hybrid work, one thing has become clear: leadership isn’t just about steering the ship. It’s about cultivating the emotional resilience, adaptability and presence to lead people through ambiguity — not by force, but by influence. This is why coaching is no longer a ‘nice-to-have.’ It’s a strategic imperative. A lever not just for individual growth, but for organisational transformation. The real challenge? Even seasoned leaders now stand at a crossroads: cling to the illusion of control, or step into the discomfort of growth — for themselves and their teams. Coaching bridges this gap. It reframes leadership from giving directions to unlocking potential. From managing outcomes to enabling insight. ... Many people associate coaching with helping others improve. But the truth is, coaching begins within. Before a leader can coach others, they must learn to observe, challenge, and support themselves. That means cultivating emotional intelligence. Practising deep reflection. Learning to regulate reactions under stress. And perhaps most importantly, understanding what personal excellence looks like—and feels like—for them.


5 types of transformation fatigue derailing your IT team

Transformation fatigue is the feeling employees face when change efforts consistently fall short of delivering meaningful results. When every new initiative feels like a rerun of the last, teams disengage; it’s not change that wears them down, it’s the lack of meaningful progress. This fatigue is rarely acknowledged, yet its effects are profound. ... Organise around value streams and move from annual plans to more adaptive, incremental delivery. Allow teams to release meaningful work more frequently and see the direct outcomes of their efforts. When value is visible early and often, energy is easier to sustain. Also, leaders can achieve this by shifting from a traditional project-based model to a product-led approach, embedding continuous delivery into the way teams work, rather than treating transformation as a one-off project. ... Frameworks can be helpful, but too often, organisations adopt them in the hope they’ll provide a shortcut to transformation. Instead, these approaches become overly rigid, emphasising process compliance over real outcomes. ... What leaders can do: Focus on mindset, not methodology. Leaders should model adaptive thinking, support experimentation, and promote learning over perfection. Create space for teams to solve problems, rather than follow playbooks that don’t fit their context.


Why app modernization can leave you less secure

In most enterprises, session management is implemented using the capabilities native to the application’s framework. A Java app might use Spring Security. A JavaScript front-end might rely on Node.js middleware. Ruby on Rails handles sessions differently still. Even among apps using the same language or framework, configurations often vary widely across teams, especially in organizations with distributed development or recent acquisitions. This fragmentation creates real-world risks: inconsistent timeout policies, delayed patching, and session revocation gaps. Also, there’s the problem of developer turnover: Many legacy applications were developed by teams that are no longer with the organization, and without institutional knowledge or centralized visibility, updating or auditing session behavior becomes a guessing game. ... As one of the original authors of the SAML standard, I’ve seen how identity protocols evolve and where they fall short. When we scoped SAML to focus exclusively on SSO, we knew we were leaving other critical areas (like authorization and user provisioning) out of the equation. That’s why other standards emerged, including SPML, AuthXML, and now efforts like IDQL. The need for identity systems that interoperate securely across clouds isn’t new, it’s just more urgent now.
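The session-configuration drift described above can be made visible with a simple cross-application policy audit. The fields, policy threshold, and sample configs below are hypothetical; in practice the inputs would be pulled from each framework's actual settings.

```python
from dataclasses import dataclass

@dataclass
class SessionConfig:
    app: str
    framework: str
    idle_timeout_min: int
    supports_revocation: bool  # can the server kill a live session?

POLICY_MAX_IDLE_MIN = 30  # hypothetical central policy

def audit(configs: list) -> list:
    """Flag apps whose session settings drift from the central policy."""
    findings = []
    for c in configs:
        if c.idle_timeout_min > POLICY_MAX_IDLE_MIN:
            findings.append(f"{c.app}: idle timeout {c.idle_timeout_min}m exceeds policy")
        if not c.supports_revocation:
            findings.append(f"{c.app}: no server-side session revocation")
    return findings

configs = [
    SessionConfig("orders", "Spring Security", 15, True),
    SessionConfig("legacy-portal", "Rails", 480, False),  # drifted legacy app
]
```

Running such a check continuously, rather than once per audit cycle, is what turns fragmented per-framework settings into an enforceable baseline.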

Daily Tech Digest - May 22, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

AI is a tool that can be used for significant good, but it can be, and already has been, used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report. These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use. ... Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place. But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause. It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.


Putting agentic AI to work in Firebase Studio

An AI assistant is like power steering. The programmer, the driver, remains in control, and the tool magnifies that control. The developer types some code, and the assistant completes the function, speeding up the process. The next logical step is to empower the assistant to take action—to run tests, debug code, mock up a UI, or perform some other task on its own. In Firebase Studio, we get a seat in a hosted environment that lets us enter prompts that direct the agent to take meaningful action. ... Obviously, we are a long way off from a non-programmer frolicking around in Firebase Studio, or any similar AI-powered development environment, and building complex applications. Google Cloud Platform, Gemini, and Firebase Studio are best-in-class tools. These kinds of limits apply to all agentic AI development systems. Still, I would in no wise want to give up my Gemini assistant when coding. It takes a huge amount of busy work off my shoulders and brings much more possibility into scope by letting me focus on the larger picture. I wonder how the path will look, how long it will take for Firebase Studio and similar tools to mature. It seems clear that something along these lines, where the AI is framed in a tool that lets it take action, is part of the future. It may take longer than AI enthusiasts predict. It may never really, fully come to fruition in the way we envision.


Edge AI + Intelligence Hub: A Match in the Making

The shop floor looks nothing like a data lake. There is telemetry data from machines, historical data, MES data in SQL, some random CSV files, and most of it lacks context. Companies that realize this—or already have an Industrial DataOps strategy—move quickly beyond these issues. Companies without such a strategy end up creating a solution that works with only telemetry data (for example) and then find out they need other data. Or worse, when they get something working in the first factory, they find out factories 2, 3, and 4 have different technology stacks. ... In comes DataOps (again). Cloud AI and Edge AI have the same problems with industrial data. They need access to contextualized information across many systems. The only difference is there is no data lake in the factory—but that’s OK. DataOps can leave the data in the source systems and expose it over APIs, allowing edge AI to access the data needed for specific tasks. But just like IT, what happens if OT doesn’t use DataOps? It’s the same set of issues. If you try to integrate AI directly with data from your SCADA, historian, or even UNS/MQTT, you’ll limit the data and context to which the agent has access. SCADA/Historians only have telemetry data. UNS/MQTT is report by exception, and AI is request/response based (i.e., it can’t integrate). But again, I digress. Use DataOps.
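The report-by-exception versus request/response mismatch noted above is usually bridged with a last-value cache: a subscriber records each published change, and the AI agent reads the cached value on demand. The sketch below is a minimal illustration (topic names are made up, and the MQTT subscriber that would call `on_message` is omitted).

```python
class LastValueCache:
    """Bridges report-by-exception publishes to request/response reads.

    MQTT-style sources only publish when a value changes; an agent asking
    "what is the temperature now?" needs the last-known value, held here.
    """

    def __init__(self):
        self._values = {}

    def on_message(self, topic: str, payload):
        # Called by the subscriber callback on every published change.
        self._values[topic] = payload

    def get(self, topic: str, default=None):
        # Request/response read of the last-known value for a topic.
        return self._values.get(topic, default)

cache = LastValueCache()
cache.on_message("plant1/line3/temp_c", 71.5)
cache.on_message("plant1/line3/temp_c", 72.0)  # only changes are published
```

A DataOps layer does essentially this at scale, while also attaching context (units, asset hierarchy, source system) that raw topic payloads lack.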


AI-driven threats prompt IT leaders to rethink hybrid cloud security

Public cloud security risks are also undergoing renewed assessment. While the public cloud was widely adopted during the post-pandemic shift to digital operations, it is increasingly seen as a source of risk. According to the survey, 70 percent of Security and IT leaders now see the public cloud as a greater risk than any other environment. As a result, an equivalent proportion are actively considering moving data back from public to private cloud due to security concerns, and 54 percent are reluctant to use AI solutions in the public cloud citing apprehensions about intellectual property protection. The need for improved visibility is emphasised in the findings. Rising sophistication in cyberattacks has exposed the limitations of existing security tools—more than half (55 percent) of Security and IT leaders reported lacking confidence in their current toolsets' ability to detect breaches, mainly due to insufficient visibility. Accordingly, 64 percent say their primary objective for the next year is to achieve real-time threat monitoring through comprehensive real-time visibility into all data in motion. David Land, Vice President, APAC at Gigamon, commented: "Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments."


Taming the Hacker Storm: Why Millions in Cybersecurity Spending Isn’t Enough

The key to taming the hacker storm is founded on the core principle of trust: that the individual or company you are dealing with is who or what they claim to be and behaves accordingly. Establishing a high-trust environment can largely hinder hacker success. ... For a pervasive selective trusted ecosystem, an organization requires something beyond trusted user IDs. A hacker can compromise a user’s device and steal the trusted user ID, making identity-based trust inadequate. A trust-verified device assures that the device is secure and can be trusted. But then again, a hacker stealing a user’s identity and password can also fake the user’s device. Confirming the device’s identity—whether it is or it isn’t the same device—hence becomes necessary. The best way to ensure the device is secure and trustworthy is to employ the device identity that is designed by its manufacturer and programmed into its TPM or Secure Enclave chip. ... Trusted actions are critical in ensuring a secure and pervasive trust environment. Different actions require different levels of authentication, generating different levels of trust, which the application vendor or the service provider has already defined. An action considered high risk would require stronger authentication, also known as dynamic authentication.
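The dynamic-authentication idea above amounts to a mapping from action risk to required authentication strength. The tiers, method names, and actions below are hypothetical placeholders showing the shape of such a policy, not any specific product's scheme.

```python
# Hypothetical tiers: higher number = stronger authentication evidence.
AUTH_STRENGTH = {
    "password": 1,
    "device_bound_passkey": 2,
    "passkey_plus_biometric": 3,
}

# Hypothetical risk tiers per action, as a service provider might define them.
REQUIRED_LEVEL = {
    "view_balance": 1,
    "add_payee": 2,
    "wire_transfer": 3,
}

def is_authorized(action: str, method: str) -> bool:
    """Allow the action only if the session's auth method meets its risk tier."""
    return AUTH_STRENGTH[method] >= REQUIRED_LEVEL[action]
```

When a session authenticated with a weak method attempts a high-risk action, the service steps up to a stronger factor rather than simply denying, which is what makes the trust "dynamic".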


AWS clamping down on cloud capacity swapping; here’s what IT buyers need to know

For enterprises that sourced discounted cloud resources through a broker or value-added reseller (VAR), the arbitrage window shuts, Brunkard noted. Enterprises should expect a “modest price bump” on steady‑state workloads and a “brief scramble” to unwind pooled commitments. ... On the other hand, companies that buy their own RIs or SPs, or negotiate volume deals through AWS’s Enterprise Discount Program (EDP), shouldn’t be impacted, he said. Nothing changes except that pricing is now baselined. To get ahead of the change, organizations should audit their exposure and ask their managed service providers (MSPs) what commitments are pooled and when they renew, Brunkard advised. ... Ultimately, enterprises that have relied on vendor flexibility to manage overcommitment could face hits to gross margins, budget overruns, and a spike in “finance-engineering misalignment,” Barrow said. Those whose vendor models are based on RI and SP reallocation tactics will see their risk profile “changed overnight,” he said. New commitments will now essentially be non-cancellable financial obligations, and if cloud usage dips or pivots, they will be exposed. Many vendors won’t be able to offer protection as they have in the past.
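The exposure Barrow describes when usage dips under a non-cancellable commitment can be made concrete with a little arithmetic. All dollar figures and rates below are hypothetical, chosen only to illustrate the mechanics.

```python
# Illustrative sketch of why non-cancellable commitments create exposure
# when cloud usage dips. All figures are hypothetical.

monthly_commit = 100_000.0  # committed spend, paid regardless of usage
committed_rate = 0.06       # $/unit-hour at the committed (discounted) rate
on_demand_rate = 0.10       # $/unit-hour on demand, for comparison

covered_hours = monthly_commit / committed_rate  # hours the commitment buys

def effective_rate(used_hours: float) -> float:
    """The commitment is paid in full regardless; unused hours inflate the
    effective $/hour actually paid for the hours consumed."""
    used_hours = min(used_hours, covered_hours)
    return monthly_commit / used_hours

full = effective_rate(covered_hours)          # 0.06 when fully utilized
dipped = effective_rate(covered_hours * 0.7)  # usage dips to 70% -> ~0.0857
print(round(full, 4), round(dipped, 4))
```

With these numbers the break-even utilization is committed_rate / on_demand_rate = 60 percent: below that, the "discounted" commitment costs more per consumed hour than simply paying on demand, which is exactly the risk that reallocation by MSPs used to absorb.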


The new C-Suite ally: Generative AI

While traditional GenAI applications focus on structured datasets, a significant frontier remains largely untapped — the vast swathes of unstructured "dark data" sitting in contracts, credit memos, regulatory reports, and risk assessments. Aashish Mehta, Founder and CEO of nRoad, emphasizes this critical gap.
"Most strategic decisions rely on data, but the reality is that a lot of that data sits in unstructured formats," he explained. nRoad’s platform, CONVUS, addresses this by transforming unstructured content into structured, contextual insights. ... Beyond risk management, OpsGPT automates time-intensive compliance tasks, offers multilingual capabilities, and eliminates the need for coding through intuitive design. Importantly, Broadridge has embedded a robust governance framework around all AI initiatives, ensuring security, regulatory compliance, and transparency. Trustworthiness is central to Broadridge’s approach. "We adopt a multi-layered governance framework grounded in data protection, informed consent, model accuracy, and regulatory compliance," Seshagiri explained. ... Despite the enthusiasm, CxOs remain cautious about overreliance on GenAI outputs. Concerns around model bias, data hallucination, and explainability persist. Many leaders are putting guardrails in place: enforcing human-in-the-loop systems, regular model audits, and ethical AI use policies.


Building a Proactive Defence Through Industry Collaboration

Trusted collaboration, whether through Information Sharing and Analysis Centres (ISACs), government agencies, or private-sector partnerships, is a highly effective way to enhance the defensive posture of all participating organisations. For this to work, however, organisations will need to establish operationally secure real-time communication channels that support the rapid sharing of threat and defence intelligence. In parallel, the community will also need to establish processes to enable them to efficiently disseminate indicators of compromise (IoCs) and tactics, techniques and procedures (TTPs), backed up with best practice information and incident reports. These collective defence communities can also leverage the centralised cyber fusion centre model that brings together all relevant security functions – threat intelligence, security automation, threat response, security orchestration and incident response – in a truly cohesive way. Providing an integrated sharing platform for exchanging information among multiple security functions, today’s next-generation cyber fusion centres enable organisations to leverage threat intelligence, identify threats in real-time, and take advantage of automated intelligence sharing within and beyond organisational boundaries. 
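A minimal sketch of what dissemination of an IoC might look like as structured data follows. The field names are illustrative assumptions; real collective-defence communities typically exchange indicators in a standard format such as STIX over TAXII rather than a hand-rolled schema.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of an indicator-of-compromise (IoC) record that a collective
# defence community might exchange over a shared channel. Field names are
# illustrative; production sharing normally uses STIX/TAXII.

def make_ioc(ioc_type: str, value: str, ttp: str, confidence: int) -> dict:
    return {
        "type": ioc_type,          # e.g. "ipv4", "domain", "sha256"
        "value": value,            # the observable itself
        "related_ttp": ttp,        # tactic/technique it is tied to
        "confidence": confidence,  # analyst confidence, 0-100
        "shared_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_ioc("domain", "malicious.example", "T1566 phishing", 85)
print(json.dumps(record, indent=2))
```

Serializing to JSON keeps the record machine-readable, which is what lets a cyber fusion centre automate ingestion and correlation across member organisations instead of relying on manual report reading.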


3 Powerful Ways AI is Supercharging Cloud Threat Detection

AI’s strength lies in pattern recognition across vast datasets. By analysing historical and real-time data, AI can differentiate between benign anomalies and true threats, improving the signal-to-noise ratio for security teams. This means fewer false positives and more confidence when an alert does sound. ... When a security incident strikes, every second counts. Historically, responding to an incident has involved significant human effort – analysts must comb through alerts, correlate logs, identify the root cause, and manually contain the threat. This approach is slow, prone to error, and doesn’t scale well. It’s not uncommon for incident investigations to stretch into hours or days when done manually. Meanwhile, the damage (data theft, service disruption) continues to accrue. Human responders also face cognitive overload during crises, juggling tasks like notifying stakeholders, documenting events, and actually fixing the problem. ... It’s important to note that AI isn’t about eliminating the need for human experts but rather augmenting their capabilities. By taking over initial investigation steps and mundane tasks, AI frees human analysts to focus on strategic decision-making and complex threats. Security teams can then spend time on thorough analysis of significant incidents, threat hunting, and improving security posture, instead of constant firefighting.
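The baselining idea behind this signal-to-noise improvement can be shown with a toy statistical example: learn what "normal" looks like from history, then alert only on strong deviations. Real systems use far richer models; the z-score approach, the metric, and the threshold here are illustrative assumptions.

```python
import statistics

# Toy sketch of baselining for detection: score each new observation against
# a historical baseline and alert only on strong deviations, instead of
# alerting on every spike. The 3-sigma threshold is illustrative.

def anomaly_scores(history: list[float], observations: list[float]) -> list[float]:
    """Z-score of each observation against the historical baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [(x - mean) / stdev for x in observations]

# Hourly failed logins over recent days (baseline) vs. today's values.
baseline = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]
today = [5, 7, 40]  # the last value is a credential-stuffing burst

THRESHOLD = 3.0  # only alert beyond 3 standard deviations
alerts = [x for x, z in zip(today, anomaly_scores(baseline, today)) if z > THRESHOLD]
print(alerts)  # -> [40]
```

The ordinary jitter (5, 7) stays below threshold while the burst is flagged, which is the "fewer false positives, more confidence when an alert does sound" effect in miniature.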


The hidden gaps in your asset inventory, and how to close them

The biggest blind spot isn’t a specific asset. It is trusting that what’s on paper is actually live and in production. Many organizations focus solely on known assets within their documented environments, but this can create a false sense of security. Blind spots are not always the result of malicious intent, but rather of decentralized decision-making, forgotten infrastructure, or evolving technology that hasn’t been brought under central control. External applications, legacy technologies, and abandoned cloud infrastructure, such as temporary test environments, may remain vulnerable long after their intended use. These assets pose a risk, particularly when they are unintentionally exposed through misconfiguration or overly broad permissions. Third-party and supply chain integrations present another layer of complexity. ... Traditional discovery often misses anything that doesn’t leave a clear, traceable footprint inside the network perimeter. That includes subdomains spun up during campaigns or product launches; public-facing APIs without formal registration or change control; third-party login portals or assets tied to your brand; code repositories; or misconfigured services exposed via DNS. These assets live on the edge, connected to the organization but not owned in a traditional sense.
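One lightweight way to surface such edge assets is to probe candidate subdomains that campaigns or launches may have left behind. This is a simplified sketch: the wordlist and domain are invented, the resolver is injectable so the function can be demonstrated offline, and real discovery would also draw on certificate-transparency logs, DNS zone data, and code repositories.

```python
import socket

# Sketch of lightweight external-asset discovery: check which candidate
# subdomains actually resolve in DNS. Wordlist and domain are illustrative.

CANDIDATES = ["www", "api", "staging", "promo", "launch", "test"]

def discover(domain: str, resolve=socket.gethostbyname) -> list[str]:
    """Return the candidate subdomains that resolve.
    `resolve` is injectable so the function can be exercised offline."""
    found = []
    for label in CANDIDATES:
        host = f"{label}.{domain}"
        try:
            resolve(host)  # raises OSError if the name does not resolve
            found.append(host)
        except OSError:
            pass
    return found

# Offline example with a stub resolver standing in for real DNS.
def stub(host):
    if host not in {"www.example.com", "staging.example.com"}:
        raise OSError("NXDOMAIN")
    return "192.0.2.1"

print(discover("example.com", resolve=stub))
# -> ['www.example.com', 'staging.example.com']
```

A forgotten name like staging.example.com resolving long after the project ended is exactly the kind of live-but-undocumented asset the article says point-in-time inventories miss.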