
Daily Tech Digest - May 17, 2026


Quote for the day:

“In tech, leadership isn’t about predicting the future — it’s about creating the conditions where your teams can build it.” -- Unknown



Scale ‘autonomous intelligence’ for real growth

In an interview with Ryan Daws, Prakul Sharma, the AI and Insights Practice Leader at Deloitte Consulting LLP, explains that modern enterprises must look beyond the localized productivity gains of generative AI to scale "autonomous intelligence" for real business growth. Sharma describes an intelligence maturity curve transitioning from assisted and artificial intelligence into autonomous intelligence, where systems independently execute actions within predefined boundaries. To unlock true economic value, organizations must integrate these autonomous agents directly into critical, costly workflows like enterprise procurement. However, scaling successfully faces significant technical and structural hurdles. First, enterprises frequently lack decision-grade data, that is, the real-time, traceable information required for binding transactions; instead they rely on outdated reporting-grade data. Second, the production gap and governance debt often stall live deployments, because shortcuts taken during small pilots become major barriers for corporate legal and compliance teams. Sharma advises leaders to conduct thorough decision audits of existing workflows to uncover operational bottlenecks and data gaps. By building pilots from the very outset as reusable platforms equipped with proper identity verification, continuous model evaluations, and robust risk frameworks, enterprises can securely transition from experimental testing to successful, widespread live deployment.


6 Technical Red Flags Product Managers Should Never Ignore

In the article "6 Technical Red Flags Product Managers Should Never Ignore," Seyifunmi Olafioye emphasizes that product managers must recognize signs of underlying technical instability, as it directly impacts delivery, scalability, and customer trust. The author identifies six major red flags that product managers should never overlook: a lack of clear understanding among the team regarding how the system works, new feature development consistently taking much longer than estimated, and resolved bugs repeatedly resurfacing in production. Additionally, product managers should be concerned if operational teams must rely heavily on manual workarounds to keep the platform functioning, if the entire project suffers from an over-reliance on a single engineer's institutional knowledge, or if internal errors are only discovered after users report them due to a lack of proper monitoring. While no system is entirely flawless, ignoring these persistent warning signs can lead to severe operational issues. The article concludes that product managers should not dictate technical fixes; instead, they must proactively initiate honest conversations with engineering leadership, ask challenging questions during planning, and prioritize long-term technical health alongside new features to ensure sustainable growth and protect the user experience.


Q-Day and the "harvest now, decrypt later" threat

In this article, Ed Leavens argues that Quantum Day, known as Q-Day, is the precise moment when quantum computers become advanced enough to break existing asymmetric encryption standards like RSA and ECC, presenting a far greater threat than Y2K. While Y2K had a definitive deadline and a known remedy, Q-Day has no set timeline and introduces the insidious risk of "harvest now, decrypt later" (HNDL) tactics. Under HNDL, adversaries secretly exfiltrate and stockpile encrypted data today, waiting to decrypt it once sufficiently powerful quantum technology becomes available. Furthermore, this threat compounds daily due to modern data sprawl across multiple environments. To counter this impending crisis, organizations must look beyond traditional encryption upgrades and adopt data-layer protection strategies like vaulted tokenization. This quantum-resilient approach mathematically separates original sensitive data from its representation by replacing it with non-sensitive, format-preserving tokens. Because tokens share no reversible mathematical connection with the underlying information, quantum algorithms cannot decipher them, effectively neutralizing the value of stolen payloads. Implementing vaulted tokenization requires comprehensive data discovery, strict access governance, and cross-functional organizational alignment. Ultimately, Leavens emphasizes that enterprises must act immediately to secure their data directly, rendering harvested information useless before quantum-powered breaches materialize.


The AI infrastructure bottleneck is becoming a CIO problem

The article by Madeleine Streets explores how the expanding ambitions of artificial intelligence are colliding with physical infrastructure limitations, shifting the AI bottleneck from a general tech industry challenge into a critical problem for Chief Information Officers (CIOs). While billions of dollars continue pouring into AI development, physical realities like power grid limitations, data center construction delays, permitting hurdles, and cooling requirements are struggling to match software demand. This mismatch threatens to create a more constrained operating environment where AI access becomes expensive, delayed, or regionally uneven. Consequently, this pressure exposes "AI sprawl" within organizations where uncoordinated and disconnected AI initiatives compete for the same resources without centralized governance. To mitigate these risks, experts suggest that CIOs treat AI capacity as a core operational resilience and business continuity issue. IT leaders must introduce disciplined governance by tiering AI workloads into critical, important, and experimental categories, or utilizing smaller, local models to reduce compute reliance. Furthermore, CIOs must demand greater transparency from vendors regarding capacity guarantees, regional availability, and workload prioritization during peak demand. Ultimately, enterprise AI strategies can no longer assume infinite compute availability and must instead realign their deployment ambitions with physical operational constraints.


How AI Is Repeating Familiar Shadow IT Security Risks

The rapid adoption of artificial intelligence across the corporate enterprise is triggering new governance and security risks that closely mirror past technological shifts, such as the initial emergence of shadow IT and unauthorized software as a service platform usage. Modern organizations currently face three primary vectors of vulnerability, starting with employees inadvertently leaking proprietary intellectual property, corporate source code, and confidential financial records by pasting this data into public generative AI platforms. Furthermore, software developers frequently introduce hidden backdoors or compromised dependencies into production systems by integrating unverified open source models and components that circumvent traditional software supply chain scrutiny. Compounding these operational issues is the sudden rise of autonomous AI agents that operate with dynamic decision making authority but completely lack explicitly defined ownership or documented permission boundaries within internal corporate networks. To successfully mitigate these vulnerabilities, blanket restrictive policies are typically ineffective; instead, companies must establish robust frameworks that ensure absolute visibility, accountability, and adaptive identity controls. As detailed in the SANS Institute’s new AI Security Maturity Model, managing these continuous threats requires treating artificial intelligence not as an isolated software application, but as a critical operational layer demanding proactive lifecycle validation and verification.


Six priorities reshaping the MENA boardroom in 2026

The EY report details how the 2026 macroeconomic landscape in the Middle East and North Africa (MENA) region requires corporate boardrooms to transition from traditional, periodic oversight toward integrated, forward-looking strategic leadership. Driven by overlapping pressures across geopolitics, rapid technological innovation, sustainability demands, and complex governance regulations, MENA boards face a highly volatile operating environment. To navigate this uncertainty and secure long-term value, directors must actively address six central boardroom priorities. First, boards need to develop geopolitical foresight, embedding regional shifts directly into strategic scenario planning. Second, they must manage the expanding technology and cyber assurance landscape, ensuring ethical artificial intelligence governance and robust defenses against escalating digital threats. Third, strengthening corporate integrity, fraud prevention, and independent investigation oversight remains essential for maintaining stakeholder trust. Fourth, elevating climate resilience and sustainability governance helps mitigate critical environmental risks while driving resource efficiency. Fifth, achieving financial excellence requires rigorous cost optimization and aligning internal controls across financial and sustainability reporting frameworks. Finally, adopting mature, behavioral-based board evaluations over mere procedural assessments fosters deep accountability. Ultimately, orchestrating these interconnected priorities empowers MENA leaders to fortify institutional trust and transform market disruptions into sustainable growth.


The software supply chain is the new ground zero for enterprise cyber risk. Don’t get caught short

In this article, Matias Madou highlights the rising vulnerabilities within the software supply chain as the new ground zero for enterprise cyber risks, heavily exacerbated by the rapid adoption of artificial intelligence tools. Recent highly sophisticated breaches, such as the TeamPCP supply chain attacks, have aggressively weaponized critical security and developer platforms like Checkmarx and the open-source library LiteLLM. By embedding highly obfuscated, multistage credential stealers into these trusted systems, attackers successfully moved laterally through development pipelines and Kubernetes clusters to exfiltrate highly sensitive enterprise data. Madou warns that traditional, reactive security measures are entirely insufficient against fast-moving, AI-driven threats. To mitigate these expanding dangers, organizations must redefine AI middleware as critical infrastructure, implementing rigorous monitoring of application programming interface keys and environment variables that constantly flow through these abstraction layers. Furthermore, security leaders must modernize risk management strategies by locking down dependency pipelines, enforcing strict least-privilege access, and gaining visibility into autonomous Model Context Protocol agents. Ultimately, the author urges modern enterprises to establish comprehensive internal AI governance frameworks and continuously upskill developers in secure coding standards rather than waiting for formal government legislation, thereby proactively shielding their operational workflows from devastating, cascading supply-chain compromises.


World Bank, African DPAs outline formula for trusted digital identity, DPI

During the ID4Africa 2026 Annual General Meeting, a key World Bank presentation emphasized that establishing public trust is vital for the success of digital public infrastructure and national identity systems across Africa. Experts noted that even mature digital identity networks remain vulnerable to operational failures and public mistrust due to weak data collection safeguards, frequent data breaches, and expanding cyberattack surfaces. To address these vulnerabilities, data protection authorities from nations like Liberia, Benin, and Mauritius highlighted that digital forensics, cybersecurity, and rigorous data governance must operate collectively. Although these under-resourced regulatory bodies often struggle to fund large population-scale awareness campaigns, they are pioneering localized solutions. For example, Mauritius leverages chief data officers and amicable dispute resolution mechanisms to efficiently settle compliance breaches without lengthy prosecution, while Benin relies on specialized government liaisons to ensure proper database compliance across different agencies. Furthermore, regional frameworks like the East African Community body facilitate international knowledge-sharing and joint investigative capabilities. Ultimately, achieving an ecosystem worthy of citizen and business trust requires a comprehensive formula blending careful system architecture, strictly enforced data protection, robust cybersecurity defenses, and transparent communication that effectively helps citizens understand their rights within the broader data lifecycle.


When configuration becomes a vulnerability: Exploitable misconfigurations in AI apps

The rapid deployment of artificial intelligence and agentic applications on cloud-native platforms, particularly Kubernetes clusters, often compromises cybersecurity in favor of operational speed. According to the Microsoft Defender Security Research Team, this trend has led to an increase in exploitable misconfigurations, which are scenarios where public internet access is paired with absent or weak authentication mechanisms. Rather than relying on sophisticated zero-day vulnerabilities, threat actors can leverage these low-effort attack paths to achieve high-impact compromises, including remote code execution, credential exfiltration, and unauthorized access to sensitive internal data. Microsoft identified these specific dangers across several popular AI platforms: Model Context Protocol servers frequently permitted unauthenticated interaction with corporate tools, Mage AI default setups enabled internet-accessible administrative shells, and frameworks like kagent and AutoGen Studio leaked plaintext API keys or allowed unauthorized workload deployments. To mitigate these pervasive security gaps, organizations must treat AI systems as high-impact workloads. Security teams should enforce strong authentication across all endpoints, apply strict least-privilege principles, and continuously audit infrastructure configurations. Furthermore, cloud protection tools like Microsoft Defender for Cloud can actively detect exposed services, helping defenders remediate dangerous oversights before malicious adversaries can exploit them.
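The pattern Microsoft describes, public internet access paired with absent or weak authentication, is simple enough to express as a triage rule over a service inventory. The sketch below is illustrative only; the service names and fields are hypothetical, not Defender for Cloud's actual data model.

```python
from dataclasses import dataclass

@dataclass
class ServiceConfig:
    name: str
    internet_exposed: bool
    auth: str  # "none", "weak" (e.g. default credentials), or "strong"

def is_exploitable_misconfig(cfg: ServiceConfig) -> bool:
    """Flags the pattern described above: public internet exposure
    paired with absent or weak authentication."""
    return cfg.internet_exposed and cfg.auth in ("none", "weak")

# Hypothetical inventory echoing the platforms named in the research.
inventory = [
    ServiceConfig("mcp-server", internet_exposed=True, auth="none"),
    ServiceConfig("mage-ai-admin", internet_exposed=True, auth="weak"),
    ServiceConfig("internal-api", internet_exposed=False, auth="none"),
]
flagged = [s.name for s in inventory if is_exploitable_misconfig(s)]
# flagged → ["mcp-server", "mage-ai-admin"]
```

Note that the internal service with no authentication is not flagged: the research's definition hinges on the combination of exposure and weak auth, though least-privilege principles would still argue for fixing it.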


Tokenized assets face trust infrastructure test, Cardano chief says

The article, titled "Tokenized assets face trust infrastructure test, Cardano chief says," by Jeff Pao, outlines a pivotal shift in the digital assets sector as financial institutions transition from tentative pilot projects to scaled, production-level tokenization. According to Cardano’s leadership, the primary challenges facing this widespread adoption are no longer the core blockchain mechanisms themselves, but rather the underlying hurdles of verification, identity, and robust auditability. These elements form a critical "trust infrastructure" that remains essential for creating compliant, institutional-grade financial networks. As real-world asset tokenization expands rapidly across global markets, traditional financial institutions require secure mechanisms like decentralized identifiers and privacy-preserving verifiable credentials to interact safely with public ledgers. By embedding accountability directly into the network architecture, digital trust frameworks turn complex compliance into seamless operational coordination, enabling institutions to efficiently manage counterparty exposure and automated settlement risks without exposing sensitive transactional data. Ultimately, the piece underscores that the long-term survival of decentralized finance relies heavily on resolving these identity and legal infrastructure gaps. Establishing a standardized trust layer will determine whether tokenized finance achieves mature stability or succumbs to institutional fragility and unresolved regulatory friction, marking a major turning point for future global capital flows.

Daily Tech Digest - March 08, 2026


Quote for the day:

"How was your day? If your answer was "fine," then I don't think you were leading" -- Seth Godin



Technical debt is the tax killing AI ambition

In this article, Rebecca Fox argues that while artificial intelligence offers game-changing productivity, most organizations remain fundamentally ill-prepared for its full-scale adoption due to legacy technical and data debt. She compares technical debt to financial debt, where deferred maintenance acts as high-interest payments that stifle agility and increase operational costs. The article emphasizes that AI functions as a high-speed spotlight, amplifying "garbage in, garbage out" scenarios; without robust data governance and simplified information architecture, AI initiatives inevitably plateau or produce confidently incorrect results. Furthermore, the tension between AI ambition and economic reality is heightened by CFOs who are increasingly wary of large-scale investments with uncertain returns. Fox contends that instead of seeking a "magic wand" solution, leaders must use the current excitement surrounding AI as a catalyst to finally address unglamorous foundational work. This involves simplifying core platforms, reducing integration sprawl, and prioritizing data quality across the business. Ultimately, AI cannot fix technical debt on its own, but it serves as a critical reason to resolve it, ensuring that organizations can scale effectively without being crushed by the compounding costs of their own legacy systems and fragmented data estates.


Why Executive Presence Is A Hard Asset (Not A Soft Skill)

The article argues that executive presence is a tangible, measurable business driver rather than an abstract personality trait. By linking trust directly to revenue performance and organizational stability, the author highlights how leaders serve as the primary conduits for corporate credibility. In an era increasingly dominated by AI-driven skepticism and the complexities of hybrid work, authentic presence provides essential reassurance to stakeholders. The piece emphasizes that executive presence functions as a shorthand for judgment, influencing how investors, employees, and customers evaluate a leader's ability to deliver results. It identifies specific components of this asset, including vocal delivery, media training, and disciplined messaging, noting that perception is heavily influenced by nonverbal cues like tone and pitch. Furthermore, the article suggests that a comprehensive public relations strategy is necessary to sustain this presence over time. Ultimately, investing in executive presence is presented as a strategic move that creates durable value, strengthens leadership effectiveness, and offers a steadying force during periods of uncertainty. Rather than being a "soft" addition, it is a critical hard asset that determines long-term success and reputational resilience in a competitive landscape.


NIST Urged to Go Deep in OT Security Guidance

The National Institute of Standards and Technology (NIST) is currently updating its foundational operational technology (OT) security guidance, Special Publication 800-82, for its fourth iteration. In response to NIST’s call for input, cybersecurity experts and major vendors like Claroty, Armis, and Dragos are advocating for more granular, actionable advice that reflects the maturing nature of the field. These specialists emphasize that traditional IT security practices are often inadequate or even hazardous when applied to sensitive industrial environments. Key recommendations include moving beyond binary "scan or don’t scan" dilemmas by establishing passive assessment baselines and adopting risk-based frameworks for controlled active scanning. Furthermore, there is a strong push for NIST to harmonize its guidelines with global technical standards, such as ISA/IEC 62443, to reduce regulatory burdens on operators. Experts also suggest shifting static appendices into dynamic, machine-readable web resources to better address evolving threats. By focusing on asset criticality and multidimensional vulnerability scoring rather than just static CVSS data, the updated guidance could provide the technical depth necessary for modern industrial automation. Ultimately, the goal is to provide clear, specific instructions that leave less room for ambiguity in securing critical infrastructure.
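To illustrate what "multidimensional vulnerability scoring" might mean in practice, the sketch below scales a static CVSS base score by asset criticality and network exposure. The weighting is invented for illustration; it is not drawn from SP 800-82, ISA/IEC 62443, or any vendor's scoring model.

```python
def ot_risk_score(cvss_base: float, asset_criticality: float,
                  exposure: float) -> float:
    """Illustrative multidimensional score: scales a CVSS base score
    (0-10) by asset criticality and network exposure (each 0-1), so a
    medium-severity CVE on a safety-critical, reachable controller can
    outrank a critical CVE on an isolated test rig."""
    return round(cvss_base * (0.5 + 0.5 * asset_criticality)
                           * (0.5 + 0.5 * exposure), 2)

# Critical CVE on an air-gapped test rig vs. a medium CVE on a
# safety-critical controller reachable from the plant network:
isolated = ot_risk_score(9.8, asset_criticality=0.1, exposure=0.0)
critical = ot_risk_score(6.5, asset_criticality=1.0, exposure=1.0)
```

Under this (assumed) weighting, `critical` exceeds `isolated`, which is exactly the prioritization inversion that static CVSS data alone cannot express.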


Signals Show Heightened Stress on Workplace Cultures

The NAVEX 2025 Whistleblowing and Incident Management Benchmark Report, as detailed on JD Supra, highlights a significant rise in workplace culture stressors, particularly regarding workplace civility. This category, which includes disrespectful behaviors that do not necessarily meet legal definitions of harassment, now accounts for nearly 18% of global reports. The data reveals a notable regional divergence; while North America saw a slight decrease, reports increased across Europe, APAC, and South America, signaling maturing reporting cultures that now treat "soft" cultural issues as formal compliance matters. Furthermore, workplace conduct issues dominate over half of all global reports, serving as a critical early warning system for broader ethical failures. The report also notes a concerning uptick in retaliation fears and imminent threat reports, the latter of which boasts a 90% substantiation rate. These trends suggest that unresolved interpersonal tensions can escalate into serious safety risks and compliance breaches. To mitigate these risks in 2026, organizations are urged to elevate workplace civility to a strategic priority, strengthen anti-retaliation protections, and improve investigation transparency. Ultimately, the findings underscore that psychological safety is foundational to effective whistleblowing systems and overall organizational resilience in an increasingly volatile global landscape.


Backup strategies are working, and ransomware gangs are responding with data theft

According to the 2026 Cyber Claims Report from Coalition, business email compromise (BEC) and funds transfer fraud (FTF) dominated the cyber insurance landscape in 2025, accounting for 58% of all claims. While BEC frequency rose by 15%, faster detection helped reduce the average loss per incident. Conversely, ransomware frequency remained flat, but initial demands surged by 47% to exceed $1 million on average. This shift highlights a strategic change among attackers: as organizations improve their backup strategies, ransomware gangs are increasingly pivoting toward dual extortion, which involves both data encryption and theft. In fact, 70% of ransomware claims now involve this dual-threat tactic. The report identifies Akira as the most frequent ransomware variant, while RansomHub carried the highest average demand at over $2.3 million. Despite these aggressive tactics, 86% of victims refused to pay, and those who did often utilized professional negotiators to reduce costs by an average of 65%. Technically, VPNs emerged as the most targeted technology, appearing in 59% of ransomware incidents. Security experts emphasize that organizations must prioritize data minimization and hardened, immutable backups, while securing public-facing login panels and critical infrastructure, to combat these evolving threats effectively.


Only 30 minutes per quarter on cyber risk: Why CISO-board conversations are falling short

The article "Only 30 minutes per quarter on cyber risk: Why CISO-board conversations are falling short" explores a widening communication gap between Chief Information Security Officers (CISOs) and corporate boards. Despite the escalating threat of AI-driven cyberattacks, research from IANS and Artico Search indicates that three-quarters of security leaders are limited to just 30 minutes per quarter for board presentations. These interactions are frequently superficial, prioritizing status metrics over strategic risk discussions or emerging threats. Consequently, only 30% of boards describe their relationship with CISOs as strong and collaborative, while many others perceive these interactions as merely functional. The report further notes that boards often remain passive, with fewer than half participating in active exercises like tabletop simulations or crisis drills. To address this divide, the article suggests that CISOs must transition from technical specialists into business-minded leaders who can effectively contextualize cybersecurity within the broader landscape of organizational risk and ROI. By cultivating deeper engagement and offering predictive insights—particularly regarding disruptive technologies like AI—CISOs can evolve these brief updates into substantive strategic partnerships that enhance long-term organizational resilience in an increasingly volatile and complex global digital threat environment.


Ask the Experts: CIOs say they wouldn’t pull workloads back from the cloud

The InformationWeek article, "Ask the Experts: CIOs Say They Wouldn’t Pull Workloads Back from the Cloud," explores the phenomenon of cloud repatriation versus the steadfast commitment of leading IT executives to cloud environments. While data from Flexera suggests that roughly 21% of organizations are returning some workloads to on-premises infrastructure due to costs and security concerns, experts Josh Hamit and Sue Bergamo argue that the cloud remains the ultimate destination for modern innovation. Hamit, CIO of Altra Federal Credit Union, attributes his success to a deliberate, gradual migration strategy and the use of experienced partners, noting that the cloud provides unmatched scalability and essential tie-ins for artificial intelligence. Similarly, Bergamo, a veteran CIO and CISO, contends that with proper architectural configuration, the cloud offers security and performance levels that rival or exceed traditional data centers. She emphasizes that perceived drawbacks like latency and overage charges are typically results of poor planning rather than inherent flaws in the cloud model itself. Both leaders conclude that the agility, global reach, and innovative potential of cloud computing make it an indispensable asset, asserting they would not reverse their digital transformations if given the chance to start over today.


The cybersecurity blind spot in data center building systems

This article argues that the rapid expansion of data centers, fueled by the global AI revolution, has introduced a critical vulnerability in Operational Technology (OT). While digital security often focuses on data protection, the physical systems controlling power, cooling, and access are increasingly susceptible to remote exploitation. Modern facilities are marvels of automation, frequently managed via remote networks with minimal on-site staff, which inadvertently creates prime targets for sophisticated adversaries. Drawing parallels to historical breaches like the Stuxnet attack and the Ukrainian power grid incident, the piece warns that similar tactics could be used to manipulate environmental controls, causing power surges or overheating that could permanently damage sensitive GPUs. Furthermore, the integration of AI into facility management creates new entry points; if corrupted, the same algorithms intended to optimize performance could be weaponized to sabotage operations. The author contends that existing safeguards, such as periodic stress tests, are insufficient in this evolving threat landscape. Ultimately, investors and operators are urged to prioritize OT security through rigorous due diligence and proactive questioning to ensure that these essential infrastructure components do not remain a dangerous oversight in the rush to build.


Technical Debt Is Eating Your Firmware Alive: 3 Steps to Fight Back

In the article "Technical Debt Is Eating Your Firmware Alive: 3 Steps to Fight Back," Jacob Beningo explains how firmware technical debt accumulates when deadline pressures force developers to take shortcuts, resulting in tangled architectures and global variable "glue." Beningo identifies this as a leadership challenge, noting that organizations often prioritize immediate feature delivery over long-term code health. The symptoms of high debt include plummeting feature velocity, extended bug-fix times, and constant firefighting, leading to maintenance costs that are two to four times higher than clean codebases. To reverse this trend, Beningo outlines three practical steps for teams to implement immediately. First, make debt visible by measuring objective metrics like coupling and cyclomatic complexity. Second, institute lightweight, fifteen-minute code reviews focused on maintaining module boundaries rather than just finding bugs. Third, reclaim one specific architectural boundary at a time to prevent total paralysis. By enforcing even a single interface, teams can begin restoring order to their repository. Ultimately, Beningo argues that firmware must be treated as a valuable asset rather than a liability. Proactive management of technical debt ensures that long-lived embedded products remain maintainable and profitable without necessitating costly, high-risk rewrites later on.


Misconfigured Microsoft 365 leaves big firms exposed

According to recent research from CoreView, nearly half of large organizations experienced security or compliance incidents over the past year due to Microsoft 365 misconfigurations. The study, which surveyed 500 IT leaders and analyzed data from 1.6 million users, highlights that 82% of professionals consider managing the platform a severe operational burden, with many finding it nearly impossible to secure at scale. Significant visibility gaps persist, as 45% of organizations lack full control over their environments, while 90% struggle with basic security hygiene like enforcing password policies. Critical vulnerabilities are also evident in authentication practices; remarkably, 87% of organizations have administrators operating without multi-factor authentication. Furthermore, governance issues have led to failed or delayed audits for 43% of firms because of manual reporting processes. While 70% of IT leaders recognize the potential value of AI-driven administration, over half have already reversed AI-implemented changes due to governance fears. CoreView warns that deploying AI into these misconfigured environments without established guardrails only accelerates risk rather than solving underlying structural problems. Consequently, firms must prioritize strengthening their governance foundations and basic security controls before expanding automation across their increasingly complex Microsoft 365 ecosystems to prevent cascading data exposure.
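A finding like the 87% MFA gap reduces to a simple filter once account data has been exported. In the sketch below the field names are hypothetical and do not reflect an actual Microsoft 365 or Graph API schema; it only illustrates the shape of such an audit.

```python
# Hypothetical export of privileged accounts; the field names are
# illustrative, not an actual Microsoft 365 / Graph API schema.
admins = [
    {"upn": "globaladmin@contoso.example", "mfa_enabled": False},
    {"upn": "secadmin@contoso.example",    "mfa_enabled": True},
    {"upn": "breakglass@contoso.example",  "mfa_enabled": False},
]

def admins_without_mfa(accounts: list[dict]) -> list[str]:
    """Return the sign-in names of administrator accounts lacking
    multi-factor authentication."""
    return [a["upn"] for a in accounts if not a["mfa_enabled"]]

exposed = admins_without_mfa(admins)
# exposed → ["globaladmin@contoso.example", "breakglass@contoso.example"]
```

The point of the CoreView findings is that checks this basic are still failing at scale; automating them is the kind of governance foundation the report urges firms to build before layering AI on top.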

Daily Tech Digest - January 22, 2026


Quote for the day:

"Lost money can be found. Lost time is lost forever. Protect what matters most." -- @ValaAfshar



PTP is the New NTP: How Data Centers Are Achieving Real-Time Precision

Precision Time Protocol (PTP) is more complex to implement than NTP, but the extra effort enables a whole new level of timing synchronization accuracy. ... Keeping network time in sync is important on any network. But it’s especially critical in data centers, which are typically home to large numbers of network-connected devices, and where small inconsistencies in network timing could snowball into major network synchronization problems. ... NTP works very well in situations where networks can tolerate timing inconsistencies of up to a few milliseconds (meaning thousandths of a second). But beyond this, NTP-based time syncing is less reliable due to limitations ... Unlike NTP, PTP doesn’t rely solely on a server-client model for syncing time across networked devices. Instead, it uses time servers in conjunction with a method called hardware timestamping on client devices. Hardware timestamping involves specialized hardware components, usually embedded in network interface cards (NICs), to track time. Central time servers still exist under PTP. But rather than having software on servers connect to the time servers, hardware devices optimized for the task do this work. The devices also include built-in clocks, allowing them to record time data faster than they could if they had to forward it to the generic clock on a server.
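Both NTP and PTP ultimately estimate a client's clock offset from a two-way exchange of timestamps, using the standard formulas offset = ((t2 - t1) + (t3 - t4)) / 2 and round-trip delay = (t4 - t1) - (t3 - t2). A minimal sketch of that arithmetic:

```python
def clock_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Standard two-way time-transfer estimate:
    t1 = client send time, t2 = server receive time,
    t3 = server send time, t4 = client receive time.
    Returns (offset, round_trip_delay) in the units of the inputs.
    Assumes symmetric forward/return paths; it is exactly the
    asymmetry and software-stack jitter in these timestamps that
    PTP's hardware timestamping helps minimize."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Client clock 5 ms behind the server, 2 ms one-way path delay:
# t1=100, t2=107 (100 + 2 + 5), t3=108, t4=105 (108 + 2 - 5)
offset, delay = clock_offset_and_delay(100, 107, 108, 105)
# offset → 5.0 (client is behind by 5 ms), delay → 4 (round trip)
```

With software timestamping, each of the four values carries operating-system and network-stack noise, which is why NTP bottoms out around milliseconds; PTP's NIC-level timestamps feed the same formulas far cleaner inputs.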


Why AI adoption requires a dedicated approach to cyber governance

Today enterprises are facing unprecedented internal pressure to adopt AI tools at speed. Business units are demanding AI solutions to remain competitive, drive efficiency, and innovate faster. But existing cyber governance and third-party risk management processes were never designed to operate at this pace. ... Without modernized cyber governance and AI-ready risk management capabilities, organizations are forced to choose between speed and safety. To truly enable the business, governance frameworks must evolve to match the speed, scale, and dynamism of AI adoption – transforming security from a gatekeeper into a business enabler. ... What’s more, compliance doesn’t guarantee security. DORA, NIS2, and other regulatory frameworks set only minimum requirements and rely on reporting at specific points in time. While these reports are accurate when submitted, they capture only a snapshot of the organization’s security posture, so gaps such as human errors, legacy system weaknesses, or risks from fourth- and Nth-party vendors can still emerge afterward. Moreover, human weakness is always present, and legacy systems can fail at crucial moments. ... While there’s no magic wand, there are tried-and-tested approaches that resolve and mitigate the risks of AI vendors and solutions. Mapping the flow of data around the organization helps reveal how it’s used and resolve blind spots. Requiring AI tools to include references for their outputs ensures that risk decisions are trustworthy and reliable.


What CIOs get wrong about integration strategy and how to fix it

As Gartner advises, business and IT should be equal partners in the definition of integration strategy, representing a radical departure from the traditional IT delivery and business “project sponsorship” model. This close collaboration and shared accountability result in dramatically higher success rates ... A successful integration strategy starts by aligning with the organization’s business drivers and strategic objectives while identifying the integration capabilities that need to be developed. Clearly defining the goals of technology implementation, establishing governance frameworks and decision-making authority and setting standards and principles to guide integration choices are essential. Success metrics should be tied to business outcomes, and the integration approach should support broader digital transformation initiatives. ... Create cross-functional data stewardship teams with authority to make binding decisions about data standards and quality requirements. Document what data needs to be shared between systems, which applications are the “source of truth.” Define and document any regulatory or performance requirements to guide your technical planning. ... Integrations that succeed in production are designed with clear system-of-record rules, traceable transactions, explicit recovery paths and well-defined operational ownership. Preemptive integration is not about reacting faster — it’s about ensuring failures never reach the business.


CFOs are now getting their own 'vibe coding' moment thanks to Datarails

For the modern CFO, the hardest part of the job often isn't the math—it's the storytelling. After the books are closed and the variances calculated, finance teams spend days, sometimes weeks, manually copy-pasting charts into PowerPoint slides to explain why the numbers moved. ... Datarails’ new agents sit on top of a unified data layer that connects these disparate systems. Because the AI is grounded in the company’s own unified internal data, it avoids the hallucinations common in generic LLMs while offering a level of privacy required for sensitive financial data. "If the CFO wants to leverage AI on the CFO level or the organization data, they need to consolidate the data," explained Datarails CEO and co-founder Didi Gurfinkel in an interview with VentureBeat. By solving that consolidation problem first, Datarails can now offer agents that understand the context of the business. "Now the CFO can use our agents to run analysis, get insights, create reports... because now the data is ready," Gurfinkel said. ... "Very soon, the CFO and the financial team themselves will be able to develop applications," Gurfinkel predicted. "The LLMs become so strong that in one prompt, they can replace full product runs." He described a workflow where a user could simply prompt: "That was my budget and my actual of the past year. Now build me the budget for the next year."


The internet’s oldest trust mechanism is still one of its weakest links

Attackers continue to rely on domain names as an entry point into enterprise systems. A CSC domain security study finds that large organizations leave this part of their attack surface underprotected, even as attacks become more frequent. ... Large companies continue to add baseline protections, though adoption remains uneven. Email authentication shows the most consistent improvement, driven by phishing activity and regulatory pressure. Organizations still leave email domains partially protected, which allows spoofing to persist. Other protections see much slower uptake. ... Consumer-oriented registrars tend to emphasize simplicity and cost. Organizations that rely on them often lack access to protections that limit the impact of account compromise or social engineering. Risk increases as domain portfolios grow and change. ... Brand impersonation through domain spoofing remains widespread. Lookalike domains tied to major brands are often owned by third parties. Some appear inactive while still supporting email activity. Inactive domains with mail records allow attackers to send phishing messages that appear associated with trusted brands. Others are parked with advertising networks or held for later use. A smaller portion hosts malicious content, though dormant domains can be activated quickly. ... Gaps appear in infrastructure related areas. DNS redundancy and registry lock adoption lag, and many unicorns rely on consumer-grade registrars. These limitations become more pronounced as operations scale.
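As a rough illustration of how lookalike-domain detection works in practice, a scanner can normalize common homoglyph substitutions and then compare the edit distance of a candidate domain against the protected brand. The substitution table and threshold below are illustrative assumptions, not an exhaustive ruleset:

```python
# A small, illustrative subset of character swaps attackers use in lookalikes.
HOMOGLYPHS = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m", "vv": "w"}

def normalize(domain):
    """Lowercase the first label and undo common homoglyph substitutions."""
    label = domain.lower().split(".")[0]
    for fake, real in HOMOGLYPHS.items():
        label = label.replace(fake, real)
    return label

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def is_lookalike(candidate, brand, threshold=1):
    if candidate.lower() == brand.lower():
        return False  # the brand's own domain, not an impersonation
    return edit_distance(normalize(candidate), normalize(brand)) <= threshold
```

Real brand-protection services add far more signals, such as newly registered domains, MX records on otherwise inactive sites, and certificate transparency logs, but the normalize-and-compare core is the same idea.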


Misconfigured demo environments are turning into cloud backdoors to the enterprise

Internal testing, product demonstrations, and security training are critical practices in cybersecurity, giving defenders and everyday users the tools and wherewithal to prevent and respond to enterprise threats. However, according to new research from Pentera Labs, when left in default or misconfigured states, these “test” and “demo” environments are yet another entry point for attackers — and the issue even affects leading security companies and Fortune 500 companies that should know better. ... After identifying an exposed instance of Hackazon, a free, intentionally vulnerable test site developed by Deloitte, during a routine cloud security assessment for a client, Yaffe performed a five-step hunt for exposed apps. His team uncovered 1,926 “verified, live, and vulnerable applications,” more than half of which were running on enterprise-owned infrastructure on AWS, Azure, and Google Cloud platforms. They then discovered 109 exposed credential sets, many accessible via a low-priority lab environment, tied to overly privileged identity access management (IAM) roles. These often granted “far more access” than a ‘training’ app should, Yaffe explained, and provided attackers: administrator-level access to cloud accounts, as well as full access to S3 buckets, GCS, and Azure Blob Storage; the ability to launch and destroy compute resources and read and write to secrets managers; and permissions to interact with container registries where images are stored, shared, and deployed.
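The kind of over-privilege Yaffe describes is often visible directly in the policy document attached to a role. A minimal sketch of a least-privilege check over an AWS-style JSON policy follows; the rule list and function names are hypothetical, and real policy analyzers go much further:

```python
# Actions that grant far more access than a training/demo app should have.
RISKY_WILDCARDS = ("*", "s3:*", "iam:*", "secretsmanager:*")

def audit_policy(policy):
    """Return human-readable findings for over-broad Allow statements."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        for action in actions:
            if action in RISKY_WILDCARDS:
                findings.append(f"over-broad action: {action}")
        if stmt.get("Resource") == "*" and actions:
            findings.append("policy applies to all resources")
    return findings

# A demo-environment role with admin-equivalent permissions triggers findings;
# a narrowly scoped read-only statement passes clean.
demo_role = {"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}
print(audit_policy(demo_role))
```

Running a check like this in CI before a lab or demo role is provisioned is one way to keep “training” credentials from becoming cloud-account backdoors.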


Cyber Insights 2026: API Security – Harder to Secure, Impossible to Ignore

“We’re now entering a new API boom. The previous wave was driven by cloud adoption, mobile apps, and microservices. Now, the rise of AI agents is fueling a rapid proliferation of APIs, as these systems generate massive, dynamic, and unpredictable requests across enterprise applications and cloud services,” comments Jacob Ideskog ... The growing use of agentic AI systems and the way they act autonomously, making decisions and triggering workflows, is ballooning the number of APIs in play. “It isn’t just ‘I expose one billing API’,” he continues, “now there are dozens of APIs that feed data to LLMs or AI agents, accept decisions from AI agents, facilitate orchestration between services and micro-apps, and potentially expose ‘agentic’ endpoints ... APIs have been a major attack surface for years – the problem is ongoing. Starting in 2025 and accelerating through 2026 and beyond, the rapid escalation of enterprise agentic AI deployments will multiply the number of APIs and increase the attack surface. That alone suggests that attacks against APIs will grow in 2026. But the attacks themselves will scale and be more effective through adversaries’ use of their own agentic AI. Barr explains: “Agentic AI means that bad actors can automate reconnaissance, probe API endpoints, chain API calls, test business-logic abuse, and execute campaigns at machine scale. Possession of an API endpoint, particularly a self-service, unconstrained one, becomes a lucrative target. And AI can generate payloads, iterate quickly, bypass simple heuristics, and map dependencies between APIs.”


Complex VoidLink Linux Malware Created by AI

An advanced cloud-first malware framework targeting Linux systems was created almost entirely by artificial intelligence (AI), a move that signals significant evolution in the use of the technology to develop advanced malware. VoidLink — comprising various cloud-focused capabilities and modules, and designed to maintain long-term persistent access to Linux systems — is the first case of wholly original malware being developed by AI, according to Check Point Research, which discovered and detailed the malware framework last week. While other AI-generated malware exists, it's typically "been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open-source malware tools," ... The malware framework, linked to a suspected, unspecified Chinese actor, includes custom loaders, implants, rootkits, and modular plug-ins. It also automates evasion as much as possible by profiling a Linux environment and intelligently choosing the best strategy for operating without detection. Indeed, as Check Point researchers tracked VoidLink in real time, they watched it transform quickly from what appeared to be a functional development build into a comprehensive, modular framework that became fully operational in a short timeframe. However, while the malware itself was high-functioning out of the gate, VoidLink's creator proved to be somewhat sloppy in their execution.


What’s causing the memory shortage?

Right now, the industry is suffering the worst memory shortage in history, and that’s with three core suppliers: Micron Technology, SK Hynix, and Samsung. TrendForce, a Taipei-based market researcher that specializes in the memory market, recently said it expects average DRAM memory prices to rise between 50% and 55% this quarter compared to the fourth quarter of 2025. Samsung recently issued a similar warning. So what caused this? Two letters: AI. The rush to build AI-oriented data centers has resulted in virtually all of the memory supply being consumed by data centers. AI requires massive amounts of memory to process its gigantic data sets. A traditional server would usually come with 32 GB to 64 GB of memory, while AI servers have 128 GB or more. ... There are other factors at play here, too, of course. The industry is in a transition period between DDR4 and DDR5, as DDR5 comes online and DDR4 fades away. These transitions to a new memory format are never quick or easy, and it usually takes years to make a full shift. There has also been increased demand from both client and server sides. With Microsoft ending support for Windows 10, a whole lot of laptops are being replaced with Windows 11 systems, and new laptops come with DDR5 memory — the same memory used in an AI server. ... “What’s likely to happen, from a market perspective, is we’ll see the market grow less in ’26 than we had anticipated, but ASPs are likely to stay or increase. ...” he said.


OpenAI CFO Comments Signal End of AI Hype Cycle

By focusing on “practical adoption,” OpenAI can close the gap between what AI now makes possible and how people, companies, and countries are using it day to day. “The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes,” she noted. “Infrastructure expands what we can deliver,” she continued. “Innovation expands what intelligence can do. Adoption expands who can use it. Revenue funds the next leap. This is how intelligence scales and becomes a foundation for the global economy.” The framing reflects a shift from big-picture AI promise to day-to-day deployment and measurable results. ... There’s also a gap between what AI can do and how people are actually using it in daily life, noted Natasha August, founder of RM11, a content monetization platform for creators in Carrollton, Texas. “AI tools are incredibly powerful, but for many people and businesses, it’s still unclear how to turn that power into something practical like saving time, making money, or improving how they work,” she told TechNewsWorld. In business, the gap lies between AI’s raw analytical capabilities and its ability to drive tangible, repeatable business outcomes, maintained Nithin Mummaneni ... “The winning play is less ‘AI that answers’ and more ‘AI that completes tasks safely and predictably,'” he continued. “Adoption happens when AI becomes part of the workflow, not a separate destination.”

Daily Tech Digest - November 07, 2025


Quote for the day:

"The best teachers are those who don't tell you how to get there but show the way." -- @Pilotspeaker



AI spending may slow down as ROI remains elusive

Some AI experts agree with Forrester that an AI market correction is on the way. Microsoft founder Bill Gates recently talked about the existence of an AI bubble, and industry observers have noted that some AI excitement is dimming. Many don’t see an AI bubble that will burst in the near future, but it’s deflating a bit. Still others don’t see much of a slowdown in the near term. ... Some organizations are not achieving the accuracy they need from AI tools, and others are not finding their data to be easily accessible or properly structured, says Sam Ferrise, CTO of IT consulting firm Trinetix. “Many organizations are realizing that their expectations for AI accuracy and performance don’t always align with the level of investment they’re willing — or able — to make,” he says. “The key is calibrating expectations relative to both the investment and the use case.” In other cases, enterprises deploying AI are running into privacy or security problems, he adds. “Many teams successfully prove a use case with clear ROI, only to realize later that they must harden the solution before it can safely move into production,” Ferrise says. “When that alignment isn’t there, it’s natural for organizations to pause or delay spending until they can justify the value.” The prospect of a bubble bursting may be an overly dramatic scenario, although not impossible, he adds. It’s been easy for organizations to overlook intangible costs such as training, compliance, and governance.


Why can’t enterprises get a handle on the cloud misconfiguration problem?

“Microsoft, Google, and Amazon have handed us a problem,” says Andrew Wilder, CSO at Vetcor, a national network of more than 900 veterinary hospitals. “By default, everything is insecure, and you have to put security on top of it. It would be much better if they just gave us out-of-the-box secure stuff. Would you buy a car that doesn’t have locks? They wouldn’t even sell that car.” This security gap is what allows third-party vendors to exist, he says. “You should be building products — and I’m talking to you, Google, Microsoft, and Amazon — that are secure by design, so you don’t have to get a third-party tool. They should be out of the box secure.” ... When administrators or users make changes to cloud configurations in the cloud management consoles, it’s difficult to track those changes and to revert them if something goes wrong. Plus, humans can easily make mistakes. The solution experts advise is to adopt the principle of “infrastructure as code” and use configuration management tools so that all changes are checked against policies, tracked and audited, and can easily be rolled back. ... Companies will often have monitoring for major cloud services, but shadow IT deployments are left in the dark. This is less a technology problem than a management one and can be addressed by better communications with business units and a more disciplined approach to deploying technology on an enterprise-wide level. 
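The "infrastructure as code" advice above can be sketched as a policy gate: every proposed configuration change passes through declared rules before it is applied, and a failed check blocks the rollout. The rule names and configuration shape here are hypothetical stand-ins for what tools like policy-as-code engines formalize:

```python
# Each policy is a (name, predicate) pair; a change is rejected unless every
# predicate holds. In a real pipeline these checks run in CI against the
# infrastructure-as-code diff, so misconfigurations are caught in review
# rather than discovered in production.
POLICIES = [
    ("block-public-access", lambda cfg: not cfg.get("public_access", False)),
    ("require-encryption", lambda cfg: cfg.get("encryption_at_rest", False)),
    ("require-owner-tag", lambda cfg: "owner" in cfg.get("tags", {})),
]

def check_change(proposed_config):
    """Return the names of policies the proposed change would violate."""
    return [name for name, rule in POLICIES if not rule(proposed_config)]

risky = {"public_access": True, "encryption_at_rest": False, "tags": {}}
print(check_change(risky))
```

Because the configuration itself lives in version control, a change that slips through can also be reverted with an ordinary rollback, which addresses the "difficult to track and revert" problem of console-driven changes.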


The Supply Chain Blind Spot: Protecting Data in Expanding IT Ecosystems

Data growth is no longer linear; it is exponential. The rise of AI, automation, and digital platforms has transformed how information is created, stored, and shared. In India, this acceleration is particularly visible. The country’s data centre industry has grown from 590 MW in 2019 to 1.4 GW in 2024, a 139% jump, and is projected to reach 3 GW by 2030, driven by cloud adoption, AI demand, and data localisation initiatives. This infrastructure boom, while positive, brings new operational realities. Most enterprises now operate across hybrid environments, combining on-premises, public cloud and SaaS-based data stores. Without unified oversight, these fragmented environments risk becoming silos. True resilience depends not just on protecting data but on understanding where it lives, how it moves, and who controls it. ... Globally, enterprises are reframing resilience as a core business capability. This approach requires integrating resilience principles into decision-making: from procurement and architecture design to crisis response. Simulated attacks, failover testing and dependency audits are becoming part of daily operational culture, not annual exercises. For Indian organizations, this mindset shift is vital. RBI’s ICT risk management directives and the DPDP Act establish the baseline; the differentiator lies in how proactively organizations operationalize these expectations. 


The power of low-tech in a high-tech world

Our high-tech society is impressive in the collective. But it robs individuals of skills. Most kids now can’t write cursive. And they can’t read it, either. They can’t read an analog clock or a paper map. The acceleration of technological innovation also accelerates the rate at which we lose skills. Videogames, smartphones, and dating apps — aided and abetted by the trauma of the COVID-19 lockdowns a few years ago — have left many young people alone without the skills to meet and connect with anyone, leading to a loneliness epidemic among the young. But losing old-fashioned skills and old-school tech knowledge is a choice we don’t have to make. ... Thousands of scientific reports all lead us to the same conclusion: Over-reliance on advanced technologies dulls critical thinking, weakens memory, reduces problem-solving skills, limits creativity, erodes attention spans, and fosters passive dependence on automated systems. ... What all these old-school approaches have in common is that they’re harder and take longer — and they leave you smarter and better connected. In other words, if you strategically cultivate the skills, habits, discipline and practice of older tech, you’ll be much more successful in your career and your life. And here’s one final point: The more high-tech our culture becomes, the more impactful old-school tech will be. So yes, by all means become brilliantly skilled at AI chatbot prompt engineering.


Why Leaders Cannot Outsource Communication

When communication is delegated to a proxy, that signal weakens. Employees notice the gap between what the leader says or doesn’t say, and what the organization does. This is why communication has an outsized impact on engagement. Gallup finds that 70% of the variance in employee engagement is explained by managers and leaders, not perks or policies. When leaders own the message, they create psychological safety: the sense that it’s safe to commit, speak up and take risks. When they don’t, that safety erodes. ... Delegating communication is tempting. Leaders are busy. They hire communications officers and agencies to manage the message. These roles are valuable, but they can’t substitute for the leader’s voice. A speechwriter can shape phrasing and a PR team can guide timing, but only the leader can deliver authenticity. As Murphy has written, “Leaders are accountable to employees: Candor about bad news as well as the good, and feedback that aligns with expectations.” Authenticity requires candor, even when the message is difficult. When communication comes from anyone else, it’s interpreted as institutional rather than personal. And people follow people, not institutions. ... The Operator Economy demands a new kind of scale, one built not on capital or code, but on human alignment. Communication is infrastructure. The CEO becomes the signal source around which all systems calibrate. When leaders “scale themselves” through clarity and consistency, they convert trust into throughput. 


Breaking the Burnout Cycle: How Smart Automation and ASPM Can Restore Developer Joy

Smart automation can rescue developers from repetitive drudgery by using AI to handle routine tasks like test writing, bug fixing, and documentation. Modern application security posture management (ASPM) platforms exemplify this approach by providing contextualized risk assessments rather than overwhelming vulnerability dumps, helping security teams first understand which issues actually matter and then giving developers actionable info on the risk and how it should be fixed. These platforms excel at managing the volume and unpredictability of AI-generated code, turning what was once a blind spot into manageable, prioritized work. ... Technology alone isn't enough. Organizations must also prioritize developer growth by creating opportunities for experimentation, architectural decisions, and end-to-end project ownership while automation handles routine tasks. This means shifting from measuring output volume to focusing on meaningful metrics like code quality and developer satisfaction. AI represents an opportunity for developers to gain expertise in an emerging technology.  ... The developer talent crisis is solvable. While AI has introduced new complexities to the software development and security landscape, it also presents unprecedented opportunities for organizations willing to rethink how they support their development teams.


The CIO’s Role In Data Democracy: Empowering Teams Without Losing Control

The modern CIO is at a point where they can choose between innovation and control. In the past, IT departments were thought of as people who took care of infrastructure and enforced strict regulations about who could access data. The CIO needs to reassess this way of doing things today. They shouldn’t prohibit access; instead, they should make it safe by building frameworks. The job has changed from saying “no” to making sure that when the company says “yes,” it does it smartly. The CIO is now both an architect and a guardian. They create systems that make data easy to get to, understand, and act on, all while keeping security and compliance in mind. ... The CIO is no longer a gatekeeper; they are instead a designer of trust. The goal is to make governance a part of systems such that it is seamless, automatic, and easy to use. This change lets companies keep an eye on things and stay in control without making decisions take longer. Unified data taxonomies are the first step in building this framework. This means that all departments use the same naming standards and definitions. When everyone uses the same “data language,” there is less confusion and more cooperation. ... Effective governance demands collaboration between IT, compliance, and business leaders. The CIO must champion cross-functional alignment where all parties share responsibility for data integrity and use.


What keeps phishing training from fading over time

Employees who want to be helpful or appear responsive can become easier targets than those reacting to fear or haste. For CISOs, this reinforces the need to teach users about manipulation through trust and cooperation, not just the warning signs of urgent or threatening messages. ... Dubniczky said maintaining employee engagement over time is a major challenge for most organizations. “In contrast with other research in the area, a key contribution of ours was a mandatory training after each failed phishing attack,” he explained. “This strikes a good balance between not needlessly bothering careful employees with monthly or quarterly trainings while making sure that the highest risk individuals are constantly trained.” He recommended that organizations vary their phishing simulations to keep users alert. “We’d recommend performing monthly penetration tests on smaller groups of people in diverse departments of the organization with a seemingly random pattern, and making re-training mandatory in case of successful attacks,” he said. “It’s also difficult to generalize on this, but this approach seems much more effective than periodic presentation-style trainings.” ... One of the most striking findings involves the timing of feedback. When employees clicked a phishing link and then received an immediate explanation and training prompt, they were far less likely to repeat the behavior. Around seven in ten employees who failed once did not do so again.
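The sampling-plus-retraining scheme Dubniczky describes, in which small random groups are tested each month and anyone who fails is automatically enrolled again, can be sketched as follows. The function and data shapes are illustrative assumptions:

```python
import random

def monthly_campaign(employees_by_dept, sample_per_dept, failed_last_time, seed=None):
    """Pick a seemingly random cross-department sample for this month's
    phishing simulation. Employees who failed the previous round are always
    included, which makes retraining mandatory for the highest-risk group
    without repeatedly bothering careful employees."""
    rng = random.Random(seed)
    targets = set(failed_last_time)
    for members in employees_by_dept.values():
        pool = [m for m in members if m not in targets]
        targets.update(rng.sample(pool, min(sample_per_dept, len(pool))))
    return sorted(targets)

staff = {"engineering": ["ana", "ben", "chi"], "sales": ["dev", "eli"]}
print(monthly_campaign(staff, sample_per_dept=1, failed_last_time=["ben"], seed=7))
```

Pairing this with immediate feedback after a failed click, per the study's finding on timing, is what drives the roughly seven-in-ten non-repeat rate the article cites.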


The new QA playbook: Leveraging AI to amplify expertise, not replace it

Many quality teams have been part of the AI journey from the very beginning, contributing from concept to implementation and helping evaluate large language models to ensure quality and reliability. However, many AI features are not developed by QA practitioners, so it is essential to evaluate them through a QA lens. First, ensure the system can produce what your teams actually use, whether that is step lists, BDD-style scenarios, or free text that fits your templates and automation. Next, map the full data journey. Know whether prompts or results are kept, how encryption and minimization are applied, and where any content is stored. Finally, require fine-grained controls so you can limit usage by environment, project, and role. Regulated teams require an audit trail and clear accountability, which means governance must keep pace with adoption, or speed will outpace safety. Once review-first habits are in place, build on them. True oversight requires more than simply checking AI outputs; it demands deeper knowledge and understanding than the AI itself to spot gaps, inaccuracies, or misleading information. That’s what separates a passive reviewer from an effective human in the loop. ... Real gains from AI will not come from automation alone but from people who know how to guide it with clarity, context, and care. The future of testing depends on professionals who can combine technical fluency with critical thinking, ethical judgment, and a sense of ownership over quality.


Your outage costs more than you think – so design with resilience in mind

Service providers are under strain to deliver the rapid speeds and constant network uptime that modern life demands, with areas like remote working, financial transactions, cloud access and streaming services expected to work seamlessly as part of the daily lives of many end users. For many enterprises, their business depends on this connectivity. Even a single hour of network disruption can cost an organisation more than $300,000, and the long-term damage to customer trust often exceeds any immediate financial loss. Despite this, many organisations still rely on outdated infrastructure that cannot support the requirements of today’s end users. Legacy environments struggle with explosive data growth, the soaring demands of AI, and the complexity of distributed, cloud-first applications. At the same time, power limitations, infrastructure strain and inconsistent service levels put businesses at risk of falling behind. The gap between what service providers and enterprises need, and what their infrastructure can deliver, is widening. ... For service providers, investing in robust colocation and high-performance networking is not just about upgrading infrastructure, but enabling customers and partners worldwide to thrive in today’s fast-paced digital landscape. By offering resilient and scalable connectivity, providers can differentiate their service offering, attract high-value enterprise clients, and create new revenue streams based on reliability and performance.

Daily Tech Digest - September 05, 2025


Quote for the day:

"Little minds are tamed and subdued by misfortune; but great minds rise above it." -- Washington Irving


Understanding Context Engineering: Principles, Practices, and Its Distinction from Prompt Engineering

Context engineering is the strategic design, management, and delivery of relevant information—or “context”—to AI systems in order to guide, constrain, or enhance their behavior. Unlike prompt engineering, which primarily focuses on crafting effective input prompts to direct model outputs, context engineering involves curating, structuring, and governing the broader pool of information that surrounds and informs the AI’s decision-making process. In practice, context engineering requires an understanding of not only what the AI should know at a given moment but also how information should be prioritized, retrieved, and presented. It encompasses everything from assembling relevant documents and dialogue history to establishing policies for data inclusion and exclusion. ...  While there is some overlap between the two domains, context engineering and prompt engineering serve distinct purposes and employ different methodologies. Prompt engineering is concerned with the formulation of the specific text—the “prompt”—that is provided to the model as an immediate input. It is about phrasing questions, instructions, or commands in a way that elicits the desired behavior or output from the AI. Successful prompt engineering involves experimenting with wording, structure, and sometimes even formatting to maximize the performance of the language model on a given task.
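A minimal sketch of what this means in code: context engineering is the step that selects, prioritizes, and orders context sources under a budget before any prompt is sent. The scoring, budget, and whitespace-based token count below are simplified stand-ins for a real retrieval pipeline and tokenizer:

```python
def assemble_context(system_policy, retrieved_docs, history, user_prompt, budget=500):
    """Build a model input from prioritized context sources under a token budget."""
    def tokens(text):
        return len(text.split())  # crude proxy for a real tokenizer

    context = [system_policy]
    used = tokens(system_policy) + tokens(user_prompt)  # always reserved
    # Highest-relevance documents first...
    for doc in sorted(retrieved_docs, key=lambda d: d["score"], reverse=True):
        if used + tokens(doc["text"]) > budget:
            continue  # skip anything that would blow the budget
        context.append(doc["text"])
        used += tokens(doc["text"])
    # ...then only the most recent dialogue turns.
    for turn in history[-3:]:
        if used + tokens(turn) <= budget:
            context.append(turn)
            used += tokens(turn)
    context.append(user_prompt)
    return "\n\n".join(context)
```

Note that the user prompt is the last and smallest piece: the prompt-engineering question of how to phrase it is separate from the context-engineering question of what surrounds it, which is exactly the distinction the article draws.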


How AI and Blockchain Are Transforming Tenant Verification in India

While artificial intelligence provides both intelligence and speed, Blockchain technology provides the essential foundation of trust and security. Blockchain functions as a permanent digital record – meaning that once information is set, it can’t be changed or deleted by third parties. This feature is particularly groundbreaking for ensuring a safe and clear rental history. Picture this: the rental payments and lease contracts of your tenants could all be documented as ‘smart contracts’ using Blockchain technology. ... The combination of AI and Blockchain signifies a groundbreaking transformation, enabling tenants to create ‘self-sovereign identities’ on the Blockchain — digital wallets that hold their verified credentials, which they fully control. When searching for rental properties, tenants can conveniently provide prospective landlords with access to certain details about themselves, such as their history of timely payments and police records. AI leverages secure and authentic Blockchain data to produce an immediate risk score for landlords to assess, ensuring a quick and reliable evaluation. This cohesive approach guarantees that AI outcomes are both rapid and trustworthy, while the decentralized nature of Blockchain safeguards tenant privacy by removing the necessity for central databases that may become susceptible over time.
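The tamper-evidence property described here can be illustrated without any blockchain infrastructure: each record commits to the hash of the previous one, so altering an old entry invalidates every later link. This sketch, assuming SHA-256 hashing over JSON-encoded records, deliberately omits the signatures and distributed consensus a real deployment would require:

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a payload (e.g. a rent payment) to a hash-linked ledger."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; any edit to an old record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"payload": entry["payload"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_record(ledger, {"tenant": "A", "rent": 1200, "month": "2026-01"})
add_record(ledger, {"tenant": "A", "rent": 1200, "month": "2026-02"})
print(verify(ledger))
```

This is the mechanism that lets an AI risk score trust old payment records: an edited entry is not hidden, it is detectable, because its hash no longer matches the link the next record stored.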


Adversarial AI is coming for your applications

New research from Cato Networks' threat intelligence report reveals how threat actors can use a large language model jailbreak technique, known as an immersive world attack, to get AI to create infostealer malware for them: a threat intelligence researcher with no malware coding experience managed to jailbreak multiple large language models and get the AI to create a fully functional, highly dangerous password infostealer designed to compromise sensitive information from the Google Chrome web browser. The end result was malicious code that successfully extracted credentials from the Google Chrome password manager. Companies that create LLMs are trying to put up guardrails, but GenAI clearly makes malware creation that much easier. AI-generated malware, including polymorphic malware, renders signature-based detection nearly obsolete. Enterprises must be prepared to protect against hundreds, if not thousands, of malware variants. ... Enterprises can increase their protection by embedding security directly into applications at the build stage. This involves investing in embedded security that is mapped to OWASP controls, such as RASP, advanced white-box cryptography, and granular threat intelligence. IDC research shows that organizations protecting mobile apps often lack a solution to test them efficiently and effectively.


Top Pitfalls to Avoid When Responding to Cyber Disaster

Moving too quickly following an attack can also prompt staff to respond to an intrusion without first fully understanding the type of ransomware that was used. Not all ransomware is created equal, and knowing whether you were a victim of locker ransomware, double extortion, ransomware-as-a-service, or another kind of attack can make all the difference in how to respond, because the attacker's goal is different for each. ... The first couple of hours after a ransomware incident is identified are critical. In those immediate hours, work quickly to identify and isolate affected systems, and disconnect compromised devices from the network to prevent the ransomware from spreading further. Preserve forensic evidence as you go, such as screenshots and relevant logs, to inform future law enforcement investigations or legal action. Once that has been done, notify key stakeholders and the cyber insurance provider. ... After the dust settles, analyze how the attack was able to occur and put fixes in place to keep it from happening again. Identify the initial access point and method, and map how the threat actor moved through the network. What barriers were they able to move past, and which held them back? Are there areas where more segmentation is needed to reduce the attack surface? Do any security workflows or policies need to be modified?


How to reclaim control over your online shopping data

“While companies often admit to sharing user data with third parties, it’s nearly impossible to track every recipient. That lack of control creates real vulnerabilities in data privacy management. Very few organizations thoroughly vet their third-party data-sharing practices, which raises accountability concerns and increases the risk of breaches,” said Ian Cohen, CEO of LOKKER. The criminal marketplace for stolen data has exploded in recent years. In 2024, over 6.8 million accounts were listed for sale, and by early 2025, nearly 2.5 million stolen accounts were available at one point. ... Even limited purchase information can prove valuable to criminals. A breach exposing high-value transactions, for example, may suggest a buyer’s financial status or lifestyle. When combined with leaked addresses, that data can help criminals identify and target individuals more precisely, whether for fraud, identity theft, or even physical theft. ... One key mechanism is the right to be forgotten, a legal principle allowing individuals to request the removal of their personal data from online platforms. The European Union’s GDPR is the strongest example of this principle in action. While not as comprehensive as the GDPR, the US has some privacy protections, such as the California Consumer Privacy Act (CCPA), which allows residents to access or delete their personal data.


Mind the Gap: Agentic AI and the Risks of Autonomy

The ink is barely dry on generative AI and AI agents, and now we have a new next big thing: agentic AI. Sounds impressive. By the time this article comes out, there’s a good chance that agentic AI will be in the rear-view mirror and we’ll all be chasing after the next new big thing. Anyone for autonomous generative agentic AI agent bots? ... Some things seem obviously irresponsible on the surface, but for some organizations, handing autonomy to agentic AI apparently is not one of them. Debugging large language models, AI agents, and agentic AI, as well as implementing guardrails, are topics for another time, but it’s important to recognize that companies are handing over those car keys. Willingly. Enthusiastically. Would you put that eighth grader in charge of your marketing department? Of autonomously creating collateral that goes out to your customers without checking it first? Of course not. ... We want AI agents and agentic AI to make decisions, but we must be intentional about the decisions they are allowed to make. What are the stakes personally, professionally, or for the organization? What is the potential liability when something goes wrong? And something will go wrong. Something that you never considered going wrong will go wrong. And maybe think about the importance of the training data. Isn’t that what we say when an actual person does something wrong? “They weren’t adequately trained.” Same thing here.
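Being intentional about which decisions an agent is allowed to make can start with something as simple as an explicit policy gate in front of every proposed action. The sketch below is a toy illustration — the action names and the policy itself are invented, and a real deployment would add auditing, authentication, and scoped permissions:

```python
# Toy guardrail: an agent may propose any action, but only actions on an
# explicit allowlist execute autonomously; sensitive actions are routed
# to human review, and anything unrecognized is rejected (default-deny).
# Action names and the policy are illustrative, not from any product.

ALLOWED_AUTONOMOUS = {"draft_email", "summarize_report"}
REQUIRES_HUMAN = {"send_customer_email", "publish_collateral", "approve_payment"}

def gate(action: str) -> str:
    """Return how a proposed agent action should be handled."""
    if action in ALLOWED_AUTONOMOUS:
        return "execute"
    if action in REQUIRES_HUMAN:
        return "human_review"
    return "reject"  # default-deny anything not explicitly classified

decisions = {a: gate(a) for a in
             ["draft_email", "publish_collateral", "delete_database"]}
```

The key design choice is default-deny: the eighth grader can draft the collateral, but publishing it to customers still requires a human signature.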


How software engineers and team leaders can excel with artificial intelligence

As long as software development and AI designers continue to fall prey to the substitution myth, we’ll continue to develop systems and tools that, instead of making humans’ lives easier or better, require unexpected new skills and interventions from humans that weren’t factored into the system or tool design. ... Software development covers a lot of ground: understanding requirements, architecting, designing, coding, writing tests, code review, debugging, building new skills and knowledge, and more. AI has now reached a point where it can automate or speed up almost every part of the process. This is an exciting time to be a builder. A lot of the routine, repetitive, and frankly boring parts of the job, the "cognitive grunt work", can now be handled by AI. Developers especially appreciate the help in areas like generating test cases, reviewing code, and writing documentation. When those tasks are off our plate, we can spend more time on the things that really add value: solving complex problems, designing great systems, thinking strategically, and growing our skills. ... The elephant in the room is the question: "Will AI take over my job one day?" Until this year, I always thought no, but recent technological advancements and new product offerings in this space are beginning to change my mind. The reality is that we should be prepared for AI to change the software development role as we know it.


6 browser-based attacks all security teams should be ready for in 2025

Phishing tooling and infrastructure have evolved a lot in the past decade, while changes to business IT mean there are both many more vectors for phishing attack delivery and many more apps and identities to target. Attackers can deliver links over instant messenger apps, social media, SMS, malicious ads, and in-app messenger functionality, as well as sending emails directly from SaaS services to bypass email-based checks. Likewise, there are now hundreds of apps per enterprise to target, with varying levels of account security configuration. ... Like modern credential and session phishing, links to malicious pages are distributed over various delivery channels and using a variety of lures, including impersonating CAPTCHA or Cloudflare Turnstile, simulating an error loading a webpage, and many more. The variance in lures, and the differences between versions of the same lure, can make it difficult to fingerprint and detect based on visual elements alone. ... Preventing malicious OAuth grants from being authorized requires tight in-app management of user permissions and tenant security settings. This is no mean feat considering the hundreds of apps in use across the modern enterprise, many of which are not centrally managed by IT and security teams.


JSON Config File Leaks Azure Active Directory Credentials

"The critical risk lies in the fact that this file was publicly accessible over the Internet," according to the post. "This means anyone — from opportunistic bots to advanced threat actors — could harvest the credentials and immediately leverage them for cloud account compromise, data theft, or further intrusion." ... To exploit the flaw, an attacker can first use the leaked ClientId and ClientSecret to authenticate against Azure AD using the OAuth2 Client Credentials flow to acquire an access token. Once this is acquired, the attacker then can send a GET request to the Microsoft Graph API to enumerate users within the tenant. This allows them to collect usernames and emails; build a list for password spraying or phishing; and/or identify naming conventions and internal accounts, according to the post. The attacker also can query the Microsoft Graph API to enumerate OAuth2 permission grants within the tenant, revealing which applications have been authorized and what scopes, or permissions, they hold. Finally, the acquired token allows an attacker to use group information to identify privilege clusters and business-critical teams, thus exposing organizational structure and identifying key targets for compromise, according to the post. ... "What appears to be a harmless JSON configuration file can in reality act as a master key to an organization’s cloud kingdom," according to the post.
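A first line of defense against the exposure described here is scanning configuration files for credential-shaped keys before they are ever published. The sketch below is a minimal illustration only — the key names are common Azure AD configuration conventions, and real secret scanners also use entropy analysis, known-token patterns, and pre-commit hooks:

```python
import json
import re

# Key names that commonly hold secrets in app configs; illustrative, not exhaustive.
SENSITIVE_KEY_PATTERN = re.compile(
    r"(clientsecret|client_secret|password|apikey|api_key|connectionstring)",
    re.IGNORECASE,
)

def find_exposed_secrets(config_text: str) -> list[str]:
    """Return dotted paths of config keys whose names look credential-bearing."""
    def walk(obj, path=""):
        hits = []
        if isinstance(obj, dict):
            for key, value in obj.items():
                full = f"{path}.{key}" if path else key
                if SENSITIVE_KEY_PATTERN.search(key) and isinstance(value, str) and value:
                    hits.append(full)
                hits.extend(walk(value, full))
        elif isinstance(obj, list):
            for i, value in enumerate(obj):
                hits.extend(walk(value, f"{path}[{i}]"))
        return hits
    return walk(json.loads(config_text))

# A config shaped like the leak described in the article: TenantId and
# ClientId are identifiers, but a populated ClientSecret should never ship.
leaked = find_exposed_secrets(
    '{"AzureAd": {"TenantId": "contoso", "ClientId": "abc",'
    ' "ClientSecret": "s3cret-value"}}'
)
```

Catching a populated `ClientSecret` before deployment closes the door on the entire attack chain the article describes, since the OAuth2 client credentials flow and the Graph API enumeration that follow all depend on that one leaked value.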


Data centers are key to decarbonizing tech’s AI-fuelled supply chain

Data center owners and operators are uniquely positioned to step up and play a larger, more proactive role in this by pushing back on tech manufacturers in terms of the patchy emissions data they provide, while also facilitating sustainable circular IT product lifecycle management/disposal solutions for their users and customers. ... The hard truth, however, is that any data center striving to meet its own decarbonization goals and obligations cannot do so singlehandedly. It’s largely beholden to the supply chain stakeholders upstream. At the same time, their customers/users tend to accept ever shortening usage periods as the norm. Often, they overlook the benefits of achieving greater product longevity and optimal cost of ownership through the implementation of product maintenance, refurbishment, and reuse programmes. ... As a focal point for the enablement of the digital economy, data centers are ideally placed to take a much more active role: by lobbying manufacturers, educating users and customers about the necessity and benefits of changing conventional linear practices in favour of circular IT lifecycle management and recycling solutions. Such an approach will not only help decarbonize data centers themselves but the entire tech industry supply chain – by reducing emissions.