Showing posts with label incident response. Show all posts
Showing posts with label incident response. Show all posts

Daily Tech Digest - March 23, 2026


Quote for the day:

"Successful leaders see the opportunities in every difficulty rather than the difficulty in every opportunity" -- Reed Markham


🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 23 mins • Perfect for listening on the go.


Testing autonomous agents (Or: how I learned to stop worrying and embrace chaos)

The VentureBeat article "Testing autonomous agents (Or: how I learned to stop worrying and embrace chaos)" explores the critical shift from simple chatbots to autonomous AI agents that function more like independent employees. As agents gain the power to execute actions without human confirmation, the authors argue that "plausible" reasoning is no longer sufficient; systems must instead be engineered for graceful failure and absolute reliability. To achieve this, a four-layered architecture is proposed: high-quality model selection, deterministic guardrails using traditional validation logic, confidence quantification to identify ambiguity, and comprehensive observability for auditing reasoning chains. Reliability is further reinforced by defining clear permission, semantic, and operational boundaries to limit the "blast radius" of potential errors. The article emphasizes that traditional software testing is inadequate for probabilistic systems, advocating instead for simulation environments, red teaming, and "shadow mode" deployments where agents’ decisions are compared against human actions. Ultimately, building enterprise-grade autonomy requires a risk-based investment in safeguards and a rethink of organizational accountability, ensuring that human-in-the-loop patterns remain a central safety mechanism as these systems navigate the complex, often unpredictable reality of production environments.


NIST updates its DNS security guidance for the first time in over a decade

NIST has released Special Publication 800-81r3, the Secure Domain Name System Deployment Guide, marking its first significant update to DNS security standards in over twelve years. This comprehensive revision addresses the modern threat landscape by focusing on three critical pillars: utilizing DNS as an active security control, securing protocols, and hardening infrastructure. A central theme is the implementation of protective DNS (PDNS), which empowers organizations to analyze queries and block access to malicious domains proactively. The guide provides technical advice on deploying encrypted DNS protocols like DNS over TLS, HTTPS, and QUIC to ensure data privacy and integrity. Furthermore, it modernizes DNSSEC recommendations by favoring efficient cryptographic algorithms like ECDSA and Edwards-curve over legacy RSA methods. Organizational hygiene is also prioritized, with strategies to mitigate risks like dangling CNAME records and lame delegations that lead to domain hijacking. By advocating for the separation of authoritative and recursive functions and geographic dispersal, NIST aims to bolster the resilience of network connections. This updated framework serves as an essential roadmap for cybersecurity leaders and technical teams tasked with maintaining secure, future-proof DNS environments in an increasingly complex digital ecosystem.


The insider threat rises again

The article "The Insider Threat Rises Again" examines the escalating risks posed by internal actors in modern organizations. Driven by evolving technologies and shifting work dynamics, insider incidents have become increasingly frequent and costly, with 42% of organizations reporting a rise in both malicious and negligent cases over the past year. The financial impact is staggering, averaging $13.1 million per incident. Today's threat landscape is multifaceted, encompassing deliberate sabotage, inadvertent errors, and the emergence of "coerced insiders" targeted via social media or the dark web. Remote work has exacerbated these risks by lowering psychological barriers to data exfiltration, while AI enables data theft at an unprecedented scale. Furthermore, the article highlights sophisticated tactics like North Korean operatives posing as fake IT workers to gain persistent network access. To combat these threats, experts argue that traditional perimeter security is no longer sufficient. Organizations must instead adopt adaptive controls that monitor high-risk actions in real-time and create friction at the point of data access. Moving beyond managing human behavior, effective security now requires meeting users at the point of risk to identify and block suspicious activity regardless of the actor's credentials.


25 Years of the Agile Manifesto, and the End of the Road for AppSec?

In the article "25 Years of the Agile Manifesto and the End of the Road for AppSec," the author reflects on how the evolution of software development has rendered traditional Application Security (AppSec) models obsolete. Since the inception of the Agile Manifesto, the industry has shifted from slow, monolithic release cycles to rapid, continuous delivery. The core argument is that conventional AppSec—often characterized by "gatekeeping," manual reviews, and siloed security teams—cannot keep pace with the velocity of modern DevOps. This friction creates a bottleneck that developers frequently bypass to meet deadlines, ultimately compromising security. The piece suggests that we have reached the "end of the road" for security as a separate, reactionary phase. Instead, the future lies in "shifting left" and "shifting everywhere," where security is fully integrated into the CI/CD pipeline through automation and developer-centric tools. By empowering developers to take ownership of security within their existing workflows, organizations can achieve the speed promised by Agile without sacrificing safety. Ultimately, the article calls for a cultural and technical transformation where AppSec evolves from a final checkpoint into an invisible, continuous component of the software development lifecycle, ensuring resilience in an increasingly fast-paced digital landscape.


The era of cheap technology could be over

The article suggests that the long-standing era of affordable consumer and enterprise technology is drawing to a close, primarily driven by an unprecedented global shortage of critical hardware components. This shift is largely attributed to the explosive growth of artificial intelligence, which has created an insatiable demand for high-performance processors, memory, and solid-state storage. Manufacturers are increasingly prioritizing high-margin AI-specific hardware over commodity components used in PCs, smartphones, and servers, leading to significant price hikes. Market analysts predict a dramatic surge in DRAM and SSD prices, with some estimates suggesting a 130% increase by the end of the year. Consequently, shipments for personal computers and mobile devices are expected to decline as manufacturing costs become prohibitive. Beyond the AI boom, the crisis is exacerbated by post-pandemic market cycles and geopolitical tensions that continue to destabilize global supply chains. To navigate this new landscape, IT leaders are being forced to rethink procurement strategies, opting for data cleansing, tiered storage solutions, and extending the lifecycle of existing hardware. Ultimately, while these shortages strain budgets, they may encourage more disciplined data management practices as businesses adapt to a more expensive technological environment.


The AI era of incident response: What autonomous operations mean for enterprise IT

The article explores the transformative shift in enterprise IT as it moves toward an era of autonomous operations driven by artificial intelligence. Traditionally, incident response has been a reactive, manual process, leaving IT teams overwhelmed by a constant deluge of alerts and complex troubleshooting tasks. However, as modern environments grow increasingly intricate across cloud and hybrid infrastructures, manual intervention is no longer sustainable. The author argues that AI and machine learning are revolutionizing this landscape by enabling proactive monitoring and automated remediation. These AIOps tools can analyze massive datasets in real-time to identify patterns, pinpoint root causes, and resolve issues before they escalate into significant outages. This transition significantly reduces the Mean Time to Repair (MTTR) and shifts the focus of IT staff from constant firefighting to higher-value strategic initiatives. While human oversight remains essential, the role of IT professionals is evolving into one of managing intelligent systems rather than performing repetitive manual labor. Ultimately, embracing autonomous operations allows organizations to achieve greater system reliability, operational efficiency, and a superior developer experience, marking a definitive end to the limitations of legacy incident management frameworks.


Securing Automation: Why the Specification Stage Is the Right Time to Embed OT Cybersecurity

Manufacturers today are rapidly adopting automation to meet rising demand, yet a significant gap remains in cybersecurity investment, often leaving operational technology (OT) vulnerable. This article argues that the most effective remedy is to embed security requirements directly into the initial specification phase of projects. By integrating specific, testable criteria into Requests for Proposals (RFPs), security becomes a contractually enforceable deliverable rather than a costly afterthought. Effective requirements must adhere to six key attributes: they should be achievable, unambiguous, concise, complete, singular, and verifiable. This structured approach allows for rigorous validation during Factory Acceptance Testing (FAT) and Site Acceptance Testing (SAT), ensuring systems are hardened before they go live. Beyond technical specifications, the author emphasizes a holistic strategy encompassing people and processes, such as developing OT-specific security policies and conducting regular incident-response drills. Resilience is also highlighted through the implementation of immutable backups and "safe-state" logic to maintain production during disruptions. Ultimately, establishing an OT governance board ensures that security remains a continuous, executive-level priority, safeguarding automation investments while maintaining the speed and efficiency essential for modern industrial competitiveness.


The Illusion of Managed Data Products

In "The Illusion of Managed Data Products," Dr. Jarkko Moilanen explores the critical gap between perceiving data as a managed asset and the operational reality of true control. He argues that many organizations mistake visibility—achieved through data catalogs and dashboards—for actual management. While these tools identify existing products and track performance, they often fail to trigger meaningful action when issues arise. This creates an illusion of order where structure and metadata exist, but ownership remains static and metrics lack consequences. Moilanen identifies "diffusion of responsibility" and "latency" as key barriers, where signals are observed but not systematically tied to accountability or execution. To overcome this, the author advocates for a shift from mere observation to an active operating model. This involves creating a closed loop where every signal leads to a defined owner, a triggered action, and subsequent verification. By integrating business outcomes with governance and leveraging AI to bridge the gap between detection and response, organizations can move beyond descriptive catalogs toward a system of coordinated execution. Ultimately, managing data products requires more than just better visualization; it demands a structural transformation that prioritizes responsiveness and ensures that every data insight results in tangible business momentum.


Resilience by Design: How Axis Bank is redefining cybersecurity for the AI-driven banking era

The article titled "Resilience by Design: How Axis Bank is redefining cybersecurity for the AI-driven banking era" features Vinay Tiwari, CISO of Axis Bank, and his vision for securing modern financial services. As banking transitions into an AI-driven landscape, Tiwari emphasizes "resilience by design," a strategy that integrates security into the core of every digital initiative rather than treating it as an afterthought. The bank’s approach is anchored by three critical domains: robust cyber risk governance, secured data architecture, and continuous threat analysis. A central pillar of this transformation is the implementation of Zero Trust Architecture, which replaces implicit trust with continuous verification across all network interactions. Furthermore, Axis Bank leverages advanced AI/ML-powered threat intelligence and automated security operations to detect anomalies and mitigate risks proactively. Beyond technology, Tiwari stresses that true resilience stems from a human-centered culture. By launching comprehensive awareness programs, the bank empowers employees to recognize social engineering and phishing threats. Ultimately, this multifaceted strategy—combining hybrid-cloud protection, preemptive defense, and unified compliance—aims to build digital trust. This ensures that as Axis Bank scales, its security posture remains robust enough to counter the evolving complexities of the modern cyber threat landscape.


Why Data Governance Keeps Falling Short and 6 Actions to Fix It

In this article, Malcolm Hawker explores why data governance initiatives often fail to deliver their promised value, attributing the shortfall to a combination of human, cultural, and organizational barriers. A primary issue is the conceptual misunderstanding where leadership views data governance as a technical IT responsibility rather than a fundamental enterprise capability. This results in an overreliance on technology and a lack of genuine executive engagement beyond mere "buy-in." Furthermore, many organizations struggle to quantify the business benefits of governance, leading it to be perceived as a cost center rather than a value generator. To overcome these obstacles, Hawker proposes six strategic actions aimed at realigning governance with business goals. These include educating leadership to foster a data-driven culture, documenting clear business value, and acknowledging that governance is a cross-functional business issue rather than an IT problem. Additionally, he emphasizes the need to define the true value of data, cover the entire data supply chain, and integrate governance more closely with core business operations. By shifting focus from technological tools to people, leadership, and value quantification, organizations can transform data governance from a stagnant administrative burden into a dynamic driver of competitive advantage and regulatory compliance.

Daily Tech Digest - March 14, 2026


Quote for the day:

"Leadership is practices not so much in words as in attitude and in actions." -- Harold Geneen


🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 22 mins • Perfect for listening on the go.


Tech nationalism is reshaping CIO infrastructure strategy

The article "Tech Nationalism is Reshaping CIO Infrastructure Strategy" explores how rising geopolitical tensions and stringent data sovereignty laws are forcing IT leaders to dismantle traditional "borderless" cloud deployments. This shift, driven by nations prioritizing domestic technology control and national security, requires CIOs to navigate a fragmented digital landscape where regional mandates dictate exactly where workloads can reside. Consequently, infrastructure strategy is moving away from centralized global platforms toward distributed, localized architectures that leverage "sovereign cloud" solutions. These sovereign models allow organizations to maintain strict local control over their data while still benefiting from cloud scalability, effectively bridging the gap between operational efficiency and legal compliance. Beyond meeting regulatory requirements like GDPR, this trend addresses critical supply chain vulnerabilities and minimizes the risk of being caught in trade disputes or international sanctions. For modern technology executives, the challenge lies in balancing the cost benefits of global standardization with the necessity of national alignment and data protection. Ultimately, success in this polarized era requires a "sovereign-first" mindset, transforming IT infrastructure into a vital component of geopolitical risk management. As digital borders tighten, CIOs must prioritize regional agility and resilience over simple centralization to ensure their organizations remain both secure and globally competitive.


How leaders can give tough feedback without damaging trust

In the People Matters article, HR leader Ritu Anand highlights that modern performance discussions are increasingly complex, requiring leaders to balance radical candor with deep empathy to maintain organizational trust. The shift from backward-looking evaluations to future-oriented direction means feedback must be developmental, continuous, and grounded in objective data rather than subjective perceptions. Anand argues that many managers suffer from "nice person" syndrome, delaying difficult conversations to avoid emotional friction; however, this avoidance ultimately undermines alignment. To deliver effective "tough" feedback without damaging professional relationships, leaders must separate individual empathy from performance accountability, focusing strictly on observable behaviors and their impacts rather than personal traits. Furthermore, the dialogue should be tailored to an employee's career stage—offering supportive direction for early-career associates and strategic influence coaching for senior professionals. Trust serves as the vital foundation for these interactions; if a leader is consistently fair and genuinely invested in an employee's success, even corrective feedback is received constructively. Ultimately, the quality of these conversations reflects leadership maturity, necessitating a cultural shift toward real-time, purposeful dialogue that prioritizes human respect alongside high standards of performance output and accountability.


Account Recovery Becomes a Major Source of Workforce Identity Breaches

In the article "Account Recovery Becomes a Major Source of Workforce Identity Breaches" on TechNewsWorld, Mike Engle explains how traditional security measures are being bypassed through structurally weak account recovery workflows. While many organizations have successfully hardened initial login procedures with multi-factor authentication and phishing-resistant controls, attackers have shifted their focus to the "backdoor" of password resets and MFA re-enrollment. These recovery paths, often managed by under-pressure help desk personnel, rely on human judgment and low-friction processes that are easily exploited through sophisticated social engineering and AI-assisted impersonation. High-profile breaches in 2025 involving major retailers demonstrate that even policy-compliant accounts are vulnerable if the identity re-establishment process is compromised. The core issue is that identity assurance is often treated as disposable after onboarding, leading to the use of weaker signals during recovery. Engle argues that for organizations to truly secure their workforce, they must move away from relying on static knowledge or human intuition at the service desk. Instead, they need to implement verifiable identity evidence that can be reasserted during recovery events, treating resets as high-risk activities rather than routine administrative tasks. This shift is essential to prevent attackers from circumventing strong authentication without ever needing to confront it directly.


The Oil and Water Moment in AI Architecture

The article "The Oil and Water Moment in AI Architecture" by Shweta Vohra explores the fundamental tension emerging as deterministic software systems are forced to integrate with non-deterministic artificial intelligence. This "oil and water" moment signifies a paradigm shift where traditional architectural assumptions of predictable, procedural execution are challenged by probabilistic outputs and dynamic agentic behaviors. Vohra argues that standard guardrails, such as static input validation or fixed API contracts, are insufficient for AI-enabled systems where agents may synthesize context or chain tools in unforeseen sequences. Consequently, the role of the architect is evolving from managing explicit code paths to orchestrating intent under non-determinism. To navigate this complexity, the author introduces the "Architect’s V-Impact Canvas," a structured framework comprising three critical layers: Architectural Intent, Design Governance, and Impact and Value. These layers encourage architects to anchor systems in clear principles, manage the trade-offs of agent autonomy, and ensure measurable business outcomes. Ultimately, the article emphasizes that while models and tools will continue to improve, the enduring responsibility of the architect remains the preservation of human trust and system integrity. By prioritizing systems thinking and explicit intent, practitioners can transform technical ambiguity into organizational clarity in an increasingly probabilistic digital landscape.


The AI coding hangover

n the article "The AI Coding Hangover" on InfoWorld, David Linthicum explores the sobering reality facing enterprises that rushed to replace developers with Large Language Models (LLMs). While the initial pitch—that AI could generate code faster and cheaper than humans—led to widespread boardroom excitement, the "morning after" has revealed a landscape of brittle systems and unpriced technical debt. Linthicum argues that treating AI as a replacement for engineering judgment rather than an amplifier has resulted in bloated, inefficient, and often unmaintainable codebases. This "hangover" manifests as skyrocketing cloud bills, security vulnerabilities, and logic sprawl that no human author truly understands or can easily fix. The lack of shared memory and consistent rationale in AI-generated systems makes operational maintenance and refactoring a specialized, costly form of "technical surgery." Ultimately, the article warns that the illusion of speed is being paid for with long-term instability and operational drag. To recover, organizations must pivot toward pairing developers with AI tools under a framework of rigorous platform discipline, prioritizing human-led architectural integrity and operational excellence over the sheer quantity of automated output. Success in the AI era requires treating models as power tools, not autonomous employees, ensuring software remains stewarded rather than just produced.


Hybrid resilience: Designing incident response across on-prem, cloud and SaaS without losing your mind

The article "Hybrid Resilience: Designing incident response across on-prem, cloud, and SaaS without losing your mind" on CSO Online addresses the inherent fragility of fragmented digital environments. Author Shalini Sudarsan argues that hybrid incident response often fails at the "seams" between different ownership models, where on-premises, cloud, and SaaS teams operate in silos. To overcome this, organizations must move beyond an obsession with tool consolidation and instead prioritize "seam management" through a unified incident contract. This contract enforces a shared language, a single incident commander, and one coordinated timeline to prevent parallel war rooms and conflicting narratives during a crisis. The piece outlines three foundational pillars for resilience: portable telemetry, unified signaling, and engineered escalation. By focusing on end-to-end user journey metrics rather than individual component health, teams can cut through domain bias and identify the shared failure point. Furthermore, the article suggests standardizing correlation IDs and maintaining a centralized change table to bridge the visibility gap between disparate stacks. Finally, resilience is bolstered by documenting "time-to-human" targets and escalation cards for critical vendors, ensuring that decision-making remains predictable under pressure. By aligning these signals and protocols before an outage occurs, security leaders can maintain operational sanity and ensure rapid recovery in complex, multi-provider ecosystems.


Why M&A technology integrations are harder than expected. Here’s what you should look for early

In the article "Why M&A technology integrations are harder than expected," Thai Vong explains that while strategic growth often drives mergers, the "under the hood" technical complexities frequently turn promising deals into operational nightmares. Technology rarely determines if a deal is signed, but it dictates the post-close integration difficulty and ultimate value realization. Vong emphasizes that CIOs must be involved early in due diligence to uncover hidden risks like undocumented system dependencies, misaligned data models, and significant technical debt. Common pitfalls include legacy platforms, inconsistent security controls, and over-reliance on managed service providers in smaller firms. He argues that due diligence must go beyond simple inventory to evaluate system supportability and compliance readiness. Successful integration requires building "integration muscle" through refined playbooks and realistic timelines grounded in past experience. Furthermore, aligning technology teams with business process leaders ensures that systems are not just connected but operationally synchronized. As AI becomes more prevalent, evaluating its governance within a target environment adds a new layer of necessary scrutiny. Ultimately, the success of a merger is decided during the integration phase, making early visibility into the target’s technical landscape a strategic imperative for any acquiring organization.


Why Enterprise Architecture Drifts and What Leaders Must Watch For

In the article "Why Enterprise Architecture Drifts and What Leaders Must Watch For" on CDO Magazine, Moataz Mahmoud explores the quiet, incremental evolution of architecture drift—the widening gap between a company's planned IT framework and its actual implementation. Drift typically occurs through "micro-decisions" made by teams prioritizing tactical speed over enterprise alignment, leading to inconsistent data behavior and increased operational friction. Leaders are cautioned to watch for red flags such as slower delivery times, heightened integration efforts, and diverging system interpretations across different domains. These symptoms often indicate that a "once-a-year" blueprint has failed to account for real-world operational pressures and shifting regulations. To combat this, the piece advocates for treating architecture as a living business capability rather than a static technical artifact. It emphasizes the need for a "continuous alignment loop" that uses shared language and lightweight governance to catch small variations before they compound into systemic complexity. By fostering proactive communication between technical teams and business stakeholders, organizations can ensure that local innovations do not create unintended divergence. Ultimately, maintaining architectural integrity is framed as a leadership imperative essential for sustaining a coordinated, scalable system that can responsibly adopt emerging technologies like AI.


NB-IoT: How Narrowband IoT Supports Massive Connected Devices

The article "NB-IoT: How Narrowband IoT Supports Massive Connected Devices" from IoT Business News explains the vital role of Narrowband IoT (NB-IoT) as a specialized cellular technology designed for large-scale Internet of Things (IoT) deployments. Unlike traditional networks optimized for high-speed data, NB-IoT is an energy-efficient, low-power wide-area networking (LPWAN) solution tailored for devices that transmit small packets of data over long periods. Standardized by 3GPP, it operates within licensed spectrum—either in-band, within guard bands, or as a standalone deployment—allowing mobile operators to leverage existing LTE infrastructure through simple software upgrades. Key features like Power Saving Mode (PSM) and Extended Discontinuous Reception (eDRX) enable devices, such as smart meters and environmental sensors, to achieve battery lives exceeding ten years. While NB-IoT offers superior indoor coverage and cost-effective module complexity, it is restricted by low throughput and higher latency, making it unsuitable for high-mobility or real-time applications. Despite these limits, its ability to support massive device density makes it a cornerstone for smart cities, utilities, and industrial monitoring. As a critical component of the broader cellular IoT evolution alongside LTE-M and 5G, NB-IoT provides a reliable and scalable foundation for the future of connected infrastructure.


The Quiet Death of Enterprise Architecture

In the article "The Quiet Death of Enterprise Architecture," Eetu Niemi, Ph.D., explores the subtle and often unnoticed decline of the Enterprise Architecture (EA) function within modern organizations. Unlike a sudden departmental shutdown, this "quiet death" occurs as high initial enthusiasm gradually devolves into repetitive routine, eventually leading to neglect and total irrelevance. Niemi explains that EA initiatives typically begin with ambitious goals to resolve organizational fragmentation and provide a coherent view of complex systems through detailed modeling and governance frameworks. However, once these initial assets are established, the practice often settles into a mundane operational phase. This shift is dangerous because it causes stakeholders to view architecture as a bureaucratic hurdle rather than a strategic driver, leading to a state where critical business decisions are increasingly made without architectural input. The irony, as Niemi notes, is that "success"—where EA becomes a standard part of the organizational workflow—can inadvertently become the catalyst for its decline if it fails to consistently demonstrate tangible strategic breakthroughs. To avoid this fate, the article argues that architects must transcend routine documentation and maintain a proactive, value-oriented focus that aligns technical complexity with evolving business priorities, ensuring the practice remains a vital and influential pillar of organizational transformation.

Daily Tech Digest - February 13, 2026


Quote for the day:

"If you want teams to succeed, set them up for success—don’t just demand it." -- Gordon Tredgold



Hackers turn bossware against the bosses

Huntress discovered two incidents using this tactic, one late in January and one early this month. Shared infrastructure, overlapping indicators of compromise, and consistent tradecraft across both cases make Huntress strongly believe a single threat actor or group was behind this activity. ... CSOs must ensure that these risks are properly catalogued and mitigated,” he said. “Any actions performed by these agents must be monitored and, if possible, restricted. The abuse of these systems is a special case of ‘living off the land’ attacks. The attacker attempts to abuse valid existing software to perform malicious actions. This abuse is often difficult to detect.” ... Huntress analyst Pham said to defend against attacks combining Net Monitor for Employees Professional and SimpleHelp, infosec pros should inventory all applications so unapproved installations can be detected. Legitimate apps should be protected with robust identity and access management solutions, including multi-factor authentication. Net Monitor for Employees should only be installed on endpoints that don’t have full access privileges to sensitive data or critical servers, she added, because it has the ability to run commands and control systems. She also noted that Huntress sees a lot of rogue remote management tools on its customers’ IT networks, many of which have been installed by unwitting employees clicking on phishing emails. This points to the importance of security awareness training, she said. 


Why secure OT protocols still struggle to catch on

“Simply having ‘secure’ protocol options is not enough if those options remain too costly, complex, or fragile for operators to adopt at scale,” Saunders said. “We need protections that work within real-world constraints, because if security is too complex or disruptive, it simply won’t be implemented.” ... Security features that require complex workflows, extra licensing, or new infrastructure often lose out to simpler compensating controls. Operators interviewed said they want the benefits of authentication and integrity checks, particularly message signing, since it prevents spoofing and unauthorized command execution. ... Researchers identified cost as a primary barrier to adoption. Operators reported that upgrading a component to support secure communications can cost as much as the original component, with additional licensing fees in some cases. Costs also include hardware upgrades for cryptographic workloads, training staff, integrating certificate management, and supporting compliance requirements. Operators frequently compared secure protocol deployment costs with segmentation and continuous monitoring tools, which they viewed as more predictable and easier to justify. ... CISA’s recommendations emphasize phased approaches and operational realism. Owners and operators are advised to sign OT communications broadly, apply encryption where needed for sensitive data such as passwords and key exchanges, and prioritize secure communication on remote access paths and firmware uploads.


SaaS isn’t dead, the market is just becoming more hybrid

“It’s important to avoid overgeneralizing ‘SaaS,’” Odusote emphasized . “Dev tools, cybersecurity, productivity platforms, and industry-specific systems will not all move at the same pace. Buyers should avoid one-size-fits-all assumptions about disruption.” For buyers, this shift signals a more capability-driven, outcomes-focused procurement era. Instead of buying discrete tools with fixed feature sets, they’ll increasingly be able to evaluate and compare platforms that are able to orchestrate agents, adapt workflows, and deliver business outcomes with minimal human intervention. ... Buyers will likely have increased leverage in certain segments due to competitive pressure among new and established providers, Odusote said. New entrants often come with more flexible pricing, which obviously is an attraction for those looking to control costs or prove ROI. At the same time, traditional SaaS leaders are likely to retain strong positions in mission-critical systems; they will defend pricing through bundled AI enhancements, he said. So, in the short term, buyers can expect broader choice and negotiation leverage. “Vendors can no longer show up with automatic annual price increases without delivering clear incremental value,” Odusote pointed out. “Buyers are scrutinizing AI add-ons and agent pricing far more closely.”


When algorithms turn against us: AI in the hands of cybercriminals

Cybercriminals are using AI to create sophisticated phishing emails. These emails are able to adapt the tone, language, and reference to the person receiving it based on the information that is publicly available about them. By using AI to remove the red flag of poor grammar from phishing emails, cybercriminals will be able to increase the success rate and speed with which the stolen data is exploited. ... An important consideration in the arena of cyber security (besides technical security) is the psychological manipulation of users. Once visual and audio “cues” can no longer be trusted, there will be an erosion of the digital trust pillar. The once-recognizable verification process is now transforming into multi-layered authentication which expands the amount of time it takes to verify a decision in a high-pressure environment. ... AI’s misuse is a growing problem that has created a paradox. Innovation cannot stop (nor should it), and AI is helping move healthcare, finance, government and education forward. However, the rate at which AI has been adopted has surpassed the creation of frameworks and/or regulations related to ethics or security. As a result, cyber security needs to transition from a reactive to a predictive stance. AI must be used to not only react to attacks, but also anticipate future attacks. 


Those 'Summarize With AI' Buttons May Be Lying to You

Put simply, when a user visits a rigged website and clicks a "Summarize With AI" button on a blog post, they may unknowingly trigger a hidden instruction embedded in the link. That instruction automatically inserts a specially crafted request into the AI tool before the user even types anything. ... The threat is not merely theoretical. According to Microsoft, over a 60-day period, it observed 50 unique instances of prompt-based AI memory poisoning attempts for promotional purposes. ... AI recommendation poisoning is a sort of drive-by technique with one-click interaction, he notes. "The button will take the user — after the click — to the AI domain relevant and specific for one of the AI assistants targeted," Ganacharya says. To broaden the scope, an attacker could simply generate multiple buttons that prompt users to "summarize" something using the AI agent of their choice, he adds. ... Microsoft had some advice for threat hunting teams. Organizations can detect if they have been affected by hunting for links pointing to AI assistant domains and containing prompts with certain keywords like "remember," "trusted source," "in future conversations," and "authoritative source." The company's advisory also listed several threat hunting queries that enterprise security teams can use to detect AI recommendation poisoning URLs in emails and Microsoft Teams Messages, and to identify users who might have clicked on AI recommendation poisoning URLs.


EU Privacy Watchdogs Pan Digital Omnibus

The commission presented its so-called "Digital Omnibus" package of legal changes in November, arguing that the bloc's tech rules needed streamlining. ... Some of the tweaks were expected and have been broadly welcomed, such as doing away with obtrusive cookie consent banners in many cases, and making it simpler for companies to notify of data breaches in a way that satisfies the requirements of multiple laws in one go. But digital rights and consumer advocates are reacting furiously to an unexpected proposal for modifying the General Data Protection Regulation. ... "Simplification is essential to cut red tape and strengthen EU competitiveness - but not at the expense of fundamental rights," said EDPB chair Anu Talus in the statement. "We strongly urge the co-legislators not to adopt the proposed changes in the definition of personal data, as they risk significantly weakening individual data protection." ... Another notable element of the Digital Omnibus is the proposal to raise the threshold for notifying all personal data breaches to supervisory authorities. As the GDPR currently stands, organizations must notify a data protection authority within 72 hours of becoming aware of the breach. If amended as the commission proposes, the obligation would only apply to breaches that are "likely to result in a high risk" to the affected people's rights - the same threshold that applies to the duty to notify breaches to the affected data subjects themselves - and the notification deadline would be extended to 96 hours.


The Art of the Comeback: Why Post-Incident Communication is a Secret Weapon

Although technical resolutions may address the immediate cause of an outage, effective communication is essential in managing customer impact and shaping public perception—often influencing stakeholders’ views more strongly than the issue itself. Within fintech, a company's reputation is not built solely on product features or interface design, but rather on the perceived security of critical assets such as life savings, retirement funds, or business payrolls. In this high-stakes environment, even brief outages or minor data breaches are perceived by clients as threats to their financial security. ... While the natural instinct during a crisis (like a cyber breach or operational failure) is to remain silent to avoid liability, silence actually amplifies damage. In the first 48 hours, what is said—or not said—often determines how a business is remembered. Post-incident communication (PIC) is the bridge between panic and peace of mind. Done poorly, it looks like corporate double-speak. Done well, it demonstrates a level of maturity and transparency that your competitors might lack. ... H2H communication acknowledges the user’s frustration rather than just providing a technical error code. It recognizes the real-world impact on people, not just systems. Admitting mistakes and showing sincere remorse, rather than using defensive, legalistic language, makes a company more relatable and trustworthy. Using natural, conversational language makes the communication feel sincere rather than like an automated, cold response.


Why AI success hinges on knowledge infrastructure and operational discipline

Many organisations assume that if information exists, it is usable for GenAI, but enterprise content is often fragmented, inconsistently structured, poorly contextualised, and not governed for machine consumption. During pilots, this gap is less visible because datasets are curated, but scaling exposes the full complexity of enterprise knowledge. Conflicting versions, missing context, outdated material, and unclear ownership reduce performance and erode confidence, not because models are incapable, but because the knowledge they depend on is unreliable at scale. ... Human-in-the-loop processes struggle to keep pace with scale. Successful deployments treat HITL as a tiered operating structure with explicit thresholds, roles, and escalation paths. Pilot-style broad review collapses under volume; effective systems route only low-confidence or high-risk outputs for human intervention. ... Learning compounds over time as every intervention is captured and fed back into the system, reducing repeated manual review. Operationally, human-in-the-loop teams function within defined governance frameworks, with explicit thresholds, escalation paths, and direct integration into production workflows to ensure consistency at scale. In short, a production-grade human-in-the-loop model is not an extension of BPO but an operating capability combining domain expertise, governance, and system learning to support intelligent systems reliably.


Why short-lived systems need stronger identity governance

Consider the lifecycle of a typical microservice. In its journey from a developer’s laptop to production, it might generate a dozen distinct identities: a GitHub token for the repository, a CI/CD service account for the build, a registry credential to push the container, and multiple runtime roles to access databases, queues and logging services. The problem is not just volume; it is invisibility. When a developer leaves, HR triggers an offboarding process. Their email is cut, their badge stops working. But what about the five service accounts they hardcoded into a deployment script three years ago? ... In reality, test environments are often where attackers go first. It is the path of least resistance. We saw this play out in the Microsoft Midnight Blizzard attack. The attackers did not burn a zero-day exploit to break down the front door; they found a legacy test tenant that nobody was watching closely. ... Our software supply chain is held together by thousands of API keys and secrets. If we continue to rely on long-lived static credentials to glue our pipelines together, we are building on sand. Every static key sitting in a repo—no matter how private you think it is—is a ticking time bomb. It only takes one developer to accidentally commit a .env file or one compromised S3 bucket to expose the keys to the kingdom. ... Paradoxically, by trying to control everything with heavy-handed gates, we end up with less visibility and less control. The goal of modern identity governance shouldn’t be to say “no” more often; it should be to make the secure path the fastest path.


India's E-Rupee Leads the Secure Adoption of CBDCs

India has the e-rupee, which will eventually be used as a legal tender for domestic payments as well as for international transactions and cross-border payments. Ever since RBI launched the e-rupee, or digital rupee, in December 2022, there has been between INR 400 to 500 crore - or $44 to $55 million - in circulation. Many Indian banks are participating in this pilot project. ... Building broad awareness of CBDCs as a secure method for financial transactions is essential. Government and RBI-led awareness campaigns highlighting their security capability can strengthen user confidence and drive higher adoption and transaction volumes. People who have lost money due to QR code scams, fake calls, malicious links and other forms of payment fraud need to feel confident about using CBDCs. IT security companies are also cooperating with RBI to provide data confidentiality, transaction confidentiality and transaction integrity. E-transactions will be secured by hashing, digital signing and [advanced] encryption standards such as AES-192. This can ensure that the transaction data is not tampered with or altered. ... HSMs use advanced encryption techniques to secure transactions and keys. The HSM hardware [boxes] act as cryptographic co-processors and accelerate the encryption and decryption processes to minimize latency in financial transactions. 


Daily Tech Digest - January 30, 2026


Quote for the day:

"In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it." -- Jane Smiley



Crooks are hijacking and reselling AI infrastructure: Report

In a report released Wednesday, researchers at Pillar Security say they have discovered campaigns at scale going after exposed large language model (LLM) and MCP endpoints – for example, an AI-powered support chatbot on a website. “I think it’s alarming,” said report co-author Ariel Fogel. “What we’ve discovered is an actual criminal network where people are trying to steal your credentials, steal your ability to use LLMs and your computations, and then resell it.” ... How big are these campaigns? In the past couple of weeks alone, the researchers’ honeypots captured 35,000 attack sessions hunting for exposed AI infrastructure. “This isn’t a one-off attack,” Fogel added. “It’s a business.” He doubts a nation-state it behind it; the campaigns appear to be run by a small group. ... Defenders need to treat AI services with the same rigor as APIs or databases, he said, starting with authentication, telemetry, and threat modelling early in the development cycle. “As MCP becomes foundational to modern AI integrations, securing those protocol interfaces, not just model access, must be a priority,” he said.  ... Despite the number of news stories in the past year about AI vulnerabilities, Meghu said the answer is not to give up on AI, but to keep strict controls on its usage. “Do not just ban it, bring it into the light and help your users understand the risk, as well as work on ways for them to use AI/LLM in a safe way that benefits the business,” he advised.


AI-Powered DevSecOps: Automating Security with Machine Learning Tools

Here's the uncomfortable truth: AI is both causing and solving the same problem. A Snyk survey from early 2024 found that 77% of technology leaders believe AI gives them a competitive advantage in development speed. That's great for quarterly demos and investor decks. It's less great when you realize that faster code production means exponentially more code to secure, and most organizations haven't figured out how to scale their security practice at the same rate. ... Don't try to AI-ify your entire security stack at once. Pick one high-pain problem — maybe it's the backlog of static analysis findings nobody has time to triage, or maybe it's spotting secrets accidentally committed to repos — and deploy a focused tool that solves just that problem. Learn how it behaves. Understand its failure modes. Then expand. ... This is non-negotiable, at least for now. AI should flag, suggest, and prioritize. It should not auto-merge security fixes or automatically block deployments without human confirmation. I've seen two different incidents in the past year where an overzealous ML system blocked a critical hotfix because it misclassified a legitimate code pattern as suspicious. Both cases were resolved within hours, but both caused real business impact. The right mental model is "AI as junior analyst." ... You need clear policies around which AI tools are approved for use, who owns their output, and how to handle disagreements between human judgment and AI recommendations.


AI & the Death of Accuracy: What It Means for Zero-Trust

The basic idea is that as the signal quality degrades over time through junk training data, models can remain fluent and fully interact with the user while becoming less reliable. From a security standpoint, this can be dangerous, as AI models are positioned to generate confident-yet-plausible errors when it comes to code reviews, patch recommendations, app coding, security triaging, and other tasks. More critically, model degradation can erode and misalign system guardrails, giving attackers the opportunity exploit the opening through things like prompt injection. ... "Most enterprises are not training frontier LLMs from scratch, but they are increasingly building workflows that can create self-reinforcing data stores, like internal knowledge bases, that accumulate AI-generated text, summaries, and tickets over time," she tells Dark Reading.  ... Gartner said that to combat the potential impending issue of model degradation, organizations will need a way to identify and tag AI-generated data. This could be addressed through active metadata practices (such as establishing real-time alerts for when data may require recertification) and potentially appointing a governance leader that knows how to responsibly work with AI-generated content. ... Kelley argues that there are pragmatic ways to "save the signal," namely through prioritizing continuous model behavior evaluation and governing training data.


The Friction Fix: Change What Matters

Friction is the invisible current that sinks every transformation. Friction isn’t one thing, it’s systemic. Relationships produce friction: between the people, teams and technology. ... When faced with a systemic challenge, our human inclination is to blame. Unfortunately, we blame the wrong things. We blame the engineering team for failing to work fast enough or decide the team is too small, rather than recognize that our Gantt chart was fiction, which is an oversimplification of a complex dynamic. ... The fix is to pause and get oriented. Begin by identifying the core domain, the North Star. What is the goal of the system? For Fedex, it is fast package delivery. Chances are, when you are experiencing counterintuitive behavior, it is because people are navigating in different directions while using the same words. ... Every organization trying to change has that guy: the gatekeeper, the dungeon master, the self-proclaimed 10x engineer who knows where the bodies are buried. They also wield one magic word: No. ... It’s easy to blame that guy’s stubborn personality. But he embodies behavior that has been rewarded and reinforced. ... Refusal to change is contagious. When that guy shuts down curiosity, others drift towards a fixed mindset. Doubt becomes the focus, not experimentation. The organization can’t balance avoiding risk with trying something new. The transformation is dead in the water.


From devops to CTO: 8 things to start doing now

Devops leaders have the opportunity to make a difference in their organization and for their careers. Lead a successful AI initiative, deploy to production, deliver business value, and share best practices for other teams to follow. Successful devops leaders don’t jump on the easy opportunities; they look for the ones that can have a significant business impact. ... Another area where devops engineers can demonstrate leadership skills is by establishing standards for applying genAI tools throughout the software development lifecycle (SDLC). Advanced tools and capabilities require effective strategies to extend best practices beyond early adopters and ensure that multiple teams succeed. ... If you want to be recognized for promotions and greater responsibilities, a place to start is in your areas of expertise and with your team, peers, and technology leaders. However, shift your focus from getting something done to a practice leadership mindset. Develop a practice or platform your team and colleagues want to use and demonstrate its benefits to the organization. Devops engineers can position themselves for a leadership role by focusing on initiatives that deliver business value. ... One of the hardest mindset transitions for CTOs is shifting from being the technology expert and go-to problem-solver to becoming a leader facilitating the conversation about possible technology implementations. If you want to be a CTO, learn to take a step back to see the big picture and engage the team in recommending technology solutions.


The stakes rise for the CIO role in 2026

The CIO's days as back-office custodian of IT are long gone, to be sure, but that doesn't mean the role is settled. Indeed, Seewald and others see plenty of changes still underway. In 2026, the CIO's role in shaping how the business operates and performs is still expanding. It reflects a nuanced change in expectations, according to longtime CIOs, analysts and IT advisors -- and one that is showing up in many ways as CIOs become more directly involved in nailing down competitive advantage and strategic success across their organizations. ... "While these core responsibilities remain the same, the environment in which CIOs operate has become far more complex," Tanowitz added. Conal Gallagher, CIO and CISO at Flexera, said the CIO in 2026 is now "accountable for outcomes: trusted data, controlled spend, managed risk and measurable productivity. "The deliverable isn't a project plan," Gallagher said. "It's proof that the business runs faster, safer and more cost-disciplined because of the operating model IT enables." ... In 2026, the CIO role is less about being the technology owner and more about being a business integrator, Hoang said. At Commvault, that shift places greater emphasis on governance and orchestration across ecosystems. "We're operating in a multicloud, multivendor, AI-infused environment," she said. "A big part of my job is building guardrails and partnerships that enable others to move fast -- safely," she said. 


Inside the Shift to High-Density, AI-Ready Data Centres

As density increases, design philosophy must evolve. Power infrastructure, backup systems, and cooling can no longer be treated as independent layers; they have to be tightly integrated. Our facilities use modular and scalable power and cooling architectures that allow us to expand capacity without disrupting live environments. Rated-4 resilience is non-negotiable, even under continuous, high-density AI workloads. The real focus is flexibility. Customers shouldn’t be forced into an all-or-nothing transition. Our approach allows them to move gradually to higher densities while preserving uptime, efficiency, and performance. High-density AI infrastructure is less about brute force and more about disciplined engineering that sustains reliability at scale. ... The most common misconception is that AI data centres are fundamentally different entities. While AI workloads do increase density, power, and cooling demands, the core principles of reliability, uptime, and efficiency remain unchanged. AI readiness is not about branding; it’s about engineering and operations. Supporting AI workloads requires scalable and resilient power delivery, precision cooling, and flexible designs that can handle GPUs and accelerators efficiently over sustained periods. Simply adding more compute without addressing these fundamentals leads to inefficiency and risk. The focus must remain on mission-critical resilience, cost-effective energy management, and sustainability. 


Software Supply Chain Threats Are on the OWASP Top Ten—Yet Nothing Will Change Unless We Do

As organizations deepen their reliance on open-source components and embrace AI-enabled development, software supply chain risks will become more prevalent. In the OWASP survey, 50% of respondents ranked software supply chain failures number one. The awareness is there. Now the pressure is on for software manufacturers to enhance software transparency, making supply chain attacks far less likely and less damaging. ... Attackers only need one forgotten open-source component from 2014 that still lives quietly inside software to execute a widespread attack. The ability to cause widespread damage by targeting the software supply chain makes these vulnerabilities alluring for attackers. Why break into a hardened product when one outdated dependency—often buried several layers down—opens the door with far less effort? The SolarWinds software supply chain attack that took place in 2020 demonstrated the access adversaries gain when they hijack the build process itself. ... “Stable” legacy components often go uninspected for years. These aging libraries, firmware blocks, and third-party binaries frequently contain memory-unsafe constructs and unpatched vulnerabilities that could be exploited. Be sure to review legacy code and not give it the benefit of the doubt. ... With an SBOM in hand, generated at every build, you can scan software for vulnerabilities and remediate issues before they are exploited. 


What the first 24 hours of a cyber incident should look like

When a security advisory is published, the first question is whether any assets are potentially exposed. In the past, a vendor’s claim of exploitation may have sufficed. Given the precedent set over the past year, it is unwise to rely solely on a vendor advisory for exploited-in-the-wild status. Too often, advisories or exploitation confirmations reach teams too late or without the context needed to prioritise the response. CISA’s KEV, trusted third-party publications, and vulnerability researchers should form the foundation of any remediation programme. ... Many organisations will leverage their incident response (IR) retainers to assess the extent of the compromise or, at a minimum, perform a rudimentary threat hunt for indicators of compromise (IoCs) before involving the IR team. As with the first step, accurate, high-fidelity intelligence is critical. Simply downloading IoC lists filled with dual-use tools from social media will generate noise and likely lead to inaccurate conclusions. Arguably, the cornerstone of the initial assessment is ensuring that intelligence incorporates decay scoring to validate command-and-control (C2) infrastructure. For many, the term ‘threat hunt’ translates to little more than a log search on external gateways. ... The approach at this stage will be dependent on the results of the previous assessments. There is no default playbook here; however, an established decision framework that dictates how a company reacts is key.


NIST’s AI guidance pushes cybersecurity boundaries

For CISOs, what should matter is that NIST is shifting from a broad, principle-based AI risk management framework toward more operationally grounded expectations, especially for systems that act without constant human oversight. What is emerging across NIST’s AI-related cybersecurity work is a recognition that AI is no longer a distant or abstract governance issue, but a near-term security problem that the nation’s standards-setting body is trying to tackle in a multifaceted way. ... NIST’s instinct to frame AI as an extension of traditional software allows organizations to reuse familiar concepts — risk assessment, access control, logging, defense in depth — rather than starting from zero. Workshop participants repeatedly emphasized that many controls do transfer, at least in principle. But some experts argue that the analogy breaks down quickly in practice. AI systems behave probabilistically, not deterministically, they say. Their outputs depend on data that may change continuously after deployment. And in the case of agents, they may take actions that were not explicitly scripted in advance. ... “If you were a consumer of all of these documents, it was very difficult for you to look at them and understand how they relate to what you are doing and also understand how to identify where two documents may be talking about the same thing and where they overlap.”

Daily Tech Digest - January 26, 2026


Quote for the day:

"When I finally got a management position, I found out how hard it is to lead and manage people." -- Guy Kawasaki



Stop Choosing Between Speed and Stability: The Art of Architectural Diplomacy

In contemporary business environments, Enterprise Architecture (EA) is frequently misunderstood as a static framework—merely a collection of diagrams stored digitally. In fact, EA functions as an evolving discipline focused on effective conflict management. It serves as the vital link between the immediate demands of the present and the long-term, sustainable objectives of the organization. To address these challenges, experienced architects employ a dual-framework approach, incorporating both W.A.R. and P.E.A.C.E. methodologies. At any given moment, an organization is a house divided. On one side, you have the product owners, sales teams, and innovators who are in a state of perpetual W.A.R. (Workarounds, Agility, Reactivity). They are facing the external pressures of a volatile market, where speed is the only currency and being "first" often trumps being "perfect." To them, architecture can feel like a roadblock—a series of bureaucratic "No’s" that stifle the ability to pivot. On the other side, you have the operations, security, and finance teams who crave P.E.A.C.E. (Principles, Efficiency, Alignment, Consistency, Evolution). They see the long-term devastation caused by unchecked "cowboy coding" and fragmented systems. They know that without a foundation of structural integrity, the enterprise will eventually collapse under the weight of its own complexity, turning a fast-moving startup into a sluggish, expensive legacy giant.


Why Identity Will Become the Ultimate Control Point for an Autonomous World in 2026

The law of unintended consequences will dominate organisational cybersecurity in 2026. As enterprises increase their reliance on autonomous AI agents with minimal human oversight, and as machine identities multiply, accountability will blur. The constant tension between efficiency and security will fuel uncontrolled privilege sprawl forcing organisations to innovate not only in technology, but in governance. ... Attackers will exploit this shift, embedding malicious prompts and compromising automated pipelines to trigger actions that bypass traditional controls. Conventional privileged access management and identity access management will no longer be sufficient. Continuous monitoring, adaptive risk frameworks, and real-time credential revocation will become essential to manage the full lifecycle of AI agents. At the same time, innovation in governance and regulation will be critical to prevent a future defined by “runaway” automation. Two years after NIST released its first AI Risk Management Framework, the framework remains voluntary globally, and adoption has been inconsistent since no jurisdiction mandates it. Unless governance becomes a requirement not just a guideline, organisations will continue to treat it as a cost rather than a safeguard. Regulatory frameworks that once focused on data privacy will expand to cover AI identity governance and cyber resilience, mandating cross-region redundancy and responsible agent oversight.


The human paradox at the center of modern cyber resilience

The problem for security leaders is that social engineering is still the most effective way to bypass otherwise robust technical controls. The problem is becoming more acute as threat actors increasingly use AI to deliver compelling, personalized, and scalable phishing attacks. While many such incidents never reach public attention, an attempt last year to defraud WPP used AI-generated video and voice cloning to impersonate senior executives in a highly convincing deepfake meeting. Unfortunately, the risks don’t end there. Even with strong technical controls and a workforce alert to social engineering tactics, risk also comes from employees who introduce tools, devices or processes that fall outside formal IT governance. ... What’s needed instead is a shift in both mindset and culture, where employees understand not just what not to do, but why their day-to-day decisions, which tools they trust, how they handle unexpected requests, and when they choose to slow down and double check something rather than act on instinct genuinely matter. From a leadership perspective, it’s much better to foster a culture which people feel comfortable reporting suspicious activity without fear of blame, rather than an environment where taking the risk feels like the easier option. ... Instead of acting quickly to avoid delaying work, the employee pauses because the culture has normalized slowing down when something seems unusual. They also know exactly how to report or verify because the processes are familiar and straightforward, with no confusion about who to contact or whether they’ll be blamed for raising a false alarm.


Is cloud backup repatriation right for your organization?

Cost is, without a doubt, one of the major reasons for repatriation. Cloud providers have touted the affordability of the cloud over physical data storage, but getting the most bang for your buck from using the cloud requires due diligence to keep costs down. Even major corporations struggle with this issue. The bigger the environment, the more complex it is to accurately model and cost, particularly with multi-cloud environments. And as we know, cloud is incredibly easy to scale up. Keeping with our data theme, understanding the costing model of data backup and bringing back data from deep storage is extremely expensive when done in bulk. Software must be expertly tuned to use the provider storage tier stack efficiently, or massive costs can be incurred. On-premises, the storage costs are already sunk. The data is also local (assuming local backup with remote replication for offsite backup,) so restoring data and services happens quicker. ... Straight-up backup to the cloud can be cheaper and more effective than on-site backups. It also passes a good portion of the management overhead to the cloud provider, such as hardware support, general maintenance and backup security. As we discussed, however, putting backups in another provider's hands might mean longer response and recovery times. Smaller businesses often have an immature environment and cloud backup can be a boon, but larger businesses might consider repatriation if the infrastructure for on-site is available. 


Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents

AI agents are different. They operate with delegated authority and can act on behalf of multiple users or teams without requiring ongoing human involvement. Once authorized, they are autonomous, persistent, and often act across systems, moving between various systems and data sources to complete tasks end-to-end. In this model, delegated access doesn’t just automate user actions, it expands them. Human users are constrained by the permissions they are explicitly granted, but AI agents are often given broader, more powerful access to operate effectively. As a result, the agent can perform actions that the user themselves was never authorized to take. ... It’s no wonder existing IAM assumptions break down. IAM assumes a clear identity, a defined owner, static roles, and periodic reviews that map to human behavior. AI agents don’t follow those patterns. They don’t fit neatly into user or service account categories, they operate continuously, and their effective access is defined by how they are used, not how they were originally approved. Without rethinking these assumptions, IAM becomes blind to the real risk AI agents introduce. ... When agents operate on behalf of individual users, they can provide the user access and capabilities beyond the user’s approved permissions. A user who cannot directly access certain data or perform specific actions may still trigger an agent that can. The agent becomes a proxy, enabling actions the user could never execute on their own. These actions are technically authorized - the agent has valid access. However, they are contextually unsafe.


The CISO’s Recovery-First Game Plan

CISOs must be on top of their game to protect an organization’s data. Lapses in cybersecurity around the data infrastructure can be devastating. Therefore, securing infrastructure needs to be air-tight. The “game plan” that leads a CISO to success must have the following elements: Immutable snapshots; Logical air-gapping; Fenced forensic environment;  Automated cyber protection; Cyber detection; and Near-instantaneous recovery. These six elements constitute the new wave in protecting data: next-generation data protection. There has already been a shift from modern data protection to this substantially higher level of next-gen data protection. A smart CISO would not knowingly leave their enterprise weaker. This is why adoption of automated cyber protection and cyber detection, built right into enterprise storage infrastructure, is increasing, as part of this move to next-gen data protection. Automated cyber protection and cyber detection are becoming a basic requirement for all enterprises that want to eliminate the impact of cyberattacks. All of this is vital for the rapid recovery of data within an enterprise after a cyberattack. ... But what would be smart for CISOs to do is to make adjustments based on what they currently have protecting their storage infrastructure. For example, even in a mixed storage environment, you can deploy automated cyber protection through software. You don’t need to rip and replace the cybersecurity systems and applications that you already have in place. 


ICE’s expanding use of FRT on minors collides with DHS policy, oversight warnings, law

At the center of the case is DHS’s use of Mobile Fortify, a field-deployed application that scans fingerprints and performs facial recognition, then compares collected data against multiple DHS databases, including CBP’s Traveler Verification Service, Border Patrol systems, and Office of Biometric Identity Management’s Automated Biometric Identification System. The complaint alleges DHS launched Mobile Fortify around June 2025 and has used it in the field more than 100,000 times since launch. Unlike CBP’s traveler entry-exit facial recognition program in which U.S. citizens can decline participation and consenting citizens’ photos are retained only until identity verification, Mobile Fortify is not restricted to ports of entry and is not meaningfully limited as to when, where, or from whom biometrics may be taken. The lawsuit cites a DHS Privacy Threshold Analysis stating that ICE agents may use Mobile Fortify when they “encounter an individual or associates of that individual,” and that agents “do not know an individual’s citizenship at the time of initial encounter” and use Mobile Fortify to determine or verify identity. The same passage, as quoted in the complaint, authorizes collection in identifiable form “regardless of citizenship or immigration status,” acknowledging that a photo captured could be of a U.S. citizen or lawful permanent resident.


From Incident to Insight: How Forensic Recovery Drives Adaptive Cyber Resilience

The biggest flaw is that traditional forensics is almost always reactive, and once complete, it ultimately fails to deliver timely insights that are vital to an organization. For example, analysts often begin gathering logs, memory dumps, and disk images only after a breach has been detected, by which point crucial evidence may be gone. Further compounding matters is the fact that the process is typically fragmented, with separate tools for endpoint detection, SIEM, and memory analysis that make it harder to piece together a coherent narrative. ... Modern forensic approaches capture evidence at the first sign of suspicious activity — preserving memory, process data, file paths, and network activity before attackers can destroy them. The key is storing artifacts securely outside the compromised environment, which ensures their integrity and maintains the chain of custody. The most effective strategies operate on parallel tracks. The first is dedicated to restoring operations and delivering forensic artifacts, while the other begins immediate investigations. By integrating forensic, endpoint, and network evidence collection together, silos and blind spots are replaced with a comprehensive and cohesive picture of the incident. ... When integrated into the incident response process, forensic recovery investigations begin earlier, compliance reporting is backed by verifiable facts, and legal defenses are equipped with the necessary evidence. 


Memgraph founder: Don’t get too loose with your use of MCP

“It is becoming almost universally accepted that without strong curation and contextual grounding, LLMs can misfire, misuse tools, or behave unpredictably. Let me clarify what I mean by ‘tool’ i.e. external capabilities provided to the LLM, ranging from search, calculations and database queries to communication, transaction execution and more, with each exposed as an action or API endpoint through MCP.” ... “But security isn’t actually the main possible MCP stumbling block. Perversely enough, by giving the LLM more capabilities, it might just get confused and end up charging too confidently down a completely wrong path,” said Tomicevic. “This problem mirrors context-window overload: too much information increases error rates. Developers still need to carefully curate the tools their LLMs can access, with best practice being to provide only a minimal, essential set. For more complex tasks, the most effective approach is to break them into smaller subtasks, often leveraging a graph-based strategy.” ... The truth that’s coming out of this discussion might lead us to understand that the best of today’s general-purpose models, like those from OpenAI, are trained to use built-in tools effectively. But even with a focused set of tools, organisations are not entirely out of the woods. Context remains a major challenge. Give an LLM a query tool and it runs queries; but without understanding the schema or what the data represents, it won’t generate accurate or meaningful queries.


Speaking the Same Language: Decoding the CISO-CFO Disconnect

On the surface, things look good: 88% of security leaders believe their priorities match business goals, and 55% of finance leaders view cybersecurity as a core strategic driver. However, the conviction is shallow. ... For CISOs, the report is a wake-up call regarding their perceived business acumen. While security leaders feel they are working hard to protect the organization, finance remains skeptical of their execution. The translation gap: Only 52% of finance leaders are "very confident" that their security team can communicate business impact clearly. Prioritization doubts: Just 43% of finance leaders feel very confident that security can prioritize investments based on actual risk. Strategy versus operations: Only 40% express full confidence in security's ability to align with business strategy. ... Chief Financial Officers are increasingly taking responsibility for enterprise risk management and cyber insurance, yet they feel they are operating with incomplete data. Efficiency concerns: Only 46% of finance leaders are very confident that security can deliver cost-efficient solutions. Perception of value: CFOs are split, with 38% viewing cybersecurity as a strategic enabler, while another 38% still view it as a cost center. ... "When security is done right, it doesn't slow the business down—it gives leadership the confidence to move faster. And to do that, you have to be able to connect with your CFO and COO through stories. Dashboards full of red, yellow, and green don't help a CFO," said Krista Arndt,