
Daily Tech Digest - March 23, 2026


Quote for the day:

"Successful leaders see the opportunities in every difficulty rather than the difficulty in every opportunity" -- Reed Markham


🎧 Listen to this digest on YouTube Music


Duration: 23 mins • Perfect for listening on the go.


Testing autonomous agents (Or: how I learned to stop worrying and embrace chaos)

The VentureBeat article "Testing autonomous agents (Or: how I learned to stop worrying and embrace chaos)" explores the critical shift from simple chatbots to autonomous AI agents that function more like independent employees. As agents gain the power to execute actions without human confirmation, the authors argue that "plausible" reasoning is no longer sufficient; systems must instead be engineered for graceful failure and absolute reliability. To achieve this, a four-layered architecture is proposed: high-quality model selection, deterministic guardrails using traditional validation logic, confidence quantification to identify ambiguity, and comprehensive observability for auditing reasoning chains. Reliability is further reinforced by defining clear permission, semantic, and operational boundaries to limit the "blast radius" of potential errors. The article emphasizes that traditional software testing is inadequate for probabilistic systems, advocating instead for simulation environments, red teaming, and "shadow mode" deployments where agents’ decisions are compared against human actions. Ultimately, building enterprise-grade autonomy requires a risk-based investment in safeguards and a rethink of organizational accountability, ensuring that human-in-the-loop patterns remain a central safety mechanism as these systems navigate the complex, often unpredictable reality of production environments.
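To make the layered approach concrete, here is a minimal Python sketch (not from the article) of how deterministic guardrails and a confidence threshold can sit between an agent's proposed action and its execution; the action names, limits, and threshold are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str           # e.g. "issue_refund"
    amount: float       # value the agent wants to act on
    confidence: float   # model-reported confidence, 0.0-1.0

# Deterministic guardrail: plain validation logic, no model involved.
def within_permission_boundary(action: ProposedAction) -> bool:
    allowed_actions = {"issue_refund", "send_status_update"}   # permission boundary
    max_refund = 200.00                                        # operational boundary
    if action.name not in allowed_actions:
        return False
    if action.name == "issue_refund" and action.amount > max_refund:
        return False
    return True

def route(action: ProposedAction) -> str:
    """Decide whether to execute, escalate to a human, or reject."""
    if not within_permission_boundary(action):
        return "reject"            # limits the blast radius of a bad decision
    if action.confidence < 0.8:
        return "escalate_to_human" # ambiguity -> human-in-the-loop
    return "execute"

if __name__ == "__main__":
    for proposal in [
        ProposedAction("issue_refund", 49.99, 0.93),
        ProposedAction("issue_refund", 900.00, 0.97),
        ProposedAction("close_account", 0.0, 0.99),
        ProposedAction("send_status_update", 0.0, 0.55),
    ]:
        print(proposal.name, "->", route(proposal))
```

In shadow mode, the same `route` decision would be recorded and compared against what a human operator actually did, rather than executed.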


NIST updates its DNS security guidance for the first time in over a decade

NIST has released Special Publication 800-81r3, the Secure Domain Name System Deployment Guide, marking its first significant update to DNS security standards in over twelve years. This comprehensive revision addresses the modern threat landscape by focusing on three critical pillars: utilizing DNS as an active security control, securing protocols, and hardening infrastructure. A central theme is the implementation of protective DNS (PDNS), which empowers organizations to analyze queries and block access to malicious domains proactively. The guide provides technical advice on deploying encrypted DNS transports such as DNS over TLS (DoT), DNS over HTTPS (DoH), and DNS over QUIC (DoQ) to ensure data privacy and integrity. Furthermore, it modernizes DNSSEC recommendations by favoring efficient elliptic-curve algorithms such as ECDSA and EdDSA over legacy RSA signatures. Organizational hygiene is also prioritized, with strategies to mitigate risks like dangling CNAME records and lame delegations that lead to domain hijacking. By advocating the separation of authoritative and recursive functions and the geographic dispersal of name servers, NIST aims to bolster the resilience of DNS infrastructure. This updated framework serves as an essential roadmap for cybersecurity leaders and technical teams tasked with maintaining secure, future-proof DNS environments in an increasingly complex digital ecosystem.
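For readers who want to experiment with the encrypted transports and resolver-side DNSSEC validation the guide covers, a minimal sketch follows. It assumes the dnspython package is installed and uses Cloudflare's public 1.1.1.1 resolver; it is an illustration, not an excerpt from the NIST publication.

```python
# Sends a query over DNS over TLS (DoT) to a public validating resolver and checks
# whether the response carries the AD (authenticated data) flag, i.e. whether the
# resolver validated the answer with DNSSEC. Requires: pip install dnspython
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

def resolve_over_dot(name: str, resolver_ip: str = "1.1.1.1") -> None:
    query = dns.message.make_query(name, dns.rdatatype.A, want_dnssec=True)
    # Port 853 is the standard DoT port; dns.query.tls wraps the TCP session in TLS.
    response = dns.query.tls(query, resolver_ip, port=853)
    validated = bool(response.flags & dns.flags.AD)
    answers = [rr.to_text() for rrset in response.answer for rr in rrset]
    print(f"{name}: DNSSEC-validated={validated}, answers={answers}")

if __name__ == "__main__":
    resolve_over_dot("example.com")   # example.com is a DNSSEC-signed zone
```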


The insider threat rises again

The article "The Insider Threat Rises Again" examines the escalating risks posed by internal actors in modern organizations. Driven by evolving technologies and shifting work dynamics, insider incidents have become increasingly frequent and costly, with 42% of organizations reporting a rise in both malicious and negligent cases over the past year. The financial impact is staggering, averaging $13.1 million per incident. Today's threat landscape is multifaceted, encompassing deliberate sabotage, inadvertent errors, and the emergence of "coerced insiders" targeted via social media or the dark web. Remote work has exacerbated these risks by lowering psychological barriers to data exfiltration, while AI enables data theft at an unprecedented scale. Furthermore, the article highlights sophisticated tactics like North Korean operatives posing as fake IT workers to gain persistent network access. To combat these threats, experts argue that traditional perimeter security is no longer sufficient. Organizations must instead adopt adaptive controls that monitor high-risk actions in real-time and create friction at the point of data access. Moving beyond managing human behavior, effective security now requires meeting users at the point of risk to identify and block suspicious activity regardless of the actor's credentials.


25 Years of the Agile Manifesto, and the End of the Road for AppSec?

In the article "25 Years of the Agile Manifesto and the End of the Road for AppSec," the author reflects on how the evolution of software development has rendered traditional Application Security (AppSec) models obsolete. Since the inception of the Agile Manifesto, the industry has shifted from slow, monolithic release cycles to rapid, continuous delivery. The core argument is that conventional AppSec—often characterized by "gatekeeping," manual reviews, and siloed security teams—cannot keep pace with the velocity of modern DevOps. This friction creates a bottleneck that developers frequently bypass to meet deadlines, ultimately compromising security. The piece suggests that we have reached the "end of the road" for security as a separate, reactionary phase. Instead, the future lies in "shifting left" and "shifting everywhere," where security is fully integrated into the CI/CD pipeline through automation and developer-centric tools. By empowering developers to take ownership of security within their existing workflows, organizations can achieve the speed promised by Agile without sacrificing safety. Ultimately, the article calls for a cultural and technical transformation where AppSec evolves from a final checkpoint into an invisible, continuous component of the software development lifecycle, ensuring resilience in an increasingly fast-paced digital landscape.


The era of cheap technology could be over

The article suggests that the long-standing era of affordable consumer and enterprise technology is drawing to a close, primarily driven by an unprecedented global shortage of critical hardware components. This shift is largely attributed to the explosive growth of artificial intelligence, which has created an insatiable demand for high-performance processors, memory, and solid-state storage. Manufacturers are increasingly prioritizing high-margin AI-specific hardware over commodity components used in PCs, smartphones, and servers, leading to significant price hikes. Market analysts predict a dramatic surge in DRAM and SSD prices, with some estimates suggesting a 130% increase by the end of the year. Consequently, shipments for personal computers and mobile devices are expected to decline as manufacturing costs become prohibitive. Beyond the AI boom, the crisis is exacerbated by post-pandemic market cycles and geopolitical tensions that continue to destabilize global supply chains. To navigate this new landscape, IT leaders are being forced to rethink procurement strategies, opting for data cleansing, tiered storage solutions, and extending the lifecycle of existing hardware. Ultimately, while these shortages strain budgets, they may encourage more disciplined data management practices as businesses adapt to a more expensive technological environment.


The AI era of incident response: What autonomous operations mean for enterprise IT

The article explores the transformative shift in enterprise IT as it moves toward an era of autonomous operations driven by artificial intelligence. Traditionally, incident response has been a reactive, manual process, leaving IT teams overwhelmed by a constant deluge of alerts and complex troubleshooting tasks. However, as modern environments grow increasingly intricate across cloud and hybrid infrastructures, manual intervention is no longer sustainable. The author argues that AI and machine learning are revolutionizing this landscape by enabling proactive monitoring and automated remediation. These AIOps tools can analyze massive datasets in real time to identify patterns, pinpoint root causes, and resolve issues before they escalate into significant outages. This transition significantly reduces the Mean Time to Repair (MTTR) and shifts the focus of IT staff from constant firefighting to higher-value strategic initiatives. While human oversight remains essential, the role of IT professionals is evolving into one of managing intelligent systems rather than performing repetitive manual labor. Ultimately, embracing autonomous operations allows organizations to achieve greater system reliability, operational efficiency, and a superior developer experience, marking a decisive break from the limitations of legacy incident management frameworks.


Securing Automation: Why the Specification Stage Is the Right Time to Embed OT Cybersecurity

Manufacturers today are rapidly adopting automation to meet rising demand, yet a significant gap remains in cybersecurity investment, often leaving operational technology (OT) vulnerable. This article argues that the most effective remedy is to embed security requirements directly into the initial specification phase of projects. By integrating specific, testable criteria into Requests for Proposals (RFPs), security becomes a contractually enforceable deliverable rather than a costly afterthought. Effective requirements must adhere to six key attributes: they should be achievable, unambiguous, concise, complete, singular, and verifiable. This structured approach allows for rigorous validation during Factory Acceptance Testing (FAT) and Site Acceptance Testing (SAT), ensuring systems are hardened before they go live. Beyond technical specifications, the author emphasizes a holistic strategy encompassing people and processes, such as developing OT-specific security policies and conducting regular incident-response drills. Resilience is also highlighted through the implementation of immutable backups and "safe-state" logic to maintain production during disruptions. Ultimately, establishing an OT governance board ensures that security remains a continuous, executive-level priority, safeguarding automation investments while maintaining the speed and efficiency essential for modern industrial competitiveness.


The Illusion of Managed Data Products

In "The Illusion of Managed Data Products," Dr. Jarkko Moilanen explores the critical gap between perceiving data as a managed asset and the operational reality of true control. He argues that many organizations mistake visibility—achieved through data catalogs and dashboards—for actual management. While these tools identify existing products and track performance, they often fail to trigger meaningful action when issues arise. This creates an illusion of order where structure and metadata exist, but ownership remains static and metrics lack consequences. Moilanen identifies "diffusion of responsibility" and "latency" as key barriers, where signals are observed but not systematically tied to accountability or execution. To overcome this, the author advocates for a shift from mere observation to an active operating model. This involves creating a closed loop where every signal leads to a defined owner, a triggered action, and subsequent verification. By integrating business outcomes with governance and leveraging AI to bridge the gap between detection and response, organizations can move beyond descriptive catalogs toward a system of coordinated execution. Ultimately, managing data products requires more than just better visualization; it demands a structural transformation that prioritizes responsiveness and ensures that every data insight results in tangible business momentum.


Resilience by Design: How Axis Bank is redefining cybersecurity for the AI-driven banking era

The article titled "Resilience by Design: How Axis Bank is redefining cybersecurity for the AI-driven banking era" features Vinay Tiwari, CISO of Axis Bank, and his vision for securing modern financial services. As banking transitions into an AI-driven landscape, Tiwari emphasizes "resilience by design," a strategy that integrates security into the core of every digital initiative rather than treating it as an afterthought. The bank’s approach is anchored by three critical domains: robust cyber risk governance, secured data architecture, and continuous threat analysis. A central pillar of this transformation is the implementation of Zero Trust Architecture, which replaces implicit trust with continuous verification across all network interactions. Furthermore, Axis Bank leverages advanced AI/ML-powered threat intelligence and automated security operations to detect anomalies and mitigate risks proactively. Beyond technology, Tiwari stresses that true resilience stems from a human-centered culture. By launching comprehensive awareness programs, the bank empowers employees to recognize social engineering and phishing threats. Ultimately, this multifaceted strategy—combining hybrid-cloud protection, preemptive defense, and unified compliance—aims to build digital trust. This ensures that as Axis Bank scales, its security posture remains robust enough to counter the evolving complexities of the modern cyber threat landscape.


Why Data Governance Keeps Falling Short and 6 Actions to Fix It

In this article, Malcolm Hawker explores why data governance initiatives often fail to deliver their promised value, attributing the shortfall to a combination of human, cultural, and organizational barriers. A primary issue is the conceptual misunderstanding where leadership views data governance as a technical IT responsibility rather than a fundamental enterprise capability. This results in an overreliance on technology and a lack of genuine executive engagement beyond mere "buy-in." Furthermore, many organizations struggle to quantify the business benefits of governance, leading it to be perceived as a cost center rather than a value generator. To overcome these obstacles, Hawker proposes six strategic actions aimed at realigning governance with business goals. These include educating leadership to foster a data-driven culture, documenting clear business value, and acknowledging that governance is a cross-functional business issue rather than an IT problem. Additionally, he emphasizes the need to define the true value of data, cover the entire data supply chain, and integrate governance more closely with core business operations. By shifting focus from technological tools to people, leadership, and value quantification, organizations can transform data governance from a stagnant administrative burden into a dynamic driver of competitive advantage and regulatory compliance.

Daily Tech Digest - March 11, 2026


Quote for the day:

“In the end, it is important to remember that we cannot become what we need to be by remaining what we are.” -- Max De Pree

🎧 Listen to this digest on YouTube Music


Duration: 21 mins • Perfect for listening on the go.

Jack & Jill went up the hill — and an AI tried to hack them

This Computerworld article details a groundbreaking red-teaming experiment by CodeWall where an autonomous AI agent successfully compromised the Jack & Jill hiring platform. By chaining together four seemingly minor vulnerabilities—a faulty URL fetcher, an exposed test mode, missing role checks, and lack of domain verification—the agent gained full administrative access within an hour. The experiment took a surreal turn when the agent autonomously generated a synthetic voice to interact with the platform’s internal assistants, even masquerading as Donald Trump to demand sensitive data. While the platform’s defensive guardrails successfully repelled direct social engineering attempts, the test proved that AI can navigate complex attack vectors with greater speed and creativity than human experts. CodeWall CEO Paul Price emphasizes that AI’s ability to digest vast information and run thousands of simultaneous experiments necessitates a radical shift in defensive postures. As AI lowers the barrier for sophisticated cyberattacks, organizations must move beyond periodic scans toward continuous, adversarial testing. Ultimately, this piece serves as a stark warning that integrating autonomous agents into business operations creates entirely new, unsecured attack surfaces that require urgent attention from security leaders worldwide.


When is an SBOM not an SBOM? CISA’s Minimum Elements

This Techzine article examines the Cybersecurity and Infrastructure Security Agency's 2025 guidance that significantly elevates the technical standards for Software Bills of Materials. By introducing "Minimum Elements," CISA establishes a rigorous baseline for what constitutes a credible SBOM, moving beyond simple component lists to include cryptographic hashes and detailed generation context. This shift aligns with global regulatory trends, most notably the EU Cyber Resilience Act, which legally mandates "security by design" and persistent SBOM maintenance for digital products sold in Europe. The author emphasizes that a static SBOM is no longer sufficient; instead, these documents must be dynamic, immutable records generated for every build to facilitate rapid incident response. In an era of strict compliance deadlines—often requiring vulnerability notification within 24 hours—the ability to accurately query software dependencies has become a competitive necessity. Ultimately, the article argues that mature, automated SBOM processes are critical for establishing trust with procurement teams and regulators. Organizations failing to adopt these rigorous standards risk being excluded from the global market as the industry moves toward a more transparent, secure, and verifiable software supply chain.
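As a rough illustration of what a per-build, hash-bearing component record can look like, here is a short Python sketch loosely modeled on the CycloneDX JSON layout; the artifact path and field selection are assumptions, and a real SBOM should be produced and validated with dedicated tooling rather than assembled by hand like this.

```python
# Builds a minimal component entry carrying a cryptographic hash plus generation
# context (timestamp), in the spirit of CISA's Minimum Elements. Illustrative only;
# not a complete or schema-validated CycloneDX document.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def component_entry(path: Path, name: str, version: str) -> dict:
    return {
        "type": "library",
        "name": name,
        "version": version,
        "hashes": [{"alg": "SHA-256", "content": sha256_of(path)}],
    }

def build_sbom(components: list[dict]) -> dict:
    return {
        "bomFormat": "CycloneDX",        # illustrative; validate against the real schema
        "specVersion": "1.5",
        "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},  # generation context
        "components": components,
    }

if __name__ == "__main__":
    artifact = Path("dist/libexample-1.4.2.tar.gz")   # hypothetical build artifact
    if artifact.exists():
        sbom = build_sbom([component_entry(artifact, "libexample", "1.4.2")])
        print(json.dumps(sbom, indent=2))
```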


NIST concept paper explores identity and authorization controls for AI agents

The National Institute of Standards and Technology (NIST), through its National Cybersecurity Center of Excellence, has released a pivotal draft concept paper titled “Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization.” This document addresses the critical security gap created by the rapid emergence of “agentic” AI systems—software capable of autonomous decision-making and task execution with minimal human oversight. As these agents increasingly interact with sensitive enterprise networks, NIST argues that traditional automation scripts no longer suffice as a governance model. Instead, the paper proposes that AI agents must be recognized as distinct, identifiable entities within identity management frameworks, rather than operating under shared or anonymous credentials. The initiative explores adapting established standards like OAuth and OpenID Connect to manage the unique challenges of agent authentication and dynamic authorization, ensuring the principle of least privilege remains intact. Furthermore, the paper highlights significant risks such as prompt injection and accountability concerns, suggesting robust logging and auditing mechanisms to trace autonomous actions back to human authorities. Ultimately, NIST aims to provide a practical implementation guide that allows organizations to securely harness the power of AI agents while maintaining rigorous oversight, closing the loop between technical efficiency and enterprise security.
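A minimal sketch of the general direction, not of anything specified in the concept paper: each agent holds its own OAuth client rather than a shared credential, requests a narrowly scoped token via the client-credentials grant, and every action is logged against both the agent identity and the human who authorized it. The token endpoint, client ID, and scope below are hypothetical placeholders.

```python
import logging
import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
TOKEN_URL = "https://idp.example.com/oauth2/token"   # hypothetical identity provider

def fetch_agent_token(client_id: str, client_secret: str, scope: str) -> str:
    """Client-credentials grant: the agent authenticates as itself, with least privilege."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,                      # only what this agent actually needs
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def perform_agent_action(agent_id: str, sponsor: str, action: str, token: str) -> None:
    # Audit trail ties the autonomous action back to a distinct agent identity
    # and the human authority accountable for it.
    logging.info("agent=%s sponsor=%s action=%s token_prefix=%s",
                 agent_id, sponsor, action, token[:8])

if __name__ == "__main__":
    token = fetch_agent_token("agent-invoice-bot", "s3cr3t", "invoices.read")
    perform_agent_action("agent-invoice-bot", "alice@example.com", "list_invoices", token)
```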


Middle East Conflict Highlights Cloud Resilience Gaps

This Dark Reading article explores how recent geopolitical tensions and military actions have shattered the illusion of the cloud as a geography-independent entity. Robert Lemos details how kinetic strikes, including drone and missile attacks on Amazon Web Services (AWS) facilities in the UAE and Bahrain, have shifted data centers from cyber targets to Tier 1 strategic military objectives. These events underscore a critical flaw in current cloud architecture: while designed to withstand natural disasters, facilities are often ill-equipped for the physical destruction of modern warfare. With backup sites frequently located within a 60-mile radius of primary hubs, regional conflicts can simultaneously disable both main and redundant systems, causing permanent hardware loss and long-term operational paralysis. The piece emphasizes that industries reliant on real-time processing, such as finance and defense, face the greatest risks from these localized outages. Consequently, experts are calling for a fundamental shift in disaster recovery strategies, moving away from strict domestic data residency toward "Allied Data Sovereignty." This approach would allow critical national data to be legally backed up and hosted in allied nations during crises, ensuring that essential digital services can survive even when the physical infrastructure on the ground is compromised by kinetic warfare.


Why AI is both a curse and a blessing to open-source software - according to developers

In this ZDNET article, Steven Vaughan-Nichols explores the dual-edged impact of artificial intelligence on the open-source community. On the positive side, AI serves as a powerful "blessing" by accelerating security triage and automating tedious maintenance tasks. For instance, Mozilla successfully utilized Anthropic’s Claude to identify critical vulnerabilities in Firefox far more efficiently than traditional methods, while the Linux kernel leverages AI to streamline patch backports and CVE workflows. However, this progress is countered by a significant "curse": a deluge of "AI slop." Maintainers of projects like cURL are being overwhelmed by low-quality, AI-generated security reports that lack substance and drain volunteer resources, a phenomenon Daniel Stenberg describes as a form of DDoS attack. Furthermore, large companies like Google have been criticized for dumping minor, AI-discovered bugs on small projects without offering fixes or financial support. Ultimately, industry leaders like Linus Torvalds emphasize that while AI is an invaluable evolutionary step in coding tools, it must be used responsibly. To ensure a productive future, the open-source ecosystem requires a cultural shift where human accountability and rigorous "showing of work" remain central to the development process, preventing automated noise from drowning out genuine innovation.


When AI safety constrains defenders more than attackers

In the CSO Online article, Sharma highlights a growing imbalance in the cybersecurity landscape caused by the rigid implementation of AI safety guardrails. While major AI providers have developed sophisticated filters to prevent harmful content generation, these mechanisms often fail to differentiate between malicious intent and legitimate defensive research. Consequently, security professionals, such as red teamers and penetration testers, frequently encounter refusals when attempting to generate realistic phishing simulations or exploit code for authorized assessments. This friction creates a significant operational gap, as threat actors remain entirely unconstrained by such ethical or technical boundaries. Attackers can easily bypass restrictions using jailbroken models, locally hosted open-source alternatives, or specialized malicious tools available in underground markets. This asymmetry allows cybercriminals to industrialize attack variations while defenders struggle to validate detection rules or train employees against evolving threats. To address this disparity, the author argues for a transition toward authorization-based safety models that verify the identity and purpose of the user rather than relying solely on content-based filtering. Ultimately, for AI to truly enhance security, safety frameworks must evolve to support defensive workflows, ensuring that protective measures do not inadvertently become blind spots that benefit only the attackers.


5 tips for communicating the value of IT

In this CIO.com article, Mary K. Pratt emphasizes that IT leaders must transition from being perceived as mere cost centers to being recognized as essential business partners. To achieve this, CIOs are encouraged to proactively highlight IT’s positive impacts, ensuring that technology’s role is not taken for granted or only noticed during catastrophic system failures. A critical shift involves ditching technical jargon in favor of business-centric language that prioritizes tangible impact over raw metrics like bandwidth or latency. By utilizing key performance indicators that resonate with specific stakeholders—such as improvements in sales conversion or employee productivity—leaders can demonstrate how technology investments directly influence the organization's bottom line. Furthermore, the article suggests that IT executives sharpen their storytelling skills to translate complex technical initiatives into relatable, human-centric narratives that address specific organizational pain points. Finally, shifting the focus from simple cost-cutting to asset-building and profit-driving allows IT to frame its contributions as catalysts for top-line growth. Ultimately, by consistently marketing their successes through a clear business lens, IT leaders can successfully shake off utility-like reputations and secure their positions as strategic drivers of value and innovation in an increasingly competitive digital landscape.


5 requirements for using MCP servers to connect AI agents

The Model Context Protocol (MCP) serves as a critical standard for orchestrating communication between AI agents, assistants, and LLMs, but successful deployment requires a strategic approach focused on five key requirements. First, organizations must define a narrow, granular scope for MCP servers to prevent performance degradation and ensure reliability. Second, establishing robust integration governance is essential; this involves deciding how to pull context and enforcing least-privilege access to prevent data exfiltration. Third, security non-negotiables are vital, as MCP lacks built-in authentication; teams should implement cryptographic verification, log all interactions, and maintain human-in-the-loop oversight for sensitive tasks. Fourth, developers must not delegate data responsibilities to the protocol, as MCP is merely a connectivity layer that does not guarantee underlying data quality or safety against prompt injection. Fifth, managing the end-to-end agent experience through comprehensive observability and monitoring is necessary to track agent behavior and prevent costly, inefficient resource exploration. By addressing these operational, security, and governance boundaries, businesses can leverage MCP servers to build more complex, trustworthy agentic workflows. This framework ensures that AI ecosystems remain secure and efficient as organizations transition from experimental projects to production-ready agentic systems that require seamless, cross-platform integration.
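To illustrate the governance and observability points, here is a small, framework-agnostic Python sketch of a gateway that enforces a tool allowlist, requires human approval for sensitive tools, and logs every invocation. It does not use any real MCP SDK; the tool names and the backend callable are placeholders.

```python
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

class GovernedToolGateway:
    def __init__(self, call_tool: Callable[[str, dict], Any],
                 allowed: set[str], needs_approval: set[str]):
        self.call_tool = call_tool            # underlying tool/MCP call (assumed)
        self.allowed = allowed                # narrow, granular scope per server
        self.needs_approval = needs_approval  # human-in-the-loop for sensitive tasks

    def invoke(self, tool: str, args: dict, approved_by: str | None = None) -> Any:
        if tool not in self.allowed:
            logging.warning("blocked tool=%s args=%s", tool, args)
            raise PermissionError(f"tool '{tool}' is outside this agent's scope")
        if tool in self.needs_approval and not approved_by:
            raise PermissionError(f"tool '{tool}' requires human approval")
        logging.info("invoke tool=%s args=%s approved_by=%s", tool, args, approved_by)
        return self.call_tool(tool, args)

if __name__ == "__main__":
    fake_backend = lambda tool, args: {"tool": tool, "ok": True}   # stand-in for a real MCP call
    gateway = GovernedToolGateway(fake_backend,
                                  allowed={"search_tickets", "close_ticket"},
                                  needs_approval={"close_ticket"})
    print(gateway.invoke("search_tickets", {"query": "login failure"}))
    print(gateway.invoke("close_ticket", {"id": 42}, approved_by="ops-lead"))
```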


The limits of bubble thinking: How AI breaks every historical analogy

This VentureBeat article explores the common human tendency to view emerging technologies through the lens of past market cycles. While investors often compare the current artificial intelligence surge to the dot-com crash or the cryptocurrency craze, the author argues that these historical analogies are increasingly insufficient. This "bubble thinking" relies on instinctive pattern-matching, where people assume that because capital is rushing in and valuations are climbing, a catastrophic collapse is inevitable. However, AI possesses unique characteristics—such as its capacity for rapid self-improvement and its foundational role in transforming diverse industries—that set it apart from previous technological shifts. Unlike the speculative nature of crypto or the localized impact of early internet companies, AI is fundamentally reshaping business models and operational efficiency across the global economy. Consequently, traditional risk assessments and valuation methods may fail to capture the true scale of AI’s potential. Rather than waiting for a predictable burst, the article suggests that financial institutions and investors must adapt their strategies to account for an unprecedented paradigm shift. Ultimately, relying on outdated historical templates may lead to a fundamental misunderstanding of the transformative power and long-term trajectory of the modern AI revolution.


SIM Swaps Expose a Critical Flaw in Identity Security

SIM swap attacks represent a fundamental structural weakness in digital identity security, exploiting the industry's misplaced reliance on mobile phone numbers as trusted authentication anchors. Traditionally used for password resets and multi-factor authentication (MFA), phone numbers are easily compromised through social engineering or insider collusion at telecommunications providers, allowing criminals to seize control of a victim’s digital life. Once a number is successfully reassigned, attackers can intercept SMS-based one-time passcodes and bypass recovery safeguards to access sensitive accounts, including banking, email, and enterprise systems. The article highlights that phone numbers were originally designed for communication routing, not identity verification, making them unsuitable for high-security applications due to their portability and frequent recycling. To mitigate these risks, organizations must shift toward phishing-resistant authentication methods, such as hardware security keys and passkeys, while hardening account recovery workflows to move beyond SMS dependency. Additionally, the piece advocates for continuous identity threat detection and risk-based controls that treat identity as a dynamic signal rather than a static login event. Ultimately, the increasing scale and reliability of SIM swapping demand a significant evolution in security architecture, moving away from legacy assumptions to establish a more resilient, device-bound perimeter for modern identity protection.
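A brief, illustrative Python sketch of the risk-based, dynamic-signal approach the article advocates, with hypothetical signals and thresholds. The key point is that SMS OTP never qualifies as the strong factor, and elevated risk forces a phishing-resistant, device-bound credential.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    new_device: bool
    sim_changed_recently: bool   # e.g. carrier-reported SIM change in the last 72 hours
    sensitive_action: bool       # wire transfer, recovery change, etc.

def required_factor(signals: SessionSignals) -> str:
    risk = 0
    risk += 2 if signals.sim_changed_recently else 0
    risk += 1 if signals.new_device else 0
    risk += 2 if signals.sensitive_action else 0
    # Identity is treated as a dynamic signal, not a one-time login event:
    # any elevated risk requires a device-bound, phishing-resistant credential.
    if risk >= 2:
        return "passkey_or_hardware_key"
    return "password_plus_authenticator_app"   # SMS OTP intentionally never an option

if __name__ == "__main__":
    print(required_factor(SessionSignals(new_device=True, sim_changed_recently=True,
                                         sensitive_action=False)))
    print(required_factor(SessionSignals(new_device=False, sim_changed_recently=False,
                                         sensitive_action=False)))
```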

Daily Tech Digest - February 04, 2026


Quote for the day:

"The struggle you're in today is developing the strength you need for tomorrow." -- Elizabeth McCormick



A deep technical dive into going fully passwordless in hybrid enterprise environments

Before we can talk about passwordless authentication, we need to address what I call the “prerequisite triangle”: cloud Kerberos trust, device registration and Conditional Access policies. Skip any one of these, and your migration will stall before it gains momentum. ... Once your prerequisites are in place, you face critical architectural decisions that will shape your deployment for years to come. The primary decision point is whether to use Windows Hello for Business, FIDO2 security keys or phone sign-in as your primary authentication mechanism. ... The architectural decision also includes determining how you handle legacy applications that still require passwords. Your options are limited: implement a passwordless-compatible application gateway, deprecate the application entirely or use Entra ID’s smart lockout and password protection features to reduce risk while you transition. ... Start with a pilot group — I recommend between 50 and 200 users who are willing to accept some friction in exchange for security improvements. This group should include IT staff and security-conscious users who can provide meaningful feedback without becoming frustrated with early-stage issues. ... Recovery mechanisms deserve special attention. What happens when a user’s device is stolen? What if the TPM fails? What if they forget their PIN and can’t reach your self-service portal? Document these scenarios and test them with your help desk before full rollout. 
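One way to operationalize the pilot-group advice is to check which pilot users have already registered a passwordless credential before widening the rollout. The sketch below assumes you hold a Microsoft Graph access token with permission to read users' authentication methods; the endpoint and @odata.type values reflect my understanding of the public Graph API and should be verified against current Microsoft documentation.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
PASSWORDLESS_TYPES = {
    "#microsoft.graph.fido2AuthenticationMethod",
    "#microsoft.graph.windowsHelloForBusinessAuthenticationMethod",
}

def pilot_readiness(user_ids: list[str], token: str) -> dict[str, bool]:
    """Return, per pilot user, whether a passwordless method is already registered."""
    headers = {"Authorization": f"Bearer {token}"}
    readiness = {}
    for uid in user_ids:
        resp = requests.get(f"{GRAPH}/users/{uid}/authentication/methods",
                            headers=headers, timeout=15)
        resp.raise_for_status()
        methods = {m.get("@odata.type") for m in resp.json().get("value", [])}
        readiness[uid] = bool(methods & PASSWORDLESS_TYPES)
    return readiness

if __name__ == "__main__":
    pilot = ["user1@example.com", "user2@example.com"]      # hypothetical pilot group
    for user, ready in pilot_readiness(pilot, token="<access-token>").items():
        print(f"{user}: {'passwordless-ready' if ready else 'still password-dependent'}")
```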


When Cloud Outages Ripple Across the Internet

For consumers, these outages are often experienced as an inconvenience, such as being unable to order food, stream content, or access online services. For businesses, however, the impact is far more severe. When an airline’s booking system goes offline, lost availability translates directly into lost revenue, reputational damage, and operational disruption. These incidents highlight that cloud outages affect far more than compute or networking. One of the most critical and impactful areas is identity. When authentication and authorization are disrupted, the result is not just downtime; it is a core operational and security incident. ... Cloud providers are not identity systems. But modern identity architectures are deeply dependent on cloud-hosted infrastructure and shared services. Even when an authentication service itself remains functional, failures elsewhere in the dependency chain can render identity flows unusable. ... High availability is widely implemented and absolutely necessary, but it is often insufficient for identity systems. Most high-availability designs focus on regional failover: a primary deployment in one region with a secondary in another. If one region fails, traffic shifts to the backup. This approach breaks down when failures affect shared or global services. If identity systems in multiple regions depend on the same cloud control plane, DNS provider, or managed database service, regional failover provides little protection. In these scenarios, the backup system fails for the same reasons as the primary.
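The shared-dependency failure mode is easy to make concrete: failover only helps if the secondary region does not lean on the same global services as the primary. A small illustrative Python check follows, with hypothetical dependency names.

```python
PRIMARY = {"region": "eu-west-1",
           "depends_on": {"cloud-control-plane", "global-dns-provider", "managed-db-cluster-a"}}
SECONDARY = {"region": "eu-central-1",
             "depends_on": {"cloud-control-plane", "global-dns-provider", "managed-db-cluster-b"}}

def shared_failure_domains(primary: dict, secondary: dict) -> set:
    """Dependencies common to both deployments; if one of these fails, failover won't help."""
    return primary["depends_on"] & secondary["depends_on"]

if __name__ == "__main__":
    shared = shared_failure_domains(PRIMARY, SECONDARY)
    if shared:
        print(f"Failover from {PRIMARY['region']} to {SECONDARY['region']} is weakened by "
              f"shared dependencies: {sorted(shared)}")
    else:
        print("No shared failure domains detected between primary and secondary.")
```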


The Art of Lean Governance: Elevating Reconciliation to Primary Control for Data Risk

In today's environment of continuous data ecosystems, governance based on periodic inspection is misaligned with how data risk emerges. The central question for boards, regulators, auditors, and risk committees has shifted: Can the institution demonstrate at the moment data is used that it is accurate, complete, and controlled? Lean governance answers this question by elevating data reconciliation from a back-office cleanup activity to the primary control mechanism for data risk reduction. ... Data profiling can tell you that a value looks unusual within one system. It cannot tell you whether that value aligns with upstream sources, downstream consumers, or parallel representations elsewhere in the enterprise. ... Lean governance reframes governance as a continual process-control discipline rather than a documentation exercise. It borrows from established control theory: Quality is achieved by controlling the process, not by inspecting outputs after failures. Three principles define this approach: Data risk emerges continuously, not periodically; Controls must operate at the same cadence as data movement; and Reconciliation is the control that proves process integrity. ... Data profiling is inherently inward-looking. It evaluates distributions, ranges, patterns, and anomalies within a single dataset. This is useful for hygiene, but insufficient for assessing risk. Reconciliation is inherently relational. It validates consistency between systems, across transformations, and through the lifecycle of data.
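To contrast reconciliation with profiling in code, here is a minimal Python sketch that compares the same records across two systems and surfaces missing and mismatched entries; the system names, keys, and values are illustrative.

```python
source_system = {            # e.g. the upstream ledger
    "TXN-1001": 2500.00,
    "TXN-1002": 310.50,
    "TXN-1003": 99.99,
}
target_system = {            # e.g. the downstream risk datastore
    "TXN-1001": 2500.00,
    "TXN-1002": 301.50,      # value drifted during transformation
    # TXN-1003 never arrived
}

def reconcile(source: dict, target: dict) -> dict:
    """Relational check between systems, run at the cadence of data movement."""
    missing = sorted(set(source) - set(target))
    mismatched = sorted(k for k in source.keys() & target.keys() if source[k] != target[k])
    return {"missing_in_target": missing, "value_mismatches": mismatched}

if __name__ == "__main__":
    breaks = reconcile(source_system, target_system)
    # Each break should route to a defined owner and a triggered action, not just a dashboard.
    print(breaks)
```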


Working with Code Assistants: The Skeleton Architecture

Critical non-functional requirements, such as security, scalability, performance, and authentication, are system-wide invariants that cannot be fragmented. If every vertical slice is tasked with implementing its own authorization stack or caching strategy, the result is "Governance Drift": inconsistent security postures and massive code redundancy. This necessitates a new unifying concept: The Skeleton and The Tissue. ... The Stable Skeleton represents the rigid, immutable structures (Abstract Base Classes, Interfaces, Security Contexts) defined by the human, although possibly built by the AI. The Vertical Tissue consists of the isolated, implementation-heavy features (Concrete Classes, Business Logic) generated by the AI. This architecture draws on two classical approaches: actor models and object-oriented inversion of control. It is no surprise that some of the world’s most reliable software is written in Erlang, which utilizes actor models to maintain system stability. Similarly, in inversion of control structures, the interaction between slices is managed by abstract base classes, ensuring that concrete implementation classes depend on stable abstractions rather than the other way around. ... Prompts are soft; architecture is hard. Consequently, the developer must monitor the agent with extreme vigilance. ... To make the "Director" role scalable, we must establish "Hard Guardrails": constraints baked into the system that are physically difficult for the AI to bypass. These act as the immutable laws of the application.
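A compact Python rendering of the Skeleton-and-Tissue split, with illustrative names: the abstract base class and security context form the stable, human-owned skeleton, and only the concrete handler plays the role of AI-generated tissue.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityContext:            # part of the skeleton: every slice shares one model
    user_id: str
    roles: frozenset

class VerticalSlice(ABC):
    """Stable skeleton: authorization and auditing live here, not in each slice."""
    required_role: str = "user"

    def execute(self, ctx: SecurityContext, payload: dict) -> dict:
        if self.required_role not in ctx.roles:          # hard guardrail the tissue cannot bypass
            raise PermissionError(f"{ctx.user_id} lacks role '{self.required_role}'")
        result = self.handle(payload)                     # only this part is AI-generated
        return {"by": ctx.user_id, "result": result}

    @abstractmethod
    def handle(self, payload: dict) -> dict:
        ...

class RefundSlice(VerticalSlice):                         # vertical tissue: concrete business logic
    required_role = "support_agent"

    def handle(self, payload: dict) -> dict:
        return {"refunded": round(payload["amount"], 2)}

if __name__ == "__main__":
    ctx = SecurityContext("agent-7", frozenset({"support_agent"}))
    print(RefundSlice().execute(ctx, {"amount": 42.5}))
```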


8-Minute Access: AI Accelerates Breach of AWS Environment

A threat actor gained initial access to the environment via credentials discovered in public Simple Storage Service (S3) buckets and then quickly escalated privileges during the attack, which moved laterally across 19 unique AWS principals, the Sysdig Threat Research Team (TRT) revealed in a report published Tuesday. ... While the speed and apparent use of AI were among the most notable aspects of the attack, the researchers also called out the way that the attacker accessed exposed credentials as a cautionary tale for organizations with cloud environments. Indeed, stolen credentials are often an attacker's initial access point to attack a cloud environment. "Leaving access keys in public buckets is a huge mistake," the researchers wrote. "Organizations should prefer IAM roles instead, which use temporary credentials. If they really want to leverage IAM users with long-term credentials, they should secure them and implement a periodic rotation." Moreover, the affected S3 buckets were named using common AI tool naming conventions, they noted. The attackers actively searched for these conventions during reconnaissance, enabling them to find the credentials quite easily, they said. ... During this privilege-escalation part of the attack — which took a mere eight minutes — the actor wrote code in Serbian, suggesting their origin. Moreover, the use of comments, comprehensive exception handling, and the speed at which the script was written "strongly suggests LLM generation," the researchers wrote.
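As a sketch of the researchers' recommendation to prefer IAM roles and temporary credentials over long-term keys, the following assumes boto3 is installed and that the caller is allowed to assume a hypothetical read-only role; the role ARN and session name are placeholders.

```python
import boto3

ROLE_ARN = "arn:aws:iam::123456789012:role/ci-read-only"   # hypothetical role

def short_lived_session(role_arn: str = ROLE_ARN):
    """Obtain 15-minute credentials from STS instead of embedding long-term access keys."""
    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="digest-example",
        DurationSeconds=900,          # short-lived; nothing worth leaving in an S3 bucket
    )
    creds = resp["Credentials"]
    return boto3.session.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

if __name__ == "__main__":
    session = short_lived_session()
    # The temporary credentials expire automatically, limiting the blast radius of a leak.
    print([b["Name"] for b in session.client("s3").list_buckets()["Buckets"]])
```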


Ask the Experts: The cloud cost reckoning

According to the 2025 Azul CIO Cloud Trends Survey & Report, 83% of the 300 CIOs surveyed are spending an average of 30% more than what they had anticipated for cloud infrastructure and applications; 43% said their CEOs or boards of directors had concerns about cloud spend. Moreover, 13% of surveyed CIOs said their infrastructure and application costs increased with their cloud deployments, and 7% said they saw no savings at all. Other surveys show CIOs are rethinking their cloud strategies, with "repatriation" -- moving workloads from the cloud back to on-premises -- emerging as a viable option due to mounting costs. ... "At Laserfiche we still have a hybrid environment. So we still have a colocation facility, where we house a lot of our compute equipment. And of course, because of that, we need a DR site because you never want to put all your eggs in that one colo. We also have a lot of SaaS services. We're in a hyperscaler environment for Laserfiche cloud. "But the reason why we do both is because it actually costs us less money to run our own compute in a data center colo environment than it does to be all in on cloud." ... "The primary reason why the [cloud] costs have been increasing is because our use of cloud services has become much more sophisticated and much more integrated. "But another reason cloud consumption has increased is we're not as diligent in managing our cloud resources in provisioning and maintaining."


NIST develops playbook for online use cases of digital credentials in financial services

The objective is to develop what a panel description calls a “playbook of standards and best practices that all parties can use to set a high bar for privacy and security.” “We really wanted to be able to understand, what does it actually take for an organization to implement this stuff? How does it fit into workflows? And then start to think as well about what are the benefits to these organizations and to individuals.” “The question became, what was the best online use case?” Galuzzo says. “At which point our colleagues in Treasury kind of said, hey, our online banking customer identification program, how do we make that both more usable and more secure at the same time? And it seemed like a really nice fit. So that brought us to both the kind of scope of what we’re focused on, those online components, and the specific use case of financial services as well.” ... The model, he says, “should allow you to engage remotely, to not have to worry about showing up in person to your closest branch, should allow for a reduction in human error from our side and should allow for reduction in fraud and concern over forged documents.” It should also serve to fulfil the bank’s KYC and related compliance requirements. Beyond the bank, the major objective with mDLs remains getting people to use them. The AAMVA’s Maru points to his agency’s digital trust service, and to its efforts in outreach and education – which are as important in driving adoption as anything on the technical side. 


Designing for the unknown: How flexibility is reshaping data center design

Rapid advances in compute architectures – particularly GPUs and AI-oriented systems – are compressing technology cycles faster than many design and delivery processes can respond. In response, flexibility has shifted from a desirable feature to the core principle of successful data center design. This evolution is reshaping how we think about structure, power distribution, equipment procurement, spatial layout, and long-term operability. ... From a design perspective, this means planning for change across several layers: Structural systems that can accommodate higher equipment loads without reinforcement; Spatial layouts that allow reconfiguration of white space and service zones; and Distribution pathways that support future modifications without disrupting live operations. The objective is not to overbuild for every possible scenario, but to provide a framework that can absorb change efficiently and economically. ... Another emerging challenge is equipment lead time. While delivery periods vary by system, generators can now carry lead times approaching 12 months, particularly for higher capacities, while other major infrastructure components – including transformers, UPS modules, and switchgear – typically fall within the 30- to 40-week range. Delays in securing these items can introduce significant risk when procurement decisions are deferred until late in the design cycle.


Onboarding new AI hires calls for context engineering - here's your 3-step action plan

In the AI world, the institutional knowledge is called context. AI agents are the new rockstar employees. You can onboard them in minutes, not months. And the more context that you can provide them with, the better they can perform. Now, when you hear reports that AI agents perform better when they have accurate data, think more broadly than customer data. The data that AI needs to do the job effectively also includes the data that describes the institutional knowledge: context. ... Your employees are good at interpreting it and filling in the gaps using their judgment and applying institutional knowledge. AI agents can now parse unstructured data, but are not as good at applying judgment when there are conflicts, nuances, ambiguity, or omissions. This is why we get hallucinations. ... The process maps provide visibility into manual activities between applications or within applications. The accuracy and completeness of the documented process diagrams vary wildly. Front-office processes are generally very poor. Back-office processes in regulated industries are typically very good. And to exploit the power of AI agents, organizations need to streamline them and optimize their business processes. This has sparked a process reengineering revolution that mirrors the one in the 1990s. This time around, the level of detail required by AI agents is higher than for humans.


Q&A: How Can Trust be Built in Open Source Security?

The security industry has already seen examples in 2025 of bad actors deploying AI in cyberattacks – I’m concerned that 2026 could bring a Heartbleed- or Log4Shell-style incident involving AI. The pace at which these tools operate may outstrip the ability of defenders to keep up in real time. Another focus for the year ahead: how the Cyber Resilience Act (CRA) will begin to reshape global compliance expectations. Starting in September 2026, manufacturers and open source maintainers must report exploited vulnerabilities and breaches to the EU. This is another step closer to CRA enforcement, and other countries like Japan, India and Korea are exploring similar legislation. ... The human side of security should really be addressed just as urgently as the technical side. The way forward involves education, tooling and cultural change. Resilient human defences start with education. Courses from the Linux Foundation like Developing Secure Software and Secure AI/ML-Driven Software Development equip users with the mindset and skills to make better decisions in an AI-enhanced world. Beyond formal training, reinforcing awareness and creating a vigilant community is critical. The goal is to embed security into culture and processes so that it’s not easily overlooked when new technology or tools roll around. ... Maintainers and the community projects they lead are struggling without support from those that use their software.