Daily Tech Digest - March 07, 2026


Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.



LangChain's CEO argues that better models alone won't get your AI agent to production

LangChain CEO Harrison Chase contends that achieving production-ready AI agents requires more than just utilizing more powerful foundational models. While improved LLMs offer better reasoning, Chase emphasizes that agents often fail due to systemic issues rather than model limitations. He advocates for a shift toward "agentic" engineering, where the focus moves from simple prompting to building robust, stateful systems. A critical component of this transition is the move away from "vibe-based" development—relying on subjective successes—toward rigorous evaluation frameworks like LangSmith. Chase highlights that developers must implement precise control over an agent's logic through tools like LangGraph, which allows for cycles, state management, and human-in-the-loop interactions. These architectural guardrails are essential for managing the inherent unpredictability of LLMs. By treating agent development as a complex systems engineering task, organizations can overcome the "last mile" hurdle, moving beyond impressive demos to reliable, autonomous applications. Ultimately, the maturity of AI agents depends on sophisticated orchestration, detailed observability, and a willingness to architect the environment in which the model operates, rather than expecting a single model to handle every nuance of a complex workflow autonomously.
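
As a rough illustration of the pattern the article attributes to tools like LangGraph (explicit state, a bounded cycle, and a human-in-the-loop gate before risky actions), here is a minimal framework-agnostic sketch; all function names are hypothetical placeholders, and no LangGraph API is assumed:

```python
# Sketch of the "agentic guardrails" pattern: explicit state, a bounded loop,
# and a human approval gate. Names (plan_step, is_risky) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # audit trail of every step
    done: bool = False

def plan_step(state: AgentState) -> str:
    """Stand-in for an LLM call that proposes the next action."""
    return f"action for: {state.goal}"

def is_risky(action: str) -> bool:
    """Policy check: which proposed actions require human sign-off."""
    return "delete" in action or "send" in action

def run_agent(state: AgentState, max_steps: int = 5) -> AgentState:
    for _ in range(max_steps):          # bounded cycle, not an open loop
        if state.done:
            break
        action = plan_step(state)
        if is_risky(action):            # human-in-the-loop gate
            if input(f"Approve '{action}'? [y/N] ").lower() != "y":
                state.history.append(("rejected", action))
                continue
        state.history.append(("executed", action))
        state.done = True               # a real agent would re-evaluate here
    return state
```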

This article examines the false sense of security provided by multi-factor authentication (MFA) within Windows-centric environments. While MFA is highly effective for cloud-based applications, the piece argues that traditional Active Directory (AD) authentication paths—such as interactive logons, Remote Desktop Protocol (RDP) sessions, and Server Message Block (SMB) traffic—often bypass modern identity providers, leaving internal networks vulnerable to password-only attacks. The article details seven critical gaps, including the persistence of legacy NTLM protocols susceptible to pass-the-hash attacks, the abuse of Kerberos tickets, and the risks posed by unmonitored service accounts or local administrator credentials that frequently lack MFA coverage. To mitigate these significant risks, the author recommends that organizations treat Windows authentication as a distinct security surface by enforcing longer passphrases, continuously blocking compromised passwords, and strictly limiting legacy protocols. Furthermore, the text highlights the importance of auditing service accounts and leveraging advanced security tools like Specops Password Policy to bridge the gap between cloud security and on-premises infrastructure. Ultimately, securing a modern enterprise requires moving beyond simple MFA implementation toward a holistic strategy that addresses these often-overlooked internal authentication vulnerabilities and credential reuse habits.
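
The "continuously blocking compromised passwords" recommendation can be approximated with the public Pwned Passwords k-anonymity API, under which only the first five hex characters of a password's SHA-1 hash ever leave your network; a minimal sketch (requires the third-party requests package):

```python
# Check a candidate password against known breach corpora via the
# k-anonymity range endpoint: send a 5-char hash prefix, match locally.
import hashlib
import requests

def is_breached(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Response is lines of "HASH_SUFFIX:COUNT" for every hash with that prefix.
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if __name__ == "__main__":
    print(is_breached("Password123!"))  # almost certainly True
```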


Why enterprises are still bad at multicloud

In this InfoWorld analysis, David Linthicum argues that while most enterprises are technically multicloud by default, they largely fail to operate them as a cohesive business capability. Instead of a unified strategy, multicloud environments often emerge haphazardly through mergers, acquisitions, or localized team decisions, leading to fragmented "technology estates" that function as isolated silos. Each provider—typically AWS, Azure, and Google—is managed with its own native consoles, security protocols, and talent pools, which creates redundant processes, inconsistent governance, and hidden global costs. Linthicum emphasizes that the "complexity tax" of multicloud is only worth paying if organizations can achieve operational commonality. He advocates for the implementation of common control planes—shared services for identity, policy, and observability—that sit above individual cloud brands to ensure consistent guardrails. To improve maturity, enterprises must shift from viewing cloud adoption as a series of procurement choices to designing a singular operating model. By establishing cross-cloud coordination and relentlessly measuring business value through metrics like recovery speed and unit economics, organizations can move from uncontrolled variety to "controlled optionality," finally leveraging the specialized strengths of different providers without multiplying their operational overhead or fracturing their technical foundations.


The Accidental Orchestrator

This article by O'Reilly Radar examines the profound transformation of the software developer's role in the era of generative AI. It posits that developers are transitioning from traditional manual coding to becoming strategic orchestrators of autonomous AI agents. This shift, described as "accidental," occurred as AI tools evolved from simple autocomplete plugins into sophisticated assistants capable of managing complex, end-to-end tasks. Developers now find themselves overseeing a fleet of agents that handle various components of the software lifecycle, including design, implementation, and debugging. This new reality demands a significant pivot in professional skills; instead of focusing primarily on syntax and logic, engineers must now master prompt engineering, agent coordination, and high-level system architecture. The piece emphasizes that while AI significantly boosts productivity, the complexity of managing these interlinked systems introduces critical challenges regarding transparency, security, and long-term reliability. Ultimately, the role of the accidental orchestrator requires a mindset shift where the developer acts as a tactical director of digital workers rather than a lone creator. This evolution suggests that the future of software engineering lies in the quality of the human-AI partnership and the effective orchestration of intelligent agents.


Powering the new age of AI-led engineering in IT at Microsoft

Microsoft Digital is spearheading a transformative shift toward AI-led engineering, fundamentally changing how IT services are designed, built, and maintained. At the heart of this evolution is the integration of GitHub Copilot and other generative AI tools, which empower developers to automate repetitive "toil" and focus on high-value architectural innovation. By adopting a platform-centric approach, Microsoft standardizes development environments and leverages AI to enhance security, catch bugs earlier, and optimize code quality through sophisticated semantic searches and automated testing. This transition moves beyond simply using AI tools to a holistic culture where AI is woven into the entire software development lifecycle. Key benefits include significantly accelerated deployment cycles, improved developer satisfaction, and a more resilient IT infrastructure. Furthermore, the initiative prioritizes security and compliance by embedding AI-driven checks directly into the engineering pipeline. As Microsoft refines these internal practices, it aims to provide a blueprint for the industry on how to scale enterprise IT operations in an increasingly complex digital landscape. Ultimately, AI-led engineering at Microsoft is not just about speed; it is about fostering a creative environment where engineers solve complex problems with unprecedented efficiency, driving a new standard for modern software development.


Read-Copy-Update (RCU): The Secret to Lock-Free Performance

Read-Copy-Update (RCU) is a sophisticated synchronization mechanism explored in this InfoQ article, primarily utilized within the Linux kernel to handle concurrent data access. Unlike traditional locking methods that can cause significant performance bottlenecks, RCU allows multiple readers to access shared data simultaneously without the overhead of locks or atomic operations. The core concept involves updaters creating a modified copy of the data and then swapping the pointer to the new version, while ensuring that the original data is only reclaimed after a "grace period" when all active readers have finished. This approach ensures that readers always see a consistent, albeit potentially slightly outdated, version of the data without ever being blocked. While RCU offers unparalleled scalability and performance for read-heavy workloads, the article emphasizes that it introduces complexity for developers, particularly regarding memory management and the coordination of update cycles. Updaters must carefully manage the transition between versions to avoid data corruption. Ultimately, RCU represents a fundamental shift in concurrency design, prioritizing reader efficiency at the cost of more intricate update logic, making it an essential tool for high-performance systems where read operations vastly outnumber modifications.
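
As a toy illustration of the sequence described above (readers dereference a shared pointer without locks; the updater copies, modifies, swaps the pointer, then waits out a grace period before reclaiming the old version), here is a deliberately simplified Python sketch; real kernel RCU tracks reader quiescent states rather than sleeping, and relies on the garbage collector for nothing:

```python
# Toy RCU: readers never block; the updater performs read-copy-update
# via an atomic reference swap, then waits a stand-in "grace period".
import threading
import time

shared = {"config": "v1"}       # current version, reached via one reference

def reader() -> None:
    snapshot = shared           # single reference read; never blocks
    print("reader sees:", snapshot["config"])

def updater(new_value: str) -> None:
    global shared
    new_version = dict(shared)  # read-copy...
    new_version["config"] = new_value
    shared = new_version        # ...update: atomic pointer (reference) swap
    time.sleep(0.1)             # grace period stand-in: let old readers finish
    # old version is now unreachable; Python's GC reclaims it for us

readers = [threading.Thread(target=reader) for _ in range(3)]
for t in readers:
    t.start()
updater("v2")
for t in readers:
    t.join()
```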


AI transforms ‘dangling DNS’ into automated data exfiltration pipeline

AI-driven automation is fundamentally transforming "dangling DNS" from a common administrative oversight into a sophisticated, high-speed pipeline for automated data exfiltration. Dangling DNS occurs when a Domain Name System record continues to point to a decommissioned cloud resource, such as an abandoned IP address or a deleted storage bucket. While this vulnerability has existed for years, attackers are now utilizing generative AI and advanced scanning scripts to identify these orphaned subdomains across the internet at an unprecedented scale. Once a target is located, AI agents can automatically reclaim the abandoned resource on cloud platforms like AWS or Azure, effectively hijacking the legitimate domain to intercept sensitive traffic, harvest user credentials, or distribute malware through prompt injection attacks. This evolution represents a shift from opportunistic manual exploitation to a systematic, machine-led attack surface management strategy. To counter this, security professionals must move beyond periodic audits, implementing continuous, automated DNS monitoring and lifecycle management. The article underscores that as threat actors leverage AI to weaponize legacy misconfigurations, organizations can no longer afford to leave DNS records unmanaged. Addressing this infrastructure is a critical component of modern cyber defense, requiring the same level of automation that attackers currently use to exploit it.
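
A rough sketch of the continuous monitoring the article calls for: scan your own zone inventory for CNAME records whose targets no longer resolve, one classic precondition for subdomain takeover. This requires the dnspython package, and NXDOMAIN is only one signal; an unclaimed storage bucket, for instance, may still resolve while being claimable:

```python
# Flag CNAME records whose targets no longer exist in DNS.
import dns.resolver

def dangling_cname(subdomain: str) -> bool:
    try:
        answer = dns.resolver.resolve(subdomain, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False                       # no CNAME record to dangle
    target = str(answer[0].target).rstrip(".")
    try:
        dns.resolver.resolve(target, "A")  # does the target still exist?
        return False
    except dns.resolver.NXDOMAIN:
        return True                        # CNAME points into the void

# hypothetical zone inventory
for name in ["app.example.com", "old-blog.example.com"]:
    if dangling_cname(name):
        print("ALERT: possible dangling record:", name)
```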


The New Calculus of Risk: Where AI Speed Meets Human Expertise

The article examines the launch of Crisis24 Horizon, a sophisticated AI-enabled risk management platform designed to address the complexities of a volatile global security landscape. Developed on a modern technology stack, the platform provides a unified "single pane of glass" view, integrating dynamic intelligence with travel, people, and site-specific risk management. By leveraging artificial intelligence to process roughly 20,000 potential incidents daily, Crisis24 Horizon dramatically accelerates threat detection and triage, effectively expanding the capacity of security teams. Key features include "Ask Horizon," a natural language interface for querying risk data; "Latest Event Synopsis," which consolidates fragmented alerts into coherent summaries; and integrated mass notification systems for critical event response. While AI handles massive data aggregation and initial filtering, the platform emphasizes the "human in the loop" approach, where expert analysts provide necessary contextual judgment for high-stakes decisions like emergency evacuations. This synergy of AI speed and human expertise marks a shift from reactive to anticipatory security, allowing organizations to monitor assets in real-time and safeguard operations against interconnected global threats. Ultimately, Crisis24 Horizon empowers leaders to mitigate risks with greater precision, ensuring operational resilience and employee safety amidst geopolitical instability and environmental disasters.


Accelerating AI, cloud, and automation for global competitiveness in 2026

The guest blog post by Pavan Chidella argues that by 2026, the global competitiveness of enterprises will be defined by their ability to transition from AI experimentation to large-scale, disciplined execution. Focusing primarily on the healthcare sector, the author illustrates how the orchestration of AI, cloud-native architectures, and intelligent automation is essential for modernizing legacy processes like claims adjudication, which traditionally suffer from structural latency. In this evolving landscape, technology is no longer an isolated tool but a strategic driver of measurable business outcomes, including improved operational efficiency and enhanced customer transparency. Chidella emphasizes that "responsible acceleration" requires embedding governance, ethical AI monitoring, and regulatory compliance directly into system designs rather than treating them as afterthoughts. By adopting a product-led engineering mindset, organizations can reduce friction and build trust within their ecosystems. Ultimately, the piece asserts that global leadership in 2026 will belong to those who successfully integrate speed and precision with accountability, effectively leveraging hybrid cloud capabilities to process data in real-time. This shift represents a broader competitive imperative to move beyond proof-of-concept stages toward a resilient, automated, and digitally mature infrastructure that can thrive amidst increasing global complexity and regulatory scrutiny.


Engineering for AI intensity: The new blueprint for high-density data centers

This article explores the critical infrastructure evolution required to support the escalating demands of artificial intelligence. As traditional data centers struggle with the unprecedented power and thermal requirements of GPU-heavy workloads, a new engineering paradigm is emerging. This blueprint emphasizes a radical transition from legacy air-cooling systems to advanced liquid cooling technologies, such as direct-to-chip and immersion cooling, which are essential for managing rack densities that now frequently exceed 50kW and can reach 100kW per cabinet. Beyond thermal management, the article highlights the necessity of modular, high-voltage power distribution to ensure electrical efficiency and minimize transmission losses across the facility. It also underscores the importance of structural adaptations, including reinforced flooring to support heavier liquid-cooled hardware and overhead cable management to optimize airflow. Furthermore, the blueprint advocates for high-bandwidth, low-latency networking fabrics to facilitate the massive data exchanges inherent in parallel AI training. Ultimately, the piece argues that achieving AI intensity requires a holistic, future-proof design strategy that integrates power scalability, structural flexibility, and sustainable practices, positioning the modern data center as the strategic engine for digital transformation in an AI-first era.
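
The arithmetic behind the air-to-liquid transition can be sanity-checked with the standard sensible-heat rule of thumb, CFM ≈ 3.16 × watts / ΔT(°F); a quick calculation under an assumed 20°F supply-to-return temperature delta:

```python
# Back-of-the-envelope airflow needed to air-cool a rack at a given power.
def required_cfm(rack_watts: float, delta_t_f: float = 20.0) -> float:
    return 3.16 * rack_watts / delta_t_f

for kw in (10, 50, 100):
    print(f"{kw:>3} kW rack -> ~{required_cfm(kw * 1000):,.0f} CFM of airflow")
# 10 kW is manageable; 50-100 kW demands per-cabinet airflow volumes that are
# impractical, which is why direct-to-chip and immersion cooling take over.
```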


Daily Tech Digest - February 27, 2026


Quote for the day:

"The best leaders build teams that don’t rely on them. That’s true excellence." -- Gordon Tredgold



Ransomware groups switch to stealthy attacks and long-term access

“Ransomware groups no longer treat vulnerabilities as isolated entry points,” says Aviral Verma, lead threat intelligence analyst at penetration testing and cybersecurity services firm Securin. “They assemble them into deliberate exploitation chains, selecting weaknesses not just for severity, but for how effectively they can collapse trust, persistence, and operational control across entire platforms.” AI is now widely accessible to threat actors, but it primarily functions as a force multiplier rather than a driving force in ransomware attacks. ... Vasileios Mourtzinos, a member of the threat team at managed detection and response firm Quorum Cyber, says that more groups are moving away from high-impact encryption towards extortion-led models that prioritize data theft and prolonged, low-noise access. “This approach, popularized by actors such as Cl0p through large-scale exploitation of third-party and supply chain vulnerabilities, is now being mirrored more widely, alongside increased abuse of valid accounts, legitimate administrative tools to blend into normal activity, and in some cases attempts to recruit or incentivize insiders to facilitate access,” Mourtzinos says. ... “For CISOs, the priority should be strengthening identity controls, closely monitoring trusted applications and third-party integrations, and ensuring detection strategies focus on persistence and data exfiltration activity,” Mourtzinos advises.


Expert Maps Identity Risk and Multi-Cloud Complexity to Evolving Cloud Threats

Cavalancia began by noting that cloud adoption has fundamentally altered traditional security boundaries. With 88 percent of organizations now operating in hybrid or multi-cloud environments, the hardened network edge is no longer the primary control point. Instead, identity and privilege determine access across distributed systems. ... Discussing identity risk specifically, he underscored how central privilege is to modern attacks, saying, "If you don't have identity, you don't have privilege, and if you don't have privilege, you don't have a threat." Excessive permissions and credential abuse create privilege escalation paths once access is obtained. ... Reducing exploitable attack paths requires prioritizing risk based on business impact. Rather than attempting to address every vulnerability equally, organizations should identify which exposures would cause the greatest operational or financial harm and focus there first. ... Looking ahead, Cavalancia argued that security must be built around continuous monitoring and identity-first principles. "Continuous monitoring, continuous validation, continuous improvement, maybe we should just have the word continuous here," he said. He also cautioned that AI-assisted attacks are already influencing the threat landscape, noting that "90% of the decisions being made by that attack were done solely by AI, no human intervention whatsoever."


Data Centers in Space: Pi in the Sky or AI Hallucination?

Space is a great place for data centers because it solves one of the biggest problems with locating data centers on Earth: power, argues Google’s Senior Director of Paradigms of Intelligence, Travis Beals. ... SpaceX is also on board with the idea of data centers in space. Last month, it filed a request with the Federal Communications Commission to launch a constellation of up to one million solar-powered satellites that it said will serve as data centers for artificial intelligence. ... “Data centers in space can access solar power 24/7 in certain ‘sun-synchronous’ orbits, giving them all the power they need to operate without putting immense strain on power grids here on Earth,” Scherer told TechNewsWorld. “This would alleviate concerns about consumers having to bear the costs of higher energy use.” “There is also less risk of running out of real estate in space, no complex permitting requirements, and no community pushback to new data centers being built in people’s backyards,” he added. ... “By some estimates, energy and land costs are only around 25% of the total cost for a data center,” Yoon told TechNewsWorld. “AI hardware is the real cost driver, and shifting to space only makes that hardware more expensive.” “Hardware cannot be repaired or upgraded at scale in space,” he explained. “Maintaining satellites is extremely hard, especially if you have hundreds of thousands of them. Maintaining a traditional data center is extremely easy.”


Centralized Security Can't Scale. It's Time to Embrace Federation

In a federated model, the organization recognizes that technology leaders, whether across security, IT, or engineering, have a deep understanding of the nuances of their assigned units. Their specialized knowledge helps them set strategies that match the goals, technologies, workflows, and risks they manage. That in turn leads to benefits that a centralized security authority can't touch. To start with, security decisions happen faster when the people making them are closer to the action. Service and application owners already have the context and expertise to make the right calls within their scopes. Delegated authority allows companies to seize market opportunities faster, deploy new tools more easily, manage fewer escalations, and reduce friction and delays. ... In practice, that might look like a CISO setting data classification standards, while partner teams take responsibility for implementing these standards via low-friction policies and capabilities at the source of record for the data. Netflix's security team figured this out early. Their "Paved Roads" philosophy offers a collection of secure options that meet corporate guidelines while being the easiest for developers to use. In other words, less saying no, more offering a secure path forward. Outside of engineering, organization-wide standards also need to provide flexibility and avoid becoming overly specific or too narrow.


Linux explores new way of authenticating developers and their code - here's how it works

Today, kernel maintainers who want a kernel.org account must find someone already in the PGP web of trust, meet them face‑to‑face, show government ID, and get their key signed. ... the kernel maintainers are working to replace this fragile PGP key‑signing web of trust with a decentralized, privacy‑preserving identity layer that can vouch for both developers and the code they sign. ... Linux ID is meant to give the kernel community a more flexible way to prove who people are, and who they're not, without falling back on brittle key‑signing parties or ad‑hoc video calls. ... At the core of Linux ID is a set of cryptographic "proofs of personhood" built on modern digital identity standards rather than traditional PGP key signing. Instead of a single monolithic web of trust, the system issues and exchanges personhood credentials and verifiable credentials that assert things like "this person is a real individual," "this person is employed by company X," or "this Linux maintainer has met this person and recognized them as a kernel maintainer." ... Technically, Linux ID is built around decentralized identifiers (DIDs). This is a W3C‑style mechanism for creating globally unique IDs and attaching public keys and service endpoints to them. Developers create DIDs, potentially using existing Curve25519‑based keys from today's PGP world, and publish DID documents via secure channels such as HTTPS‑based "did:web" endpoints that expose their public key infrastructure and where to send encrypted messages.
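
The did:web mapping mentioned above is mechanical: the identifier transforms into the HTTPS URL where the DID document (public keys, service endpoints) is published. A short sketch following the W3C did:web method specification, with port percent-encoding omitted for brevity:

```python
# Resolve a did:web identifier to the URL of its DID document.
def did_web_to_url(did: str) -> str:
    assert did.startswith("did:web:"), "not a did:web identifier"
    parts = did[len("did:web:"):].split(":")
    host, path = parts[0], parts[1:]
    if path:  # path segments map to directories under the domain
        return f"https://{host}/{'/'.join(path)}/did.json"
    return f"https://{host}/.well-known/did.json"

print(did_web_to_url("did:web:example.com"))
# -> https://example.com/.well-known/did.json
print(did_web_to_url("did:web:example.com:maintainers:alice"))
# -> https://example.com/maintainers/alice/did.json
```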


IT hiring is under relentless pressure. Here's how leaders are responding

The CIO's relationship with the chief human resources officer (CHRO) matters greatly, though historically, they've viewed recruitment through different lenses. HR professionals tend not to be technologists, so their approach to hiring tends to be generic. Conversely, IT leaders aren't HR professionals. Many of them were promoted to management or executive roles for their expert technical skills, not their managerial or people skills. ... The multigenerational workforce can be frustrating for everyone at times, simply because employees' lives and work experiences can be so different. While not all individuals in a demographic group are homogeneous, at a 30,000-foot view, Gen Z wants to work on interesting and innovative projects -- things that matter on a greater scale, such as climate change. They also expect more rapid advancement than previous generations, such as being promoted to a management role after a year or two versus five or seven years, for example. ... Most organizational leaders will tell you their companies have great cultures, but not all their employees would likely agree. Cultural decisions made behind closed doors by a few for the many tend to fail because too many assumptions are made, and not enough hypotheses tested. "Seeing how your job helps the company move forward has been a point of opacity for a long time, and after a certain point, it's like, 'Why am I still here?'" Skillsoft's Daly said.


Generative AI has ushered in a new era of fraud, say reports from Plaid, SEON

“Generative AI has lowered the barrier to creating fake personas, falsifying documents, and impersonating real people at scale,” says a new report from Plaid, “Rethinking fraud in the AI era.” “As a result, fraud losses are projected to reach $40 billion globally within the next few years, driven in large part by AI-enabled attacks.” The warning is familiar. What’s different about Plaid’s approach to the problem is “network insights” – “each person’s unique behavioral footprint across the broader financial and app ecosystem,” understood as a system of relationships and long-standing patterns. In these combined signals, the company says, can be found “a resilient, high-signal lens into intent, risk and legitimacy.” ... “The industry is overdue for its next wave of fraud-fighting innovation,” the report says. “The question is not whether change is needed, but what unique combination of data, insights, and analytics can meet this moment.” The AI era needs its weapon of choice, and it needs to work continuously. “AI driven fraud is exposing the limits of identity controls that were designed for point in time verification rather than continuous assurance,” says Sam Abadir, research director for risk, financial (crime & compliance) at IDC, as quoted in the Plaid report. ... The overarching message is that “AI is real, embedded and widely trusted, but it has not materially reduced the scope of fraud and AML operations.” Fraud continues to scale, enabled by the same AI boom.


The hidden cost of AI adoption: Why most companies overestimate readiness

Walk into enough leadership meetings and you’ll hear the same story told with different accents: “We need AI.” It shows up in board decks, annual strategy documents and that one slide with a hockey-stick curve that magically turns pilot into profit. ... When I talk about the hidden cost of AI adoption, I’m not talking about model pricing or vendor fees. Those are visible and negotiable. The real cost lives in the messy middle: data foundations, integration work, operating model changes, governance, security, compliance and the ongoing effort required to keep AI useful after the demo fades. ... If I had to summarize AI readiness in one sentence, it would be this: AI readiness is your organization’s ability to repeatedly take a business problem, turn it into a well-defined decision or workflow, feed it trustworthy data and ship a solution you can monitor, audit and improve. ... Having data is not the same as having usable data. AI systems amplify quality problems at scale. Until proven otherwise, “we already have the data” usually means duplicated records, inconsistent definitions, missing fields, sensitive data in the wrong places and unclear ownership. ... If it adds friction or produces unreliable outputs, adoption collapses fast. Vendor risk doesn’t disappear either. Pricing changes. Usage spikes. Workflows become coupled to tools you don’t fully control. Without internal ownership, you’re not building capability, you’re renting it.


Overcoming Security Challenges in Remote Energy Operations

The security landscape for remote facilities has shifted "dramatically," and energy providers can no longer rely on isolation for protection, said Nir Ayalon, founder and CEO of Cydome, a maritime and critical infrastructure cybersecurity firm. "These sites are just as exposed as a corporate office - but with far more complex operational challenges," Ayalon said. ... A recent PES Wind report by Cyber Energia found that only 1% of 11,000 wind assets worldwide have adequate cyber protection, while U.K.-based renewable assets face up to 1,000 attempted cyberattacks daily. Trustwave SpiderLabs also reported an 80% rise in ransomware attacks on energy and utilities in 2025, with average costs exceeding $5 million. Ransomware is the most common form of attack. ... Protecting offshore facilities is also costly and a major challenge. Sending a technician for on-site installation can run up to $200,000, including vessel rental. Ayalon said most sites lack specialized IT staff. The person managing the hardware is usually an operator or engineer and not necessarily a certified cybersecurity professional. Limited space for racks and equipment, as well as poor bandwidth poses major challenges, said Rick Kaun, global director of cybersecurity services at Rockwell Automation. ... Designing secure offshore energy systems and shipping vessels is no longer a choice but a necessity. Cybersecurity can't be an afterthought, said Guy Platten, secretary general of the International Chamber of Shipping.


How the CISO’s Role is Evolving From Technologist to Chief Educator

Regardless of structure, modern CISOs are embedded in executive decision-making, legal strategy and supply chain oversight. Their responsibilities have expanded from managing technical defenses to maintaining dynamic risk portfolios, where trade-offs must be weighed across business functions. Stakeholders now include regulators, customers and strategic partners, not just internal IT teams. ... Effective leaders accumulate knowledge and know when to go deep and when to delegate, ensuring subject-matter experts are empowered while key decisions remain aligned to business outcomes. This blend of technical insight and strategic judgment defines the CISO’s value in complex environments. ... As security becomes more embedded in daily operations, cultural leadership plays a defining role in long-term resilience. A positive cybersecurity culture is proactive and free from blame, creating an environment where employees feel safe to speak up and suggest improvements without fear of repercussions. This shift leads to earlier detection, better mitigation and stronger overall security posture. Teams asking for security input during the design phase and employees self-reporting suspicious activity signal a mature culture that understands protection is everyone’s job. ... The modern CISO operates at the intersection of technology, risk, leadership and influence. Leaders must navigate shifting business priorities and complex stakeholder relationships while building a strong security culture across the enterprise.

Daily Tech Digest - February 26, 2026


Quote for the day:

"It is not such a fierce something to lead once you see your leadership as part of God's overall plan for his world." -- Calvin Miller



Boards don’t need cyber metrics — they need risk signals

Decision-makers want to know whether risk is increasing or decreasing, whether controls are effective, and whether the organization can limit damage when prevention fails. Metrics are therefore useful when they clarify those questions. “Time is really the universal metric because everyone can understand time,” Richard Bejtlich, strategist and author in residence at Corelight, tells CSO. “How fast do we detect problems, and how fast do we contain them. Dwell time, containment time. That’s the whole game for me.” Organizations cannot prevent every intrusion, Bejtlich argues, but they can measure how quickly they recognize and contain one. ... Wendy Nather, a longtime CISO who is now an advisor at EPSD, cautions against equating measurement with understanding. “When you are reporting to the board, there are some things you just cannot count that you have to report anyway,” she tells CSO. She points to incidents, near misses, and changes in assumptions as examples. “Anything that changes your assumptions about how you’re managing your security program, you should be bringing those to the board, even if you can’t count them,” Nather says. Regular metrics can create a rhythm of predictability, and that predictability could lull board members into a false sense of security. “Metrics are very seductive,” she says. “They lead us toward things that can be counted, that happen on a regular basis.” The result may be a steady flow of data that obscures structural risk or emerging weaknesses, Nather warns. 
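
Bejtlich's two numbers fall straight out of incident timeline timestamps; a minimal sketch, assuming ISO-8601 timestamps for compromise, detection, and containment:

```python
# Dwell time (compromise -> detection) and containment time
# (detection -> contained), computed from an incident timeline.
from datetime import datetime

def incident_metrics(compromised: str, detected: str, contained: str) -> dict:
    t0, t1, t2 = (datetime.fromisoformat(t) for t in (compromised, detected, contained))
    return {
        "dwell_hours": (t1 - t0).total_seconds() / 3600,
        "containment_hours": (t2 - t1).total_seconds() / 3600,
    }

print(incident_metrics("2026-02-20T03:15", "2026-02-22T09:00", "2026-02-22T15:30"))
# {'dwell_hours': 53.75, 'containment_hours': 6.5}
```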


The Enterprise AI Postmortem Playbook: Diagnosing Failures at the Data Layer

Your first rule of the playbook is to treat AI incidents as data incidents – until proven otherwise. You should start by tagging the failure type. Document whether it’s a structure issue, retrieval misalignment, conflict with metric definition, or other categories. Ideally, you want to assign the issue to an owner and attach evidence to force some discipline into the review. Try to classify the issue into clearly defined buckets. For example, you can classify into these four buckets: structural failure, retrieval misalignment, definition conflict, or freshness failure. Once this part is clear, the investigation becomes more focused. The goal with this step is to isolate the data fault line. ... The next step is to move one layer deeper. Identify the source table behind the retrieved context. You also want to confirm the timestamp of the last refresh. Check whether any ingestion jobs failed, partially completed, or ran late. Silent failures are common. A job may succeed technically while loading incomplete data. As you go through the playbook, continue tracing upstream. Find the transformation job that shaped the dataset. Look at recent schema changes. Check whether any business rules were updated. The idea here is to rebuild the exact path that led to the output. Try not to make any assumptions at this stage about model behavior – simply keep tracing until the process is complete. Don’t be surprised if the model simply worked with what it was given.
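
A rough sketch of the playbook's first two moves: tag the failure into one of the four buckets with an owner and evidence attached, and check source-table freshness before blaming the model. The bucket names follow the article; the record layout, threshold, and run ID are hypothetical:

```python
# Tag an AI incident into a defined bucket, then test dataset freshness.
from datetime import datetime, timedelta, timezone

BUCKETS = {"structural", "retrieval_misalignment", "definition_conflict", "freshness"}

def open_incident(description: str, bucket: str, owner: str, evidence: str) -> dict:
    assert bucket in BUCKETS, f"unknown bucket: {bucket}"
    return {"description": description, "bucket": bucket,
            "owner": owner, "evidence": evidence}

def is_stale(last_refresh: datetime, max_age: timedelta = timedelta(hours=24)) -> bool:
    """Freshness check on the source table behind the retrieved context."""
    return datetime.now(timezone.utc) - last_refresh > max_age

incident = open_incident(
    description="Agent quoted last quarter's revenue as current",
    bucket="freshness", owner="data-platform",
    evidence="run_id=2026-02-26-0310",  # hypothetical evidence pointer
)
print(incident["bucket"], is_stale(datetime(2026, 2, 24, 3, 0, tzinfo=timezone.utc)))
```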


Top Attacks On Biometric Systems (And How To Defend Against Them)

Presentation attacks, often referred to as spoofing attacks, occur when an attacker presents a fake biometric sample to a sensor (like a camera or microphone) in an attempt to impersonate a legitimate user. Common examples include printed photos, video replays, silicone masks, prosthetics or synthetic fingerprints. More recently, high-quality deepfake videos have become a powerful new tool in the attacker’s arsenal. ... Passive liveness techniques, which analyze subtle physiological and behavioral signals without requiring user interaction, are particularly effective because they reduce friction while improving security. However, liveness detection must be resilient to unknown attack methods, not just tuned to detect known spoof types. ... Not all biometric attacks happen in front of the sensor. Replay and injection attacks target the biometric data pipeline itself. In these scenarios, attackers intercept, replay or inject biometric data, such as images or templates, directly into the system, bypassing the sensor entirely. ... Defensive strategies must extend beyond the biometric algorithm. Secure transmission, encryption in transit, device attestation, trusted execution environments and validation that data originates from an authorized sensor are all essential. ... Although less visible to end users, attacks targeting biometric templates and databases can pose long-term risks. If biometric templates are compromised, the impact extends far beyond a single breach.


Open-source security debt grows across commercial software

High and critical risk findings remain widespread. Most codebases contain at least one high risk vulnerability, and nearly half contain at least one critical risk issue. Those rates dipped slightly from the prior year even as total vulnerability counts rose. Supply chain attacks add another layer of risk. Sixty-five percent of surveyed organizations experienced a software supply chain attack in the past year. ... “As AI reshapes software development, security teams will have to continue to adapt in turn. Security budgets and security guidelines should reflect this new reality. Leaders should continue to invest in tooling and education required to equip teams to manage the drastic increase in velocity, volume, and complexity of applications,” Mackey said. Board-level reporting also requires adjustment as vulnerability volumes rise. ... Outdated components appear in nearly every audited environment. More than nine in ten codebases contain components that are several years out of date or show no recent development activity. A large share of components run many versions behind current releases. Only a small fraction operate on the latest available version. This maintenance debt intersects with regulatory obligations. The EU Cyber Resilience Act entered into effect in late 2024, with key reporting requirements taking effect in 2026 and broader enforcement following in 2027.


The agentic enterprise: Why value streams and capability maps are your new governance control plane

The enterprise is currently undergoing a seismic pivot from generative AI, which focuses on content creation, to agentic AI, which focuses on goal execution. Unlike their predecessors, these agents possess “structured autonomy”: the ability to perceive contexts, plan actions and execute across systems without constant human intervention. For the CIO and the enterprise architect, this is not merely an upgrade in automation speed; it is a fundamental shift in the firm’s economic equation. We are moving from labor-centric workflows to digital labor capable of disassembling and reassembling entire value chains. ... In an agentic enterprise, the value stream map is no longer just a diagram; it is the control plane. It must explicitly define the handoff protocols between human and digital agents. In my opinion, value stream maps must move from static documents stored in a repository to context documents used to drive agentic automation. ... If a value stream does not exist, you cannot automate it. For new agentic workflows, do not map the current human process. Instead, use an outcome-backwards approach. Work backward from the concrete deliverable (e.g., customer onboarded) to identify the minimum viable API calls required. Before granting write access, run the new agentic stream in shadow mode to validate agent decisions against human outcomes.
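
The shadow-mode step might look like the following sketch: the agent proposes decisions alongside the human process, divergences are logged, and write access is gated on the agreement rate. All names and thresholds are illustrative:

```python
# Run the agent in shadow mode: compare its proposals to human outcomes
# without executing anything, and report the agreement rate.
def shadow_run(cases, agent_decide, human_decisions):
    agree = 0
    for case in cases:
        proposed = agent_decide(case)          # agent decision, never executed
        actual = human_decisions[case["id"]]   # what the human actually did
        if proposed == actual:
            agree += 1
        else:
            print(f"divergence on {case['id']}: agent={proposed} human={actual}")
    return agree / len(cases)

rate = shadow_run(
    cases=[{"id": 1, "risk": "low"}, {"id": 2, "risk": "high"}],
    agent_decide=lambda c: "approve" if c["risk"] == "low" else "escalate",
    human_decisions={1: "approve", 2: "reject"},
)
print(f"agreement: {rate:.0%}")  # gate write access on a threshold, e.g. 99%
```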


Beyond compliance: Building a culture of data security in the digital enterprise

Cyber compliance is something organisations across industrial sectors take seriously, especially with new regulations being introduced and non-compliance carrying consequences such as hefty penalties. Hence, businesses are placing compliance among their top priorities. However, hyper-focusing only on compliance can lead to tunnel vision, crippling creativity and innovation. It fails to offer a comprehensive risk assessment due to the checklist approach it follows, exposing organisations to vulnerabilities and fast-evolving threats. A compliance-first mindset can lead to incomplete risk assessment, creating blind spots and gaps in security provisions. ... With businesses relying on data for operations, customer engagement, and decision-making, ensuring data security protects both users and organisations. Data breaches have severe consequences, including financial losses, reputational damage, customer churn, and regulatory penalties. With data moving across on-premises data centers, cloud platforms, third-party ecosystems, remote work environments, and AI-driven applications, there is a need for a holistic, culture-driven approach to cybersecurity. ... Data protection was traditionally focused on safeguarding the perimeter by securing networks and systems within the physical boundaries where data was normally stored.


If you thought RTO battles were bad, wait until AI mandates start taking hold across the industry

With the advent of generative AI and the incessant beating of the drum by executives hellbent on unlocking productivity gains, we could see a revival of the dreaded workforce mandate – only this time with AI. We’ve already had a glimpse of the same RTO tactics being used with AI over the last year. In mid-2025, Microsoft introduced new rules aimed at boosting AI use across the company, with an internal memo warning staff that “using AI is no longer optional”. ... As with RTO mandates, we’re now reaching a point where upward mobility within the enterprise could be at risk as a result of AI use. It’s a tactic initially touted by Dell in 2024 when enforcing its own hybrid work rules, which prompted a fierce backlash among staff. Forcing workers to use AI or risk losing out on promotions will have the desired effect executives want, namely that employees will use the technology, but that’s missing the point entirely. AI has been framed by many big tech providers as a prime opportunity to supercharge productivity and streamline enterprise efficiency. We’ve all heard the marketing jargon. If business leaders are at the point where they’re forcing staff to use the technology, it begs the question of whether it’s actually having the desired effect, which recent analysis suggests it’s not. ... Recent analysis from CompTIA found roughly one-third of companies now require staff to complete AI training.


In perfect harmony: How Emerald AI is turning data centers into flexible grid assets

At the core of Emerald AI is its Emerald Conductor platform. Described by Sivaram as “an AI for AI,” the system orchestrates thousands of AI workloads across one or more data centers, dynamically adjusting operations to respond to grid conditions while ensuring the facility maintains performance. The system achieves this through a closed-loop orchestration platform comprising an autonomous agent and a digital twin simulator. ... A point keenly made by Steve Smith, chief strategy and regulation officer at National Grid, at the time of the announcement: “As the UK’s digital economy grows, unlocking new ways to flexibly manage energy use is essential for connecting more data centers to our network efficiently.” The second reason was National Grid's transatlantic stature - as a utility active in both the UK and US markets - and its commitment to the technology. “They’ve invested in the program and agreed to a demo, which makes them the ideal partner for our first international launch,” says Sivaram. The final, and most important, factor, notes Sivaram, was the access to the NextGrid Alliance, a consortium of 150 utilities worldwide. By gaining access to such a robust partner network, the deal could serve as a springboard for further international projects. This aligns with the company’s broader partnership approach. Emerald AI has already leveraged Nvidia’s cloud partner network to test its technology across US data centers, laying the groundwork for broader deployment and continued global collaboration.


7 ways to tame multicloud chaos with generative AI

Architects have the difficult job of understanding tradeoffs between proprietary cloud services and cross-cloud platforms. For example, should developers use AWS Glue, Azure Data Factory, or Google Cloud Data Fusion to develop data pipelines on the respective platforms, or should they adopt a data integration platform that works across clouds? ... “Managing multicloud is like learning multiple languages from AWS, Azure, Oracle, and others, and it’s rare to have teams that can traverse these environments fluidly and effectively. Plus, services and concepts are not portable among clouds, especially in cloud-native PaaS services that go beyond IaaS,” says Harshit Omar, co-founder and CTO at FluidCloud. One way to work around this issue is to assign an AI agent to support the developer or architect in evaluating platform selections. ... Standardizing infrastructure and service configurations across different clouds requires expertise in different naming conventions, architecture, tools, APIs, and other paradigms. Look for genAI tools to act as a translator to streamline configurations, especially for organizations that can templatize their requirements. ... CI/CD, infrastructure-as-code, and process automation are key tools for driving efficiency, especially when tasks span multiple cloud environments. Many of these tools use basic flows and rules to streamline tasks or orchestrate operations, which can create boundary cases that cause process-blocking errors. 


It’s Time To Reinforce Institutional Crypto Key Management With MPC: Sodot CEO

For years, crypto security operations were almost exclusively focused on finding a way to protect the private keys to crypto wallets. It’s known as the “custody risk,” and it will always be a concern to anyone holding digital assets. However, Sofer believes that custody is no longer the weakest link. Cyberattackers have come to realize that secure wallets, often held in cold storage, are far too difficult to crack. ... Sodot has built a self-hosted infrastructure platform that leverages a pair of cutting-edge security techniques – namely, Multi-Party Computation or MPC and Trusted Execution Environments or TEEs. With Sodot’s platform, API keys are never reassembled in full plaintext, eliminating one of the main weaknesses of traditional secrets managers, which typically expose the entire key to any authenticated machine. Instead, Sodot uses MPC to split each key into multiple “shares” that are held by different partners on different technology stacks, Sofer explained. Distributing risk in this way makes an attacker’s job exponentially more difficult, as it means they would have to compromise multiple isolated systems to gain access. ... “Keys are here to stay, and they will control more value and become more sensitive as technology progresses,” Sofer concluded. “As financial institutions get more involved in crypto, we believe demand for self-hosted solutions that secure them will only grow, driven by performance requirements, operational resilience, and control over security boundaries.”
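
The intuition behind splitting a key into shares can be shown with a deliberately simplified n-of-n XOR scheme. Note the caveat: production MPC signing never reconstructs the key at all, whereas this toy combines shares only to demonstrate that any proper subset of shares is uniformly random and reveals nothing:

```python
# Toy n-of-n secret sharing: XOR the key with random pads so that
# no single share (or breach of one party) exposes the secret.
import secrets

def split(key: bytes, n: int = 3) -> list[bytes]:
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)
assert combine(split(key)) == key  # any n-1 shares alone reveal nothing
```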

Daily Tech Digest - February 17, 2026


Quote for the day:

"If you want to become the best leader you can be, you need to pay the price of self-discipline." -- John C. Maxwell



6 reasons why autonomous enterprises are still more a vision than reality

"AI is the first technology that allows systems that can reason and learn to be integrated into real business processes," Vohra said. ... Autonomous organizations, he continued, "are built on human-AI agent collaboration, where AI handles speed and scale, leaving judgment and strategy up to humans." They are defined by "AI systems that go beyond just generating insights in silos, which is how most enterprises are currently leveraging AI," he added. Now, the momentum is toward "executing decisions across workflows with humans setting intent and guardrails." ... The survey highlighted that work is required to help develop agents. Only 3% of organizations -- and 10% of leaders -- are actively implementing agentic orchestration. "This limited adoption signals that orchestration is still an emerging discipline," the report stated. "The scarcity of orchestration is a litmus test for both internal capability and external strategic positioning. Successful orchestration requires integrating AI into workflows, systems, and decision loops with precision and accountability." ... Workforce capability gaps continue to be the most frequently cited organizational constraint to AI adoption, as reported by six in 10 executives -- yet only 45% say their organizations offer AI training for all employees. ... As AI takes on more execution and pattern recognition, human value increasingly shifts toward system design, integration, governance, and judgment -- areas where trust, context, and accountability still sit firmly with people.


Finding the key to the AI agent control plane

Agents change the physics of risk. As I’ve noted, an agent doesn’t just recommend code. It can run the migration, open the ticket, change the permission, send the email, or approve the refund. As such, risk shifts from legal liability to existential reality. If a large language model hallucinates, you get a bad paragraph. ... Every time an AI system makes a mistake that a human has to clean up, the real cost of that system goes up. The only way to lower that tax is to stop treating governance as a policy problem and start treating it as architecture. That means least privilege for agents, not just humans. It means separating “draft” from “send.” It means making “read-only” a first-class capability, not an afterthought. It means auditable action logs and reversible workflows. It means designing your agent system as if it will be attacked because it will be. ... Right now, permissions are a mess of vendor-specific toggles. One platform has its own way of scoping actions. Another bolts on an approval workflow. A third punts the problem to your identity and access management team. That fragmentation will slow adoption, not accelerate it. Enterprises can’t scale agents until they can express simple rules. We need to be able to say that an agent can read production data but not write to it. We need to say an agent can draft emails but not send them. We need to say an agent can provision infrastructure only inside a sandbox, with quotas, or that it must request human approval before any destructive action.
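
What that permission vocabulary might look like in practice, as a sketch with invented names (no real agent platform's API is assumed): read-only as a first-class flag, per-tool approval requirements, and an audit log entry on every call:

```python
# Scoped, auditable agent capabilities with least-privilege defaults.
import json, time

class AgentTool:
    def __init__(self, name, fn, writes=False, needs_approval=False, audit_log=None):
        self.name, self.fn = name, fn
        self.writes, self.needs_approval = writes, needs_approval
        self.audit_log = audit_log if audit_log is not None else []

    def call(self, agent, read_only=True, approved=False, **kwargs):
        if self.writes and read_only:
            raise PermissionError(f"{agent} holds a read-only grant for {self.name}")
        if self.needs_approval and not approved:
            raise PermissionError(f"{self.name} requires human approval")
        self.audit_log.append({"ts": time.time(), "agent": agent,
                               "tool": self.name, "args": json.dumps(kwargs)})
        return self.fn(**kwargs)

query = AgentTool("query_prod", lambda sql: f"rows for {sql}")
drop = AgentTool("drop_table", lambda table: f"dropped {table}",
                 writes=True, needs_approval=True)

print(query.call("agent-42", sql="SELECT 1"))  # allowed: read path
# drop.call("agent-42", table="users")         # raises PermissionError
```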


PAM in Multi‑Cloud Infrastructure: Strategies for Effective Implementation

The "Identity Gap" has emerged as the leading cause of cloud security breaches. Traditional vault-based Privileged Access Management (PAM) solutions, designed for static server environments, are inadequate for today’s dynamic, API-driven cloud infrastructure. ... PAM has evolved from an optional security measure to an essential and fundamental requirement in multi-cloud environments. This shift is attributed to the increased complexity, decentralized structure, and rapid changes characteristic of modern cloud architectures. As organizations distribute workloads across AWS, Azure, Google Cloud, and on-premises systems, traditional security perimeters have become obsolete, positioning identity and privileged access as central elements of contemporary security strategies. ... Fragmented identity systems hinder multi‑cloud PAM. Centralizing identity and federating access resolves this, with a Unified Identity and Access Foundation managing all digital identities—human or machine—across the organization. This approach removes silos between on-premises, cloud, and legacy applications, providing a single control point for authentication, authorization, and lifecycle management. ... Cloud providers deliver robust IAM tools, but their features vary. A strong PAM approach aligns these tools using RBAC and ABAC. RBAC assigns permissions by job role for easy scaling, while ABAC uses user and environment attributes for tight security.


Giving AI ‘hands’ in your SaaS stack

If an attacker manages to use an indirect prompt injection — hiding malicious instructions in a calendar invite or a web page the agent reads — that agent essentially becomes a confused deputy. It has the keys to the kingdom. It can delete opportunities, export customer lists or modify pricing configurations. ... For AI agents, this means we must treat them as non-human identities (NHIs) with the same or greater scrutiny than we apply to employees. ... The industry is coalescing around the model context protocol (MCP) as a standard for this layer. It provides a universal USB-C port for connecting AI models to your data sources. By using an MCP server as your gateway, you ensure the agent never sees the credentials or the full API surface area, only the tools you explicitly allow. ... We need to treat AI actions with the same reverence. My rule for autonomous agents is simple: If it can’t dry run, it doesn’t ship. Every state-changing tool exposed to an agent must support a dry_run=true mode. When the agent wants to update a record, it first calls the tool in dry-run mode. The system returns a diff — a preview of exactly what will change. This allows us to implement a human-in-the-loop approval gate for high-risk actions. The agent proposes the change, the human confirms it and only then is the live transaction executed. ... As CIOs and IT leaders, our job isn’t to say “no” to AI. It’s to build the invisible rails that allow the business to say “yes” safely. By focusing on gateways, identity and transactional safety, we can give AI the hands it needs to do real work, without losing our grip on the wheel.
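
The dry-run rule is easy to picture in code: the same tool call either returns a diff preview or executes, and the live path runs only after a human approves the diff. A hedged sketch with an illustrative record shape:

```python
# Dry-run first: preview the diff, then execute only on human approval.
def update_record(record: dict, changes: dict, dry_run: bool = True):
    diff = {k: (record.get(k), v) for k, v in changes.items() if record.get(k) != v}
    if dry_run:
        return diff                  # preview only; nothing has changed
    record.update(changes)           # live transaction
    return record

opportunity = {"stage": "negotiation", "amount": 50_000}
proposed = {"stage": "closed-won", "amount": 45_000}

diff = update_record(opportunity, proposed, dry_run=True)
print("agent proposes:", diff)       # {'stage': (old, new), 'amount': (old, new)}
if input("Apply? [y/N] ").lower() == "y":   # human-in-the-loop approval gate
    update_record(opportunity, proposed, dry_run=False)
```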


AI-fuelled supply chain cyber attacks surge in Asia-Pacific

Exposed credentials, source code, API keys and internal communications can provide detailed insight into business processes, supplier relationships and technology stacks. When combined with brokered access, that information can support impersonation, targeted intrusion and fraud activity that blends in with legitimate use. One area of concern is open-source software distribution, where widely used libraries can spread malicious code at scale. ... The report points to AI-assisted phishing campaigns that target OAuth flows and other single sign-on mechanisms. These techniques can bypass multi-factor authentication where users approve malicious prompts or where tokens are stolen after login. ... "AI did not create supply chain attacks, it has made them cheaper, faster, and harder to detect," Mr Volkov added. "Unchecked trust in software and services is now a strategic liability." The report names a range of actors associated with supply-chain-focused activity, including Lazarus, Scattered Spider, HAFNIUM, DragonForce and 888, as well as campaigns linked to Shai-Hulud. It said these groups illustrate how criminal organisations and state-aligned operators are targeting similar platforms and integration layers. ... The report's focus on upstream compromise reflects a broader trend in cyber risk management, where organisations assess not only their own exposure but also the resilience of vendors and technology supply chains.


Automation cannot come at the cost of accountability; trust has to be embedded into the architecture

Visa is actively working with issuers, merchants, and payment aggregators to roll out authentication mechanisms based on global standards. “Consumers want payments to be invisible,” Chhabra adds. “They want to enjoy the shopping experience, not struggle through the payment process.” Tokenisation plays a critical role in enabling this vision. By replacing sensitive card details with unique digital tokens, Visa has created a secure foundation for tap-and-pay, in-app purchases, and cross-border transactions. In India alone, nearly half a billion cards have already been tokenised. “Once tokenisation is in place, device-based payments and seamless commerce become possible,” Chhabra explains. “It’s the bedrock of frictionless payments.” Fraud prevention, however, is no longer limited to card-based transactions. With real-time and account-to-account payments gaining momentum, Visa has expanded its scope through strategic acquisitions such as Featurespace. The UK-based firm specialises in behavioural analytics for real-time fraud detection, an area Chhabra describes as increasingly critical. “We don’t just want to detect fraud on the Visa network. We want to help prevent fraud across payment types and networks,” he says. Before deploying such capabilities in India, Visa conducts extensive back-testing using localised data and works closely with regulators. “Global intelligence is powerful, but it has to be adapted to local behaviour. You can’t simply overfit global models to India’s unique payment patterns.”
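
At its core, tokenisation is a substitution through a vault; the toy sketch below shows only that substitution, while real network tokenisation adds per-device and per-merchant domain restrictions plus per-transaction cryptograms:

```python
# Toy token vault: the merchant stores a random token; the real card
# number (PAN) lives only inside the vault.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # carries no card information
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:   # network/issuer side only
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")    # standard test PAN
print(token)  # safe to store at the merchant; useless if stolen in isolation
```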


Most ransomware playbooks don't address machine credentials. Attackers know it.

The gap between ransomware threats and the defenses meant to stop them is getting worse, not better. Ivanti’s 2026 State of Cybersecurity Report found that the preparedness gap widened by an average of 10 points year over year across every threat category the firm tracks. ... The accompanying Ransomware Playbook Toolkit walks teams through four phases: containment, analysis, remediation, and recovery. The credential reset step instructs teams to ensure all affected user and device accounts are reset. Service accounts are absent. So are API keys, tokens, and certificates. The most widely used playbook framework in enterprise security stops at human and device credentials. The organizations following it inherit that blind spot without realizing it. ... “Although defenders are optimistic about the promise of AI in cybersecurity, Ivanti’s findings also show companies are falling further behind in terms of how well prepared they are to defend against a variety of threats,” said Daniel Spicer, Ivanti’s Chief Security Officer. “This is what I call the ‘Cybersecurity Readiness Deficit,’ a persistent, year-over-year widening imbalance in an organization’s ability to defend their data, people, and networks against the evolving threat landscape.” ... You can’t reset credentials that you don’t know exist. Service accounts, API keys, and tokens need ownership assignments mapped pre-incident. Discovering them mid-breach costs days.
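
The pre-incident mapping the article recommends can start as something as simple as an owned inventory; a sketch with hypothetical fields and internal URLs:

```python
# Machine-credential inventory: every entry gets an owner and a reset
# runbook *before* an incident, so containment isn't spent on discovery.
MACHINE_CREDENTIALS = [
    {"id": "svc-backup", "type": "service_account", "owner": "infra-team",
     "reset_runbook": "https://wiki.internal/runbooks/svc-backup"},
    {"id": "ci-deploy-key", "type": "api_key", "owner": "platform-team",
     "reset_runbook": "https://wiki.internal/runbooks/ci-deploy"},
]

def unowned(creds):
    """The blind spot: credentials nobody is assigned to reset mid-breach."""
    return [c["id"] for c in creds if not c.get("owner") or not c.get("reset_runbook")]

print(unowned(MACHINE_CREDENTIALS) or "all machine credentials have owners")
```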


CISO Julie Chatman offers insights for you to take control of your security leadership role

In a few high-profile cases, security leaders have faced criminal charges for how they handled breach disclosures, and civil enforcement for how they reported risks to investors and regulators. The trend is toward holding CISOs personally accountable for governance and disclosure decisions. ... You’re seeing the rise of fractional CISOs, virtual CISOs, heads of IT security instead of full CISO titles. It’s a lot harder to hold a fractional CISO personally liable. This is relatively new. The liability conversation really intensified after some high-profile enforcement actions, and now we’re seeing the market respond. ... First, negotiate protection upfront. When you’re thinking about accepting a CISO role, explicitly ask about D&O insurance coverage. If the CISO is not considered a director or an officer of the company and can’t be given D&O coverage, will the company subsidize individual coverage? There are companies now selling CISO-specific policies. Make this part of your compensation negotiation. Second, do your job well but understand the paradox. Sometimes when you do your job properly, you’re labeled ‘the office of no,’ you’re seen as ‘difficult,’ and you last 18 months. It’s a catch-22. Real liability protection comes from changing how your organization thinks about risk ownership. Most organizations don’t have a unified view of risk or the vocabulary to discuss it properly. If you can advance that as a CISO, you can help the business understand that risk is theirs to accept, not yours.


The AI bubble will burst for firms that can’t get beyond demos and LLMs

Even though the discussion of a potential bubble is ubiquitous, what’s going on is more nuanced than simple boom-and-bust chatter, said Francisco Martin-Rayo, CEO of Helios AI. “What people are really debating is the gap between valuation and real-world impact. Many companies are labeled ‘AI-driven,’ but only a subset are delivering measurable value at scale,” Martin-Rayo said. Founders confuse fundraising with progress, which comes only when they are solving real problems for real clients, said Nacho De Marco, founder of BairesDev. “Fundraising gives you dopamine, but real progress comes from customers,” De Marco said. “The real value of a $1B valuation is customer validation.” ... The AI shakeout has already started, and the tenor at WEF “feels less like peak hype and more like the beginning of a sorting process,” Martin-Rayo said. ... Companies that survive the coming shakeout will be those willing to rebuild operations from the ground up rather than throwing AI into existing workflows, said Jinsook Han, chief agentic AI officer at Genpact. “It’s not about just bolting some AI into your existing operation,” Han said. “You have to really build from ground up — it’s a complete operating model change.” Foundational models are becoming more mature and can do more of what startups sell. As a result, AI providers that don’t offer distinct value will have a tough time surviving, Han said.


What could make the EU Digital Identity Wallets fail?

Large-scale digital identity initiatives rarely fail because the technology does not work. They fail because adoption, incentives, trust, and accountability are underestimated. The EU Digital Identity Wallet could still fail, or partially fail, succeeding in some countries while struggling or stagnating in others. ... A realistic risk is fragmented success. Some member states are likely to deliver robust wallets on time. Others may launch late, with limited functionality, or without meaningful uptake. A smaller group may fail to deliver a convincing solution at all, at least in the first phase. From the perspective of users and service providers, this fragmentation already undermines cross-border usage. If wallets differ significantly in capabilities, attributes, and reliability across borders, the promise of a seamless European digital identity weakens. ... While EU Digital Identity Wallets offer significantly higher security than current solutions, they will not eliminate fraud entirely. There will still be cases of wallets issued to the wrong individual, phishing attempts, and wallet takeovers. If early fraud cases are poorly handled or publicly misunderstood, trust in the ecosystem could erode quickly. The wallet’s strong privacy architecture introduces real trade-offs. One uncomfortable but necessary question worth asking is: are we going too far with privacy? ... The EU Digital Identity Wallet will succeed only if policymakers, wallet providers, and service providers treat trust, economics, and usability as core design principles, not secondary concerns.

Daily Tech Digest - November 20, 2025


Quote for the day:

"Choose your heroes very carefully and then emulate them. You will never be perfect, but you can always be better." -- Warren Buffet



A developer’s guide to avoiding the brambles

Protect against the impossible, because it just might happen. Code has a way of surprising you, and it definitely changes. Right now you might think there is no way that a given integer variable would be less than zero, but you have no idea what some crazed future developer might do. Go ahead and guard against the impossible, and you’ll never have to worry about it becoming possible. ... If you’re ever tempted to reuse a variable within a routine for something completely different, don’t do it. Just declare another variable. If you’re ever tempted to have a function do two things depending on a “flag” that you passed in as a parameter, write two different functions. If you have a switch statement that is going to pick from five different queries for a class to execute, write a class for each query and use a factory to produce the right class for the job. ... Ruthlessly root out the smallest of mistakes. I follow this rule religiously when I code. I don’t allow typos in comments. I don’t allow myself even the smallest of formatting inconsistencies. I remove any unused variables. I don’t allow commented code to remain in the code base. If your language of choice is case-insensitive, refuse to allow inconsistent casing in your code. ... Implicitness increases cognitive load. When code does things implicitly, the developer has to stop and guess what the compiler is going to do. Default variables, hidden conversions, and hidden side effects all make code hard to reason about.
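
The switch-to-factory advice is easiest to see in code. Here is a minimal sketch, with hypothetical query classes, of replacing a multi-way switch with one class per query and a factory that selects among them:

```python
from abc import ABC, abstractmethod

class Query(ABC):
    @abstractmethod
    def execute(self) -> str: ...

class CustomerByIdQuery(Query):
    def execute(self) -> str:
        return "SELECT * FROM customers WHERE id = ?"

class OrdersSinceQuery(Query):
    def execute(self) -> str:
        return "SELECT * FROM orders WHERE created_at >= ?"

# The factory replaces the switch: adding a query means adding a class and
# one registry entry, not editing an ever-growing conditional.
QUERY_REGISTRY: dict[str, type[Query]] = {
    "customer_by_id": CustomerByIdQuery,
    "orders_since": OrdersSinceQuery,
}

def make_query(name: str) -> Query:
    try:
        return QUERY_REGISTRY[name]()
    except KeyError:
        # Guard against the "impossible" unknown name, per the advice above.
        raise ValueError(f"unknown query: {name}") from None

print(make_query("orders_since").execute())
```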


SaaS Rolls Forward, Not Backward: Strategies to Prevent Data Loss and Downtime

The SaaS provider owns infrastructure-level redundancy and backups to maintain operational continuity during regional outages or major disruptions. InfoSec and SaaS teams are no longer responsible for infrastructure resilience. Instead, they are responsible for backing up and recovering data and files stored in their SaaS instances. This is significant for two primary reasons. First, the RTO and RPO for SaaS data become dependent on the vendor's capabilities, which are not within the control of the customer. ... A common misconception, even among mature InfoSec teams, is the assumption that SaaS data protection is fully managed by the vendor. This “set it and forget it” mindset, while understandable given the cloud promise, overlooks the need for organizations to back up their SaaS data. Common causes of data loss and corruption are human errors within the customer’s SaaS instance, including accidental deletion, integration issues, and migration mishaps, which fall under the customer’s responsibility. ... InfoSec and SaaS teams must combine their knowledge and experience to ensure that backups contain all necessary data, as well as metadata, which provides the necessary context, and can be restored reliably. SaaS administrators can prevent users from logging in, disable automations, block upstream data from being sent, or restrict data from being sent to downstream systems as needed.
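
One way to act on this shared-responsibility point is to verify, rather than assume, that a backup preserves both data and metadata. Here is a minimal sketch, with invented record structures, of a completeness check an InfoSec or SaaS team might run after each backup cycle:

```python
def verify_backup(
    source_records: dict[str, dict],
    backup_records: dict[str, dict],
    required_metadata: tuple[str, ...] = ("owner", "created_at"),
) -> list[str]:
    """Compare a SaaS export against its backup and report anything unrestorable."""
    problems = []
    for record_id in source_records:
        copy = backup_records.get(record_id)
        if copy is None:
            problems.append(f"{record_id}: missing from backup")
            continue
        missing = [field for field in required_metadata if field not in copy]
        if missing:
            problems.append(f"{record_id}: metadata lost: {', '.join(missing)}")
    return problems

source = {"rec-1": {"owner": "ops", "created_at": "2025-11-01", "body": "..."}}
backup = {"rec-1": {"body": "..."}}  # the data survived, its context did not
print(verify_backup(source, backup))
# ['rec-1: metadata lost: owner, created_at']
```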


EU publishes Digital Omnibus leaving AI Act future uncertain

The European Commission unveiled amendments on Wednesday designed to simplify its digital regulatory framework, including the AI Act and data privacy rules, in a bid to boost innovation. The Digital Omnibus package introduces several measures, including delaying the stricter regulation of ‘high-risk’ AI applications until late 2027 and allowing companies to use sensitive data, such as biometrics, for AI training under certain conditions. ... The Digital Omnibus also attempts to adapt rules within privacy regulation, such as the General Data Protection Regulation (GDPR), the e-Privacy Directive and the Data Act. The Commission plans to clarify when data stops being “personal.” This could open the doors for tech companies to include anonymous information from EU citizens into large datasets for training AI, even when they contain sensitive information such as biometric data, as long as they make reasonable efforts to remove it. ... EU member states have also called for postponing the rollout of the AI Act altogether, citing difficulties in defining related technical standards and the need for Europe to stay competitive in the global technological race. “Europe has not so far reaped the full benefits of the digital revolution,” says European economy commissioner Valdis Dombrovskis. “And we cannot afford to pay the price for failing to keep up with demands of the changing world.”


Building Distributed Event-Driven Architectures Across Multi-Cloud Boundaries

The elegant simplicity of "fire an event and forget" becomes a complex orchestration of latency optimization, failure recovery, and data consistency across provider boundaries. Yet, when done right, multi-cloud event-driven architectures offer unprecedented resilience, performance, and business agility. ... Multi-cloud latency isn't just about network speed; it's about the compound effect of architectural decisions across cloud boundaries. Consider a transaction that needs to traverse from on-premise to AWS for risk assessment, then to Azure for analytics processing, and back to on-premise for core banking updates. Each hop introduces latency, but the cumulative effect can transform a sub-100 ms transaction into a multi-second operation. ... Here is an uncomfortable truth: Most resilience strategies focus on the wrong problem. As engineers, we typically put our efforts into handling failures that occur during an outage or when a service component is down. Equally important is how you recover from those failures after the outage is over. Neglecting recovery creates systems that "fail fast" but "recover never". ... The combination of event stores, resilient policies, and systematic event replay capabilities creates a distributed system that not only survives failures, but also recovers automatically, which is a critical requirement for multi-cloud architectures. ... While duplicate risk processing merely wastes resources, duplicate financial transactions create regulatory nightmares and audit failures.
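
The duplicate-transaction problem points to the standard countermeasure: consumers that are idempotent on event IDs, so replaying an event store after an outage cannot double-post a payment. Here is a minimal sketch under those assumptions (stable producer-assigned IDs; in production the deduplication set would live in a durable store, not memory):

```python
class IdempotentConsumer:
    """Processes each event exactly once per consumer, even under replay."""

    def __init__(self) -> None:
        self._seen: set[str] = set()   # production: durable store, not memory
        self.ledger: list[tuple[str, float]] = []

    def handle(self, event: dict) -> None:
        event_id = event["id"]         # producers must assign stable IDs
        if event_id in self._seen:
            return                     # duplicate delivery or replay: ignore
        self._seen.add(event_id)
        self.ledger.append((event["account"], event["amount"]))

def replay(store: list[dict], consumer: IdempotentConsumer) -> None:
    """After an outage, re-deliver the whole event store; dedup makes it safe."""
    for event in store:
        consumer.handle(event)

store = [
    {"id": "evt-1", "account": "A-100", "amount": 250.0},
    {"id": "evt-1", "account": "A-100", "amount": 250.0},  # duplicate delivery
    {"id": "evt-2", "account": "A-200", "amount": 75.0},
]
consumer = IdempotentConsumer()
replay(store, consumer)
replay(store, consumer)  # replaying again changes nothing
assert consumer.ledger == [("A-100", 250.0), ("A-200", 75.0)]
```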


For AI to succeed in the SOC, CISOs need to remove legacy walls now

"The legacy SOC, as we know it, can't compete. It's turned into a modern-day firefighter," warned CrowdStrike CEO George Kurtz during his keynote at Fal.Con 2025. "The world is entering an arms race for AI superiority as adversaries weaponize AI to accelerate attacks. In the AI era, security comes down to three things: the quality of your data, the speed of your response, and the precision of your enforcement." Enterprise SOCs average 83 security tools across 29 different vendors, each generating isolated data streams that defy easy integration to the latest generation of AI systems. System fragmentation and lack of integration represent AI's greatest vulnerability, and organizations' most fixable problem. The mathematics of tool sprawl proves devastating. Organizations deploying AI across fragmented toolsets report significantly elevated false-positive rates. ... Getting governance right is one of a CISO's most formidable challenges and often includes removing longstanding roadblocks to make sure their organization can connect and make contributions across the business. ... A CISO's transformation from security gatekeeper to business enabler and strategist is the single best step any security professional can take in their career. CISOS often remark in interviews that the transition from being an app and data disciplinarian to an enabler of new growth with the ultimate goal of showing how their teams help drive revenue was the catalyst their careers needed.


Selling to the CISO: An open letter to the cybersecurity industry

Vendors think they’re selling technology. They’re not. They’re trying to sell confidence to people whose jobs depend on managing the impossible. As a CISO, I buy because I’m trying to reduce the odds that something catastrophic happens on my watch. Every decision is a gamble. There is no “safe” option in this field. I buy to reduce personal and organizational risk, knowing there’s no such thing as perfect protection. Cybersecurity is not a puzzle you solve. It’s a game you play — and it never ends. You make the best moves you can, knowing you’ll never win. Even if I somehow patched every system and closed every gap, the cost of perfection would cripple the company. ... The truth is that most organizations don’t need more tools. They need to get the fundamentals right. If you can patch consistently, maintain good access controls, and segment your networks so you aren’t running flat, you’re ahead of most of the market — no shiny tools required. Strong patching alone will eliminate most of the attack surface that vendors keep promising to “detect.” ... We can’t blame vendors alone. We created the market they’re serving. We bought into the illusion that innovation equals progress. We ignored the fundamentals because they’re hard and unglamorous. We filled our environments with products we couldn’t fully use and called it maturity. We built complexity and called it strategy. Then we act shocked when the same root causes keep taking us down. Good security still starts with good IT. Always has. Always will. If you don’t know what you own, you can’t protect it.


When IT fails, OT pays the price

Criminal groups are now demonstrating a better understanding of industrial dependencies. The Qilin group has carried out 63 confirmed attacks against industrial entities since mid-2024 and has focused on energy distribution and water utilities. Their use of Windows and Linux payloads gives them wider reach inside mixed environments. Several incidents involved encryption of shared engineering resources and historian systems, which caused operational delays even when controllers remained untouched. ... Across intrusions, attackers favored techniques that exploit weak segmentation. PowerShell activity made up the largest share of detections, followed by Cobalt Strike. The findings show that adversaries rarely need ICS-specific exploits at the start of an attack. They rely on stolen accounts, remote access tools, and administrative shares to move toward engineering assets. ... The vulnerability data reinforces the emphasis on the boundary between enterprise systems and industrial systems. Exploitation of Cisco ASA and FTD devices is ongoing, including attacks that modified device firmware. Several critical flaws in SAP NetWeaver and other manufacturing operations software were also exploited, which created direct pivot points into factory workflows. Recent disclosures affecting Rockwell ControlLogix and GuardLogix platforms allow remote code execution or force the controller into a failed state. Attacks on these devices pose immediate availability and safety risks.


India has the building blocks to influence global standards in AI infrastructure

The convergence of cloud, edge, and connectivity represents the foundation of India’s next AI leap. In a country as geographically and economically diverse as India, AI workloads can’t depend solely on centralized cloud resources. Edge computing allows us to bring compute closer to the source of data, be it in a factory, retail store, or farm, which reduces latency, lowers costs, and enhances privacy. Cloud provides elasticity and scalability, while secure connectivity ensures that both environments communicate seamlessly. This triad enables an AI model to be trained in the cloud, refined at the edge, and deployed securely across networks, unlocking innovation in every geography. We have been building this connected fabric to ensure that access to compute and intelligence isn’t limited by location or scale. ... We see this evolution already unfolding. AI-as-a-Service will thrive when infrastructure, connectivity, and platforms converge under a single, interoperable framework. Each stakeholder, whether telecom, data centre, or hyperscaler, brings a unique value: scale, proximity, and reach. ... India is already shaping global conversations around digital equity and secure connectivity, and the same potential exists in AI infrastructure. In the next 5 years, India could stand out not for the size of its compute capacity but for how effectively it builds an inclusive digital foundation, one that blends cloud, edge, data governance, and innovation seamlessly.


How to Overcome Latency in Your Cyber Career

The presence of latency is not an indictment of your ability. It's a signal that something in your system needs attention. Identifying what creates latency in your professional life and learning how to address it are essential components of long-term growth. With a diagnostic mindset and a willingness to optimize, you can restore throughput and move forward with purpose. ... Career latency often appears when your knowledge no longer reflects current industry expectations. Even highly capable professionals experience slowdown when their technical foundation lags behind evolving practices. ... Unclear goals create misalignment between where you invest your time and where you want to progress. Without a defined direction, you may be working hard but not moving in a way that supports advancement. ... Professionals often operate under heavy workloads that dilute productivity. Too many competing responsibilities, constant context switching or tasks disconnected from your goals can limit your effectiveness and delay growth. ... Career progress can slow when your professional network lacks the signal strength needed to route opportunities in your direction. Without mentorship, community or visibility, growth becomes harder to sustain. ... Missed opportunities often stem from limited readiness. Preparation, bandwidth or timing may be misaligned, and promising chances can disappear before you can act.


Why IT-SecOps Convergence is Non-Negotiable

The message is clear: siloed operations are no longer just inefficient—they’re a security liability. ... The first, and often most difficult, step toward achieving true IT-SecOps convergence is cultural. For years, IT and security teams have operated in silos, essentially functioning as two different businesses. ... On paper, these Key Performance Indicators (KPIs) appear aligned—both measure speed and efficiency. But in practice, they reflect different views: one is laser-focused on minimizing risk, the other on maximizing uptime. ... The real opportunity lies in establishing a shared mandate. Both teams need to understand that their goals are two sides of the same coin: you can’t have productive systems that aren’t secure, and security that breaks the system isn’t sustainable; therefore, convergence begins not with tools, but with alignment of intent. Once this clicks, both teams begin working from a common set of goals, shared KPIs, and joint decision frameworks. ... The strongest security posture doesn’t come from piling on more tools. It comes from creating continuous alignment between management, security, and user experience. When those three functions operate in sync, IT doesn’t deploy technology that security can’t enforce, security doesn’t introduce controls that slow down work, and users don’t feel the need to bypass policies with shadow apps or risky shortcuts. ... When a unified structure is implemented, policies can be deployed instantly, validated automatically, and adjusted based on real user impact—all without waiting for separate teams to sync.