
Daily Tech Digest - March 21, 2026


Quote for the day:

"Management is about arranging and telling. Leadership is about nurturing and enhancing." -- Tom Peters


🎧 Listen to this digest on YouTube Music


Duration: 22 mins • Perfect for listening on the go.


Three ways AI is learning to understand the physical world

The VentureBeat article "Three ways AI is learning to understand the physical world" explores how researchers are overcoming the physical reasoning limitations of large language models through "world models." While LLMs excel at abstract knowledge, they lack grounding in causality, prompting a shift toward three distinct architectural approaches to simulate the real world. The first, Joint Embedding Predictive Architecture (JEPA), mimics human cognition by learning abstract latent features, ignoring irrelevant pixels to achieve the high efficiency required for real-time robotics. The second approach utilizes Gaussian splats to generate detailed 3D spatial environments from prompts, allowing AI agents to interact within standard physics engines like Unreal Engine. Finally, end-to-end generative models, such as DeepMind’s Genie 3 and Nvidia’s Cosmos, act as native physics engines by continuously generating frames and physical dynamics on the fly. This third method is particularly vital for creating massive synthetic data factories to safely train autonomous systems in complex edge cases. Ultimately, the analysis suggests a future defined by hybrid architectures, where LLMs provide the reasoning interface while world models serve as the foundational infrastructure for spatial data, enabling AI to move beyond digital browsers and into physical spaces.


Field workers don’t need more access, they need better security

In this interview, Chris Thompson, CISO at West Shore Home, outlines the evolving landscape of cybersecurity for field-based workforces. He emphasizes that the principle of least privilege should be applied consistently across all roles, dismissing the notion that field workers require broader access for convenience. A significant shift involves replacing antiquated, shared generic accounts with individual credentials secured by robust multifactor authentication, reflecting a modern standard where security is never sacrificed for speed. Thompson details how West Shore Home manages sensitive customer data through continuous risk assessments and bi-monthly executive reviews, ensuring mitigation strategies remain agile rather than stuck in traditional annual cycles. Addressing the logistical hurdles of training, he advocates for integrating security awareness into daily "toolbox talks" at warehouses, which proves more effective than email-based modules for employees on the move. By aligning security protocols with the technology field teams use daily, the organization fosters a unified culture where every worker understands their role in the broader security posture. Ultimately, Thompson argues that field workers do not need expanded access; they require more sophisticated, integrated security measures that support their unique operational environment without introducing unnecessary risk to the enterprise.


6 innovation curves are rewriting enterprise IT strategy

The article "6 innovation curves are rewriting enterprise IT strategy" highlights a fundamental shift from sequential technology updates to managing multiple, overlapping waves of digital transformation. These six innovation curves include transitioning from traditional software to systems of autonomous collaborators, adopting AI-native applications that embed machine learning into their core architecture, and treating enterprise memory as a queryable knowledge layer for real-time decision-making. Additionally, IT leaders must redesign human-machine interactions to enhance productivity, establish robust governance for trust and integrity in a world of synthetic data, and utilize virtual simulations to de-risk experimentation. The author emphasizes that these curves are deeply interdependent; for example, autonomous agents require high-quality memory layers to function effectively, while simulation environments provide the necessary testing grounds for AI-native interactions. To succeed, organizations must move beyond linear management models and instead develop an integrated strategy that orchestrates these curves concurrently. By focusing on areas like "AgentOps" and persistent data layers, businesses can build a resilient digital architecture capable of absorbing continuous disruption while maintaining operational priorities, effectively redefining how enterprises create value and manage risk in an AI-driven landscape.


Credential theft compounded in 2025, says new data from Recorded Future

Recorded Future’s 2025 Identity Threat Landscape Report reveals that credential theft has become the primary initial access vector for enterprise security breaches, characterized by a staggering escalation throughout the year. Data indicates that credential indexing surged by 90 percent in the final quarter compared to the first, with a significant majority of these attacks specifically targeting authentication systems to maximize unauthorized access. A particularly alarming trend is the proliferation of infostealer malware, which harvested 276 million credentials containing active session cookies. These cookies enable cybercriminals to bypass multi-factor authentication entirely, rendering traditional security measures increasingly insufficient. The report underscores that a single compromised endpoint can jeopardize an entire organization, as the average infected device now yields approximately 87 distinct stolen credentials across various corporate and personal platforms. Consequently, industry experts advocate for a transition toward "verified trust" models, which emphasize continuous, contextual identity verification using biometrics and passkeys. Despite the escalating risk, research from IDC and Ping Identity suggests that only nine percent of organizations have successfully operationalized these advanced safeguards at scale, highlighting a critical maturity gap in global digital infrastructure and a pressing need for board-level prioritization of identity security.


Configuration as a Control Plane: Designing for Safety and Reliability at Scale

The InfoQ article "Configuration as a Control Plane" explores the evolution of configuration from static deployment files into a dynamic, live control plane that actively shapes system behavior. In modern cloud-native architectures, configuration changes often move faster and impact more systems than application code, making them a primary driver of large-scale reliability incidents. Consequently, configuration management is transitioning from traditional agent-based convergence toward continuously reconciled, policy-enforced systems. The article emphasizes treating configuration as a high-leverage reliability discipline rather than a mere operational task. Key strategies discussed include using strongly typed, schema-validated configurations and policy engines like Open Policy Agent (OPA) to enforce guardrails before and during rollouts. By adopting practices such as staged regional rollouts, canary deployments, and automated diff analysis, organizations can ensure that configuration correctness is a systemic property rather than a manual checklist. Looking ahead, the integration of AI-driven risk assessment and unified configuration APIs promises to further enhance safety and resilience. Ultimately, this shift enables infrastructure to become more self-healing and predictable, allowing teams to manage complex, ephemeral workloads at scale while minimizing the risk of catastrophic human error or cascading failures.
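The pre-rollout guardrail idea described above can be made concrete with a short sketch. This is a minimal illustration, not the article's actual tooling (which points to schema validation and policy engines like OPA); the config fields, the canary step limit, and the zero-replica rule are all illustrative assumptions.

```python
# Sketch: configuration as a validated, policy-gated artifact.
# Schema fields and guardrail rules below are illustrative assumptions.

CONFIG_SCHEMA = {
    "replicas": int,
    "region": str,
    "traffic_percent": int,
}

def validate_schema(config: dict) -> list:
    """Return a list of schema violations (empty means valid)."""
    errors = []
    for key, expected_type in CONFIG_SCHEMA.items():
        if key not in config:
            errors.append(f"missing key: {key}")
        elif not isinstance(config[key], expected_type):
            errors.append(f"{key}: expected {expected_type.__name__}")
    return errors

def policy_guardrails(old: dict, new: dict) -> list:
    """Policy checks run on the config diff before a rollout is allowed."""
    violations = []
    # Guardrail: canary first -- never jump traffic by more than 25 points.
    if new["traffic_percent"] - old["traffic_percent"] > 25:
        violations.append("traffic increase exceeds canary step limit")
    # Guardrail: never scale to zero replicas via a config change.
    if new["replicas"] == 0:
        violations.append("refusing rollout with zero replicas")
    return violations

old = {"replicas": 3, "region": "us-east-1", "traffic_percent": 10}
bad = {"replicas": 0, "region": "us-east-1", "traffic_percent": 100}
print(validate_schema(bad))          # schema-valid, yet still dangerous
print(policy_guardrails(old, bad))   # policy layer catches what types cannot
```

The point of the two-layer check mirrors the article's argument: type and schema validation alone pass configurations that are structurally fine but operationally catastrophic, which is why policy enforcement on the diff matters.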


10 Million IoT Devices Hacked: Is Yours Next?

The Medium article "10 Million IoT Devices Hacked: Is Yours Next?" explores the alarming rise of BadBox 2.0, a sophisticated global botnet that has compromised over ten million Internet of Things (IoT) devices. Highlighting a 2025 federal lawsuit by Google, the piece details how seemingly harmless gadgets—such as unbranded streaming boxes, digital picture frames, and car infotainment systems—are being transformed into criminal infrastructure. A critical revelation is that many of these devices are pre-infected with malware during manufacturing, meaning consumers are compromised the moment they connect to Wi-Fi. The vulnerability primarily affects cheap hardware running the Android Open Source Project (AOSP) without Google’s Play Protect certification. To safeguard home networks, the author recommends identifying all connected devices via router admin panels and scanning for red flags like "Seekiny Studio" apps or unusual traffic to foreign IP ranges. Ultimately, the article serves as a stark warning against purchasing low-cost, unverified electronics, urging users to prioritize "purchase hygiene" by sticking to reputable brands with verifiable firmware update histories. By verifying Play Protect status and monitoring for network anomalies, users can better defend their digital privacy against these pervasive, invisible threats.
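The "red flag" checks the article recommends can be sketched as a filter over a device inventory exported from a router admin panel. The "Seekiny Studio" indicator comes from the article itself; the IP prefixes (reserved documentation ranges) and the record shape are illustrative assumptions, not verified indicators of compromise.

```python
# Sketch of the article's red-flag checks against a router device inventory.
# Indicator values other than the app name are illustrative assumptions.

SUSPICIOUS_APPS = {"seekiny studio"}            # flagged in the article
# Documentation-only IP ranges used as stand-ins for real IoC lists.
SUSPICIOUS_PREFIXES = ("203.0.113.", "198.51.100.")

def flag_device(device: dict) -> list:
    """Return reasons a device in the inventory looks suspicious."""
    reasons = []
    if any(app.lower() in SUSPICIOUS_APPS for app in device.get("apps", [])):
        reasons.append("known-bad app installed")
    for ip in device.get("outbound_ips", []):
        if ip.startswith(SUSPICIOUS_PREFIXES):
            reasons.append(f"unusual outbound traffic to {ip}")
    if not device.get("play_protect_certified", False):
        reasons.append("no Play Protect certification")
    return reasons

device = {
    "name": "generic-tv-box",
    "apps": ["Seekiny Studio"],
    "outbound_ips": ["203.0.113.7"],
    "play_protect_certified": False,
}
for reason in flag_device(device):
    print(reason)
```

A real audit would pull the device list and traffic logs from the router and check against published threat-intelligence feeds rather than hardcoded values.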


How CISOs Can Survive the Era of Geopolitical Cyberattacks

In the current era of geopolitical cyber warfare, Chief Information Security Officers (CISOs) must pivot from traditional perimeter defense to a robust strategy of internal containment. Geopolitical attacks, exemplified by Iranian wiper campaigns like the Handala group’s strike on Stryker, differ from standard ransomware because they prioritize operational chaos and destruction over financial gain. To survive these threats, the article outlines a vital five-step playbook centered on limiting lateral movement. First, CISOs should implement identity-aware access controls to prevent compromised credentials from granting broad network access. Second, they must enforce default-deny policies on administrative ports to block common pivot points. Third, restricting privileged accounts through role-based segmentation is essential to reduce the potential blast radius of a breach. Fourth, organizations need deep visibility into internal traffic to detect covert tunnels and unauthorized connection paths. Finally, implementing automated isolation capabilities ensures that destructive activity is contained before it can spread across the entire infrastructure. Ultimately, the transition to a self-defending network that focuses on stopping an attacker’s mobility rather than just their entry is crucial. By treating internal connectivity as a primary risk factor, CISOs can ensure their organizations remain operational despite increasingly sophisticated, state-sponsored cyber disruptions.


Building A Sustainable Hustle Culture

In "Building A Sustainable Hustle Culture," Greg Dolan, CEO of Keen Decision Systems, critiques the traditional "work hard, play hard" model for its tendency to cause burnout and employee dissatisfaction. Instead, he advocates for a reimagined "smart hustle" that prioritizes work-life integration and mental well-being over relentless overwork. Central to this approach is the implementation of a four-day workweek, which Dolan argues allows for the deep rest necessary for high performance. By establishing clear temporal constraints, employees are encouraged to maximize their focus during work hours while fully disconnecting during their time off. This period of rest often serves as a catalyst for innovation, as personal interactions and downtime can unlock fresh professional insights. Despite the fact that only 22% of American employers have adopted this schedule, Dolan highlights research showing that 98% of employees feel significantly more motivated under such a model. Ultimately, the article suggests that sustainable success is achieved not through endless hours, but by valuing employee autonomy and recognizing that a refreshed workforce is inherently more productive and creative, transforming the very definition of professional ambition and organizational health in the modern era.


5 Production Scaling Challenges for Agentic AI in 2026

In the article "5 Production Scaling Challenges for Agentic AI in 2026," Nahla Davies examines the significant hurdles organizations face when moving autonomous systems from prototype to large-scale production. The first major obstacle is orchestration complexity, which grows exponentially in multi-agent environments where coordination overhead often becomes a performance bottleneck. Second, current observability tools remain inadequate for tracing the non-deterministic, multi-step decision paths inherent in agentic workflows, making debugging a profound challenge. Third, cost management is increasingly difficult as autonomous loops consume tokens rapidly, with variable execution paths creating high billing unpredictability. Fourth, traditional testing and evaluation methods are insufficient for probabilistic systems; teams must instead develop advanced simulation environments or "LLM-as-a-judge" pipelines to ensure reliability. Finally, the rapid deployment of agentic capabilities has outpaced governance and safety frameworks. Implementing robust guardrails is essential to prevent harmful real-world actions—such as unauthorized transactions or database modifications—without stifling the agent’s practical utility. Ultimately, the analysis highlights that while agentic AI is transformative, bridging the production gap requires solving these foundational infrastructure and safety problems to move beyond "pilot purgatory" into meaningful, scaled operations.
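The guardrail requirement in the final point above can be illustrated with a minimal action dispatcher: low-risk, reversible actions run automatically, while the destructive ones the article names (database modifications, transactions) are held for human approval. The action names and risk table are illustrative assumptions, not part of the article.

```python
# Minimal sketch of an action guardrail for an agent loop.
# Action names and risk classifications are illustrative assumptions.

RISK_TABLE = {
    "read_record": "low",
    "send_summary_email": "low",
    "modify_database": "high",
    "issue_refund": "high",
}

def dispatch(action: str, approved_by_human: bool = False) -> str:
    """Execute low-risk actions; hold high-risk ones for approval."""
    risk = RISK_TABLE.get(action, "high")   # unknown actions default to high
    if risk == "high" and not approved_by_human:
        return "blocked: awaiting human approval"
    return f"executed: {action}"

print(dispatch("read_record"))
print(dispatch("modify_database"))
print(dispatch("modify_database", approved_by_human=True))
```

Defaulting unknown actions to high risk is the design choice that keeps the guardrail from silently eroding as new agent capabilities are added.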


Building trust in the future of quantum computing

The article "The Future of Quantum," published on Phys.org in March 2026, outlines a pivotal transition in quantum science from experimental demonstrations to "utility-scale" industrial applications. As the field marks the centennial of quantum mechanics, researchers are shifting focus from simply increasing qubit counts to enhancing system reliability through advanced error-mitigation and standardized benchmarking. A central theme is "building trust," which involves creating transparent performance metrics that allow industries to transition from classical to quantum-enhanced workflows in sectors like drug discovery, sustainable material design, and financial modeling. Significant breakthroughs highlighted include the development of diamond-based quantum internet nodes and the emergence of "quantum batteries" that exhibit faster charging at larger scales. Additionally, the analysis emphasizes the geopolitical dimension, noting substantial national investments aimed at securing sovereign quantum capabilities for national security and economic resilience. Ultimately, the piece argues that the "second quantum revolution" is now defined by the convergence of hardware stability and sophisticated software stacks, effectively turning the strange properties of entanglement and superposition into dependable tools for global digital infrastructure and solving previously intractable computational challenges.

Daily Tech Digest - March 20, 2026


Quote for the day:

"Nothing so conclusively proves a man's ability to lead others as what he does from day to day to lead himself." -- Thomas J. Watson


🎧 Listen to this digest on YouTube Music


Duration: 23 mins • Perfect for listening on the go.


Rethinking Cyber Preparedness in the Age of AI Cyberwarfare

The article "Rethinking Cyber Preparedness in the Age of AI and Cyberwarfare" highlights a critical disconnect termed the "readiness paradox," where nearly 80% of IT leaders feel prepared for cyberwarfare despite over half of organizations suffering AI-driven attacks recently. According to Armis’s latest report, traditional defense mechanisms are failing against agentic AI, which nation-state actors now deploy for rapid reconnaissance and lateral movement. As autonomous agents begin weaponizing zero-day exploits faster than human researchers can categorize them, the attack surface has expanded to include overlooked assets like building management systems and IoT devices. The financial stakes are escalating, with average ransomware payouts reaching $11.6 million, often exceeding annual security budgets. To counter these sophisticated threats, the article emphasizes that organizations must achieve superior visibility into their internal environments and map every network asset. Furthermore, IT leaders should embrace AI-driven security policies rather than ineffective bans to combat the risks of "shadow AI" used by employees. Ultimately, true resilience depends on whether a company knows its own infrastructure better than its adversaries, transforming AI from a liability into a vital defensive tool for modern geopolitical threats.


Are small language models finally having their moment?

The rapid ascent of Small Language Models (SLMs) marks a strategic shift in the artificial intelligence landscape, as enterprises seek to mitigate the immense costs and security risks associated with massive frontier models. Unlike their trillion-parameter counterparts, SLMs operate with significantly fewer parameters—ranging from millions to a few billion—allowing them to run locally on laptops or mobile devices without internet connectivity. This architectural efficiency ensures superior data privacy and regulatory compliance, particularly in sensitive sectors like healthcare, defense, and banking where proprietary data must remain on-premises. While Large Language Models (LLMs) excel at general synthesis and creative tasks, SLMs are increasingly preferred for specialized, rules-based functions such as code completion and document classification. Gartner even projects that by 2027, task-specific SLM usage will be three times that of LLMs. Through techniques like knowledge distillation and pruning, these compact models offer a cost-effective, energy-efficient alternative that delivers high performance with minimal latency. Consequently, the industry is moving toward a hybrid ecosystem where SLMs handle secure, specialized operations while LLMs provide broader abstraction, proving that in the evolving world of enterprise AI, bigger is not always better for every specific business need.
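The knowledge distillation mentioned above has a simple core: a small "student" model is trained to match the temperature-softened output distribution of a larger "teacher". A toy sketch of that objective, in plain Python with made-up logit values (real distillation would of course run inside a training framework):

```python
import math

# Toy sketch of the knowledge-distillation objective: KL divergence between
# temperature-softened teacher and student outputs. Logits are illustrative.

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.1, 0.4]
far_student = [0.2, 3.0, 1.0]
# A student that mimics the teacher incurs a much smaller loss.
print(distillation_loss(teacher, close_student))
print(distillation_loss(teacher, far_student))
```

The temperature parameter is what makes the teacher's "dark knowledge" (relative probabilities of wrong answers) visible to the student, rather than just the hard top-1 label.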


What it takes to level up your org’s AI maturity

To advance an organization's AI maturity, leaders must transition from merely "doing AI" to driving substantial business impact through an outcomes-based, AI-first strategy. According to experts Afshean Talasaz and Zar Toolan, this shift requires CIOs to adopt an "innovator-operator" mindset, balancing the need for rapid evolution with the stability required for consistent execution. Maturity is categorized into three levels, with the most advanced organizations enjoying a first-mover advantage led by CEO-backed agendas. A critical component of this journey is the "from-to so-that" modeling, which aligns data and AI initiatives with specific strategic outcomes like trust, business value, and reduced time to value. Winners in this space prioritize long-term infrastructure investments and rigorous data cleanup while securing short-term wins to demonstrate ROI. Furthermore, scaling AI successfully demands an intense focus on granular details rather than abstract concepts; without getting the technical and operational nuances right, true scale remains elusive. Ultimately, the transformation is a "team sport" requiring absolute alignment across the C-suite and a commitment to reducing internal volatility. By preparing thoroughly and maintaining consistent execution, organizations can move beyond operational tools to treat sovereign enterprise data as a powerful competitive moat.


The Power Ladder Architecture—A System For Turning Risk Work Into Decisions, Delivery And Proof

Maman Ibrahim’s article, "The Power Ladder Architecture," addresses the critical gap between identifying organizational risks and executing meaningful change. Ibrahim argues that risk management often fails not because of a lack of effort, but because it fails to convert analysis into "leadership work." Many teams present polished dashboards that provide a false sense of security while stalling when faced with difficult trade-offs. The Power Ladder is proposed as a solution, shifting the focus from mere reporting to three tangible outcomes: decisions, delivery, and proof. First, "decisions" require framing risks as binary choices for leadership, forcing clarity on trade-offs like speed versus security. Second, "delivery" ensures that once a choice is made, it is translated into structured tasks with clear ownership and deadlines. Finally, "proof" demands verifiable evidence that the risk profile has actually improved, rather than just being documented. By implementing this architecture, organizations can move beyond ceremonial risk management and establish a high-altitude system where audit concerns and cyber exposures are effectively neutralized. This approach transforms risk work into a powerful engine for operational resilience, ensuring that every identified vulnerability leads to a documented decision and a validated result.


The espionage reality: Your infrastructure is already in the collection path

Modern enterprises are increasingly caught in the "collection path" of global espionage, not necessarily as primary targets, but because they utilize the same centralized infrastructure as their adversaries. This shift highlights a structural exposure problem where shared dependencies—such as telecommunications, cloud services, and identity layers—become conduits for siphoning data and monitoring authentication. When national telecommunications providers are compromised, attackers can collect intelligence directly from the pathways an organization relies on, rendering traditional internal security measures insufficient. The article emphasizes that security leaders must move beyond internal asset protection to evaluate risk through the lens of upstream dependencies. Key recommendations include demanding integrity attestation from providers, reducing implicit trust in external networks, and hardening session layers to mitigate token theft and impersonation. Furthermore, the persistence of advanced persistent threats (APTs) within backbone infrastructure is now influencing the cyber insurance market, leading to higher premiums and stricter exclusions. Ultimately, organizations must integrate intelligence-driven assessments into their governance models, acknowledging that upstream compromise is a structural reality. To maintain resilience, CISOs must treat every external partner as an active component of their threat surface and design systems that degrade safely under inevitable compromise.


A direct approach to satellite communication

The article "A Direct Approach to Satellite Communication" on Data Center Dynamics explores the transformative shift in how satellite systems integrate with terrestrial network infrastructures. It highlights the evolution from traditional, isolated satellite setups toward a more "direct" and seamless integration within the broader data center and cloud ecosystem. The piece details how Low Earth Orbit (LEO) constellations and advancements in software-defined networking (SDN) are reducing latency and increasing bandwidth, making satellite links a viable, high-performance extension for enterprise networks rather than just a backup for remote locations. By treating space-based assets as reachable network nodes, providers can offer direct cloud connectivity, bypassing complex ground-station hops that previously hampered speed. This integration allows data centers to achieve greater resiliency and global reach, facilitating real-time data processing for edge computing and IoT applications in underserved regions. Ultimately, the analysis suggests that the convergence of space and ground infrastructure is turning satellite communication into a mainstream pillar of modern digital architecture, effectively "cloudifying" the final frontier to support the next generation of global, high-speed connectivity.


AI will accelerate tech job growth - former Tesla president explains where and why

In this ZDNet article, Jon McNeill, former Tesla president and current CEO of DVx Ventures, challenges the "tech job apocalypse" narrative by highlighting how artificial intelligence will actually accelerate employment in specific sectors. McNeill argues that the growing complexity of AI-driven ecosystems creates an intense demand for human expertise, particularly in infrastructure and networking. As organizations deploy massive server farms and sophisticated GPU clusters, the need for skilled professionals to manage, synchronize, and maintain these resilient networks becomes critical. While AI may handle basic coding and quality control, McNeill emphasizes that high-level architectural design remains a uniquely human domain, requiring "smart computer scientists" to navigate multi-layered model stacks. A core takeaway from his experience is the "automate last" principle, which suggests that businesses must first simplify and optimize their manual processes before introducing automation. By doing so, companies avoid the trap of embedding complexity into rigid code. Ultimately, McNeill urges technology professionals to move up the value chain, focusing on architectural innovation and process optimization, while cautioning against using expensive AI solutions where simpler, human-led methods are more effective and efficient for long-term growth.


Are You the Problem at Work? These 15 Questions Will Reveal the Truth.

In the Entrepreneur article "15 Questions That Reveal If You’re the Problem at Work," author Roy Dekel challenges leaders to look inward rather than blaming external factors for workplace issues like high turnover or low engagement. The piece argues that while many professionals prioritize strategic optimization, the true bottleneck is often a lack of emotional intelligence (EQ). To help leaders identify their blind spots, Dekel presents fifteen diagnostic questions that assess one’s "emotional wake." These include whether a team falls silent when the leader enters the room, how the leader reacts to bad news, and whether they value outcomes over effort. High EQ is framed as the foundation of psychological safety; leaders who possess it tend to listen more, apologize easily, and regulate their emotions under pressure, ultimately making their employees feel "bigger" rather than "smaller." By honestly answering these questions, managers can transition from being a source of tension to becoming a catalyst for trust and innovation. The article concludes that leadership is effectively the environment in which others must work, emphasizing that self-awareness is a learnable skill that can fundamentally transform organizational culture and employee satisfaction.


Aura breach and AI companion app flaws sharpen privacy fears

The recent security report highlighting widespread vulnerabilities in AI companion apps, coupled with a significant data exposure at identity protection firm Aura, has intensified global privacy concerns regarding the management of intimate user data. Aura recently confirmed that a targeted phishing attack on an employee allowed unauthorized access to approximately 900,000 records, including names and email addresses, though sensitive financial data remained secure. Simultaneously, research by Oversecured revealed that seventeen popular AI companion and dating simulator apps—boasting over 150 million installs—contain hundreds of critical and high-severity security flaws. These vulnerabilities, ranging from hardcoded cloud credentials to exploitable chat interfaces, potentially expose deeply personal information such as erotic chat histories, sexual orientation, and even suicidal thoughts. Despite the sensitivity of this data, the report emphasizes a regulatory "blind spot," noting that while authorities have addressed child safety and broad privacy disclosures, they have yet to enforce rigorous application-layer security standards. Together, these incidents underscore the growing risk of a digital era where companies frequently fail to protect the highly personal details they solicit from users. This convergence of corporate breaches and structural app flaws highlights an urgent need for stricter oversight and improved security architectures across the global network ecosystem.


The rise of the intelligent agent: Why human-in-the-loop is the future of AIOps

The article "The Rise of the Intelligent Agent: Why Human-in-the-Loop is the Future of AIOps" examines the transformative role of Agentic AI in IT operations through an interview with Srinivasa Raghavan S of ManageEngine. It argues that intelligent agents should amplify human expertise rather than replace it, specifically by automating repetitive tasks and filtering out telemetry noise to provide actionable insights. A central theme is the "human-in-the-loop" architecture, which integrates automation with strict policy guardrails, orchestration, and auditability to ensure engineers maintain control. These systems utilize machine learning for predictive anomaly detection and causal AI for rapid root-cause analysis, significantly decreasing mean time to resolution. By transitioning from reactive monitoring to self-driving observability, enterprises can better align technical health with business goals like customer experience and uptime SLAs. Although hybrid and multi-cloud environments introduce visibility challenges, unified observability platforms help manage this complexity. Ultimately, the article advocates for a phased adoption of autonomous remediation, building trust through transparent, guarded processes that combine machine speed with human oversight to navigate the intricacies of modern digital infrastructure effectively and safely.

Daily Tech Digest - February 04, 2026


Quote for the day:

"The struggle you're in today is developing the strength you need for tomorrow." -- Elizabeth McCormick



A deep technical dive into going fully passwordless in hybrid enterprise environments

Before we can talk about passwordless authentication, we need to address what I call the “prerequisite triangle”: cloud Kerberos trust, device registration and Conditional Access policies. Skip any one of these, and your migration will stall before it gains momentum. ... Once your prerequisites are in place, you face critical architectural decisions that will shape your deployment for years to come. The primary decision point is whether to use Windows Hello for Business, FIDO2 security keys or phone sign-in as your primary authentication mechanism. ... The architectural decision also includes determining how you handle legacy applications that still require passwords. Your options are limited: implement a passwordless-compatible application gateway, deprecate the application entirely or use Entra ID’s smart lockout and password protection features to reduce risk while you transition. ... Start with a pilot group — I recommend between 50 and 200 users who are willing to accept some friction in exchange for security improvements. This group should include IT staff and security-conscious users who can provide meaningful feedback without becoming frustrated with early-stage issues. ... Recovery mechanisms deserve special attention. What happens when a user’s device is stolen? What if the TPM fails? What if they forget their PIN and can’t reach your self-service portal? Document these scenarios and test them with your help desk before full rollout. 


When Cloud Outages Ripple Across the Internet

For consumers, these outages are often experienced as an inconvenience, such as being unable to order food, stream content, or access online services. For businesses, however, the impact is far more severe. When an airline’s booking system goes offline, lost availability translates directly into lost revenue, reputational damage, and operational disruption. These incidents highlight that cloud outages affect far more than compute or networking. One of the most critical and impactful areas is identity. When authentication and authorization are disrupted, the result is not just downtime; it is a core operational and security incident. ... Cloud providers are not identity systems. But modern identity architectures are deeply dependent on cloud-hosted infrastructure and shared services. Even when an authentication service itself remains functional, failures elsewhere in the dependency chain can render identity flows unusable. ... High availability is widely implemented and absolutely necessary, but it is often insufficient for identity systems. Most high-availability designs focus on regional failover: a primary deployment in one region with a secondary in another. If one region fails, traffic shifts to the backup. This approach breaks down when failures affect shared or global services. If identity systems in multiple regions depend on the same cloud control plane, DNS provider, or managed database service, regional failover provides little protection. In these scenarios, the backup system fails for the same reasons as the primary.
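The failure mode described above, where the backup region fails for the same reasons as the primary, can be made concrete by intersecting each deployment's dependency set: anything shared by every deployment is a single point of failure that regional failover cannot protect against. The deployment names and dependencies below are illustrative assumptions.

```python
# Sketch: regional failover only protects against region-local failures.
# Deployment names and dependency sets are illustrative assumptions.

DEPENDENCIES = {
    "identity-us-east": {"cloud-control-plane", "dns-provider-x", "db-us-east"},
    "identity-eu-west": {"cloud-control-plane", "dns-provider-x", "db-eu-west"},
}

def single_points_of_failure(deps: dict) -> set:
    """Dependencies shared by every deployment: if one fails, the backup
    fails for the same reason, so failover between regions cannot help."""
    return set.intersection(*deps.values())

def failover_protects_against(failed_service: str) -> bool:
    return failed_service not in single_points_of_failure(DEPENDENCIES)

print(single_points_of_failure(DEPENDENCIES))
print(failover_protects_against("db-us-east"))      # region-local failure
print(failover_protects_against("dns-provider-x"))  # shared, global failure
```

Walking the dependency graph this way is a cheap first step toward the article's point: high availability built on shared control planes, DNS, or managed databases is high availability in name only.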


The Art of Lean Governance: Elevating Reconciliation to Primary Control for Data Risk

In today's environment of continuous data ecosystems, governance based on periodic inspection is misaligned with how data risk emerges. The central question for boards, regulators, auditors, and risk committees has shifted: Can the institution demonstrate at the moment data is used that it is accurate, complete, and controlled? Lean governance answers this question by elevating data reconciliation from a back-office cleanup activity to the primary control mechanism for data risk reduction. ... Data profiling can tell you that a value looks unusual within one system. It cannot tell you whether that value aligns with upstream sources, downstream consumers, or parallel representations elsewhere in the enterprise. ... Lean governance reframes governance as a continual process-control discipline rather than a documentation exercise. It borrows from established control theory: Quality is achieved by controlling the process, not by inspecting outputs after failures. Three principles define this approach: Data risk emerges continuously, not periodically; Controls must operate at the same cadence as data movement; and Reconciliation is the control that proves process integrity. ... Data profiling is inherently inward-looking. It evaluates distributions, ranges, patterns, and anomalies within a single dataset. This is useful for hygiene, but insufficient for assessing risk. Reconciliation is inherently relational. It validates consistency between systems, across transformations, and through the lifecycle of data.
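A minimal sketch of reconciliation as a relational control, with illustrative field names: it compares a downstream system against its upstream source by business key, surfacing missing records and value mismatches rather than profiling either dataset alone.

```python
def reconcile(source, target, key="trade_id", measure="amount"):
    """Return discrepancies between two systems keyed by a business identifier."""
    src = {r[key]: r[measure] for r in source}
    tgt = {r[key]: r[measure] for r in target}
    return {
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        "missing_in_source": sorted(tgt.keys() - src.keys()),
        "value_mismatches": sorted(k for k in src.keys() & tgt.keys()
                                   if src[k] != tgt[k]),
    }

source = [{"trade_id": 1, "amount": 100}, {"trade_id": 2, "amount": 250}]
target = [{"trade_id": 1, "amount": 100}, {"trade_id": 2, "amount": 205},
          {"trade_id": 3, "amount": 75}]

breaks = reconcile(source, target)
# Trade 2 disagrees across systems; trade 3 exists only downstream.
```

Run continuously at the cadence of data movement, the same check becomes the process control the article describes, rather than a periodic cleanup.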


Working with Code Assistants: The Skeleton Architecture

Critical non-functional requirements, such as security, scalability, performance, and authentication, are system-wide invariants that cannot be fragmented. If every vertical slice is tasked with implementing its own authorization stack or caching strategy, the result is "Governance Drift": inconsistent security postures and massive code redundancy. This necessitates a new unifying concept: The Skeleton and The Tissue. ... The Stable Skeleton represents the rigid, immutable structures (Abstract Base Classes, Interfaces, Security Contexts) defined by the human, though possibly built by the AI. The Vertical Tissue consists of the isolated, implementation-heavy features (Concrete Classes, Business Logic) generated by the AI. This architecture draws on two classical approaches: actor models and object-oriented inversion of control. It is no surprise that some of the world’s most reliable software is written in Erlang, which utilizes actor models to maintain system stability. Similarly, in inversion of control structures, the interaction between slices is managed by abstract base classes, ensuring that concrete implementation classes depend on stable abstractions rather than the other way around. ... Prompts are soft; architecture is hard. Consequently, the developer must monitor the agent with extreme vigilance. ... To make the "Director" role scalable, we must establish "Hard Guardrails": constraints baked into the system that are physically difficult for the AI to bypass. These act as the immutable laws of the application.
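The Skeleton-and-Tissue split can be sketched with an abstract base class (names are illustrative): the human-owned skeleton fixes the authorization invariant, while AI-generated concrete classes supply only the business logic and cannot bypass the check.

```python
from abc import ABC, abstractmethod

class FeatureSlice(ABC):
    """Stable skeleton: every vertical slice inherits the same guardrail."""

    def execute(self, user, payload):
        if not self.authorized(user):      # system-wide invariant, human-owned
            raise PermissionError(f"{user} is not authorized")
        return self.run(payload)           # delegated to the AI-built tissue

    def authorized(self, user):
        return user in {"alice", "bob"}    # placeholder policy

    @abstractmethod
    def run(self, payload):
        """Concrete, implementation-heavy logic lives here."""

class ExportReport(FeatureSlice):          # a vertical "tissue" slice
    def run(self, payload):
        return f"report({payload})"
```

Because `execute` lives on the skeleton, a generated slice that forgets authorization still gets it: the dependency points from concrete tissue to stable abstraction, not the other way around.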


8-Minute Access: AI Accelerates Breach of AWS Environment

A threat actor gained initial access to the environment via credentials discovered in public Simple Storage Service (S3) buckets and then quickly escalated privileges during the attack, which moved laterally across 19 unique AWS principals, the Sysdig Threat Research Team (TRT) revealed in a report published Tuesday. ... While the speed and apparent use of AI were among the most notable aspects of the attack, the researchers also called out the way that the attacker accessed exposed credentials as a cautionary tale for organizations with cloud environments. Indeed, stolen credentials are often an attacker's initial access point to attack a cloud environment. "Leaving access keys in public buckets is a huge mistake," the researchers wrote. "Organizations should prefer IAM roles instead, which use temporary credentials. If they really want to leverage IAM users with long-term credentials, they should secure them and implement a periodic rotation." Moreover, the affected S3 buckets were named using common AI tool naming conventions, they noted. The attackers actively searched for these conventions during reconnaissance, enabling them to find the credentials quite easily, they said. ... During this privilege-escalation part of the attack — which took a mere eight minutes — the actor wrote code in Serbian, suggesting their origin. Moreover, the use of comments, comprehensive exception handling, and the speed at which the script was written "strongly suggests LLM generation," the researchers wrote.
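The periodic-rotation control the researchers recommend can be sketched simply; the 90-day threshold and the key-metadata shape below are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

ROTATION_MAX_AGE = timedelta(days=90)  # assumed rotation policy

def keys_needing_rotation(access_keys, now):
    """IDs of long-term access keys older than the rotation deadline."""
    return [k["id"] for k in access_keys
            if now - k["created"] > ROTATION_MAX_AGE]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
keys = [
    {"id": "AKIA-OLD", "created": datetime(2025, 1, 1, tzinfo=timezone.utc)},
    {"id": "AKIA-NEW", "created": datetime(2025, 5, 20, tzinfo=timezone.utc)},
]
stale = keys_needing_rotation(keys, now)  # ["AKIA-OLD"]
```

The better fix, as the researchers note, is to avoid long-term keys entirely in favor of IAM roles that issue temporary credentials.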


Ask the Experts: The cloud cost reckoning

According to the 2025 Azul CIO Cloud Trends Survey & Report, 83% of the 300 CIOs surveyed are spending an average of 30% more than what they had anticipated for cloud infrastructure and applications; 43% said their CEOs or boards of directors had concerns about cloud spend. Moreover, 13% of surveyed CIOs said their infrastructure and application costs increased with their cloud deployments, and 7% said they saw no savings at all. Other surveys show CIOs are rethinking their cloud strategies, with "repatriation" -- moving workloads from the cloud back to on-premises -- emerging as a viable option due to mounting costs. ... "At Laserfiche we still have a hybrid environment. So we still have a colocation facility, where we house a lot of our compute equipment. And of course, because of that, we need a DR site because you never want to put all your eggs in that one colo. We also have a lot of SaaS services. We're in a hyperscaler environment for Laserfiche cloud. "But the reason why we do both is because it actually costs us less money to run our own compute in a data center colo environment than it does to be all in on cloud." ... "The primary reason why the [cloud] costs have been increasing is because our use of cloud services has become much more sophisticated and much more integrated. "But another reason cloud consumption has increased is we're not as diligent in managing our cloud resources in provisioning and maintaining."


NIST develops playbook for online use cases of digital credentials in financial services

The objective is to develop what a panel description calls a “playbook of standards and best practices that all parties can use to set a high bar for privacy and security.” “We really wanted to be able to understand, what does it actually take for an organization to implement this stuff? How does it fit into workflows? And then start to think as well about what are the benefits to these organizations and to individuals.” “The question became, what was the best online use case?” Galuzzo says. “At which point our colleagues in Treasury kind of said, hey, our online banking customer identification program, how do we make that both more usable and more secure at the same time? And it seemed like a really nice fit. So that brought us to both the kind of scope of what we’re focused on, those online components, and the specific use case of financial services as well.” ... The model, he says, “should allow you to engage remotely, to not have to worry about showing up in person to your closest branch, should allow for a reduction in human error from our side and should allow for reduction in fraud and concern over forged documents.” It should also serve to fulfil the bank’s KYC and related compliance requirements. Beyond the bank, the major objective with mDLs remains getting people to use them. The AAMVA’s Maru points to his agency’s digital trust service, and to its efforts in outreach and education – which are as important in driving adoption as anything on the technical side. 


Designing for the unknown: How flexibility is reshaping data center design

Rapid advances in compute architectures – particularly GPUs and AI-oriented systems – are compressing technology cycles faster than many design and delivery processes can respond. In response, flexibility has shifted from a desirable feature to the core principle of successful data center design. This evolution is reshaping how we think about structure, power distribution, equipment procurement, spatial layout, and long-term operability. ... From a design perspective, this means planning for change across several layers: Structural systems that can accommodate higher equipment loads without reinforcement; Spatial layouts that allow reconfiguration of white space and service zones; and Distribution pathways that support future modifications without disrupting live operations. The objective is not to overbuild for every possible scenario, but to provide a framework that can absorb change efficiently and economically. ... Another emerging challenge is equipment lead time. While delivery periods vary by system, generators can now carry lead times approaching 12 months, particularly for higher capacities, while other major infrastructure components – including transformers, UPS modules, and switchgear – typically fall within the 30- to 40-week range. Delays in securing these items can introduce significant risk when procurement decisions are deferred until late in the design cycle.


Onboarding new AI hires calls for context engineering - here's your 3-step action plan

In the AI world, the institutional knowledge is called context. AI agents are the new rockstar employees. You can onboard them in minutes, not months. And the more context that you can provide them with, the better they can perform. Now, when you hear reports that AI agents perform better when they have accurate data, think more broadly than customer data. The data that AI needs to do the job effectively also includes the data that describes the institutional knowledge: context. ... Your employees are good at interpreting it and filling in the gaps using their judgment and applying institutional knowledge. AI agents can now parse unstructured data, but are not as good at applying judgment when there are conflicts, nuances, ambiguity, or omissions. This is why we get hallucinations. ... The process maps provide visibility into manual activities between applications or within applications. The accuracy and completeness of the documented process diagrams vary wildly. Front-office processes are generally very poor. Back-office processes in regulated industries are typically very good. And to exploit the power of AI agents, organizations need to streamline them and optimize their business processes. This has sparked a process reengineering revolution that mirrors the one in the 1990s. This time around, the level of detail required by AI agents is higher than for humans.


Q&A: How Can Trust be Built in Open Source Security?

The security industry has already seen examples in 2025 of bad actors deploying AI in cyberattacks – I’m concerned that 2026 could bring a Heartbleed- or Log4Shell-style incident involving AI. The pace at which these tools operate may outstrip the ability of defenders to keep up in real time. Another focus for the year ahead: how the Cyber Resilience Act (CRA) will begin to reshape global compliance expectations. Starting in September 2026, manufacturers and open source maintainers must report exploited vulnerabilities and breaches to the EU. This is another step closer to CRA enforcement, and other countries like Japan, India and Korea are exploring similar legislation. ... The human side of security should really be addressed just as urgently as the technical side. The way forward involves education, tooling and cultural change. Resilient human defences start with education. Courses from the Linux Foundation like Developing Secure Software and Secure AI/ML‑Driven Software Development equip users with the mindset and skills to make better decisions in an AI‑enhanced world. Beyond formal training, reinforcing awareness and creating a vigilant community is critical. The goal is to embed security into culture and processes so that it’s not easily overlooked when new technology or tools roll around. ... Maintainers and the community projects they lead are struggling without support from those that use their software.

Daily Tech Digest - October 30, 2025


Quote for the day:

"Leadership is like beauty; it's hard to define, but you know it when you see it." -- Warren Bennis



Why CIOs need to master the art of adaptation

Adaptability sounds simple in theory, but when and how CIOs should walk away from tested tools and procedures is another matter. ... “If those criteria are clear, then saying no to a vendor or not yet to a CEO is measurable and people can see the reasoning, rather than it feeling arbitrary,” says Dimitri Osler ... Not every piece of wisdom about adaptability deserves to be followed. Mantras like "fail fast" sound inspiring but can lead CIOs astray. The risk is spreading teams too thin, chasing fads, and losing sight of real priorities. “The most overrated advice is this idea you immediately have to adopt everything new or risk being left behind,” says Osler. “In practice, reckless adoption just creates technical and cultural debt that slows you down later.” Another piece of advice he’d challenge is the idea of constant reorganization. “Change for the sake of change doesn’t make teams more adaptive,” he says. “It destabilizes them.” Real adaptability comes from anchored adjustments, where every shift is tied to a purpose, otherwise, you’re just creating motion without progress, Osler adds. ... A powerful way to build adaptability is to create a culture of constant learning, in which employees at all levels are expected to grow. This can be achieved by seeing change as an opportunity, not a disruption. Structures like flatter hierarchies can also play a role because they can enable fast decision-making and give people the confidence to respond to shifting circumstances, Madanchian adds.


Building Responsible Agentic AI Architecture

The architecture of agentic AI with guardrails defines how intelligent systems progress from understanding intent to taking action—all while being continuously monitored for compliance, contextual accuracy, and ethical safety. At its core, this architecture is not just about enabling autonomy but about establishing structured accountability. Each layer builds upon the previous one to ensure that the AI system functions within defined operational, ethical, and regulatory boundaries. ... Implementing agentic guardrails requires a combination of technical, architectural, and governance components that work together to ensure AI systems operate safely and reliably. These components span across multiple layers — from data ingestion and prompt handling to reasoning validation and continuous monitoring — forming a cohesive control infrastructure for responsible AI behavior.​ ... The deployment of AI guardrails spans nearly every major industry where automation, decision-making, and compliance intersect. Guardrails act as the architectural assurance layer that ensures AI systems operate safely, ethically, and within regulatory and operational constraints. ... While agentic AI holds extraordinary potential, recent failures across industries underscore the need for comprehensive governance frameworks, robust integration strategies, and explicit success criteria. 
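A toy sketch of the layered-guardrail idea (all checks and names are placeholders): each proposed agent step passes an input guardrail and an action guardrail before execution, and every decision is logged for accountability.

```python
BLOCKED_TERMS = {"ssn", "password"}                       # prompt-layer screen
ALLOWED_ACTIONS = {"search", "summarize", "draft_email"}  # action allowlist
audit_log = []                                            # accountability layer

def run_agent_step(prompt, proposed_action):
    allowed = (not any(t in prompt.lower() for t in BLOCKED_TERMS)
               and proposed_action in ALLOWED_ACTIONS)
    # Log every decision, allowed or not, for later review.
    audit_log.append({"prompt": prompt, "action": proposed_action,
                      "allowed": allowed})
    return f"executed:{proposed_action}" if allowed else "blocked"

assert run_agent_step("summarize this memo", "summarize") == "executed:summarize"
assert run_agent_step("find the CEO's password", "search") == "blocked"
```

Real guardrail stacks add reasoning validation and continuous monitoring on top, but the pattern is the same: checks run before the action, and the audit trail is never optional.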


Decoding Black Box AI: The Global Push for Explainability and Transparency

The relationship between regulatory requirements and standards development highlights the connection between legal, technical, and institutional domains. Regulations like the AI Act can guide standardization, while standards help put regulatory principles into practice across different regions. Yet, on a global level, we mostly see recognition of the importance of explainability and encouragement of standards, rather than detailed or universally adopted rules. To bridge this gap, further research and global coordination are needed to harmonize emerging standards with regulatory frameworks, ultimately ensuring that explainability is effectively addressed as AI technologies proliferate across borders. ... However, in practice, several of these strategies tend to equate explainability primarily with technical transparency. They often frame solutions in terms of making AI systems’ inner workings more accessible to technical experts, rather than addressing broader societal or ethical dimensions. ... Transparency initiatives are increasingly recognized in fostering stakeholder trust and promoting the adoption of AI technologies, especially when clear regulatory directives on AI explainability are not developed yet. By providing stakeholders with visibility into the underlying algorithms and data usage, these initiatives demystify AI systems and serve as foundational elements for building credibility and accountability within organizations.


How neighbors could spy on smart homes

Even with strong wireless encryption, privacy in connected homes may be thinner than expected. A new study from Leipzig University shows that someone in an adjacent apartment could learn personal details about a household without breaking any encryption. ... the analysis focused on what leaks through side channels, the parts of communication that remain visible even when payloads are protected. Every wireless packet exposes timing, size, and signal strength. By watching these details over time, the researcher could map out daily routines. ... “Given the black box nature of this passive monitoring, even if the CSI was accurate, you would have no ground truth to ‘decode’ the readings to assign them to human behavior. So technically it would be advantageous, but you would have a hard time in classifying this data.” Once these patterns were established, a passive observer could tell when someone was awake, working, cooking, or relaxing. Activity peaks from a smart speaker or streaming box pointed to media consumption, while long quiet periods matched sleeping hours. None of this required access to the home’s WiFi network. ... The findings show that privacy exposure in smart homes goes beyond traditional hacking. Even with WPA2 or WPA3 encryption, network traffic leaks enough side information for outsiders to make inferences about occupants. A determined observer could build profiles of daily schedules, detect absences, and learn which devices are in use.
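The side-channel inference is easy to illustrate: payloads stay encrypted, but packet counts per interval remain visible, and even crude thresholds on that metadata separate quiet hours from media streaming. All numbers below are invented.

```python
def label_activity(packets_per_minute, quiet=5, burst=200):
    """Crude activity labels from per-interval packet counts alone."""
    return ["idle/asleep" if rate < quiet
            else "streaming/media" if rate > burst
            else "ambient activity"
            for rate in packets_per_minute]

observed = [2, 3, 40, 260, 310, 55, 1]  # invented per-minute counts
assert label_activity(observed) == [
    "idle/asleep", "idle/asleep", "ambient activity",
    "streaming/media", "streaming/media", "ambient activity", "idle/asleep",
]
```

A real observer would use richer features (packet sizes, signal strength, per-device traffic), but the point stands: no decryption is needed to recover a household's rhythm.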


Ransom payment rates drop to historic low as attackers adapt

The economics of ransomware are changing rapidly. Historically, attackers relied on broad access through vulnerabilities and credentials, operating with low overheads. The introduction of the RaaS model allowed for greater scalability, but also brought increased costs associated with access brokers, data storage, and operational logistics. Over time, this has eroded profit margins and fractured trust among affiliates, leading some groups to abandon ransomware in favour of data-theft-only operations. Recent industry upheaval, including the collapse of prominent RaaS brands in 2024, has further destabilised the market. ... In Q3 2025, both the average ransom payment (USD $376,941) and median payment (USD $140,000) dropped sharply by 66% and 65% respectively compared with the previous quarter. Payment rates also fell to a historic low of 23% across incidents involving encryption, data exfiltration, and other forms of extortion, underlining the challenges faced by ransomware groups in securing financial rewards. This trend reflects two predominant factors: Large enterprises are increasingly refusing to pay ransoms, and attacks on smaller organisations, which are more likely to pay, generally result in lower sums. The drop in payment rates is even more pronounced in data exfiltration-only incidents, with just 19% resulting in a payout in Q3, another record low.


Shadow AI’s Role in Data Breaches

The adoption barrier is nearly zero: no procurement process, no integration meetings, no IT tickets. All it takes is curiosity and an internet connection. Employees see immediate productivity gains (faster answers, better drafts, cleaner code), and the risks feel abstract. Even when policies prohibit certain AI tools, enforcement is tricky. Blocking sites might prevent direct access, but it won’t stop someone from using their phone or personal laptop. The reality is that AI tools are designed for frictionless use, and that very frictionlessness is what makes them so hard to contain. ... For regulated industries, the compliance fallout can be severe. Healthcare providers risk HIPAA violations if patient information is exposed. Financial institutions face penalties for breaking data residency laws. In competitive sectors, leaked product designs or proprietary algorithms can hand rivals an unearned advantage. The reputational hit can be just as damaging, and once customers or partners lose confidence in your data handling, restoring trust becomes a long-term uphill climb. Unlike a breach caused by a known vulnerability, the root cause in shadow AI incidents is often harder to patch because it stems from behavior, not just infrastructure. ... The first instinct might be to ban unapproved AI outright. That approach rarely works long-term. Employees will either find workarounds or disengage from productivity gains entirely, fostering frustration and eroding trust in leadership.


Deepfake Attacks Are Happening. Here’s How Firms Should Respond

The quality of deepfake technology is increasing “at a dramatic rate,” agrees Will Richmond-Coggan, partner and head of cyber disputes at Freeths LLP. “The result is that there can be less confidence that real-time audio deepfakes, or even video, will be detectable through artefacts and errors as it has been in the past.” Adding to the risk, many people share images and audio recordings of themselves via social media, while some host vlogs or podcasts.  ... As the technology develops, Tigges predicts fake Zoom meetings will become more compelling and interactive. “Interviews with prospective employees and third-party vendors may be malicious, and conventional employees will find themselves battling state sponsored threat actors more regularly in pursuit of their daily remit.” ... User scepticism is critical, agrees Tigges. He recommends "out of band authentication.” “If someone asks to make an IT-related change, ask that person in another communication method. If you're in a Zoom meeting, shoot them a Slack message.” To avoid being caught out by deepfakes, it is also important that employees are willing to challenge authority, says Richmond-Coggan. “Even in an emergency it will be better for someone in leadership to be challenged and made to verify their identity, than the organisation being brought down because someone blindly followed instructions that didn’t make sense to them, or which they were too afraid to challenge.”


Obsidian: SaaS Vendors Must Adopt Security Standards as Threats Grow

The problem is that SaaS vendors tend to set their own rules, he wrote: security settings and permissions can differ from app to app, hampering risk management; posture management is hobbled by limited security APIs that restrict visibility into configurations; and poor logs and data telemetry make threats difficult to detect, investigate, and respond to. “For years, SaaS security has been a one-way street,” Tran wrote. “SaaS vendors cite the shared responsibility model, while customers struggle to secure hundreds of unique applications, each with limited, inconsistent security controls and blind spots.” ... Obsidian’s Tran pointed to the recent breaches of hundreds of Salesforce customers due to OAuth tokens associated with a third party, Salesloft and its Drift AI chat agent, being compromised, allowing the threat actors access into both Salesforce and Google Workspace instances. The incidents illustrated the need for strong security in SaaS environments. “The same cascading risks apply to misconfigured AI agents,” Tran wrote. “We’ve witnessed one agent download over 16 million files while every other user and app combined accounted for just one million. AI agents not only move unprecedented amounts of data, they are often overprivileged. Our data shows 90% of AI agents are over-permissioned in SaaS.” ... Given the rising threats, “SaaS customers are sounding the alarm and demanding greater visibility, guardrails and accountability from vendors to curb these risks,” he wrote.


Why your Technology Spend isn’t Delivering the Productivity you Expected

Firms essentially spend years building technical debt faster than they can pay it down. Even after modernisation projects, they can’t bring themselves to decommission old systems. So they end up running both. This is the vicious cycle. You keep spending to maintain what you have, building more debt, paying what amounts to a complexity tax in time and money. This problem compounds in asset management because most firms are running fragmented systems for different asset classes, with siloed data environments and no comprehensive platform. Integrating anything becomes a nightmare. ... Here’s where it gets interesting, and where most firms stop short. Virtualisation gives you access to data wherever it lives. That’s the foundation. But the real power comes when you layer on a modern investment management platform that maintains bi-temporal records (which track both when something happened and when it was recorded) as well as full audit trails. Now you can query data as it existed at any point in time. Understand exactly how positions and valuations evolved. ... The best data strategy is often the simplest one: connect, don’t copy, govern, then operationalise. This may sound almost too straightforward given the complexity most firms are dealing with. But that’s precisely the point. We’ve overcomplicated data architecture to the point where 80 per cent of our budget goes to maintenance instead of innovation.
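A minimal bi-temporal sketch (the schema is illustrative): each record carries both the date a position was true and the date it was recorded, so a query can ask what the firm believed on any given day about any given business date.

```python
from datetime import date

records = [
    # (valid_date, recorded_date, value)
    (date(2025, 3, 31), date(2025, 4, 1), 100),   # position as first booked
    (date(2025, 3, 31), date(2025, 4, 15), 120),  # later correction
]

def as_of(valid_date, known_by):
    """Latest value for valid_date using only facts recorded by known_by."""
    candidates = [(rec, val) for v, rec, val in records
                  if v == valid_date and rec <= known_by]
    return max(candidates)[1] if candidates else None

assert as_of(date(2025, 3, 31), date(2025, 4, 2)) == 100   # pre-correction view
assert as_of(date(2025, 3, 31), date(2025, 4, 30)) == 120  # current view
```

Because corrections append rather than overwrite, the full audit trail survives: you can always reconstruct exactly how a valuation evolved.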


Beyond FUD: The Economist's Guide to Defending Your Cybersecurity Budget

Budget conversations often drift toward "Fear, Uncertainty, and Doubt." The language signals urgency without demonstrating scale, which weakens credibility with financially minded executives. Risk programs earn trust when they quantify likelihood and impact using recognized methods for risk assessment and communication. ... Applied to cybersecurity, VaR frames exposure as a distribution of financial outcomes rather than a binary event. A CISO can estimate loss for data disclosure, ransomware downtime, or intellectual-property theft and present a 95% confidence loss figure over a quarterly or annual horizon, aligning the presentation with established financial risk practice. NIST's guidance supports this structure by emphasizing scenario definition, likelihood modeling, and impact estimation that feed enterprise risk records and executive reporting. The result is a definitive change from alarm to analysis. A board hears an exposure stated as a probability-weighted magnitude with a clear confidence level and time frame. The number becomes a defensible metric that fits governance, insurance negotiations, and budget trade-offs governed by enterprise risk appetite. ... ELA quantifies the dollar value of risk reduction attributable to a control. The calculation values avoided losses against calibrated probabilities, producing a defensible benefit line item that aligns with financial reporting. 
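The VaR framing can be sketched as a small Monte Carlo simulation; every parameter below is invented for illustration, not calibrated to real loss data.

```python
import random

random.seed(7)  # reproducible illustration

def simulate_annual_loss(p_incident=0.3, mean_loss=2_000_000):
    """One simulated year: an incident occurs with probability p_incident."""
    if random.random() < p_incident:
        return random.expovariate(1 / mean_loss)  # severity draw
    return 0.0

years = sorted(simulate_annual_loss() for _ in range(10_000))
var_95 = years[int(0.95 * len(years)) - 1]   # 95% annual cyber VaR
expected_loss = sum(years) / len(years)      # probability-weighted magnitude
# Board framing: "With 95% confidence, annual losses will not exceed var_95."
```

Rerunning the simulation with a control's effect applied (a lower incident probability, say) and taking the difference in expected loss gives the avoided-loss benefit line item the ELA approach describes.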

Daily Tech Digest - October 06, 2025


Quote for the day:

"Success seems to be connected with action. Successful people keep moving. They make mistakes but they don’t quit." -- Conrad Hilton


Beyond Von Neumann: Toward a unified deterministic architecture

In large AI workloads, datasets often cannot fit into caches, and the processor must pull them directly from DRAM or HBM. Accesses can take hundreds of cycles, leaving functional units idle and burning energy. Traditional pipelines stall on every dependency, magnifying the performance gap between theoretical and delivered throughput. Deterministic Execution addresses these challenges in three important ways. First, it provides a unified architecture in which general-purpose processing and AI acceleration coexist on a single chip, eliminating the overhead of switching between units. Second, it delivers predictable performance through cycle-accurate execution, making it ideal for latency-sensitive applications such as large language model (LLM) inference, fraud detection and industrial automation. Finally, it reduces power consumption and physical footprint by simplifying control logic, which in turn translates to a smaller die area and lower energy use. ... For enterprises deploying AI at scale, architectural efficiency translates directly into competitive advantage. Predictable, latency-free execution simplifies capacity planning for LLM inference clusters, ensuring consistent response times even under peak loads. Lower power consumption and reduced silicon footprint cut operational expenses, especially in large data centers where cooling and energy costs dominate budgets.


Invest in quantum adoption now to be a winner in the quantum revolution

History shows that transformative compute paradigms require years of preparation before delivering real returns. Graphics processing units (GPUs), for example, took more than a decade of groundwork before fueling the AI revolution that now powers almost every sector of the economy. Organizations that invested early positioned themselves to capture this growth, while those who waited paid more, were caught flat-footed, and lost ground to competitors. Quantum will follow the same trajectory. ... Investing in readiness today reduces both risk and cost. By spreading integration work over time, organizations avoid the disruption and price premium of a sudden adoption push once the full enterprise value of quantum computing is achieved. Budget holders know that rushed, unplanned programs often exceed forecasts and erode margins. Smaller projects with clear deliverables can be managed within existing budgets and allow lessons to be learned incrementally, lowering both financial exposure and operational risk. For decision-makers, this creates a predictable investment profile rather than a costly “big bang” rollout. Early engagement also builds skills at a fraction of the future cost. Recruiting or retraining talent under pressure once the market overheats will be significantly more expensive. 


What an IT career will look like in 5 years — and how to thrive through the changes

Success in the near future will depend less on narrow expertise — mastering a specific technology stack for example — and more on evaluating, adapting, and applying the right tools to solve organizational problems. “People shift into cloud, security, data, or AI work depending on business need,” says Chris Camacho, COO and co-founder at Abstract Security. “Titles matter less than visible proof-of-work — small wins shared internally or publicly. Pick a lane and go deep, then layer AI expertise on top. And show your work — on GitHub, LinkedIn, wherever recruiters can see results.” Justina Nixon-Saintil, global chief impact officer at IBM, says success in the future will favor those who are adaptable and use AI to amplify creativity rather than replace it. “Technology roles are evolving from traditional tasks into more dynamic, interdisciplinary pathways that blend technical expertise with strategic thinking,” Nixon-Saintil says. “Those who can navigate the ethical challenges of AI and technology will succeed, leveraging innovation responsibly to solve complex problems and anticipate evolving business needs. You’ll not only future-proof your career but also unlock new opportunities for growth and innovation.” Beth Scagnoli, vice president of product management of Redpoint Global, agrees the successful pro of the near future will easily move between related but traditionally separate IT domains, such as system architecture and development.


Using AI as a Therapist? Why Professionals Say You Should Think Again

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal. One advantage of AI chatbots in providing support and connection is that they're always ready to engage with you. That can be a downside in some cases, where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently.  ... While chatbots are great at holding a conversation -- they almost never get tired of talking to you -- that's not what makes a therapist a therapist. They lack important context or specific protocols around different therapeutic approaches, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study alongside experts from Minnesota, Stanford and Texas. "To a large extent it seems like we are trying to solve the many problems that therapy has with the wrong tool," Agnew told me. "At the end of the day, AI in the foreseeable future just isn't going to be able to be embodied, be within the community, do the many tasks that comprise therapy that aren't texting or speaking."


CISOs rethink the security organization for the AI era

“Organizations that have invested in security over time are seeing efficiencies by layering AI-driven tools into their workflows,” Oleksak says. “But those who haven’t taken security seriously are still stuck with the same exposures they’ve always had. AI doesn’t magically catch them up.” In fact, because attackers are using AI to make phishing, scanning, and deepfakes cheaper and faster, Oleksak adds, the gap between mature and unprepared organizations is widening. ... “We’re now embedding cybersecurity into AI initiatives from the start, working closely across teams to ensure innovation is both safe and ethical,” she stresses. “Our commitment to responsible AI means every solution is designed with transparency, fairness, and accountability in mind.” Jason Lander, senior vice president of product management at Aya Healthcare, who manages security for the organization, is also seeing a change in the dynamics between cybersecurity and IT. “AI is noticeably reshaping how security and IT departments collaborate, streamline workflows, blend responsibilities, make decisions and redefine trust dynamics,” he says.  ... “IT’s focus is on speed, efficiency, and enabling the business, while the CISO’s focus is on protecting the business. That distinction is often misunderstood,” he maintains. “As AI introduces powerful new risks, from deepfakes and AI-driven phishing to employees unintentionally exposing sensitive IP through AI queries, only the CISO is positioned to anticipate and mitigate these threats.”


Why Secure Data Migration is the Next Big Boardroom Priority

Industries with the highest dependency on sensitive data are leading the way in secure migration. Financial services, with their heavy regulatory responsibilities and high stakes for customer trust, are among the most proactive industries when it comes to secure data migration. Banks moving from legacy mainframes to cloud-native platforms know that a single misstep could cascade into systemic risk. Healthcare, another high-stakes sector, faces similar urgency. ... Technology hyperscalers such as Microsoft, Google, and Amazon Web Services (AWS) play a dual role: enablers of secure migration and, simultaneously, critical dependencies for enterprises. This reliance brings resilience but also concentration risk. Many CIOs remain concerned about vendor lock-in, even as few alternatives exist at a comparable scale. Enterprises must therefore ensure secure migration while also diversifying their strategy to avoid overreliance on a single ecosystem. ... The shift is clear: secure data migration is no longer an IT department problem. It is a board-level agenda item, shaping strategy and shareholder value. As per the latest findings, 82% of CISOs now report directly to the CEO, underscoring their elevated importance. The World Economic Forum has gone further, warning in its 2025 Global Risks Report that data migration failures represent an underappreciated threat to global business resilience.


How self-learning AI agents will reshape operational workflows

Experience-based training for AI agents offers strong potential because it allows agents to act autonomously in real-world situations, guided by rewards that emerge from the environment. In the context of operations management, this means agents can learn from past incidents, events, customer tickets, application and infrastructure metrics and logs, as well as any other metrics made available to them. While modern-day hype cycles demand rapid results, much of the promise of AI agents lies in how they will improve operations management over time. Given enough time and training data, the AI agent will be able to plan actions and predict their consequences in the environment—i.e., predict the reward—much better than a human. ... Experience-based learning in this context requires human engineers to conduct post-incident reviews to understand an incident and establish actions to prevent that incident from recurring. However, in many cases, the learnings from a post-incident review are siloed to individual teams and not shared with the wider organization. ... Given that organizations do not consistently conduct post-incident learning reviews or share their findings across the wider organization, operations management is ripe for “agentification” powered by self-learning agents. Instead of burdening busy human engineers with post-incident reviews, AI agents can conduct these reviews and then apply this valuable experience-based training data. 
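The reward-driven learning loop described above can be sketched in a few lines. The snippet below is a minimal, illustrative toy, not anything from the article: it assumes a hypothetical set of remediation actions and an incident type, and uses a simple epsilon-greedy action-value update where the reward is the negative time-to-resolution, so the agent gradually prefers the remediation that resolves incidents fastest.

```python
import random
from collections import defaultdict

# Hypothetical remediation actions an operations agent might choose from.
ACTIONS = ["restart_service", "scale_out", "rollback_deploy"]

# Action-value estimates per (incident_type, action), learned from outcomes.
q_values = defaultdict(float)
counts = defaultdict(int)

def choose_action(incident_type, epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[(incident_type, a)])

def record_outcome(incident_type, action, minutes_to_resolve):
    """Reward is negative resolution time, so faster fixes score higher.
    The incremental-mean update keeps a running average of observed rewards."""
    reward = -minutes_to_resolve
    key = (incident_type, action)
    counts[key] += 1
    q_values[key] += (reward - q_values[key]) / counts[key]

# Simulated post-incident data: rollbacks resolve "bad_deploy" incidents fastest.
random.seed(42)
for _ in range(200):
    action = choose_action("bad_deploy")
    minutes = {"restart_service": 45, "scale_out": 60, "rollback_deploy": 10}[action]
    record_outcome("bad_deploy", action, minutes + random.uniform(-5, 5))

best = max(ACTIONS, key=lambda a: q_values[("bad_deploy", a)])
print(best)  # the agent learns that rolling back is the fastest remediation
```

In a real deployment, the "reward signal" would come from the incident metrics and post-incident review data the article mentions, rather than a simulated resolution time, and the action space would be far richer than three options.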


The DPDPA’s impact on law firms

Most of the personal data processing in HR departments is for purposes related to employment. The DPDPA does provide an exemption from obtaining consent for employment purposes under Sec. 7(i) ... However, a reading of this Section would indicate that this exemption is applicable only to current employees and it excludes all processing which happens post-employment or pre-employment. In some instances, where an employee or intern voluntarily emails their resume to an HR department and the HR department does not consider the application or take any action on the resume received through email, DPDPA compliance obligations will not kick in, as the DPDPA does not apply to personal data which is provided voluntarily by a data principal. But HR departments will need to be vigilant about data collected through designated online portals available on their websites, as in such a case, they can be said to be actively inviting applications, unlike the former scenario wherein a candidate is voluntarily sharing their data. ... Under Section 3 of the DPDPA, any foreign entity offering services to individuals in India falls within the law’s extra-territorial scope. ... Several law firms in India have shown significant efforts in enhancing operational standards to ensure that client and partner data is handled safely. Several law firms have implemented standards like ISO 27001, which improves information security, risk management and compliance with regulations.


Is quantum computing poised for another breakthrough?

“Almost all of us in the quantum computing field are absolutely convinced,” Kulkarni said. “But even the skeptics who always thought this was something of the future and never really going to materialize, I think, can concur with us that this is going to happen.” ... Quantum processors currently provide physicists and other scientists with the tools to do big research projects that simply aren’t realistic with other computers. That’s the main use of the technology for now, Boixo said, but as things continue to move forward, the pool of who will use quantum computers will grow. Of course, it’s not just scientists trying to uncover the limits of quantum technology who are using the computers. Marc Lijour, a researcher with the Institute of Electrical and Electronics Engineers, told IT Brew that attackers are interested in how quantum computers can potentially crack encryption much faster than traditional computers. They’re probably already playing with the technology, and waiting until the computers are widely available. “Attackers…are downloading everything they can at the moment and storing it, basically copying the internet and anything they can so they can open it later [using quantum technology],” Lijour said. That’s still a ways in the future. Boixo estimated chaining together 50–100 logical qubits is about five or so years away. With a number of firms looking at developing the next level of quantum computing, it’s a race. 


CISO Spotlights Cybersecurity Challenges in Education Following Kido Breach

Budget is certainly going to be a challenge for all, but more so for state-funded schools and organizations. We do see that as being a challenge everywhere, they have limited resources. The overwhelming feedback is that they just don't have any money to spend, and it's perceived, therefore, that they can't deploy the security controls that they need. That's a big thing, but I think an even bigger issue is the lack of expertise and time. On lots of occasions you’ll discover institutions where there just aren’t experts on the ground that can manage these cybersecurity risks. They often lean on IT service providers and assume that they’re doing something about cybersecurity, whereas that is not necessarily the case. Budget, expertise and time are big constraints, and I think those issues are causing so many schools to be vulnerable. ... There are plenty of things that can be done with little or no cost. Reviewing all the users, identifying who’s got access and making sure MFA is turned on doesn't carry a significant cost beyond somebody taking the time to do it. That’s going to have a material impact on their posture. Most schools will have an awareness training program, but it's probably a tick box exercise where somebody has to do the course when they join and that’s it. Assigning one person to really own and champion that program could make a material difference to peoples’ awareness.
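The low-cost access review described above can even be partially scripted. The sketch below is purely illustrative: it assumes a hypothetical CSV export of user accounts from an identity provider (the column names `username`, `last_login`, `mfa_enabled`, and `admin` are invented for this example, not any specific vendor's schema) and flags accounts without MFA and accounts that have not logged in recently.

```python
import csv
import io

# Hypothetical identity-provider export; in practice this would be read
# from a file downloaded from the school's admin console.
user_export = """username,last_login,mfa_enabled,admin
head.teacher,2026-03-10,true,true
it.support,2026-03-18,true,true
old.contractor,2024-06-01,false,false
office.admin,2026-03-19,false,true
"""

def audit(csv_text, stale_before="2025-09-01"):
    """Flag accounts without MFA and stale accounts that may need disabling."""
    no_mfa, stale = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["mfa_enabled"].strip().lower() != "true":
            no_mfa.append(row["username"])
        if row["last_login"] < stale_before:  # ISO dates compare lexically
            stale.append(row["username"])
    return no_mfa, stale

no_mfa, stale = audit(user_export)
print("No MFA:", no_mfa)   # note office.admin: privileged yet unprotected
print("Stale:", stale)
```

A review like this costs nothing but the time to run it, which is exactly the point being made: turning on MFA for the flagged accounts and disabling the stale ones materially improves a school's posture with no new spend.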