
Daily Tech Digest - March 17, 2026


Quote for the day:

"Make heroes out of the employees who personify what you want to see in the organization." -- Anita Roddick


🎧 Listen to this digest on YouTube Music


Duration: 20 mins • Perfect for listening on the go.


How organizations can make a successful transition to Post-Quantum Cryptography (PQC)

In the article "How Organizations Can Make a Successful Transition to Post-Quantum Cryptography (PQC)," the author outlines a strategic framework for businesses to defend against the impending "Harvest Now, Decrypt Later" (HNDL) threat. This tactic involves malicious actors exfiltrating sensitive data today to decrypt it once powerful quantum computers become viable. To counter this, organizations must first establish a top-down strategy that prioritizes a hybrid cryptographic approach. By combining classical, proven algorithms like ECDH with new NIST-standardized PQC algorithms such as ML-KEM, companies create a safety net against unforeseen vulnerabilities in emerging standards. A critical foundational step is the creation of a comprehensive "Crypto-Bill of Materials" (CBOM) to inventory all cryptographic assets and prioritize "crown jewels" like financial transactions and intellectual property. Furthermore, enterprises should codify these requirements into their procurement policies to prevent the accumulation of further cryptographic debt during new software acquisitions. Finally, the article stresses the importance of assigning clear, cross-functional ownership to ensure accountability across IT, legal, and supply chain departments. By treating the PQC transition as a long-term strategic initiative rather than a simple technical patch, CIOs can ensure their organizations remain resilient and protect the long-term integrity of their most vital data.


Who’s in the data-center space race?

In the article "Who’s in the data-center space race?" on Network World, Maria Korolov explores the ambitious frontier of orbital computing and the major players vying for celestial dominance. Tech giants like SpaceX and Google lead the charge, with Elon Musk’s SpaceX proposing a massive constellation of one million satellites for xAI workloads, while Google’s Project Suncatcher aims to deploy solar-powered tensor processing units in orbit. These initiatives seek to capitalize on abundant solar energy and the natural cooling of space, bypassing terrestrial power constraints and environmental hurdles. Startups like Lonestar are even targeting lunar data storage, while European and Chinese consortiums plan to establish extensive AI training networks by 2030. Despite the promise of high-speed optical downlinks and lower latency, significant obstacles remain, including the extreme costs of orbital launches and the necessity of radiation-hardening sensitive silicon chips. Experts predict that economic feasibility hinges on reducing launch prices to under $200 per kilogram, a milestone expected by the mid-2030s. Ultimately, this space race represents a transformative shift in infrastructure, moving beyond terrestrial limitations to build a decentralized, planet-scale intelligence backbone that could redefine global connectivity and artificial intelligence processing.


When Code Becomes Cheap, Engineering Becomes Governance

In the article "When Code Becomes Cheap, Engineering Becomes Governance" on DevOps.com, Alan Shimel discusses how generative AI is fundamentally recalibrating the software development lifecycle by making the production of code almost instantaneous and effectively "cheap." As AI agents handle the manual labor of writing syntax, the traditional bottleneck of code authorship is vanishing, creating a significant paradox: while output volume explodes, risks associated with security, technical debt, and architectural coherence multiply. Consequently, the core discipline of software engineering is transitioning from a focus on creation to a focus on governance. Engineering teams must now prioritize the curation, verification, and oversight of automated output to prevent unmanageable complexity. This new paradigm demands that developers act as strategic supervisors or "building inspectors," implementing rigorous policy enforcement and guardrails to ensure system integrity. Shimel argues that in an era of abundant code, human expertise is most valuable for high-level decision-making and risk management. Ultimately, success depends on an organization's ability to evolve its culture, treating governance as the essential backbone of sustainable, secure software delivery. This evolution ensures that while machines generate syntax, humans remain responsible for the stability and comprehensibility of the overall system.

On March 6, 2026, the Trump Administration unveiled its "Cyber Strategy for America," an aggressive framework emphasizing offensive deterrence, deregulation, and the rapid adoption of AI-powered security measures. While the seven-page document outlines six core pillars—including shaping adversary behavior and hardening critical infrastructure—experts at Biometric Update highlight a significant "identity gap" within the overarching plan. Although the strategy explicitly prioritizes emerging technologies like blockchain, post-quantum cryptography, and autonomous agentic AI, it notably fails to establish a centralized national digital identity strategy or a unified identity assurance framework. This omission is particularly striking as identity fraud and synthetic personas increasingly fuel transnational cybercrime, financial scams, and voter suppression fears. Critics argue that treating digital identity as an afterthought rather than a front-line defense leaves both government and the private sector navigating a fragmented regulatory environment. Interestingly, this lack of focus contrasts with concurrent reports from the Treasury Department, which position digital identity as a critical security layer for modern digital assets. Ultimately, while the strategy successfully shifts the national posture toward risk imposition and technological dominance, it remains an incomplete doctrine by leaving the foundational challenge of identity verification unresolved in an era of sophisticated AI-generated deception.


Practical DevOps Leadership Without the Drama

In the article "Practical DevOps Leadership Without the Drama" on the DevOps Oasis blog, the author argues that effective leadership in a technical environment is less about "mystical" management and more about grounded problem-solving and unblocking teams. The piece outlines several pragmatic pillars to maintain a high-performing, low-stress culture. First, it emphasizes starting every initiative by clearly defining the problem to avoid "hobby projects" and align with DORA metrics. Second, it champions visibility through flow, risk, and ownership tracking, suggesting that "red is a color, not a career-limiting event" to surface issues early. Third, leadership involves setting standards that remove repetitive decisions rather than autonomy, using tools like Kubernetes baselines to make the "safe path the easy path." The article also stresses that incident leadership requires a calm, structured routine where coordination is prioritized over individual heroics. Finally, it highlights the importance of a systematic approach to feedback, intentional hiring for systems thinking, and the courage to use guardrails—such as policy-as-code—to prevent predictable operational pain. Ultimately, the post serves as a playbook for building resilient teams that ship quality code without sacrificing sleep or psychological safety.


Rocketlane CEO: AI requires a structural reset of professional SaaS

In the Techzine article, Rocketlane CEO Srikrishnan Ganesan argues that the rise of artificial intelligence necessitates a fundamental "structural reset" of the professional SaaS industry. He contends that simply layering AI features onto existing platforms is a superficial approach that fails to capture the technology's true potential. Instead, the next generation of SaaS must transition from being mere "systems of record" to "systems of action" where AI agents actively execute tasks—such as automated documentation, data transformation, and project management—rather than just tracking them. This shift is particularly impactful for professional services and customer onboarding, where traditional hourly billing models are becoming obsolete in favor of value-based outcomes and fixed fees. Ganesan emphasizes that by delegating routine configurations to AI, human teams can evolve into "orchestrators" focused on high-level strategy and ROI. This transformation enables vendors to offer more scalable, "white-glove" experiences while significantly reducing delivery costs. Ultimately, the article suggests that organizations re-architecting their service models around autonomous capabilities will define the next operating model, while those clinging to legacy, labor-intensive frameworks risk being outpaced by AI-native competitors that redefine the speed of service delivery.


Cryptojackers Lurk in Open Source Clouds

The article "Cryptojackers Lurk in Open Source Clouds" from CACM News explores the growing threat of host-based cryptojacking, where attackers infiltrate Linux cloud environments to surreptitiously mine cryptocurrency. Unlike traditional PC-based malware, cloud-level cryptojacking is highly lucrative because a single entry point can grant access to millions of processors. Attackers typically evade detection by "throttling" their resource usage to blend into background kernel noise and utilizing techniques like program-identification randomization to bypass standard monitoring. This structural complexity often obscures accountability, enabling malicious code to persist even through manual scans. To combat these sophisticated vulnerabilities, researchers introduced CryptoGuard, an open-source framework that leverages deep learning to integrate detection and automated remediation. By tracking specific time-series patterns in kernel-space system calls rather than relying on easily obfuscated process IDs, CryptoGuard can pinpoint scheduler tampering and execute periodic automated erasures to thwart reinfection. This represents a vital shift toward proactive defense, moving beyond simple alerting to real-time, scale-ready intervention. Ultimately, the article argues that restoring visibility in dynamic cloud infrastructures requires such automated, high-fidelity solutions to empower security teams against innovatively hidden cyber threats that continue to exploit vast, under-monitored computational resources.

The article "A million hard drives go offline daily: the massive data waste problem" on Data Center Dynamics highlights a critical yet often overlooked sustainability crisis within the global technology industry. Each year, tens of millions of hard disk drives reach the end of their functional lifespan, yet a staggering number are shredded rather than repurposed. This practice, often driven by rigid security compliance standards like NIST 800-88, leads to an environmental "tsunami" of e-waste, with an estimated one million drives being destroyed every single day. The destruction of these devices not only creates massive amounts of physical waste but also results in the permanent loss of precious, non-renewable raw materials such as neodymium, gold, and copper, valued at hundreds of millions of dollars annually. To combat this, the piece advocates for a shift toward a circular economy model, emphasizing secure data sanitization—software-based wiping—over physical destruction. By adopting "delete, don't destroy" policies and utilizing robotic disassembly for component recovery, the industry could significantly reduce its carbon footprint. Ultimately, the article calls for a collaborative effort between tech giants, regulators, and data center operators to prioritize resource recovery and sustainable innovation to protect the planet’s future.
In the article "Green IT Meets Database Engineering," Craig S. Mullins explores the critical intersection of database administration and environmental sustainability, arguing that efficient data architecture is essential for reducing an organization's energy footprint. As data centers consume a significant portion of global electricity, DBAs must transition toward "carbon-aware" engineering by addressing "data sprawl"—the accumulation of unused tables and redundant records that inflate storage and cooling demands. The author emphasizes that fundamental practices like proper schema normalization, appropriate data typing, and rigorous index discipline are not just performance boosters but key drivers for energy conservation. Efficient SQL coding further reduces CPU cycles and I/O operations, directly cutting power usage. Furthermore, the shift toward cloud-native environments requires precise "right-sizing" to prevent energy waste from overprovisioned resources. By integrating these green principles into the architectural lifecycle, database engineers can align cost-effectiveness with corporate social responsibility. Ultimately, the piece posits that sustainable data management is rooted in disciplined engineering, where every optimized query and trimmed dataset contributes to a more ecologically responsible digital ecosystem without sacrificing growth or technical excellence.


What Africa’s shared data centres can teach the rest of EMEA

In the article "What Africa’s shared data centres can teach the rest of EMEA" on Data Centre Review, Ryan Holmes explores how African nations are leapfrogging traditional IT evolution by bypassing legacy infrastructure in favor of local, shared colocation platforms. As demand for AI-driven workloads and real-time processing surges, organizations across the continent are prioritizing proximity to minimize latency and ensure data sovereignty. This shift mirrors earlier technological breakthroughs like mobile money, allowing emerging markets to avoid the high costs and risks associated with self-managed enterprise servers or offshore hyperscale dependency. The author highlights that shared data centers offer a pragmatic solution for governments and businesses to meet strict residency regulations while maintaining high operational resilience. Furthermore, the absence of major hyperscalers in many African regions has fostered a robust ecosystem of professionally managed, carrier-neutral facilities that provide a cost-effective, opex-based alternative to capital-intensive builds. Ultimately, Africa’s move toward localized, resilient, and collaborative infrastructure provides a vital blueprint for the rest of EMEA, demonstrating that digital independence and performance are best achieved through partnership and strategic proximity rather than isolated ownership or total reliance on global giants.

Daily Tech Digest - February 12, 2026


Quote for the day:

"Do not follow where the path may lead. Go instead where there is no path and leave a trail." -- Muriel Strode



The hard part of purple teaming starts after detection

Imagine you’re driving, and you see the car ahead braking suddenly. Awareness helps, but it’s your immediate reaction that avoids the collision. Insurance plans don’t matter at that moment. Nor do compliance reports or dashboards. Only vigilance and rehearsal matter. Cyber resilience works the same way. You can’t build the instinct required to act by running one simulation a year. You build it through repetition. Through testing how specific scenarios unfold. Through examining not only how adversaries get in, but also how they move, escalate, evade, and exfiltrate. This is the heart of real purple teaming. ... AI can accelerate analysis, but it can’t replace intuition, design, or the judgment required to act. If the organization hasn’t rehearsed what to do when the signal appears, AI only accelerates the moment when everyone realises they don’t know what happens next. This is why so much testing today only addresses opportunistic attacks. It cleans up the low-hanging fruit. ... The standard testing model traps everyone involved: One-off tests create false confidence; Scopes limit imagination; Time pressure eliminates depth; Commercial structures discourage collaboration; Tooling gives the illusion of capability; and Compliance encourages the appearance of rigour instead of the reality of it. This is why purple teaming often becomes “jump out, stabilize, pull the chute, roll on landing.” But what about the hard scenarios? What about partial deployments? What about complex failures? That’s where resilience is built.


State AI regulations could leave CIOs with unusable systems

Numerous states are considering AI regulations for systems used in medical care, insurance, human resources, finance and other critical areas. ... Despite the growing regulatory risk, businesses appear unwilling to slow AI deployments. "Moving away from AI with the regulation is not going to be an option for us," Juttiyavar said. He said AI is already deeply embedded in how organizations operate and is essential for speed and competitiveness. ... If CIOs establish strong internal frameworks for AI deployment, "that helps you react better to legislative change" and anticipate new requirements, Kourinian said. Still, regulatory shifts can leave companies with systems that are technically sound but legally unusable, said Peter Cassat, a partner at CM Law. To manage that risk, Cassat advises CIOs to negotiate "change of law" provisions in vendor contracts that provide termination rights if regulations make continued use of a system impossible or impractical. But such provisions do not eliminate the risk of sunk costs. "If it's a SaaS provider and you've signed a three-year term, they don't want to necessarily let you walk for free either," Cassat said. Beyond legal exposure, CIOs must also anticipate public and political reaction to AI and biometric tools. "The CIO absolutely has the responsibility to understand how this technology could be perceived -- not just internally, but by the public and lawmakers," said Mark Moccia, an analyst at Forrester Research.


Your dev team isn’t a cost center — it’s about to become a multiplier

If you treat AI as a pathway to eliminate developer headcount, sure, you’ll capture some cost savings in the short term. But you’ll miss the bigger opportunity entirely. You’ll be the bank executive in 1975 who saw ATMs and thought, “Great, we can close branches and fire tellers.” Meanwhile, your competitors have automated the mundane teller tasks and are opening new branches to sell higher-end services to more people. The 1.4-1.6x productivity improvement that GDPval documented isn’t about doing the same work with fewer people. It’s about doing vastly more work with the same people. That new product idea you had that was 10x too expensive to develop? It’s now possible. That customer experience improvement that could drive loyalty that you didn’t have the headcount for? It’s on the table. The technical debt you’ve been accumulating? You can start to pay it down. ... What struck me about Werner’s final keynote wasn’t the content, it was the intent. This was Werner’s last time at that podium. He could have done a victory lap through AWS’s greatest hits. Instead, he spent his time outlining a framework of success for the next generation of developers. For those of us leading technology organizations, the framework is both validating and challenging. Validating because these traits aren’t new. They have always separated good developers from great ones. Challenging because AI amplifies everything, including the gaps in our capabilities.


Cloud teams are hitting maturity walls in governance, security, and AI use

Migration activity remains heavy across enterprises, especially for data platforms. At the same time, downtime tolerance is limited. Nearly half of respondents said their organizations can accept only one to six hours of downtime for cutover during migration. That combination creates pressure to migrate at speed while keeping data integrity intact. In regulated environments, that pressure extends to audit evidence and compliance validation, which often needs to be produced in parallel with migration execution. ... Cloud-native managed database adoption is also high. More than half of respondents reported using managed cloud databases, and a third reported using SaaS-based database services. Only 10% reported operating self-hosted databases. This shift toward managed services reduces operational burden on infrastructure teams, but it increases reliance on identity governance, network segmentation, and application-layer security controls. It also creates stronger dependency on cloud provider logging and access models. ... Development stacks also reflect this shift. Python was reported as a primary language, with Java close behind. These languages remain central to AI workflows, data engineering, and enterprise application back ends. Machine learning adoption is also widespread since organizations reported actively training ML models. Many of these pipelines are now part of production environments, making operational continuity a priority.


MIT's new fine-tuning method lets LLMs learn new skills without losing old ones

To build truly adaptive AI, the industry needs to solve "continual learning," allowing systems to accumulate knowledge much like humans do throughout their careers. The most effective way for models to learn is through "on-policy learning." In this approach, the model learns from data it generates itself, allowing it to correct its own errors and reasoning processes. This stands in contrast to learning by simply mimicking static datasets. ... The standard alternative is supervised fine-tuning (SFT), where the model is trained on a fixed dataset of expert demonstrations. While SFT provides clear ground truth, it is inherently "off-policy." Because the model is just mimicking data rather than learning from its own attempts, it often fails to generalize to out-of-distribution examples and suffers heavily from catastrophic forgetting. SDFT seeks to bridge this gap: enabling the benefits of on-policy learning using only prerecorded demonstrations, without needing a reward function. ... For teams considering SDFT, the practical tradeoffs come down to model size and compute. The technique requires models with strong enough in-context learning to act as their own teachers — currently around 4 billion parameters with newer architectures like Qwen 3, though Shenfeld expects 1 billion-parameter models to work soon. It demands roughly 2.5 times the compute of standard fine-tuning, but is best suited for organizations that need a single model to accumulate multiple skills over time, particularly in domains where defining a reward function for reinforcement learning is difficult or impossible.


The Illusion of Zero Trust in Modern Data Architectures

Modern data stacks stretch far beyond a single system. Data flows from SaaS tools into ingestion pipelines, through transformation layers, into warehouses, lakes, feature stores, and analytics tools. Each hop introduces a new identity, a new permission model, and a new surface area for implicit trust. Not to mention, niches like healthcare data storage are a completely different beast. Whatever the system may be, teams may enforce strict access at the perimeter while internal services freely exchange data with long-lived credentials and broad scopes. This is where the illusion forms. Zero Trust is declared because no user gets blanket access, yet services trust other services almost entirely. Tokens are reused, roles are overprovisioned, and data products inherit permissions they were never meant to have. The architecture technically verifies everything, but conceptually trusts too much. ... Data rarely stays where Zero Trust policies are strongest. Warehouses enforce row-level security, masking, and role-based access, but data doesn’t live exclusively in warehouses. Extracts are generated, snapshots are shared, and datasets are copied into downstream systems for performance or convenience. Each copy weakens the original trust guarantees, and problems far worse than rising cloud costs follow. Once data leaves its source, context is often stripped away.
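
One practical antidote to long-lived, broadly scoped service credentials is to mint short-lived, narrowly scoped tokens per hop and verify them on receipt. The sketch below is a deliberate simplification (HMAC-signed tokens with an expiry and an explicit scope, using a shared demo secret); production systems would typically issue per-workload identities through something like SPIFFE or cloud IAM instead.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-only-secret"  # stand-in; real systems use per-workload identities

def mint_token(service: str, scope: str, ttl_seconds: int = 300) -> str:
    claims = {"sub": service, "scope": scope, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def verify_token(token: str, required_scope: str) -> bool:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    # Reject expired tokens and tokens whose scope is broader or different
    # than what this specific call actually needs.
    return claims["exp"] > time.time() and claims["scope"] == required_scope

token = mint_token("ingestion-pipeline", scope="warehouse:read:orders")
print(verify_token(token, required_scope="warehouse:read:orders"))     # True
print(verify_token(token, required_scope="warehouse:read:customers"))  # False
```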


Top Cyber Industry Defenses Spike CO2 Emissions

Though rarely discussed, cybersecurity protections, like any other technology, carry their own costs to the planet. Programs run on electricity. Servers demand water. Devices are built from natural resources and eventually get thrown out. ... "CISOs can help or make the situation worse [when it comes to] sustainability, depending on the way they write security rules," he says. "And that's why we started a study: to enable the CISO to be part of the sustainability process of his or her company, and to find actionable ways to reduce CO2 consumption while at the same time not adding more risks." ... "We collect a lot of logs, not exactly always knowing why, and the retention period is a huge cost in terms of infrastructure, and also CO2," Billois says. "So at some point, you can revisit your log collection, and log retention, and if there are no legal issues, you can think about compressing them to reduce their volume. It's something that is, I would say, quite easy to do. ... All of that said, unfortunately, the biggest cyber polluter, by far, is also the most difficult to scale back without incurring risk. Some companies can swap underutilized physical infrastructure for virtualized backups, which eat less power, if they're not already doing that; but there are few other great ways to make cyber resilience more efficient. "You can reduce CO2 [from backups] very easily: you stop buying two servers, or you stop having a duplicate of all your data," Billois says.
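
The log-retention advice above amounts to a small piece of automation. The sketch below uses made-up paths and retention windows: it compresses logs older than a week and prunes anything past the retention period, assuming, as the article stresses, that no legal hold requires keeping them.

```python
import gzip
import os
import shutil
import time

LOG_DIR = "/var/log/myapp"      # hypothetical path
COMPRESS_AFTER_DAYS = 7
DELETE_AFTER_DAYS = 90          # confirm legal/regulatory holds before pruning

def age_in_days(path: str) -> float:
    return (time.time() - os.path.getmtime(path)) / 86400

def tidy_logs() -> None:
    if not os.path.isdir(LOG_DIR):
        return  # nothing to do on machines without this log directory
    for name in os.listdir(LOG_DIR):
        path = os.path.join(LOG_DIR, name)
        if not os.path.isfile(path):
            continue
        if age_in_days(path) > DELETE_AFTER_DAYS:
            os.remove(path)                        # retention window exceeded
        elif age_in_days(path) > COMPRESS_AFTER_DAYS and not name.endswith(".gz"):
            with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)       # shrink storage (and its footprint)
            os.remove(path)

if __name__ == "__main__":
    tidy_logs()
```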


Five ways quantum technology could shape everyday life

There is growing promise of quantum technology’s ability to solve problems that today’s systems struggle to overcome, or cannot even begin to tackle, with implications for industry, national security and everyday life. ... In healthcare, faster drug discovery could bring quicker response to outbreaks and epidemics, personalised medicine and insight into previously inscrutable biological interactions. Quantum simulation of how materials behave could lead to new high efficiency energy materials, catalysts, alloys and polymers. ... In medicine, quantum sensors could improve diagnostic capabilities via more sensitive, quicker and noninvasive imaging modes. In environmental monitoring, these sensors could track delicate shifts beneath the Earth’s surface, offer early warnings of seismic activity, or detect trace pollutants in air and water with exceptional accuracy. ... Airlines and rail networks could automatically reconfigure to avoid cascading delays, while energy providers might balance renewable generation, storage and consumption with far greater precision. Banks could use quantum computers to evaluate numerous market scenarios in parallel, informing the management of investment portfolios. ... While still at an early stage of development, quantum algorithms might accelerate a subset of AI called machine learning (where algorithms improve with experience), help simulate complex systems, or optimise AI architectures more efficiently.


Nokia predicts huge WAN traffic growth, but experts question assumptions

“Consumer- and enterprise-generated AI traffic imposes a substantial impact on the wide-area network (WAN) by adding AI workloads processed by data centers across the WAN. AI traffic does not stay inside one data center; it moves across edge, metro, core, and cloud infrastructure, driving dense lateral flows and new capacity demands,” the report says. An explosion in agentic AI applications further fuels growth “by inducing extra machine-to-machine (M2M) traffic in the background,” Nokia predicts. “AI traffic isn’t just creating more demand inside data centers; it’s driving a sustained surge of traffic between them. AI inferencing traffic—both user-initiated and agentic-AI-induced M2M—moving over inter-data-center links grows at a 20.3% CAGR through 2034.” ... Global enterprise and industrial traffic, including fixed wireless access, will also steadily rise over the next decade, “as more operations, machines, and workers become digitally connected,” Nokia predicts. “Pervasive automation, high-resolution video, AI-driven analytics, and remote access to industrial systems,” will drive traffic growth. “Factory lines are streaming machine vision data to the cloud. AI copilots are assisting personnel in real time. Field teams are using AR instead of manuals. Robots are coordinating across sites,” the Nokia report says. “Industrial systems are continuously sending telemetry over the WAN instead of keeping it on-site. This shift makes wide-area connectivity part of the core production workflow.”
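
For a sense of what a 20.3% CAGR implies, the short calculation below computes the cumulative growth multiplier; the nine-year horizon to 2034 is an assumption about the report's baseline year.

```python
def cagr_multiplier(rate: float, years: int) -> float:
    # Compound annual growth: traffic after n years = baseline * (1 + rate) ** n
    return (1 + rate) ** years

# Assuming a 2025 baseline, inter-data-center AI traffic growing at a 20.3% CAGR
# would reach roughly 5.3x its starting volume by 2034.
print(round(cagr_multiplier(0.203, 9), 1))
```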


The death of reactive IT: How predictive engineering will redefine cloud performance in 10 years

Reactive monitoring fails not because tools are inadequate, but because the underlying assumption that failures are detectable after they occur no longer holds true. Modern distributed systems have reached a level of interdependence that produces non-linear failure propagation. A minor slowdown in a storage subsystem can exponentially increase tail latencies across an API gateway. ... Predictive engineering is not marketing jargon. It is a sophisticated engineering discipline that combines statistical forecasting, machine learning, causal inference, simulation modeling and autonomous control systems. ... Predictive engineering will usher in a new operational era where outages become statistical anomalies rather than weekly realities. Systems will no longer wait for degradation; they will preempt it. War rooms will disappear, replaced by continuous optimization loops. Cloud platforms will behave like self-regulating ecosystems, balancing resources, traffic and workloads with anticipatory intelligence. ... In distributed networks, routing will adapt in real time to avoid predicted congestion. Databases will adjust indexing strategies before query slowdowns accumulate. The long-term trajectory is unmistakable: autonomous cloud operations. Predictive engineering is not merely the next chapter in observability; it is the foundation of fully self-healing, self-optimizing digital infrastructure.
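
A toy version of the preempt-rather-than-react loop: fit a simple linear trend to recent latency samples, project a few minutes ahead, and act before the SLO is breached. This only illustrates the concept; predictive engineering as described above layers proper forecasting models, causal inference, and autonomous control on top.

```python
def linear_forecast(samples: list[float], steps_ahead: int) -> float:
    """Least-squares linear trend over the samples, extrapolated forward."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)

LATENCY_SLO_MS = 250.0

# p99 latency (ms) sampled once a minute: creeping upward, still under the SLO.
recent_p99 = [180, 184, 187, 193, 199, 204, 211, 216, 224, 231]

predicted = linear_forecast(recent_p99, steps_ahead=5)
if predicted > LATENCY_SLO_MS:
    print(f"predicted p99 of {predicted:.0f} ms in 5 min exceeds SLO -> scale out now")
else:
    print(f"predicted p99 of {predicted:.0f} ms in 5 min is within SLO")
```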

Daily Tech Digest - January 11, 2026


Quote for the day:

"Courage doesn't mean you don't get afraid. Courage means you don't let fear stop you." -- Bethany Hamilton



From Coder to Catalyst: What They Don’t Teach About Technical Leadership

The best technical leaders don’t just solve harder problems – they multiply their impact by solving different kinds of problems. What follows is the three-tier evolution most engineers never see coming, and the skills you’ll need that no computer science program ever taught you. ... You’ll have moments of doubt. When you’re starting out, if a junior engineer falls behind, your instinct is to jump in and solve the problem yourself. You might feel like a hero, but this is bad leadership. You’re not holding the junior engineer accountable, and worse, you’re breaking trust—signaling that you don’t believe they can handle the challenge. ... When projects drift off track, you’re cutting scope, reallocating people, and making key decisions at crossroads. But there’s something more critical: risk management. You need to think one step ahead of the projects, identify key risks before they materialize, and mitigate them proactively. ... Additionally, there’s one more thing nobody mentions: managing stakeholders. Not just your team, but peers across the organization and leaders above you. Technical leadership isn’t just downward – it’s omnidirectional. ... The learning curve never ends. You never stop feeling like you’re figuring it out as you go, and that’s the point. Technical leadership is continuous adaptation. The best leaders stay humble enough to admit they’re still learning. The real measure of success isn’t in your commit history. You’re succeeding when your team can execute without you. When people you hired are better than you at things you used to do.


In an AI-perfect world, it’s time to prove you’re human

Being yourself in all communication is not only about authenticity, but individuality. By communicating in a way that only you can communicate, you increase your appeal and value in a world of generic, faceless, zero-personality AI content. For marketing communications, this goes double. The public will increasingly assume what they see is AI-generated, and therefore cheap garbage. ... Not only will the public reject what they assume to be AI, the social algorithms will increasingly reward and boost content offering the signals of authenticity. In fact, Mosseri said that within Meta there is a push to prioritize “original content” over “templated“ or “generic“ AI content that is easy to churn out at a massive scale. ... Rather than thinking of AI as a tool that replaces work and workers, we should think of it as a “scaffolding for human potential,” a way to magnify our cognitive capabilities, not replace them. In other words, instead of viewing AI as something that writes and creates pictures so we don’t have to or writes code so we don’t have to — meaning we don’t even have to learn how to code — we need to use AI to become great at writing, creating images and coding. From now on, everyone will assume everyone else has and uses AI. Content and communications will always exist on a spectrum from fully AI-generated to zero-AI human communication. The further toward the human any bit of content gets, the more valuable it will feel to both the receivers of the content and to the gatekeepers.


How to Build a Robust Data Architecture for Scalable Business Growth

As early in the process as possible, you should begin engaging with stakeholders like IT teams, business and data analysts, executives, administrators, and any other group within your organization that regularly interacts with data. Get to know their data practices and goals, which will provide insight into the requirements for your new data architecture, ensuring you have a deep well of information to draw from. ... After communicating with stakeholders and researching your organization’s current data landscape, you can determine exactly what your data architecture will need now and into the future. Some requirements you will need to define precisely include the volume of data your architecture will handle, how fast data needs to move through your organization, and how secure the data needs to be. All this data about your data will guide you toward better decisions in designing and building your data architecture. ... The exact construction of your data architecture will depend largely upon the needs you outlined during the previous step, but some solutions are more advantageous for businesses looking to expand. ... While there is plenty of healthy debate regarding the merits of horizontal scaling versus vertical scaling, the truth is that the best database architectures use both. Horizontal scaling, or using multiple servers to distribute data and processes, allows an organization to have many nodes within a system so the system can dedicate resources to specific data tasks.
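
To make the horizontal-scaling idea concrete, the sketch below (an illustration, not tied to any particular database) hash-partitions records across several nodes so each server owns a slice of the data and its workload; vertical scaling would instead mean giving a single server more CPU and memory.

```python
import hashlib
from collections import defaultdict

NODES = ["db-node-1", "db-node-2", "db-node-3"]  # hypothetical shard servers

def shard_for(key: str) -> str:
    # Stable hash so the same customer always maps to the same node.
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

placement = defaultdict(list)
for customer_id in (f"customer-{i}" for i in range(12)):
    placement[shard_for(customer_id)].append(customer_id)

for node, keys in placement.items():
    print(node, "->", keys)
```

A production design would normally use consistent hashing or range partitioning so that adding a node does not reshuffle most keys.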


The Quiet Shift Changing UX

Right now, three big transformations collide. Designers are moving away from static screens, leaning into building full flows and shaping behaviours. Conversational AI redefines the user experience from the ground up. Plus, with Gen-AI tools and mature design systems, designers shift from pixel movers to curators of experiences. All these transformations quietly reshape UX at its core. ... Back in the day, UX design focused mainly on interfaces. Think pages and layouts, breakpoints, all the components, yeah, that defined the work. We’d talk about flows, sure, but really, we just built out sequences of screens. But now, that way of doing things is changing. Products are now changing and adapting depending on what’s happening around them, what the user has done before and what’s happening right now. One thing you do can lead to completely different results depending on how the user uses the system or what they know about it. Screens are becoming temporary; what really matters is what’s happening underneath and how the system changes. ... Designers now focus on curating, refining and shaping the final results, which is a strategic and decisive role. This shift does come with some risks. Sometimes, we settle for ‘good enough’ design, which can mask more serious issues. The design might look good on the surface, but it could be acting strangely beneath the surface.


What does the drought at Stack Overflow teach us?

“AI developer tools seem to be taking attention away from static question-and-answer solutions, replacing Stack Overflow with generated code without the middleman… and without waiting for a question to be answered,” said Walls. “Interestingly, AI tools lack the reputational metadata that Stack Overflow relied on: i.e. when was this solution posted and who posted it… and do they have a lot of prior answers? Developers are conferring trust to LLMs that human-sourced sites had to build over years and fight to retain. It’s much easier for developers to ask an agent for some code to accomplish a task and click accept, regardless of the provenance of that code.” ... “Today we know that LLMs like ChatGPT are already pretty good at answering common questions, which are the bulk of the questions asked at StackOverflow. Additionally, LLMs can respond in real time, so it is not a surprise that people were shifting away from StackOverflow. It might be not the only reason though – some people also reported StackOverflow moderators being rather hostile and unwelcoming towards new users, which had additional impact,” said Zaitsev. “Why would you deal with what you see as bad treatment, if an alternative exists?” ... “With AI now available directly in IDEs, engineers naturally turn to quick, contextual support as they work,” said Jackson. 


Ready or Not, AI is Rewriting the Rules for Software Testing

Etan Lightstone, a product design leader at Domino Data Lab, argues that building trust in agents requires applying familiar operational principles. He suggests that for an enterprise with mature MLOps capabilities, trusting an agent is not enormously different from trusting a human user, because the same pillars of governance are in place: Robust logging of every action, complete auditability to trace what happened and the critical ability to roll back any action if something goes wrong. This product-centric mindset also extends to how we design and test the MCP tools before they ever reach production. Lightstone proposes a novel approach he calls “usability testing for AI.” Just as a product team would run usability tests with human beings to uncover design flaws before a release, he advises that MCP servers should be tested with sample AI agents. This is an effective way to discover issues in how a tool’s functions are documented and described — which is critical, since this documentation effectively becomes part of the prompt that the AI agent uses. Furthermore, he suggests we need to build “support links” for AI agents acting on our behalf. When a user gets stuck, they can often click a link to get help or submit feedback. Lightstone argues that AI agents need similar recovery mechanisms. This could be an MCP-exposed feedback tool that an agent can call if it cannot recover from an error or a dedicated function to get help from a documentation search. 


Defending at Scale: The Importance of People in Data Center Security

In the tech world, the mantra of “move fast and break things” has become a badge of innovation. For cases like social platforms or mobile apps, where “breaking things” translates to inconveniences rather than catastrophes, it can work quite well. But when it comes to building critical infrastructure that supports essential functions and drives the future of society, companies must take the time to ensure they build safely and sustainably. Establishing robust physical security is already challenging, and implementing strong policies and processes to support those controls is even more difficult. Often, the core risk lies in the human layer that determines whether controls are applied consistently. ... With the promise of AI-powered efficiency gains, there’s increased pressure to move faster. When organizations take shortcuts in the name of speed, however, those shortcuts often come at the cost of consistent and thorough security. This could include gaps in training for guards, technicians, and vendors, unclear policies for after-hours access, frequent contractor changes, poorly defined emergency protocols, or procedures that only exist on paper. ... As businesses rush to meet the demand for AI, the data center boom is expected to continue rising. In all this rush, it's easy to overlook that moving fast without first establishing and reliably executing proper processes increases risk. Building too quickly without a strong security culture can lead to expensive problems down the line. 


Industrial cyber governance hits inflection point, shifts toward measurable resilience and executive accountability

For industrial operators, the harder task is converting cyber exposure into defensible investment decisions. Quantified risk approaches, promoted by the World Economic Forum, are gaining traction by linking potential downtime, safety impact, and financial loss to capital planning and insurance strategy. ... “Governance should shift to a unified IT/OT risk council where safety engineers and CISOs share a common language of operational impact,” Paul Shaver, global practice leader at Mandiant’s Industrial Control Systems/Operational Technology Security Consulting practice, told Industrial Cyber. “Organizations should integrate OT-specific safety metrics into the standard IT risk framework to ensure cybersecurity decisions are made with production uptime in mind. This evolution requires aligning IT’s data confidentiality goals with OT’s requirement for high availability and human safety. ... Organizations need to move from siloed governance to a risk-first model that prioritizes the most critical threats, whether cyber or operational, and updates policies dynamically based on risk assessments, Jacob Marzloff, president and co-founder at Armexa, told Industrial Cyber. “A shared risk matrix across teams enables consistent trade-offs for safety and cybersecurity. Oversight should be centralized through a cross-functional Risk Committee rather than a single leader, ensuring expertise from IT, engineering, and operations. This committee creates a feedback loop between real-world risks and governance, building resilience.”


A Reality Check on Global AI Adoption

"AI is diffusing at extraordinary speed, but not evenly," the report said. Advanced digital economies are integrating AI into everyday work far faster than emerging markets. The findings underscore a shift in the AI race from model development to real-world deployment in which diffusion, not innovation alone, determines who benefits most. Microsoft CEO Satya Nadella in a recent blog said, "The next phase of the AI will be defined by execution at scale rather than discovery. The industry is moving from model breakthroughs to the harder work of building systems that deliver real-world value." ... Microsoft defines AI diffusion as the proportion of working-age individuals who have used generative AI tools within a defined period. This usage-based measurement shifts attention from venture funding, compute ownership or research output to real-world interaction including how AI is entering daily workflows, from coding and analysis to communication and content creation. ... Infrastructure gaps persist, language limitations reduce the effectiveness of many generative AI systems, and skills shortages constrain adoption when education and workforce training have not kept pace. Institutional capacity also plays a role, influencing trust, governance and public-sector deployment. At the same time, the diffusion metric captures breadth, not depth. A one-time interaction with a chatbot is measured the same as embedding AI into mission-critical enterprise systems. 


The Hidden Resilience Gap: Why Most Organizations Are One Vendor Failure Away from Crisis

The most striking finding: when vendors lack business continuity or IT recovery plans, 43% of organizations simply ask them to create one and resubmit later. Another 32% do nothing at all. Only 13% provide structured questionnaires to actually help vendors develop meaningful plans. This means 75% of enterprises are essentially hoping their vendors figure it out on their own. ... Here’s another uncomfortable truth: 43% of organizations don’t have any system for combining operational and cyber risk indicators into a unified vendor resilience score. Another 22% track separate indicators but never connect the dots. That means nearly two-thirds of organizations can’t answer a simple question: “Which of our vendors pose the highest operational risk right now?” ... But compliance alone won’t fix this. Organizations need vendor resilience programs that actually reduce operational risk, not just check regulatory boxes. That requires moving beyond point-in-time assessments toward continuous intelligence. It means combining cyber indicators, financial health signals, operational metrics, and recovery evidence into coherent risk profiles. It demands bringing business owners, procurement teams, and risk functions into the same system with the same data. ... whatever you prioritize, make it measurable, make it continuous, and make it integrated. Fragmented data creates fragmented decisions. Point-in-time assessments create point-in-time confidence. Manual processes create manual failure modes. The organizations that crack this will have competitive advantage. 
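
A unified vendor resilience score is mostly a matter of agreeing on inputs and weights. In the sketch below the indicator names and weights are illustrative assumptions; the point is simply that cyber, operational, financial, and recovery-evidence signals end up in one comparable number per vendor.

```python
# Illustrative indicator weights; a real program would calibrate these with the
# risk, procurement, and business-continuity teams.
WEIGHTS = {
    "cyber_hygiene": 0.35,        # external scan / questionnaire results, 0-100
    "operational_health": 0.25,   # incident history, SLA attainment, 0-100
    "financial_stability": 0.15,  # financial health signals, 0-100
    "recovery_evidence": 0.25,    # tested BC/IT recovery plans, 0-100
}

def resilience_score(indicators: dict[str, float]) -> float:
    return round(sum(indicators[name] * weight for name, weight in WEIGHTS.items()), 1)

vendors = {
    "payments-processor": {"cyber_hygiene": 82, "operational_health": 74,
                           "financial_stability": 90, "recovery_evidence": 40},
    "logistics-api": {"cyber_hygiene": 65, "operational_health": 88,
                      "financial_stability": 70, "recovery_evidence": 85},
}

# Lowest score first: these vendors pose the highest operational risk right now.
for name, indicators in sorted(vendors.items(), key=lambda kv: resilience_score(kv[1])):
    print(f"{name}: {resilience_score(indicators)}")
```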

Daily Tech Digest - November 26, 2025


Quote for the day:

“There is only one thing that makes a dream impossible to achieve: the fear of failure.” -- Paulo Coelho



7 signs your cybersecurity framework needs rebuilding

The biggest mistake, Pearlson says, is failing to recognize that the current plan is out of date or simply not working. Breaches happen, but that doesn't always mean your cyber framework needs rebuilding. It does, however, indicate that the framework needs to be rethought and redesigned. ... "If your framework hasn't kept pace with evolving threats or business needs, it's time for a rebuild." Cyber threats are always evolving, so staying proactive with regular reviews and fostering a culture of cybersecurity awareness will help catch issues before they become crises, Bucher says. ... "The cybersecurity landscape has evolved rapidly, especially with the rise of generative AI — your framework should reflect these shifts." McLeod recommends a complete biannual framework review combined with a cursory review during the gap years. "This helps to ensure that the framework stays aligned with evolving threats, business changes, and regulatory requirements." Ideally, security leaders should always have their security framework in mind while maintaining a rough, running list of areas that could be improved, streamlined, or clarified, McLeod suggests. ... If an organization is stuck in a cycle of continually chasing alerts and incidents, as well as reporting events after the fact instead of performing predictive threat assessments, data analysis, and forward planning, it's time for a change, Baiati advises.


Your Million-Dollar IIoT Strategy is Being Sabotaged by Hundred-Dollar Radios

The ambition is clear: to create hyper-efficient, data-driven operations in a market expected to exceed $1.6 billion by 2030. Yet, a fundamental paradox lies at the heart of this transformation. While we architect complex digital twins and deploy sophisticated AI models, the foundational tools entrusted to our most valuable asset—the frontline workforce—are often decades old, disconnected, and failing at an alarming rate. ... Data shows that one in four organizations loses more than an entire day of productivity every month simply dealing with broken technology. The primary culprits are as predictable as they are preventable: nearly half of workers cite battery problems (48.4%) and physical damage (46.8%) as the most common causes of failure. ... While conversations about this crisis often focus on pay and career paths, Relay’s research reveals a more immediate, tangible cause: the daily frustration of using broken tools. 1 in 4 frontline workers already feel their equipment is second-class compared to what their corporate counterparts use, and a staggering 43% of workers say they’d be less likely to quit if guaranteed access to modern, automatically upgraded devices. ... Beyond reliability, it’s important to address the data black hole created by legacy, disconnected tools. Every day, frontline teams generate thousands of hours of spoken communication—a rich stream of unstructured data filled with maintenance alerts, safety concerns, and process bottlenecks.


Ask the Experts: Validate, don't just migrate

"Refactoring code is certainly a big undertaking. And if you start before you have good hygiene and governance, then you're just setting yourself up for failure. Similarly, if you haven't tagged properly, you have no way to attribute it to the project, and that becomes a cost problem." ... "If you do conclude [that migration is necessary], then you really must make sure the application is architected right. A lot of times, these workloads weren't designed for the cloud world, so you must adapt them and deliberately architect them for a cloud workload. "[To prepare a mission-critical application], it's key to look at the appropriateness, operating system [and] licenses. Sometimes, there are licenses tied to CPUs or other things that might introduce issues for you as well, so regression, latency and performance testing will be mandatory. ... "[IT leaders must also understand] the risks and costs associated with taking things into the cloud, and the pros and cons of that versus leaving it alone. Because old stuff, whether it was [procured] yesterday or five years ago, is inherently going to be vulnerable from a cybersecurity standpoint. Risk No. 2 is interoperability and compatibility, because old stuff doesn't talk to new stuff. And the third one is supportability, because it's hard to find old people to support old systems. ... "Sometimes, people have the false sense that if it's in cloud, then I'm all set. Everything is available, and everything is highly redundant. And it is, if you design [the application] with those things in mind.


Heineken CISO champions a new risk mindset to unlock innovation

Starting as an auditor and later leading a cyber defense team. It’s easy to fall into the black-and-white trap of being the function that always says “no” or speaks in cryptic tech jargon. It’s a scary world out there with so many attacks happening in every industry. The classical reaction of most security professionals is to tighten defences and impose even more rules. ... CISOs need to shift the mindset from pure compliance to asking: How does our cyber strategy support the business and its values? What calculated risks do we want the business to take? Where do we need their attention and help to embed security into the DNA of our people and our company? ... Be visible and approachable. Share the lessons that shaped you as a leader, what worked, what didn’t, and the principles that guide your decisions. I’m passionate about building diverse teams where everyone gets the same opportunities, no matter age, gender, or background. Diversity makes us stronger, and when there’s trust and openness, it sparks mentoring, coaching, and knowledge sharing. Make coaching and mentoring non-negotiable, and carve out time for it. It’s easy to push aside when you’re busy putting out security fires, but neglecting people’s growth and well-being is a big miss. Be authentic and vulnerable, walk the talk. Share the real stories, including failures and what made you stronger. Too often, people focus only on titles, certifications, and tech skills.


Data-Driven Enterprise: How Companies Turn Data into Strategic Advantage

A data-driven enterprise is not defined by the number of dashboards or analytics tools it owns. It’s defined by its ability to turn raw information into intelligent action. True data-driven organizations embed data thinking into every level of decision-making from boardroom strategy to day-to-day operations. ... A modern data architecture is not a single platform, but an interconnected ecosystem designed to balance agility, governance, and scalability. ... As organizations mature in their data journey, they are moving away from rigid, centralized models that rely on a single source of truth. While centralization once ensured control, it often created bottlenecks slowing down innovation and limiting agility.  ... We are entering an era of data agents self-learning systems capable of autonomously detecting anomalies, assessing risks, and forecasting trends in real time. These intelligent agents will soon become the invisible workforce of the enterprise, operating across domains: predicting supply chain disruptions, optimizing IT performance, personalizing customer journeys, and ensuring compliance through continuous monitoring. Their actions will reshape not only operations but also how organizations think about governance, accountability, and human oversight. For architects, this shift represents both a challenge and an extraordinary opportunity. The role is evolving from that of a data custodian focused on structure and governance to an ecosystem designer who engineers environments where data and AI can coexist, learn, and continuously create value.


10 benefits of an optimized third-party IT services portfolio

By entrusting day-to-day IT operations to trusted providers, organizations can reallocate internal resources toward higher-value initiatives such as digital transformation, automation, and product innovation. This accelerates adoption of emerging technologies, and allows internal teams to deepen business expertise, strengthen cross-functional collaboration, and focus on driving growth where it matters most. ... A well-structured third-party IT services portfolio can provide flexibility to scale up or down based on business needs. This is particularly valuable for CEOs who need to adapt to changing market conditions and seize growth opportunities. Securing talent in the market today is challenging and time consuming, so tapping into the talent pools of your strategic IT services partner base allows organizations to leverage their bench strength to fill immediate needs for talent. ... IT service providers continuously invest in advanced tech and talent development, enabling clients to benefit from cutting-edge innovations without bearing the full cost of adoption. As AI, automation, and cybersecurity evolve, providers offer the subject matter expertise and tools organizations need to stay ahead of disruption. ... With operational stability ensured through a balance of internal talent and trusted third parties, CIOs can dedicate more focus to long-term strategic initiatives that fuel growth and innovation. 


Modernizing SOCs with Agentic AI and Human-in-the-Loop: A Guide to CISOs

Traditional SOCs were not built for today’s speed and scale. Alert fatigue, manual investigations, disconnected tools, and talent shortages all contribute to the operational drag. Many security leaders are stuck in a reactive loop with no clear path to improvement. ... Legacy SOCs rely heavily on outdated technologies and rule-based detection, generating high volumes of alerts, many of which are false positives, leading to analyst burnout. Analysts are compelled to manually inspect and triage a deluge of meaningless signals, making the entire effort unsustainable. ... Before transformation can happen, one needs to understand where one stands. This can be accomplished with key benchmarking metrics for SOC performance, such as MTTD (Mean time to detect), MTTR (Mean time to respond), case closure rates, and tool effectiveness. ... Agentic AI represents the next evolution of AI-powered cybersecurity, which is modular, explainable, and autonomous. Through a coordinated system of AI agents, the Agentic SOC continuously responds and adapts to the evolving security environment in real time. It is designed to accelerate threat detection, investigation, and response by 10x, bringing speed, precision, and clarity to every function of SecOps. Agentic AI is the technology shift that changes the game. Unlike traditional automation, Agentic AI is decision-oriented, self-improving, and always operating with human-in-the-loop for oversight.
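
Benchmarking metrics such as MTTD and MTTR fall directly out of timestamps the SOC already records. The sketch below uses illustrative field names and measures MTTR from detection to resolution, one common convention.

```python
from datetime import datetime

# Illustrative incident records: when the activity began, when it was detected,
# and when it was resolved/contained.
incidents = [
    {"occurred": "2025-11-02 09:00", "detected": "2025-11-02 11:30", "resolved": "2025-11-02 15:00"},
    {"occurred": "2025-11-10 22:15", "detected": "2025-11-11 01:15", "resolved": "2025-11-11 02:45"},
    {"occurred": "2025-11-21 14:00", "detected": "2025-11-21 14:40", "resolved": "2025-11-21 18:10"},
]

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def mean_hours(deltas) -> float:
    deltas = list(deltas)
    return round(sum(d.total_seconds() for d in deltas) / len(deltas) / 3600, 2)

mttd = mean_hours(parse(i["detected"]) - parse(i["occurred"]) for i in incidents)
mttr = mean_hours(parse(i["resolved"]) - parse(i["detected"]) for i in incidents)
print(f"MTTD: {mttd} h, MTTR: {mttr} h")
```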


3 SOC Challenges You Need to Solve Before 2026

2026 will mark a pivotal shift in cybersecurity. Threat actors are moving from experimenting with AI to making it their primary weapon, using it to scale attacks, automate reconnaissance, and craft hyper-realistic social engineering campaigns. ... Attackers have mastered evasion. ClickFix campaigns trick employees into pasting malicious PowerShell commands themselves. LOLBins are abused to hide malicious behavior. Multi-stage phishing hides behind QR codes, CAPTCHAs, rewritten URLs, and fake installers. Traditional sandboxes stall because they can't click "Next," solve challenges, or follow human-dependent flows. The result? Low detection rates for the exact threats exploding in 2025 and beyond. ... Thousands of daily alerts, mostly false positives: the average SOC handles 11,000 alerts daily, with only 19% worth investigating, according to the 2024 SANS SOC Survey. Tier 1 analysts drown in noise, escalating everything because they lack context. Every alert becomes a research project. Every investigation starts from zero. Burnout hits hard. Turnover doubles, morale tanks, and real threats hide in the backlog. By 2026, AI-orchestrated attacks will flood systems even faster, turning alert fatigue into a full-blown crisis. ... From a financial leadership perspective, security spending often feels like a black hole: money is spent, but risk reduction is hard to quantify. SOCs are challenged to justify investments, especially when security teams seem to be a cost center without clear profit or business-driving impact.


Digital surveillance tools are reshaping workplace privacy, GAO warns

Privacy concerns intensify when surveillance data feeds into automated systems that evaluate performance, set productivity metrics, or flag workers for potential discipline. GAO found that employers often rely on flawed benchmarks and incomplete measurements. Tools rarely capture the full range of work performed, such as research, mentoring, reading, or off-screen tasks, and frequently misinterpret normal behavior as inefficiency. When employers trust these tools “at face value,” the report notes, workers can be unfairly labeled unproductive or noncompliant despite doing their jobs well. ... Meanwhile, past federal efforts to issue guidance on reducing surveillance-related harms, such as transparency practices, human oversight, and safeguards against discriminatory impacts, have been rescinded or paused since January by the Trump administration as agencies reassess their policy priorities. GAO also notes that existing federal privacy protections are narrow. The Electronic Communications Privacy Act restricts covert interception of communications, but it does not cover most forms of digital monitoring, such as keystroke logging, location tracking, biometric data collection, or algorithmic productivity scoring. ... The report concludes that while digital surveillance can improve safety, efficiency, and health monitoring, its benefits depend wholly on how employers use it.


How to avoid becoming an “AI-first” company with zero real AI usage

A competitor declares they’re going AI-first. Another publishes a case study about replacing support with LLMs. And a third shares a graph showing productivity gains. Within days, boardrooms everywhere start echoing the same message: “We should be doing this. Everyone else already is, and we can’t fall behind.” So the work begins. Then come the task forces, the town halls, the strategy docs, and the targets. Teams are asked to contribute initiatives. But if you’ve been through this before, you know there’s often a difference between what companies announce and what they actually do. Because press releases don’t mention the pilots that stall, the teams that quietly revert to the old way, or the tools that get used once and then abandoned. ... By then, your company’s AI-first mandate will have set in motion departmental initiatives, vendor contracts, and maybe even some new hires with “AI” in their titles. The dashboards will be green, and the board deck will have a whole slide on AI. But in the quiet spaces where your actual work happens, what will have meaningfully changed? Maybe you'll be like the teams that never stopped their quiet experiments. ... That’s the invisible architecture of genuine progress: patient, and completely uninterested in performance. It doesn't make for great LinkedIn posts, and it resists grand narratives. But it transforms companies in ways that truly last. Every organization is standing at the same crossroads right now: look like you’re innovating, or create a culture that fosters real innovation.

Daily Tech Digest - October 17, 2025


Quote for the day:

"Listen with curiosity, speak with honesty act with integrity." -- Roy T Bennett



AI Agents Transform Enterprise Application Development

There's now discussion about the agent development life cycle and the need to supervise or manage AI agent developers - calling for agent governance and infrastructure changes. New products, services and partnerships announced in the past few weeks support this trend. ... Enterprises were cautious about entrusting public models and agents with intellectual property. But the partnership with Anthropic could make models more trustworthy. "Enterprises are looking for AI they can actually trust with their code, their data and their day-to-day operations," said Mike Krieger, chief product officer at Anthropic. ... Embedding agentic AI within the fabric of enterprise architecture enables organizations to unlock transformative agility, reduce cognitive load and accelerate innovation - without compromising trust, compliance or control - says an IBM report titled "Architecting secure enterprise AI agents with MCP." Developers adopted globally recognized models such as Capability Maturity Model Integration, or CMMI, and CMMI-DEV as paths to improve the software development and maintenance processes. ... Enterprises must be prepared to implement radical process and infrastructure changes to successfully adopt AI agents in software delivery. AI agents must be managed by a central governance framework to enable complete visibility into agents, agent performance monitoring and security.


There’s no such thing as quantum incident response – and that changes everything

CISOs are directing attention to having quantum security risks added to the corporate risk register. It belongs there. But the problem to be solved is not a quick fix, despite what some snake oil salesmen might be pushing. There is no simple configuration checkbox on AWS, Azure, or GCP where you “turn on” post-quantum cryptography (PQC) and then you’re good to go. ... Without significant engagement from developers, QA teams, and product owners, the quantum decryption risk will remain in play. You cannot transfer this risk by adding more cyber insurance coverage. The cyber insurance industry itself faces existential doubt about whether cybersecurity can reasonably be insured against, given the systemic impact of supply chain attacks that cascade across entire industries. ... The moment when a cryptographically relevant quantum computer comes into existence won’t arrive with fanfare or bombast. Hence the idea of the silent boom. But by then, it will be too late for incident response. What you should do Monday morning: start that data classification exercise. Figure out what needs protecting for the long term versus what has a shorter shelf life. In the world of DNS, we have Time To Live (TTL), which declares how long a resolver can cache a response. Think of a “PQC TTL” for your sensitive data, because not everything needs 30-year protection.
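To make the “PQC TTL” idea concrete, here is a minimal sketch (the asset names, confidentiality lifetimes, and the assumed 2035 quantum-risk horizon are illustrative assumptions, not guidance from the article): any data whose required confidentiality lifetime extends past the assumed horizon is already exposed to harvest-now-decrypt-later and belongs at the front of the migration queue.

```python
from datetime import date, timedelta

# Hypothetical assumption: a cryptographically relevant quantum computer by ~2035.
QUANTUM_RISK_HORIZON = date(2035, 1, 1)

# Illustrative data classes with a "PQC TTL": required confidentiality lifetime in years.
assets = [
    {"name": "marketing web logs", "pqc_ttl_years": 1},
    {"name": "customer financial records", "pqc_ttl_years": 10},
    {"name": "product design IP", "pqc_ttl_years": 30},
]

today = date.today()
for asset in assets:
    protected_until = today + timedelta(days=365 * asset["pqc_ttl_years"])
    at_risk = protected_until > QUANTUM_RISK_HORIZON  # harvest-now-decrypt-later exposure
    priority = "migrate to PQC first" if at_risk else "standard migration schedule"
    print(f"{asset['name']}: must stay confidential until {protected_until} -> {priority}")
```

The numbers matter less than the exercise: pairing each data class with an explicit lifetime forces the prioritization the article says should start Monday morning.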


Hackers Use Blockchain to Hide Malware in Plain Sight

At least two hacking groups are using public blockchains to conceal and control malware in ways that make their operations nearly impossible to dismantle, research from Google's Threat Intelligence Group shows. ... The technique, known as EtherHiding, embeds malicious instructions in blockchain smart contracts rather than traditional servers. Since the blockchain is decentralized and immutable, attackers gain what the researchers call a "bulletproof" infrastructure. The development signals an "escalation in the threat landscape," said Robert Wallace, consulting leader at Mandiant, which is part of Google Cloud. Hackers have found a method that is "resistant to law enforcement takedowns" and can be "easily modified for new campaigns." ... The group over time expanded its architecture from a single smart contract to a three-tier system mimicking a software "proxy pattern." This allows rapid updates without touching the compromised sites. One contract acts as a router, another fingerprints the victim's system, and a third holds encrypted payload data and decryption keys. A single blockchain transaction, costing as little as a dollar in network fees, can change lure URLs or encryption keys across thousands of infected sites. The researchers said the threat actor used social engineering tricks like fake Cloudflare verification or Chrome update prompts to persuade victims to run malicious commands.


Everyone’s adopting AI, few are managing the risk

Across industries, many organizations are caught in what AuditBoard calls the “middle maturity trap.” Teams are active, frameworks are updated, and risks are logged, but progress fades after early success. When boards include risk oversight as a standing agenda item and align on shared performance goals, activity becomes consistent and forward-looking. When governance and ownership are unclear, adoption slows and collaboration fades. ... Many enterprises are adopting or updating risk frameworks, but implementation depth varies. The typical organization maps its controls to several frameworks, while leading firms embed thousands of requirements into daily operations. The report warns that “surface compliance” is common. Breadth without depth leaves gaps that only appear during audits or disruptions. Mature programs treat frameworks as living systems that evolve with business and regulatory change. ... The findings show that many organizations are investing heavily in risk management and AI, but maturity depends less on technology and more on integration. Advanced organizations use governance to connect teams and turn data into foresight. AuditBoard’s research suggests that as AI becomes more embedded in enterprise systems, risk leaders will need to move beyond activity and focus on consistency. Those that do will be better positioned to anticipate change and turn risk management into a strategic advantage.


A mini-CrowdStrike moment? Windows 11 update cripples dev environments

The October 2025 cumulative update (KB5066835) addressed security issues in Windows operating systems (OSes), but it also appears to have broken Windows’ ability to communicate with itself. Localhost allows apps and services to communicate internally without using internet or external network access. Developers use the function to develop, test, and debug websites and apps locally on a Windows machine before releasing them to the public. ... When localhost stops working, entire application development environments can be impacted or “even grind to a halt,” causing internal processes and services to fail and stop communicating, he pointed out. This means developers are unable to test or run web applications locally. This issue is really about “denial of service,” where tools and processes dependent on internal loopback services break, he noted. Developers can’t debug locally, and automated testing processes can fail. At the same time, IT departments are left to troubleshoot, field an influx of service tickets, roll back patches, and look for workarounds. “This bug is definitely disruptive enough to cause delays, lost productivity, and frustration across teams,” said Avakian. ... This type of issue underscores the importance of quality control and thorough testing by third-party suppliers and vendors before releasing updates to commercial markets, he said. Not doing so can have significant downstream impacts and “erode trust” in the update process while making teams more cautious about patching.
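For teams triaging this kind of breakage, the first question is whether loopback itself is reachable. A minimal sketch, assuming a hypothetical local dev server on port 8000 (substitute whatever your environment actually runs):

```python
import socket

def loopback_reachable(port: int, host: str = "127.0.0.1", timeout: float = 2.0) -> bool:
    """Try to open a TCP connection to a local service; False means the service
    (or loopback itself) is not reachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical local dev server port used for illustration only.
if loopback_reachable(8000):
    print("localhost:8000 reachable - loopback looks healthy")
else:
    print("localhost:8000 unreachable - check the service, firewall rules, or recent OS updates")
```

Running a check like this before and after patching makes it easier to distinguish an OS-level loopback regression from an application that simply failed to start.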


How Banks of Every Size Can Put AI to Work, and Take Back Control

For smaller banks and credit unions, the AI conversation begins with math. They want the same digital responsiveness as larger competitors but can’t afford the infrastructure or staffing that traditionally make that possible. The promise of AI, especially low-code and automated implementation, changes that equation. What once required teams of engineers and months of coding can now be deployed out of the box, configured, and pushed live in a day. That shift finally brings digital innovation within reach for smaller institutions that had long been priced out of it. But even when self-service tools are available, many institutions still rely on outside help for routine changes or maintenance. For these players, the first question is whether they’re willing or able to take product development work in-house, even with "AI inside"; the next question is whether they can find partners that can meet them on their own terms. ... For mid-sized players, the AI opportunity centers on reclaiming control. These institutions typically have strong internal teams and clear strategic ideas, yet they remain bound by vendor SLAs that slow innovation. The gap between what they can envision and what they can deliver is wide. AI-driven orchestration tools, especially those that let internal teams configure and launch digital products directly, can help close that gap. By removing layers of technical dependency, mid-sized institutions can move from periodic rollouts to something closer to iterative improvement.


Why your AI is failing — and how a smarter data architecture can fix it

Traditional enterprises operate four separate, incompatible technology stacks, each optimized for different computing eras, not for AI reasoning capabilities. ... When you try to deploy AI across these fragmented stacks, chaos follows. The same business data gets replicated across systems with different formats and validation rules. Semantic relationships between business entities get lost during integration. Context critical for intelligent decision-making gets stripped away to optimize for system performance. AI systems receive technically clean datasets that are semantically impoverished and contextually devoid of meaning. ... As organizations begin shaping their enterprise general intelligence (EGI) architecture, critical operational intelligence remains trapped in disconnected silos. Engineering designs live in PLM systems, isolated from the ERP bill of materials. Quality metrics sit locked in MES platforms with no linkage to supplier performance data. Process parameters exist independently of equipment maintenance records. ... Enterprises solving the data architecture challenge gain sustainable competitive advantages. AI deployment timelines are measured in weeks rather than months. Decision accuracy reaches enterprise-grade reliability. Intelligence scales across all business domains. Innovation accelerates as AI creates new capabilities rather than just automating existing processes.
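One way to picture the missing semantic layer is as an explicit linkage between records that today live in separate systems. The sketch below is purely illustrative (the system names mirror those in the article, but the identifiers and fields are hypothetical): it joins a part’s PLM design record, ERP bill-of-materials entry, and MES quality metrics into one entity, so an AI system sees connected context rather than three disconnected rows.

```python
from dataclasses import dataclass

# Hypothetical records as they might sit today in three separate systems.
plm_designs = {"P-100": {"revision": "C", "material": "aluminum 6061"}}
erp_bom     = {"P-100": {"supplier": "Acme Metals", "unit_cost": 12.40}}
mes_quality = {"P-100": {"defect_rate": 0.012, "last_audit": "2025-09-30"}}

@dataclass
class PartContext:
    """A unified view of one part, preserving the semantic links AI reasoning needs."""
    part_id: str
    design: dict
    sourcing: dict
    quality: dict

def build_context(part_id: str) -> PartContext:
    # In a real architecture this linkage would live in a governed semantic layer,
    # not an ad hoc join; this only sketches the idea of connected context.
    return PartContext(
        part_id=part_id,
        design=plm_designs.get(part_id, {}),
        sourcing=erp_bom.get(part_id, {}),
        quality=mes_quality.get(part_id, {}),
    )

print(build_context("P-100"))
```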


Under the hood of AI agents: A technical guide to the next frontier of gen AI

With agents, authorization works in two directions. First, of course, users require authorization to run the agents they’ve created. But as the agent is acting on the user’s behalf, it will usually require its own authorization to access networked resources. There are a few different ways to approach the problem of authorization. One is with an access delegation algorithm like OAuth, which essentially plumbs the authorization process through the agentic system. ... Agents also need to remember their prior interactions with their clients. If last week I told the restaurant booking agent what type of food I like, I don’t want to have to tell it again this week. The same goes for my price tolerance, the sort of ambiance I’m looking for, and so on. Long-term memory allows the agent to look up what it needs to know about prior conversations with the user. Agents don’t typically create long-term memories themselves, however. Instead, after a session is complete, the whole conversation passes to a separate AI model, which creates new long-term memories or updates existing ones. ... Agents are a new kind of software system, and they require new ways to think about observing, monitoring and auditing their behavior. Some of the questions we ask will look familiar: Whether the agents are running fast enough, how much they’re costing, how many tool calls they’re making and whether users are happy. 
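The long-term memory step can be sketched as a small post-session pipeline. Everything here is a placeholder (the function names, the in-memory store, the stand-in summarizer); the point is only the shape of the flow the article describes: finish the session, distill it with a separate model, then merge the result into the user’s existing memories.

```python
from typing import Callable

def update_long_term_memory(
    user_id: str,
    transcript: list[str],
    memory_store: dict[str, list[str]],
    summarize: Callable[[str], str],
) -> None:
    """After a session ends, distill the conversation into durable facts and
    merge them into the user's long-term memory. All names are illustrative."""
    conversation = "\n".join(transcript)
    # A separate model (not the agent itself) extracts durable preferences and facts.
    new_memory = summarize(conversation)
    memory_store.setdefault(user_id, []).append(new_memory)

# Stand-in summarizer for the sketch; in practice this would call an LLM.
def fake_summarizer(text: str) -> str:
    return "User prefers quiet restaurants, budget around $40/person."

memories: dict[str, list[str]] = {}
update_long_term_memory(
    user_id="user-123",
    transcript=["I like Thai food", "Nothing too loud please", "Keep it under $40 a head"],
    memory_store=memories,
    summarize=fake_summarizer,
)
print(memories["user-123"])
```

A real system would also deduplicate and update existing memories rather than only appending, and would pair this with the delegated-authorization checks (for example, OAuth-style scoped tokens) mentioned above before the agent acts on what it remembers.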


Data Is the New Advantage – If You Can Hold On To It

Proprietary data has emerged as one of the most valuable assets for enterprises—and increasingly, the expectation is that data must be stored indefinitely, ready to fuel future models, insights, and innovations as the technology continues to evolve. ... Globally, data architects, managers, and protectors are in uncharted territory. The arrival of generative AI has proven just how unpredictable and fast-moving technological leaps can be – and if there’s one thing the past few years have taught us, it’s that we can’t know what comes next. The only way to prepare is to ensure proprietary data is not just stored but preserved indefinitely. Tomorrow’s breakthroughs – whether in AI, analytics, or some other yet-unimagined technology – will depend on the depth and quality of the data you have today, and how well you can utilize the storage technologies of your choice to serve your data usage and workflow needs. ... The lesson is clear: don’t get left behind, because your competitors are learning these lessons as well. The enterprises that thrive in this next era of digital innovation will be those that recognize the enduring value of their data. That means keeping it all and planning to keep it forever. By embracing hybrid storage strategies that combine the strengths of tape, cloud, and on-premises systems, organizations can rise to the challenge of exponential growth, protect themselves from evolving threats, and ensure they are ready for whatever comes next. In the age of AI, your competitive advantage won’t just come from your technology stack; it will come from the data you manage to hold on to.


Why women are leading the next chapter of data centers

Xiao worked her way up through finance and operations into large-scale digital infrastructure, and her career reflects a steady ascent across disciplines, including senior roles as president of Chindata Group and CFO at Shanghai Wangsu. These roles sharpened her ability to translate high-level strategy into expansion, particularly in the demanding data center sector. ... Today, she shapes BDC’s commercial playbook, which includes setting capital priorities, driving cost-efficient delivery models, and embedding resilience and sustainability into every development decision. In mission-critical industries like data centers, repeatability is a challenge. Every market has unique variables – land, power, water, regulatory frameworks, contractor ecosystems, and community engagement. ... For the next wave of talent, building credibility in the data center industry requires more than technical expertise. Engaging in forums, networks, and industry resources not only earns recognition and respect but also broadens knowledge and sharpens perspective. ... Peer networks within hyperscaler and operator communities, Xiao notes, are invaluable for exchanging insights and challenging assumptions. “Industry conferences, cross-company working groups, government-industry task forces, and ecosystem media engagements all matter. And for bench strength, I value partnerships with local technology innovators and digital twin or AI firms that help us run safer, greener facilities,” Xiao explains.