Daily Tech Digest - October 22, 2025


Quote for the day:

"Good content isn't about good storytelling. It's about telling a true story well." -- Ann Handley



When yesterday’s code becomes today’s threat

A striking new supply chain attack is sending shockwaves through the developer community: a worm-style campaign dubbed “Shai-Hulud” has compromised at least 187 npm packages, including the tinycolor package, which sees roughly two million downloads a week, and is spreading to other maintainers' packages. The malicious payload modifies package manifests, injects malicious files, repackages, and republishes — thereby infecting downstream projects. This incident underscores a harsh reality: even code released weeks, months, or even years ago can become dangerous once a dependency in its chain has been compromised. ... Sign your code: All packages/releases should use cryptographic signing. This allows users to verify the origin and integrity of what they are installing. Verify signatures before use: When pulling in dependencies, whether in CI/CD pipelines or local dev setups, include a step to check that the signature matches a trusted publisher and that the code wasn’t tampered with. SBOMs are your map of exposure: If you have a Software Bill of Materials for your project(s), you can query it for compromised packages. Find which versions/packages have been modified — even retroactively — so you can patch, remove, or isolate them. Continuous monitoring of risk posture: It's not enough to secure when you ship. You need alerts when any dependency or component’s risk changes: new vulnerabilities, suspicious behavior, misuse of credentials, or signs that a trusted package may have been modified after release.
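
As a concrete illustration of the SBOM point above: a minimal sketch, assuming a CycloneDX-style JSON SBOM and a hypothetical indicator list (the version shown is illustrative, not a confirmed IoC), of how a team might sweep its dependency inventory for compromised packages:

    import json

    # Hypothetical indicators; real responders would pull these from an
    # advisory or IoC feed for the Shai-Hulud campaign.
    COMPROMISED = {
        ("tinycolor", "4.1.1"),  # illustrative version, not a confirmed IoC
    }

    def find_exposed(sbom_path: str) -> list[str]:
        """Scan a CycloneDX-style SBOM for known-compromised components."""
        with open(sbom_path) as f:
            sbom = json.load(f)
        return [
            f'{c["name"]}@{c["version"]}'
            for c in sbom.get("components", [])
            if (c.get("name"), c.get("version")) in COMPROMISED
        ]

    if __name__ == "__main__":
        for hit in find_exposed("sbom.json"):
            print("EXPOSED:", hit)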


Cloud Sovereignty: Feature. Bug. Feature. Repeat!

Cloud sovereignty isn’t just a buzzword anymore, argues Kushwaha. “It’s a real concern for businesses across the world. The pattern is clear. The cloud isn’t a one-size-fits-all solution anymore. Companies are starting to realise that sometimes control, cost, and compliance matter more than convenience.” ... Cloud sovereignty is increasingly critical due to the evolving geopolitical scenario, government and industry-specific regulations, and vendor lock-in with heavy reliance on hyperscalers. The concept has gained momentum and will continue to do so because technology has become pervasive and critical for running a state or country, and any misuse by foreign actors can cause major repercussions, the way Bavishi sees it. Prof. Bhatt observes that true digital sovereignty remains a distant dream, and that achieving it will require building a robust ecosystem over decades. This isn’t counterintuitive; it’s evolution, as Kushwaha puts it. “The cloud’s original promise was one of freedom. Today, when it comes to the cloud, freedom means more control. Businesses investing heavily in digital futures can’t afford to ignore the fine print in hyperscaler contracts or the reach of foreign laws. Sovereignty is the foundation for building safely in a fragmented world.” ... Organisations have recognised the risks of digital dependencies and are looking for better options. There is no turning back, Karlitschek underlines.


Securing AI to Benefit from AI

As organizations begin to integrate AI into defensive workflows, identity security becomes the foundation for trust. Every model, script, or autonomous agent operating in a production environment now represents a new identity — one capable of accessing data, issuing commands, and influencing defensive outcomes. If those identities aren't properly governed, the tools meant to strengthen security can quietly become sources of risk. The emergence of agentic AI systems makes this especially important. These systems don't just analyze; they may act without human intervention. They triage alerts, enrich context, or trigger response playbooks under delegated authority from human operators. ... AI systems are capable of assisting human practitioners like an intern that never sleeps. However, it is critical for security teams to differentiate what to automate from what to augment. Some tasks benefit from full automation, especially those that are repeatable, measurable, and low-risk if an error occurs. ... Threat enrichment, log parsing, and alert deduplication are prime candidates for automation. These are data-heavy, pattern-driven processes where consistency outperforms creativity. By contrast, incident scoping, attribution, and response decisions rely on context that AI cannot fully grasp. Here, AI should assist by surfacing indicators, suggesting next steps, or summarizing findings while practitioners retain decision authority. Finding that balance requires maturity in process design.
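
To make the automate-versus-augment line concrete, alert deduplication is exactly the kind of repeatable, low-risk task the passage describes. A minimal sketch (the fingerprint fields are assumptions; which fields make two alerts "the same" is a policy choice):

    import hashlib
    import json

    def alert_fingerprint(alert: dict) -> str:
        # Fingerprint on the fields that define a duplicate; names are hypothetical.
        key = {k: alert.get(k) for k in ("rule_id", "src_ip", "dst_ip", "user")}
        return hashlib.sha256(json.dumps(key, sort_keys=True).encode()).hexdigest()

    def deduplicate(alerts: list[dict]) -> list[dict]:
        seen: set[str] = set()
        unique = []
        for alert in alerts:
            fp = alert_fingerprint(alert)
            if fp not in seen:  # keep the first alert per fingerprint
                seen.add(fp)
                unique.append(alert)
        return unique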


The Unkillable Threat: How Attackers Turned Blockchain Into Bulletproof Malware Infrastructure

When EtherHiding emerged in September 2023 as part of the CLEARFAKE campaign, it introduced a chilling reality: attackers no longer need vulnerable servers or hackable domains. They’ve found something far better—a global, decentralized infrastructure that literally cannot be shut down. ... When victims visit the infected page, the loader queries a smart contract on Ethereum or BNB Smart Chain using a read-only function call. ... Forget everything you know about disrupting cybercrime infrastructure. There is no command-and-control server to raid. No hosting provider to subpoena. No DNS to poison. The malicious code exists simultaneously everywhere and nowhere, distributed across thousands of blockchain nodes worldwide. As long as Ethereum or BNB Smart Chain operates—and they’re not going anywhere—the malware persists. Traditional law enforcement tactics, honed over decades of fighting cybercrime, suddenly encounter an immovable object. You cannot arrest a blockchain. You cannot seize a smart contract. You cannot compel a decentralized network to comply. ... The read-only nature of payload retrieval is perhaps the most insidious feature. When the loader queries the smart contract, it uses functions that don’t create transactions or blockchain records. 
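
For readers unfamiliar with the mechanism, the distinction between a read-only call and a transaction is the crux. A sketch in Python with web3.py (the endpoint, contract address, and ABI are placeholders, not real indicators) shows why payload retrieval leaves no on-chain trace:

    from web3 import Web3  # pip install web3

    w3 = Web3(Web3.HTTPProvider("https://bsc-dataseed.binance.org/"))

    # Minimal ABI for a hypothetical read-only getter of the kind EtherHiding abuses.
    ABI = [{
        "name": "get", "type": "function", "stateMutability": "view",
        "inputs": [], "outputs": [{"name": "", "type": "string"}],
    }]

    contract = w3.eth.contract(
        address="0x0000000000000000000000000000000000000000",  # placeholder
        abi=ABI,
    )

    # .call() executes the function locally on a node: nothing is signed,
    # broadcast, or mined, so no transaction record is ever created --
    # which is exactly why retrieval is invisible on-chain.
    payload = contract.functions.get().call()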


New 'Markovian Thinking' technique unlocks a path to million-token AI reasoning

Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called Markovian Thinking, the approach allows LLMs to engage in lengthy reasoning without incurring the prohibitive computational costs that currently limit such tasks. The team’s implementation, an environment named Delethink, structures the reasoning chain into fixed-size chunks, breaking the scaling problem that plagues very long LLM responses. Initial estimates show that for a 1.5B parameter model, this method can cut the costs of training by more than two-thirds compared to standard approaches. ... The researchers compared this to models trained with the standard LongCoT-RL method. Their findings indicate that the model trained with Delethink could reason up to 24,000 tokens, and matched or surpassed a LongCoT model trained with the same 24,000-token budget on math benchmarks. On other tasks like coding and PhD-level questions, Delethink also matched or slightly beat its LongCoT counterpart. “Overall, these results indicate that Delethink uses its thinking tokens as effectively as LongCoT-RL with reduced compute,” the researchers write. The benefits become even more pronounced when scaling beyond the training budget. 
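
The core idea is easy to sketch: rather than letting the context grow with the entire reasoning trace, the model thinks in fixed-size chunks and carries only a bounded state forward. A conceptual Python sketch, not the Mila team's code (chunk and carryover sizes are invented, and generate stands in for any LLM call):

    CHUNK_TOKENS = 8_000      # illustrative, not Delethink's actual setting
    CARRYOVER_CHARS = 2_000   # bounded state passed between chunks

    def generate(prompt: str, max_tokens: int) -> str:
        raise NotImplementedError("stand-in for an LLM inference call")

    def markovian_reason(question: str, max_chunks: int = 12) -> str:
        state = ""  # compact carryover instead of the full trace
        for _ in range(max_chunks):
            prompt = (f"Question: {question}\n"
                      f"State so far: {state}\n"
                      "Continue reasoning.")
            chunk = generate(prompt, max_tokens=CHUNK_TOKENS)
            if "FINAL ANSWER:" in chunk:
                return chunk.split("FINAL ANSWER:")[-1].strip()
            # Only the tail of each chunk survives, so the context the model
            # attends to stays fixed no matter how long the total reasoning runs.
            state = chunk[-CARRYOVER_CHARS:]
        return state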


The dazzling appeal of the neoclouds

While their purpose-built design gives them an advantage for AI workloads, neoclouds also bring complexities and trade-offs. Enterprises need to understand where these platforms excel and plan how to integrate them most effectively into broader cloud strategies. Let’s explore why this buzzword demands your attention and how to stay ahead in this new era of cloud computing. ... Neoclouds, unburdened by the need to support everything, are outpacing hyperscalers in areas like agility, pricing, and speed of deployment for AI workloads. A shortage of GPUs and data center capacity also benefits neocloud providers, which are smaller and nimbler, allowing them to scale quickly and meet growing demand more effectively. This agility has made them increasingly attractive to AI researchers, startups, and enterprises transitioning to AI-powered technologies. ... Neoclouds are transforming cloud computing by offering purpose-built, cost-effective infrastructure for AI workloads. Their price advantages will challenge traditional cloud providers’ market share, reshape the industry, and change enterprise perceptions, fueled by their expected rapid growth. As enterprises find themselves at the crossroads of innovation and infrastructure, they must carefully assess how neoclouds can fit into their broader architectural strategies. 


Wi-Fi 8 is coming — and it’s going to make AI a lot faster

Unlike previous generations of Wi-Fi that competed on peak throughput numbers, Wi-Fi 8 prioritizes consistent performance under challenging conditions. The specification introduces coordinated multi-access point features, dynamic spectrum management, and hardware-accelerated telemetry designed for AI workloads at the network edge. ... A core part of the Wi-Fi 8 architecture is an approach known as Ultra High Reliability (UHR). This architectural philosophy targets the 99th percentile user experience rather than best-case scenarios. The innovation addresses AI application requirements that demand symmetric bandwidth, consistent sub-5-millisecond latency and reliable uplink performance. ... Wi-Fi 8 introduces Extended Long Range (ELR) mode specifically for IoT devices. This feature uses lower data rates with more robust coding to extend coverage. The tradeoff accepts reduced throughput for dramatically improved range. ELR operates by increasing symbol duration and using lower-order modulation. This improves the link budget for battery-powered sensors, smart home devices and outdoor IoT deployments. ... Wi-Fi 8 enhances roaming to maintain sub-millisecond handoff latency. The specification includes improved Fast Initial Link Setup (FILS) and introduces coordinated roaming decisions across the infrastructure. Access points share client context information before handoff. 
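
The range-for-rate trade in ELR can be made concrete with back-of-the-envelope link-budget arithmetic (the SNR figures below are textbook ballpark values, not Wi-Fi 8 spec numbers):

    # Rough required-SNR figures for common modulation orders (ballpark values).
    REQUIRED_SNR_DB = {"BPSK": 4, "QPSK": 7, "16-QAM": 14, "64-QAM": 20}

    def path_loss_budget_db(tx_power_dbm: float, noise_floor_dbm: float,
                            modulation: str) -> float:
        # Budget = transmit power minus (noise floor + SNR the modulation needs)
        return tx_power_dbm - (noise_floor_dbm + REQUIRED_SNR_DB[modulation])

    for mod in REQUIRED_SNR_DB:
        budget = path_loss_budget_db(20, -95, mod)
        print(f"{mod:>7}: {budget:.0f} dB path-loss budget")
    # Dropping from 64-QAM to BPSK buys roughly 16 dB here: a large gain in
    # range, paid for with a fraction of the throughput.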


Life, death, and online identity: What happens to your online accounts after death?

Today, we lack the tools (protocols) and the regulations to enable digital estate management at scale. Law and regulation can force a change in behavior by large providers. However, lacking effective protocols to establish a mechanism to identify the decedent’s chosen individuals who will manage their digital estate, every service will have to design its own path. This creates an exceptional burden on individuals planning their digital estate, and on individuals who manage the digital estates of the deceased. ... When we set out to write this paper, we wanted to influence the large technology and social media platforms, politicians, regulators, estate planners, and others who can help change the status quo. Further, we hoped to influence standards development organizations, such as the OpenID Foundation and the Internet Engineering Task Force (IETF), and their members. As standards developers in the realm of identity, we have an obligation to the people we serve to consider identity from birth to death and beyond, to ensure every human receives the respect they deserve in life and in death. Additionally, we wrote the planning guide to help individuals plan for their own digital estate. By giving people the tools to help describe, document, and manage their digital estates proactively, we can raise more awareness and provide tools to help protect individuals at one of the most vulnerable moments of their lives.


5 steps to help CIOs land a board seat

Serving on a board isn’t an extension of an operational role. One issue CIOs face is not understanding the difference between executive management and governance, Stadolnik says. “They’re there to advise, not audit or lead the current company’s CIO,” he adds. In the boardroom, the mandate is to provide strategy, governance, and oversight, not execution. That shift, Stadolnik says, can be jarring for tech leaders who’ve spent their careers driving operational results. ... “There were some broad risk areas where having strong technical leadership was valuable, but it was hard for boards to carve out a full seat just for that, which is why having CIO-plus roles was very beneficial,” says Cullivan. The issue of access is another uphill battle for CIOs. As Payne found, the network effect can play a huge role in seeking a board role. But not every IT leader has the right kind of network that can open the door to these opportunities. ... Boards expect directors to bring scope across business disciplines and issues, not just depth in one functional area. Stadolnik encourages CIOs to utilize their strategic orientation, results focus, and collaborative and influence skills to set themselves up for additional responsibilities like procurement, supply chain, shared services, and others. “It’s those executive leadership capabilities that will unlock broader roles,” he says. Experience in those broader roles bolsters a CIO’s board résumé and credibility.


Microservices Without Meltdown: 7 Pragmatic Patterns That Stick

A good sniff test: can we describe the service’s job in one short sentence, and does a single team wake up if it misbehaves? If not, we’ve drawn mural art, not an interface. Start with a small handful of services you can name plainly—orders, payments, catalog—then pressure-test them with real flows. When a request spans three services just to answer a simple question, that’s a hint we’ve sliced too thin or coupled too often. ... Microservices live and die by their contracts. We like contracts that are explicit, versioned, and backwards-friendly. “Backwards-friendly” means old clients keep working for a while when we add fields or new behaviors. For HTTP APIs, OpenAPI plus consistent error formats makes a huge difference. ... We need timeouts and retries that fit our service behavior, or we’ll turn small hiccups into big outages. For east-west traffic, a service mesh or smart gateway helps us nudge traffic safely and set per-route policies. We’re fans of explicit settings instead of magical defaults. ... Each service owns its tables; cross-service read needs go through APIs or asynchronous replication. When a write spans multiple services, aim for a sequence of local commits with compensating actions instead of distributed locks. Yes, we’re describing sagas without the capes: do the smallest thing, record it durably, then trigger the next hop. 
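
The saga idea in that last point is compact enough to sketch. A minimal Python version (the service calls are hypothetical stand-ins; a production saga would also record each completed step durably before triggering the next hop):

    from typing import Callable

    # Hypothetical local-commit operations and their compensations.
    def reserve_inventory() -> None: ...
    def release_inventory() -> None: ...
    def charge_payment() -> None: ...
    def refund_payment() -> None: ...
    def create_shipment() -> None: ...
    def cancel_shipment() -> None: ...

    def place_order_saga() -> bool:
        steps = [
            (reserve_inventory, release_inventory),
            (charge_payment, refund_payment),
            (create_shipment, cancel_shipment),
        ]
        compensations: list[Callable[[], None]] = []
        for do, undo in steps:
            try:
                do()                         # local commit in one service
                compensations.append(undo)   # remember how to back it out
            except Exception:
                for comp in reversed(compensations):
                    comp()                   # compensate newest-first
                return False
        return True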

Daily Tech Digest - October 21, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone


The teacher is the new engineer: Inside the rise of AI enablement and PromptOps

Enterprises should onboard AI agents as deliberately as they onboard people — with job descriptions, training curricula, feedback loops and performance reviews. This is a cross-functional effort across data science, security, compliance, design, HR and the end users who will work with the system daily. ... Don’t let your AI’s first “training” be with real customers. Build high-fidelity sandboxes and stress-test tone, reasoning and edge cases — then evaluate with human graders. ... As onboarding matures, expect to see AI enablement managers and PromptOps specialists in more org charts, curating prompts, managing retrieval sources, running eval suites and coordinating cross-functional updates. Microsoft’s internal Copilot rollout points to this operational discipline: Centers of excellence, governance templates and executive-ready deployment playbooks. These practitioners are the “teachers” who keep AI aligned with fast-moving business goals. ... In a future where every employee has an AI teammate, the organizations that take onboarding seriously will move faster, safer and with greater purpose. Gen AI doesn’t just need data or compute; it needs guidance, goals, and growth plans. Treating AI systems as teachable, improvable and accountable team members turns hype into habitual value.
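
A minimal sketch of the kind of eval suite a PromptOps specialist might run before a prompt change ships (the cases, fields, and run_agent hook are all invented for illustration):

    EVAL_CASES = [
        {"input": "Customer asks for a refund past the deadline",
         "must_include": ["policy"], "must_avoid": ["guarantee"]},
        {"input": "User reports a login outage",
         "must_include": ["escalate"], "must_avoid": []},
    ]

    def run_agent(prompt: str, case_input: str) -> str:
        raise NotImplementedError("stand-in for the deployed agent")

    def pass_rate(prompt: str) -> float:
        passed = 0
        for case in EVAL_CASES:
            reply = run_agent(prompt, case["input"]).lower()
            ok = (all(w in reply for w in case["must_include"])
                  and not any(w in reply for w in case["must_avoid"]))
            passed += ok
        return passed / len(EVAL_CASES)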


How CIOs Can Unlock Business Agility with Modular Cloud Architectures

A modular cloud architecture is one that makes a variety of discrete cloud services available on demand. The services are hosted across multiple cloud platforms, and different units within the business can pick and choose among specific services to meet their needs. ... At a high level, the main challenge stemming from a modular cloud architecture is that it adds complexity to an organization's cloud strategy. The more cloud services the CIO makes available, the harder it becomes to ensure that everyone is using them in a secure, efficient, cost-effective way. This is why a pivot toward a modular cloud strategy must be accompanied by governance and management practices that keep these challenges in check. ... As they work to ensure that the business can consume a wide selection of cloud services efficiently and securely, IT leaders may take inspiration from a practice known as platform engineering, which has grown in popularity in recent years. Platform engineering is the establishment of approved IT solutions that a business's internal users can access on a self-service basis, usually via a type of portal known as an internal developer platform. Historically, organizations have used platform engineering primarily to provide software developers with access to development tools and environments, not to manage cloud services. But the same sort of approach could help to streamline access to modular, composable cloud solutions.


8 platform engineering anti-patterns

Establishing a product mindset also helps drive improvement of the platform over time. “Start with a minimum viable platform to iterate and adapt based on feedback while also considering the need to measure the platform’s impact,” says Platform Engineering’s Galante. ... Top-down mandates for new technologies can easily turn off developers, especially when they alter existing workflows. Without the ability to contribute and iterate, the platform drifts from developer needs, prompting workarounds. ... “The feeling of being heard and understood is very important,” says Zohar Einy, CEO at Port, provider of a developer portal. “Users are more receptive to the portal once they know it’s been built after someone asked about their problems.” By performing user research and conducting developer surveys up front, platform engineers can discover the needs of all stakeholders and create platforms that mesh better with existing workflows and benefit productivity. ... Although platform engineering case studies from large companies, like Spotify, Expedia, or American Airlines, look impressive on paper, that doesn’t mean their strategies will transfer well to other organizations, especially those with mid-size or small-scale environments. ... Platform engineering requires more energy beyond a simple rebrand. “I’ve seen teams simply being renamed from operations or infrastructure teams to platform engineering teams, with very little change or benefit to the organization,” says Paula Kennedy.


How Ransomware’s Data Theft Evolution is Rewriting Cyber Insurance Risk Models

Traditional cyber insurance risk models assume ransomware means encrypted files and brief business interruptions. The shift toward data theft creates complex claim scenarios that span multiple coverage lines and expose gaps in traditional policy structures. When attackers steal data rather than just encrypting it, the resulting claims can simultaneously trigger business interruption coverage, professional liability protection, regulatory defense coverage and crisis management. Each coverage line may have different limits, deductibles and exclusions, creating complicated interactions that claims adjusters struggle to parse. Modern business relationships are interconnected, which amplifies complications. A data breach at one organization can trigger liability claims from business partners, regulatory investigations across multiple jurisdictions, and contractual disputes with vendors and customers. Dependencies on third-party services create cascading exposures that traditional risk models fail to capture. ... The insurance implications are profound. Manual risk assessment processes cannot keep pace with the volume and sophistication of AI-enhanced attacks. Carriers still relying on traditional underwriting approaches face a fundamental mismatch of human-speed risk evaluation against machine-speed threat deployment.


Network security devices endanger orgs with ’90s-era flaws

“Attackers are not trying to do the newest and greatest thing every single day,” watchTowr’s Harris explains. “They will do what works at scale. And we’ve now just seen that phishing has become objectively too expensive or too unsuccessful at scale to justify the time investment in deploying mailing infrastructure, getting domains and sender protocols in place, finding ways to bypass EDR, AV, sandboxes, mail filters, etc. It is now easier to find a 1990s-tier vulnerability in a border device where EDR typically isn’t deployed, exploit that, and then pivot from there.” ... “Identifying a command injection that is looking for a command string being passed to a system in some C or C++ code is not a terribly difficult thing to find,” Gross says. “But I think the trouble is understanding a really complicated appliance like these security network appliances. It’s not just like a single web application and that’s it.” This can also make it difficult for product developers themselves to understand the risks of a feature they add on one component if they don’t have a full understanding of the entire product architecture. ... Another problem? These appliances have a lot of legacy code, some of it 10 years old or older. Plus, products and code bases inherited through acquisitions often mean the developers who originally wrote the code might be long gone.
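
The vulnerability class Gross describes looks the same in any language; transposed to Python for illustration, the exploitable pattern and its fix sit a line apart:

    import subprocess

    def ping_vulnerable(host: str) -> str:
        # Untrusted input concatenated into a shell string: a host value of
        # "8.8.8.8; rm -rf /" would execute the injected command.
        return subprocess.run("ping -c 1 " + host, shell=True,
                              capture_output=True, text=True).stdout

    def ping_safe(host: str) -> str:
        # Argument vector, no shell: the input cannot grow extra commands.
        return subprocess.run(["ping", "-c", "1", host],
                              capture_output=True, text=True).stdout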


When everything’s connected, everything’s at risk

Treat OT changes as business changes (because they are). Involve plant managers, safety managers, and maintenance leadership in risk decisions. Be sure to test all changes in a development environment that adequately models the production environment where possible. Schedule changes during planned downtime with rollbacks ready. Build visibility passively with read-only collectors and protocol-aware monitoring to create asset and traffic maps without requiring PLC access. ... No one can predict the future. However, if the past is an indicator of the future, adversaries will continue to increasingly bypass devices and hijack cloud consoles, API tokens and remote management platforms to impact businesses on an industrial scale. Another area of risk is the firmware supply chain. Tiny devices often carry third-party code that we can’t easily patch. We’ll face more “patch by replacement” realities, where the only fix is swapping hardware. Additionally, machine identities at the edge, such as certificates and tokens, will outnumber humans by orders of magnitude. The lifecycle and privileges of those identities are the new perimeter. From a threat perspective, we will see an increasing number of ransomware attacks targeting physical disruption to increase leverage for the threat actors, as well as private 5G/smart facilities that, if misconfigured, propagate risk faster than any LAN ever has.


Software engineering foundations for the AI-native era

As developers begin composing software instead of coding line by line, they will need API-enabled composable components and services to stitch together. Software engineering leaders should begin by defining a goal to achieve a composable architecture that is based on modern multiexperience composable applications, APIs and loosely coupled API-first services. ... Software engineering leaders should support AI-ready data by organizing enterprise data assets for AI use. Generative AI is most useful when the LLM is paired with context-specific data. Platform engineering and internal developer portals provide the vehicles by which this data can be packaged, found and integrated by developers. The urgent demand for AI-ready data to support AI requires evolutionary changes to data management and upgrades to architecture, platforms, skills and processes. Critically, Model Context Protocol (MCP) needs to be considered. ... Software engineers can become risk-averse unless they are given the freedom, psychological safety and environment for risk taking and experimentation. Leaders must establish a culture of innovation where their teams are eager to experiment with AI technologies. This also applies in software product ownership, where experiments and innovation lead to greater optimization of the value delivered to customers.


What Does a 'Sovereign Cloud' Really Mean?

First, a sovereign cloud could be approached as a matter of procurement: Canada could shift its contracts from the US tech companies that currently dominate the approved list to non-American alternatives. At present, eight cloud service providers (CSPs) are approved for use by the Canadian government, seven of which are American. Accordingly, there is a clear opportunity to diversify procurement, particularly towards European CSPs, as suggested by the government’s ongoing discussions with France’s OVH Cloud. ... Second, a sovereign cloud could be defined as cloud infrastructure that is not only located in Canada and insulated from foreign legal access, but also owned by Canadian entities. Practically speaking, this would mean procuring services from domestic companies, a step the government has already taken with ThinkOn, the only non-American CSP on the government’s approved list. ... Third, perhaps true cloud sovereignty might require more direct state intervention and a publicly built and maintained cloud. The Canadian government could develop in-house capacities for cloud computing and exercise the highest possible degree of control over government data. A dedicated Crown corporation could be established to serve the government’s cloud computing needs. ... No matter how we approach it, cloud sovereignty will be costly.


Big Tech’s trust crisis: Why there is now the need for regulatory alignment

When companies deploy AI features primarily to establish market position rather than solve user problems, they create what might be termed ‘trust debt’ – a technical and social liability that compounds over time. This manifests in several ways, including degraded user experience, increased attack surfaces, and regulatory friction that ultimately impacts system performance and scalability. ... The emerging landscape of AI governance frameworks, from the EU AI Act to ISO 42001, shows an attempt to codify engineering best practices for managing algorithmic systems at scale. These standards address several technical realities, including bias in training data, security vulnerabilities in model inference, and intellectual property risks in data processing pipelines. Organisations implementing robust AI governance frameworks achieve regulatory compliance while adopting proven system design patterns that reduce operational risk. ... The technical implementation of trust requires embedding privacy and security considerations throughout the development lifecycle – what security engineers call ‘shifting left’ on governance. This approach treats regulatory compliance as architectural requirements that shape system design from inception. Companies that successfully integrate governance into their technical architecture find that compliance becomes a byproduct of good engineering practices which, over time, creates a series of sustainable competitive advantages.


The most sustainable data center is the one that’s already built: The business case for a ‘retrofit first’ mandate

From a sustainability standpoint, reusing and retrofitting legacy infrastructure is the single most impactful step our industry can take. Every megawatt of IT load that’s migrated into an existing site avoids the manufacturing, transport, and installation of new chillers, pumps, generators, piping, conduit, and switchgear and prevents the waste disposal associated with demolition. Sectors like healthcare, airports, and manufacturing have long proven that, with proper maintenance, mechanical and electrical systems can operate reliably for 30–50 years, and distribution piping can last a century. The data center industry – known for redundancy and resilience – can and should follow suit. The good news is that most data centers were built to last. ... When executed strategically, retrofits can reduce capital costs by 30–50 percent compared to greenfield construction, while accelerating time to market by months or even years. They also strengthen ESG reporting credibility, proving that sustainability and profitability can coexist. ... At the end of the day, I agree with Ms. Kass – the cleanest data center is the one that does not need to be built. For those that are already built, reusing and revitalizing the infrastructure we already have is not just a responsible environmental choice, it’s a sound business strategy that conserves capital, accelerates deployment, and aligns our industry’s growth with society’s expectations.

Daily Tech Digest - October 19, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How CIOs Can Close the IT Workforce Skills Gap for an AI-First Organization

Deliberately building AI skills among existing talent, rather than searching outside the organization for new hires or leaving skills development to chance, can help develop the desired institutional knowledge and build an IT-resilient workforce. AI-first is a strategic approach that guides the use of AI technology within an enterprise or a unit within it, with the intention of maximizing the benefits from AI. IT organizations must maintain ongoing skills development to be successful as an AI-first organization. ... In developing the future-state competency map, CIOs must include AI-specific skills and competencies, ensuring each role has measurable expectations aligned with the company’s strategic objectives related to AI. CIOs must also partner with HR to design and establish AI literacy programs. While HR leaders are experts in scaling learning initiatives and standardizing tools, CIOs have more insight into the foundational AI skills, training, and technical support required in the enterprise. CIOs should regularly review whether their teams’ AI capabilities contribute to faster product launches or improved customer insights. ... Addressing employees’ key concerns is a critical step for any AI change management initiative to be successful. AI is fundamentally changing traditional workplace operating models by democratizing access to technology, generating insights, and changing the relationship between people and technology.


20 Strategies To Strengthen Your Crisis Management Playbook

The regular review and refinement of protocols ensures alignment when a scenario arises. At our company, we centralize contacts, prepare for a range of scenarios and set outreach guidelines. This enables rapid response, timely updates and meaningful support, which safeguards trust and strengthens relationships with employees, stakeholders and clients. ... Unintended consequences often arise when stakeholder expectations are left out of crisis planning. Leaders should bake audience insights into their playbooks early—not after headlines hit. Anticipating concerns builds trust and gives you the clarity and credibility to lead through the tough moments. ... Know when to do nothing. Sometimes the instinct to respond immediately leads to increased confusion and puts your brand even further under the microscope. The best crisis managers know when to stop, see how things play out and respond accordingly (if at all), all while preparing for a variety of scenarios behind the scenes. ... Act like a board of directors. A crisis is not an event; it's a stress test of brand, enterprise and reputation infrastructure and resilience. Crisis plans must align with business continuity, incident response and disaster recovery plans. Marketing and communications must co-lead with the exec team, legal, ops and regulatory to guide action before commercial, brand equity and reputation risk escalates.


Abstract or die: Why AI enterprises can't afford rigid vector stacks

Without portability, organizations stagnate. They have technical debt from recursive code paths, are hesitant to adopt new technology and cannot move prototypes to production at pace. In effect, the database is a bottleneck rather than an accelerator. Portability, or the ability to move underlying infrastructure without re-encoding the application, is ever more a strategic requirement for enterprises rolling out AI at scale. ... Instead of having application code directly bound to some specific vector backend, companies can compile against an abstraction layer that normalizes operations like inserts, queries and filtering. This doesn't necessarily eliminate the need to choose a backend; it makes that choice less rigid. Development teams can start with DuckDB or SQLite in the lab, then scale up to Postgres or MySQL for production and ultimately adopt a special-purpose cloud vector DB without having to re-architect the application. ... What's happening in the vector space is one example of a bigger trend: open-source abstractions becoming critical infrastructure. In data formats, it is Apache Arrow; in ML models, ONNX; in orchestration, Kubernetes; in AI APIs, Any-LLM and other such frameworks. These projects succeed, not by adding new capability, but by removing friction. They enable enterprises to move more quickly, hedge bets and evolve along with the ecosystem. Vector DB adapters continue this legacy, transforming a high-speed, fragmented space into infrastructure that enterprises can truly depend on. ...
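
A minimal sketch of such an abstraction layer (the interface and backend names are invented for illustration): application code depends only on the interface, so the lab-grade backend below could be swapped for SQLite, Postgres with pgvector, or a cloud vector DB without re-architecting.

    import math
    from typing import Protocol

    class VectorStore(Protocol):
        def insert(self, key: str, vector: list[float]) -> None: ...
        def query(self, vector: list[float], top_k: int) -> list[str]: ...

    class InMemoryStore:
        """Lab-grade backend; production swap-ins keep the same methods."""
        def __init__(self) -> None:
            self._rows: dict[str, list[float]] = {}

        def insert(self, key: str, vector: list[float]) -> None:
            self._rows[key] = vector

        def query(self, vector: list[float], top_k: int) -> list[str]:
            def cosine(a: list[float], b: list[float]) -> float:
                dot = sum(x * y for x, y in zip(a, b))
                norm = (math.sqrt(sum(x * x for x in a))
                        * math.sqrt(sum(y * y for y in b)))
                return dot / norm if norm else 0.0
            ranked = sorted(self._rows,
                            key=lambda k: cosine(self._rows[k], vector),
                            reverse=True)
            return ranked[:top_k]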


AWS's New Security VP: A Turning Point for AI Cybersecurity Leadership?

"As we move forward into 2026, the breadth and depth of AI opportunities, products, and threats globally present a paradigm shift in cyber defense," Lohrmann said. He added that he was encouraged by AWS's recognition of the need for additional focus and attention on these cyberthreats. ... "Agentic AI attackers can now operate with a 'reflection loop' so they are effectively self-learning from failed attacks and modifying their attack approach automatically," said Simon Ratcliffe, fractional CIO at Freeman Clarke. "This means the attacks are faster and there are more of them … putting overwhelming pressure on CISOs to respond." ... "I think the CISO's role will evolve to meet the broader governance ecosystem, bringing together AI security specialists, data scientists, compliance officers, and ethics leads," she said, adding cybersecurity's mantra that AI security is everyone's business. "But it demands dedicated expertise," she said. "Going forward, I hope that organizations treat AI governance and assurance as integral parts of cybersecurity, not siloed add-ons." ... In Liebig's opinion, the future of cybersecurity leadership looks less hierarchical than it does now. "As for who owns that risk, I believe the CISO remains accountable, but new roles are emerging to operationalize AI integrity -- model risk officers, AI security architects, and governance engineers," he explained. "The CISO's role should expand horizontally, ensuring AI aligns to enterprise trust frameworks, not stand apart from them."


The Top 5 Technology Trends For 2026

In recent years, we've seen industry, governments, education and everyday folk scrambling to adapt to the disruptive impact of AI. But by 2026, we're starting to get answers to some of the big questions around its effect on jobs, business and day-to-day life. Now, the focus shifts from simply reacting to reinventing and reshaping in order to find our place in this brave, different and sometimes frightening new world.  ... Rather than simply answering questions and generating content, agents take action on our behalf, and in 2026, this will become an increasingly frequent and normal occurrence in everyday life. From automating business decision-making to managing and coordinating hectic family schedules, AI agents will handle the “busy work” involved in planning and problem-solving, freeing us up to focus on the big picture or simply slowing down and enjoying life. ... Quantum computing harnesses the strange and seemingly counterintuitive behavior of particles at the sub-atomic level to accomplish many complex computing tasks millions of times faster than "classic" computers. For the last decade, there's been excitement and hype over their performance in labs and research environments, but in 2026, we are likely to see further adoption in the real world. While this trend might not appear to noticeably affect us in our day-to-day lives, the impact on business, industry and science will begin to take shape in noticeable ways.


How Successful CTOs Orchestrate Business Results at Every Stage

As companies mature, their technical needs shift from building for the present to a long-term vision, strategic partnerships, and leveraging technology to drive business goals. The Strategist CTO combines deep technical acumen with business acumen and a deep understanding of the customer journey. This leader collaborates with other executives on strategic planning, but always through the lens of where customers are heading, not strictly where technology is going. ... For large enterprises with complex ecosystems and large customer bases, stability, security, and operational efficiency are paramount. This is where the Guardian CTO safeguards the customer experience through technical excellence. This leader oversees all aspects of technical infrastructure, ensuring the reliability, security, and availability of core technology assets with a clear understanding that every decision directly impacts customer trust. ... While these operational models often align with company growth stages, they aren't rigid. A company's needs can shift rapidly due to market conditions, competitive pressures, or unexpected challenges, and customer expectations can evolve just as quickly. ... The most successful companies create environments where technical leadership evolves in response to changing business needs, empowering technical leaders to pivot their focus from building to strategizing, or from innovating to safeguarding, as circumstances demand.


Financial services seek balance of trust, inclusion through face biometrics advances

Advances in the flexibility of face biometric liveness, deepfake detection and cross-sectoral collaboration represent the latest measures against fraud in remote financial services. A digital bank in the Philippines is integrating iProov’s face biometrics and liveness detection, OneConnect and a partner are entering a sandbox to work on protecting against deepfakes, and an event held by Facephi in Mexico explored the challenges of financial services trying to maintain digital trust while advancing inclusion. ... The Philippine digital bank will deploy advanced liveness detection tools as part of a new risk-based authentication strategy. “Our mission is to uplift the lives of all Filipinos through a secure, trusted, and accessible digital bank for all Filipinos, and that requires deploying resilient infrastructure capable of addressing sophisticated fraud,” said Russell Hernandez, chief information security officer at UnionDigital Bank. “As we shift toward risk-based authentication, we need a flexible and future-ready solution. iProov’s internationally proven ability to deliver ease of use, speed, and high security assurance – backed by reliable vendor support – ensures we can evolve our fraud defenses while sustaining customer trust and confidence.” ... The Mexican government has launched several initiatives to standardize digital identity infrastructure, including Llave MX — a single sign-on platform for public services — and the forthcoming National Digital Identity Document, designed to harmonize verification across sectors.


Why context, not just data, will define the future of AI in finance

Raw intelligence in AI and its ability to crunch numbers and process data is only one part of the equation. What it fundamentally lacks is wisdom, which comes from context. In areas like personal finance, building powerful models with deep domain knowledge is critical. The challenges range from misinterpretation of data to regulatory oversights that directly affect value for customers. That’s why at Intuit, we put “context at the core of AI.” This means moving beyond generic datasets to build specialised Financial Large Language Models (LLMs) trained on decades of anonymised financial expertise. It’s about understanding the interconnected journey of our customers across our ecosystem—from the freelancer managing invoices in QuickBooks to that same individual filing taxes with TurboTax, to them monitoring their financial health on Credit Karma. ... In the age of GenAI, craftsmanship in engineering is being redefined. It’s no longer just about writing every line of code or building models from scratch, but about architecting robust, extensible systems that empower others to innovate. The very soul of engineering is transcending code to become the art of architecture. The measure of excellence is no longer found in the meticulous construction of every model, but in the visionary design of systems that empower domain experts to innovate. With tools like GenStudio and GenUX abstracting complexity, the engineer’s role isn’t diminished but elevated. They evolve from builders of applications to architects of innovation ecosystems. 


The modernization mirage: CIOs must see through it to play the long game

Enterprise architecture, in too many organizations, has been reduced to frameworks: TOGAF, Zachman, FEAF. These models provide structure but rarely move capital or inspire investor trust. Boards don’t want frameworks. They want influence. That’s why I developed the Architecture Influence Flywheel — a practical model I use in board and transformation discussions. It rests on three pivots. Outcomes: Every architectural choice must tie directly to board-level priorities — growth, resilience, efficiency. ... Relationships: CIOs must serve as business-technology translators. Express progress not in technical jargon, but in investor language — return on capital, return on innovation, margin expansion and risk mitigation. ... Visible wins: Influence grows through undeniable demonstrations. A system that cuts onboarding time by 40%, an AI model that reduces fraud losses or an audit process that clears in half the time — these visible wins build momentum. ... Technologies rise and fall. Frameworks evolve. Titles shift. But one principle endures: What leaders tolerate defines their legacy. Playing the long game requires CIOs to ask uncomfortable questions: Will we tolerate AI models we cannot explain to regulators? Will we tolerate unchecked cloud sprawl without financial discipline? Will we tolerate compliance as a box-ticking exercise rather than a growth enabler?


What Is Cybersecurity Platformization?

Cybersecurity platformization is a strategic response to this complexity. It’s the move from a collection of disparate point solutions to a single, unified platform that integrates multiple security functions. Dickson describes it as the “canned integration of security tools so that they work together holistically to make the installation, maintenance and operation easier for the end customer across various tools in the security stack.” ... The most significant hidden cost of a fragmented, multitool security strategy is labor. Managing disconnected tools is a resource strain on an organization, as it requires individuals with specialized skills for each tool. This includes the labor-intensive task of managing API integrations and manually coding “shims,” or integrations to translate data between different tools, which often have separate protocols and proprietary interfaces, Dukes says. Beyond the cost of personnel, there’s the operational complexity.  ... One of the most immediate benefits of adopting a platform approach is cost reduction. This includes not only the reduction in licensing fees but also a reduction in the operational complexity and the number of specialized employees needed. ... Another key benefit is the well-worn concept of a “single pane of glass,” a single dashboard that enables IT security teams to have easier management and reporting. Instead of multiple tools with different interfaces and data formats, a unified platform streamlines everything into a single, cohesive view.

Daily Tech Digest - October 17, 2025


Quote for the day:

"Listen with curiosity, speak with honesty act with integrity." -- Roy T Bennett



AI Agents Transform Enterprise Application Development

There's now discussion about the agent development life cycle and the need to supervise or manage AI agent developers - calling for agent governance and infrastructure changes. New products, services and partnerships announced in the past few weeks support this trend. ... Enterprises were cautious about entrusting public models and agents with intellectual property. But the partnership with Anthropic could make models more trustworthy. "Enterprises are looking for AI they can actually trust with their code, their data and their day-to-day operations," said Mike Krieger, chief product officer at Anthropic. ... Embedding agentic AI within the fabric of enterprise architecture enables organizations to unlock transformative agility, reduce cognitive load and accelerate innovation - without compromising trust, compliance or control - says an IBM report titled "Architecting secure enterprise AI agents with MCP." Developers adopted globally recognized models such as Capability Maturity Model Integration, or CMMI, and CMMI-DEV as paths to improve the software development and maintenance processes. ... Enterprises must be prepared to implement radical process and infrastructure changes to successfully adopt AI agents in software delivery. AI agents must be managed by a central governance framework to enable complete visibility into agents, agent performance monitoring and security.


There’s no such thing as quantum incident response – and that changes everything

CISOs are pushing to have quantum security risks added to the corporate risk register. It belongs there. But the problem to be solved is not a quick fix, despite what some snake oil salesmen might be pushing. There is no simple configuration checkbox on AWS or Azure or GCP where you “turn on” post-quantum cryptography (PQC) and then you’re good to go. ... Without significant engagement from developers, QA teams and product owners, the quantum decryption risk will remain in play. You cannot transfer this risk by adding more cyber insurance policy coverage. The entire cyber insurance industry itself is in a bit of an existential doubt situation regarding whether cybersecurity can reasonably be insured against, given the systemic impacts of supply chain attacks that cascade across entire industries. ... The moment when a cryptographically relevant quantum computer comes into existence won’t arrive with fanfare or bombast. Hence, the idea of the silent boom. But by then, it will be too late for incident response. What you should do Monday morning: Start that data classification exercise. Figure out what needs protecting for the long term versus what has a shorter shelf life. In the world of DNS, we have Time To Live (TTL), which declares how long a resolver can cache a response. Think of a “PQC TTL” for your sensitive data, because not everything needs 30-year protection.
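
The "PQC TTL" framing maps directly onto Mosca's inequality: if the years your data must stay secret plus the years a migration takes exceed the years until a cryptographically relevant quantum computer exists, harvest-now-decrypt-later already has you. A sketch with invented planning figures:

    # Mosca's inequality: at risk if shelf_life + migration > years_until_CRQC.
    YEARS_UNTIL_CRQC = 12  # planning assumption, not a prediction

    DATA_CLASSES = {
        # asset: (required secrecy in years, estimated migration in years)
        "session tokens":    (0.1, 1),
        "financial records": (10, 3),
        "health records":    (30, 3),
        "state secrets":     (50, 5),
    }

    for asset, (shelf_life, migration) in DATA_CLASSES.items():
        at_risk = shelf_life + migration > YEARS_UNTIL_CRQC
        print(f"{asset:>17}: {'migrate now' if at_risk else 'can wait'}")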


Hackers Use Blockchain to Hide Malware in Plain Sight

At least two hacking groups are using public blockchains to conceal and control malware in ways that make their operations nearly impossible to dismantle, shows research from Google's Threat Intelligence Group. ... The technique, known as EtherHiding, embeds malicious instructions in blockchain smart contracts rather than traditional servers. Since the blockchain is decentralized and immutable, attackers gain what the researchers call a "bulletproof" infrastructure. The development signals an "escalation in the threat landscape," said Robert Wallace, consulting leader at Mandiant, which is part of Google Cloud. Hackers have found a method "resistant to law enforcement takedowns" that can be "easily modified for new campaigns." ... The group over time expanded its architecture from a single smart contract to a three-tier system mimicking a software "proxy pattern." This allows rapid updates without touching the compromised sites. One contract acts as a router, another fingerprints the victim's system and a third holds encrypted payload data and decryption keys. A single blockchain transaction, costing as little as a dollar in network fees, can change lure URLs or encryption keys across thousands of infected sites. The researchers said the threat actor used social engineering tricks like fake Cloudflare verification or Chrome update prompts to persuade victims to run malicious commands.


Everyone’s adopting AI, few are managing the risk

Across industries, many organizations are caught in what AuditBoard calls the “middle maturity trap.” Teams are active, frameworks are updated, and risks are logged, but progress fades after early success. When boards include risk oversight as a standing agenda item and align on shared performance goals, activity becomes consistent and forward-looking. When governance and ownership are unclear, adoption slows and collaboration fades. ... Many enterprises are adopting or updating risk frameworks, but implementation depth varies. The typical organization maps its controls to several frameworks, while leading firms embed thousands of requirements into daily operations. The report warns that “surface compliance” is common. Breadth without depth leaves gaps that only appear during audits or disruptions. Mature programs treat frameworks as living systems that evolve with business and regulatory change. ... The findings show that many organizations are investing heavily in risk management and AI, but maturity depends less on technology and more on integration. Advanced organizations use governance to connect teams and turn data into foresight. AuditBoard’s research suggests that as AI becomes more embedded in enterprise systems, risk leaders will need to move beyond activity and focus on consistency. Those that do will be better positioned to anticipate change and turn risk management into a strategic advantage.


A mini-CrowdStrike moment? Windows 11 update cripples dev environments

The October 2025 cumulative update (KB5066835) addressed security issues in Windows operating systems (OSes), but also appears to have blocked Windows’ ability to talk within itself. Localhost allows apps and services to communicate internally without using internet or external network access. Developers use the function to develop, test, and debug websites and apps locally on a Windows machine before releasing them to the public. ... When localhost stops working, entire application development environments can be impacted or “even grind to a halt,” causing internal processes and services to fail and stop communicating, he pointed out. This means developers are unable to test or run web applications locally. This issue is really about “denial of service,” where tools and processes dependent on internal loopback services break, he noted. Developers can’t debug locally, and automated testing processes can fail. At the same time, IT departments are left to troubleshoot, field an influx of service tickets, roll back patches, and look for workarounds. “This bug is definitely disruptive enough to cause delays, lost productivity, and frustration across teams,” said Avakian. ... This type of issue underscores the importance of quality control and thorough testing by third-party suppliers and vendors before releasing updates to commercial markets, he said. Not doing so can have significant downstream impacts and “erode trust” in the update process while making teams more cautious about patching.
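
A first-pass loopback check is easy to script (a diagnostic sketch only: it exercises raw TCP on 127.0.0.1, so a breakage higher in the stack, such as in HTTP handling, could still pass it):

    import socket

    def loopback_ok() -> bool:
        try:
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.bind(("127.0.0.1", 0))   # let the OS pick a free port
            server.listen(1)
            port = server.getsockname()[1]
            client = socket.create_connection(("127.0.0.1", port), timeout=2)
            client.close()
            server.close()
            return True
        except OSError:
            return False

    print("localhost OK" if loopback_ok() else "loopback broken")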


How Banks of Every Size Can Put AI to Work, and Take Back Control

For smaller banks and credit unions, the AI conversation begins with math. They want the same digital responsiveness as larger competitors but can’t afford the infrastructure or staffing that traditionally make that possible. The promise of AI, especially low-code and automated implementation, changes that equation. What once required teams of engineers and months of coding can now be deployed out-of-the-box, configured and pushed live in a day. That shift finally brings digital innovation within reach for smaller institutions that had long been priced out of it. But even when self-service tools are available, many institutions still rely on outside help for routine changes or maintenance. For these players, the first question is whether they’re willing or able to take product dev work in-house, even with "AI inside"; the next question is whether they can find partners that can meet them on their own terms. ... For mid-sized players, the AI opportunity centers on reclaiming control. These institutions typically have strong internal teams and clear strategic ideas, yet they remain bound by vendor SLAs that slow innovation. The gap between what they can envision and what they can deliver is wide. AI-driven orchestration tools, especially those that let internal teams configure and launch digital products directly, can help close that gap. By removing layers of technical dependency, mid-sized institutions can move from periodic rollouts to something closer to iterative improvement.


Why your AI is failing — and how a smarter data architecture can fix it

Traditional enterprises operate four separate, incompatible technology stacks, each optimized for different computing eras, not for AI reasoning capabilities. ... When you try to deploy AI across these fragmented stacks, chaos follows. The same business data gets replicated across systems with different formats and validation rules. Semantic relationships between business entities get lost during integration. Context critical for intelligent decision-making gets stripped away to optimize for system performance. AI systems receive technically clean datasets that are semantically impoverished and contextually devoid of meaning. ... As organizations begin shaping their enterprise general intelligence (EGI) architecture, critical operational intelligence remains trapped in disconnected silos. Engineering designs live in PLM systems, isolated from the ERP bill of materials. Quality metrics sit locked in MES platforms with no linkage to supplier performance data. Process parameters exist independently of equipment maintenance records. ... Enterprises solving the data architecture challenge gain sustainable competitive advantages. AI deployment timelines are measured in weeks rather than months. Decision accuracy reaches enterprise-grade reliability. Intelligence scales across all business domains. Innovation accelerates as AI creates new capabilities rather than just automating existing processes.


Under the hood of AI agents: A technical guide to the next frontier of gen AI

With agents, authorization works in two directions. First, of course, users require authorization to run the agents they’ve created. But as the agent is acting on the user’s behalf, it will usually require its own authorization to access networked resources. There are a few different ways to approach the problem of authorization. One is with an access delegation algorithm like OAuth, which essentially plumbs the authorization process through the agentic system. ... Agents also need to remember their prior interactions with their clients. If last week I told the restaurant booking agent what type of food I like, I don’t want to have to tell it again this week. The same goes for my price tolerance, the sort of ambiance I’m looking for, and so on. Long-term memory allows the agent to look up what it needs to know about prior conversations with the user. Agents don’t typically create long-term memories themselves, however. Instead, after a session is complete, the whole conversation passes to a separate AI model, which creates new long-term memories or updates existing ones. ... Agents are a new kind of software system, and they require new ways to think about observing, monitoring and auditing their behavior. Some of the questions we ask will look familiar: whether the agents are running fast enough, how much they’re costing, how many tool calls they’re making and whether users are happy.
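
A sketch of that long-term memory flow (the summarize_model hook and the memory schema are assumptions for illustration, not a specific product's API):

    import json

    def summarize_model(prompt: str) -> str:
        raise NotImplementedError("stand-in for the separate memory-writing LLM")

    def update_memories(user_id: str, transcript: list[str],
                        store: dict[str, list[str]]) -> None:
        # Runs after the session ends, not during it.
        existing = store.get(user_id, [])
        prompt = ("Existing memories:\n" + "\n".join(existing)
                  + "\n\nConversation:\n" + "\n".join(transcript)
                  + "\n\nReturn the updated memory list as a JSON array of strings.")
        store[user_id] = json.loads(summarize_model(prompt))

    # Usage: memory_store holds per-user memories across sessions.
    memory_store: dict[str, list[str]] = {}
    # update_memories("user-42", ["I prefer quiet Thai places"], memory_store)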


Data Is the New Advantage – If You Can Hold On To It

Proprietary data has emerged as one of the most valuable assets for enterprises—and increasingly, the expectation is that data must be stored indefinitely, ready to fuel future models, insights, and innovations as the technology continues to evolve. ... Globally, data architects, managers, and protectors are in uncharted territory. The arrival of generative AI has proven just how unpredictable and fast-moving technological leaps can be – and if there’s one thing the past few years have taught us, it’s that we can’t know what comes next. The only way to prepare is to ensure proprietary data is not just stored but preserved indefinitely. Tomorrow’s breakthroughs – whether in AI, analytics, or some other yet-unimagined technology – will depend on the depth and quality of the data you have today, and how well you can utilize the storage technologies of your choice to serve your data usage and workflow needs. ... The lesson is clear: don’t get left behind, because your competitors are learning these lessons as well. The enterprises that thrive in this next era of digital innovation will be those that recognize the enduring value of their data. That means keeping it all and planning to keep it forever. By embracing hybrid storage strategies that combine the strengths of tape, cloud, and on-premises systems, organizations can rise to the challenge of exponential growth, protect themselves from evolving threats, and ensure they are ready for whatever comes next. In the age of AI, your competitive advantage won’t just come from your technology stack.


Why women are leading the next chapter of data centers

Working her way up through finance and operations into large-scale digital infrastructure, Xiao’s career reflects a steady ascent across disciplines, including senior roles as president of Chindata Group and CFO at Shanghai Wangsu. These roles sharpened her ability to translate high-level strategy into expansion, particularly in the demanding data center sector. ... Today, she shapes BDC’s commercial playbook, which includes setting capital priorities, driving cost-efficient delivery models, and embedding resilience and sustainability into every development decision. In mission-critical industries like data centers, repeatability is a challenge. Every market has unique variables – land, power, water, regulatory frameworks, contractor ecosystems, and community engagement. ... For the next wave of talent, building credibility in the data center industry requires more than technical expertise. Engaging in forums, networks, and industry resources not only earns recognition and respect but also broadens knowledge and sharpens perspective. ... Peer networks within hyperscaler and operator communities, Xiao notes, are invaluable for exchanging insights and challenging assumptions. “Industry conferences, cross-company working groups, government-industry task forces, and ecosystem media engagements all matter. And for bench strength, I value partnerships with local technology innovators and digital twin or AI firms that help us run safer, greener facilities,” Xiao explains.