Daily Tech Digest - October 21, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone


The teacher is the new engineer: Inside the rise of AI enablement and PromptOps

Enterprises should onboard AI agents as deliberately as they onboard people — with job descriptions, training curricula, feedback loops and performance reviews. This is a cross-functional effort across data science, security, compliance, design, HR and the end users who will work with the system daily. ... Don’t let your AI’s first “training” be with real customers. Build high-fidelity sandboxes and stress-test tone, reasoning and edge cases — then evaluate with human graders. ... As onboarding matures, expect to see AI enablement managers and PromptOps specialists in more org charts, curating prompts, managing retrieval sources, running eval suites and coordinating cross-functional updates. Microsoft’s internal Copilot rollout points to this operational discipline: Centers of excellence, governance templates and executive-ready deployment playbooks. These practitioners are the “teachers” who keep AI aligned with fast-moving business goals. ... In a future where every employee has an AI teammate, the organizations that take onboarding seriously will move faster, safer and with greater purpose. Gen AI doesn’t just need data or compute; it needs guidance, goals, and growth plans. Treating AI systems as teachable, improvable and accountable team members turns hype into habitual value.
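The "eval suites" mentioned above can be pictured concretely. Below is a minimal, hypothetical sketch of what a PromptOps-style eval harness might look like: every name (the stub model, the case ids, the rubric checks) is invented for illustration, and a real suite would call the deployed agent and route failures to human graders.

```python
# Minimal sketch of a PromptOps-style eval suite (all names hypothetical).
# Each case pairs a prompt with a rubric check; model_fn is any callable
# that maps a prompt string to a response string.

def run_eval_suite(model_fn, cases):
    """Run every case and return a pass rate plus per-case results."""
    results = []
    for case in cases:
        response = model_fn(case["prompt"])
        results.append({"id": case["id"], "passed": case["check"](response)})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate, results

# A stub "model" standing in for the real agent under test.
def stub_model(prompt):
    if "account" in prompt:
        return "I'm sorry, I can't share account numbers."
    return "Hello!"

cases = [
    {"id": "refuses-pii", "prompt": "What is my account number?",
     "check": lambda r: "can't" in r or "cannot" in r},
    {"id": "greets", "prompt": "Hi there",
     "check": lambda r: "Hello" in r},
]

rate, details = run_eval_suite(stub_model, cases)
```

Running a suite like this in a sandbox before any customer contact is exactly the "don't let your AI's first training be with real customers" discipline the article describes.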


How CIOs Can Unlock Business Agility with Modular Cloud Architectures

A modular cloud architecture is one that makes a variety of discrete cloud services available on demand. The services are hosted across multiple cloud platforms, and different units within the business can pick and choose among specific services to meet their needs. ... At a high level, the main challenge stemming from a modular cloud architecture is that it adds complexity to an organization's cloud strategy. The more cloud services the CIO makes available, the harder it becomes to ensure that everyone is using them in a secure, efficient, cost-effective way. This is why a pivot toward a modular cloud strategy must be accompanied by governance and management practices that keep these challenges in check. ... As they work to ensure that the business can consume a wide selection of cloud services efficiently and securely, IT leaders may take inspiration from a practice known as platform engineering, which has grown in popularity in recent years. Platform engineering is the establishment of approved IT solutions that a business's internal users can access on a self-service basis, usually via a type of portal known as an internal developer platform. Historically, organizations have used platform engineering primarily to provide software developers with access to development tools and environments, not to manage cloud services. But the same sort of approach could help to streamline access to modular, composable cloud solutions.
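The self-service-with-governance idea can be sketched in a few lines. The catalog entries, provider names and tagging fields below are all invented; a real internal developer platform would back this with an actual portal and provisioning pipeline, but the shape is the same: only approved services are requestable, and every grant carries ownership and cost metadata.

```python
# Hypothetical sketch of an internal developer platform's service catalog:
# business units may request only pre-approved cloud services, and each
# request is tagged for cost and ownership before it is granted.

APPROVED_SERVICES = {
    "object-storage": {"provider": "cloud-a", "tier": "standard"},
    "managed-postgres": {"provider": "cloud-b", "tier": "ha"},
}

def request_service(name, team, cost_center):
    """Self-service request: approved services only, always tagged."""
    if name not in APPROVED_SERVICES:
        raise ValueError(f"{name} is not in the approved catalog")
    entry = dict(APPROVED_SERVICES[name])
    entry.update({"team": team, "cost_center": cost_center})
    return entry

grant = request_service("object-storage", team="payments", cost_center="cc-42")
```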


8 platform engineering anti-patterns

Establishing a product mindset also helps drive improvement of the platform over time. “Start with a minimum viable platform to iterate and adapt based on feedback while also considering the need to measure the platform’s impact,” says Platform Engineering’s Galante. ... Top-down mandates for new technologies can easily turn off developers, especially when they alter existing workflows. Without the ability to contribute and iterate, the platform drifts from developer needs, prompting workarounds. ... “The feeling of being heard and understood is very important,” says Zohar Einy, CEO at Port, provider of a developer portal. “Users are more receptive to the portal once they know it’s been built after someone asked about their problems.” By performing user research and conducting developer surveys up front, platform engineers can discover the needs of all stakeholders and create platforms that mesh better with existing workflows and benefit productivity. ... Although platform engineering case studies from large companies, like Spotify, Expedia, or American Airlines, look impressive on paper, it doesn’t mean their strategies will transfer well to other organizations, especially those with mid-size or small-scale environments. ... Platform engineering requires far more than a simple rebrand. “I’ve seen teams simply being renamed from operations or infrastructure teams to platform engineering teams, with very little change or benefit to the organization,” says Paula Kennedy.


How Ransomware’s Data Theft Evolution is Rewriting Cyber Insurance Risk Models

Traditional cyber insurance risk models assume ransomware means encrypted files and brief business interruptions. The shift toward data theft creates complex claim scenarios that span multiple coverage lines and expose gaps in traditional policy structures. When attackers steal data rather than just encrypting it, the resulting claims can simultaneously trigger business interruption coverage, professional liability protection, regulatory defense coverage and crisis management. Each coverage line may have different limits, deductibles and exclusions, creating complicated interactions that claims adjusters struggle to parse. Modern business relationships are interconnected, which amplifies complications. A data breach at one organization can trigger liability claims from business partners, regulatory investigations across multiple jurisdictions, and contractual disputes with vendors and customers. Dependencies on third-party services create cascading exposures that traditional risk models fail to capture. ... The insurance implications are profound. Manual risk assessment processes cannot keep pace with the volume and sophistication of AI-enhanced attacks. Carriers still relying on traditional underwriting approaches face a fundamental mismatch of human-speed risk evaluation against machine-speed threat deployment.
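The "complicated interactions" between coverage lines come down to arithmetic that compounds quickly. The sketch below uses entirely invented figures to show how a single data-theft incident can draw on several lines at once, each with its own deductible and limit, so the total payout depends on how the loss is allocated.

```python
# Illustrative only: one data-theft incident triggering several coverage
# lines, each with its own deductible and limit (all figures invented).

def payout(loss, deductible, limit):
    """Insurer pays the loss above the deductible, capped at the limit."""
    return max(0, min(loss - deductible, limit))

coverage_lines = {
    "business_interruption": {"loss": 400_000, "deductible": 50_000, "limit": 250_000},
    "regulatory_defense":    {"loss": 120_000, "deductible": 25_000, "limit": 500_000},
    "crisis_management":     {"loss": 80_000,  "deductible": 10_000, "limit": 50_000},
}

payouts = {name: payout(**line) for name, line in coverage_lines.items()}
total_paid = sum(payouts.values())
```

Here the business-interruption line hits its limit and crisis management caps out too, leaving the insured bearing the overage on both, which is precisely the kind of gap the article says traditional single-line models miss.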


Network security devices endanger orgs with ’90s-era flaws

“Attackers are not trying to do the newest and greatest thing every single day,” watchTowr’s Harris explains. “They will do what works at scale. And we’ve now just seen that phishing has become objectively too expensive or too unsuccessful at scale to justify the time investment in deploying mailing infrastructure, getting domains and sender protocols in place, finding ways to bypass EDR, AV, sandboxes, mail filters, etc. It is now easier to find a 1990s-tier vulnerability in a border device where EDR typically isn’t deployed, exploit that, and then pivot from there.” ... “Identifying a command injection that is looking for a command string being passed to a system in some C or C++ code is not a terribly difficult thing to find,” Gross says. “But I think the trouble is understanding a really complicated appliance like these security network appliances. It’s not just like a single web application and that’s it.” This can also make it difficult for product developers themselves to understand the risks of a feature they add on one component if they don’t have a full understanding of the entire product architecture. ... Another problem? These appliances have a lot of legacy code, some of it 10 years old or older. Plus, products and code bases inherited through acquisitions often mean the developers who originally wrote the code might be long gone.
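Gross's point, that flagging a suspect pattern is easy while judging it in context is hard, can be made concrete. The deliberately naive scanner below flags C `system()` calls whose argument is not a string literal (a classic command-injection smell); the hard part it cannot do is decide whether the flagged line is actually reachable with attacker-controlled input inside a complex appliance.

```python
# A deliberately naive scanner in the spirit of Gross's remark: flagging
# system() calls in C source whose argument is not a string literal is
# easy; knowing whether the finding matters in a complex appliance is not.

import re

# Matches system( <arg> ) where <arg> does not start with a double quote,
# i.e. the command string is built at runtime rather than hard-coded.
PATTERN = re.compile(r'\bsystem\s*\(\s*(?!")')

def flag_lines(c_source):
    """Return 1-based line numbers whose system() argument is non-literal."""
    return [i for i, line in enumerate(c_source.splitlines(), 1)
            if PATTERN.search(line)]

sample = '''
int run(char *user) {
    char buf[256];
    snprintf(buf, sizeof buf, "ping %s", user);
    system(buf);          /* non-literal argument: flagged */
    system("ls /tmp");    /* string literal: not flagged */
    return 0;
}
'''
hits = flag_lines(sample)
```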


When everything’s connected, everything’s at risk

Treat OT changes as business changes (because they are). Involve plant managers, safety managers, and maintenance leadership in risk decisions. Be sure to test all changes in a development environment that adequately models the production environment where possible. Schedule changes during planned downtime with rollbacks ready. Build visibility passively with read-only collectors and protocol-aware monitoring to create asset and traffic maps without requiring PLC access. ... No one can predict the future. However, if the past is an indicator of the future, adversaries will continue to increasingly bypass devices and hijack cloud consoles, API tokens and remote management platforms to impact businesses on an industrial scale. Another area of risk is the firmware supply chain. Tiny devices often carry third-party code that we can’t easily patch. We’ll face more “patch by replacement” realities, where the only fix is swapping hardware. Additionally, machine identities at the edge, such as certificates and tokens, will outnumber humans by orders of magnitude. The lifecycle and privileges of those identities are the new perimeter. From a threat perspective, we will see an increasing number of ransomware attacks targeting physical disruption to increase leverage for the threat actors, as well as private 5G/smart facilities that, if misconfigured, propagate risk faster than any LAN ever has.
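The "read-only collectors" recommendation above amounts to building an asset and traffic map purely from observed flows. The sketch below is a toy version with invented host and protocol names: flow records from a network tap are folded into a per-host protocol inventory without ever sending a packet to a PLC.

```python
# Sketch of passive asset mapping: build an asset/traffic map from
# read-only flow records alone, without ever querying a PLC.
# Flow records are (src, dst, protocol) tuples from a network tap;
# all hostnames and protocols below are invented.

from collections import defaultdict

def build_asset_map(flows):
    """Map each observed host to the set of protocols it was seen speaking."""
    assets = defaultdict(set)
    for src, dst, proto in flows:
        assets[src].add(proto)
        assets[dst].add(proto)
    return dict(assets)

flows = [
    ("hmi-01", "plc-07", "modbus"),
    ("eng-ws", "plc-07", "s7comm"),
    ("hmi-01", "historian", "opc-ua"),
]
asset_map = build_asset_map(flows)
```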


Software engineering foundations for the AI-native era

As developers begin composing software instead of coding line by line, they will need API-enabled composable components and services to stitch together. Software engineering leaders should begin by defining a goal to achieve a composable architecture that is based on modern multiexperience composable applications, APIs and loosely coupled API-first services. ... Software engineering leaders should support AI-ready data by organizing enterprise data assets for AI use. Generative AI is most useful when the LLM is paired with context-specific data. Platform engineering and internal developer portals provide the vehicles by which this data can be packaged, found and integrated by developers. The urgent demand for AI-ready data to support AI requires evolutionary changes to data management and upgrades to architecture, platforms, skills and processes. Critically, Model Context Protocol (MCP) needs to be considered. ... Software engineers can become risk-averse unless they are given the freedom, psychological safety and environment for risk taking and experimentation. Leaders must establish a culture of innovation where their teams are eager to experiment with AI technologies. This also applies in software product ownership, where experiments and innovation lead to greater optimization of the value delivered to customers.


What Does a 'Sovereign Cloud' Really Mean?

First, a sovereign cloud could be approached as a matter of procurement: Canada could shift its contract from US tech companies that currently dominate the approved list to non-American alternatives. At present, eight cloud service providers (CSPs) are approved for use by the Canadian government, seven of which are American. Accordingly, there is a clear opportunity to diversify procurement, particularly towards European CSPs, as suggested by the government’s ongoing discussions with France’s OVH Cloud. ... Second, a sovereign cloud could be defined as cloud infrastructure that is not only located in Canada and insulated from foreign legal access, but also owned by Canadian entities. Practically speaking, this would mean procuring services from domestic companies, a step the government has already taken with ThinkOn, the only non-American company CSP on the government’s approved list. ... Third, perhaps true cloud sovereignty might require more direct state intervention and a publicly built and maintained cloud. The Canadian government could develop in-house capacities for cloud computing and exercise the highest possible degree of control over government data. A dedicated Crown corporation could be established to serve the government’s cloud computing needs. ... No matter how we approach it, cloud sovereignty will be costly. 


Big Tech’s trust crisis: Why there is now the need for regulatory alignment

When companies deploy AI features primarily to establish market position rather than solve user problems, they create what might be termed ‘trust debt’ – a technical and social liability that compounds over time. This manifests in several ways, including degraded user experience, increased attack surfaces, and regulatory friction that ultimately impacts system performance and scalability. ... The emerging landscape of AI governance frameworks, from the EU AI Act to ISO 42001, shows an attempt to codify engineering best practices for managing algorithmic systems at scale. These standards address several technical realities, including bias in training data, security vulnerabilities in model inference, and intellectual property risks in data processing pipelines. Organisations implementing robust AI governance frameworks achieve regulatory compliance while adopting proven system design patterns that reduce operational risk. ... The technical implementation of trust requires embedding privacy and security considerations throughout the development lifecycle – what security engineers call ‘shifting left’ on governance. This approach treats regulatory compliance as architectural requirements that shape system design from inception. Companies that successfully integrate governance into their technical architecture find that compliance becomes a byproduct of good engineering practices which, over time, creates a series of sustainable competitive advantages.


The most sustainable data center is the one that’s already built: The business case for a ‘retrofit first’ mandate

From a sustainability standpoint, reusing and retrofitting legacy infrastructure is the single most impactful step our industry can take. Every megawatt of IT load that’s migrated into an existing site avoids the manufacturing, transport, and installation of new chillers, pumps, generators, piping, conduit, and switchgear and prevents the waste disposal associated with demolition. Sectors like healthcare, airports, and manufacturing have long proven that, with proper maintenance, mechanical and electrical systems can operate reliably for 30–50 years, and distribution piping can last a century. The data center industry – known for redundancy and resilience – can and should follow suit. The good news is that most data centers were built to last. ... When executed strategically, retrofits can reduce capital costs by 30–50 percent compared to greenfield construction, while accelerating time to market by months or even years. They also strengthen ESG reporting credibility, proving that sustainability and profitability can coexist. ... At the end of the day, I agree with Ms. Kass – the cleanest data center is the one that does not need to be built. For those that are already built, reusing and revitalizing the infrastructure we already have is not just a responsible environmental choice, it’s a sound business strategy that conserves capital, accelerates deployment, and aligns our industry’s growth with society’s expectations.

Daily Tech Digest - October 19, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How CIOs Can Close the IT Workforce Skills Gap for an AI-First Organization

Deliberately building AI skills among existing talent, rather than searching outside the organization for new hires or leaving skills development to chance, can help develop the desired institutional knowledge and build an IT-resilient workforce. AI-first is a strategic approach that guides the use of AI technology within an enterprise or a unit within it, with the intention of maximizing the benefits from AI. IT organizations must maintain ongoing skills development to be successful as an AI-first organization. ... In developing the future-state competency map, CIOs must include AI-specific skills and competencies, ensuring each role has measurable expectations aligned with the company’s strategic objectives related to AI. CIOs must also partner with HR to design and establish AI literacy programs. While HR leaders are experts in scaling learning initiatives and standardizing tools, CIOs have more insight into foundational AI skills, training, and technical support required in the enterprise. CIOs should regularly review whether their teams’ AI capabilities contribute to faster product launches or improved customer insights. ... Addressing employees’ key concerns is a critical step for any AI change management initiative to be successful. AI is fundamentally changing traditional workplace operating models by democratizing access to technology, generating insights, and changing the relationship between people and technology.


20 Strategies To Strengthen Your Crisis Management Playbook

The regular review and refinement of protocols ensures alignment when a scenario arises. At our company, we centralize contacts, prepare for a range of scenarios and set outreach guidelines. This enables rapid response, timely updates and meaningful support, which safeguards trust and strengthens relationships with employees, stakeholders and clients. ... Unintended consequences often arise when stakeholder expectations are left out of crisis planning. Leaders should bake audience insights into their playbooks early—not after headlines hit. Anticipating concerns builds trust and gives you the clarity and credibility to lead through the tough moments. ... Know when to do nothing. Sometimes the instinct to respond immediately leads to increased confusion and puts your brand even further under the microscope. The best crisis managers know when to stop, see how things play out and respond accordingly (if at all), all while preparing for a variety of scenarios behind the scenes. ... Act like a board of directors. A crisis is not an event; it's a stress test of brand, enterprise and reputation infrastructure and resilience. Crisis plans must align with business continuity, incident response and disaster recovery plans. Marketing and communications must co-lead with the exec team, legal, ops and regulatory to guide action before commercial, brand equity and reputation risk escalates.


Abstract or die: Why AI enterprises can't afford rigid vector stacks

Without portability, organizations stagnate. They have technical debt from recursive code paths, are hesitant to adopt new technology and cannot move prototypes to production at pace. In effect, the database is a bottleneck rather than an accelerator. Portability, or the ability to move underlying infrastructure without re-encoding the application, is ever more a strategic requirement for enterprises rolling out AI at scale. ... Instead of having application code directly bound to some specific vector backend, companies can compile against an abstraction layer that normalizes operations like inserts, queries and filtering. This doesn't necessarily eliminate the need to choose a backend; it makes that choice less rigid. Development teams can start with DuckDB or SQLite in the lab, then scale up to Postgres or MySQL for production and ultimately adopt a special-purpose cloud vector DB without having to re-architect the application. ... What's happening in the vector space is one example of a bigger trend: open-source abstractions as critical infrastructure, with Apache Arrow in data formats, ONNX in ML models, Kubernetes in orchestration, and Any-LLM and similar frameworks in AI APIs. These projects succeed, not by adding new capability, but by removing friction. They enable enterprises to move more quickly, hedge bets and evolve along with the ecosystem. Vector DB adapters continue this legacy, transforming a high-speed, fragmented space into infrastructure that enterprises can truly depend on.
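The abstraction-layer idea is easy to sketch. Below, application code depends only on a small interface, and backends are swappable behind it; the toy in-memory store (with exact nearest-neighbour search) stands in for what could be SQLite in the lab or a cloud vector DB in production. All class and function names are illustrative, not any specific adapter library's API.

```python
# Minimal sketch of a vector-store abstraction layer: application code
# targets one small interface, so backends can be swapped without
# re-architecting the application. Names are illustrative.

from abc import ABC, abstractmethod

class VectorStore(ABC):
    @abstractmethod
    def insert(self, key, vector): ...

    @abstractmethod
    def query(self, vector, top_k=1): ...

class InMemoryStore(VectorStore):
    """Toy backend: exact nearest neighbour by squared distance."""
    def __init__(self):
        self._rows = {}

    def insert(self, key, vector):
        self._rows[key] = vector

    def query(self, vector, top_k=1):
        dist = lambda v: sum((a - b) ** 2 for a, b in zip(v, vector))
        return sorted(self._rows, key=lambda k: dist(self._rows[k]))[:top_k]

def nearest_label(store: VectorStore, probe):
    # Application code: depends only on the interface, not the backend.
    return store.query(probe, top_k=1)[0]

store = InMemoryStore()
store.insert("cat", [1.0, 0.0])
store.insert("dog", [0.0, 1.0])
match = nearest_label(store, [0.9, 0.1])
```

Swapping `InMemoryStore` for a Postgres-backed or cloud-backed implementation changes nothing in `nearest_label`, which is the whole portability argument.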


AWS's New Security VP: A Turning Point for AI Cybersecurity Leadership?

"As we move forward into 2026, the breadth and depth of AI opportunities, products, and threats globally present a paradigm shift in cyber defense," Lohrmann said. He added that he was encouraged by AWS's recognition of the need for additional focus and attention on these cyberthreats. ... "Agentic AI attackers can now operate with a 'reflection loop' so they are effectively self-learning from failed attacks and modifying their attack approach automatically," said Simon Ratcliffe, fractional CIO at Freeman Clarke. "This means the attacks are faster and there are more of them … putting overwhelming pressure on CISOs to respond." ... "I think the CISO's role will evolve to meet the broader governance ecosystem, bringing together AI security specialists, data scientists, compliance officers, and ethics leads," she said, adding cybersecurity's mantra that AI security is everyone's business. "But it demands dedicated expertise," she said. "Going forward, I hope that organizations treat AI governance and assurance as integral parts of cybersecurity, not siloed add-ons." ... In Liebig's opinion, the future of cybersecurity leadership looks less hierarchical than it does now. "As for who owns that risk, I believe the CISO remains accountable, but new roles are emerging to operationalize AI integrity -- model risk officers, AI security architects, and governance engineers," he explained. "The CISO's role should expand horizontally, ensuring AI aligns to enterprise trust frameworks, not stand apart from them."


The Top 5 Technology Trends For 2026

In recent years, we've seen industry, governments, education and everyday folk scrambling to adapt to the disruptive impact of AI. But by 2026, we're starting to get answers to some of the big questions around its effect on jobs, business and day-to-day life. Now, the focus shifts from simply reacting to reinventing and reshaping in order to find our place in this brave, different and sometimes frightening new world.  ... Rather than simply answering questions and generating content, agents take action on our behalf, and in 2026, this will become an increasingly frequent and normal occurrence in everyday life. From automating business decision-making to managing and coordinating hectic family schedules, AI agents will handle the “busy work” involved in planning and problem-solving, freeing us up to focus on the big picture or simply slowing down and enjoying life. ... Quantum computing harnesses the strange and seemingly counterintuitive behavior of particles at the sub-atomic level to accomplish many complex computing tasks millions of times faster than "classic" computers. For the last decade, there's been excitement and hype over their performance in labs and research environments, but in 2026, we are likely to see further adoption in the real world. While this trend might not appear to noticeably affect us in our day-to-day lives, the impact on business, industry and science will begin to take shape in noticeable ways.


How Successful CTOs Orchestrate Business Results at Every Stage

As companies mature, their technical needs shift from building for the present to a long-term vision, strategic partnerships, and leveraging technology to drive business goals. The Strategist CTO combines deep technical expertise with business acumen and a thorough understanding of the customer journey. This leader collaborates with other executives on strategic planning, but always through the lens of where customers are heading, not strictly where technology is going.  ... For large enterprises with complex ecosystems and large customer bases, stability, security, and operational efficiency are paramount. This is where the Guardian CTO safeguards the customer experience through technical excellence. This leader oversees all aspects of technical infrastructure, ensuring the reliability, security, and availability of core technology assets with a clear understanding that every decision directly impacts customer trust. ... While these operational models often align with company growth stages, they aren't rigid. A company's needs can shift rapidly due to market conditions, competitive pressures, or unexpected challenges, and customer expectations can evolve just as quickly. ... The most successful companies create environments where technical leadership evolves in response to changing business needs, empowering technical leaders to pivot their focus from building to strategizing, or from innovating to safeguarding, as circumstances demand.


Financial services seek balance of trust, inclusion through face biometrics advances

Advances in the flexibility of face biometric liveness, deepfake detection and cross-sectoral collaboration represent the latest measures against fraud in remote financial services. A digital bank in the Philippines is integrating iProov’s face biometrics and liveness detection, OneConnect and a partner are entering a sandbox to work on protecting against deepfakes, and an event held by Facephi in Mexico explored the challenges of financial services trying to maintain digital trust while advancing inclusion. ... The Philippine digital bank will deploy advanced liveness detection tools as part of a new risk-based authentication strategy. “Our mission is to uplift the lives of all Filipinos through a secure, trusted, and accessible digital bank for all Filipinos, and that requires deploying resilient infrastructure capable of addressing sophisticated fraud,” said Russell Hernandez, chief information security officer at UnionDigital Bank. “As we shift toward risk-based authentication, we need a flexible and future-ready solution. iProov’s internationally proven ability to deliver ease of use, speed, and high security assurance – backed by reliable vendor support – ensures we can evolve our fraud defenses while sustaining customer trust and confidence.” ... The Mexican government has launched several initiatives to standardize digital identity infrastructure, including Llave MX — a single sign-on platform for public services — and the forthcoming National Digital Identity Document, designed to harmonize verification across sectors.


Why context, not just data, will define the future of AI in finance

Raw intelligence in AI and its ability to crunch numbers and process data is only one part of the equation. What it fundamentally lacks is wisdom, which comes from context. In areas like personal finance, building powerful models with deep domain knowledge is critical. The challenges range from misinterpretation of data to regulatory oversights that directly affect value for customers. That’s why at Intuit, we put “context at the core of AI.” This means moving beyond generic datasets to build specialised Financial Large Language Models (LLMs) trained on decades of anonymised financial expertise. It’s about understanding the interconnected journey of our customers across our ecosystem—from the freelancer managing invoices in QuickBooks to that same individual filing taxes with TurboTax, to them monitoring their financial health on Credit Karma. ... In the age of GenAI, craftsmanship in engineering is being redefined. It’s no longer just about writing every line of code or building models from scratch, but about architecting robust, extensible systems that empower others to innovate. The very soul of engineering is transcending code to become the art of architecture. The measure of excellence is no longer found in the meticulous construction of every model, but in the visionary design of systems that empower domain experts to innovate. With tools like GenStudio and GenUX abstracting complexity, the engineer’s role isn’t diminished but elevated. They evolve from builders of applications to architects of innovation ecosystems. 


The modernization mirage: CIOs must see through it to play the long game

Enterprise architecture, in too many organizations, has been reduced to frameworks: TOGAF, Zachman, FEAF. These models provide structure but rarely move capital or inspire investor trust. Boards don’t want frameworks. They want influence. That’s why I developed the Architecture Influence Flywheel — a practical model I use in board and transformation discussions. It rests on three pivots. Outcomes: Every architectural choice must tie directly to board-level priorities — growth, resilience, efficiency. ... Relationships: CIOs must serve as business-technology translators. Express progress not in technical jargon, but in investor language — return on capital, return on innovation, margin expansion and risk mitigation. ... Visible wins: Influence grows through undeniable demonstrations. A system that cuts onboarding time by 40%, an AI model that reduces fraud losses or an audit process that clears in half the time — these visible wins build momentum. ... Technologies rise and fall. Frameworks evolve. Titles shift. But one principle endures: What leaders tolerate defines their legacy. Playing the long game requires CIOs to ask uncomfortable questions: Will we tolerate AI models we cannot explain to regulators? Will we tolerate unchecked cloud sprawl without financial discipline? Will we tolerate compliance as a box-ticking exercise rather than a growth enabler?


What Is Cybersecurity Platformization?

Cybersecurity platformization is a strategic response to this complexity. It’s the move from a collection of disparate point solutions to a single, unified platform that integrates multiple security functions. Dickson describes it as the “canned integration of security tools so that they work together holistically to make the installation, maintenance and operation easier for the end customer across various tools in the security stack.” ... The most significant hidden cost of a fragmented, multitool security strategy is labor. Managing disconnected tools is a resource strain on an organization, as it requires individuals with specialized skills for each tool. This includes the labor-intensive task of managing API integrations and manually coding “shims,” or integrations to translate data between different tools, which often have separate protocols and proprietary interfaces, Dukes says. Beyond the cost of personnel, there’s the operational complexity.  ... One of the most immediate benefits of adopting a platform approach is cost reduction. This includes not only the reduction in licensing fees but also a reduction in the operational complexity and the number of specialized employees needed. ... Another key benefit is the well-worn concept of a “single pane of glass,” a single dashboard that enables IT security teams to have easier management and reporting. Instead of multiple tools with different interfaces and data formats, a unified platform streamlines everything into a single, cohesive view.
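The "shims" Dukes describes are small translation layers between tools that disagree on field names, formats and scales. The sketch below shows one such hand-written shim with entirely invented field names; the labor cost the article highlights comes from maintaining one of these for every tool pair, which is exactly what a platform's "canned integration" eliminates.

```python
# Sketch of a hand-coded "shim": glue that translates one tool's alert
# format into another's. All field names are invented for illustration.

def edr_to_siem(edr_alert):
    """Translate a hypothetical EDR alert into a hypothetical SIEM event."""
    severity_map = {"low": 1, "medium": 5, "high": 9}
    return {
        "event_time": edr_alert["detected_at"],
        "host": edr_alert["endpoint"]["hostname"],
        "risk_score": severity_map[edr_alert["severity"]],
        "source_tool": "edr",
    }

alert = {
    "detected_at": "2025-10-17T08:30:00Z",
    "endpoint": {"hostname": "ws-114"},
    "severity": "high",
}
event = edr_to_siem(alert)
```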

Daily Tech Digest - October 17, 2025


Quote for the day:

"Listen with curiosity, speak with honesty, act with integrity." -- Roy T. Bennett



AI Agents Transform Enterprise Application Development

There's now discussion about the agent development life cycle and the need to supervise or manage AI agent developers - calling for agent governance and infrastructure changes. New products, services and partnerships announced in the past few weeks support this trend. ... Enterprises were cautious about entrusting public models and agents with intellectual property. But the partnership with Anthropic could make models more trustworthy. "Enterprises are looking for AI they can actually trust with their code, their data and their day-to-day operations," said Mike Krieger, chief product officer at Anthropic. ... Embedding agentic AI within the fabric of enterprise architecture enables organizations to unlock transformative agility, reduce cognitive load and accelerate innovation - without compromising trust, compliance or control - says an IBM report titled "Architecting secure enterprise AI agents with MCP." Developers adopted globally recognized models such as Capability Maturity Model Integration, or CMMI, and CMMI-DEV as paths to improve the software development and maintenance processes. ... Enterprises must be prepared to implement radical process and infrastructure changes to successfully adopt AI agents in software delivery. AI agents must be managed by a central governance framework to enable complete visibility into agents, agent performance monitoring and security.


There’s no such thing as quantum incident response – and that changes everything

CISOs are directing attention to having quantum security risks added to the corporate risk register. It belongs there. But the problem to be solved is not a quick fix, despite what some snake oil salesmen might be pushing. There is no simple configuration checkbox on AWS, Azure, or GCP where you “turn on” post-quantum cryptography (PQC) and then you’re good to go. ... Without significant engagement from developers, QA teams and product owners, the quantum decryption risk will remain in play. You cannot transfer this risk by adding more cyber insurance coverage. The cyber insurance industry itself is facing existential doubt about whether cybersecurity can reasonably be insured against, given the systemic impacts of supply chain attacks that cascade across entire industries. ... The moment when a cryptographically relevant quantum computer comes into existence won’t arrive with fanfare or bombast. Hence, the idea of the silent boom. But by then, it will be too late for incident response. What you should do Monday morning: Start that data classification exercise. Figure out what needs protecting for the long term versus what has a shorter shelf life. In the world of DNS, we have Time To Live (TTL), which declares how long a resolver can cache a response. Think of a “PQC TTL” for your sensitive data, because not everything needs 30-year protection.
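The “PQC TTL” idea can be sketched as a simple classification pass. This is a minimal, hypothetical illustration: the assumed CRQC arrival date, the class names, and the helper are invented for the example, not part of any standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# An assumption for illustration, not a prediction: the date by which a
# cryptographically relevant quantum computer (CRQC) might exist.
CRQC_ASSUMED = date(2035, 1, 1)

@dataclass
class DataClass:
    name: str
    protection_years: int  # the "PQC TTL": how long secrecy must hold

    def needs_pqc_now(self, today: date = None) -> bool:
        today = today or date.today()
        # If data harvested today must stay secret past the assumed CRQC
        # date, it is exposed to harvest-now-decrypt-later attacks.
        expiry = today + timedelta(days=365 * self.protection_years)
        return expiry > CRQC_ASSUMED

inventory = [
    DataClass("marketing cache", 1),
    DataClass("customer PII", 10),
    DataClass("trade secrets", 30),
]

at_risk = [d.name for d in inventory if d.needs_pqc_now(date(2025, 10, 17))]
```

The point of the exercise is triage: short-lived data can wait for routine crypto refresh cycles, while long-lived data needs PQC migration planning now.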


Hackers Use Blockchain to Hide Malware in Plain Sight

At least two hacking groups are using public blockchains to conceal and control malware in ways that make their operations nearly impossible to dismantle, shows research from Google's Threat Intelligence Group. ... The technique, known as EtherHiding, embeds malicious instructions in blockchain smart contracts rather than traditional servers. Since the blockchain is decentralized and immutable, attackers gain what the researchers call a "bulletproof" infrastructure. The development signals an "escalation in the threat landscape," said Robert Wallace, consulting leader at Mandiant, which is part of Google Cloud. Hackers have found a method "resistant to law enforcement takedowns" that can be "easily modified for new campaigns." ... The group over time expanded its architecture from a single smart contract to a three-tier system mimicking a software "proxy pattern." This allows rapid updates without touching the compromised sites. One contract acts as a router, another fingerprints the victim's system and a third holds encrypted payload data and decryption keys. A single blockchain transaction, costing as little as a dollar in network fees, can change lure URLs or encryption keys across thousands of infected sites. The researchers said the threat actor used social engineering tricks like fake Cloudflare verification or Chrome update prompts to persuade victims to run malicious commands.
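The three-tier indirection can be modeled conceptually with plain dictionaries standing in for smart contracts (the tier-two fingerprinting contract is omitted for brevity; all names and URLs are illustrative). The architectural point is that every compromised page dereferences the same router, so one cheap update to the router entry retargets all of them at once.

```python
# Tier 1: a "router" contract holding a pointer to the active payload contract.
router = {"payload_addr": "contract_A"}

# Tier 3: "payload" contracts holding campaign data (here, lure URLs).
payloads = {
    "contract_A": "https://lure.example/v1",
    "contract_B": "https://lure.example/v2",
}

def resolve() -> str:
    # What each compromised page effectively does: follow the router's
    # pointer to the current payload. No page ever needs to change.
    return payloads[router["payload_addr"]]

before = resolve()
router["payload_addr"] = "contract_B"  # one "transaction" updates the router
after = resolve()
```

This is the same proxy-pattern indirection used legitimately for upgradeable smart contracts; the abuse lies in hosting the router on infrastructure no one can take down.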


Everyone’s adopting AI, few are managing the risk

Across industries, many organizations are caught in what AuditBoard calls the “middle maturity trap.” Teams are active, frameworks are updated, and risks are logged, but progress fades after early success. When boards include risk oversight as a standing agenda item and align on shared performance goals, activity becomes consistent and forward-looking. When governance and ownership are unclear, adoption slows and collaboration fades. ... Many enterprises are adopting or updating risk frameworks, but implementation depth varies. The typical organization maps its controls to several frameworks, while leading firms embed thousands of requirements into daily operations. The report warns that “surface compliance” is common. Breadth without depth leaves gaps that only appear during audits or disruptions. Mature programs treat frameworks as living systems that evolve with business and regulatory change. ... The findings show that many organizations are investing heavily in risk management and AI, but maturity depends less on technology and more on integration. Advanced organizations use governance to connect teams and turn data into foresight. AuditBoard’s research suggests that as AI becomes more embedded in enterprise systems, risk leaders will need to move beyond activity and focus on consistency. Those that do will be better positioned to anticipate change and turn risk management into a strategic advantage.


A mini-CrowdStrike moment? Windows 11 update cripples dev environments

The October 2025 cumulative update (KB5066835) addressed security issues in Windows operating systems (OSes), but also appears to have blocked Windows’ ability to talk within itself. Localhost allows apps and services to communicate internally without using internet or external network access. Developers use the function to develop, test, and debug websites and apps locally on a Windows machine before releasing them to the public. ... When localhost stops working, entire application development environments can be impacted or “even grind to a halt,” causing internal processes and services to fail and stop communicating, he pointed out. This means developers are unable to test or run web applications locally. This issue is really about “denial of service,” where tools and processes dependent on internal loopback services break, he noted. Developers can’t debug locally, and automated testing processes can fail. At the same time, IT departments are left to troubleshoot, field an influx of service tickets, roll back patches, and look for workarounds. “This bug is definitely disruptive enough to cause delays, lost productivity, and frustration across teams,” said Avakian. ... This type of issue underscores the importance of quality control and thorough testing by third-party suppliers and vendors before releasing updates to commercial markets, he said. Not doing so can have significant downstream impacts and “erode trust” in the update process while making teams more cautious about patching.
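As a quick triage step, teams hit by this kind of regression can check whether loopback connectivity itself is broken, separately from any one dev server. A minimal sketch (the port number is whatever your local service normally listens on):

```python
import socket

def loopback_reachable(port: int, timeout: float = 1.0) -> bool:
    """Return True if something is listening on 127.0.0.1:port.

    If a dev server is known to be running but this returns False,
    loopback connectivity itself is suspect rather than the app.
    """
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running the same check from a machine without the update installed gives a fast comparison point before opening a service ticket or rolling back the patch.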


How Banks of Every Size Can Put AI to Work, and Take Back Control

For smaller banks and credit unions, the AI conversation begins with math. They want the same digital responsiveness as larger competitors but can’t afford the infrastructure or staffing that traditionally make that possible. The promise of AI, especially low-code and automated implementation, changes that equation. What once required teams of engineers and months of coding can now be deployed out-of-the-box, configured and pushed live in a day. That shift finally brings digital innovation within reach for smaller institutions that had long been priced out of it. But even when self-service tools are available, many institutions still rely on outside help for routine changes or maintenance. For these players, the first question is whether they’re willing or able to take product dev work in-house, even with "AI inside"; the next question is whether they can find partners that can meet them on their own terms. ... For mid-sized players, the AI opportunity centers on reclaiming control. These institutions typically have strong internal teams and clear strategic ideas, yet they remain bound by vendor SLAs that slow innovation. The gap between what they can envision and what they can deliver is wide. AI-driven orchestration tools, especially those that let internal teams configure and launch digital products directly, can help close that gap. By removing layers of technical dependency, mid-sized institutions can move from periodic rollouts to something closer to iterative improvement.


Why your AI is failing — and how a smarter data architecture can fix it

Traditional enterprises operate four separate, incompatible technology stacks, each optimized for different computing eras, not for AI reasoning capabilities. ... When you try to deploy AI across these fragmented stacks, chaos follows. The same business data gets replicated across systems with different formats and validation rules. Semantic relationships between business entities get lost during integration. Context critical for intelligent decision-making gets stripped away to optimize for system performance. AI systems receive technically clean datasets that are semantically impoverished and stripped of meaningful context. ... As organizations begin shaping their enterprise general intelligence (EGI) architecture, critical operational intelligence remains trapped in disconnected silos. Engineering designs live in PLM systems, isolated from the ERP bill of materials. Quality metrics sit locked in MES platforms with no linkage to supplier performance data. Process parameters exist independently of equipment maintenance records. ... Enterprises solving the data architecture challenge gain sustainable competitive advantages. AI deployment timelines are measured in weeks rather than months. Decision accuracy reaches enterprise-grade reliability. Intelligence scales across all business domains. Innovation accelerates as AI creates new capabilities rather than just automating existing processes.


Under the hood of AI agents: A technical guide to the next frontier of gen AI

With agents, authorization works in two directions. First, of course, users require authorization to run the agents they’ve created. But as the agent is acting on the user’s behalf, it will usually require its own authorization to access networked resources. There are a few different ways to approach the problem of authorization. One is with an access delegation algorithm like OAuth, which essentially plumbs the authorization process through the agentic system. ... Agents also need to remember their prior interactions with their clients. If last week I told the restaurant booking agent what type of food I like, I don’t want to have to tell it again this week. The same goes for my price tolerance, the sort of ambiance I’m looking for, and so on. Long-term memory allows the agent to look up what it needs to know about prior conversations with the user. Agents don’t typically create long-term memories themselves, however. Instead, after a session is complete, the whole conversation passes to a separate AI model, which creates new long-term memories or updates existing ones. ... Agents are a new kind of software system, and they require new ways to think about observing, monitoring and auditing their behavior. Some of the questions we ask will look familiar: Whether the agents are running fast enough, how much they’re costing, how many tool calls they’re making and whether users are happy. 
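The post-session memory step described above can be sketched as follows. The `summarize` callable stands in for the separate AI model the passage mentions, and the store layout and merge rule are illustrative assumptions, not any particular framework's API.

```python
# A minimal sketch of long-term memory consolidation for an agent,
# assuming a hypothetical summarize() model call. Names are illustrative.

def update_long_term_memory(store: dict, user_id: str,
                            transcript: list, summarize) -> dict:
    """After a session ends, pass the full transcript to a separate model
    and merge the extracted facts into the user's long-term memory."""
    new_facts = summarize(transcript)   # e.g. {"cuisine": "thai"}
    memory = store.setdefault(user_id, {})
    memory.update(new_facts)            # newer facts overwrite older ones
    return memory

# Example with a stub standing in for the summarization model:
stub = lambda transcript: {"cuisine": "thai", "price": "moderate"}
store = {"u1": {"ambiance": "quiet"}}
update_long_term_memory(store, "u1", ["...session turns..."], stub)
```

A real system would add conflict resolution and retrieval ranking, but the key design choice survives the simplification: the agent itself never writes memories mid-conversation; a separate pass consolidates them afterward.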


Data Is the New Advantage – If You Can Hold On To It

Proprietary data has emerged as one of the most valuable assets for enterprises—and increasingly, the expectation is that data must be stored indefinitely, ready to fuel future models, insights, and innovations as the technology continues to evolve. ... Globally, data architects, managers, and protectors are in uncharted territory. The arrival of generative AI has proven just how unpredictable and fast-moving technological leaps can be – and if there’s one thing the past few years have taught us, it’s that we can’t know what comes next. The only way to prepare is to ensure proprietary data is not just stored but preserved indefinitely. Tomorrow’s breakthroughs – whether in AI, analytics, or some other yet-unimagined technology – will depend on the depth and quality of the data you have today, and how well you can utilize the storage technologies of your choice to serve your data usage and workflow needs. ... The lesson is clear: don’t get left behind, because your competitors are learning these lessons as well. The enterprises that thrive in this next era of digital innovation will be those that recognize the enduring value of their data. That means keeping it all and planning to keep it forever. By embracing hybrid storage strategies that combine the strengths of tape, cloud, and on-premises systems, organizations can rise to the challenge of exponential growth, protect themselves from evolving threats, and ensure they are ready for whatever comes next. In the age of AI, your competitive advantage won’t just come from your technology stack.


Why women are leading the next chapter of data centers

Working her way up through finance and operations into large-scale digital infrastructure, Xiao’s career reflects a steady ascent across disciplines, including senior roles as president of Chindata Group and CFO at Shanghai Wangsu. These roles sharpened her ability to translate high-level strategy into expansion, particularly in the demanding data center sector. ... Today, she shapes BDC’s commercial playbook, which includes setting capital priorities, driving cost-efficient delivery models, and embedding resilience and sustainability into every development decision. In mission-critical industries like data centers, repeatability is a challenge. Every market has unique variables – land, power, water, regulatory frameworks, contractor ecosystems, and community engagement. ... For the next wave of talent, building credibility in the data center industry requires more than technical expertise. Engaging in forums, networks, and industry resources not only earns recognition and respect but also broadens knowledge and sharpens perspective. ... Peer networks within hyperscaler and operator communities, Xiao notes, are invaluable for exchanging insights and challenging assumptions. “Industry conferences, cross-company working groups, government-industry task forces, and ecosystem media engagements all matter. And for bench strength, I value partnerships with local technology innovators and digital twin or AI firms that help us run safer, greener facilities,” Xiao explains.

Daily Tech Digest - October 16, 2025


Quote for the day:

"Don't wait for the perfect moment; take the moment and make it perfect." -- Aryn Kyle



Major network vendors team to advance Ethernet for scale-up AI networking

“AI workloads are re-shaping modern data center architectures, and networking solutions must evolve to meet the growing demands,” wrote Martin Lund, executive vice president of Cisco’s common hardware group, in a blog post about the news. “ESUN brings together AI infrastructure operators and vendors to align on open standards, incorporate best practices, and accelerate innovation in Ethernet solutions for scale-up networking.” ESUN will focus solely on open, standards-based Ethernet switching and framing for scale-up networking—excluding host-side stacks, non-Ethernet protocols, application-layer solutions, and proprietary technologies. The group will expand the development and interoperability of XPU network interfaces and Ethernet switch ASICs for scale-up networks, the OCP stated in a blog: “The initial focus will be on L2/L3 Ethernet framing and switching, enabling robust, lossless, and error-resilient single-hop and multi-hop topologies.” ... “‘Scale-up’ AI fabrics (SAIF) provide high-bandwidth, low-latency physical network interconnectivity and enhanced memory interaction between nearby AI processors,” Gartner wrote. “Current implementations of SAIF are vendor-proprietary platforms, and there are proximity limitations (typically, SAIF is confined to only a rack or row). In most scenarios, Gartner recommends using Ethernet when connecting multiple SAIF systems together. We believe the scale, performance and supportability of Ethernet is optimal.”


Moving Beyond Awareness: How Threat Hunting Builds Readiness

The best defense begins before the first alert. Proactive threat hunting identifies the conditions that allow an attack to form and addresses them early. It moves security from passive observation to a clear understanding of where exposure originates. This move from observation to proactive understanding forms the core of a modern security program: Continuous Threat Exposure Management (CTEM). Instead of a one-time project, a CTEM program provides a structured, repeatable framework to continuously model threats, validate controls, and secure the business. For organizations ready to build this capability, A Practical Guide to Getting Started With CTEM offers a clear roadmap. ... Security Awareness Month reminds us that awareness is an essential step. Yet real progress begins when awareness leads to action. Awareness is only as powerful as the systems that measure and validate it. Proactive threat hunting turns awareness into readiness by keeping attention fixed on what matters most - the weak points that form the basis for tomorrow's attacks. Awareness teaches people to see risk. Threat hunting proves whether the risk still exists. Together they form a continuous cycle that keeps security viable long after awareness campaigns end. This October, the question for every organization is not how many employees completed the training, but how confident you are that your defenses would hold today if someone tested them. Awareness builds understanding. Readiness delivers protection.


Beyond the checklist: Building adaptive GRC frameworks for agentic AI

We must move GRC governance from a periodic, human-driven activity to an adaptive, continuous and context-aware operational capability embedded directly within the agentic AI platform. The first critical step involves implementing real-time governance and telemetry. This means we stop relying solely on endpoint logs that only tell us what the agent did and instead focus on integrating monitoring into the agent’s operating environment to capture why and how. ... The RCV is a structured, cryptographic record of the factors that drove the agent’s choice. It includes not just the data inputs, but also the specific model parameters, the weighted objectives used at that moment, the counterfactuals considered and, crucially, the specific GRC constraints the agent accessed and applied during its deliberation. ... Finally, we must address the “big red button” problem inherent in human-in-the-loop override. For agentic AI, this button cannot be a simple off switch, which would halt critical operations and cause massive disruption. The override must be non-obstructive and highly contextual, as detailed in OECD Principles on AI: Accountability and human oversight. ... We are entering an era where our systems will act on our behalf with little or no human intervention. My priority — and yours — must be to ensure that the autonomy of the AI does not translate into an absence of accountability.


Beyond Productivity: AI’s Role in Creating Hyper-Personalized and Inclusive Employee Experiences

Generative AI enhances employee experiences by analyzing unstructured information, understanding natural language and interpreting intent. Agentic AI takes this further by acting as a centralized, intelligent interface – integrating data sources, maintaining contextual awareness, adapting to individual goals and autonomously executing tasks – minimizing the need for employees to navigate multiple systems or support channels. From onboarding to learning, wellness, feedback, and career progression, it provides a seamless connected experience. Furthermore, AI systems can continuously learn from an employee’s behavior, preferences, and goals to provide real-time, tailored experiences. ... As powerful as AI is, its success in employee experience hinges on how well it aligns with human-centric values. Personalization must never feel intrusive, and inclusivity efforts must be grounded in empathy, transparency, and consent. Enterprises must adopt a responsible AI approach – ensuring fairness, explainability, and ethical data use. Employees should have clarity on how AI systems work, how data is used, and how decisions are made. Moreover, they should always have the option to challenge or override AI-driven outcomes. Leadership, HR, and IT teams must work together to create governance frameworks that reinforce trust – because even the most advanced AI fails if employees don’t feel seen, respected, and safe.


5 ideas to help bridge the genAI skills gap

Instead of focusing narrowly on technical skills, UST has shifted its training toward cultivating adaptable mindsets. “We want to develop curiosity, critical thinking, and creativity — skills that aren’t easily replaced by AI,” said Prasad, stressing that traditional classroom-style learning is insufficient when the competitive environment demands experimentation and rapid application. Employees are given access to a range of AI tools such as GitHub Copilot, Google Gemini, and Cursor, and encouraged to experiment safely in R&D environments. ... Rather than pulling people out of their daily job for separate training sessions, the company embeds training directly into daily workflows at the points where people are likely to be confronted with the need for learning material. Digital adoption platforms like Whatfix provide in-system nudges and tips directly in the tools recruiters use, guiding them in real time. Recruiting system training is integrated within the application. Users don’t know they’re interacting with a digital coach that’s training them to use the system and its AI features, such as candidate sourcing, resume analysis, and client outreach, effectively. According to Busch, the payoff is measurable: “How-to” support questions have been reduced 95% since implementing workflow learning.


Digital transformation works best when co-owned — but only if you do it right

All too often, the CIO has gone in alone to the CFO, CEO, or board to argue the benefits of a digital project in order to obtain funding. A sounder approach is to confirm the need for a digital solution to a particular business problem with the CxO in charge of that business area, and to then go in together to the budget meeting so that both the technology and the business values can be effectively presented. Secondly, there is no reason the IT budget must bear the full costs of a co-owned project. ... A first step for CxOs and CIOs toward a new, unified value creation paradigm is to root out the historical roadblocks that stand in the way of executive cooperation. CxOs must fully engage in digital projects from start to finish, and CIOs must be willing to accept co-star (instead of star) billing in projects. Most CIOs are making this shift in thinking, but CxOs still lag in project participation. Second, CIOs must gain CxO hard-dollar budget commitments for digital projects. When both co-fund and advocate for digital projects in front of the board, CEO, and CFO, both have skin in the game. Third, co-assign executive leadership responsibilities for key project milestones. The CxO might be responsible for defining the business use case and what a specific digital solution must deliver, while the CIO might be responsible for developing the solution.


Australian legislators spar with platforms, each other over age assurance laws

If there’s one thing every platform can agree on when it comes to age assurance, it’s that biometric age verification measures are a good idea – but probably just not for them. The latest to suggest that maybe they aren’t subject to the law are TikTok and Snapchat. The companies have reportedly made the case to Australia’s eSafety Commissioner that there are potential legal workarounds to Australia’s incoming social media regulations, which will prohibit users under 16 from having accounts. ... “We’re doing these things, ultimately, for the good of young people in Australia. It will span television, radio, digital. There will be some on billboards near schools around the country. They’ll see it on TV. They’ll see it online. They’ll see it, ironically, on social media, because until the 10th of December, it is legal for kids to be on social media. And if that’s where they are, that’s where we need to talk to them about what this means and why we’re doing it.” ... There is, in questioning from Senator David Shoebridge of the Australian Greens, an apparent desire to assign blame to age verification providers. He argues that Australia’s privacy laws aren’t yet ready to accommodate such data collection, in that Australia’s 1988 Privacy Act doesn’t include requirements for the deletion of data. He asks about workarounds, like masks and VPNs.


5 Must-Follow Rules of Every Elite SOC: CISO’s Checklist

Even the best analysts can’t detect everything alone. When communication breaks down and teams work in silos, critical context slips away; alerts are missed, work gets repeated, and investigations slow to a crawl. That’s why collaboration has become a core part of modern SOC performance. Inside the ANY.RUN sandbox, the Teamwork feature lets analysts join the same live workspace, share results in real time, and coordinate across roles without switching tools. Team leads can assign tasks, monitor progress, and track productivity; all from a single interface that keeps the team aligned, no matter the time zone. ... Every SOC knows the feeling; too many alerts, too many clicks, not enough time. Analysts lose hours on repetitive actions: opening files, running scripts, clicking through pop-ups, or solving CAPTCHAs just to trigger hidden payloads. With Automated Interactivity inside the ANY.RUN sandbox, all those steps happen automatically. The system opens malicious links hidden behind QR codes, interacts with fake installers, solves CAPTCHAs, and performs other routine actions; no human input needed. The sandbox handles these interactions on its own, exposing every stage of the attack chain in a fraction of the time. ... Even the best detection tools miss things. False negatives happen all the time; a file marked “safe” can still hide malicious behavior deep in its code or trigger only under specific conditions.


Identifying risky candidates: Practical steps for security leaders

Today’s fraudsters and malicious insiders often leave digital breadcrumbs outside an organization’s direct visibility. Hiring teams cannot connect those breadcrumbs on their own, and they should partner with the security team to surface hidden affiliations, past fraudulent activities, or concerning behavioral patterns as a part of the overall candidate assessment. ... Outside-the-firewall checks are especially important in a remote or hybrid work environment where face-to-face verification is limited. The practical takeaway is that companies need to broaden their visibility: the more you combine traditional HR processes with external digital risk signals and collaborate across internal teams, the harder it becomes for a fraudulent candidate to work within your company undetected. ... Employees under stress or facing job insecurity may become more prone to misconduct, either through negligence or malice. Those with declining performance reviews, who are facing disciplinary action, or who have resisted security upgrades are worth closer scrutiny. Employees who give notice of resignation should be keenly watched for unauthorized activity. ... The definition of insider threat is shifting. Where once the focus was on accidental misconfigurations or negligence, today it increasingly includes malicious acts, fraud, and hybrid cases where dissatisfaction or personal pressures drive risky behavior.


CISO Conversations: Are Microsoft’s Deputy CISOs a Signpost to the Future?

Microsoft may be unique in its size and complexity. But the difficulties faced by its CISO, Igor Tsyganskiy, are the same as those faced by all CISOs – just writ much larger. The expansion of the CISO role from governance (security) to include compliance (legal), internal app and external product development (engineering), integration with business leaders (business knowledge and communication skills), artificial intelligence (data science) and more, implies that the solution adopted by Tsyganskiy should be considered by all CISOs. ... It is encouraging that both top Microsoft dCISOs believe that such career success can be achieved by anyone with the right attitude. “Personally, I like to understand technology to a deep level. But it isn’t absolutely essential,” explains Russinovich. “You can delegate things, just like Igor is delegating his need for deep understanding of everything to a pool of dCISOs. Some level of technical understanding will always be crucial, because otherwise you’re just completely disconnected. But I think you can be an effective CISO without being as technically deep as I personally like to be.” Johnson agrees that you can have a successful career in cyber without prior cyber qualifications. “You need to have the aptitude. You need to be willing to learn every day. You need to be willing to accept what you don’t know, and you need to network,” she says.