Daily Tech Digest - December 16, 2025


Quote for the day:

"Worry less, smile more. Don't regret, just learn and grow." -- @Pilotspeaker


The battle for agent connectivity: Can MCP survive the enterprise?

"MCP is the UI for agents. The future of asking ChatGPT to book an Uber and have a pizza available when you arrive at the hotel only works if we have the connectivity," said Dag Calafell III, director of Technology Innovation at MCA Connect, an IT consultancy for manufacturers. But while seamless connectivity might be the Holy Grail for consumer apps, critics argue that it is irrelevant -- or even dangerous -- for the enterprise. ... Notably, MCP has significant backing from prominent companies, including Google, OpenAI, Microsoft and its creator, Anthropic. Indeed, Calafell argued that while there are competitors out there, "MCP is winning" precisely because it has seen significant adoption by large software providers. Still, MCP clearly has significant issues -- mostly because it's in its infancy. MCP's rapidly evolving specification, uneven tooling, unclear security and governance controls, and lack of standardized memory, debugging, and orchestration make it better for experimentation than reliable enterprise use today. ... "There is room to innovate with a security-first 'MCP-like' standard that is resource aware, with trusted catalogues, privileges, scopes, etc. These would either be built on top of MCP, a sort of MCP v2, or introduced as part of a new protocol," said Liav Caspi, co-founder and CTO at Legit Security. And, of course, there remains the possibility that the AI industry will take an entirely different direction.
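For readers unfamiliar with the wire format: MCP is built on JSON-RPC 2.0, with methods such as `tools/list` and `tools/call`. A minimal sketch of the request shapes an agent client would emit; the `book_ride` tool and its arguments are invented for illustration, not part of any real server.

```python
import json

def make_tools_list_request(request_id: int) -> str:
    """Build a JSON-RPC 2.0 request asking an MCP server for its tools."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

def make_tool_call_request(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request invoking a named tool with arguments."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical ride-booking tool: the connectivity scenario from the quote.
req = make_tool_call_request(2, "book_ride", {"pickup": "airport", "dropoff": "hotel"})
print(req)
```

The point of the sketch is that "agent connectivity" is, at the wire level, just a standardized request envelope; everything contentious (trust, scopes, catalogues) lives around it.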


Digital Twin in Railways: A Practical Solution to Managing Complex Rail Systems

In the context of railways, digital twins are being deployed to improve asset lifecycle management, predictive maintenance, and infrastructure planning. By integrating inputs from IoT devices and advanced analytics platforms, these models help engineers monitor structural health, detect anomalies, and plan maintenance before failures occur. ... As the scale and complexity of rail networks continue to grow, the use of digital twins offers a unified, comprehensive view of interconnected assets, which empowers rail operators with faster decision-making and better coordination across departments. This technology is gradually becoming a core component of smart railway ecosystems. ... The architecture of a digital twin in railway systems is built upon the integration of multiple digital technologies, including Building Information Modelling (BIM), the Internet of Things (IoT), Geographic Information Systems (GIS), and data analytics platforms. Together, these technologies create a unified framework that connects the physical and digital environments of railway infrastructure and operations. ... The integration of operational data, including train movements, energy consumption, and passenger flows, allows operators to simulate different scenarios and optimise timetables, headways, and energy use. In dense networks such as urban metro systems, this contributes to improved punctuality and efficient energy utilisation.
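The anomaly-detection step described above can be sketched as a rolling statistical check over an IoT sensor stream. This is a toy trailing-window z-score; production digital twins use far richer models, and the vibration readings below are invented.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the trailing window.

    A toy stand-in for the structural-health monitoring a railway
    digital twin performs on sensor streams (e.g. track vibration,
    in arbitrary units): anything more than `threshold` standard
    deviations from the recent mean is reported.
    """
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady vibration levels with one sudden spike at index 8.
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 9.0, 1.0]
print(detect_anomalies(stream))  # [8]
```

In practice the flagged index would feed a maintenance-planning workflow rather than a print statement, which is exactly the "detect anomalies, plan maintenance before failures occur" loop the article describes.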


Stop mimicking and start anchoring

It’s a fundamental truth that most CIOs are ignoring in their rush to emulate Big Tech playbooks. The result is a systematic misallocation of resources based on a fundamental misunderstanding of how value creation works across industries. ... the strategic value of IT should be measured by how effectively it addresses industry-specific value creation. Different industries have vastly different technology intensity and value-creation dynamics. In our view, CIOs must therefore resist trend-driven decisions and view IT investment through the lens of their industry’s value creation to sharpen their competitive edge. To understand why IT strategies diverge across industries shaped by sectoral realities and maturity differences, we need to examine how business models shape the role of technology. ... funding business outcomes rather than chasing technology fads is easier said than done. It’s difficult to unravel the maze created by the relentless march of technological hype versus the grounded reality of business. But the role of IT is not universal; its business relevance changes from one industry to another. ... Long-term value from emerging technologies comes from grounded application, not blind adoption. In the race to transform, the wisest CIOs will be those who understand that the best technology decisions are often the ones that honour, rather than abandon, the fundamental nature of their business. The future belongs not to those who adopt the most tech, but to those who adopt the right tech for the right reasons.


Build vs buy is dead — AI just killed it

Something fundamental has changed: AI has made building accessible to everyone. What used to take weeks now takes hours, and what used to require fluency in a programming language now requires fluency in plain English. When the cost and complexity of building collapse this dramatically, the old framework goes down with them. It’s not build versus buy anymore. It’s something stranger that we haven't quite found the right words for. ... And it's not some future state. This is already happening. Right now, somewhere, a customer rep is using AI to fix a product issue they spotted minutes ago. Somewhere else, a finance team is prototyping their own analytical tools because they've realized they can iterate faster than they can write up requirements for engineering. Somewhere, a team is realizing that the boundary between technical and non-technical was always more cultural than fundamental. The companies that embrace this shift will move faster and spend smarter. They’ll know their operations more deeply than any vendor ever could. They'll make fewer expensive mistakes, and buy better tools because they actually understand what makes tools good. The companies that stick to the old playbook will keep sitting through vendor pitches, nodding along at budget-friendly proposals. They’ll debate timelines, and keep mistaking professional decks for actual solutions. Until someone on their own team pops open their laptop and says, “I built a version of this last night. Want to check it out?”


Quantum Tech Hits Its “Transistor Moment,” Scientists Say

“This transformative moment in quantum technology is reminiscent of the transistor’s earliest days,” said lead author David Awschalom, the Liew Family Professor of molecular engineering and physics at the University of Chicago, and director of the Chicago Quantum Exchange and the Chicago Quantum Institute. “The foundational physics concepts are established, functional systems exist, and now we must nurture the partnerships and coordinated efforts necessary to achieve the technology’s full, utility-scale potential. How will we meet the challenges of scaling and modular quantum architectures?” ... Although advanced prototypes have demonstrated system operation and public cloud access, their raw performance remains early in development. For example, many meaningful applications, including large-scale quantum chemistry simulations, could require millions of physical qubits with error performance far beyond what is technologically viable today. ... “While semiconductor chips in the 1970s were at technology readiness level 9 (TRL 9) for that time, they could do very little compared with today’s advanced integrated circuits,” he said. “Similarly, a high TRL for quantum technologies today does not indicate that the end goal has been achieved, nor does it indicate that the science is done and only engineering remains. Rather, it reflects that a significant, yet relatively modest, system-level demonstration has been achieved—one that still must be substantially improved and scaled to realize the full promise.”


Before you build your first enterprise AI app

Model weights are becoming undifferentiated heavy lifting, the boring infrastructure that everyone needs but no one wants to manage. Whether you use Anthropic, OpenAI, or an open weights model like Llama, you are getting a level of intelligence that is good enough for 90% of enterprise tasks. The differences are marginal for a first version. The “best” model is usually just the one you can actually access securely and reliably. ... We used to obsess over the massive cost of training models. But for the enterprise, that is largely irrelevant. AI is all about inference now, or the application of knowledge to power applications. In other words, AI will become truly useful within the enterprise as we apply models to governed enterprise data. The best place to build up your AI muscle isn’t with some moonshot agentic system. It’s a simple retrieval-augmented generation (RAG) pipeline. What does this mean in practice? Find a corpus of boring, messy documents, such as HR policies, technical documentation, or customer support logs, and build a system that allows a user to ask a question and get an answer based only on that data. This forces you to solve the hard problems that actually build a moat for your company. ... When you build your first application, design it to keep the human in the loop. Don’t try to automate the entire process. Use the AI to generate the first draft of a report or the first pass at a SQL query, and then force a human to review and execute it. 
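The RAG starting point the passage recommends can be sketched in a few lines. This toy retriever ranks documents by crude word overlap rather than embeddings, and the HR-policy corpus is invented, but the pipeline shape is the one described: retrieve first, then constrain the model to answer only from the retrieved context.

```python
def retrieve(question: str, corpus: dict, k: int = 2) -> list:
    """Rank documents by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(question: str, corpus: dict) -> str:
    """Assemble the grounded prompt that would be sent to the model."""
    context = "\n".join(corpus[d] for d in retrieve(question, corpus))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say so.\n"
        f"Context:\n{context}\nQuestion: {question}"
    )

# Hypothetical corpus of "boring, messy documents".
docs = {
    "pto": "Employees accrue 20 days of paid time off per year.",
    "vpn": "Remote staff must connect through the corporate VPN.",
}
print(build_prompt("How many days of paid time off do employees get?", docs))
```

Swapping the word-overlap scorer for embedding similarity, and the print for an actual model call, is precisely where the hard, moat-building problems (chunking, permissions, evaluation) live.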


Cloudflare reveals AI surge & Internet ‘bot wars’ in 2025

Cloudflare reported that use of AI models and AI crawling activity increased sharply. It said crawling for model training accounted for the majority of AI crawler traffic during the year. Training-related crawlers generated traffic that reached as much as seven to eight times the level of retrieval-augmented generation and search crawlers at peak. Traffic from training crawlers was also as much as 25 times higher than AI crawlers tied to direct user actions. The company said Meta’s llama-3-8b-instruct model was the most widely used on its network. It was used by more than three times as many accounts as the next most popular models from providers such as OpenAI and Stability AI. Cloudflare added that Google’s crawling bot remained the dominant automated actor on the Internet. It said Googlebot’s crawl volume exceeded that of all other leading AI bots by a wide margin and was the largest single source of automated traffic it observed. ... Cloudflare reported a notable shift in the sectors that face the highest volume of cyber attacks. Civil society and non-profit organisations became the most attacked group for the first time. The company linked this trend to the sensitivity and financial value of the data held by such organisations. This includes personal information about donors, volunteers and beneficiaries. Cloudflare’s data also showed changes in the causes of major Internet outages. 


Who Owns AI Risk? Why Governance Begins with Architecture

But as AI systems grow more complex, so do their risks. Bias, opacity, data misuse, model drift, or even overreliance on AI outputs can all cause serious business, ethical, and reputational damage. This raises an uncomfortable question: who actually owns the risk of AI? ... AI doesn’t live in isolation. It consumes enterprise data, depends on cloud services, interacts with APIs, and influences real business processes. Governance, therefore, can’t rely on policies alone; it must be designed, structured, and embedded into the architecture itself. For instance, companies like Microsoft and Google have embedded AI governance directly into their architectural blueprints, creating internal AI Ethics and Risk Committees that review model design before deployment. This proactive structure ensures compliance and builds trust long before a model reaches production. ... In other words, AI governance is not a department; it’s an ecosystem of shared responsibility. Enterprise Architects connect the dots, Business Owners set the direction, Data Scientists implement, and Governance Boards oversee. But the real maturity comes when everyone in the organization, from the C-suite to the operational level, understands that AI is a shared asset and a shared risk. ... Modern enterprise architecture is no longer only about connecting systems. It’s about connecting responsibility. The moment artificial intelligence becomes part of the business fabric, architecture must evolve to ensure that governance isn’t something external or reactive; it’s embedded in the very design of every AI-enabled solution.


The 5 power skills every CISO needs to master in the AI era

According to the World Economic Forum’s Future of Jobs Report, nearly 40% of core job skills will change by 2030, driven primarily by AI, data and automation. For security professionals, this means that expertise in network defense, forensics and patching — while still essential — is no longer enough to create value. The real impact comes from how we interpret, communicate and apply what AI enables. ... The biggest myth in security is that technical mastery equals longevity. In truth, the more we automate, the more we value human differentiation. Success in the next decade won’t depend on how much code you can write — but on how effectively you can connect, translate and lead across systems and silos. When I look at the most resilient organizations today, they share one trait: They see cybersecurity not as a control function, but as a strategic enabler. And their leaders? They’re fluent in both algorithms and empathy. The future of cybersecurity belongs to those who build bridges — not just firewalls. Cybersecurity is no longer a war between humans and machines — it’s a collaboration between both. The organizations that succeed will be the ones that combine AI’s precision with human empathy and creative foresight. As AI handles scale, leaders must handle meaning. And that’s the true essence of power skills. The future of cybersecurity belongs to those who can blend AI’s precision with human expertise — and lead with both.


Manufacturing is becoming a test bed for ransomware shifts

“Manufacturing depends on interconnected systems where even brief downtime can stop production and ripple across supply chains,” said Alexandra Rose, Director of Threat Research, Sophos Counter Threat Unit. “Attackers exploit this pressure: despite encryption rates falling to 40%, the median ransom paid still reached $1 million. While half of manufacturers stopped attacks before encryption, recovery costs average $1.3 million and leadership stress remains high. Layered defenses, continuous visibility, and well-rehearsed response plans are essential to reduce both operational impact and financial risk,” Rose continued. Teams were able to stop attacks before encryption in a larger share of cases, which likely contributed to the decline. Early detection helped reduce disruption, although strong detection did not guarantee a smooth recovery. ... IT and security leaders in manufacturing see progress in some areas but ongoing gaps in others. Detection appears to be improving. Recovery is becoming steadier. Payment rates are declining. But operational weaknesses persist. Skills shortages, aging protections, and limited visibility into vulnerabilities continue to contribute to compromises. These factors shape outcomes as much as attacker capability. The findings also show a need for stronger internal support. Security teams are absorbing organizational and emotional strain that can affect long term performance. Manufacturing operations depend on stable systems, and teams cannot maintain stability without workloads they can manage.

Daily Tech Digest - December 14, 2025


Quote for the day:

“It is never too late to be what you might have been.” -- George Eliot


Six questions to ask when crafting an AI enablement plan

As we near the end of 2025, there are two inconvenient truths about AI that every CISO needs to take to heart. Truth #1: Every employee who can is using generative AI tools for their job. Even when your company doesn’t provide an account for them, even when your policy forbids it, even when the employee has to pay out of pocket. Truth #2: Every employee who uses generative AI will provide (or likely already has provided) this AI with internal and confidential company information. ... In the case of AI, this refers to the difference between the approved business apps that are trusted to access company data and the growing number of untrusted and unmanaged apps that have access to that data without the knowledge of IT or security teams. Essentially, employees are using unmonitored devices, which can hold any number of unknown AI apps, and each of those apps can introduce a whole lot of risk to sensitive corporate data. ... Simply put, organizations cannot afford to wait any longer to get a handle on AI governance. ... So now, the job is to craft an AI enablement plan that promotes productive use and throttles reckless behaviors. ... Think back to the mid‑2000s, when SaaS crept into the enterprise through expense reports and project trackers. IT tried to blacklist unvetted domains, finance balked at credit‑card sprawl, and legal wondered whether customer data belonged on “someone else’s computer.” Eventually, we accepted that the workplace had evolved, and SaaS became essential to modern business.
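The gap between trusted, approved apps and unmanaged ones can be made concrete with a discovery pass over egress logs. Everything here is invented for illustration: the log format, the domains, and the substring heuristic. Real shadow-IT discovery tools use curated application catalogues rather than string matching.

```python
APPROVED_AI_APPS = {"chat.company-approved.ai"}  # illustrative allow-list

def shadow_ai_domains(proxy_log_lines):
    """Pull AI-looking destinations out of proxy logs and flag unapproved ones.

    Each log line is assumed to look like "user domain". Any destination
    that resembles an AI service but is not on the allow-list is a
    candidate shadow AI app for the enablement plan to address.
    """
    hits = set()
    for line in proxy_log_lines:
        domain = line.split()[1]
        looks_like_ai = "ai" in domain or "gpt" in domain
        if looks_like_ai and domain not in APPROVED_AI_APPS:
            hits.add(domain)
    return sorted(hits)

log = [
    "alice chat.company-approved.ai",
    "bob freegpt.example.com",
    "carol notes.example.com",
]
print(shadow_ai_domains(log))  # ['freegpt.example.com']
```

The output is the raw material of an enablement plan: a list of what employees actually use, so governance can approve, replace, or throttle rather than pretend the usage does not exist.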


Why most enterprise AI coding pilots underperform (Hint: It's not the model)

When organizations introduce agentic tools without addressing workflow and environment, productivity can decline. A randomized control study this year showed that developers who used AI assistance in unchanged workflows completed tasks more slowly, largely due to verification, rework and confusion around intent. The lesson is straightforward: autonomy without orchestration rarely yields efficiency. ... Security and governance, too, demand a shift in mindset. AI-generated code introduces new forms of risk: unvetted dependencies, subtle license violations and undocumented modules that escape peer review. Mature teams are beginning to integrate agentic activity directly into their CI/CD pipelines, treating agents as autonomous contributors whose work must pass the same static analysis, audit logging and approval gates as any human developer. GitHub’s own documentation highlights this trajectory, positioning Copilot Agents not as replacements for engineers but as orchestrated participants in secure, reviewable workflows. ... Under the hood, agentic coding is less a tooling problem than a data problem. Every context snapshot, test iteration and code revision becomes a form of structured data that must be stored, indexed and reused. As these agents proliferate, enterprises will find themselves managing an entirely new data layer: one that captures not just what was built, but how it was reasoned about.
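Treating an agent as a contributor subject to the same gates as a human can be sketched as an explicit merge policy. The gate criteria and the dependency allow-list below are illustrative, not any vendor's actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class AgentChange:
    """A change submitted by a coding agent into the CI/CD pipeline."""
    author: str
    static_analysis_passed: bool
    new_dependencies: list = field(default_factory=list)
    human_approved: bool = False

ALLOWED_DEPENDENCIES = {"requests", "pydantic"}  # illustrative allow-list

def gate(change: AgentChange):
    """Apply the same merge gates to agent work as to human work.

    Returns (passes, reasons): static analysis, a dependency allow-list,
    and a human approval check, mirroring the controls named in the text.
    """
    reasons = []
    if not change.static_analysis_passed:
        reasons.append("static analysis failed")
    unvetted = [d for d in change.new_dependencies if d not in ALLOWED_DEPENDENCIES]
    if unvetted:
        reasons.append(f"unvetted dependencies: {unvetted}")
    if not change.human_approved:
        reasons.append("missing human approval")
    return (not reasons, reasons)

change = AgentChange("coding-agent", True, ["leftpad"], human_approved=False)
ok, why = gate(change)
print(ok, why)  # blocked: unvetted dependency plus no human approval
```

The design choice worth noting is that the agent's identity appears in the record like any author's, so the audit trail the article calls for falls out of the same data structure.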


Enabling small language models to solve complex reasoning tasks

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a collaborative approach where an LLM does the planning, then divvies up the legwork of that strategy among smaller ones. Their method helps small LMs provide more accurate responses than leading LLMs like OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as o1, while being more efficient than both. Their framework, called “Distributional Constraints by Inference Programming with Language Models” (or “DisCIPL”), has a large model steer smaller “follower” models toward precise responses when writing things like text blurbs, grocery lists with budgets, and travel itineraries. ... You may think that larger-scale LMs are “better” at complex prompts than smaller ones when it comes to accuracy and efficiency. DisCIPL suggests a surprising counterpoint for these tasks: If you can combine the strengths of smaller models instead, you may just see an efficiency bump with similar results. The researchers note that, in theory, you can plug in dozens of LMs to work together in the DisCIPL framework, regardless of size. In writing and reasoning experiments, they went with GPT-4o as their “planner LM,” which is one of the models that helps ChatGPT generate responses. 
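The planner/follower division of labour can be sketched abstractly. The stubs below stand in for actual language models and hard-code a decomposition, so this shows only the orchestration shape, not the DisCIPL inference machinery itself.

```python
def planner(task: str) -> list:
    """Stand-in for the large 'planner' LM: split a task into subtasks.

    A real planner (GPT-4o in the researchers' experiments) would generate
    these constraints itself; here they are hard-coded for illustration.
    """
    if "grocery" in task:
        return ["list staple items", "price each item", "check total against budget"]
    return [task]

def follower(subtask: str) -> str:
    """Stand-in for a small 'follower' LM handling one constrained step."""
    return f"[done] {subtask}"

def planner_follower_run(task: str) -> list:
    """The planner decomposes; followers execute each piece independently."""
    return [follower(s) for s in planner(task)]

print(planner_follower_run("grocery list with budget"))
```

The efficiency argument in the article maps onto this shape directly: the expensive model runs once to plan, while the cheap models do the repeated per-step work.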


Key trends accelerating Industrial Secure Remote Access (ISRA) Adoption

As essential maintenance and diagnostic activities continue to shift toward remote and digital execution, they become exposed to cyber risks that were not present when plants, fleets, and factories operated as isolated, closed systems. Compounding the challenge, many industrial organizations still lack the expertise and skill sets to select and operate the proper technologies that establish secure remote connections efficiently and securely. This, unfortunately, results in operational delays and slower response in critical or emergency situations. Industrial Cyber emphasizes that controlled, identity-bound, and fully auditable access to critical tasks is key to ensuring secure remote access functions as an operational and business enabler—without introducing new pathways for malicious actors. ... Compounding the risk, OT environments frequently rely on legacy hardware that lacks modern encryption capabilities, leaving these connections especially vulnerable. By centralizing access governance, securely managing vendor credentials, streamlining access-request workflows, and maintaining consistent audit trails, industrial organizations can regain control over third-party access. ... Industrial Cyber recognizes two solutions from SSH. 1) PrivX OT is purpose-built for industrial environments. The solution provides passwordless, keyless, and just-in-time industrial secure remote access using short-lived certificates and micro-segmentation to reduce risk. 2) NQX delivers quantum-safe, high-speed network encryption for site-to-site connectivity.
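The just-in-time, short-lived access model mentioned above reduces to one core invariant: a grant that expires on its own, leaving no standing credential behind. This sketch models only the expiry logic; real products such as PrivX OT use short-lived certificates and much more. The 15-minute TTL and the names are illustrative.

```python
import time

def issue_credential(user, target, ttl_seconds=900, now=None):
    """Issue a short-lived access grant (just-in-time access sketch).

    The grant names a user and a target system and carries its own
    expiry; 900 s (15 minutes) is an illustrative default TTL.
    """
    now = time.time() if now is None else now
    return {"user": user, "target": target, "expires_at": now + ttl_seconds}

def is_valid(credential, now=None):
    """A grant is honoured only until expiry; no standing access remains."""
    now = time.time() if now is None else now
    return now < credential["expires_at"]

cred = issue_credential("vendor-tech", "plc-7", ttl_seconds=900, now=1000.0)
print(is_valid(cred, now=1500.0))   # True: within the 15-minute window
print(is_valid(cred, now=2000.0))   # False: the grant has expired
```

Identity-bound and auditable access, as the article puts it, then amounts to logging each issuance and validation of such grants rather than managing long-lived vendor passwords.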


Navigating AI Liability: What Businesses That Utilize AI Need to Know

Cybercriminals can now use generative AI to create extremely convincing deepfakes. These deepfakes can then be used for corporate espionage, identity theft and phishing scams. AI software may end up automatically aggregating and analyzing huge amounts of data from multiple sources. This can increase privacy invasion risks when comprehensive profiles of people are compiled without their awareness or consent. AI systems that experience glitches or malfunctions, permit unauthorized access, or lack robust security could expose sensitive data. ... It is risky for your business to publish AI-generated content because AI models are trained on vast amounts of copyrighted material. The models thus end up not always creating original material, and sometimes create material which is identical to or extremely similar to copyrighted content. “It was the AI’s fault” will not be a valid argument in court if this happens to your business. Ignorance is not a defense in a copyright infringement claim. ... Content that is fully generated by AI has no copyright protection. AI-generated content that is significantly edited by humans may receive copyright protection, but the situation is murky. Original content that is created by humans and is then slightly edited or optimized by AI will usually receive full copyright protection. A lot of businesses now document the process of content creation to prove that humans created the content and preserve copyright protection.


When the Cloud Comes Home: What DBAs Need to Know About Cloud Repatriation

One of the main drivers for cloud repatriation is cost. Early cloud migrations were often justified by projected savings because there would be no more hardware to maintain. Furthermore, the cloud promised flexible scaling and pay-as-you-go pricing. Nevertheless, for many enterprises, those savings have proven elusive. Data-intensive workloads, in particular, can rack up significant cloud bills. Every I/O operation, network transfer, and storage request adds up. When workloads are steady and predictable, the cloud’s on-demand elasticity can actually become more expensive than on-prem capacity. DBAs, who often have a front-row seat to performance and utilization metrics, can play a crucial role in identifying when cloud costs are out of alignment with business value. ... In highly regulated industries, compliance concerns are another driver. Regulations such as HIPAA, PCI-DSS, GDPR and more, require your applications and the data they access to be secure and controlled. Organizations may find that managing sensitive data in the cloud introduces risk, especially when data residency, auditability, or encryption requirements evolve. Repatriating workloads can restore a sense of control and predictability—key traits valued by DBAs. ... Today’s computing needs demand an IT architecture that embraces the cloud, but also on premises workloads, including the mainframe. Remember, data gravity attracts applications to where the data resides. 
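A DBA can make the "steady, predictable workload" argument concrete with back-of-envelope arithmetic. The rates, volumes, and on-prem figures below are placeholders, not any provider's real pricing.

```python
def monthly_cloud_cost(io_ops_millions, storage_gb, egress_gb,
                       io_rate=0.20, storage_rate=0.023, egress_rate=0.09):
    """Back-of-envelope monthly cloud bill for a data workload.

    Every I/O operation, storage byte, and network transfer adds up,
    as the article notes; the per-unit rates here are illustrative.
    """
    return (io_ops_millions * io_rate
            + storage_gb * storage_rate
            + egress_gb * egress_rate)

def months_to_break_even(onprem_capex, onprem_monthly_opex, cloud_monthly):
    """How many months until repatriated hardware pays for itself."""
    saving = cloud_monthly - onprem_monthly_opex
    if saving <= 0:
        return None  # the cloud is cheaper for this workload; stay put
    return onprem_capex / saving

# A steady workload: 500M I/O ops, 20 TB stored, 5 TB egress per month.
cloud = monthly_cloud_cost(io_ops_millions=500, storage_gb=20_000, egress_gb=5_000)
print(round(cloud, 2), months_to_break_even(24_000, 300, cloud))
```

The useful property of the model is the `None` branch: repatriation is only worth arguing for when the monthly saving actually amortizes the hardware, which is the alignment check the article says DBAs are well placed to run.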


SaaS price hikes put CIOs’ budgets in a bind

Subscription prices from major SaaS vendors have risen sharply in recent months, putting many CIOs in a bind as they struggle to stay within their IT budgets. ... While inflation may have driven some cost increases in past months, rates have since stabilized, meaning there are other factors at play, Tucciarone says. Vendors are justifying subscription price hikes with frequent product repackaging schemes, consumption-based subscription models, regional pricing adjustments, and evolving generative AI offerings, he adds. “Vendors are rationalizing this as the cost of innovation and gen AI development,” he says. ... SaaS data platforms fall into a similar category as other mission-critical applications, Aymé adds, because the cost of moving an organization’s data can be prohibitively expensive, in addition to the price of a new SaaS tool. Kunal Agarwal, CEO and cofounder of data observability platform Unravel Data, also pointed to price increases for data-related SaaS tools. Data infrastructure costs, including cloud data warehouses, lakehouses, and analytics platforms, have risen 30% to 50% in the past year, he says. Several factors are driving cost increases, including the proliferation of computing-intensive gen AI workloads and a lack of visibility into organizational consumption, he adds. “Unlike traditional SaaS, where you’re paying for seats, these platforms bill based on consumption, making costs highly variable and difficult to predict,” Agarwal says.


How to simplify enterprise cybersecurity through effective identity management

“It is challenging for a lot of organizations to get a complete picture of what their assets are and what controls apply to those assets,” Persaud says. He explains that Deloitte’s identity solution assisted the customer in connecting users with the assets they utilized. As they discovered these assets, they were able to fine-tune the security controls that were applied to each in a more refined fashion. “If the system is going to [process] financial data and other private information, we need to put the right controls in place on the identity side,” he says. “We’ve been able to bring those two pieces together by correlating discovery of assets with discovery of identity and lining that up with controls from the IT asset management system.” ... “If you think from a broader risk management perspective, this has been fundamental to our security model,” he says. The ability to simply track the locations of employees and assign risk accordingly is a significant advancement in risk monitoring for a company growing its international presence. The company looks out for instances of impossible travel: if an employee has signed in from one location and then from another, distant location that they could not possibly have reached in the intervening time, an alert is raised. Security analysts also use the software to scan for risky sign-ins. If a user logs in from an IP that has been blacklisted, an alert is raised. They have increasingly relied on conditional access policies based on monitoring user behavior.
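The impossible-travel check described above reduces to great-circle distance over elapsed time. A minimal sketch, assuming each sign-in carries coordinates and a timestamp; the 900 km/h ceiling is an illustrative choice roughly matching the fastest commercial flight.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(sign_in_a, sign_in_b, max_speed_kmh=900):
    """Flag two sign-ins the same user could not plausibly have made.

    Each sign-in is (lat, lon, unix_seconds). If the implied travel
    speed exceeds the ceiling, the pair should raise an alert.
    """
    (lat1, lon1, t1), (lat2, lon2, t2) = sign_in_a, sign_in_b
    hours = abs(t2 - t1) / 3600
    if hours == 0:
        return True  # simultaneous sign-ins from two places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh

# A London sign-in, then a Sydney sign-in one hour later: not physically possible.
london = (51.5074, -0.1278, 0)
sydney = (-33.8688, 151.2093, 3600)
print(impossible_travel(london, sydney))  # True
```

Commercial products layer VPN and mobile-network tolerances on top of this raw speed check, but the core alerting logic is the same.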



When an AI Agent Says ‘I Agree,’ Who’s Consenting?

The most autonomous agents can execute a chain of actions related to a transaction—such as comparing, booking, paying, forwarding the invoice. The broader the autonomy, the tighter the frame: precise contractual rules, allow-lists, budgets, a kill-switch, clear user notices, and, where required, electronic signatures. At this point the question stops being technical and becomes legal: under what framework does each agent-made click have effect, on whose authority, and with what safeguards? European law and national laws already offer solid anchors—agency and online contracting, signatures and secure payments, fair disclosure—now joined by the newer eIDAS 2 and the AI Act. ... Under European law, an AI agent has no will of its own. It is a means of expressing—or failing to express—someone’s will. Legally, someone always consents: the user (consumer) or a representative in the civil law sense. If an agent “accepts” an offer, we are back to agency: the act binds the principal only within the authority granted; beyond that, it is unenforceable. The agent is not a new subject of law. ... Who is on the hook if consent is tainted? First, the business that designs the onboarding. Europe’s Digital Services Act (DSA) bans deceptive interfaces (“dark patterns”) that materially impair a user’s ability to make a free, informed choice. A pushy interface can support a finding of civil fraud and a regulatory breach. Second, the principal is bound only within the mandate. 
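The safeguards listed above (allow-lists, budgets, a kill-switch) amount to a machine-checkable mandate. A minimal sketch with invented names; a real implementation would add electronic signatures, audit logging, and the user notices the article mentions.

```python
class AgentMandate:
    """Contractual frame for an autonomous agent: what it may commit to.

    Mirrors the agency analysis in the text: an agent-made "I agree"
    binds the principal only within the authority granted here.
    """
    def __init__(self, allowed_merchants, budget, active=True):
        self.allowed_merchants = set(allowed_merchants)
        self.budget = budget
        self.active = active  # the kill-switch

    def authorises(self, merchant: str, amount: float) -> bool:
        """True only if the proposed transaction falls inside the mandate."""
        return (self.active
                and merchant in self.allowed_merchants
                and amount <= self.budget)

mandate = AgentMandate({"uber", "pizza-place"}, budget=50.0)
print(mandate.authorises("uber", 23.0))    # within the mandate: binds the principal
print(mandate.authorises("casino", 10.0))  # outside the allow-list: unenforceable
mandate.active = False                     # the principal pulls the kill-switch
print(mandate.authorises("uber", 23.0))
```

Anything the agent "agrees" to outside this envelope is, on the article's analysis, beyond the authority granted and therefore unenforceable against the principal.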


AI cybercrime agents will strike in 2026: Are defenders ready?

The prediction itself isn’t novel. What’s sobering is the math behind it—and the widening gap between how fast organisations can defend versus how quickly they’re being attacked. “The groups that convert intelligence into monetisation the fastest will set the tempo,” Rashish Pandey, VP of marketing & communications for APAC at Fortinet, told journalists at a media briefing earlier this week. “Throughput defines impact.” This isn’t about whether AI will be weaponised—that’s already happening. The urgent question is whether defenders can close what Fortinet calls the “tempo differential” before autonomous AI agents fundamentally alter the economics of cybercrime. ... The evolution extends beyond speed. Fortinet’s predictions highlight how attackers are weaponising generative AI for rapid data analysis—sifting through stolen information to identify the most valuable targets and optimal extortion strategies before defenders even detect the breach. This aligns with broader attack trends: ransomware operations increasingly blend system disruption with data theft and multi-stage extortion. Critical infrastructure sectors—healthcare, manufacturing, utilities—face heightened risk as operational technology systems become targets. ... “The ‘skills gap’ is less about scarcity and more about alignment—matching expertise to the reality of machine-speed, data-driven operations,” Pandey noted during the briefing.

Daily Tech Digest - December 12, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



Escaping the transformation trap: Why we must build for continuous change, not reboots

Each new wave of innovation demands faster decisions, deeper integration and tighter alignment across silos. Yet, most organizations are still structured for linear, project-based change. As complexity compounds, the gap between what’s possible and what’s operationally sustainable continues to widen. The result is a growing adaptation gap — the widening distance between the speed of innovation and the enterprise’s capacity to absorb it. CIOs now sit at the fault line of this imbalance, confronting not only relentless technological disruption but also the limits of their organizations’ ability to evolve at the same pace. ... Technical debt has been rapidly amassing in three areas: accumulated, acquired, and emergent. The result destabilizes transformation efforts. ... Most modernization programs change the surface, not the supporting systems. New digital interfaces and analytics layers often sit atop legacy data logic and brittle integration models. Without rearchitecting the semantic and process foundations, the shared meaning behind data and decisions, enterprises modernize their appearance without improving their fitness. ... The new question is not, ‘How do we transform again?’ but ‘How do we build so we never need to?’ That requires architectures capable of sustaining and sharing meaning across every system and process, which technologists refer to as semantic interoperability.


The state of AI in 2026 – part 1

“The real race will be about purpose, measurable outcomes and return on investment. AI is no longer simply a technical challenge, it has become a business strategy,” said Zaccone. “However, this evolution comes with new risks. As agentic systems gain autonomy, securing the underlying AI infrastructure becomes critical. Standards are still emerging, but adopting strong security and governance practices early dramatically increases the likelihood of success. At the same time, AI is reshaping the risk landscape faster than regulation can adapt, which means it’s raising pressing questions around data sovereignty, compliance and access to AI-generated data across jurisdictions.” ... “Many teams now face practical limits around data quality, compute efficiency and responsible integration with existing systems. There is a clear gap between those who just wrap APIs around foundation models and those who actually optimise architectures and training pipelines. The next phase of AI is about reliability, interpretability and building systems that engineers can trust and improve over time,” Khan said. ... “To close the gap between the vision and reality of agentic AI over the next 12 months, enterprise agentic automation (EAA) will be essential. By blending dynamic AI with deterministic guardrails and human-in-the-loop checkpoints, EAA empowers enterprises to automate complex, exception-heavy or cognitive work without losing control,” explained Freund.


Cybersecurity isn’t underfunded — It’s undermanaged

Of course, cybersecurity projects are often complex because they need to reach across corporate silos and geographies to deliver effective protection to the business. This is not natural in large firms, which are, almost by essence, territorial and political. But beyond that, the profile of CISOs is also a key dimension: Most are technologists by trade and background, and have spent the last decade firefighting incidents, without building or delivering any kind of long-term narrative. They have not developed the management experience, political finesse or personal gravitas they would need to be truly successful now that the spotlight is firmly on them from the top of the firm. Many genuinely believe that chronic under-investment in cybersecurity is the root cause of insufficient maturity levels, when in fact the heart of the matter is chronic execution failure linked to endemic business short-termism. Both point to governance and culture as the real root causes of the long-term stagnation of cybersecurity maturity in large firms. For the CISOs who have not internalised those cultural dynamics and are almost always left out of those decisions, the result is frustration; frustration breeds short tenures; and short tenures aggravate the management and leadership mismatch: you cannot deliver much genuine transformative impact in large firms on those timeframes.


Document databases – understanding your options

There are two decisions to take around databases today—what you choose to run, and how you choose to run it. The latter choice covers a range of different deployment options, from implementing your own instance of a technology on your own hardware and storage, through to picking a database as a service where all the infrastructure is abstracted away and you only see an API. In between, you can look at hosting your own instances in the cloud, where you manage the software while the cloud service provider runs the infrastructure, or adopt a managed service where you still decide on the design but everything else is done for you. ... The first option is to look at alternative approaches to running MongoDB itself. Alongside MongoDB-compatible APIs, you can choose to run different versions of MongoDB or alternatives to meet your document database needs. ... The second migration option is to use a service that is compatible with MongoDB’s API. For some workloads, being compatible with the API will be enough to move to another service with minimal to no impact. ... The third option is to use an alternative document database. In the world of open source, Apache CouchDB is another document database that works with JSON and can be used for projects. It is particularly useful where applications might run on mobile devices as well as cloud instances; mobile support is a feature that MongoDB has deprecated.
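The document model the article describes — schemaless JSON documents queried by field matching — is what makes API compatibility between MongoDB, CouchDB, and MongoDB-compatible services meaningful. A minimal, purely illustrative in-memory sketch of that query shape (not any real engine's implementation) might look like this:

```python
# Illustrative sketch of the document-database model: free-form JSON-style
# documents queried by field matching, the shape that MongoDB-compatible
# APIs preserve. Real engines add indexing, persistence, and replication.

class DocumentStore:
    def __init__(self):
        self._docs = []

    def insert(self, doc: dict) -> None:
        # Documents are free-form dicts; no schema is enforced on insert.
        self._docs.append(dict(doc))

    def find(self, query: dict) -> list[dict]:
        # Return documents whose fields equal every key/value in the query,
        # mirroring the {"field": value} filter style of MongoDB-like APIs.
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in query.items())]

store = DocumentStore()
store.insert({"type": "order", "customer": "acme", "total": 120})
store.insert({"type": "order", "customer": "globex", "total": 80})
store.insert({"type": "invoice", "customer": "acme"})

acme_orders = store.find({"type": "order", "customer": "acme"})
```

Because a migration target that honours this same query contract can accept the same filters unchanged, "compatible with the API" can indeed be enough for some workloads to move with minimal impact.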


Why AI Fatigue Is Sending Customers Back to Humans

The pattern is familiar across industries: digital experiences that start strong, then steadily degrade as companies prioritize cost-cutting over satisfaction. In banking, this manifests in frustratingly specific ways: chatbots that loop through unhelpful responses, automated fraud alerts that lock accounts without a path to resolution, and phone trees that make reaching a human nearly impossible. ... The path forward for community banks and credit unions isn’t choosing between digital efficiency and human service or retreating to nostalgia for branch-based banking. It’s investing strategically in both. ... Geographic proximity enables genuine empathy that algorithms can’t replicate. Rajesh Patil, CEO at Digital Agents Service Organization (CUSO), offers an example: “When there’s a disaster in a community, an AI chatbot doesn’t know what happened. But a local branch employee knows and can say, ‘I understand. Let me help you.'” The most sophisticated community bank strategy uses technology to identify opportunities while humans deliver the insight. ... After decades of pursuing digital transformation, community banks and credit unions are discovering their competitive advantage was human all along. But the path forward isn’t nostalgia for branch-based banking, it’s strategic investment in both digital infrastructure and human capacity.


The Cloud Investment Paradox: Why More Spending Isn’t Delivering AI Results

There are three common gaps that stall AI progress, even after significant cloud spend. First is data architecture. Many organisations lift and shift legacy systems into the cloud without rethinking how data will flow across teams and tools. They end up with the same fragmentation problems, just in a new environment. Second is the skills gap. Research has found that 27% of organisations lack the internal expertise to harness AI’s potential. And it is not just data scientists. You need cloud architects who understand how to design environments specifically for AI workloads, not just generic compute. Third is data quality and accessibility. AI models cannot perform well without clean, consistent input. But too often, data governance is an afterthought. Only 1 in 5 organisations feel confident that their data is truly AI-ready. That is a foundational issue, not a fine-tuning one. ... Before investing in another AI pilot or data science hire, organisations should take a step back. Is the data ready? Are the pipelines in place? Do internal teams have what they need to turn compute into insight? This means prioritising data integration and governance before algorithms. It means investing in internal training and hiring with long-term capability in mind. And it means treating cloud and AI as part of the same strategy, not separate silos.


Beyond the login: Why “identity-first” security is leaking data and why “context-first” is the fix

The uncomfortable truth emerging from recent high-profile breaches is that identity-first security—when operating in isolation—is leaking data. Threat actors have evolved; they are no longer just trying to break down the door; they are cloning the keys. The reliance on static authentication events has created a dangerous blind spot. ... Standard facial recognition often looks for geometric matches—distance between eyes, shape of the nose. Deepfakes can replicate this perfectly, turning video verification into a vulnerability rather than a safeguard. To counter this, modern security must implement advanced “Liveness Detection”. It is no longer enough to match a face to a database; the system must analyse micro-expressions and texture to ensure the face belongs to a live human presence, not a digital puppet. Yet, even with these safeguards, betting the entire security posture solely on verifying who the user is remains a risky strategy. ... To stop these leaks, security must move beyond the “Who” (Identity) and interrogate the “Where,” “What,” and “How” (Context). This requires a shift from static gates to Continuous Adaptive Trust. Context is not a single data point; it is a composite score derived from real-time telemetry. ... For technology leaders, this convergence is not just a technical upgrade; it is a strategic necessity for compliance. Frameworks like the Digital Personal Data Protection (DPDP) Act require organisations to implement “reasonable security safeguards”.
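A composite score derived from real-time telemetry could be sketched as a weighted blend of context signals, re-evaluated continuously rather than once at login. The signal names, weights, and thresholds below are hypothetical illustrations, not any vendor's scoring model:

```python
# Minimal sketch of Continuous Adaptive Trust: several context signals
# (each scored 0.0 = anomalous to 1.0 = expected) are weighted into one
# composite trust score, and the access decision adapts to that score.
# Signal names, weights, and thresholds are illustrative assumptions.

WEIGHTS = {
    "known_device": 0.30,
    "usual_location": 0.25,
    "typical_hours": 0.15,
    "normal_data_volume": 0.30,
}

def trust_score(signals: dict) -> float:
    # Weighted sum over the configured signals; missing signals count as 0.
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

def decision(score: float) -> str:
    if score >= 0.8:
        return "allow"
    if score >= 0.5:
        return "step-up"  # e.g. re-verify with liveness-checked biometrics
    return "block"

# A session from a known device and location, but at an unusual hour.
session = {"known_device": 1.0, "usual_location": 1.0,
           "typical_hours": 0.0, "normal_data_volume": 1.0}
score = trust_score(session)
```

The point of the sketch is that no single failed signal (the unusual hour) forces a block on its own; the decision reflects the whole context, and the same computation can rerun on every new telemetry event during the session.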


Why Critical Infrastructure Needs Security-Forward Managed File Transfer Now

Today’s cyber attackers often use ordinary documents and files to breach organizations. Without strong security checks, it’s surprisingly easy for bad actors to cause major problems. Attacks exploit both common file formats and weaknesses in legacy operational technology (OT) environments. ... Modern managed file transfer (MFT) requires a layered security approach to effectively combat file-based threats and comply with best practices. This approach dictates that organizations must encrypt files at rest and in transit, employ strong hash checks, and use digital signing to validate the origin and integrity of files throughout their lifecycle. ... Many MFT tools incorporate multi-layered malware scanning. This works by scanning every file with multiple malware engines rather than relying on a single one, given that different engines detect different malware families and variants. Parallel multiscanning not only improves detection rates but also shortens the window for exploitation of zero-day vulnerabilities and polymorphic malware. This helps to reduce the chance of false negatives before files enter sensitive networks. The scanning should be directly integrated into upload, download, and workflow steps so no file can move between zones without passing through a multi-engine inspection pipeline. ... MFT workflows can automatically route files to a sandbox based on risk scores, file types, sender reputation, or country of origin. Then, files are only released upon passing behavioral checks.
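The "strong hash checks" layer above can be illustrated with a short sketch: record a file's SHA-256 digest when it enters the workflow, then accept it downstream only if the received bytes still hash to that digest. This is a generic integrity-check pattern, not any particular MFT product's API:

```python
# Sketch of a file-integrity hash check: the digest recorded at ingest is
# compared, in constant time, against the digest of the received bytes.
# Any modification in transit changes the digest and fails the check.
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest recorded when a file enters the workflow."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_digest: str) -> bool:
    # hmac.compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(sha256_digest(data), expected_digest)

original = b"quarterly-report.pdf contents"
recorded = sha256_digest(original)

intact = verify_integrity(original, recorded)               # True
tampered = verify_integrity(b"altered contents", recorded)  # False
```

Digital signing extends the same idea by having the sender sign the digest with a private key, so the receiver validates origin as well as integrity.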


Fight AI Disinformation: A CISO Playbook for Working with Your C-Suite

Unlike misinformation or malinformation, which may be inaccurate or misleading but not necessarily harmful, disinformation is both false and designed specifically to damage organizations. It can be episodic, targeting individuals for immediate gain, such as tricking an employee into transferring funds via a deepfaked call. It can also be industrial, operating at scale to undermine brand reputation, manipulate stock prices, or probe organizational defenses over time. The attack surfaces are broad: internally, adversaries exploit corporate meeting solutions, email, and messaging platforms to bypass authentication and impersonate trusted individuals. ... Without clear ownership and cross-functional collaboration, efforts to counter disinformation are often disjointed and ineffectual. In some cases, organizations leave disinformation as an unmanaged risk, exposing themselves to episodic attacks on individuals and industrial campaigns targeting reputation and financial stability. Another common pitfall is failing to differentiate between types of information threats. CISOs should focus their resources on disinformation where intent to harm and lack of accuracy intersect, rather than attempting to police all forms of misinformation or malinformation. ... CISOs must lead the way in communicating the risks and fostering a culture of shared responsibility, engaging all employees in detection, reporting, and response. This includes developing internal tooling for monitoring and reporting, promoting transparency, and ensuring ongoing education about evolving threats.


Why AI Scaling Innovation Requires an Open Cloud Ecosystem

Developers and enterprises should have the flexibility to construct custom multi-cloud infrastructure that provides the appropriate specifications. Distributing workloads allows them to move faster on new projects without driving up infrastructure spend and overconsuming resources. It also enables them to prioritize in-country data residency for enhanced compliance and security. With an open ecosystem, developers and enterprises can stagger cloud-agnostic applications across a mosaic of public and private clouds to optimize hardware efficiency, maintain greater autonomy in data management and data security, and run applications seamlessly at the edge. This promotes innovation at all layers of the stack, from training to testing to processing, making it easier to deploy the best possible services and applications. An open ecosystem also reduces the branding and growth risks associated with hyperscaler dependence. Often, when a developer or enterprise runs their products exclusively on a single platform, they become less their own product and more an outgrowth of their hyperscaler cloud provider; instead of selling their app on its own, they sell the hyperscaler’s services. ... Supporting hyper-specific AI use cases often begets complex development demands: from hefty compute power, to multi-model frameworks, to strict data governance and pristine data quality. Even large enterprises don’t always have the resources in-house to account for these parameters.

Daily Tech Digest - December 11, 2025


Quote for the day:

"We become what we think about most of the time, and that's the strangest secret." -- Earl Nightingale



SEON Predicts Fraud’s Next Frontier: Entering the Age of Autonomous Attacks

AI has become a permanent part of the fraud landscape, but not in the way many expected. It has transformed how we detect and prevent fraud, from adaptive risk scoring to real-time data enrichment, but full autonomy remains out of reach. Fraud prevention is a complex interplay of data, intent, and context, and that is where human reasoning continues to matter most: analysts interpret ambiguity, weigh risk appetite, and understand social signals that no model can fully replicate. What AI can do is amplify that capability. ... The boundary between genuine and synthetic activity is blurring. Generative AI can now simulate human interaction with high accuracy, including realistic typing rhythms, believable navigation flows, and deepfake biometrics that replicate natural variance. The traditional approach of searching for red flags no longer works when those flags can be easily fabricated. The next evolution in fraud detection will come from baselining legitimate human behaviour. By modelling how real users act over time, and looking at their rhythms, routines, and inconsistencies, we can identify the subtle deviations that synthetic agents struggle to mimic. It is the behavioural equivalent of knowing a familiar face in a crowd. Trust comes from recognition, not reaction.
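Baselining legitimate behaviour can be sketched in its simplest statistical form: learn a user's own mean and spread for some behavioural signal (here, hypothetical inter-keystroke timings), then flag sessions that deviate far from that personal baseline. Real systems model many signals jointly; this is only the one-dimensional idea:

```python
# Minimal sketch of behavioural baselining: a z-score measures how far a
# session's signal sits from the user's own historical baseline. The
# timing values and the signal choice are illustrative assumptions.
import statistics

def baseline(samples: list[float]) -> tuple[float, float]:
    """Mean and sample standard deviation of historical observations."""
    return statistics.mean(samples), statistics.stdev(samples)

def deviation(value: float, mean: float, stdev: float) -> float:
    # How many standard deviations this observation is from the baseline;
    # large values suggest activity the real user would not produce.
    return abs(value - mean) / stdev

history = [0.18, 0.22, 0.20, 0.19, 0.21, 0.20]  # seconds between keys
mean, stdev = baseline(history)

human_like = deviation(0.21, mean, stdev)  # within the user's normal range
bot_like = deviation(0.05, mean, stdev)    # far outside it
```

A fabricated rhythm can match population-level averages, but matching one specific user's rhythms, routines, and inconsistencies is the harder problem the article points to.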


The Invisible Vault: Mastering Secrets Management in CI/CD Pipelines

In the high-speed world of modern software development, Continuous Integration and Continuous Deployment (CI/CD) pipelines are the engines of delivery. They automate the process of building, testing, and deploying code, allowing teams to ship faster and more reliably. But this automation introduces a critical challenge: How do you securely manage the "keys to the kingdom"—the API tokens, database passwords, encryption keys, and service account credentials that your applications and infrastructure require? ... A single misstep can expose your entire organization to a devastating data breach. Recent breaches in CI/CD platforms have shown how exposed organizations can be when secrets leak or pipelines are compromised. As pipelines scale, the complexity and risk grow with them. ... The cryptographic algorithms that currently secure nearly all digital communications (like RSA and Elliptic Curve Cryptography used in TLS/SSL) are vulnerable to being broken by a sufficiently powerful quantum computer. While such computers do not yet exist at scale, they represent a future threat that has immediate consequences due to "harvest now, decrypt later" attacks. ... Relevance to CI/CD Secrets Management: The primary risk is in the transport of secrets. The secure channel (TLS) established between your CI/CD runner and your Secrets Manager is the point of vulnerability. To future-proof your pipeline, you need to consider moving towards PQC-enabled protocols.
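A core pattern for keeping those credentials out of code, regardless of which secrets manager backs the pipeline, is runtime injection: the runner exposes each secret as an environment variable for the duration of the job, and the application reads it at startup and fails fast if it is missing. The helper and variable names below are hypothetical, shown only to illustrate the pattern:

```python
# Sketch of the runtime-injection pattern for CI/CD secrets: values are
# read from the environment at job time, never embedded in source or in
# the repository. Names here are illustrative, not a specific platform's.
import os

class MissingSecretError(RuntimeError):
    """Raised when a required secret was not injected into the job."""

def get_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Failing fast surfaces misconfigured pipelines immediately,
        # instead of letting a job run with an empty credential.
        raise MissingSecretError(f"secret {name!r} was not injected")
    return value

# Simulate the runner's injection step for this demo only.
os.environ["DEMO_API_TOKEN"] = "s3cr3t-value"

token = get_secret("DEMO_API_TOKEN")

try:
    get_secret("DEMO_DB_PASSWORD")  # never injected in this demo
    missing_detected = False
except MissingSecretError:
    missing_detected = True
```

The same shape works whether the variable is populated by a hosted CI platform's secret store or by a vault agent over TLS — which is exactly the channel the article later flags as the post-quantum weak point.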


Experience Really Matters - But Now You're Fighting AI Hacks

Defenders traditionally rely on understanding the timing and ordering of events. The Anthropic incident shows that AI-driven activity occurs in extremely rapid cycles. Reconnaissance, exploit refinement and privilege escalation can occur through repeated attempts that adjust based on feedback from the environment. This creates a workflow that resembles iterative code generation rather than a series of discrete intrusion stages. Professionals must now account for an adversary that can alter its approach within seconds and can test multiple variations of the same technique without the delays associated with human effort. ... The AI attacker moved across cloud systems, identity structures, application layers and internal services. It interacted fluidly with whatever surface was available. Professionals who have worked primarily within a single domain may now need broader familiarity with adjacent layers of the stack because AI-driven activity does not limit itself to the boundaries of established specializations. ... The workforce shortage in cybersecurity will continue, but the qualifications for advancement are shifting. Organizations will look for professionals who understand both the capabilities and the limitations of AI-driven offense and defense. Those who can read an AI-generated artifact, refine an automated detection workflow, or construct an updated threat model will be positioned for leadership roles.


Is vibe coding the new gateway to technical debt?

The big idea in AI-driven development is that now we can just build applications by describing them in plain English. The funny thing is, describing what an application does is one of the hardest parts of software development; it’s called requirements gathering. ... But now we are riding a vibe. A vibe, in this case, is an unwritten requirement. It is always changing—and with AI, we can keep manifesting these whims at a good clip. But while we are projecting our intentions into code that we don’t see, we are producing hidden effects that add up to masses of technical debt. Eventually, it will all come back to bite us. ... Sure, you can try using AI to fix the things that are breaking, but have you tried it? Have you ever been stuck with an AI assistant confidently running you and your code around in circles? Even with something like Gemini CLI and DevTools integration (where the AI has access to the server and client-side outputs) it can so easily descend into a maddening cycle. In the end, you are mocked by your own unwillingness to roll up your sleeves and do some work. ... If I had to choose one thing that is most compelling about AI coding, it would be the ability to quickly scale from nothing. The moment when I get a whole, functioning something based on not much more than an idea I described? That’s a real thrill. Weirdly, AI also makes me feel less alone at times, like there is another voice in the room.


How to Be a Great Data Steward: Responsibilities and Best Practices

Data is often described as “a critical organizational asset,” but without proper stewardship, it can become a liability rather than an asset. Poor data management leads to inaccurate reporting, compliance violations, and reputational damage. For example, a financial institution that fails to maintain accurate customer records risks incurring regulatory penalties and causing customer dissatisfaction. ... Effective data stewardship is guided by several foundational principles: accountability, transparency, integrity, security, and ethical use. These principles ensure that data remains accurate, secure, and ethically managed across its lifecycle. ... Data stewards can be categorized into several types: business data stewards, technical data stewards, domain or lead data stewards, and operational data stewards. Each plays a unique role in maintaining data quality and compliance in conjunction with other data management professionals, technical teams, and business stakeholders. ... Data stewardship thrives on clarity. Every data steward should have well-defined responsibilities and authority levels, and each data stewardship team should have clear boundaries and expectations identified. This includes specifying who manages which datasets, who ensures compliance, and who handles data quality issues. Clear role definitions prevent duplication of effort and ensure accountability across the organization.


Time for CIOs to ratify an IT constitution

IT governance is simultaneously a massive value multiplier and a must-immediately-take-a-nap-boring topic for executives. For busy moderns, governance is as intellectually palatable as the stale cabbage on the table René Descartes once doubted. How do CIOs get key stakeholders to care passionately and appropriately about how IT decisions are made? ... Everyone agrees that one can’t have a totally centralized, my-way-or-the-highway dictatorship or a totally decentralized you-all-do-whatever-you-want, live-in-a-yurt digital commune. Has the stakeholder base become too numerous, too culturally disparate, and too attitudinally centrifugal to be governed at all? ... Has IT governance sunk to such a state of disrepair that a total rethink is necessary? I asked 30 CIOs and thought leaders what they thought about the current state of IT governance and possible paths forward. The CFO for IT at a state college in the northeast argued that if the CEO, the board of directors, and the CIO were “doing their job, a constitution would not be necessary.” The CIO at a midsize, mid-Florida city argued that writing an effective IT constitution “would be like pushing water up a wall.” ... CIOs need to have a conversation regarding IT rights, privileges, duties, and responsibilities. Are they willing to do so? ... It appears that IT governance is not a hill that CIOs are willing to expend political capital on. 


Flash storage prices are surging – why auto-tiering is now essential

Across industries and use cases, a consistent pattern emerges. The majority of data becomes cold shortly after it is created. It is written once, accessed briefly, then retained for long periods without meaningful activity. Cold data does not require low latency, high IOPS, expensive endurance ratings or premium, power-intensive performance tiers. It only needs to be stored reliably at the lowest reasonable cost. Yet during the years when flash was only marginally more expensive than HDD, many organisations placed cold data on flash systems simply because the price difference felt manageable. With today’s economics, that model can no longer scale. ... The rise in ransomware attacks also helped drive flash adoption. Organisations sought faster backups, quicker restores, and higher snapshot retention. Flash delivered these benefits, but the economics are breaking under current pricing conditions. Today, the cost of flash-based backup appliances is rising, long-term retention on flash is becoming unsustainable, and maintaining deep histories on premium media no longer aligns with budget expectations. ... The current flash pricing crisis is more than a temporary spike. It signals a long-term shift in storage economics driven by accelerating AI demand, constrained supply chains, and global data growth. The all-flash mindset of the past decade is now colliding with financial realities that organisations can no longer ignore. Cold data should not be placed on expensive media. 
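The auto-tiering argument reduces to a simple policy: data that has not been accessed recently no longer earns its place on flash and should be demoted to cheaper media. A toy sketch of such a policy follows; the tier names and idle-day thresholds are hypothetical policy values, not a product's defaults:

```python
# Minimal sketch of an age-based auto-tiering policy: placement is chosen
# from how long data has sat idle. Thresholds and tier names are
# illustrative assumptions for this example only.

FLASH_MAX_IDLE_DAYS = 30
WARM_MAX_IDLE_DAYS = 180

def tier_for(days_since_last_access: int) -> str:
    if days_since_last_access <= FLASH_MAX_IDLE_DAYS:
        return "flash"           # hot: low latency still earns its cost
    if days_since_last_access <= WARM_MAX_IDLE_DAYS:
        return "hdd"             # warm: capacity media
    return "object-archive"      # cold: lowest cost per TB, reliably stored

placements = {age: tier_for(age) for age in (1, 90, 400)}
```

Production systems refine this with access frequency, object size, and compliance constraints, but the economics the article describes — cold data migrating off premium media automatically — follow from exactly this kind of rule.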


AI, sustainability and talent gaps reshape industrial growth

A new study by GlobalLogic, a Hitachi Group company, in partnership with HFS Research, reveals a widening divide between industrial enterprises’ ambitions and their real-world readiness for AI, sustainability, and workforce transformation. Despite strong executive push towards modernization, skills shortages, legacy systems, and misaligned priorities continue to stall progress across key industrial segments. ... The findings lay bare the scale of transition ahead: while industries recognize AI and sustainability as foundational for future competitiveness, a lack of talent and weak integration strategies are slowing measurable impact. “Industrial leaders see AI, sustainability, and talent as top priorities, yet struggle to convert these ambitions into tangible results,” said Srini Shankar, President and CEO at GlobalLogic. ... Although operational cost reduction is the top priority today, the study finds that within two years, AI adoption and operational optimization will dominate executive focus. The industrial sector is preparing for a shift from incremental improvements to deep automation and intelligence-led models. ... “Enterprises need to embed sustainability, talent, and technology transitions into both strategy and day-to-day operations,” said Josh Matthews, HFS Research. “Clear outcomes and messaging are essential to show current and future workforces that industrial organizations are shaping — not chasing — the sustainable, tech-driven future.”


When ransomware strikes, who takes the lead -- the CIO or CISO?

"[CIOs and CISOs] will probably have different priorities for when they want to do things; the CIO is going to be more concerned [about the] business side of keeping systems operational, whereas the CISO [wants to know] where is this critical data? Is it being exfiltrated? Having a good incident response plan, planning that stuff out in advance [is necessary so both parties know] what steps they're supposed to take. "The best default to contain the attack is to pull internet connectivity. You don't want to restart a system [or] shut it down, because you can lose forensic evidence. That way, if they are exfiltrating any data, that access stops, so you can begin triaging how they got in and patch that hole up. ... the first three steps come down to confirm, contain and anchor. We want to confirm that blast radius, not hypothesize or theorize what it could be, but what is it really? You'd be surprised at how many teams burn their most valuable hour debating whether it's really ransomware. "Second, contain first, communicate second. I think there's a natural [tendency for] humans to send an all-hands email out, call an emergency meeting and even notify customers. What matters most is to triage and stop the bleeding, isolate those compromised systems and cripple the bad actor's lateral movement. ... "[The best way to contain a ransomware attack will be different for each organization depending on their architectures, controls and technology, but in general, isolate as completely as possible."


LLM vulnerability patching skills remain limited

Because the models rely on patterns they have learned, a shift in structure can break those patterns. The model may still spot something that looks like the original flaw, but the fix it proposes may no longer land in the right place. That is why a patch that looks reasonable can still fail the exploit test. The weakness remains reachable because the model addressed only part of the issue or chose the wrong line to modify. Another pattern surfaced. When a fix for an artificial variant did appear, it often came from only one model. Others failed on the same case. This shows that each artificial variant pushed the systems in different directions, and only one model at a time managed to guess a working repair. The lack of agreement across models signals that these variants exposed gaps in the patterns the systems depend on. ... OpenAI and Meta models landed behind that mark but contributed steady fixes in several scenarios. The spread shows that gains do not come from one vendor alone. The study also checked overlap. Authentic issues showed substantial agreement between models, while artificial issues showed far less. Only two issues across the entire set were patched by one model and not by any other. This suggests that combining several models adds limited coverage. ... Researchers plan to extend this work in several ways. One direction involves combining output from different LLMs or from repeated runs of the same model, giving the patching process a chance to compare options before settling on one.
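The validation step the study relies on — a candidate patch only counts if the exploit test can no longer reach the weakness — can be sketched as a simple loop over per-model candidates. The patches and the exploit check below are stand-ins to show the shape of the harness, not the paper's actual test suite:

```python
# Sketch of exploit-driven patch validation across multiple models: each
# candidate fix is accepted only if the (hypothetical) exploit check no
# longer succeeds against it. Candidate strings are illustrative.

def exploit_succeeds(code: str) -> bool:
    # Stand-in for running the real exploit test: here the flaw is
    # "reachable" whenever the unsafe sink is still present.
    return "unsafe_eval" in code

def first_working_patch(candidates: list[str]):
    """Return the first candidate that defeats the exploit, else None."""
    for patch in candidates:
        if not exploit_succeeds(patch):
            return patch
    return None

candidates = [
    "result = unsafe_eval(user_input)",  # model A: flaw left reachable
    "result = safe_parse(user_input)",   # model B: removes the sink
]
chosen = first_working_patch(candidates)
```

This also illustrates why a patch that "looks reasonable" can still fail: acceptance depends on the exploit test, not on whether the diff resembles a fix — and why the study's finding of low cross-model overlap on artificial variants limits how much coverage pooling candidates adds.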

Daily Tech Digest - December 10, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie



Design in the age of AI: How small businesses are building big brands faster

Instead of hiring separate agencies for naming, logo design, and web development, small businesses are turning to unified AI platforms that handle the full early-stage design stack. Tools like Design.com merge naming, logo creation, and website generation into a single workflow — turning an entrepreneur’s first sketch into a polished brand system within minutes. ... Behind the surge in AI design tools lies a broader ecosystem shift. Companies like Canva and Wix made design accessible; the current wave — led by AI-native platforms like Design.com — is more personal and adaptive. Unlike templated platforms, these tools understand context. A restaurant founder and a SaaS startup will get not just different visuals, but different copy tones, typography systems, and user flows — automatically. “What we’re seeing,” Lynch explains, “isn’t just growth in one product category. It’s a movement toward connected creativity — where every part of the brand experience learns from every other.” ... Imagine naming a company and watching an AI instantly generate a logo, color palette, and homepage layout that all reflect the same personality. As your audience grows, the same system helps you update your visual identity or tone to match new goals — while preserving your original DNA.


Henkel CISO on the messy truth of monitoring factories built across decades

On the factory floor, it is common to find a solitary engineering workstation that holds the only up-to-date copies of critical logic files, proprietary configuration tools, and project backups. If that specific computer suffers a hardware failure or is compromised by ransomware, the maintenance team loses the ability to diagnose errors or recover the production line. ... If the internet connection is severed, or if the third-party cloud provider suffers an outage, the equipment on the floor stops working. This architecture fails because it prioritizes connectivity over local autonomy, creating a fragile ecosystem where a disruption in a remote cloud environment creates a “digital brick” out of physical machinery. ... An attacker does not need sophisticated “zero-day” exploits to compromise a fifteen-year-old human-machine interface, they often just need publicly known vulnerabilities that will never be fixed by the vendor. By compromising a peripheral camera or an outdated visualization node, they gain a persistence mechanism that security teams rarely monitor, allowing them to map the operational technology network and prepare for a disruptive attack on the critical control systems at their leisure. ... A critical question for CISOs to ask is: “Can you provide a continuously updated Software Bill of Materials for your firmware, and what is your specific process for mitigating vulnerabilities in embedded third-party libraries?”


AI churn has IT rebuilding tech stacks every 90 days

Even without full production status, the fact that so many organizations are rebuilding components of their agent tech stacks every few months demonstrates not only the speed of change in the AI landscape but also a lack of faith in agentic results, Northcutt claims. Changes in the agent tech stack range from something as simple as updating the underlying AI model’s version, to moving from a closed-source to an open-source model or changing the database where agent data is stored, he notes. In many cases, replacing one component in the stack sets off a cascade of changes downstream, he adds. ... While the speed of AI evolution can drive frequent rebuilds, part of the problem lies in the way AI models are tweaked, she says. “The deeper issue is that many agent systems rely on behaviors that sit inside the model rather than on clear rules,” Hashem explains. “When the model updates, the behavior drifts. When teams set clear steps and checks for the agent, the stack can evolve without constant breakage.” ... “What works now may become suboptimal later on,” he says. “If organizations don’t actively keep up to date and refresh their stack, they risk falling behind in performance, security, and reliability.” Constant rebuilds don’t have to create chaos, however, Balabanskyy adds. CIOs should take a layered approach to their agent stacks, he recommends, with robust version control, continuous monitoring, and a modular deployment approach.
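Hashem’s point about explicit steps and checks can be sketched in a few lines. This is an illustrative pattern only, not code from any of the teams quoted; `call_model` and `validate` are hypothetical stand-ins for a model invocation and a rule-based output check. The idea is that behavior lives in explicit rules outside the model, so swapping the underlying model version does not silently change the pipeline.

```python
def run_agent_step(call_model, validate, max_retries=2):
    """Run one agent step with an explicit validation check.

    call_model: zero-argument callable that returns the model's output.
    validate:   callable returning (ok, reason) for a given output.
    Because the acceptance rule lives here, not inside the model,
    the step behaves the same way when the model is upgraded.
    """
    reason = "no attempts made"
    for attempt in range(max_retries + 1):
        output = call_model()
        ok, reason = validate(output)
        if ok:
            return output
        # On failure: log the reason, optionally adjust the prompt, retry.
    raise ValueError(f"agent step failed validation: {reason}")
```

A team can then change the model, the database, or any other stack component and re-run the same checks, rather than discovering drift in production.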


Why Losing One Security Engineer Can Break Your Defences

When tools are hard to manage – or if you need to bundle numerous tools from different vendors together – tribal knowledge builds up in one engineer’s head. It’s unrealistic to expect them to document it. Gartner recently said that organizations use an average of 45 cybersecurity tools and called for security leaders to optimize their toolsets. And in that context, losing the one person who understands how these systems actually work is not just inconvenient: it's a structural risk. And the impact this has is seen in the data from the State of AI in Security & Development report; using numerous vendors for security tools correlates with more incidents, more time spent prioritising alerts and slower remediation. In short, a security engineer has too much on their plate, and most security tools aren’t making their job any easier. ... “Organisations all tend to be looking for the same blend of technical cloud, integration, SecOps and IAM experience, but with extensive knowledge in each pillar,” says James Walsh, National Lead for Cyber, Data & Cloud UK&I at Hays. “Everyone wants the unicorn security engineer whose experience spans all of this, but it comes at too high a price for lots of organisations,” he adds. Walsh notes that hiring is often driven by teams below the CISO — such as Heads of SecOps — which can create inconsistent expectations of what a ‘fully competent’ engineer should look like.


Overload Protection: The Missing Pillar of Platform Engineering

Some limits exist to protect systems. Others enforce fairness between customers or align with contractual tiers. Regardless of the reason, these limits must be enforced predictably and transparently. ... In data-intensive environments, bottlenecks often appear in storage, compute, or queueing layers. One unbounded query or runaway job can starve others, impacting entire regions or tenants. Without a unified overload protection layer, every team becomes a potential failure domain. ... Enterprise customers often face challenges when quota systems evolve organically. Quotas are published inconsistently, counted incorrectly, or not visible to the right teams. Both external customers and internal services need predictable limits. A centralized Quota Service solves this. It defines clear APIs for tracking and enforcing usage across tenants, resources, and time intervals. ... When overload protection is not owned by the platform, teams reinvent it repeatedly. Each implementation behaves differently, often under pressure. The result is a fragile ecosystem where limits are enforced inconsistently (some endpoints apply resource limits while others run requests unchecked, leading to unpredictable behavior and downstream problems) and failures cascade unpredictably (a runaway data pipeline job can saturate a shared database, delaying or failing unrelated jobs and triggering retries and alerts across teams).
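The core of such a Quota Service — tracking and enforcing usage per tenant, resource, and time interval — can be sketched as a fixed-window counter. This is a minimal toy, not any vendor’s API; the class and method names are invented for illustration, and a production version would need atomic shared storage, burst handling, and clear policy for unconfigured limits.

```python
import time
from collections import defaultdict

class QuotaService:
    """Toy centralized quota service: fixed-window counters keyed by
    (tenant, resource). Illustrative only; names are hypothetical."""

    def __init__(self, limits, window_seconds=60, clock=time.time):
        self.limits = limits          # {(tenant, resource): max units per window}
        self.window = window_seconds
        self.clock = clock            # injectable clock, handy for testing
        self.counters = defaultdict(int)

    def check_and_consume(self, tenant, resource, amount=1):
        """Check the quota for this tenant/resource and record usage.
        Returns True if the request fits within the current window's limit."""
        limit = self.limits.get((tenant, resource))
        if limit is None:
            return True               # no limit configured: allow by default
        window_id = int(self.clock() // self.window)
        key = (tenant, resource, window_id)
        if self.counters[key] + amount > limit:
            return False              # over quota: caller should reject (e.g. HTTP 429)
        self.counters[key] += amount
        return True
```

Because every service calls the same API, limits are published, counted, and enforced in one place instead of being reimplemented per team.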


Is your DR plan just wishful thinking? Prove your resilience with chaos engineering

At its core, it’s about building confidence in your system’s resilience. The process starts with understanding your system's steady state, which is its normal, measurable, and healthy output. You can't know the true impact of a failure without first defining what "good" looks like. This understanding allows you to form a clear, testable hypothesis: a statement of belief that your system's steady state will persist even when a specific, turbulent condition is introduced. To test this hypothesis, you then execute a controlled action, which is a precise and targeted failure injected into the system. This isn't random mischief; it's a specific simulation of real-world failures, such as consuming all CPU on a host (resource exhaustion), adding network latency (network failure), or terminating a virtual machine (state failure). While this action is running, automated probes act as your scientific instruments, continuously monitoring the system's state to measure the effect. ... Beyond simply proving system availability, chaos engineering builds trust in your reliability metrics, ensuring that you meet your SLOs even when services become unavailable. An SLO is a specific, acceptable target level of your service's performance measured over a specified period that reflects the user's experience. SLOs aren't just internal goals; they are the bedrock of customer trust and the foundation of your contractual service level agreements (SLAs).
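The loop described above — define the steady state, state a hypothesis, inject a controlled action, and probe the effect — can be expressed as a small harness. This is a sketch under stated assumptions, not a real chaos tool: the caller supplies the probe (say, a latency measurement) plus hypothetical `inject_fault` and `stop_fault` hooks, and the hypothesis is simply that the probed metric stays within a tolerance of its baseline.

```python
import statistics

def run_chaos_experiment(probe, inject_fault, stop_fault, samples=5, tolerance=0.2):
    """Minimal chaos-experiment loop:
    1. measure the steady state with the probe,
    2. inject a controlled, targeted fault,
    3. keep probing while the fault is active,
    4. compare against the hypothesis and always roll the fault back."""
    baseline = statistics.mean(probe() for _ in range(samples))  # steady state
    inject_fault()                       # e.g. add network latency on one host
    try:
        during = statistics.mean(probe() for _ in range(samples))
    finally:
        stop_fault()                     # roll back even if probing fails
    deviation = abs(during - baseline) / baseline
    return {"baseline": baseline, "during_fault": during,
            "hypothesis_holds": deviation <= tolerance}
```

The same harness can probe an SLO metric directly, turning “we meet our SLO even when a dependency fails” into a testable statement rather than an assumption.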


The data center of the future: high voltage, liquid cooled, up to 4 MW per rack

Developments such as microfluidic cooling could have a profound impact on how racks and accompanying infrastructure will be built in the future. It is also not just about the type of cooling, but about the way chips communicate with each other and internally. What will the impact of an all-photonics network be on cooling, for example? The first stages of building that type of end-to-end connection have been completed. The interesting parts for the discussion we have here are next on the roadmap for all-photonics networks: using photonic connections between and inside silicon on boards. ... However, there are many moving parts to take into account. It will need a more dynamic approach to selling space in data centers, which is usually sold based on the wattage a customer wants. Irrespective of the actual load, the data center reserves that capacity for the customer. If data centers need to be more dynamic, so do the contracts. ... The data center of the future will be characterized by high-density computing, liquid cooling, sustainable power sources, and a more integrated role in the grid ecosystem. As technology continues to advance, data centers will become more efficient, flexible, and environmentally responsible. That may sound like an oxymoron to many people nowadays, but it’s the only way to get to the densities we need moving forward.


Vietnam integrating biometrics into daily life in digital transformation drive

Vietnam is rapidly integrating biometrics and digital identity into everyday life, rolling out identity‑based systems across public transport, air travel and banking as part of an ambitious national digital transformation drive. New deployments in Hanoi’s metro, airports nationwide and the financial sector show how VNeID and biometric verification increasingly constitute Vietnam’s infrastructure. ... Officials argue the initiative strengthens Hanoi’s ambitions as a smart city and improves interoperability across transport modes. It also introduces a unified digital identity layer for public transit, which no other Vietnamese city can yet boast. Passenger data, operations and transactions are now centralized on a single platform, enabling targeted subsidies based on usage patterns rather than flat‑rate models. The Hanoi Metro app, available on major app stores, supports tap‑and‑go access and discounted fares for verified digital identities. ... The new rules require banks to conduct face‑to‑face identity checks and verify biometric data, such as facial information, before issuing cards to individual customers. The same requirement applies to the legal representatives of corporate clients, with limited exceptions, reports Vietnam Plus. ... Foreigners without electronic identity credentials, as well as Vietnamese nationals with undetermined citizenship status, will undergo in‑person biometric collection using data from the National Population Database. 


Why 2025 broke the manager role — and what it means for leadership ahead

Managers did far more than supervise. “They became mentors, skill-builders, culture carriers and the first line of emotional support,” Tyagi said. They coached diverse teams, supported women and marginalised groups entering new roles, and navigated talent crunches by building internal pipelines. They adopted learning apps, facilitated experience-sharing sessions and absorbed the emotional load of stretched teams. ... Sustaining morale amid continual uncertainty was the most difficult task, Tyagi said. Workloads were redistributed constantly. Managers had to reassure employees while balancing performance expectations with wellbeing. Chopra saw the same tensions. Recognition and feedback remained inconsistent. Gallup research showed a gap between managers’ belief that they offered regular feedback and employees’ experience that they rarely received it. Remote work deepened disconnection. “Creating team cohesion, trust and belonging when people are dispersed remains difficult,” she said. ... Empathy dominated the management skill-set in 2025. Transparency, communication and emotional intelligence were indispensable as uncertainty persisted. Coaching and talent development grew central, especially in organisations investing in women, new hires and marginalised communities. Chopra pointed to several non-negotiables: emotional intelligence, tech literacy, outcome-focused leadership, psychological safety, coaching and ethical awareness in technology use. 


The Missing Link in AI Scaling: Knowledge-First, Not Data-First

Organizations today need to ensure data readiness to avoid failures in model performance, system trust, and strategic alignment. To succeed, CIOs must shift from a “data-first” to a “knowledge-first” approach in order to capitalize on the true benefits of AI. ... Domain-specific reasoning capabilities provide context and meaning to data, which is crucial for professional and reliable advice. A semantic layer across silos creates unified views of all data, enabling comprehensive insights that are otherwise impossible to achieve. Another benefit is its ability to support AI governance and explainability by ensuring that AI systems are not “black boxes,” but are transparent and trustworthy. Lastly, it acts as an agentic AI backbone by orchestrating a workforce of AI agents that can execute complex tasks with reliability and context. ... Shifting to a knowledge-first architecture is not just an option, but a necessity, and is a direct challenge to the conventional data-first mindset. For decades, enterprises have focused on accumulating vast lakes of data, believing that more data inherently leads to better insights. However, this approach created fragmented, context-poor data silos. This “digital quicksand” is the root of the “Semantic Challenge” because data is siloed and heterogeneous. ... A knowledge-first approach fundamentally changes the goal from simply storing data to building an interconnected, enterprise-wide knowledge graph. This architecture is built on the principle of “things, not strings”.
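A toy illustration of “things, not strings”: instead of scattered strings in separate tables, facts become (subject, predicate, object) triples over identified entities, so one query about an entity spans what were previously separate silos. The identifiers and predicates below are invented for the example; real knowledge graphs use standards such as RDF and dedicated graph stores.

```python
class KnowledgeGraph:
    """Minimal in-memory triple store sketching the knowledge-graph idea.
    Each fact is a (subject, predicate, object) triple of entity IDs."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        """Record one fact as a triple."""
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the non-None fields (toy pattern match)."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]
```

Because “customer:42” is one identified thing rather than a string repeated across silos, a single pattern query retrieves everything known about it, which is exactly the unified view the semantic layer is meant to provide.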