Daily Tech Digest - December 04, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Software Supply Chain Risks: Lessons from Recent Attacks

Modern applications are complex tapestries woven from proprietary code, open-source libraries, third-party APIs, and countless development tools. This interconnected web is the software supply chain, and it has become one of the most critical—and vulnerable—attack surfaces for organizations globally. Supply chain attacks are particularly insidious because they exploit trust. Organizations implicitly trust the code they import from reputable sources and the tools their developers use daily. Attackers have recognized that it's often easier to compromise a less-secure vendor or a widely-used open-source project than to attack a well-defended enterprise directly. Once an attacker infiltrates a supply chain, they gain a "force multiplier" effect. A single malicious update can be automatically pulled and deployed by thousands of downstream users, granting the attacker widespread access instantly. Recent high-profile attacks have shattered the illusion of a secure perimeter, demonstrating that a single compromised component can have catastrophic, cascading effects. ... The era of blindly trusting software components is over. The software supply chain has become a primary battleground for cyberattacks, and the consequences of negligence are severe. By learning from recent attacks and proactively implementing robust security measures like SBOMs, secure pipelines, and rigorous vendor vetting, organizations can significantly reduce their risk and build more resilient, trustworthy software.


Building Bridges, Not Barriers: The Case for Collaborative Data Governance

The collaborative data governance model preserves existing structure while improving coordination among teams through shared standards and processes. This coordination is now even more critical for organizations that want to take advantage of AI systems. The collaborative model is an alternative with many benefits for organizations whose central governance bodies – like finance, IT, data and risk – operate in silos. Complex digital and data initiatives, as well as regulatory and ethical concerns, often span multiple domains, making close coordination across departments a necessity. While the collaborative data governance model can be highly effective for complex organizations, there are situations where it may not be appropriate. ... Rather than taking a centralized approach to managing data among multiple governance domains, a federated approach allows each domain to retain its authority while adhering to shared governance standards. In other words, local control with organization-wide cohesion. ... The collaborative governance model is a framework that makes systems and processes accessible across the organization, rather than imposing a series of burdensome checks and red tape. In other words, under this model, data governance is viewed as an enabler, not a blocker. ... Using effective tools such as data catalogs, policy management and collaboration spaces, shared platforms streamline governance processes and enable seamless communication and cooperation between teams.


China Researches Ways to Disrupt Satellite Internet

In an academic paper published in Chinese last month, researchers at two major Chinese universities found that the communications provided by satellite constellations could be jammed, but at great cost: To disrupt signals from the Starlink network to a region the size of Taiwan would require 1,000 to 2,000 drones, according to a research paper cited in a report in the South China Morning Post. ... Cyber- and electronic-warfare attacks against satellites are being embraced because they pose less risk of collateral damage and are less likely to escalate tensions, says Clayton Swope, deputy director for the Aerospace Security Project at the Center for Strategic and International Studies (CSIS), a Washington, DC-based policy think tank. ... The constellations are resilient to disruptions. The latest research into jamming constellation-satellite networks was published in the Chinese peer-reviewed journal Systems Engineering and Electronics on Nov. 5 with a title that translates to "Simulation research of distributed jammers against mega-constellation downlink communication transmissions," the SCMP reported. ... China is not just researching ways to disrupt communications for rival nations, but also is developing its own constellation technology to benefit from the same distributed space networks that make Starlink, EutelSat, and others so reliable, according to the CSIS's Swope.


The Legacy Challenge in Enterprise Data

As companies face extreme complexity with multiple legacy data warehouses and disparate analytical data assets and models owned by line-of-business analysts, decision-making becomes challenging when moving to cloud-based data systems for transformation and migration. Both options are challenging, and there is no one-size-fits-all solution; careful consideration is needed, as the decision involves millions of dollars and years of critical work. ... Enterprise migrations are long journeys, not short projects. Programs typically span 18 to 24 months, cover hundreds of terabytes of data, and touch dozens of business domains. A single cutover is too risky, while endless pilots waste resources. Phased execution is the only sustainable approach. High-value domains are prioritized to demonstrate progress. Legacy and cloud often run in parallel until validation is complete. Automated validation, DevOps pipelines, and AI-assisted SQL conversion accelerate progress. To avoid burnout, teams are structured with a mix of full-time employees who work closely with business users and managed services that provide technical scale. ... Governance must be embedded from the start. Metadata catalogs track lineage and ownership. Automated validation ensures quality at every stage, not just at cutover. Role-based access controls, encryption, and masking enforce compliance.
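
The excerpt's emphasis on automated validation during parallel runs can be made concrete with a small comparison harness. The sketch below, assuming two DB-API connections and illustrative table names, compares row counts and content checksums between a legacy table and its cloud counterpart; real programs would typically add column-level profiling and sampling on top of this.

```python
# Minimal sketch of automated migration validation, assuming two DB-API
# connections (legacy_conn, cloud_conn) and a shared list of table names.
# Table names and the key columns are illustrative placeholders.
import hashlib

def table_fingerprint(conn, table, key_column):
    """Return (row_count, checksum) for a table, ordered by a stable key."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    row_count = cur.fetchone()[0]

    digest = hashlib.sha256()
    cur.execute(f"SELECT * FROM {table} ORDER BY {key_column}")
    for row in cur:
        digest.update(repr(row).encode("utf-8"))
    return row_count, digest.hexdigest()

def validate_migration(legacy_conn, cloud_conn, tables):
    """Compare row counts and content checksums table by table."""
    report = {}
    for table, key in tables:
        legacy = table_fingerprint(legacy_conn, table, key)
        cloud = table_fingerprint(cloud_conn, table, key)
        report[table] = {
            "rows_match": legacy[0] == cloud[0],
            "content_match": legacy[1] == cloud[1],
        }
    return report

# Usage (hypothetical): validate_migration(pg_conn, bq_conn, [("orders", "order_id")])
```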


Through the Looking Glass: Data Stewards in the Realm of Gondor

Data Stewards are sought-after individuals today. I have seen many “data steward” job postings over the last six months and read much discussion about the role in various periodicals and postings. I have always agreed with my editor’s conviction that everyone is a data steward, accountable for the data they create, manage, and use. Nevertheless, the role of data steward, as a job and as a career, has established itself in the view of many companies as essential to improving data governance and management. ... “Information Stewardship” is a concept like Data Stewardship and may even predate it, based on my brief survey of articles on these topics. Trevor gives an excellent summary of the essence of stewardship in this context: Stewardship requires the acceptance by the user that the information belongs to the organization as a whole, not any one individual. The information should be shared as needed and monitored for changes in value. ... Data Stewards “own” data, or to be more precise, Data Stewards are responsible for the data owned by the enterprise. If the enterprise is the old-world Lord’s Estate, then the Data Stewardship Team consists of the people who watch over the lifeblood of the estate, including the shepherds who make sure the data is flowing smoothly from field to field, safe from internal and external predators, safe from inclement weather, and safe from disease. ... 


Scaling Cloud and Distributed Applications: Lessons and Strategies

Scaling extends beyond simply adding servers. When scaling occurs, the fundamental question is whether the application requires scaling due to genuine customer demand or whether upstream services with queuing issues are slowing system response. When threads wait for responses and cannot execute, pressure increases on CPU and memory resources, triggering elastic scaling even though actual demand has not grown. ... Architecture must extend beyond documentation. Creating opinionated architecture templates assists teams in building applications that automatically inherit architectural standards. Applications deploy automatically using manifest-based definitions, so that teams can focus on business functionality rather than infrastructure tooling complexities. ... Infrastructure repaving represents a highly effective practice of systematically rebuilding infrastructure each sprint. Automated processes clean up running instances regularly. This approach enhances security by eliminating configuration drift. When drift exists or patches need to be applied, including zero-day vulnerability fixes, the updates can be incorporated systematically. Extended operation periods create stale resources, performance degradation, and security vulnerabilities. Environments are recreated automatically at defined intervals (weekly or bi-weekly).


Why Synthetic Data Will Decide Who Wins the Next Wave of AI

Why is synthetic data suddenly so important? The simple answer is that AI has begun bumping into a glass ceiling. Real-world data doesn’t extend far enough to cover all the unlikely edge cases or every scenario that we want our models to live through. Synthetic data allows teams to code in the missing parts directly. Developers construct situations as needed. ... Building synthetic data holds the key to filling the gap when the quality or volume of data needed by AI models is not good enough, but the process to create this data is not easy. Behind the scenes, there’s an entire stack working together. We are talking about simulation engines, generative models like GANs and diffusion systems, and large language models (LLMs) for text-based domains. All of this creates virtual worlds for training. ... The organizations most affected by the growing need for synthetic data are those that operate in high-risk areas where there is no actual data, or where finding it is inefficient. Think of fully autonomous vehicles that can’t simply wait for every dangerous encounter to occur in traffic. Doctors working on cures for rare diseases who can’t call on thousands of such cases. Trading firms that can’t wait for just the right market shock for their AI models. These teams can turn to synthetic data to learn from situations that are simply not possible (or practical) to reproduce in real life.


How ABB’s Approach to IT/OT Ensures Cyber Resilience

The convergence of IT and OT creates new vulnerabilities as previously isolated control systems now require integration with enterprise networks. ABB addresses this by embedding security architecture from the start rather than retrofitting it later. This includes proper network segmentation, validated patching protocols and granular access controls that enable safe data connectivity while protecting operational technology. ... On the security front, AI-driven monitoring can identify anomalous patterns in network traffic and system behavior that might indicate a breach attempt, spotting threats that traditional rule-based systems would miss. However, it's crucial to distinguish between embedded AI and Gen AI. Embedded AI in our products optimises processes with predictable, explainable outcomes. This same principle applies to security: AI systems that monitor for threats must be transparent in how they reach conclusions, allowing security teams to understand and validate alerts rather than trusting a black box. ... Secure data exchange protocols, multi-factor authentication on remote access points and validated update mechanisms all work together to enable the connectivity that digital twins require while maintaining security boundaries. The key is recognising that digital transformation and security are interdependent. Organisations investing millions in AI, digital twins or automation while neglecting cybersecurity are building on sand.


Building an MCP server is easy, but getting it to work is a lot harder

"The true power of remote MCP is realized through centralized 'agent gateways' where these servers are registered and managed. This model delivers the essential guardrails that enterprises require," Shrivastava said. That said, agent gateways do come with their own caveats. "While gateways provide security, managing a growing ecosystem of dozens or even hundreds of registered MCP tools introduces a new challenge: orchestration," he said. "The most scalable approach is to add another layer of abstraction: organizing toolchains into 'topics' based on the 'job to be done.'" ... "When a large language model is granted access to multiple external tools via the protocol, there is a significant risk that it may choose the wrong tool, misuse the correct one, or become confused and produce nonsensical or irrelevant outputs, whether through classic hallucinations or incorrect tool use," he explained. ... MCP's scaling limits also present a huge obstacle. The scaling limits exist "because the protocol was never designed to coordinate large, distributed networks of agents," said James Urquhart, field CTO and technology evangelist at Kamiwaza AI, a provider of products that orchestrate and deploy autonomous AI agents. MCP works well in small, controlled environments, but "it assumes instant responses between agents," he said -- an unrealistic expectation once systems grow and "multiple agents compete for processing time, memory or bandwidth."


The quantum clock is ticking and businesses are still stuck in prep mode

The report highlights one of the toughest challenges. Eighty-one percent of respondents said their crypto libraries and hardware security modules are not prepared for post-quantum integration. Many use legacy systems that depend on protocols designed long before quantum threats were taken seriously. Retrofitting these systems is not a simple upgrade. It requires changes to how keys are generated, stored and exchanged. Skills shortages compound the problem. Many security teams lack experience in testing or deploying post-quantum algorithms. Vendor dependence also slows progress because businesses often cannot move forward until external suppliers update their own tooling. ... Nearly every organization surveyed plans to allocate budget toward post-quantum projects within the next two years. Most expect to spend between six and ten percent of their cybersecurity budgets on research, tooling or deployment. Spending levels differ by region. More than half of US organizations plan to invest at least eleven percent, far higher than in the UK and Germany. ... Contractual requirements from customers and partners are seen as the strongest motivator for adoption. Industry standards rank near the top of the list across most sectors. Many respondents also pointed to upcoming regulations and mandates as drivers. Security incidents ranked surprisingly low in the US, suggesting that market and policy signals hold more influence than hypothetical attack scenarios.

Daily Tech Digest - December 03, 2025


Quote for the day:

“The only true wisdom is knowing that you know nothing.” -- Socrates


How CISOs can prepare for the new era of short-lived TLS certificates

“Shorter certificate lifespans are a gift,” says Justin Shattuck, CSO at Resilience. “They push people toward better automation and certificate management practices, which will later be vital to post-quantum defense.” But this gift, intended to strengthen security, could turn into a curse if organizations are unprepared. Many still rely on manual tracking and renewal processes, using spreadsheets, calendar reminders, or system admins who “just know” when certificates are due to expire. ... “We’re investing in a living cryptographic inventory that doesn’t just track SSL/TLS certificates, but also keys, algorithms, identities, and their business, risk, and regulatory context within our organization and ties all of that to risk,” he says. “Every cert is tied to an owner, an expiration date, and a system dependency, and supported with continuous lifecycle-based communication with those owners. That inventory drives automated notifications, so no expiration sneaks up on us.” ... While automation is important as certificates expire more quickly, how it is implemented matters. Renewing a certificate a fixed number of days before expiration can become unreliable as lifespans change. The alternative is renewing based on a percentage of the certificate’s lifetime, and this method has an advantage: the timing adjusts automatically when the lifespan shortens. “Hard-coded renewal periods are likely to be too long at some point, whereas percentage renewal periods should be fine,” says Josh Aas.
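
To make the percentage-based renewal point concrete, here is a minimal sketch of the arithmetic. The two-thirds threshold mirrors common ACME client defaults, but the exact fraction is an illustrative assumption, not something the article specifies.

```python
# Minimal sketch of percentage-based renewal timing, per the article's point
# that a fixed "renew N days before expiry" rule breaks as lifespans shrink.
# The 2/3-of-lifetime fraction is an illustrative choice, not a mandated value.
from datetime import datetime, timedelta

def renewal_time(not_before: datetime, not_after: datetime, fraction: float = 2 / 3) -> datetime:
    """Renew after `fraction` of the certificate's lifetime has elapsed."""
    lifetime = not_after - not_before
    return not_before + lifetime * fraction

# A 90-day certificate renews ~30 days before expiry; a 47-day certificate
# automatically renews ~16 days before expiry, with no rule change needed.
for days in (90, 47):
    issued = datetime(2026, 1, 1)
    expires = issued + timedelta(days=days)
    print(days, "-day cert renews on", renewal_time(issued, expires).date())
```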


How Enterprises Can Navigate Privacy With Clarity

There's an interesting pattern across organizations of all sizes. When we started discussing DPDPA compliance a year ago, companies fell into two buckets: those already building toward compliance and others saying they'd wait for the final rules. That "wait and see period" taught us a lot. It showed how most enterprises genuinely want to do the right thing, but they often don't know where to start. In practice, mature data protection starts with a simple question that most enterprises haven't asked themselves: What personal data do we have coming in? Which of it is truly personal data? What are we doing with it? ... The first is how enterprises understand personal data itself. I tell clients not to view personal data as a single item but as part of an interconnected web. Once one data point links to another, information that didn't seem personal becomes personal because it's stored together or can be easily connected. ... The second gap is organizational visibility. Some teams process personal data in ways others don't know about. When we speak with multiple teams, there's often a light bulb moment where everyone realizes that data processing is happening in places they never expected. The third gap is third-party management. Some teams may share data under basic commercial arrangements or collect it through processes that seem routine. An IT team might sign up for a new hosting service without realizing it will store customer personal data. 


How to succeed as an independent software developer

Income for freelance developers varies depending on factors such as location, experience, skills, and project type. Average pay for a contractor is about $111,800 annually, according to ZipRecruiter, with top earners making potentially more than $151,000. ... “One of the most important ways to succeed as an independent developer is to treat yourself like a business,” says Darian Shimy, CEO of FutureFund, a fundraising platform built for K-12 schools, and a software engineer by trade. “That means setting up an LLC or sole proprietorship, separating your personal and business finances, and using invoicing and tax tools that make it easier to stay compliant,” Shimy says. ... “It was a full-circle moment, recognition not just for coding expertise, but for shaping how developers learn emerging technologies,” Kapoor says. “Specialization builds identity. Once your expertise becomes synonymous with progress in a field, opportunities—whether projects, media, or publishing—start coming to you.” ... Freelancers in any field need to know how to communicate well, whether it’s through the written word or conversations with clients and colleagues. If a developer communicates poorly, even great talent might not make the difference in landing gigs. ... A portfolio of work tells the story of what you bring to the table. It’s the main way to showcase your software development skills and experience, and is a key tool in attracting clients and projects. 


AI in 5 years: Preparing for intelligent, automated cyber attacks

Cybercriminals are increasingly experimenting with autonomous AI-driven attacks, where machine agents independently plan, coordinate, and execute multi-stage campaigns. These AI systems share intelligence, adapt in real time to defensive measures, and collaborate across thousands of endpoints — functioning like self-learning botnets without human oversight. ... Recent “vibe hacking” cases showed how threat actors embedded social-engineering goals directly into AI configurations, allowing bots to negotiate, deceive, and persist autonomously. As AI voice cloning becomes indistinguishable from the real thing, verifying identity will shift from who is speaking to how behaviourally consistent their actions are, a fundamental change in digital trust models. ... Unlike traditional threats, machine-made attacks learn and adapt continuously. Every failed exploit becomes training data, creating a self-improving threat ecosystem that evolves faster than conventional defences. Check Point Research notes that AI-driven tools like the Hexstrike-AI framework, originally built for red-team testing, were weaponised within hours to exploit Citrix NetScaler zero-days. These attacks also operate with unprecedented precision. ... Make DevSecOps a standard part of your AI strategy. Automate security checks across your CI/CD pipeline to detect insecure code, exposed secrets, and misconfigurations before they reach production.
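
The closing DevSecOps advice is easy to start on with a small pipeline step. Below is a minimal sketch of a pre-merge secret scan; the regex patterns and file walk are illustrative, and production pipelines would normally rely on dedicated scanners such as gitleaks or trufflehog rather than hand-rolled rules.

```python
# Minimal sketch of a secret scan a CI job could run before deployment.
# Patterns are illustrative only; real pipelines should use dedicated tools.
import re
import sys
from pathlib import Path

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE),
}

def scan(root: str) -> list[str]:
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {label}")
    return findings

if __name__ == "__main__":
    issues = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("\n".join(issues) or "No obvious secrets found")
    sys.exit(1 if issues else 0)  # non-zero exit fails the CI stage
```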


Threat intelligence programs are broken, here is how to fix them

“An effective threat intelligence program is the cornerstone of a cybersecurity governance program. To put this in place, companies must implement controls to proactively detect emerging threats, as well as have an incident handling process that prioritizes incidents automatically based on feeds from different sources. This needs to be able to correlate a massive amount of data and provide automatic responses to enhance proactive actions,” says Carlos Portuguez ... Product teams, fraud teams, governance and compliance groups, and legal counsel often make decisions that introduce new risk. If they do not share those plans with threat intelligence leaders, PIRs become outdated. Security teams need lines of communication that help them track major business initiatives. If a company enters a new region, adopts a new cloud platform, or deploys an AI capability, the threat model shifts. PIRs should reflect that shift. ... Manual analysis cannot keep pace with the volume of stolen credentials, stealer logs, forum posts, and malware data circulating in criminal markets. Security engineering teams need automation to extract value from this material. ... Measuring threat intelligence remains a challenge for organizations. The report recommends linking metrics directly to PIRs. This prevents metrics that reward volume instead of impact. ... Threat intelligence should help guide enterprise risk decisions. It should influence control design, identity practices, incident response planning, and long term investment.


Europe’s Digital Sovereignty Hinges on Smarter Regulation for Data Access

Europe must seek to better understand, and play into, the reality of market competition in the AI sector. Among the factors impacting AI innovation, access to computing power and data are widely recognized as most crucial. While some proposals have been made to address the former, such as making the continent’s supercomputers available to AI start-ups, little has been proposed with regard to addressing the data access challenge. ... By applying the requirement to AI developers independently of their provenance, the framework ensures EU competitiveness is not adversely impacted. On the contrary, the approach would enable EU-based AI companies to innovate with legal certainty, avoiding the cost and potential chilling effect of lengthy lawsuits compared to their US competitors. Additionally, by putting the onus on copyright owners to make their content accessible, the framework reduces the burden for AI companies to find (or digitize) training material, which affects small companies most. ... Beyond addressing a core challenge in the AI market, the example of the European Data Commons highlights how government action is not just a zero-sum game between fostering innovation and setting regulatory standards. By scrapping its digital regulation in the rush to boost the economy and gain digital sovereignty, the EU is surrendering its longtime ambition and ability to shape global technology in its image.


New training method boosts AI multimodal reasoning with smaller, smarter datasets

Recent advances in reinforcement learning with verifiable rewards (RLVR) have significantly improved the reasoning abilities of large language models (LLMs). RLVR trains LLMs to generate chain-of-thought (CoT) tokens (which mimic the reasoning processes humans use) before generating the final answer. This improves the model’s capability to solve complex reasoning tasks such as math and coding. Motivated by this success, researchers have applied similar RL-based methods to large multimodal models (LMMs), showing that the benefits can extend beyond text to improve visual understanding and problem-solving across different modalities. ... According to Zhang, the step-by-step process fundamentally changes the reliability of the model's outputs. "Traditional models often 'jump' directly to an answer, which means they explore only a narrow portion of the reasoning space," he said. "In contrast, a reasoning-first approach forces the model to explicitly examine multiple intermediate steps... [allowing it] to traverse much deeper paths and arrive at answers with far more internal consistency." ... The researchers also found that token efficiency is crucial. While allowing a model to generate longer reasoning steps can improve performance, excessive tokens reduce efficiency. Their results show that setting a smaller "reasoning budget" can achieve comparable or even better accuracy, an important consideration for deploying cost-effective enterprise applications.
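
As a rough illustration of the "reasoning budget" idea, the sketch below caps how many new tokens a model may spend on its chain-of-thought plus answer. The model name, the <think> tag convention and the budget value are assumptions for illustration, not the researchers' setup, and capping total new tokens is only a coarse proxy for a per-step budget.

```python
# Minimal sketch of enforcing a reasoning budget at generation time: the model
# is asked to reason inside <think> tags, and generation is capped so the
# chain-of-thought cannot grow without bound. All specifics are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"   # hypothetical choice of a small model
REASONING_BUDGET = 256                  # max tokens allowed for the CoT segment

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

prompt = (
    "Reason step by step inside <think>...</think>, then give the final answer.\n"
    "Question: A train travels 120 km in 1.5 hours. What is its average speed?"
)
inputs = tokenizer(prompt, return_tensors="pt")

# Budget the whole completion: reasoning tokens plus a short answer.
output = model.generate(**inputs, max_new_tokens=REASONING_BUDGET + 64, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```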


Why Firms Can’t Ignore Agentic AI

The danger posed by agentic AI stems from its ability to carry out specific tasks with limited oversight. “When you give autonomy to a machine to operate within certain bounds, you need to be confident of two things: That it has been provided with excellent context so it knows how to make the right decisions – and that it is only completing the task asked of it, without using the information it’s been trusted with for any other purpose,” James Flint, AI practice lead at Securys, said. Mike Wilkes, enterprise CISO, Aikido Security, describes agentic AI as “giving a black box agent the ability to plan, act, and adapt on its own.” “In most companies that now means a new kind of digital insider risk with highly-privileged access to code, infrastructure, and data,” he warns. When employees start to use the technology without guardrails, shadow agentic AI introduces a number of risks. ... Adding to the risk, agentic AI is becoming easier to build and deploy. This will allow more employees to experiment with AI agents – often outside IT oversight, creating new governance and security challenges, says Mistry. Agentic AI can be coupled with the recently open-sourced Model Context Protocol (MCP), released by Anthropic as an open standard for orchestrating connections between AI assistants and data sources. By streamlining the work of development and security teams, this can “turbocharge productivity,” but it comes with caveats, says Pieter Danhieux, co-founder and CEO of Secure Code Warrior.


Why supply chains are the weakest link in today’s cyber defenses

One of the key reasons is that attackers want to make the best return on their efforts, and have learned that one of the easiest ways into a well-defended enterprise is through a partner. No thief would attempt to smash down the front door of a well-protected building if they could steal a key and slip in through the back. There’s also the advantage of scale: one company providing IT, HR, accounting or sales services to multiple customers may have fewer resources to protect itself, making it a natural point of attack. ... When the nature of cyber risks changes so quickly, yearly audits of suppliers can’t provide the most accurate evidence of their security posture. The result is an ecosystem built on trust, where compliance often becomes more of a comfort blanket. Meanwhile, attackers are taking advantage of the lag between each audit cycle, moving far faster than the verification processes designed to stop them. Unless verification evolves into a continuous process, we’ll keep trusting paperwork while breaches continue to spread through the supply chain. ... Technology alone won’t fix the supply chain problem, and a change in mindset is also needed. Too many boards are still distracted by the next big security trend, while overlooking the basics that actually reduce breaches. Breach prevention needs to be measured, reported and prioritized just like any other business KPI.


How AI Is Redefining Both Business Risk and Resilience Strategy

When implemented across prevention and response workflows, automation reduces human error, frees analysts’ time and preserves business continuity during high-pressure events. One applicable example includes automated data-restore sequences, which validate backup integrity before bringing systems online. Another example involves intelligent network rerouting that isolates subnets while preserving service. Organizations that deploy AI broadly across prevention and response report significantly lower breach costs. ... Biased AI models can produce skewed outputs that lead to poor decisions during a crisis. When a model is trained on limited or biased historical data, it can favor certain groups, locations or signals and then recommend actions that overlook real need. In practical terms, this can mean an automated triage system that routes emergency help away from underserved neighborhoods. ... Turn risk controls into operational patterns. Use staged deployments, automated rollback triggers and immutable model artifacts that map to code and data versions. Those practices reduce the likelihood that an unseen model change will result in a system outage. Next, pair AI systems with fallbacks for critical flows. This step ensures core services can continue if models fail. Monitoring should also be a consideration. It should display model metrics, such as drift and input distribution, alongside business measures, including latency and error rates.
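
The monitoring suggestion can be prototyped with a simple drift score reported next to business measures. The sketch below uses a population stability index over one numeric input feature; the feature values, thresholds and dashboard fields are illustrative assumptions.

```python
# Minimal sketch of surfacing model drift next to business metrics, assuming a
# reference sample of a numeric input feature from training time and a recent
# production sample. Thresholds and feature values are illustrative.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Rough drift score: PSI over a shared binning of the two samples."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)   # avoid log(0) and division by zero
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(50, 10, 5_000)          # training-time distribution
current = rng.normal(58, 12, 1_000)            # shifted production traffic

dashboard = {
    "input_drift_psi": round(population_stability_index(reference, current), 3),
    "p95_latency_ms": 182,      # business measures would come from real telemetry
    "error_rate_pct": 0.4,
}
print(dashboard)   # PSI > 0.2 is a common rule of thumb for "investigate"
```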

Daily Tech Digest - December 02, 2025


Quote for the day:

"I am not a product of my circumstances. I am a product of my decisions." -- Stephen Covey



The CISO’s paradox: Enabling innovation while managing risk

When security understands revenue goals, customer promises and regulatory exposure, guidance becomes specific and enabling. Begin by embedding a security liaison with each product squad so there is always a known face to engage in identity, data flows, logging and encryption decisions as they form. Engineers should not have to open two-week tickets to get a simple question answered. There should be open “office hours,” chat channels and quick calls so they can get immediate feedback on decisions like API design, encryption requirements and regional data moves. ... Show up at sprint planning and early design reviews to ask the questions that matter — authentication paths, least-privilege access, logging coverage and how changes will be monitored in production through SIEM and EDR. When security officers sit at the same table, the conversation changes from “Can we do this?” to “How do we do this securely?” and better outcomes follow from day one. ... When developers deploy code multiple times a day, a “final security review” before launch just wouldn’t work. This traditional, end-of-line gating model doesn’t just block innovation; it also fails to catch real-world risks. To be effective, security must be embedded during development, not just inspected after. ... This discipline must further extend into production. Even with world-class DevSecOps, we know a zero-day or configuration drift can happen.


Resilience Means Fewer Recoveries, Not Faster Ones

Resilience has become one of the most overused words in management. Leaders praise teams for “pushing through” and “bouncing back,” as if the ability to absorb endless strain were proof of strength. But endurance and resilience are not the same. Endurance is about surviving pressure. Resilience is about designing systems so people don’t break under it. Many organizations don’t build resilience; they simply expect employees to endure more. The result is a quiet crisis of exhaustion disguised as dedication. Teams appear committed but are running on fumes. ... In most organizations, a small group carries the load when things get tough — the dependable few who always say yes to the most essential tasks. That pattern is unsustainable. Build redundancy into the system by cross-training roles, rotating responsibilities, and decentralizing authority. The goal isn’t to reduce pressure to zero; it’s to distribute it evenly enough so that no one person becomes the safety net for everyone else. ... Too many managers equate resilience with recovery, celebrating those who saved the day after the crisis is over. But true resilience shows up before the crisis hits. Observe your team to recognize the people who spot problems early, manage risks quietly, or improve workflows so that breakdowns don’t happen. Crisis prevention doesn’t create dramatic stories, but it builds the calm, predictable environment that allows innovation to thrive.


Facial Recognition’s Trust Problem

Surveillance by facial recognition is almost always in a public setting, so it’s one-to-many. There is a database and many cameras (usually a large number of cameras – an estimated one million in London and more than 30,000 in New York). These cameras capture images of people and compare them to the database of known images to identify individuals. The owner of the database may include watchlists comprising ‘people of interest’, so the ability to track persons of interest from one camera to another is included. But the process of capturing and using the images is almost always non-consensual. People don’t know when, where or how their facial image was first captured, and they don’t know where their data is going downstream or how it is used after initial capture. Nor are they usually aware of the facial recognition cameras that record their passage through the streets. ... Most people are wary of facial recognition systems. They are considered personally intrusive and privacy invasive. Capturing a facial image and using it for unknown purposes is not something that is automatically trusted. And yet it is not something that can be ignored – it’s part of modern life and will continue to be so. In the two primary purposes of facial recognition – access authentication and the surveillance of public spaces – the latter is the least acceptable. It is used for the purpose of public safety but is fundamentally insecure. What exists now can be, and has been, hijacked by criminals for their own purposes. 


The Urgent Leadership Playbook for AI Transformation

Banking executives talk enthusiastically about AI. They mention it frequently in investor presentations, allocate budgets to pilot programs, and establish innovation labs. Yet most institutions find themselves frozen between recognition of AI’s potential and the organizational will to pursue transformation aggressively. ... But waiting for perfect clarity guarantees competitive disadvantage. Even if only 5% of banks successfully embed AI across operations — and the number will certainly grow larger — these institutions will alter industry dynamics sufficiently to render non-adopters progressively irrelevant. Early movers establish data advantages, algorithmic sophistication, and operational efficiencies that create compounding benefits difficult for followers to overcome. ... The path from today’s tentative pilots to tomorrow’s AI-first institution follows a proven playbook developed by "future-built" companies in other sectors that successfully generate measurable value from AI at enterprise scale. ... Scaling AI requires reimagining organizational structures around technology-human collaboration based on three-layer guardrails: agent policy layers defining permissible actions, assurance layers providing controls and audit trails, and human responsibility layers assigning clear ownership for each autonomous domain.


Creative cybersecurity strategies for resource-constrained institutions

There’s a well-worn phrase that gets repeated whenever budgets are tight: “We have to do more with less.” I’ve never liked it because it suggests the team wasn’t already giving maximum effort. Instead, the goal should be to “use existing resources more effectively.” ... When you understand the users’ needs and learn how they want to work, you can recommend solutions that are both secure and practical. You don’t need to be an expert in every research technology. Start by paying attention to the services offered by cloud providers and vendors. They constantly study user pain points and design tools to address them. If you see a cloud service that makes it easier to collect, store, or share scientific data, investigate what makes it attractive. ... First, understand how your policies and controls affect the work. Security shouldn’t be developed in a vacuum. If you don’t understand the impact on researchers, developers, or operational teams, your controls may not be designed and implemented in a manner that helps enable the business. Second, provide solutions, don’t just say no. A security team that only rejects ideas will be thought of as a roadblock, and users will do their best to avoid engagement. A security team that helps people achieve their goals securely becomes one that is sought out, and ultimately ensures the business is more secure.


Architecting Intelligence: A Strategic Framework for LLM Fine-Tuning at Scale

As organizations race to harness the transformative power of Large Language Models, a critical gap has emerged between experimental implementations and production-ready AI systems. While prompt engineering offers a quick entry point, enterprises seeking competitive advantage must architect sophisticated fine-tuning pipelines that deliver consistent, domain-specific intelligence at scale. The landscape of LLM deployment presents three distinct approaches for fine-tuning, each with architectural implications. The answer lies in understanding the maturity curve of AI implementation. ... Fine-tuning represents the architectural apex of AI implementation. Rather than relying on prompts or external knowledge bases, fine-tuning modifies the AI model itself by continuing its training on domain-specific data. This embeds organizational knowledge, reasoning patterns, and domain expertise directly into the model’s parameters. Think of it this way: a general-purpose AI model is like a talented generalist who reads widely but lacks deep expertise. ... The decision involves evaluating several factors. Model scale matters because larger models generally offer better performance but demand more computational resources. An organization must balance the quality improvements of a 70-billion-parameter model against the infrastructure costs and latency implications. 
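
Where the excerpt describes fine-tuning conceptually, a common practical entry point is parameter-efficient fine-tuning with LoRA adapters, which sidesteps the cost of updating all of a large model's weights. A minimal sketch follows; the base model, dataset file and hyperparameters are placeholders, and trainer APIs vary somewhat across library versions.

```python
# Minimal sketch of parameter-efficient fine-tuning with LoRA adapters, one
# common way to embed domain data into a model without retraining all weights.
# Model name, dataset path and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-3.2-1B"                 # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model so only small adapter matrices are trained.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

dataset = load_dataset("json", data_files="domain_corpus.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("ft-out/adapter")          # adapters are small and portable
```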


How smart tech innovation is powering the next generation of the trucking industry

Real-time tracking has now become the backbone of digital trucking. These systems provide real-time updates on vehicle location, fuel consumption, driving behavior and engine performance. Fleets also make informed, data-driven decisions that directly influence operational efficiency. Furthermore, the IoT-enabled ‘one app’ solution monitors cargo temperature, location, and overall load conditions throughout the journey. ... Now, with AI-driven algorithms, fleet managers anticipate optimal routes by analyzing historical demand, weather patterns, and traffic. AI-powered intelligent route optimization applications allow fleets to optimize fuel usage and lower travel times. Additionally, with predictive maintenance capabilities, trucking companies are less concerned about vehicle failures because they can take a more proactive approach. AI tools spot anomalies in engine data and warn fleet owners before expensive vehicle failures occur, improving overall fleet operations. ... The trucking industry is transforming faster than ever before. Technologies are turning every vehicle into a connected network and digital asset. Fleets can forecast demand, optimize routes, preserve cargo quality, and ensure safety at every step. These smarter goals align seamlessly with cost-saving opportunities as logistics aggregators transition from manual, paper-heavy processes to the convenience of digital lockers.


Why every business needs to start digital twinning in 2026

Digital twins have begun to stand out because they’re not generic AI stand-ins; at their best they’re structured behavioural models grounded in real customer data. They offer a dependable way to keep insights active, consistent and available on demand. That is where their true strategic value lies. More granularly, the best performing digital twins are built on raw existing customer insights – interview transcripts, survey results, and behavioural data. But rather than just summarising the data, they create a representation of how a particular individual tends to think. Their role isn’t to imitate someone’s exact words, but to reflect underlying logic, preferences, motivations and blind spots. ... There’s no denying the fact that organisations have had a year of big promises and disappointing AI pilots, with the result that businesses are far more selective about what genuinely moves the needle. For years, digital twinning has been used to model complex systems in engineering, aerospace and manufacturing, where failure is expensive and iteration must happen before anything becomes real. With the rise of generative AI, the idea of a digital twin has expanded. After a year of rushed AI pilots and disappointing ROI, leaders are looking for approaches that actually fit how businesses work. Digital twinning does exactly that: it builds on familiar research practices, works inside existing workflows, and lets teams explore ideas safely before committing to them.


From compliance to confidence: Redefining digital transformation in regulated enterprises

Compliance is no longer the brake on digital transformation. It is the steering system that determines how fast and how far innovation can go. ... Technology rarely fails because of a lack of innovation. It fails when organizations lack the governance maturity to scale innovation responsibly. Too often, compliance is viewed as a bottleneck. It’s a scalability accelerator when embedded early. ... When governance and compliance converge, they unlock a feedback loop of trust. Consider a payer-provider network that unified its claims, care and compliance data into a single “truth layer.” Not only did this integration reduce audit exceptions by 45%, but it also improved member-satisfaction scores because interactions became transparent and consistent. ... No transformation from compliance to confidence happens without leadership alignment. The CIO sits at the intersection of technology, policy and culture and therefore carries the greatest influence over whether compliance is reactive or proactive. ... Technology maturity alone is not enough. The workforce must trust the system. When employees understand how AI or analytics systems make decisions, they become more confident using them. ... Confidence is not the absence of regulation; it’s mastery of it. A confident enterprise doesn’t fear audits because its systems are inherently explainable. 


AI agents are already causing disasters - and this hidden threat could derail your safe rollout

Although artificial intelligence agents are all the rage these days, the world of enterprise computing is experiencing disasters in the fledgling attempts to build and deploy the technology. Understanding why this happens and how to prevent it is going to involve lots of planning in what some are calling the zero-day deliberation. "You might have hundreds of AI agents running on a user's behalf, taking actions, and, inevitably, agents are going to make mistakes," said Anneka Gupta, chief product officer for data protection vendor Rubrik. ... Gupta talked about more than just a product pitch. Fixing well-intentioned disasters is not the biggest agent issue, she said. The big picture is that agentic AI is not moving forward as it should because of zero-day issues. "Agent Rewind is a day-two issue," said Gupta. "How do we solve for these zero-day issues to start getting people moving faster -- because they are getting stuck right now." ... According to Gupta, the true problem of agent deployment is all the work that begins with the chief information security officer, CISO, the chief information officer, CIO, and other senior management to figure out the scope of agents. AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the AI program to carry out a wider variety of actions. ... The real zero-day obstacle is how to understand what agents are supposed to be doing, and how to measure what success or failure would look like.

Daily Tech Digest - December 01, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart



Engineers for the future: championing innovation through people, purpose and progress

Across the industry, Artificial Intelligence (AI) and automation are transforming how we design, build and maintain devices, while sustainability targets are prompting businesses to rethink their operations. The challenge for engineers today is to balance technological advancement with environmental responsibility and people-centered progress. ... The industry faces an ageing workforce, so establishing new pathways into engineering has become increasingly important. Diversity, Equity & Inclusion (DE&I) initiatives play an essential role here, designed to attract more women and under-represented groups into the field. Building teams that reflect a broader mix of backgrounds and perspectives does more than close the skills gap: it drives creativity and strengthens the innovation needed to meet future challenges in areas such as AI and sustainability. Engineering has always been about solving problems, but today’s challenges, from digital transformation to decarbonization, demand an ‘innovation mindset’ that looks ahead and designs for lasting impact. ... The future of engineering will not be defined by one technological breakthrough. It will be shaped by lots of small, deliberate improvements – smarter maintenance, data-driven decisions, lower emissions, recyclability – that make systems more efficient and resilient. Progress will come from engineers who continue to refine how things work, linking technology, sustainability and human insight. 


Why data readiness defines GenAI success

Enterprises are at varying stages of maturity. Many do not yet have the strong data foundation required to support scaling AI, especially GenAI. Our Intelligent Data Management Cloud (IDMC) addresses this gap by enabling enterprises to prepare, activate, manage, and secure their data. It ensures that data is intelligent, contextual, trusted, compliant, and secure. Interestingly, organisations in regulated industries tend to be more prepared because they have historically invested heavily in data hygiene. But overall, readiness is a journey, and we support enterprises across all stages. ... The rapid adoption of agents and AI models has dramatically increased governance complexity. Many enterprises already manage tens of thousands of data tasks. In the AI era, this scales to tens of thousands of agents as well. The solution lies in a unified metadata-driven foundation. An enterprise catalog that understands entities, relationships, policies, and lineage becomes the single source of truth. This catalog does not require enterprises to consolidate immediately; it can operate across heterogeneous catalogs, but the more an enterprise consolidates, the more complexity shifts from people and processes into the catalog itself. Auto-cataloging is critical. Automatically detecting relationships, lineage, governance rules, compliance requirements, and quality constraints reduces manual overhead and ensures consistency. 
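
The catalog described above can be pictured as a small graph of entities, lineage edges and policy tags. Below is a minimal sketch, with hypothetical entity names and policies, showing how upstream impact analysis falls out of such a structure; it is not tied to IDMC or any other product.

```python
# Minimal sketch of a metadata catalog that records entities, their lineage
# and attached policies, with a traversal for upstream impact analysis.
# Entity names and policy tags are illustrative.
from dataclasses import dataclass, field

@dataclass
class CatalogEntity:
    name: str
    owner: str
    policies: set[str] = field(default_factory=set)   # e.g. {"PII", "GDPR"}
    upstream: set[str] = field(default_factory=set)    # lineage edges by name

class Catalog:
    def __init__(self):
        self.entities: dict[str, CatalogEntity] = {}

    def register(self, entity: CatalogEntity) -> None:
        self.entities[entity.name] = entity

    def lineage(self, name: str) -> set[str]:
        """All upstream sources feeding `name`, found by walking lineage edges."""
        seen, stack = set(), [name]
        while stack:
            current = self.entities.get(stack.pop())
            if current is None:
                continue
            for parent in current.upstream - seen:
                seen.add(parent)
                stack.append(parent)
        return seen

catalog = Catalog()
catalog.register(CatalogEntity("crm.customers", "sales-ops", {"PII"}))
catalog.register(CatalogEntity("dw.customer_dim", "data-eng", {"PII"}, {"crm.customers"}))
catalog.register(CatalogEntity("ml.churn_features", "ml-team", upstream={"dw.customer_dim"}))

print(catalog.lineage("ml.churn_features"))   # {'dw.customer_dim', 'crm.customers'}
```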


12 signs the CISO-CIO relationship is broken — and steps to fix it

“It’s critical that those in these two positions get along with each other, and that they’re not only collegial but collaborative,” he says. Yes, they each have their own domain and their own set of tasks and objectives, but the reality is that each one cannot get that work done without the other. “So they have to rely on one another, and they have to each recognize that they must rely on each other.” Moreover, it’s not just the CIO and CISO who suffer when they aren’t collegial and collaborative. Palmore and other experts say a poor CIO-CISO relationship also has a negative impact on their departments and the organization as a whole. “A strained CIO-CISO relationship often shows up as misalignment in goals, priorities, or even communication,” says Marnie Wilking, CSO at Booking.com. ... CIOs and CISOs both have incentives to improve a problematic relationship. As Lee explains, “The CIO-CISO relationship is critical. They both have to partner effectively to achieve the organization’s technology and cybersecurity goals. All tech comes with cybersecurity exposure that can impact the successful implementation of the tech and business outcomes; that’s why CIOs have to care about cybersecurity. And CISOs have to know that cybersecurity exists to achieve business outcomes. So they have to work together to achieve each other’s priorities.” CISOs can take steps to develop a better rapport with their CIOs, using the disruption happening today


Meeting AI-driven demand with flexible and scalable data centers

Analysts predict that by 2030, 80 percent of the AI workloads will be for inference rather than training, which led Aitkenhead to say that the size of the inference capacity expansion is “just phenomenal”. Additionally, neo cloud companies such as CoreWeave and G‑Core are now buying up large volumes of hyperscale‑grade capacity to serve AI workloads. To keep up with this changing landscape, IMDC is ensuring that it has access to large amounts of carbon-free power and that it has the flexible cooling infrastructure that can adapt to customers’ requirements as they change over time. ... The company is adopting a standard data center design that can accommodate both air‑based and water‑based cooling, giving customers the freedom to choose any mix of the two. The design is deliberately oversized (Aitkenhead said it can provide well over 100 percent of the cooling capacity initially needed) so it can handle rising rack densities. ... This expansion is financed entirely from Iron Mountain’s strong, cash‑generating businesses, which gives the data center arm the capital to invest aggressively while improving cost predictability and operational agility. With a revamped design construction process and a solid expansion strategy, IMDC is positioning itself to capture the surging demand for AI‑driven, high‑density workloads, ensuring it can meet the market’s steep upward curve and remain “exciting” and competitive in the years ahead.


AI Agents Lead The 8 Tech Trends Transforming Enterprise In 2026

Step aside chatbots; agents are the next stage in the evolution of enterprise AI, and 2026 will be their breakout year. ... Think of virtual co-workers, always-on assistants monitoring and adjusting processes in real-time, and end-to-end automated workflows requiring minimal human intervention. ... GenAI is moving rapidly from enterprise pilots to operational adoption, transforming knowledge workflows; generating code for software engineers, drafting contracts for legal teams, and creating schedules and action plans for project managers. ... Enterprise organizations are outgrowing generic cloud platforms and increasingly looking to adopt Industry Cloud Platforms (ICP), offering vertical solutions encompassing infrastructure, applications and data. ... This enterprise trend is driven by both the proliferation of smart, connected IoT devices and the behavioral shift to remote and hybrid working. The zero-trust edge (ZTE) concept refers to security functionality built into edge devices, from industrial machinery to smartphones, via cloud platforms, to ensure consistent administration of security functionality. ... Enterprises are responding by adopting green software engineering principles for carbon efficiency and adopting AI to monitor their activities. In 2026, the strategy is “green by design”, reflecting the integration of sustainability into enterprise DNA.


Preparing for the Quantum Future: Lessons from Singapore

While PQC holds promise, it faces challenges such as larger key sizes, the need for side-channel-resistant implementations, and limited adoption in standard protocols like Transport Layer Security (TLS) and Secure Shell (SSH). ... In contrast to PQC, QKD takes a different approach: instead of relying on mathematics, it uses the laws of quantum physics to generate and exchange encryption keys securely. If an attacker tries to intercept the key exchange, the quantum state changes, revealing the intrusion. The strength of this approach is that it is not based on mathematics and, therefore, cannot be broken because cracking it does not depend on an algorithm. QKD is specifically useful for strategic sites or large locations with important volumes of data transfers. ... Nation-scale strategies for quantum-safe networks are vital to prepare for Q-Day and ensure protection against quantum threats. To this end, Singapore has started a program called the National Quantum Safe Network (NQSN) to build a nationwide testbed and platform for quantum-safe technologies using a real-life fibre network. ... In a step towards securing future quantum threats, ST Engineering is also developing a Quantum-Safe Satellite Network for cross-border applications, supported by mobile and fixed Quantum Optical Ground Stations (Q-OGS). Space QKD will complement terrestrial QKD to form a global quantum-safe network. The last mile, which is typically copper cable, will rely on PQC for protection.


Superintelligence: Should we stop a race if we don’t actually know where the finish line is?

The term ‘superintelligence’ encapsulates the concerns raised. It refers to an AI system whose capabilities would surpass those of humans in almost every field: logical reasoning, creativity, strategic planning and even moral judgement. However, in reality, the situation is less clear-cut: no one actually knows what such an entity would be like, or how to measure it. Would it be an intelligence capable of self-improvement without supervision? An emerging consciousness? Or simply a system that performs even more efficiently than our current models? ... How can a pause be enforced globally when the world’s major powers have such divergent economic and geopolitical interests? The United States, China and the European Union are in fierce competition to dominate the strategic sector of artificial intelligence; slowing down unilaterally would risk losing a decisive advantage. However, for the signatories, the absence of international coordination is precisely what makes this pause essential.  ... Researchers themselves recognise the irony of the situation: they are concerned about a phenomenon that they cannot yet describe. Superintelligence is currently a theoretical concept, a kind of projection of our anxieties and ambitions. But it is precisely this uncertainty that warrants caution. If we do not know the exact nature of the finish line, should we really keep on racing forward without knowing what we are heading for?


Treating MCP like an API creates security blind spots

APIs generally don’t cause arbitrary, untrusted code to run in sensitive environments. MCP does though, which means you need a completely different security model. LLMs treat text as instructions; they follow whatever you feed them. MCP servers inject text directly into that execution context. ... Security professionals might also erroneously assume that they can trust all clients registering with their MCP servers; this is why the MCP spec is being updated. MCP builders will have to update their code to receive the additional client identification metadata, as dynamic client registration and OAuth alone are not always enough. Another misunderstood trust model is when MCP users confuse vendor reputation with architectural trustworthiness. ... Lastly, and most importantly, MCP is a protocol (not a product). And protocols don’t offer a built-in “trust guarantee.” Ultimately, the protocol only describes how servers and clients communicate through a unified language. ... Risks can also emerge from the names of tools within MCP servers. If tool names are too similar, the AI model can become confused and select the wrong tool. Malicious actors can exploit this in an attack vector known as Tool Impersonation or Tool Mimicry. The attacker simply adds a tool within their malicious server that tricks the AI into using it instead of a similarly named legitimate tool in another server you use. This can lead to data exfiltration, credential theft, data corruption, and other costly consequences.
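
One lightweight mitigation for tool mimicry is to flag near-duplicate tool names across servers before they ever reach the model. A minimal sketch follows, with made-up server and tool names and an arbitrary 0.85 similarity threshold.

```python
# Minimal sketch of a client-side check that flags near-duplicate tool names
# across registered MCP servers before exposing them to a model. Server names,
# tool names and the similarity threshold are illustrative.
from difflib import SequenceMatcher
from itertools import combinations

registered_tools = {
    "billing-server": ["create_invoice", "refund_payment"],
    "unknown-server": ["create_invoioce", "export_contacts"],   # suspicious near-clone
}

def similar(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def find_collisions(tools_by_server: dict[str, list[str]], threshold: float = 0.85):
    flattened = [(server, name) for server, names in tools_by_server.items() for name in names]
    collisions = []
    for (srv_a, name_a), (srv_b, name_b) in combinations(flattened, 2):
        if srv_a != srv_b and similar(name_a, name_b) >= threshold:
            collisions.append((f"{srv_a}:{name_a}", f"{srv_b}:{name_b}"))
    return collisions

for pair in find_collisions(registered_tools):
    print("Review before enabling:", pair)
```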


Ontology is the real guardrail: How to stop AI agents from misunderstanding your business

Building effective agentic solutions requires an ontology-based single source of truth. An ontology is a business definition of concepts, their hierarchy and their relationships. It defines terms with respect to business domains, helps establish a single source of truth for data, captures uniform field names and applies classifications to fields. An ontology may be domain-specific (healthcare or finance), or organization-specific based on internal structures. Defining an ontology upfront is time consuming, but can help standardize business processes and lay a strong foundation for agentic AI. ... Agents designed in this manner and tuned to follow an ontology can stick to guardrails and avoid hallucinations that can be caused by the large language models (LLM) powering them. For example, a business policy may state that unless all documents associated with a loan have their verified flags set to “true,” the loan status should be kept in the “pending” state. Agents can work within this policy to determine what documents are needed and query the knowledge base. ... With this method, we can avoid hallucinations by requiring agents to follow ontology-driven paths and maintain data classifications and relationships. Moreover, we can scale easily by adding new assets, relationships and policies that agents can automatically comply with, and control hallucinations by defining rules for the whole system rather than for individual entities.
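A minimal sketch of the loan example above, using hypothetical Loan and Document types, shows how an ontology-backed policy check keeps an agent from promoting a loan whose linked documents are not all verified.

```python
# Ontology-driven guardrail sketch: the agent may only move a loan out of
# "pending" when every document linked to it is verified.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    verified: bool = False

@dataclass
class Loan:
    loan_id: str
    status: str = "pending"
    documents: list[Document] = field(default_factory=list)

def apply_loan_policy(loan: Loan) -> str:
    """Enforce the policy: status stays 'pending' until all docs are verified."""
    if loan.documents and all(doc.verified for doc in loan.documents):
        loan.status = "ready-for-review"
    else:
        loan.status = "pending"  # guardrail: the agent cannot skip ahead
    return loan.status

loan = Loan("L-001", documents=[Document("income-proof", verified=True),
                                Document("id-card", verified=False)])
print(apply_loan_policy(loan))  # -> "pending" until id-card is verified
```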


The end of apps? Imagining software’s agentic future

Enterprise software vendors are scrambling to embed agents into existing applications. Oracle Corp. claims to have more than 600 embedded AI agents in its Fusion Cloud and Industry Applications. SAP says it has more than 40. ... This shift is not simply about embedding AI into existing products, as generative AI is supplanting conventional menus and dashboards. It’s a rethinking of software’s core functions. Many experts working on the agentic future say the way software is built, packaged and used is about to change profoundly. Instead of being a set of buttons and screens, software will become a collaborator that interprets goals, orchestrates processes, adapts in real time and anticipates what users need based on their behavior and implied preferences. ... The coming changes to enterprise software will go beyond the interface. AI will force monolithic software stacks to give way to modular, composable systems stitched together by agents using standards such as the Model Context Protocol, the Agent2Agent Protocol and the Agent Communication Protocol that IBM Corp. recently donated to the Linux Foundation. “By 2028, AI agent ecosystems will enable networks of specialized agents to dynamically collaborate across multiple applications, allowing users to achieve goals without interacting with each application individually,” Gartner recently predicted.

Daily Tech Digest - November 30, 2025


Quote for the day:

"The real leader has no need to lead - he is content to point the way." -- Henry Miller



Four important lessons about context engineering

Modern LLMs operate with context windows ranging from 8K to 200K+ tokens, with some models claiming even larger windows. However, several technical realities shape how we should think about context. ... Research has consistently shown that LLMs experience attention degradation in the middle portions of long contexts. Models perform best with information placed at the beginning or end of the context window. This isn’t a bug. It’s an artifact of how transformer architectures process sequences. ... Context length impacts latency and cost quadratically in many architectures. A 100K token context doesn’t cost 10x a 10K context; it can cost 100x in compute terms, even if providers don’t pass all costs to users. ... The most important insight: more context isn’t better context. In production systems, we’ve seen dramatic improvements by reducing context size and increasing relevance. ... LLMs respond better to structured context than unstructured dumps. XML tags, markdown headers, and clear delimiters help models parse and attend to the right information. ... Organize context by importance and relevance, not chronologically or alphabetically. Place critical information early and late in the context window. ... Each LLM call is stateless. This isn’t a limitation to overcome, but an architectural choice to embrace. Rather than trying to maintain massive conversation histories, implement smart context management
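The ordering and budgeting advice translates into a few lines of code. The sketch below, with made-up relevance scores and a character budget standing in for a token budget, caps context size, wraps each snippet in clear tags, and places the most relevant items at the edges of the window, where attention is strongest.

```python
# Context assembly sketch: rank snippets, keep only what fits the budget,
# then place high-relevance items at the start and end of the prompt.
def build_context(snippets: list[tuple[float, str]],
                  max_chars: int = 4000) -> str:
    """snippets: (relevance_score, text) pairs; higher score = more relevant."""
    ranked = sorted(snippets, key=lambda s: s[0], reverse=True)

    # Keep only what fits the budget -- more context isn't better context.
    kept, used = [], 0
    for _, text in ranked:
        if used + len(text) > max_chars:
            break
        kept.append(text)
        used += len(text)

    # Alternate placement so the most relevant items sit at the edges of
    # the window and the least relevant end up in the middle.
    front, back = [], []
    for i, text in enumerate(kept):
        (front if i % 2 == 0 else back).append(text)
    ordered = front + back[::-1]

    return "\n".join(f"<context item='{i}'>\n{t}\n</context>"
                     for i, t in enumerate(ordered))

print(build_context([(0.9, "Critical policy excerpt."),
                     (0.2, "Background note."),
                     (0.8, "Key user requirement.")]))
```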


What Fuels AI Code Risks and How DevSecOps Can Secure Pipelines

AI-generated code refers to code snippets or entire functions produced by Machine Learning models trained on vast datasets. While these models can enhance developer productivity by providing quick solutions, they often lack the nuanced understanding of security implications inherent in manual coding practices. ... Establishing secure pipelines is the backbone of any resilient development strategy. When code flows rapidly from development to production, every step becomes a potential entry point for vulnerabilities. Without careful controls, even well-intentioned automation can allow flawed or insecure code to slip through, creating risks that may only surface once the application is live. A secure pipeline ensures that every commit, every integration, and every deployment undergoes consistent security scrutiny, reducing the likelihood of breaches and protecting both organizational assets and user trust. Security in the pipeline begins at the earliest stages of development. By embedding continuous testing, teams can catch vulnerabilities before they propagate, surfacing issues that traditional post-development checks often miss. This proactive approach allows security to move in tandem with development rather than trailing behind it, ensuring that speed does not come at the expense of safety.
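As a small illustration of such a gate, the following sketch assumes the scanners bandit (static analysis) and pip-audit (dependency audit) are installed in the build environment; it runs both on every commit and exits non-zero so the pipeline can block the merge when either tool reports findings.

```python
# Pre-merge security gate sketch: run static analysis and a dependency
# audit, and fail the build step if either scanner reports issues.
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src/", "-q"],  # static analysis of first-party code
    ["pip-audit"],                    # check dependencies for known CVEs
]

def run_security_gate() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        # Both tools conventionally return a non-zero exit code on findings.
        if subprocess.run(cmd).returncode != 0:
            print(f"Security check failed: {cmd[0]}")
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(run_security_gate())  # non-zero exit blocks the merge
```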


The New Role of Enterprise Architecture in the AI Era

Traditional architecture assumes predictability: once the code has shipped, systems behave in a standard way. AI breaks that assumption completely, because machine learning models continuously change as data evolves, and model performance keeps fluctuating as every new dataset is added. ... Architecture isn’t just a phase in the AI era; rather, it’s a continuous cycle that operates across interconnected, well-defined stages. This process starts with discovery, where teams assess and identify AI opportunities that are directly linked to business objectives. Engage early with business leadership to define clear outcomes. Next comes design, where architects create modular blueprints for data pipelines and model deployment by reusing proven patterns. In the delivery phase, teams execute iteratively with governance built in from the outset. Ethics, compliance and observability should be baked into the workflows, not added later as afterthoughts. Finally, adaptation keeps the system learning. Models are monitored, retrained and optimized continuously, with feedback loops connecting system behavior back to business metrics and KPIs (key performance indicators). When architecture operates this way, it becomes a living ecosystem that learns, adapts and improves with every iteration.


Quenching Data Center Thirst for Power Now Is Solvable Problem

“Slowing data center growth or prohibiting grid connection is a short-sighted approach that embraces a scarcity mentality,” argued Wannie Park, CEO and founder of Pado AI, an energy management and AI orchestration company, in Malibu, Calif. “The explosive growth of AI and digital infrastructure is a massive engine for economic, scientific, and industrial progress,” he told TechNewsWorld. “The focus should not be on stifling this essential innovation, but on making data centers active, supportive participants in the energy ecosystem.” ... Planning for the full lifecycle of a data center’s power needs — from construction through long-term operations — is essential, he continued. This approach includes having solutions in place that can keep facilities operational during periods of limited grid availability, major weather events, or unexpected demand pressures, he said. ... The ITIF report also called for the United States to squeeze more power from the existing grid without negatively impacting customers, while also building new capacity. New technology can increase supply from existing transmission lines and generators, the report explained, which can bridge the transition to an expanded physical grid. On the demand side, it added, there is spare capacity, but not at peak times. It suggested that large users, such as data centers, be encouraged to shift their demand to off-peak periods, without harming their customers. Grids do some of that already, it noted, but much more is needed.


A Waste(d) Opportunity: How can the UK utilize data center waste heat?

Walking into the data hall, you are struck by the heat radiating from the numerous server racks, each capable of handling up to 20kW of compute. However, rather than allowing this heat to dissipate into the atmosphere, the team at QMUL had another plan. Instead, in partnership with Schneider Electric, the university deployed a novel heat reuse system. ... Large water cylinders across campus act like thermal batteries, storing hot water overnight when compute needs are constant but demand is low, then releasing it in the morning rush. As one project lead put it, there is “no mechanical rejection. All the heat we generate here is used. The gas boilers are off or dialed down - the computing heat takes over completely.” At full capacity, the data center could supply the equivalent of nearly 4 million ten-minute showers per year. ... Walking out, it’s easy to see why Queen Mary’s project is being held up as a model for others. In the UK, however, the project is something of an oddity, but through the lens of QMUL you can glimpse a future where compute is not only solving the mysteries of our universe but heating our morning showers. The question remains, though, why data center waste heat utilization projects in the UK are few and far between, and how the country can catch up to regions such as the Nordics, which have embedded waste heat utilization into the planning and construction of their data center sectors.


Redefining cyber-resilience for a new era

The biggest vulnerability is still the human factor, not the technology. Many companies invest in expensive tools but overlook the behaviour and mindset of their teams. In regions experiencing rapid digital growth, that gap becomes even more visible. Phishing, credential theft and shadow IT remain common ways attackers gain access. What’s needed is a shift in culture. Cybersecurity should be seen as a shared responsibility, embedded in daily routines, not as a one-time technical solution. True resilience begins with awareness, leadership and clarity at all levels of the organisation. ... Leaders play a crucial role in shaping that future. They need to understand that cybersecurity is not about fear, but about clarity and long-term thinking. It is part of strategic leadership. The leaders who make the biggest impact will be the ones who see cybersecurity as cultural, not just technical. They will prioritise transparency, invest in ethical and explainable technology, and build teams that carry these values forward. ... Artificial Intelligence is already transforming how we detect and respond to threats, but the more important shift is about ownership. Who controls the infrastructure, the models and the data? Centralised AI, controlled by a few major companies, creates dependence and limits transparency. It becomes harder to know what drives decisions, how data is used and where vulnerabilities might exist.


Building Your Geopolitical Firewall Before You Need One

In today’s world, where regulators are rolling out data sovereignty and localization initiatives that turn every cross-border workflow into a compliance nightmare, this is no theoretical exercise. Service disruption has shifted from possibility to inevitability, and geopolitical moves can shut down operations overnight. For storage engineers and data infrastructure leaders, the challenge goes beyond mere compliance – it’s about building genuine operational independence before circumstances force your hand. ... The reality is messier than any compliance framework suggests. Data sprawls everywhere, from edge, cloud and core to laptops and mobile devices. Building walls around everything does not offer true operational independence. Instead, it’s really about having the data infrastructure flexibility to move workloads when regulations shift, when geopolitical tensions escalate, or when a foreign government’s legislative reach suddenly extends into your data center. ... When evaluating sovereign solutions, storage engineers typically focus on SLAs and certifications. However, Oostveen argues that the critical question is simpler and more fundamental: who actually owns the solution or the service provider? “If you’re truly sovereign, my view is that you (the solution provider) are a company that is owned and operated exclusively within the borders of that particular jurisdiction,” he explains.


The 5 elements of a good cybersecurity risk assessment

Companies can use a cybersecurity risk assessment to evaluate how effective their security measures are. This provides a foundation for deciding which security measures are important and which are not, but also for deciding when a product or system is secure enough and additional measures would be excessive; in short, when they’ve done enough cybersecurity. However, not every risk assessment fulfills this promise. ... Too often, cybersecurity risk assessments take place solely in cyberspace, but this doesn’t allow meaningful prioritizing of requirements. “Server down” is annoying, but cyber systems never exist for their own sake. That’s why risk assessments need a connection to real processes, making clear which ones are mission critical for the organization and which perhaps are not. ... Without system understanding, there is no basis for attack modeling. Without attack modeling, there is no basis for identifying the most important requirements. It shouldn’t really be cybersecurity’s job to create system understanding. But since there is often a lack of documentation in IT, OT, or for cyber systems in general, cybersecurity is often left to provide it. And if cybersecurity is the first team to finally create an overview of all cyber systems, then it’s a result that is useful far beyond security risk assessment. ... Attack scenarios are a necessary stepping stone to move your thinking from systems and real-world impacts to meaningful security requirements — no more and no less.


Finding Strength in Code, Part 2: Lessons from Loss and the Power of Reflection

Every problem usually has more than one solution. The engineers who grow the fastest are the ones who can look at their own mistakes without ego, list what they’re good at and what they’re not, and then actually see multiple ways forward. Same with life. A loss (a pet, a breakup, whatever) is a bug that breaks your personal system. ... Solo debugging has limits. On sprawling systems, we rally the squad—frontend, backend, QA—to converge faster. Similarly, grief isn't meant for isolation. I've leaned on my network: a quick Slack thread with empathetic colleagues or a vulnerability share in my dev community. It distributes the load and uncovers blind spots you might miss on your own. ... Once a problem is solved, it is essential to communicate the solution and the lessons learned from it: some companies solve problems but never put in the effort to document the process in a way that prevents those problems from happening again. I know it is impossible to avoid problems, just as it is impossible not to make mistakes in our lives. The true inefficiency? Skipping the "why" and "how next time." ... Borrowed from incident response, it's a structured debrief that prevents recurrence without finger-pointing. In engineering, it ensures resilience; in life, it builds emotional antifragility. There are endless flavours of postmortems—simple Markdown outlines to full-blown docs—but the gold standard is "blameless," focusing on systems over scapegoats.


Cyber resilience is a business imperative: skills and strategy must evolve

Cyber upskilling must be built into daily work for both technical and non-technical employees. It’s not a one-off training exercise; it’s part of how people perform their roles confidently and securely. For technical teams, staying current on certifications and practicing hands-on defense is essential. Labs and sandboxes that simulate real-world attacks give them the experience needed to respond effectively when incidents happen. For everyone else, the focus should be on clarity and relevance. Employees need to understand exactly what’s expected of them and how their individual decisions contribute to the organization's resilience. Role-specific training makes this real: finance teams need to recognize invoice fraud attempts; HR should know how to handle sensitive data securely; customer service needs to spot social engineering in live interactions. ... Resilience should now sit alongside financial performance and sustainability as a core board KPI. That means directors receiving regular updates not only on threat trends and audit findings, but also on recovery readiness, incident transparency, and the cultural maturity of the organization's response. Re-engaging boards on this agenda isn’t about assigning blame—it’s about enabling smarter oversight. When leaders understand how resilience protects trust, continuity, and brand, cybersecurity stops being a technical issue and becomes what it truly is: a measure of business strength.