Daily Tech Digest - December 03, 2025


Quote for the day:

“The only true wisdom is knowing that you know nothing.” -- Socrates


How CISOs can prepare for the new era of short-lived TLS certificates

“Shorter certificate lifespans are a gift,” says Justin Shattuck, CSO at Resilience. “They push people toward better automation and certificate management practices, which will later be vital to post-quantum defense.” But this gift, intended to strengthen security, could turn into a curse if organizations are unprepared. Many still rely on manual tracking and renewal processes, using spreadsheets, calendar reminders, or system admins who “just know” when certificates are due to expire. ... “We’re investing in a living cryptographic inventory that doesn’t just track SSL/TLS certificates, but also keys, algorithms, identities, and their business, risk, and regulatory context within our organization and ties all of that to risk,” he says. “Every cert is tied to an owner, an expiration date, and a system dependency, and supported with continuous lifecycle-based communication with those owners. That inventory drives automated notifications, so no expiration sneaks up on us.” ... While automation is important as certificates expire more quickly, how it is implemented matters. Renewing a certificate a fixed number of days before expiration can become unreliable as lifespans change. The alternative is renewing based on a percentage of the certificate’s lifetime, and this method has an advantage: the timing adjusts automatically when the lifespan shortens. “Hard-coded renewal periods are likely to be too long at some point, whereas percentage renewal periods should be fine,” says Josh Aas.
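
To make the percentage-based renewal idea concrete, here is a minimal sketch of how a scheduler could derive the renewal point from a certificate's validity window instead of a hard-coded number of days. The two-thirds fraction mirrors common ACME client defaults but is an assumption here, not a quoted recommendation.

```python
from datetime import datetime, timedelta

def renewal_time(not_before: datetime, not_after: datetime,
                 lifetime_fraction: float = 2 / 3) -> datetime:
    """Renew after a fraction of the certificate's lifetime has elapsed,
    rather than a fixed number of days before expiry."""
    lifetime = not_after - not_before
    return not_before + lifetime * lifetime_fraction

issued = datetime(2025, 12, 3)
# A 90-day certificate renews around day 60; a 47-day certificate around day 31.
# The same logic keeps working as lifespans shrink, with no configuration change.
print(renewal_time(issued, issued + timedelta(days=90)))
print(renewal_time(issued, issued + timedelta(days=47)))
```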


How Enterprises Can Navigate Privacy With Clarity

There's an interesting pattern across organizations of all sizes. When we started discussing DPDPA compliance a year ago, companies fell into two buckets: those already building toward compliance and others saying they'd wait for the final rules. That "wait and see period" taught us a lot. It showed how most enterprises genuinely want to do the right thing, but they often don't know where to start. In practice, mature data protection starts with a simple question that most enterprises haven't asked themselves: What personal data do we have coming in? Which of it is truly personal data? What are we doing with it? ... The first is how enterprises understand personal data itself. I tell clients not to view personal data as a single item but as part of an interconnected web. Once one data point links to another, information that didn't seem personal becomes personal because it's stored together or can be easily connected. ... The second gap is organizational visibility. Some teams process personal data in ways others don't know about. When we speak with multiple teams, there's often a light bulb moment where everyone realizes that data processing is happening in places they never expected. The third gap is third-party management. Some teams may share data under basic commercial arrangements or collect it through processes that seem routine. An IT team might sign up for a new hosting service without realizing it will store customer personal data. 


How to succeed as an independent software developer

Income for freelance developers varies depending on factors such as location, experience, skills, and project type. Average pay for a contractor is about $111,800 annually, according to ZipRecruiter, with top earners making potentially more than $151,000. ... “One of the most important ways to succeed as an independent developer is to treat yourself like a business,” says Darian Shimy, CEO of FutureFund, a fundraising platform built for K-12 schools, and a software engineer by trade. “That means setting up an LLC or sole proprietorship, separating your personal and business finances, and using invoicing and tax tools that make it easier to stay compliant,” Shimy says. ... “It was a full-circle moment, recognition not just for coding expertise, but for shaping how developers learn emerging technologies,” Kapoor says. “Specialization builds identity. Once your expertise becomes synonymous with progress in a field, opportunities—whether projects, media, or publishing—start coming to you.” ... Freelancers in any field need to know how to communicate well, whether it’s through the written word or conversations with clients and colleagues. If a developer communicates poorly, even great talent might not make the difference in landing gigs. ... A portfolio of work tells the story of what you bring to the table. It’s the main way to showcase your software development skills and experience, and is a key tool in attracting clients and projects. 


AI in 5 years: Preparing for intelligent, automated cyber attacks

Cybercriminals are increasingly experimenting with autonomous AI-driven attacks, where machine agents independently plan, coordinate, and execute multi-stage campaigns. These AI systems share intelligence, adapt in real time to defensive measures, and collaborate across thousands of endpoints — functioning like self-learning botnets without human oversight. ... Recent “vibe hacking” cases showed how threat actors embedded social-engineering goals directly into AI configurations, allowing bots to negotiate, deceive, and persist autonomously. As AI voice cloning becomes indistinguishable from the real thing, verifying identity will shift from who is speaking to how behaviourally consistent their actions are, a fundamental change in digital trust models. ... Unlike traditional threats, machine-made attacks learn and adapt continuously. Every failed exploit becomes training data, creating a self-improving threat ecosystem that evolves faster than conventional defences. Check Point Research notes that AI-driven tools like the Hexstrike-AI framework, originally built for red-team testing, were weaponised within hours to exploit Citrix NetScaler zero-days. These attacks also operate with unprecedented precision. ... Make DevSecOps a standard part of your AI strategy. Automate security checks across your CI/CD pipeline to detect insecure code, exposed secrets, and misconfigurations before they reach production.
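
As one illustration of the kind of automated check recommended here, below is a minimal sketch of a pre-merge secret scan in Python. The patterns and file walk are simplified assumptions for illustration; a real pipeline would use a dedicated scanner, but the shape of the check is the same: scan every commit and fail the build on a hit.

```python
import re
import sys
from pathlib import Path

# Simplified patterns for a few common credential formats (illustrative only).
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hard-coded API key": re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{16,}['\"]"),
}

def scan(repo_root: str) -> int:
    """Walk the repository and count files that look like they contain secrets."""
    findings = 0
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix in {".png", ".jpg", ".zip"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name}")
                findings += 1
    return findings

if __name__ == "__main__":
    # Non-zero exit makes the CI job fail before insecure code reaches production.
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```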


Threat intelligence programs are broken, here is how to fix them

“An effective threat intelligence program is the cornerstone of a cybersecurity governance program. To put this in place, companies must implement controls to proactively detect emerging threats, as well as have an incident handling process that prioritizes incidents automatically based on feeds from different sources. This needs to be able to correlate a massive amount of data and provide automatic responses to enhance proactive actions,” says Carlos Portuguez ... Product teams, fraud teams, governance and compliance groups, and legal counsel often make decisions that introduce new risk. If they do not share those plans with threat intelligence leaders, PIRs become outdated. Security teams need lines of communication that help them track major business initiatives. If a company enters a new region, adopts a new cloud platform, or deploys an AI capability, the threat model shifts. PIRs should reflect that shift. ... Manual analysis cannot keep pace with the volume of stolen credentials, stealer logs, forum posts, and malware data circulating in criminal markets. Security engineering teams need automation to extract value from this material. ... Measuring threat intelligence remains a challenge for organizations. The report recommends linking metrics directly to PIRs. This prevents metrics that reward volume instead of impact. ... Threat intelligence should help guide enterprise risk decisions. It should influence control design, identity practices, incident response planning, and long term investment.
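
To make the automatic prioritization point concrete, here is a small sketch that scores raw feed items against priority intelligence requirements (PIRs). The PIR keywords and weights are illustrative assumptions, not part of the report; the idea is simply that scoring flows from the PIRs, so metrics reward relevance rather than volume.

```python
# Priority intelligence requirements expressed as weighted keyword sets (illustrative).
PIRS = {
    "credential theft against our cloud tenants": (["stealer", "credential", "oauth"], 3),
    "ransomware activity in our sector": (["ransomware", "affiliate", "leak site"], 2),
}

def score_item(item_text: str) -> int:
    """Score a raw feed item by how strongly it matches the current PIRs."""
    text = item_text.lower()
    return sum(weight for keywords, weight in PIRS.values()
               if any(keyword in text for keyword in keywords))

feed = [
    "New stealer log batch advertised with OAuth tokens for SaaS tenants",
    "Vendor publishes quarterly marketing recap",
]
for item in sorted(feed, key=score_item, reverse=True):
    print(score_item(item), item)
```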


Europe’s Digital Sovereignty Hinges on Smarter Regulation for Data Access

Europe must seek to better understand, and play into, the reality of market competition in the AI sector. Among the factors impacting AI innovation, access to computing power and data are widely recognized as most crucial. While some proposals have been made to address the former, such as making the continent’s supercomputers available to AI start-ups, little has been proposed with regard to addressing the data access challenge. ... By applying the requirement to AI developers independently of their provenance, the framework ensures EU competitiveness is not adversely impacted. On the contrary, the approach would enable EU-based AI companies to innovate with legal certainty, avoiding the cost and potential chilling effect of lengthy lawsuits compared to their US competitors. Additionally, by putting the onus on copyright owners to make their content accessible, the framework reduces the burden for AI companies to find (or digitize) training material, which affects small companies most. ... Beyond addressing a core challenge in the AI market, the example of the European Data Commons highlights how government action is not just a zero-sum game between fostering innovation and setting regulatory standards. By scrapping its digital regulation in the rush to boost the economy and gain digital sovereignty, the EU is surrendering its longtime ambition and ability to shape global technology in its image.


New training method boosts AI multimodal reasoning with smaller, smarter datasets

Recent advances in reinforcement learning with verifiable rewards (RLVR) have significantly improved the reasoning abilities of large language models (LLMs). RLVR trains LLMs to generate chain-of-thought (CoT) tokens (which mimic the reasoning processes humans use) before generating the final answer. This improves the model’s capability to solve complex reasoning tasks such as math and coding. Motivated by this success, researchers have applied similar RL-based methods to large multimodal models (LMMs), showing that the benefits can extend beyond text to improve visual understanding and problem-solving across different modalities. ... According to Zhang, the step-by-step process fundamentally changes the reliability of the model's outputs. "Traditional models often 'jump' directly to an answer, which means they explore only a narrow portion of the reasoning space," he said. "In contrast, a reasoning-first approach forces the model to explicitly examine multiple intermediate steps... [allowing it] to traverse much deeper paths and arrive at answers with far more internal consistency." ... The researchers also found that token efficiency is crucial. While allowing a model to generate longer reasoning steps can improve performance, excessive tokens reduce efficiency. Their results show that setting a smaller "reasoning budget" can achieve comparable or even better accuracy, an important consideration for deploying cost-effective enterprise applications.
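
A minimal sketch of the "verifiable reward" idea behind RLVR is shown below, assuming the model is prompted to wrap its chain of thought in <think> tags and its final answer in <answer> tags. The tag names and the token-budget penalty are illustrative assumptions, not the researchers' exact recipe, but they capture the two points above: only the checkable answer is rewarded, and overly long reasoning is discouraged.

```python
import re

def verifiable_reward(output: str, ground_truth: str, reasoning_budget: int = 512) -> float:
    """Reward only the verifiable final answer, lightly penalizing outputs
    whose reasoning runs past a fixed token budget."""
    answer = re.search(r"<answer>(.*?)</answer>", output, re.DOTALL)
    if answer is None:
        return 0.0  # malformed output: there is nothing to verify
    correct = answer.group(1).strip() == ground_truth.strip()
    reasoning = re.search(r"<think>(.*?)</think>", output, re.DOTALL)
    n_tokens = len(reasoning.group(1).split()) if reasoning else 0
    over_budget = max(0, n_tokens - reasoning_budget)
    return (1.0 if correct else 0.0) - 0.001 * over_budget

print(verifiable_reward("<think>2 + 2 equals 4</think><answer>4</answer>", "4"))  # -> 1.0
```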


Why Firms Can’t Ignore Agentic AI

The danger posed by agentic AI stems from its ability to carry out specific tasks with limited oversight. “When you give autonomy to a machine to operate within certain bounds, you need to be confident of two things: That it has been provided with excellent context so it knows how to make the right decisions – and that it is only completing the task asked of it, without using the information it’s been trusted with for any other purpose,” James Flint, AI practice lead at Securys, said. Mike Wilkes, enterprise CISO, Aikido Security, describes agentic AI as “giving a black box agent the ability to plan, act, and adapt on its own.” “In most companies that now means a new kind of digital insider risk with highly-privileged access to code, infrastructure, and data,” he warns. When employees start to use the technology without guardrails, shadow agentic AI introduces a number of risks. ... Adding to the risk, agentic AI is becoming easier to build and deploy. This will allow more employees to experiment with AI agents – often outside IT oversight, creating new governance and security challenges, says Mistry. Agentic AI can be coupled with the recently open-sourced Model Context Protocol (MCP), a protocol released by Anthropic that provides an open standard for orchestrating connections between AI assistants and data sources. By streamlining the work of development and security teams, this can “turbocharge productivity,” but it comes with caveats, says Pieter Danhieux, co-founder and CEO of Secure Code Warrior.


Why supply chains are the weakest link in today’s cyber defenses

One of the key reasons is that attackers want to make the best return on their efforts, and have learned that one of the easiest ways into a well-defended enterprise is through a partner. No thief would attempt to smash down the front door of a well-protected building if they could steal a key and slip in through the back. There’s also the advantage of scale: one company providing IT, HR, accounting or sales services to multiple customers may have fewer resources to protect itself, making it the natural point of attack. ... When the nature of cyber risks changes so quickly, yearly audits of suppliers can’t provide the most accurate evidence of their security posture. The result is an ecosystem built on trust, where compliance often becomes more of a comfort blanket. Meanwhile, attackers are taking advantage of the lag between each audit cycle, moving far faster than the verification processes designed to stop them. Unless verification evolves into a continuous process, we’ll keep trusting paperwork while breaches continue to spread through the supply chain. ... Technology alone won’t fix the supply chain problem, and a change in mindset is also needed. Too many boards are still distracted by the next big security trend, while overlooking the basics that actually reduce breaches. Breach prevention needs to be measured, reported and prioritized just like any other business KPI.


How AI Is Redefining Both Business Risk and Resilience Strategy

When implemented across prevention and response workflows, automation reduces human error, frees analysts’ time and preserves business continuity during high-pressure events. One applicable example includes automated data-restore sequences, which validate backup integrity before bringing systems online. Another example involves intelligent network rerouting that isolates subnets while preserving service. Organizations that deploy AI broadly across prevention and response report significantly lower breach costs. ... Biased AI models can produce skewed outputs that lead to poor decisions during a crisis. When a model is trained on limited or biased historical data, it can favor certain groups, locations or signals and then recommend actions that overlook real need. In practical terms, this can mean an automated triage system that routes emergency help away from underserved neighborhoods. ... Turn risk controls into operational patterns. Use staged deployments, automated rollback triggers and immutable model artifacts that map to code and data versions. Those practices reduce the likelihood that an unseen model change will result in a system outage. Next, pair AI systems with fallbacks for critical flows. This step ensures core services can continue if models fail. Monitoring should also be a consideration. It should display model metrics, such as drift and input distribution, alongside business measures, including latency and error rates.
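
A minimal sketch of the "automated rollback trigger" pattern follows; the thresholds and the stubbed monitoring and deployment calls are assumptions for illustration, standing in for whatever platform hooks an organization actually uses.

```python
ERROR_RATE_LIMIT = 0.05   # business metric: fraction of failed requests
DRIFT_LIMIT = 0.30        # model metric: e.g. a population-stability-style drift score

def get_live_metrics(version: str) -> dict:
    """Stand-in for a real monitoring query (hypothetical values)."""
    return {"error_rate": 0.08, "drift_score": 0.12}

def redeploy(version: str) -> None:
    """Stand-in for redeploying an immutable, previously validated artifact."""
    print(f"rolling back to {version}")

def guard(current: str, last_good: str) -> None:
    metrics = get_live_metrics(current)
    if metrics["error_rate"] > ERROR_RATE_LIMIT or metrics["drift_score"] > DRIFT_LIMIT:
        # Immutable artifacts mean the prior model, code, and data versions
        # can be restored exactly as they were approved.
        redeploy(last_good)

guard("model-v7", "model-v6")  # -> rolls back, because the error rate breaches its limit
```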

Daily Tech Digest - December 02, 2025


Quote for the day:

"I am not a product of my circumstances. I am a product of my decisions." -- Stephen Covey



The CISO’s paradox: Enabling innovation while managing risk

When security understands revenue goals, customer promises and regulatory exposure, guidance becomes specific and enabling. Begin by embedding a security liaison with each product squad so there is always a known face to engage in identity, data flows, logging and encryption decisions as they form. We don’t want engineers opening two-week tickets for a simple question. There should be open “office hours,” chat channels and quick calls so they can get immediate feedback on decisions like API design, encryption requirements and regional data moves. ... Show up at sprint planning and early design reviews to ask the questions that matter — authentication paths, least-privilege access, logging coverage and how changes will be monitored in production through SIEM and EDR. When security officers sit at the same table, the conversation changes from “Can we do this?” to “How do we do this securely?” and better outcomes follow from day one. ... When developers deploy code multiple times a day, a “final security review” before launch just wouldn’t work. This traditional, end-of-line gating model doesn’t just block innovation but also fails to catch real-world risks. To be effective, security must be embedded during development, not just inspected after. ... This discipline must further extend into production. Even with world-class DevSecOps, we know a zero-day or configuration drift can happen.


Resilience Means Fewer Recoveries, Not Faster Ones

Resilience has become one of the most overused words in management. Leaders praise teams for “pushing through” and “bouncing back,” as if the ability to absorb endless strain were proof of strength. But endurance and resilience are not the same. Endurance is about surviving pressure. Resilience is about designing systems so people don’t break under it. Many organizations don’t build resilience; they simply expect employees to endure more. The result is a quiet crisis of exhaustion disguised as dedication. Teams appear committed but are running on fumes. ... In most organizations, a small group carries the load when things get tough — the dependable few who always say yes to the most essential tasks. That pattern is unsustainable. Build redundancy into the system by cross-training roles, rotating responsibilities, and decentralizing authority. The goal isn’t to reduce pressure to zero; it’s to distribute it evenly enough so that no one person becomes the safety net for everyone else. ... Too many managers equate resilience with recovery, celebrating those who saved the day after the crisis is over. But true resilience shows up before the crisis hits. Observe your team to recognize the people who spot problems early, manage risks quietly, or improve workflows so that breakdowns don’t happen. Crisis prevention doesn’t create dramatic stories, but it builds the calm, predictable environment that allows innovation to thrive.


Facial Recognition’s Trust Problem

Surveillance by facial recognition is almost always in a public setting, so it’s one-to-many. There is a database and many cameras (usually a large number of cameras – an estimated one million in London and more than 30,000 in New York). These cameras capture images of people and compare them to the database of known images to identify individuals. The owner of the database may include watchlists comprising ‘people of interest’, so the ability to track persons of interest from one camera to another is included. But the process of capturing and using the images is almost always non-consensual. People don’t know when, where or how their facial image was first captured, and they don’t know where their data is going downstream or how it is used after initial capture. Nor are they usually aware of the facial recognition cameras that record their passage through the streets. ... Most people are wary of facial recognition systems. They are considered personally intrusive and privacy invasive. Capturing a facial image and using it for unknown purposes is not something that is automatically trusted. And yet it is not something that can be ignored – it’s part of modern life and will continue to be so. In the two primary purposes of facial recognition – access authentication and the surveillance of public spaces – the latter is the least acceptable. It is used for the purpose of public safety but is fundamentally insecure. What exists now can be, and has been, hijacked by criminals for their own purposes. 


The Urgent Leadership Playbook for AI Transformation

Banking executives talk enthusiastically about AI. They mention it frequently in investor presentations, allocate budgets to pilot programs, and establish innovation labs. Yet most institutions find themselves frozen between recognition of AI’s potential and the organizational will to pursue transformation aggressively. ... But waiting for perfect clarity guarantees competitive disadvantage. Even if only 5% of banks successfully embed AI across operations — and the number will certainly grow larger — these institutions will alter industry dynamics sufficiently to render non-adopters progressively irrelevant. Early movers establish data advantages, algorithmic sophistication, and operational efficiencies that create compounding benefits difficult for followers to overcome. ... The path from today’s tentative pilots to tomorrow’s AI-first institution follows a proven playbook developed by "future-built" companies in other sectors that successfully generate measurable value from AI at enterprise scale. ... Scaling AI requires reimagining organizational structures around technology-human collaboration based on three-layer guardrails: agent policy layers defining permissible actions, assurance layers providing controls and audit trails, and human responsibility layers assigning clear ownership for each autonomous domain.


Creative cybersecurity strategies for resource-constrained institutions

There’s a well-worn phrase that gets repeated whenever budgets are tight: “We have to do more with less.” I’ve never liked it because it suggests the team wasn’t already giving maximum effort. Instead, the goal should be to “use existing resources more effectively.” ... When you understand the users’ needs and learn how they want to work, you can recommend solutions that are both secure and practical. You don’t need to be an expert in every research technology. Start by paying attention to the services offered by cloud providers and vendors. They constantly study user pain points and design tools to address them. If you see a cloud service that makes it easier to collect, store, or share scientific data, investigate what makes it attractive. ... First, understand how your policies and controls affect the work. Security shouldn’t be developed in a vacuum. If you don’t understand the impact on researchers, developers, or operational teams, your controls may not be designed and implemented in a manner that helps enable the business. Second, provide solutions, don’t just say no. A security team that only rejects ideas will be thought of as a roadblock, and users will do their best to avoid engagement. A security team that helps people achieve their goals securely becomes one that is sought out, and ultimately ensures the business is more secure.


Architecting Intelligence: A Strategic Framework for LLM Fine-Tuning at Scale

As organizations race to harness the transformative power of Large Language Models, a critical gap has emerged between experimental implementations and production-ready AI systems. While prompt engineering offers a quick entry point, enterprises seeking competitive advantage must architect sophisticated fine-tuning pipelines that deliver consistent, domain-specific intelligence at scale. The landscape of LLM deployment presents three distinct approaches, each with architectural implications. The answer lies in understanding the maturity curve of AI implementation. ... Fine-tuning represents the architectural apex of AI implementation. Rather than relying on prompts or external knowledge bases, fine-tuning modifies the AI model itself by continuing its training on domain-specific data. This embeds organizational knowledge, reasoning patterns, and domain expertise directly into the model’s parameters. Think of it this way: a general-purpose AI model is like a talented generalist who reads widely but lacks deep expertise. ... The decision involves evaluating several factors. Model scale matters because larger models generally offer better performance but demand more computational resources. An organization must balance the quality improvements of a 70-billion-parameter model against the infrastructure costs and latency implications.
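
For readers who want to see what "continuing training on domain-specific data" looks like in code, here is a hedged sketch using the Hugging Face transformers and peft libraries. The base model, LoRA rank, and target modules are illustrative assumptions, and parameter-efficient adapters are only one way to fine-tune; full fine-tuning of a 70-billion-parameter model follows the same outline at far greater cost.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-125m"   # small stand-in; a real deployment would pick a domain-appropriate base
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Parameter-efficient fine-tuning: train small adapter matrices instead of every
# weight, so domain knowledge is embedded at a fraction of the compute cost.
config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which attention projections receive adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
# ...then run a standard supervised training loop over domain-specific examples.
```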


How smart tech innovation is powering the next generation of the trucking industry

Real-time tracking has now become the backbone of digital trucking. These systems provide real-time updates on vehicle location, fuel consumption, driving behavior and engine performance. Fleets make data-informed decisions that directly influence operational efficiency. Furthermore, IoT-enabled ‘one app’ solutions monitor cargo temperature, location, and overall load conditions throughout the journey. ... Now, with AI-driven algorithms, fleet managers anticipate the most efficient routes by analyzing historical demand, weather patterns, and traffic. AI-powered intelligent route optimization applications allow fleets to optimize fuel usage and lower travel times. Additionally, with predictive maintenance capabilities, trucking companies are less concerned about vehicle failures because they can take a more proactive approach. AI tools spot anomalies in engine data and warn fleet owners before an expensive failure occurs, improving overall fleet operations. ... The trucking industry is transforming faster than ever before. Technologies are turning every vehicle into a connected, digital asset. Fleets can forecast demand, optimize routes, preserve cargo quality, and ensure safety at every step. These smarter operations align with cost-saving opportunities as logistics aggregators move from manual, paperwork-heavy processes to the convenience of the digital locker.
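
The anomaly-spotting idea behind predictive maintenance can be sketched with a simple rolling z-score over engine sensor readings; real fleet systems use far richer models, and the window and threshold below are illustrative assumptions.

```python
import statistics

def flag_anomalies(readings: list[float], window: int = 20, z_limit: float = 3.0) -> list[int]:
    """Return indices where a reading deviates sharply from its recent history."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean, stdev = statistics.mean(recent), statistics.pstdev(recent)
        if stdev and abs(readings[i] - mean) / stdev > z_limit:
            flagged.append(i)  # candidate for a maintenance alert before failure
    return flagged

# Example: a sudden coolant-temperature spike stands out from normal variation.
temps = [88 + (i % 3) * 0.4 for i in range(60)] + [104.0]
print(flag_anomalies(temps))  # -> [60]
```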


Why every business needs to start digital twinning in 2026

Digital twins have begun to stand out because they’re not generic AI stand-ins; at their best they’re structured behavioural models grounded in real customer data. They offer a dependable way to keep insights active, consistent and available on demand. That is where their true strategic value lies. More granularly, the best performing digital twins are built on raw existing customer insights – interview transcripts, survey results, and behavioural data. But rather than just summarising the data, they create a representation of how a particular individual tends to think. Their role isn’t to imitate someone’s exact words, but to reflect underlying logic, preferences, motivations and blind spots. ... There’s no denying the fact that organisations have had a year of big promises and disappointing AI pilots, with the result that businesses are far more selective about what genuinely moves the needle. For years, digital twinning has been used to model complex systems in engineering, aerospace and manufacturing, where failure is expensive and iteration must happen before anything becomes real. With the rise of generative AI, the idea of a digital twin has expanded. After a year of rushed AI pilots and disappointing ROI, leaders are looking for approaches that actually fit how businesses work. Digital twinning does exactly that: it builds on familiar research practices, works inside existing workflows, and lets teams explore ideas safely before committing to them.


From compliance to confidence: Redefining digital transformation in regulated enterprises

Compliance is no longer the brake on digital transformation. It is the steering system that determines how fast and how far innovation can go. ... Technology rarely fails because of a lack of innovation. It fails when organizations lack the governance maturity to scale innovation responsibly. Too often, compliance is viewed as a bottleneck. It’s a scalability accelerator when embedded early. ... When governance and compliance converge, they unlock a feedback loop of trust. Consider a payer-provider network that unified its claims, care and compliance data into a single “truth layer.” Not only did this integration reduce audit exceptions by 45%, but it also improved member-satisfaction scores because interactions became transparent and consistent. ... No transformation from compliance to confidence happens without leadership alignment. The CIO sits at the intersection of technology, policy and culture and therefore carries the greatest influence over whether compliance is reactive or proactive. ... Technology maturity alone is not enough. The workforce must trust the system. When employees understand how AI or analytics systems make decisions, they become more confident using them. ... Confidence is not the absence of regulation; it’s mastery of it. A confident enterprise doesn’t fear audits because its systems are inherently explainable. 


AI agents are already causing disasters - and this hidden threat could derail your safe rollout

Although artificial intelligence agents are all the rage these days, the world of enterprise computing is experiencing disasters in the fledgling attempts to build and deploy the technology. Understanding why this happens and how to prevent it is going to involve lots of planning in what some are calling the zero-day deliberation. "You might have hundreds of AI agents running on a user's behalf, taking actions, and, inevitably, agents are going to make mistakes," said Anneka Gupta, chief product officer for data protection vendor Rubrik. ... Gupta talked about more than just a product pitch. Fixing well-intentioned disasters is not the biggest agent issue, she said. The big picture is that agentic AI is not moving forward as it should because of zero-day issues. "Agent Rewind is a day-two issue," said Gupta. "How do we solve for these zero-day issues to start getting people moving faster -- because they are getting stuck right now." ... According to Gupta, the true problem of agent deployment is all the work that begins with the chief information security officer, CISO, the chief information officer, CIO, and other senior management to figure out the scope of agents. AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the AI program to carry out a wider variety of actions. ... The real zero-day obstacle is how to understand what agents are supposed to be doing, and how to measure what success or failure would look like.

Daily Tech Digest - December 01, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart



Engineers for the future: championing innovation through people, purpose and progress

Across the industry, Artificial Intelligence (AI) and automation are transforming how we design, build and maintain devices, while sustainability targets are prompting businesses to rethink their operations. The challenge for engineers today is to balance technological advancement with environmental responsibility and people-centered progress. ... The industry faces an ageing workforce, so establishing new pathways into engineering has become increasingly important. Diversity, Equity & Inclusion (DE&I) initiatives play an essential role here, designed to attract more women and under-represented groups into the field. Building teams that reflect a broader mix of backgrounds and perspectives does more than close the skills gap: it drives creativity and strengthens the innovation needed to meet future challenges in areas such as AI and sustainability. Engineering has always been about solving problems, but today’s challenges, from digital transformation to decarbonization, demand an ‘innovation mindset’ that looks ahead and designs for lasting impact. ... The future of engineering will not be defined by one technological breakthrough. It will be shaped by lots of small, deliberate improvements – smarter maintenance, data-driven decisions, lower emissions, recyclability – that make systems more efficient and resilient. Progress will come from engineers who continue to refine how things work, linking technology, sustainability and human insight. 


Why data readiness defines GenAI success

Enterprises are at varying stages of maturity. Many do not yet have the strong data foundation required to support scaling AI, especially GenAI. Our Intelligent Data Management Cloud (IDMC) addresses this gap by enabling enterprises to prepare, activate, manage, and secure their data. It ensures that data is intelligent, contextual, trusted, compliant, and secure. Interestingly, organisations in regulated industries tend to be more prepared because they have historically invested heavily in data hygiene. But overall, readiness is a journey, and we support enterprises across all stages. ... The rapid adoption of agents and AI models has dramatically increased governance complexity. Many enterprises already manage tens of thousands of data tasks. In the AI era, this scales to tens of thousands of agents as well. The solution lies in a unified metadata-driven foundation. An enterprise catalog that understands entities, relationships, policies, and lineage becomes the single source of truth. This catalog does not require enterprises to consolidate immediately; it can operate across heterogeneous catalogs, but the more an enterprise consolidates, the more complexity shifts from people and processes into the catalog itself. Auto-cataloging is critical. Automatically detecting relationships, lineage, governance rules, compliance requirements, and quality constraints reduces manual overhead and ensures consistency. 
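
As a rough illustration of what a metadata-driven foundation captures, the sketch below models catalog entries with owners, classifications, lineage, and policies that propagate downstream automatically. The entry names and policy labels are assumptions for illustration, not features of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One metadata-catalog record per dataset, model, or agent."""
    name: str
    owner: str
    classification: str                                   # e.g. "PII", "internal", "public"
    upstream: list[str] = field(default_factory=list)     # lineage: where the data came from
    policies: list[str] = field(default_factory=list)     # governance rules attached here

catalog = {
    "claims_raw": CatalogEntry("claims_raw", "data-eng", "PII", policies=["mask-before-share"]),
    "claims_features": CatalogEntry("claims_features", "ml-platform", "PII",
                                    upstream=["claims_raw"]),
}

def inherited_policies(entry_name: str) -> set[str]:
    """Policies propagate along lineage, so downstream consumers inherit them automatically."""
    entry = catalog[entry_name]
    policies = set(entry.policies)
    for parent in entry.upstream:
        policies |= inherited_policies(parent)
    return policies

print(inherited_policies("claims_features"))  # -> {'mask-before-share'}
```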


12 signs the CISO-CIO relationship is broken — and steps to fix it

“It’s critical that those in these two positions get along with each other, and that they’re not only collegial but collaborative,” he says. Yes, they each have their own domain and their own set of tasks and objectives, but the reality is that each one cannot get that work done without the other. “So they have to rely on one another, and they have to each recognize that they must rely on each other.” Moreover, it’s not just the CIO and CISO who suffer when they aren’t collegial and collaborative. Palmore and other experts say a poor CIO-CISO relationship also has a negative impact on their departments and the organization as a whole. “A strained CIO-CISO relationship often shows up as misalignment in goals, priorities, or even communication,” says Marnie Wilking, CSO at Booking.com. ... CIOs and CISOs both have incentives to improve a problematic relationship. As Lee explains, “The CIO-CISO relationship is critical. They both have to partner effectively to achieve the organization’s technology and cybersecurity goals. All tech comes with cybersecurity exposure that can impact the successful implementation of the tech and business outcomes; that’s why CIOs have to care about cybersecurity. And CISOs have to know that cybersecurity exists to achieve business outcomes. So they have to work together to achieve each other’s priorities.” CISOs can take steps to develop a better rapport with their CIOs, using the disruption happening today


Meeting AI-driven demand with flexible and scalable data centers

Analysts predict that by 2030, 80 percent of the AI workloads will be for inference rather than training, which led Aitkenhead to say that the size of the inference capacity expansion is “just phenomenal”. Additionally, neo cloud companies such as CoreWeave and G‑Core are now buying up large volumes of hyperscale‑grade capacity to serve AI workloads. To keep up with this changing landscape, IMDC is ensuring that it has access to large amounts of carbon-free power and that it has the flexible cooling infrastructure that can adapt to customers’ requirements as they change over time. ... The company is adopting a standard data center design that can accommodate both air‑based and water‑based cooling, giving customers the freedom to choose any mix of the two. The design is deliberately oversized (Aitkenhead said it can provide well over 100 percent of the cooling capacity initially needed) so it can handle rising rack densities. ... This expansion is financed entirely from Iron Mountain’s strong, cash‑generating businesses, which gives the data center arm the capital to invest aggressively while improving cost predictability and operational agility. With a revamped design construction process and a solid expansion strategy, IMDC is positioning itself to capture the surging demand for AI‑driven, high‑density workloads, ensuring it can meet the market’s steep upward curve and remain “exciting” and competitive in the years ahead.


AI Agents Lead The 8 Tech Trends Transforming Enterprise In 2026

Step aside chatbots; agents are the next stage in the evolution of enterprise AI, and 2026 will be their breakout year. ... Think of virtual co-workers, always-on assistants monitoring and adjusting processes in real-time, and end-to-end automated workflows requiring minimal human intervention. ... GenAI is moving rapidly from enterprise pilots to operational adoption, transforming knowledge workflows; generating code for software engineers, drafting contracts for legal teams, and creating schedules and action plans for project managers. ... Enterprise organizations are outgrowing generic cloud platforms and increasingly looking to adopt Industry Cloud Platforms (ICP), offering vertical solutions encompassing infrastructure, applications and data. ... This enterprise trend is driven by both the proliferation of smart, connected IoT devices and the behavioral shift to remote and hybrid working. The zero-trust edge (ZTE) concept refers to security functionality built into edge devices, from industrial machinery to smartphones, via cloud platforms, to ensure consistent administration of security functionality. ... Enterprises are responding by adopting green software engineering principles for carbon efficiency and adopting AI to monitor their activities. In 2026, the strategy is “green by design”, reflecting the integration of sustainability into enterprise DNA.


Preparing for the Quantum Future: Lessons from Singapore

While PQC holds promise, it faces challenges such as larger key sizes, the need for side-channel-resistant implementations, and limited adoption in standard protocols like Transport Layer Security (TLS) and Secure Shell (SSH). ... In contrast to PQC, QKD takes a different approach: instead of relying on mathematics, it uses the laws of quantum physics to generate and exchange encryption keys securely. If an attacker tries to intercept the key exchange, the quantum state changes, revealing the intrusion. The strength of this approach is that it is not based on mathematics and, therefore, cannot be broken because cracking it does not depend on an algorithm. QKD is specifically useful for strategic sites or large locations with important volumes of data transfers. ... Nation-scale strategies for quantum-safe networks are vital to prepare for Q-Day and ensure protection against quantum threats. To this end, Singapore has started a program called the National Quantum Safe Network (NQSN) to build a nationwide testbed and platform for quantum-safe technologies using a real-life fibre network. ... In a step towards securing future quantum threats, ST Engineering is also developing a Quantum-Safe Satellite Network for cross-border applications, supported by mobile and fixed Quantum Optical Ground Stations (Q-OGS). Space QKD will complement terrestrial QKD to form a global quantum-safe network. The last mile, which is typically copper cable, will rely on PQC for protection.


Superintelligence: Should we stop a race if we don’t actually know where the finish line is?

The term ‘superintelligence’ encapsulates the concerns raised. It refers to an AI system whose capabilities would surpass those of humans in almost every field: logical reasoning, creativity, strategic planning and even moral judgement. However, in reality, the situation is less clear-cut: no one actually knows what such an entity would be like, or how to measure it. Would it be an intelligence capable of self-improvement without supervision? An emerging consciousness? Or simply a system that performs even more efficiently than our current models? ... How can a pause be enforced globally when the world’s major powers have such divergent economic and geopolitical interests? The United States, China and the European Union are in fierce competition to dominate the strategic sector of artificial intelligence; slowing down unilaterally would risk losing a decisive advantage. However, for the signatories, the absence of international coordination is precisely what makes this pause essential.  ... Researchers themselves recognise the irony of the situation: they are concerned about a phenomenon that they cannot yet describe. Superintelligence is currently a theoretical concept, a kind of projection of our anxieties and ambitions. But it is precisely this uncertainty that warrants caution. If we do not know the exact nature of the finish line, should we really keep on racing forward without knowing what we are heading for?


Treating MCP like an API creates security blind spots

APIs generally don’t cause arbitrary, untrusted code to run in sensitive environments. MCP does though, which means you need a completely different security model. LLMs treat text as instructions; they follow whatever you feed them. MCP servers inject text into that execution context. ... Security professionals might also erroneously assume that they can trust all clients registering with their MCP servers, which is why the MCP spec is being updated. MCP builders will have to update their code to receive the additional client identification metadata, as dynamic client registration and OAuth alone are not always enough. Another misunderstood trust model arises when MCP users confuse vendor reputation with architectural trustworthiness. ... Lastly, and most importantly, MCP is a protocol (not a product). And protocols don’t offer a built-in “trust guarantee.” Ultimately, the protocol only describes how servers and clients communicate through a unified language. ... Risks can also emerge from the names of tools within MCP servers. If tool names are too similar, the AI model can become confused and select the wrong tool. Malicious actors can exploit this in an attack vector known as Tool Impersonation or Tool Mimicry. The attacker simply adds a tool within their malicious server that tricks the AI into using it instead of a similarly named legitimate tool in another server you use. This can lead to data exfiltration, credential theft, data corruption, and other costly consequences.
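
To make the tool-impersonation risk concrete, here is a small sketch of a client-side check that flags near-duplicate tool names across registered MCP servers before they are exposed to the model. The server names, tool names, and similarity threshold are illustrative assumptions; real defenses also need provenance and registration checks, as described above.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Tool names gathered from every MCP server this client has registered (example data).
registered = {
    "internal-files": ["read_document", "search_documents"],
    "unknown-server": ["read_documents"],  # suspiciously close to a trusted tool name
}

def near_duplicates(threshold: float = 0.85):
    """Yield pairs of tools from different servers whose names are confusably similar."""
    tools = [(server, name) for server, names in registered.items() for name in names]
    for (s1, n1), (s2, n2) in combinations(tools, 2):
        if s1 != s2 and SequenceMatcher(None, n1, n2).ratio() >= threshold:
            yield (s1, n1), (s2, n2)

for a, b in near_duplicates():
    print("possible tool impersonation:", a, "vs", b)
```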


Ontology is the real guardrail: How to stop AI agents from misunderstanding your business

Building effective agentic solutions requires an ontology-based single source of truth. Ontology is a business definition of concepts, their hierarchy and relationships. It defines terms with respect to business domains, helps establish a single source of truth for data, captures uniform field names and applies classifications to fields. An ontology may be domain-specific (healthcare or finance), or organization-specific based on internal structures. Defining an ontology upfront is time consuming, but can help standardize business processes and lay a strong foundation for agentic AI. ... Agents designed in this manner and tuned to follow an ontology can stick to guardrails and avoid hallucinations that can be caused by the large language models (LLMs) powering them. For example, a business policy may define that unless all documents associated with a loan have their verified flags set to “true,” the loan status should be kept in a “pending” state. Agents can work from this policy to determine which documents are needed and query the knowledge base. ... With this method, we can avoid hallucinations by requiring agents to follow ontology-driven paths and maintain data classifications and relationships. Moreover, we can scale easily by adding new assets, relationships and policies that agents can automatically comply with, and control hallucinations by defining rules for the whole system rather than individual entities.
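
As a concrete illustration of the loan-document policy mentioned above, here is a small sketch of an ontology-style rule that constrains what an agent may do with a loan's status; the field names and statuses are assumptions for illustration.

```python
# Ontology fragment: a Loan is related to Documents, and a business rule
# constrains the loan's status based on the documents' verified flags.
loan = {
    "id": "L-1042",
    "status": "pending",
    "documents": [
        {"type": "income_proof", "verified": True},
        {"type": "identity_proof", "verified": False},
    ],
}

def allowed_status(loan: dict) -> str:
    """The loan may only leave 'pending' once every associated document is verified."""
    if all(doc["verified"] for doc in loan["documents"]):
        return "ready_for_approval"
    return "pending"

# An agent asked to "approve this loan" is constrained by the rule, not by the LLM's guess.
loan["status"] = allowed_status(loan)
print(loan["status"])  # -> pending, because identity_proof is not yet verified
```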


The end of apps? Imagining software’s agentic future

Enterprise software vendors are scrambling to embed agents into existing applications. Oracle Corp. claims to have more than 600 embedded AI agents in its Fusion Cloud and Industry Applications. SAP says it has more than 40. ... This shift is not simply about embedding AI into existing products, as generative AI is supplanting conventional menus and dashboards. It’s a rethinking of software’s core functions. Many experts working on the agentic future say the way software is built, packaged and used is about to change profoundly. Instead of being a set of buttons and screens, software will become a collaborator that interprets goals, orchestrates processes, adapts in real time and anticipates what users need based on their behavior and implied preferences. ... The coming changes to enterprise software will go beyond the interface. AI will force monolithic software stacks to give way to modular, composable systems stitched together by agents using standards such as the Model Context Protocol, the Agent2Agent Protocol and the Agent Communication Protocol that IBM Corp. recently donated to the Linux Foundation. “By 2028, AI agent ecosystems will enable networks of specialized agents to dynamically collaborate across multiple applications, allowing users to achieve goals without interacting with each application individually,” Gartner recently predicted.

Daily Tech Digest - November 30, 2025


Quote for the day:

"The real leader has no need to lead - he is content to point the way." -- Henry Miller



Four important lessons about context engineering

Modern LLMs operate with context windows ranging from 8K to 200K+ tokens, with some models claiming even larger windows. However, several technical realities shape how we should think about context. ... Research has consistently shown that LLMs experience attention degradation in the middle portions of long contexts. Models perform best with information placed at the beginning or end of the context window. This isn’t a bug. It’s an artifact of how transformer architectures process sequences. ... Context length impacts latency and cost quadratically in many architectures. A 100K token context doesn’t cost 10x a 10K context, it can cost 100x in compute terms, even if providers don’t pass all costs to users. ... The most important insight: more context isn’t better context. In production systems, we’ve seen dramatic improvements by reducing context size and increasing relevance. ... LLMs respond better to structured context than unstructured dumps. XML tags, markdown headers, and clear delimiters help models parse and attend to the right information. ... Organize context by importance and relevance, not chronologically or alphabetically. Place critical information early and late in the context window. ... Each LLM call is stateless. This isn’t a limitation to overcome, but an architectural choice to embrace. Rather than trying to maintain massive conversation histories, implement smart context management
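
A minimal sketch of the placement and structure advice above follows: critical material goes at the edges of the window, everything is wrapped in explicit tags, and a relevance budget caps total size. The tag names and the character budget are illustrative assumptions.

```python
def build_context(task: str, documents: list[tuple[float, str]], budget_chars: int = 8000) -> str:
    """Assemble a structured prompt: instructions first, the most relevant evidence
    at the edges of the window, and clear delimiters throughout."""
    ranked = sorted(documents, key=lambda d: d[0], reverse=True)  # by relevance, not chronology
    kept, used = [], 0
    for _, text in ranked:
        if used + len(text) > budget_chars:
            break  # smaller, more relevant context beats a bigger dump
        kept.append(text)
        used += len(text)
    edges, middle = kept[:2], kept[2:]  # the two most relevant items go first and last
    parts = [f"<task>{task}</task>"]
    if edges:
        parts.append(f"<evidence>{edges[0]}</evidence>")
    parts.extend(f"<evidence>{t}</evidence>" for t in middle)
    if len(edges) > 1:
        parts.append(f"<evidence>{edges[1]}</evidence>")
    parts.append(f"<task_reminder>{task}</task_reminder>")
    return "\n".join(parts)

print(build_context("Summarize the incident", [(0.9, "Primary log excerpt"), (0.4, "Older ticket")]))
```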


What Fuels AI Code Risks and How DevSecOps Can Secure Pipelines

AI-generated code refers to code snippets or entire functions produced by machine learning models trained on vast datasets. While these models can enhance developer productivity by providing quick solutions, they often lack the nuanced understanding of security implications inherent in manual coding practices. ... Establishing secure pipelines is the backbone of any resilient development strategy. When code flows rapidly from development to production, every step becomes a potential entry point for vulnerabilities. Without careful controls, even well-intentioned automation can allow flawed or insecure code to slip through, creating risks that may only surface once the application is live. A secure pipeline ensures that every commit, every integration, and every deployment undergoes consistent security scrutiny, reducing the likelihood of breaches and protecting both organizational assets and user trust. Security in the pipeline begins at the earliest stages of development. By embedding continuous testing, teams can identify vulnerabilities before they propagate, catching issues that traditional post-development checks often miss. This proactive approach allows security to move in tandem with development rather than trailing behind it, ensuring that speed does not come at the expense of safety.
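
One way to make "consistent security scrutiny at every commit" concrete is a gate step that aggregates scanner findings and fails the pipeline above a severity threshold. The report format and severity labels below are illustrative assumptions, not a specific tool's output.

```python
import json
import sys

BLOCKING_SEVERITIES = {"critical", "high"}

def gate(report_path: str) -> int:
    """Fail the build if any blocking finding appears in the aggregated scan report."""
    with open(report_path) as report_file:
        findings = json.load(report_file)  # assumed format: [{"rule": ..., "severity": ...}, ...]
    blocking = [f for f in findings if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for finding in blocking:
        print(f"BLOCKED by {finding['rule']} ({finding['severity']})")
    return 1 if blocking else 0  # a non-zero exit code stops the deployment stage

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```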


The New Role of Enterprise Architecture in the AI Era

Traditional architecture assumes predictability in which once the code has shipped, systems behave in a standard way. On the contrary, AI breaks that assumption completely, given that the machine learning models continuously change as data evolves and model performance keeps fluctuating as every new dataset gets added. ... Architecture isn’t just a phase in the AI era; rather it’s a continuous cycle that must operate across various interconnected stages that follow well-defined phases. This process starts with discovery, where the teams assess and identify AI opportunities that are directly linked to the business objectives. Engage early with business leadership to define clear outcomes. Next comes design, where architects create modular blueprints for data pipelines and model deployment by reusing the proven patterns. In the delivery phase, teams execute iteratively with governance built in from the onset. Ethics, compliance and observability should be baked into the workflows, not added later as afterthoughts. Finally, adaptation keeps the system learning. Models are monitored, retrained and optimized continuously, with feedback loops connecting system behavior back to business metrics and KPIs (key performance indicators). When architecture operates this way, it becomes a living ecosystem that learns, adapts and improves with every iteration.


Quenching Data Center Thirst for Power Now Is Solvable Problem

“Slowing data center growth or prohibiting grid connection is a short-sighted approach that embraces a scarcity mentality,” argued Wannie Park, CEO and founder of Pado AI, an energy management and AI orchestration company, in Malibu, Calif. “The explosive growth of AI and digital infrastructure is a massive engine for economic, scientific, and industrial progress,” he told TechNewsWorld. “The focus should not be on stifling this essential innovation, but on making data centers active, supportive participants in the energy ecosystem.” ... Planning for the full lifecycle of a data center’s power needs — from construction through long-term operations — is essential, he continued. This approach includes having solutions in place that can keep facilities operational during periods of limited grid availability, major weather events, or unexpected demand pressures, he said. ... The ITIF report also called for the United States to squeeze more power from the existing grid without negatively impacting customers, while also building new capacity. New technology can increase supply from existing transmission lines and generators, the report explained, which can bridge the transition to an expanded physical grid. On the demand side, it added, there is spare capacity, but not at peak times. It suggested that large users, such as data centers, be encouraged to shift their demand to off-peak periods, without damaging their customers. Grids do some of that already, it noted, but much more is needed.


A Waste(d) Opportunity: How can the UK utilize data center waste heat?

Walking into the data hall, you are struck by the heat resonating from the numerous server racks, each capable of handling up to 20kW of compute. However, rather than allowing this heat to dissipate into the atmosphere, the team at QMUL had another plan. Instead, in partnership with Schneider Electric, the university deployed a novel heat reuse system. ... Large water cylinders across campus act like thermal batteries, storing hot water overnight when compute needs are constant, but demand is low, then releasing it in the morning rush. As one project lead put it, there is “no mechanical rejection. All the heat we generate here is used. The gas boilers are off or dialed down - the computing heat takes over completely.” At full capacity, the data center could supply the equivalent of nearly 4 million ten-minute showers per year. ... Walking out, it’s easy to see why Queen Mary’s project is being held up as a model for others. In the UK, however, the project is somewhat of an oddity, but through the lens of QMUL you can see a glimpse of the future, where compute is not only solving the mysteries of our universe but heating our morning showers. The question remains, though, why data center waste heat utilization projects in the UK are few and far between, and how the country can catch up to regions such as the Nordics, which has embedded waste heat utilization into the planning and construction of its data center sector.


Redefining cyber-resilience for a new era

The biggest vulnerability is still the human factor, not the technology. Many companies invest in expensive tools but overlook the behaviour and mindset of their teams. In regions experiencing rapid digital growth, that gap becomes even more visible. Phishing, credential theft and shadow IT remain common ways attackers gain access. What’s needed is a shift in culture. Cybersecurity should be seen as a shared responsibility, embedded in daily routines, not as a one-time technical solution. True resilience begins with awareness, leadership and clarity at all levels of the organisation. ... Leaders play a crucial role in shaping that future. They need to understand that cybersecurity is not about fear, but about clarity and long-term thinking. It is part of strategic leadership. The leaders who make the biggest impact will be the ones who see cybersecurity as cultural, not just technical. They will prioritise transparency, invest in ethical and explainable technology, and build teams that carry these values forward. ... Artificial Intelligence is already transforming how we detect and respond to threats, but the more important shift is about ownership. Who controls the infrastructure, the models and the data? Centralised AI, controlled by a few major companies, creates dependence and limits transparency. It becomes harder to know what drives decisions, how data is used and where vulnerabilities might exist.


Building Your Geopolitical Firewall Before You Need One

In today’s world, where regulators are rolling out data sovereignty and localization initiatives that turn every cross-border workflow into a compliance nightmare, this is no theoretical exercise. Service disruption has shifted from possibility to inevitability, and geopolitical moves can shut down operations overnight. For storage engineers and data infrastructure leaders, the challenge goes beyond mere compliance – it’s about building genuine operational independence before circumstances force your hand. ... The reality is messier than any compliance framework suggests. Data sprawls everywhere, from edge, cloud and core to laptops and mobile devices. Building walls around everything does not offer true operational independence. Instead, it’s really about having the data infrastructure flexibility to move workloads when regulations shift, when geopolitical tensions escalate, or when a foreign government’s legislative reach suddenly extends into your data center. ... When evaluating sovereign solutions, storage engineers typically focus on SLAs and certifications. However, Oostveen argues that the critical question is simpler and more fundamental: who actually owns the solution or the service provider? “If you’re truly sovereign, my view is that you (the solution provider) are a company that is owned and operated exclusively within the borders of that particular jurisdiction,” he explains.


The 5 elements of a good cybersecurity risk assessment

Companies can use a cybersecurity risk assessment to evaluate how effective their security measures are. This provides a foundation for deciding which security measures are important — and which are not. But also for deciding when a product or system is secure enough and additional measures would be excessive. When they’ve done enough cybersecurity. However, not every risk assessment fulfills this promise. ... Too often, cybersecurity risk assessments take place solely in cyberspace — but this doesn’t allow meaningful prioritizing of requirements. “Server down” is annoying, but cyber systems never exist for their own sake. That’s why risk assessments need a connection to real processes that are mission critical for the organization — or perhaps not. ... Without system understanding, there is no basis for attack modeling. Without attack modeling, there is no basis for identifying the most important requirements. It shouldn’t really be cybersecurity’s job to create system understanding. But since there is often a lack of documentation in IT, OT, or for cyber systems in general, cybersecurity is often left to provide it. And if cybersecurity is the first team to finally create an overview of all cyber systems, then it’s a result that is useful far beyond security risk assessment. ... Attack scenarios are a necessary stepping stone to move your thinking from systems and real-world impacts to meaningful security requirements — no more and no less. 
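
A small sketch of how attack scenarios can be tied back to real-world impact for prioritization is shown below; the scenarios, scales, and scores are illustrative assumptions, and the point is only that requirements get ranked by business consequence rather than by system noise.

```python
# Each attack scenario links a cyber system to a mission-critical process, so
# requirements can be prioritized by real-world impact instead of "server down" alone.
scenarios = [
    {"name": "Ransomware on the ERP server", "likelihood": 3, "impact": 5},   # halts order fulfilment
    {"name": "Phishing of the HR mailbox", "likelihood": 4, "impact": 3},     # exposes employee data
    {"name": "Defacement of the brochure site", "likelihood": 2, "impact": 1},
]

for s in sorted(scenarios, key=lambda s: s["likelihood"] * s["impact"], reverse=True):
    print(s["likelihood"] * s["impact"], s["name"])
```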


Finding Strength in Code, Part 2: Lessons from Loss and the Power of Reflection

Every problem usually has more than one solution. The engineers who grow the fastest are the ones who can look at their own mistakes without ego, list what they’re good at and what they’re not, and then actually see multiple ways forward. Same with life. A loss (a pet, a breakup, whatever) is a bug that breaks your personal system. ... Solo debugging has limits. On sprawling systems, we rally the squad—frontend, backend, QA—to converge faster. Similarly, grief isn't meant for isolation. I've leaned on my network: a quick Slack thread with empathetic colleagues or a vulnerability share in my dev community. It distributes the load and uncovers blind spots you might miss on your own. ... Once a problem is solved, it is essential to communicate the solution and to list the lessons learned from it: some companies solve problems but never put the effort into documenting the process in a way that prevents them from happening again. I know it is impossible to avoid problems, just as it is impossible not to make mistakes in our lives. The true inefficiency? Skipping the "why" and "how next time." ... Borrowed from incident response, the postmortem is a structured debrief that prevents recurrence without finger-pointing. In engineering, it ensures resilience; in life, it builds emotional antifragility. There are endless flavours of postmortems, from simple Markdown outlines to full-blown docs, but the gold standard is "blameless," focusing on systems over scapegoats.
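As a purely illustrative aid (not taken from the article), the snippet below writes one common flavour of blameless-postmortem skeleton in Markdown; the section headings and file naming are assumptions.

```python
# Illustrative only: writes a simple blameless-postmortem skeleton in Markdown.
# The headings are one common flavour, not a fixed standard.
from datetime import date
from pathlib import Path

SECTIONS = [
    "## Summary",
    "## Impact",
    "## Timeline",
    "## Root causes (systems, not people)",
    "## What went well",
    "## Lessons learned",
    "## Action items (the 'why' and 'how next time')",
]

def new_postmortem(title: str, out_dir: str = ".") -> Path:
    today = date.today().isoformat()
    body = f"# Postmortem: {title} ({today})\n\n" + "\n\n".join(SECTIONS) + "\n"
    path = Path(out_dir) / f"postmortem-{today}.md"
    path.write_text(body)
    return path

print(new_postmortem("checkout outage"))
```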


Cyber resilience is a business imperative: skills and strategy must evolve

Cyber upskilling must be built into daily work for both technical and non-technical employees. It’s not a one-off training exercise; it’s part of how people perform their roles confidently and securely. For technical teams, staying current on certifications and practicing hands-on defense is essential. Labs and sandboxes that simulate real-world attacks give them the experience needed to respond effectively when incidents happen. For everyone else, the focus should be on clarity and relevance. Employees need to understand exactly what’s expected of them and how their individual decisions contribute to the organization's resilience. Role-specific training makes this real: finance teams need to recognize invoice fraud attempts; HR should know how to handle sensitive data securely; customer service needs to spot social engineering in live interactions. ... Resilience should now sit alongside financial performance and sustainability as a core board KPI. That means directors receiving regular updates not only on threat trends and audit findings, but also on recovery readiness, incident transparency, and the cultural maturity of the organization's response. Re-engaging boards on this agenda isn’t about assigning blame—it’s about enabling smarter oversight. When leaders understand how resilience protects trust, continuity, and brand, cybersecurity stops being a technical issue and becomes what it truly is: a measure of business strength.

Daily Tech Digest - November 29, 2025


Quote for the day:

"Whenever you see a successful person you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah



6 coding myths that refuse to die

A typical day as a developer can feel like you’re juggling an array (no pun intended) of tasks. You’re reading vague requirements, asking questions, reviewing designs, planning architecture, investigating bugs, reading someone else's code, writing documentation, attending standups, and occasionally, you actually get to write code. Why? Because software development is about problem-solving, not just code-producing. Real-world problems are messy. Users don’t always know what they want. Clients change their minds. Systems behave in mysterious ways. Before you even think about writing code, you often need to untangle the people-side and the process-side. ... The truth is that coding rewards persistence, curiosity, and willingness to improve far more than raw talent. Most developers I’ve worked with weren’t prodigies. They were people who kept showing up, kept asking questions, and kept refining their skills. ... Every working developer, no matter how experienced, looks up syntax constantly. We search the docs, we skim examples, we peek at old code, we search for things we’ve forgotten. Nobody expects you to memorize every keyword, operator, or built-in function. What matters in programming is the ability to break down a problem, think through the logic, and design a solution. Syntax is simply the tool you use to express that solution. It’s the grammar, not the message. So don’t let this myth waste your time. 


The Cost of Doing Nothing: Why Unstructured Data Is Draining IT Budgets

Think of it this way: the fundamental problem contemporary enterprises have with unstructured data isn’t actually the volume they own but the lack of visibility into what exists, where it resides, who owns it, and whether it still holds value. In this context, the only alternative they have is to store everything indefinitely, including redundant, obsolete, or trivial data that serves no business purpose. The key question here, of course, is how to manage data through its lifecycle. Ideally, an effective and strategic data management process should begin by establishing a single, enterprise-wide view of unstructured data to uncover inefficiencies and risks. ... Lifecycle management plays a central role in this: files that have not been accessed for an extended period of time can be moved to lower-cost storage, while data that has been inactive for many years can be archived or deleted altogether. Many organizations discover that more than 60% of their stored information falls into these categories, illustrating just how much wasted capacity can be reclaimed with a policy-driven approach. ... It’s an approach that also benefits from the integration of vendor-neutral data management platforms capable of integrating data across diverse storage environments and clouds, eliminating lock-in while maintaining scalability. The outcome is greater cost control, improved compliance posture, and stronger decision-making foundations across the enterprise.
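As a rough sketch of what one pass of such a policy-driven approach could look like, the snippet below classifies files by last-access time. The thresholds (180 days and five years) and tier names are assumptions for illustration; real policies follow the organization's retention rules, and access times may be unreliable on volumes mounted with noatime.

```python
# Rough sketch of policy-driven lifecycle classification by last-access time.
# Thresholds and tier names are assumptions; adapt to actual retention rules.
import time
from pathlib import Path

DAY = 86400

def classify(path: Path, now: float) -> str:
    idle_days = (now - path.stat().st_atime) / DAY
    if idle_days > 5 * 365:
        return "archive-or-delete"   # candidate for archival or deletion review
    if idle_days > 180:
        return "cool-tier"           # move to lower-cost storage
    return "hot-tier"                # keep on primary storage

def scan(root: str) -> dict:
    now = time.time()
    counts = {"hot-tier": 0, "cool-tier": 0, "archive-or-delete": 0}
    for p in Path(root).rglob("*"):
        if p.is_file():
            counts[classify(p, now)] += 1
    return counts

if __name__ == "__main__":
    print(scan("."))  # e.g. {'hot-tier': 812, 'cool-tier': 143, 'archive-or-delete': 27}
```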


Agentic AI is supercharging the deepfake crisis: How companies can take action

As agentic AI propels fraud to a whole new level, the best way to keep your company secure is by fighting fire with fire, or in this case, AI with AI. To do so, companies need to implement multi-layered AI defense strategies that make it exponentially harder for bad actors to succeed. Enterprises can’t rely on traditional verification methods that add more layers of friction or collect more personal data, as that would deter customers. Instead, businesses need to rethink digital identity protection not only to reduce fraud and fraud-related losses, but also to preserve customer trust and digital engagement. To achieve this, organizations’ defense systems should contextualize individual actions, granularly isolate scopes of impact, and rely on ongoing reassessments of authorization. In other words, a highly secure system doesn’t just check a user’s identity once but continuously evaluates what the user is doing, where they are doing it, and why they are doing it. ... Using layered risk signals throughout the lifecycle of users—not just during onboarding—can provide companies with detailed information on potential risks, especially from internal sources like employees who can be fooled or whose access can be hijacked to compromise a company’s key assets. Companies can continuously check the reputation of users’ email addresses, phone numbers, and IP addresses to see if any of those channels have previously been used for fraudulent activity, identifying fraud rings that are deploying AI agents at scale. 
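The sketch below is a simplified illustration of that idea: layered risk signals scored per action so that authorization is reassessed continuously rather than only at login. The signal names, weights, and threshold are invented for illustration and are not any vendor's API.

```python
# Simplified illustration: combining layered risk signals into a per-action
# authorization decision. Signal names, weights, and threshold are invented.
from dataclasses import dataclass

@dataclass
class ActionContext:
    user_id: str
    action: str            # what the user is doing
    location: str          # where they are doing it
    email_flagged: bool    # reputation signals gathered across the user lifecycle
    ip_flagged: bool
    device_known: bool

SENSITIVE_ACTIONS = {"export_customer_data", "change_payout_account"}

def risk_score(ctx: ActionContext) -> float:
    score = 0.0
    score += 0.4 if ctx.email_flagged else 0.0
    score += 0.3 if ctx.ip_flagged else 0.0
    score += 0.2 if not ctx.device_known else 0.0
    score += 0.3 if ctx.action in SENSITIVE_ACTIONS else 0.0
    return score

def authorize(ctx: ActionContext, threshold: float = 0.5) -> str:
    # Re-evaluated on every sensitive action, not just during onboarding.
    return "step-up-verification" if risk_score(ctx) >= threshold else "allow"

ctx = ActionContext("u42", "export_customer_data", "unfamiliar-geo",
                    email_flagged=False, ip_flagged=True, device_known=False)
print(authorize(ctx))  # -> "step-up-verification"
```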


Cyber resilience, AI & energy shape IT strategies for 2026

The historical approach - that of considering cyber resilience as a stand-alone issue, where one vendor can protect an entire company - will be put to bed. Organisations will move away from using point solutions and embrace the wider ecosystem of options as understanding grows that they can't go it alone. An interconnected framework can help prevent a ripple effect when an attack happens - users should be able to identify and halt an attack in progress. The rate and scale of attacks will continue and having a properly integrated framework is vital to mitigate risk and speed up recovery. ... As AI inference workloads become part of the production workflow, organisations are going to have to ensure their infrastructure supports not just fast access but high availability, security and non-disruptive operations. Not doing this will be costly both from a results perspective and an operational perspective in terms of resource (GPU) utilisation. ... By 2026, organisations will face a new problem: accounts and credentials that belong to people no longer with the company, but which still look and act like insiders. As HR and IT systems become more automated, old identities are easily missed. Accounts from former employees, departed contractors, and dormant service bots will linger in cloud environments and company software. Attackers will exploit these 'digital ghosts' because they appear legitimate, bypass automated offboarding, and blend in with normal system activity.
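As a small, assumption-laden sketch of how such "digital ghosts" might be surfaced, the snippet below cross-references an identity export against an HR roster and last-sign-in dates; the field names, the 90-day threshold, and the sample data are all invented for illustration.

```python
# Invented example: flag accounts that are off the HR roster or long dormant.
# Field names, threshold, and data are illustrative, not from a real system.
from datetime import datetime, timedelta

accounts = [  # e.g. exported from the identity provider
    {"user": "alice",          "last_login": "2025-11-20", "type": "human"},
    {"user": "svc-build-bot",  "last_login": "2024-12-01", "type": "service"},
    {"user": "bob-contractor", "last_login": "2025-03-02", "type": "human"},
]
active_hr_roster = {"alice"}  # people HR still lists as employed or engaged

def find_ghosts(accounts, roster, max_idle_days=90, now=None):
    now = now or datetime(2025, 11, 29)  # fixed date keeps the example deterministic
    ghosts = []
    for acct in accounts:
        idle = now - datetime.fromisoformat(acct["last_login"])
        off_roster = acct["type"] == "human" and acct["user"] not in roster
        if off_roster or idle > timedelta(days=max_idle_days):
            ghosts.append(acct["user"])
    return ghosts

print(find_ghosts(accounts, active_hr_roster))
# -> ['svc-build-bot', 'bob-contractor']: candidates to disable and review
```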


Enterprises are neglecting backup plans, and experts warn it could come back to haunt them

Crucially, only 45% consistently follow the ‘3-2-1’ backup rule - three copies of data, stored on two different media types, with one copy kept off-site. The same proportion are failing to keep tamper-proof copies by using immutability across all their organizational backup data to ensure resilience against cyber attacks. ... "Most organizations now recognize the need to identify phishing scams or social engineering tactics; however, we can’t lose sight of what to do when disaster does strike. While complete prevention is near impossible, assurance of rapid recovery is fully within organizational control," he said. "Our research shows that UK organizations still aren’t taking adequate precautions when it comes to data backups. By storing data on immutable platforms, they can ensure business-critical information remains beyond the reach of adversaries and that operations stay up and running, even when systems are compromised." ... Backup strategies are now front of mind for many IT professionals, separate research shows. A survey from Kaseya earlier this year found 30% are losing sleep over lackluster backup and recovery strategies, with some pushing for a stronger focus on this area. Complacency was also identified as a recurring problem for many enterprises, according to Kaseya. Some 60% of respondents said they believed they could fully recover from a data loss incident in the space of a day.
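A trivial sketch of what an automated 3-2-1 (plus immutability) check might look like over a backup inventory follows; the inventory structure is an assumption for illustration.

```python
# Minimal sketch: check a backup inventory against the 3-2-1 rule
# (three copies, two media types, one off-site) plus an immutable copy.
# The inventory structure below is an assumption for illustration.
def check_321(copies):
    return {
        "three_copies":    len(copies) >= 3,
        "two_media_types": len({c["media"] for c in copies}) >= 2,
        "one_offsite":     any(c["offsite"] for c in copies),
        "immutable_copy":  any(c.get("immutable", False) for c in copies),
    }

inventory = [
    {"media": "disk",   "offsite": False, "immutable": False},  # primary array
    {"media": "disk",   "offsite": False, "immutable": False},  # local snapshot
    {"media": "object", "offsite": True,  "immutable": True},   # cloud, object lock
]

report = check_321(inventory)
print(report)                 # each value should be True
print(all(report.values()))   # overall pass/fail
```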


Ransomware Moves: Supply Chain Hits, Credential Harvesting

Attack volume remains high. The quantity of victims listed across ransomware groups' data leak sites increased by one-third from September to October, says a report from cybersecurity firm Cyble. Groups listing the most victims included high-fliers Qilin and Akira, newcomer Sinobi - which only appeared in July - and stalwarts INC Ransom and Play. ... After a run of attacks targeting zero-day flaws in managed file transfer software, the group used the same strategy against Oracle E-Business Suite versions 12.2.3 through 12.2.14 to steal data. Clop appears to have targeted two zero-day vulnerabilities, "both of which allow unauthenticated access to core EBS components," giving the group "a fast and reliable entry point, which explains the scale of the campaign," said cybersecurity firm SOCRadar. Oracle issued updates fixing both of those flaws. Data theft tied to that campaign appeared to begin by August, although it didn't come to light until Clop revealed it ... One of the big reasons for ransomware's success has been cryptocurrency, which makes it easier for groups to monetize and cash out their attacks. Another has been the rise of the ransomware-as-a-service business model. This allows for specialization: operators can develop malware and shake down victims, while affiliated business partners focus on hacking, rather than malware development, with both reaping the rewards. Every time a victim pays a ransom, the industry standard is for an affiliate to keep 70% to 80%.


Essential 2026 skills that DevOps leaders need to prioritize

It may sound radical, but you should prepare for a future where DevOps professionals will no longer need to learn programming languages. The DevOps role will shift up more than most people expect, enabling your team members to become supervisory architects rather than hands-on coders. ... DevOps professionals will no longer need to rely on programming languages. Instead, they will use natural language to supervise and orchestrate processes across requirements, planning, development, testing, and deployment. This leads to the elimination of hand-offs between teams and a significant blurring of traditional roles. ... However, for this shift-up to be truly successful and safe in practice, that foundational knowledge of software engineering principles remains vital. Without understanding the why behind what you are asking AI to do, your team cannot evaluate the quality of the output. This lack of evaluation can easily lead to significant risks, such as vulnerabilities that result in security breaches. In the age of AI, human judgment remains as important as ever, but only if it’s informed by a deep understanding of what the AI is being asked to produce. ... As a leader, your challenge is to guide your organization through this transformative period. The future of software development isn’t about AI replacing humans; it’s about AI empowering humans to perform at a higher, more strategic level. 


Building the Future: AI’s Role in Enterprise Evolution

The biggest obstacle we see for AI adoption isn't the technology itself, but the lack of clarity on the purpose for using it. The most critical part of any AI initiative is to understand why you want to use AI and how it can enhance your organisation’s unique attributes. There is no one-size-fits-all approach, since what works for one organisation may not work for others. A healthcare business needs data privacy for patient records, while a small startup’s goal is agility to release new products and sign new deals. These use cases will require different infrastructure investments, and most workloads are not suited to the public cloud. ... Consider AI with a broader view, beyond just the technology itself. Dell approaches AI with three distinct perspectives in mind: the business side, the technical side and the people side. GenAI will provide a 20-30 per cent increase in productivity, eliminating mundane tasks and freeing people to focus on higher value work. Your employees are now available to use that extra time to reimagine processes and outcomes, creating value and efficiencies for the company.
From a people standpoint, the demand for curious, smart, adaptable employees will skyrocket. ... Many of our customers are in the early stages of their AI journey, experimenting with basic applications. Small and basic can have a big impact, so keep pushing forward. It's worth starting with pilot projects as they give you room to test and experiment with an application. 


We Need to Teach the ‘Inuit’ Mindset to Young Computing Engineers

Becoming accustomed to over-provisioned resources has brought further concerns. The decreasing cost of hardware encourages a certain complacency: if code is inefficient in memory or CPU usage, one tends to trust that a more powerful machine or extra memory will solve the problem. ... This mindset contrasts with the traditional discipline of programming education, in which every instruction and every byte mattered, and optimization was an essential part of the computer science student’s training. The point here is that even while leveraging the benefits offered by AI in programming, an excessive dependence on AI-generated solutions and the over-provisioning of resources can undermine the proper development of computational, logical, and algorithmic thinking in future programmers or computing scientists. ... It is important to clarify that this is not about rejecting the use of AI and reverting to a former era of computing. Instead, we should integrate the best of both worlds. We must harness the tremendous potential of AI while instilling in students the ability to evaluate and improve solutions using their own sound judgement. As a direct consequence, a well-trained programmer will think twice before accepting an AI-generated solution if it uses resources disproportionately or does not guarantee adequate resilience when execution scenarios change drastically. 
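To illustrate the kind of review being advocated (not an example from the article), here are two ways to count error lines in a large log file: both are correct, but only the streaming version stays frugal as the input grows.

```python
# Illustration: the same task solved two ways. A reviewer applying the
# discipline described above would prefer the streaming version for large logs.

def count_errors_naive(path: str) -> int:
    # Reads the whole file into memory: fine for small files,
    # wasteful (or fatal) for multi-gigabyte logs.
    with open(path) as f:
        lines = f.readlines()
    return sum(1 for line in lines if "ERROR" in line)

def count_errors_streaming(path: str) -> int:
    # Processes one line at a time: memory use stays roughly constant
    # no matter how large the file grows.
    count = 0
    with open(path) as f:
        for line in f:
            if "ERROR" in line:
                count += 1
    return count
```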


Your Platform is Not an Island: Embracing Evolution in Your Ecosystem

The challenges facing smaller organizations versus larger organizations are really quite different, and the very requirement for a platform is typically indicative of you having multiple teams, so you probably don't really need a platform in a startup, particularly if you've got one 10-star full-stack developer wearing all of those hats. ... On-premises dependencies for your app will increase the number of interfaces and contribute to what we lovingly call application sprawl, and overly distributed architectures. The more teams that you have, the more people that you're probably going to need to speak to, and unfortunately, that means an increased number of working practices, and probably it's going to be far harder to reach any kind of consensus. If you work in a large organization, I'm sure that will resonate with you. ... The more features that you try to predict ahead of time, the more you risk building something that your customers actually don't want. The more minimal your MVP, the more likely your customers will see it as a motel, not a hotel. ... Developers still needed infrastructure knowledge, when we'd kind of sold the vision that they wouldn't need any; they still needed a little baseline understanding of Kubernetes. Integration with other legacy services across the organization, because they weren't designed by us and didn't always have APIs, was a little bit clunky.