
Daily Tech Digest - February 19, 2026


Quote for the day:

“Being responsible sometimes means pissing people off.” -- Colin Powell



The new paradigm for raising up secure software engineers

CISOs were already struggling to help developers keep up with secure code principles at the speed of DevOps. Now, with AI-assisted development reshaping how code gets written and shipped, the challenge is rapidly intensifying. ... What needs to be thrown out are traditional training methods. The consensus among security leaders is that dev training needs to be bite-sized, hands-on, and mostly embedded in developer tool chains. ... Rather than focus on preparing developers for line-by-line code review, the emphasis moves toward evaluating whether their features and functions behave securely in the context of deployment conditions, says Hasan Yasar ... Developers need to recognize when AI-generated code introduces unsafe assumptions, insecure defaults, or integrations that can scale vulnerabilities across systems. And with more security enforcement built into automated engineering pipelines, developers should ideally also be trained to understand what automated gates catch, and what still requires human judgment. “Security awareness in engineering has shifted to a system-level approach rather than focusing on individual vulnerabilities,” Pinna says. ... The data from guardrails and controls being triggered can be used by the AppSec team to drive creation and delivery of more in-depth, but targeted education. When the same vulnerability or integration pattern pops up again and again, that’s a signal for focused training on a subject.
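The "repeated pattern as training signal" idea can be sketched as a simple tally over guardrail findings; the finding names and the threshold below are illustrative assumptions, not anything from the article:

```python
from collections import Counter

def training_topics(findings, threshold=3):
    """Return vulnerability categories seen at least `threshold` times,
    most frequent first -- candidates for targeted developer training."""
    counts = Counter(findings)
    return [cat for cat, n in counts.most_common() if n >= threshold]

# Hypothetical guardrail-trigger events collected from CI pipelines
events = ["sql-injection", "hardcoded-secret", "sql-injection",
          "xss", "sql-injection", "hardcoded-secret", "hardcoded-secret"]
print(training_topics(events))  # ['sql-injection', 'hardcoded-secret']
```

In practice the input would come from the AppSec team's guardrail telemetry rather than a hardcoded list, but the shape of the signal is the same: recurring categories, not one-off findings, drive the curriculum.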


New agent framework matches human-engineered AI systems — and adds zero inference cost to deploy

In experiments on complex coding and software engineering tasks, GEA substantially outperformed existing self-improving frameworks. Perhaps most notably for enterprise decision-makers, the system autonomously evolved agents that matched or exceeded the performance of frameworks painstakingly designed by human experts. ... Unlike traditional systems where an agent only learns from its direct parent, GEA creates a shared pool of collective experience. This pool contains the evolutionary traces from all members of the parent group, including code modifications, successful solutions to tasks, and tool invocation histories. Every agent in the group gains access to this collective history, allowing them to learn from the breakthroughs and mistakes of their peers. ... The results demonstrated a massive leap in capability without increasing the number of agents used. This collaborative approach also makes the system more robust against failure. In their experiments, the researchers intentionally broke agents by manually injecting bugs into their implementations. GEA was able to repair these critical bugs in an average of 1.4 iterations, while the baseline took 5 iterations. The system effectively leverages the "healthy" members of the group to diagnose and patch the compromised ones. ... The success of GEA stems largely from its ability to consolidate improvements. The researchers tracked specific innovations invented by the agents during the evolutionary process. 
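GEA's shared pool, where every agent reads the evolutionary traces of the whole parent group rather than only its direct parent, can be caricatured in a few lines. The class and field names here are invented for illustration and bear no relation to the actual GEA implementation:

```python
class ExperiencePool:
    """Toy shared pool: every agent's traces are visible to the whole group."""
    def __init__(self):
        self.traces = []  # (agent_id, kind, payload) tuples

    def record(self, agent_id, kind, payload):
        self.traces.append((agent_id, kind, payload))

    def visible_to(self, agent_id):
        # Unlike parent-only inheritance, an agent sees everyone's history,
        # not just the lineage it descended from.
        return list(self.traces)

pool = ExperiencePool()
pool.record("agent-1", "code_edit", "refactor parser")
pool.record("agent-2", "bug_fix", "patch off-by-one")
# agent-3 contributed nothing yet, but still learns from both peers
print(len(pool.visible_to("agent-3")))  # 2
```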


GitHub readies agents to automate repository maintenance

In order to help developers and enterprises manage the operational drag of maintaining repositories, GitHub is previewing Agentic Workflows, a new feature that uses AI to automate most routine tasks associated with repository hygiene. It won’t solve maintenance problems all by itself, though. Developers will still have to describe the automation workflows in natural language that agents can follow, storing the instructions as Markdown files in the repo created either from the terminal via the GitHub CLI or inside an editor such as Visual Studio Code. ... “Mid-sized engineering teams gain immediate productivity benefits because they struggle most with repetitive maintenance work like triage and documentation drift,” said Dion Hinchcliffe ... Patel also warned that beyond precision and signal-to-noise concerns, there is a more prosaic risk teams may underestimate at first: As agentic workflows scale across repositories and run more frequently, the underlying compute and model-inference costs can quietly compound, turning what looks like a productivity boost into a growing operational line item if left unchecked. This can become a boardroom issue for engineering heads and CIOs because they must justify return on investment, especially at a time when they are grappling with what it really means to let software agents operate inside production workflows, Patel added.
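Patel's warning about quietly compounding inference cost is easy to make concrete with back-of-the-envelope arithmetic. Every number below is a made-up assumption for illustration, not a GitHub or model-provider price:

```python
def monthly_inference_cost(repos, runs_per_repo_per_day, tokens_per_run,
                           usd_per_million_tokens, days=30):
    """Rough monthly model-inference spend for agentic workflows."""
    tokens = repos * runs_per_repo_per_day * tokens_per_run * days
    return tokens / 1_000_000 * usd_per_million_tokens

# Hypothetical: 200 repos, 20 workflow runs/day each,
# 50k tokens per run, $3 per 1M tokens
cost = monthly_inference_cost(200, 20, 50_000, 3.0)
print(f"${cost:,.0f}/month")  # $18,000/month
```

The point of the exercise is the multiplication, not the specific figures: each factor looks modest on its own, and the product only shows up on the invoice.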


One stolen credential is all it takes to compromise everything

Identity-based compromise dominated incident response activity in 2025. Identity weaknesses played a material role in almost 90% of investigations. Initial access was driven by identity-based techniques in 65% of cases, including phishing, stolen credentials, brute force attempts, and insider activity. ... Rubin said the growing dominance of identity attacks reflects how enterprise environments have changed over the past few years, creating more opportunities for adversaries to quietly slip in through legitimate access pathways. “The increasing role of identity as the main attack vector is a result of a fundamental change in the enterprise environment,” Rubin said. “This dynamic is driven by two key factors.” He said the first driver is the rapid expansion of SaaS adoption, cloud infrastructure, and machine identities, which in many organizations now outnumber human accounts. That shift has created what he described as a “massive, unmanaged shadow estate,” where each integration represents “a new, potentially unmonitored, path into the network.” ... The time window for defenders is shrinking. The fastest 25% of intrusions reached data exfiltration in 72 minutes in 2025. The same metric was 285 minutes in 2024. A separate simulation described an AI-assisted attack that reached exfiltration in 25 minutes. Threat actors also began automating extortion operations. Unit 42 negotiators observed consistent tone and cadence in ransom communications, suggesting partial automation or AI-assisted negotiation messaging.


The emerging enterprise AI stack is missing a trust layer

This is not simply a technology problem. It is an architectural one. Today’s enterprise AI stack is built around compute, data and models, but it is missing its most critical component: a dedicated trust layer. As AI systems move from suggesting answers to taking actions, this gap is becoming the single biggest barrier to scale. ... Our ability to generate AI outputs is scaling exponentially, while our ability to understand, govern and trust those outputs remains manual, retrospective and fragmented across point solutions. ... This layer isn’t a single tool; it’s a governance plane. I often think of it as the avionics system in a modern aircraft. It doesn’t make the plane fly faster, but it continuously measures conditions and makes adjustments to keep the flight within safe parameters. Without it, you’re flying blind — especially at scale. ... Agentic systems collapse the distance between recommendation and action. When decisions are automated, there is far less tolerance for opacity or after-the-fact explanations. If an AI-driven action cannot be reconstructed, justified and owned, the risk is no longer theoretical — it is operational. This is why trust is becoming a prerequisite for autonomy. Governance models built for dashboards and quarterly reviews are not sufficient when systems act in real time. CIOs need architectures that assume scrutiny, not exception handling and that treat accountability as a design constraint rather than a policy requirement.
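"If an AI-driven action cannot be reconstructed, justified and owned, the risk is operational" — one primitive a trust layer needs is an append-only record tying every automated action to its inputs, rationale, and owner. A minimal sketch, with all names invented for illustration:

```python
import json
import time

class ActionLedger:
    """Append-only record so every automated action can be reconstructed."""
    def __init__(self):
        self._entries = []

    def record(self, agent, action, inputs, justification, owner):
        entry = {
            "ts": time.time(), "agent": agent, "action": action,
            "inputs": inputs, "justification": justification, "owner": owner,
        }
        self._entries.append(json.dumps(entry))  # serialized at write time
        return entry

    def reconstruct(self, agent):
        """Replay every recorded action taken by one agent."""
        return [e for e in map(json.loads, self._entries)
                if e["agent"] == agent]

ledger = ActionLedger()
ledger.record("pricing-agent", "apply_discount", {"sku": "A1", "pct": 10},
              "inventory overstock rule", owner="revenue-team")
print(ledger.reconstruct("pricing-agent")[0]["action"])  # apply_discount
```

A production governance plane would add tamper-evidence, retention policy, and identity binding; the sketch only shows the reconstruct-justify-own triple the article calls for.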


India Is Not a Back Office — It’s a Core Engine of Our Global Innovation

We have a very clear data and AI strategy. We are running multiple proof-of-concept initiatives across the organisation to ensure AI becomes more than just a buzzword. The key question is: how does AI create real value for Volvo Cars? It helps us become more agile and faster, whether in product development, improving internal process efficiency, or enhancing decision-making quality. India plays a crucial role here. We have a large team working on data analytics, intelligent automation, and AI, supporting these initiatives and shaping our agenda. ... It’s not just access to talent, it’s also the mindset. Indian society is highly adaptable. You often face unforeseen situations and must find solutions quickly. That agility and ability to always have a “Plan B” drive innovation, creativity, and speed. ... Data protection is a global priority. Many regions have introduced regulations, India’s Data Privacy Act, GDPR in the European Union, and similar laws in China. For global organisations, managing how data is transferred and processed across borders is a significant challenge. For example, certain data, like Chinese customer data, may need to remain within that country. Beyond regulatory compliance, cybersecurity threats are constant. Like most organisations, we experience attempted attacks on our networks. We have a robust cybersecurity team working continuously to secure both data and infrastructure.


AI likely to put a major strain on global networks—are enterprises ready?

Retrieval-heavy architecture types such as retrieval augmented generation—an AI framework that boosts large language models by first retrieving relevant, current information from external sources—create significant network traffic because data is moving across regions, object stores, and vector indexes, Kale says. “Agent-like, multi-step workflows further amplify this by triggering an additional set of retrievals and evaluations at each step,” Kale says. “All of these patterns create fast and unpredictable bursts of network traffic that today’s networks were never designed to handle. These trends will not abate, as enterprises transition from piloting AI services to running them continually.” ... In 2026, “we will see significant disruption from accelerated appetite for all things AI,” research firm Forrester noted in a late-year predictions post. “Business demands of AI systems, network connectivity, AI for IT operations, the conversational AI-powered service desk, and more are driving substantial changes that tech leaders must enable within their organizations.” ... “Inference workloads in particular create continuous, high-intensity, globally distributed traffic patterns,” Barrow says. “A single AI feature can trigger millions of additional requests per hour, and those requests are heavier—higher bandwidth, higher concurrency, and GPU-accelerated compute on the other side of the network.”
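The retrieval fan-out Kale describes is visible even in a toy RAG loop: one user question triggers a ranking pass over an index plus a fetch per hit before any generation happens. A minimal sketch using bag-of-words cosine similarity, with no real vector database or embedding model (the documents are invented):

```python
import math
from collections import Counter

DOCS = {
    "doc1": "network traffic bursts from retrieval augmented generation",
    "doc2": "object stores hold embeddings and source documents",
    "doc3": "agents trigger extra retrievals at each workflow step",
}

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    q = vec(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, vec(DOCS[d])), reverse=True)
    return ranked[:k]  # each hit is another cross-region fetch in a real system

print(retrieve("why do agent retrievals create traffic bursts"))
```

In a production deployment each of those top-k hits typically means a round trip to an object store or vector index, and an agentic workflow repeats the whole loop at every step, which is exactly the burst pattern the article describes.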


Quantum Scientists Publish Manifesto Opposing Military Use of Quantum Research

The scientists’ primary goals include: to express a unified rejection of military uses of quantum research; to open debate within the quantum community about ethical implications; to create a forum for researchers concerned about militarization; and to advocate for a public database listing all research projects at public universities funded by military or defense agencies. Quantum technologies rely on the behavior of matter and light at the smallest scales, enabling ultra-secure communication, highly sensitive sensors and powerful computing systems. According to the manifesto, these capabilities are increasingly being folded into defense strategies worldwide. ... The manifesto places these developments in the context of rising defense budgets, particularly in Europe following Russia’s invasion of Ukraine. The scientists write in the manifesto that the research and development sector is not exempt from the broader rearmament trend and that dual-use technologies — those that can serve both civilian and military ends — are increasingly prioritized in policy documents. The scientists acknowledge that quantum technologies are not inherently military tools. However, according to the manifesto, once such systems are developed, their applications may be difficult to control. The scientists argue that closer institutional ties between universities and defense agencies risk undermining academic independence.

From pilot purgatory to productive failure: Fixing AI's broken learning loop

"Model performance can drift with data changes, user behavior, and policy updates, so a 'set it and forget it' KPI can reward the wrong thing, too late," Manos said. The penalty for CIOs, however, comes from the time lag between the misread KPI signal and the CIO's moves to correct it. Timing is everything, and "by the time a quarterly metric flags a problem, the root cause has already compounded across workflows," Manos said. ... Waiting until the end of a POC to figure out why a concept doesn't scale is clearly too late, but neither is it prudent to abandon a "trial, observation, and refine" cycle entirely, Alex Tyrrell, head of advanced technologies at Wolters Kluwer and CTO at Wolters Kluwer Health, said. Instead, Tyrrell argues for refining the interaction process itself to detect issues earlier in a safe setting, particularly in regulated, high-trust environments like healthcare. He recommends pairing each iteration with both predictive and diagnostic signals, so IT teams can intervene before the error ripples down to the customer level. ... AI pilots fail for the same non-technical reasons that have always plagued technology performance, such as a governance vacuum, organizational unreadiness, low usage rates, or "measurement theater," which is when tech performance can't be tied to a specific business value, explained Baker.


How AI agents and humans can play together in the same sandbox

Unlike traditional automation, which is rigid and rules-based, AI agents are goal-driven. They can plan, adapt, and respond to changing conditions. That makes them especially powerful for modern business processes that are dynamic by nature - processes that span systems, teams, and time zones. Another defining characteristic is endurance. AI agents don't get tired, sick, or distracted. They can operate continuously, scaling up or down as needed, and executing tasks with consistent precision. This doesn't make humans obsolete. ... Trust plays a central role here. Agents must demonstrate that they are reliable and predictable. At the same time, humans must define boundaries - what agents can do autonomously, where approvals are required, and what guardrails must always be respected. There is a fine balance to strike. Constrain agents too tightly, and you eliminate the benefits of autonomy. ... A logical approach enables AI agents to access views of data directly from source systems, in real time, without first having to replicate or move that data. For Agentic AI, this is critical: agents need live data, delivered in the shortest possible time, in order to plan, act, and adapt effectively. By abstracting physical data complexity and unifying access across sources, a logical data layer provides AI agents with fast, trusted, and governed data - exactly what autonomous systems require to operate at scale. A shared data plane provides all consumers - human or machine - with the same source of truth. It also provides context, consistency, and traceability.
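The "views of data directly from source systems, without replication" idea reduces to a simple pattern: the logical layer holds references to live sources, never copies. A toy sketch (class and source names are invented):

```python
class LogicalDataLayer:
    """Toy federation layer: queries hit live sources, nothing is copied."""
    def __init__(self):
        self._sources = {}

    def register(self, name, fetch):
        self._sources[name] = fetch   # fetch() reads the system of record

    def view(self, name):
        return self._sources[name]()  # always live, never a stale replica

crm = {"cust-1": "active"}            # stand-in for a real source system
layer = LogicalDataLayer()
layer.register("crm_status", lambda: dict(crm))

print(layer.view("crm_status"))       # {'cust-1': 'active'}
crm["cust-1"] = "churned"             # source changes...
print(layer.view("crm_status"))       # ...and agents see it immediately
```

A real logical data layer adds query pushdown, caching policy, and governance on top, but the contrast with replication is the point: an agent reading through the view never plans against yesterday's snapshot.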

Daily Tech Digest - February 12, 2026


Quote for the day:

"Do not follow where the path may lead. Go instead where there is no path and leave a trail." -- Muriel Strode



The hard part of purple teaming starts after detection

Imagine you’re driving, and you see the car ahead braking suddenly. Awareness helps, but it’s your immediate reaction that avoids the collision. Insurance plans don’t matter at that moment. Nor do compliance reports or dashboards. Only vigilance and rehearsal matter. Cyber resilience works the same way. You can’t build the instinct required to act by running one simulation a year. You build it through repetition. Through testing how specific scenarios unfold. Through examining not only how adversaries get in, but also how they move, escalate, evade, and exfiltrate. This is the heart of real purple teaming. ... AI can accelerate analysis, but it can’t replace intuition, design, or the judgment required to act. If the organization hasn’t rehearsed what to do when the signal appears, AI only accelerates the moment when everyone realises they don’t know what happens next. This is why so much testing today only addresses opportunistic attacks. It cleans up the low-hanging fruit. ... The standard testing model traps everyone involved: one-off tests create false confidence; scopes limit imagination; time pressure eliminates depth; commercial structures discourage collaboration; tooling gives the illusion of capability; and compliance encourages the appearance of rigour instead of the reality of it. This is why purple teaming often becomes “jump out, stabilize, pull the chute, roll on landing.” But what about the hard scenarios? What about partial deployments? What about complex failures? That’s where resilience is built.


State AI regulations could leave CIOs with unusable systems

Numerous states are considering AI regulations for systems used in medical care, insurance, human resources, finance and other critical areas. ... Despite the growing regulatory risk, businesses appear unwilling to slow AI deployments. "Moving away from AI with the regulation is not going to be an option for us," Juttiyavar said. He said AI is already deeply embedded in how organizations operate and is essential for speed and competitiveness. ... If CIOs establish strong internal frameworks for AI deployment, "that helps you react better to legislative change" and anticipate new requirements, Kourinian said. Still, regulatory shifts can leave companies with systems that are technically sound but legally unusable, said Peter Cassat, a partner at CM Law. To manage that risk, Cassat advises CIOs to negotiate "change of law" provisions in vendor contracts that provide termination rights if regulations make continued use of a system impossible or impractical. But such provisions do not eliminate the risk of sunk costs. "If it's a SaaS provider and you've signed a three-year term, they don't want to necessarily let you walk for free either," Cassat said. Beyond legal exposure, CIOs must also anticipate public and political reaction to AI and biometric tools. "The CIO absolutely has the responsibility to understand how this technology could be perceived -- not just internally, but by the public and lawmakers," said Mark Moccia, an analyst at Forrester Research.


Your dev team isn’t a cost center — it’s about to become a multiplier

If you treat AI as a pathway to eliminate developer headcount, sure, you’ll capture some cost savings in the short term. But you’ll miss the bigger opportunity entirely. You’ll be the bank executive in 1975 who saw ATMs and thought, “Great, we can close branches and fire tellers.” Meanwhile, your competitors have automated the mundane teller tasks and are opening new branches to sell higher-end services to more people. The 1.4-1.6x productivity improvement that GDPval documented isn’t about doing the same work with fewer people. It’s about doing vastly more work with the same people. That new product idea you had that was 10x too expensive to develop? It’s now possible. That customer experience improvement that could drive loyalty that you didn’t have the headcount for? It’s on the table. The technical debt you’ve been accumulating? You can start to pay it down. ... What struck me about Werner’s final keynote wasn’t the content, it was the intent. This was Werner’s last time at that podium. He could have done a victory lap through AWS’s greatest hits. Instead, he spent his time outlining a framework of success for the next generation of developers. For those of us leading technology organizations, the framework is both validating and challenging. Validating because these traits aren’t new. They have always separated good developers from great ones. Challenging because AI amplifies everything, including the gaps in our capabilities.


Cloud teams are hitting maturity walls in governance, security, and AI use

Migration activity remains heavy across enterprises, especially for data platforms. At the same time, downtime tolerance is limited. Nearly half of respondents said their organizations can accept only one to six hours of downtime for cutover during migration. That combination creates pressure to migrate at speed while keeping data integrity intact. In regulated environments, that pressure extends to audit evidence and compliance validation, which often needs to be produced in parallel with migration execution. ... Cloud-native managed database adoption is also high. More than half of respondents reported using managed cloud databases, and a third reported using SaaS-based database services. Only 10% reported operating self-hosted databases. This shift toward managed services reduces operational burden on infrastructure teams, but it increases reliance on identity governance, network segmentation, and application-layer security controls. It also creates stronger dependency on cloud provider logging and access models. ... Development stacks also reflect this shift. Python was reported as a primary language, with Java close behind. These languages remain central to AI workflows, data engineering, and enterprise application back ends. Machine learning adoption is also widespread since organizations reported actively training ML models. Many of these pipelines are now part of production environments, making operational continuity a priority.


MIT's new fine-tuning method lets LLMs learn new skills without losing old ones

To build truly adaptive AI, the industry needs to solve "continual learning," allowing systems to accumulate knowledge much like humans do throughout their careers. The most effective way for models to learn is through "on-policy learning." In this approach, the model learns from data it generates itself, allowing it to correct its own errors and reasoning processes. This stands in contrast to learning by simply mimicking static datasets. ... The standard alternative is supervised fine-tuning (SFT), where the model is trained on a fixed dataset of expert demonstrations. While SFT provides clear ground truth, it is inherently "off-policy." Because the model is just mimicking data rather than learning from its own attempts, it often fails to generalize to out-of-distribution examples and suffers heavily from catastrophic forgetting. SDFT seeks to bridge this gap: enabling the benefits of on-policy learning using only prerecorded demonstrations, without needing a reward function. ... For teams considering SDFT, the practical tradeoffs come down to model size and compute. The technique requires models with strong enough in-context learning to act as their own teachers — currently around 4 billion parameters with newer architectures like Qwen 3, though Shenfeld expects 1 billion-parameter models to work soon. It demands roughly 2.5 times the compute of standard fine-tuning, but is best suited for organizations that need a single model to accumulate multiple skills over time, particularly in domains where defining a reward function for reinforcement learning is difficult or impossible.
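Catastrophic forgetting, the failure mode SDFT targets, is easy to reproduce with a one-parameter model: fit task A, then naively fine-tune on a conflicting task B, and performance on A collapses. A deliberately minimal numerical sketch, with no relation to SDFT's actual implementation:

```python
def sgd(w, data, lr=0.1, steps=200):
    """Fit y = w*x by per-sample gradient descent on squared error."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 1.0), (2.0, 2.0)]    # wants w = +1
task_b = [(1.0, -1.0), (2.0, -2.0)]  # deliberately conflicting: wants w = -1

w = sgd(0.0, task_a)
loss_a_before = loss(w, task_a)      # near zero: task A mastered
w = sgd(w, task_b)                   # naive sequential fine-tuning on task B
loss_a_after = loss(w, task_a)       # task A is forgotten
print(loss_a_before < 1e-6 < loss_a_after)  # True
```

The two tasks here are artificially contradictory to force the effect; in real models the conflict is subtler, but sequential SFT on new skills erodes old ones the same way.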


The Illusion of Zero Trust in Modern Data Architectures

Modern data stacks stretch far beyond a single system. Data flows from SaaS tools into ingestion pipelines, through transformation layers, into warehouses, lakes, feature stores, and analytics tools. Each hop introduces a new identity, a new permission model, and a new surface area for implicit trust. Not to mention, niches like healthcare data storage are a completely different beast. Whatever the system may be, teams may enforce strict access at the perimeter while internal services freely exchange data with long-lived credentials and broad scopes. This is where the illusion forms. Zero Trust is declared because no user gets blanket access, yet services trust other services almost entirely. Tokens are reused, roles are overprovisioned, and data products inherit permissions they were never meant to have. The architecture technically verifies everything, but conceptually trusts too much. ... Data rarely stays where Zero Trust policies are strongest. Warehouses enforce row-level security, masking, and role-based access, but data doesn’t live exclusively in warehouses. Extracts are generated, snapshots are shared, and datasets are copied into downstream systems for performance or convenience. Each copy weakens the original trust guarantees, creating problems worse than rising cloud costs. Once data leaves its source, context is often stripped away.
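One concrete antidote to "tokens are reused, roles are overprovisioned" is making service credentials short-lived and scope-checked at every hop. A toy verifier; the service and scope names are invented, and a real system would use signed tokens rather than plain dicts:

```python
import time

def issue_token(service, scopes, ttl_seconds=300):
    """Short-lived, narrowly scoped credential for service-to-service calls."""
    return {"sub": service, "scopes": set(scopes),
            "exp": time.time() + ttl_seconds}

def authorize(token, required_scope, now=None):
    now = time.time() if now is None else now
    if now >= token["exp"]:
        return False                   # expired: re-verify, never reuse
    return required_scope in token["scopes"]

tok = issue_token("etl-worker", ["warehouse:read"], ttl_seconds=300)
print(authorize(tok, "warehouse:read"))    # True
print(authorize(tok, "warehouse:write"))   # False: least privilege holds
print(authorize(tok, "warehouse:read",
                now=tok["exp"] + 1))       # False: long-lived reuse blocked
```

The sketch encodes exactly the two properties the article says are usually missing between internal services: an expiry that forces re-verification, and a scope check on every call rather than blanket service-to-service trust.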


Top Cyber Industry Defenses Spike CO2 Emissions

Though rarely discussed, like any other technologies, cybersecurity protections carry their own costs to the planet. Programs run on electricity. Servers demand water. Devices are built from natural resources and eventually get thrown out. ... "CISOs can help or make the situation worse [when it comes to] sustainability, depending on the way they write security rules," he says. "And that's why we started a study: to enable the CISO to be part of the sustainability process of his or her company, and to find actionable ways to reduce CO2 consumption while at the same time not adding more risks." ... "We collect a lot of logs, not exactly always knowing why, and the retention period is a huge cost in terms of infrastructure, and also CO2," Billois says. "So at some point, you can revisit your log collection, and log retention, and if there are no legal issues, you can think about compressing them to reduce their volume. It's something that is, I would say, quite easy to do." ... All of that said, unfortunately, the biggest cyber polluter, by far, is also the most difficult to scale back without incurring risk. Some companies can swap underutilized physical infrastructure for virtualized backups, which eat less power, if they're not already doing that; but there are few other great ways to make cyber resilience more efficient. "You can reduce CO2 [from backups] very easily: you stop buying two servers, or you stop having a duplicate of all your data," Billois says.
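Billois's "compress old logs" suggestion is one of the few sustainability wins you can measure directly. A quick sketch with gzip on synthetic log lines; the exact ratio will vary with real data, though machine-generated logs are typically highly repetitive and compress very well:

```python
import gzip

# Synthetic, highly repetitive log lines -- typical of machine-generated logs
lines = "".join(
    f"2026-02-12T10:{i % 60:02d}:00 INFO auth user=u{i % 10} action=login ok\n"
    for i in range(5000)
)
raw = lines.encode()
packed = gzip.compress(raw)

ratio = len(packed) / len(raw)
print(f"raw={len(raw)} bytes, gzip={len(packed)} bytes, ratio={ratio:.2f}")
assert ratio < 0.1  # repetitive logs usually shrink by an order of magnitude
```

Storage roughly tracks infrastructure footprint, so a tenfold reduction in retained log volume is a tenfold reduction in that slice of the bill and, as the article argues, the emissions behind it.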


Five ways quantum technology could shape everyday life

There is growing promise of quantum technology’s ability to solve problems that today’s systems struggle to overcome, or cannot even begin to tackle, with implications for industry, national security and everyday life. ... In healthcare, faster drug discovery could bring quicker response to outbreaks and epidemics, personalised medicine and insight into previously inscrutable biological interactions. Quantum simulation of how materials behave could lead to new high efficiency energy materials, catalysts, alloys and polymers. ... In medicine, quantum sensors could improve diagnostic capabilities via more sensitive, quicker and noninvasive imaging modes. In environmental monitoring, these sensors could track delicate shifts beneath the Earth’s surface, offer early warnings of seismic activity, or detect trace pollutants in air and water with exceptional accuracy. ... Airlines and rail networks could automatically reconfigure to avoid cascading delays, while energy providers might balance renewable generation, storage and consumption with far greater precision. Banks could use quantum computers to evaluate numerous market scenarios in parallel, informing the management of investment portfolios. ... While still at an early stage of development, quantum algorithms might accelerate a subset of AI called machine learning (where algorithms improve with experience), help simulate complex systems, or optimise AI architectures more efficiently.


Nokia predicts huge WAN traffic growth, but experts question assumptions

“Consumer- and enterprise-generated AI traffic imposes a substantial impact on the wide-area network (WAN) by adding AI workloads processed by data centers across the WAN. AI traffic does not stay inside one data center; it moves across edge, metro, core, and cloud infrastructure, driving dense lateral flows and new capacity demands,” the report says. An explosion in agentic AI applications further fuels growth “by inducing extra machine-to-machine (M2M) traffic in the background,” Nokia predicts. “AI traffic isn’t just creating more demand inside data centers; it’s driving a sustained surge of traffic between them. AI inferencing traffic—both user-initiated and agentic-AI-induced M2M—moving over inter-data-center links grows at a 20.3% CAGR through 2034.” ... Global enterprise and industrial traffic, including fixed wireless access, will also steadily rise over the next decade, “as more operations, machines, and workers become digitally connected,” Nokia predicts. “Pervasive automation, high-resolution video, AI-driven analytics, and remote access to industrial systems,” will drive traffic growth. “Factory lines are streaming machine vision data to the cloud. AI copilots are assisting personnel in real time. Field teams are using AR instead of manuals. Robots are coordinating across sites,” the Nokia report says. “Industrial systems are continuously sending telemetry over the WAN instead of keeping it on-site. This shift makes wide-area connectivity part of the core production workflow.”
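A 20.3% CAGR compounds quickly. Assuming the report's window runs roughly from 2025 to 2034 (nine compounding years is my assumption about the baseline year, not stated in the excerpt), the inter-data-center traffic multiplier works out as:

```python
def cagr_multiplier(rate, years):
    """Total growth factor after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

m = cagr_multiplier(0.203, 9)
print(f"{m:.1f}x")  # ~5.3x the baseline traffic
```

That is the arithmetic behind the experts' skepticism: a sustained 20% annual growth rate implies more than a fivefold traffic increase within a decade.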


The death of reactive IT: How predictive engineering will redefine cloud performance in 10 years

Reactive monitoring fails not because tools are inadequate, but because the underlying assumption that failures are detectable after they occur no longer holds true. Modern distributed systems have reached a level of interdependence that produces non-linear failure propagation. A minor slowdown in a storage subsystem can exponentially increase tail latencies across an API gateway. ... Predictive engineering is not marketing jargon. It is a sophisticated engineering discipline that combines statistical forecasting, machine learning, causal inference, simulation modeling and autonomous control systems. ... Predictive engineering will usher in a new operational era where outages become statistical anomalies rather than weekly realities. Systems will no longer wait for degradation; they will preempt it. War rooms will disappear, replaced by continuous optimization loops. Cloud platforms will behave like self-regulating ecosystems, balancing resources, traffic and workloads with anticipatory intelligence. ... In distributed networks, routing will adapt in real time to avoid predicted congestion. Databases will adjust indexing strategies before query slowdowns accumulate. The long-term trajectory is unmistakable: autonomous cloud operations. Predictive engineering is not merely the next chapter in observability; it is the foundation of fully self-healing, self-optimizing digital infrastructure.
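The shift from reacting to preempting can be illustrated with the simplest possible forecaster: extrapolate a metric's recent trend and act before it crosses a threshold. Real predictive-engineering systems use far richer statistical models; the numbers and metric name here are invented:

```python
def forecast_breach(samples, threshold, horizon=3):
    """Linear extrapolation of the last two samples: will the metric
    cross `threshold` within `horizon` future steps?"""
    slope = samples[-1] - samples[-2]
    for step in range(1, horizon + 1):
        if samples[-1] + slope * step >= threshold:
            return step  # act now, before the breach occurs
    return None

latency_ms = [40, 44, 49, 55, 62]  # p99 latency, climbing steadily
print(forecast_breach(latency_ms, threshold=75))  # 2: breach predicted in 2 steps
```

A reactive monitor would page someone only after the 75 ms threshold is crossed; the forecaster fires two steps earlier, which is the window in which rerouting or scaling can happen before users notice.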

Daily Tech Digest - February 09, 2026


Quote for the day:

"Leaders who make their teams successful are followed even through the hardest journeys." -- Gordon Tredgold



Agentic AI upends SaaS models & sparks valuation shock

The Software-as-a-Service market is moving away from seat-based licensing as agentic artificial intelligence tools change how companies build and purchase business software, according to analysts and industry executives. Investors have already reacted to the shift. A broad sell-off in software stocks followed recent advances in agentic technology, raising questions regarding the durability of current business models. Concerns persist that traditional revenue streams may be at risk as autonomous systems perform increasing volumes of work with fewer human users. ... Not every vendor is well positioned for the transition. Industry observers are using the term "zombie SaaS" for companies that raised large rounds at peak valuations from 2020 to 2022 and now trade or transact below the total capital invested. These businesses often face a mismatch between historical expectations and current demand. They can struggle to raise new funding and may lack the growth rate needed to justify earlier valuations. Meanwhile, newer entrants can build competing products faster and at lower cost, increasing pressure on incumbents with larger cost structures. ... AI is also reshaping procurement decisions. Some companies are shifting toward internal tools as non-technical teams gain access to systems that generate software from natural-language prompts and templates. Industry discussion points to Ramp building internal revenue tools and AI agents in place of third-party software. 


Software developers: Prime cyber targets and a rising risk vector for CISOs

Attackers are increasingly targeting the tools, access, and trusted channels used by software developers rather than simply exploiting application bugs. The threats blend technical compromise — malicious packages, development pipeline abuse, etc. — with social engineering and AI-driven attacks. ... The tokens, API keys, cloud credentials, and CI/CD secrets held by software developers unlock far broader access than a typical office user account, making software engineers a prime target for cybercriminals. “They [developers] hold the keys to the kingdom, privileged access to source code and cloud infrastructure, making them a high-value target,” Wood adds. ... Attackers aren’t just looking for flaws in code — they’re looking for access to software development environments. Common security shortcomings, including overprivileged service accounts, long-lived tokens, and misconfigured pipelines, offer a ready means for illicit entry into sensitive software development environments. “Improperly stored access credentials are low-hanging fruit for even the most amateur of threat actors,” says Crystal Morin, senior cybersecurity strategist at cloud-native security and observability vendor Sysdig. ... AI-assisted development and “vibe coding” are increasing exposure to risk, especially because such code is often generated quickly without adequate testing, documentation, or traceability.


How network modernization enables AI success and quantum readiness

In essence, inadequate networks limit the ability of AI “blood” to nourish the body of an organization — weakening it and stifling its growth. Many enterprise networks developed incrementally, with successive layers of technology implemented over time. Mergers, divestitures, and one-off projects to solve immediate problems have left organizations with a patchwork of architectures, vendors and configurations. ... As AI traffic increases across data centers, clouds, and the edge, blind spots multiply. Once-manageable technical debt becomes an active security liability, expanding the attack surface and undermining Zero Trust initiatives as AI-driven traffic increases. ... Quantum computers could break today’s encryption standards, exposing sensitive financial, healthcare and operational data. Worse, attackers are already engaging in “harvest now, decrypt later” strategies — stealing encrypted data today to exploit tomorrow. The relevance to networking and AI issues is straightforward. Preparing for the challenges (and opportunities) of quantum computing will be an incremental, multi-year project that needs to start now. Enterprise IT infrastructures must be able to adapt and scale to quantum computing developments as they evolve. Companies will need to be able to “skate to where the puck will be,” and then skate again! While becoming quantum-safe may seem daunting, organizations don’t have to do it all at once.


Rethinking next-generation OT SOC as IT/OT convergence reshapes industrial cyber defense

Clear gains from next-generation OT SOC innovation emerge across real-world applications, such as OT-aware detection, AI-assisted triage, and distributed SOC models designed to reflect the day-to-day realities of operating critical infrastructure. ... The line between what is OT and what is IT is blurred. Each customer, scenario, and request for proposal shows a unique fingerprint of architectural, process, and industry-related concerns. Our OT SOC development program integrated industrial network sensors with the enterprise SOC, enabling holistic monitoring of plants and offices together. ... Risk is no longer discussed purely from a cyber perspective, but in terms of operational impact, safety, and reliability, which is more consequence-driven. When convergence is implemented securely, alerts are no longer investigated in isolation; identity, remote access activity, asset criticality, and process context are correlated together. ... From a practical standpoint, Mashirova said that automation delivers the most operational value in enrichment, correlation, prioritization, and workflow orchestration. “Automating asset context, vulnerability risk prioritization with remediation recommendations, alert deduplication, and escalation logic dramatically improves analyst efficiency without directly impacting the industrial process. AI agents can act as SOC assistants by correlating large volumes of data and providing decision support to analysts.”


Shai-hulud: The Hidden Cost of Supply Chain Attacks

In recent months, a somewhat novel supply chain threat has emerged against the open source community; attackers are unleashing self-propagating malware on component libraries and targeting downstream victims with infostealers. The most famous recent example of this is Shai-hulud, a worm targeting NPM projects that would take hold when a victim downloads a poisoned component. Once on a victim machine, the malware used its access to infect components that the victim maintains before self-publishing poisoned versions. ... Another consideration is long-term, lasting damage from these incidents. Sygnia's Kidron explains that the impact of a compromise like credential theft happens on a wider time scale. If the issue has not been adequately contained, attackers can sell access or use it for follow-on activity later. "In practice, damage unfolds across time frames. Immediately — within hours to the first few days after exposure, the primary risk is credential exposure: these campaigns are designed to execute inside developer and CI/CD paths where tokens and secrets are accessible," he says. "When those secrets leak, the downstream harm is not abstract — the attacker can use them (or sell them) to authenticate as the victim and access private repositories, pull data, tamper with code, trigger builds, publish packages, access cloud resources, or perform actions “on behalf” of legitimate identities." 
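
For defenders, one practical response to worms like Shai-hulud is auditing lockfiles against an advisory list of poisoned package versions. The sketch below is a minimal illustration: the advisory data (package names and versions) is invented, and only npm's package-lock.json v2/v3 "packages" layout is handled.

```python
import json

# Hypothetical advisory data: package name -> versions known to be poisoned.
COMPROMISED = {
    "left-pad-utils": {"1.4.2", "1.4.3"},
    "color-strings": {"2.0.1"},
}

def audit_lockfile(lockfile_text):
    """Scan an npm package-lock.json (v2/v3 'packages' layout) and return
    (name, version) pairs pinned to a flagged release."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Entries look like "node_modules/<name>"; "" is the root project.
        name = path.split("node_modules/")[-1] if path else lock.get("name", "")
        if meta.get("version") in COMPROMISED.get(name, ()):
            hits.append((name, meta["version"]))
    return hits
```

Real tooling would pull the advisory feed from a vulnerability database and also check integrity hashes, but even this level of check catches a pinned poisoned version before the next `npm install` executes it.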


United Airlines CISO on building resilience when disruption is inevitable

Modernization in aviation is less about speed and more about precision. Every change must measurably improve safety, reliability, or resilience. Cybersecurity must respect that bar. ... Cyber risk is assessed in terms of how it affects the ability to move aircraft, crew, and passengers safely and on time. It also means cybersecurity leaders must understand the business end-to-end. You cannot protect an airline effectively without understanding flight operations, maintenance, weather, crew scheduling, and regulatory constraints. Cybersecurity becomes an enabler of safe operations, not a separate technical function. ... Risk assessment goes beyond vendor questionnaires. It includes scenario analysis, operational impact modeling, and close coordination with partners, regulators, and industry groups. Information sharing is essential, because early awareness often matters more than perfect control. Ultimately, we assume some disruptions will originate externally. The goal is to detect them quickly, understand their operational impact, and adapt without compromising safety. Resilience and coordination are just as important as contractual controls. ... Speed matters, but clarity matters more. We also plan extensively in advance. You cannot improvise under pressure when aircraft and passengers are involved. Clear playbooks, rehearsals, and defined decision authorities allow teams to act decisively while staying aligned with safety principles.


Securing IoT devices: why passwords are not enough

Traditional passwords are often not secure enough for technological devices or systems. Many consumers use the default password that comes with the system rather than changing it to a more secure one. When people update their passwords, they often choose weak ones that are easy for cyberattackers to crack. The volume of IoT devices makes manual password management inefficient and risky. A primary threat is the lack of encryption as data travels between networks. When multiple devices are connected, encryption is key to protecting information. Another threat is poor network segmentation, which allows a single misconfigured or less secure connected device to expose the rest of the network. ... Adopting a zero-trust methodology is a better cybersecurity measure than traditional password-based systems. IoT devices can still require a password, but the system may ask for additional information to verify the user’s authorization. Users can set up passkeys, security questions or other methods as the next step after entering a password. ... AI can be used both offensively and defensively in cybersecurity for IoT devices. Hackers use AI to launch advanced attacks, but users can also implement AI to detect suspicious behaviour and address threats. Consumers can purchase AI security systems to safeguard their IoT devices beyond passwords, but they must remain vigilant and continuously monitor their usage to prevent cyberattackers from infiltrating them.
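
A minimal sketch of that step-up idea follows. The device names and factor labels are invented; the point is only the shape of the logic: a correct password alone never grants access, and each device's policy lists the additional factors it demands.

```python
# Hypothetical per-device policy: extra factors required beyond the password.
DEVICE_POLICY = {
    "thermostat-01": ["passkey"],
    "door-lock-02": ["passkey", "security_question"],
}

def authenticate(device, password_ok, presented_factors):
    """Zero-trust style gate: deny on a bad password, allow only when every
    required factor has been presented, otherwise ask for the next one."""
    if not password_ok:
        return "deny"
    # Unknown devices still require a passkey rather than defaulting open.
    required = DEVICE_POLICY.get(device, ["passkey"])
    missing = [f for f in required if f not in presented_factors]
    return "allow" if not missing else "step-up:" + missing[0]
```

For example, `authenticate("door-lock-02", True, {"passkey"})` returns `"step-up:security_question"`: the password and passkey were correct, but the device's policy still demands one more factor.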


Creating a Top-Down and Bottom-Up Grounded Capability Model

A grounded capability model is a complete and stable set of these capabilities, structured in levels from level 1 to sometimes level 4 so senior leaders, middle managers, architects, and digital transformation managers can see the business as an integrated whole. The “grounded” part matters: it means the model reflects strategy and business design, not the quirks of today’s org chart or application portfolio. ... Business Architecture Info emphasizes that a grounded capability model is best built by combining top-down strategic direction with bottom-up operational reality. The top-down view ensures the model is aligned to the business plan and strategic goals, while the bottom-up view ensures it is validated against real value streams, objectives, and subject-matter expertise. ... Top-down capability modeling needs the right stakeholders and the right strategic inputs. On the stakeholder side, senior leaders are essential because they own direction, priorities, and the definition of “what good looks like.” The EA team, enterprise architects and business architects, translates that direction into a structured capability view. ... Bottom-up capability modeling grounds the model in delivery and operational truth. It relies heavily on middle managers, subject matter experts, and business experts. In other words, people who know how value is produced, where friction exists, and what “enablement” really takes. The EA team remains a key facilitator and modeler, but validation and discovery come from the business.


Secure The Path, Not The Chokepoint

The argument here is simple: baseline security policy should be enforced along the path where packets already travel. Programmable data planes, particularly P4 on programmable switching targets, make it possible to enforce meaningful guardrails at line rate, close to the workload, without redesigning the network into a set of security detours. ... When enforcement is concentrated on a few devices, the architecture depends on traffic detours or assumptions about where traffic flows. That creates three practical problems: First, important east-west traffic may never traverse an inspection point. Second, response actions often depend on where a firewall sits rather than where the attacker is operating. Third, changes become slow and risky because every new workload pattern becomes another exception. ... A fabric-first model succeeds when it focuses on controls that are simple, universal, and have a high impact. ... A fabric-first approach does not remove the need for firewalls. Deep application inspection, proxy functions, content controls, and specialized policy workflows still make sense where rich context exists and where inspection overhead is acceptable. The shift is about default placement. Baseline guardrails and rapid containment belong in the fabric. ... A small set of metrics usually tells the story clearly: time from detection to enforced containment, reduction in unintended internal connection attempts, and time to produce a credible incident narrative during review.
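
In spirit, a fabric guardrail is just a match-action table with a default-deny action, which is the construct P4 programs express directly in the forwarding pipeline. The Python below is only a behavioral model of that idea (the zones, ports, and quarantine entry are invented), not P4 code:

```python
# Behavioral model of a fabric guardrail table: exact matches forward,
# a wildcard containment entry can be pushed during an incident, and
# everything unmatched is dropped in the fabric itself.
GUARDRAILS = {
    ("web", "app", 8443): "forward",
    ("app", "db", 5432): "forward",
    ("any", "quarantine", 0): "drop",  # rapid-containment entry
}

def apply_guardrail(src_zone, dst_zone, dst_port):
    """Check the specific entry first, then the wildcard containment entry.
    East-west traffic is covered without detouring through a firewall."""
    for key in ((src_zone, dst_zone, dst_port), ("any", dst_zone, 0)):
        if key in GUARDRAILS:
            return GUARDRAILS[key]
    return "drop"  # analogous to a P4 table's default_action
```

Containment then becomes a control-plane operation: pushing one `("any", <zone>, 0)` entry isolates a compromised zone wherever its traffic enters the fabric, rather than wherever a firewall happens to sit.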


Banks Face Dual Authentication Crisis From AI Agents

Traditional authentication relies upon point-in-time verification like MFA and a password, after which access is granted. Over the years, banks have analyzed human spending patterns. But AI agents purchasing around the clock and seeking optimal deals have rendered that model obsolete. "With autonomous agents transacting on behalf of users, the distinction between legitimate and fraudulent activity is blurred, and a single compromised identity could trigger automated losses at scale," said Ajay Patel, head of agentic commerce at Prove. ... But before banks can address the authentication problem, they need to fix their data infrastructure, said Carey Ransom, managing director at BankTech Ventures. AI agents need clean, contextually appropriate data, but banks don't yet have standardized ways to provide it. So, when mistakes occur, who is at fault, and who is liable for making things right? When AI agents can spawn sub-agents that delegate tasks to other AI systems throughout a transaction chain, the liability question gets murky. ... Layered authentication that balances security with speed will reduce agentic AI risks, Ransom said. "Variant transaction requests might require a new layer or type of authentication to ensure it is legitimate and reflecting the desired activity," he said. "Checks and balances will be a prevailing approach to protect both sides, while still enabling the autonomy and efficiency the market desires."
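
A toy version of that check-and-balance logic, with an invented mandate structure: purchases inside the user-approved mandate ride on the agent's standing credential, while variant requests trigger fresh authentication back to the human principal.

```python
def review_agent_transaction(mandate, txn):
    """Per-transaction review replaces point-in-time verification: amounts
    over the mandate's cap or unfamiliar merchants force a step-up to the
    human principal; everything in-bounds proceeds autonomously."""
    if txn["amount"] > mandate["max_amount"]:
        return "step-up: amount exceeds mandate"
    if txn["merchant"] not in mandate["approved_merchants"]:
        return "step-up: unfamiliar merchant"
    return "approve"

mandate = {"max_amount": 500, "approved_merchants": {"acme-air", "grocer-mart"}}
print(review_agent_transaction(mandate, {"amount": 120, "merchant": "acme-air"}))  # approve
print(review_agent_transaction(mandate, {"amount": 900, "merchant": "acme-air"}))  # step-up
```

A real deployment would add behavioral baselines and delegation chains, but the design choice is the one Ransom describes: autonomy by default within bounds, friction only for variants.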

Daily Tech Digest - January 23, 2026


Quote for the day:

"Strong convictions precede great actions." -- James Freeman Clarke



90% of companies are woefully unprepared for quantum security threats

Companies shouldn't wait, Bain warned, pointing to rapid progress made by IBM, Google, and other industry leaders on this front. "At a certain threshold, quantum computing will be able to easily and quickly break asymmetric cryptography protocols such as Rivest-Shamir-Adleman (RSA), Diffie-Hellman (DH), and elliptic-curve cryptography (ECC) and reduce the time required, weakening symmetric cryptography such as advanced encryption standard (AES) and hashing functions," ... The highest impact will be on secure keys and tokens, digital certificates, authentication protocols, data encrypted at rest, and even network security and identity access management (IAM) tools. Essentially, anything currently relying on encryption. Beyond that, quantum computing could supercharge malware and make it easier to identify and weaponize "zero day" flaws, Bain warned. Another risk highlighted by security experts is "steal now, crack later" techniques, whereby threat actors harvest data now to decrypt later. ... Companies need a board-led – and funded – roadmap to consider post-quantum risks across their business decision making, ensuring quantum resilience across their own suppliers, existing technology, and even their products. But so far, the Bain survey revealed only 12% of companies are considering quantum readiness as a key factor in procurement and risk assessments.
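
The first step of such a roadmap is usually a cryptographic inventory triaged by quantum impact. The sketch below encodes the split the Bain quote describes (asymmetric schemes break outright, symmetric schemes are weakened); the asset names are invented, and unknown algorithms are treated as broken to stay conservative.

```python
# Impact of a cryptographically relevant quantum computer, per the article:
# asymmetric schemes break; symmetric ciphers and hashes are weakened.
QUANTUM_IMPACT = {
    "RSA": "broken", "DH": "broken", "ECC": "broken",
    "AES-128": "weakened", "AES-256": "weakened", "SHA-256": "weakened",
    "ML-KEM": "resistant",  # NIST's standardized post-quantum KEM (FIPS 203)
}

def triage(inventory):
    """Bucket an asset->algorithm inventory for a PQC migration roadmap.
    'Broken' assets are the harvest-now-decrypt-later targets to fix first;
    algorithms we don't recognize land there too, conservatively."""
    buckets = {"broken": [], "weakened": [], "resistant": []}
    for asset, algo in inventory.items():
        buckets[QUANTUM_IMPACT.get(algo, "broken")].append(asset)
    return buckets
```

Even a spreadsheet-level triage like this makes the 12% procurement gap concrete: any supplier whose product lands in the "broken" bucket with no migration plan is a quantifiable risk.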


The New Rules of Work: What a global HR leader reveals about modern talent

The impact of AI on the workforce is a subject Sonia has thought deeply about, especially as it relates to entry-level talent. “There’s always been a question about repetitive engineering tasks—whether these should be done by engineers or by diploma holders. Now, with AI in the picture, many of these tasks will be automated,” she says. Rather than seeing this as a threat, Kutty believes it frees up human talent to focus on innovation and problem-solving. “Our true value at Quest Global comes from leveraging innovation to solve the toughest engineering problems. AI will allow us to do more of this meaningful work.” ... While the company offers AI-based courses and certifications, Kutty emphasises the importance of fostering a mindset of adaptability and systems thinking. “We call it nurturing ‘polymath engineers’—professionals who can think broadly, adapt to new challenges, and learn continuously,” she says. ... As the engineering and R&D sector prepares for rapid growth, Kutty identifies leadership development as her biggest challenge—and her greatest responsibility. “We need strong leaders who understand this industry and are ready to step up when the time comes. Planning for leadership succession keeps me up at night. It’s critical for our continued success.” On the other hand, client expectations have evolved alongside technological advances. “In the past, clients would tell us exactly what they wanted. Now, they expect us to tell them what’s possible with AI and technology. They see us as partners in innovation, not just service providers,” Kutty observes.


Work-from-office mandate? Expect top talent turnover, culture rot

There is value in cross-functional teams working together in person, says Lawrence Wolfe, CTO at marketing firm Converge. “When teams meet for architecture sessions, design sprints, or incident response, the pace of progress, as well as the level of clarity, may increase simply because being in-person caters to the way most people in the business interact,” he says. However, there are potential downsides for IT leaders, with strict work-from-office policies making it more difficult to attract and retain top IT talent. ... Despite possible resistance, it makes sense for some IT jobs to be tied to an office, says Lena McDearmid, founder and CEO of culture and leadership advisory firm Wryver. Some IT roles, including device provisioning, network operations, and conference room IT support, are better done in person, she notes. She sees some other benefits in specific situations. “In-person work is genuinely valuable for onboarding and mentoring early-career technologists, especially when learning how the organization actually operates, not just how the codebase works,” McDearmid says. “It’s also powerful when teams need to think together in high-bandwidth ways: whiteboards, war rooms, architecture reviews, incident response, or when solving messy, cross-functional problems.” ... IT leaders enforcing in-person work mandates can also focus on making the workplace a real place to collaborate, she adds. CIOs can align office space, meeting schedules, and in-office days so they reinforce the goals of collaboration and knowledge sharing, Wettemann adds.


Rethinking IT leadership to unlock the agility of ‘teamship’

Rather than waiting for the leader to set the pace, the best teams coach one another, challenge one another, co-elevate one another, and move faster, because they and their leaders have built cultures where candor is a shared responsibility. For CIOs navigating the messy middle of AI, modernization, and talent transformation, this shift from leadership to what Ferrazzi calls “teamship” may be the most important upgrade of all. ... The No. 1 shift is to move from leadership to teamship. That means stop thinking of leadership as a hub and spoke. Don’t think about what you need to give feedback on, how you need to hold people accountable, how you need to do this or that. Instead, think about how you get your team to step up and meet each other, to give each other feedback, to hold each other’s energy up. Get out of the center and expect your team to step up. ... To be effective, stress testing needs to be positioned as a service to the person who’s giving the project update. We’re not trying to make them look bad or catch them in what they’re doing wrong. The feedback should be offered and received as data, with no presumption that they have to act on it. ... That fear is rooted in a misunderstanding of how high-performing teams actually work. In traditional leadership models, accountability flows upward: People worry about what the boss will think. In teamship, accountability flows sideways: People worry about letting their peers down.


The Upside Down is Real: What Stranger Things Teaches Us About Modern Cybersecurity

The Upside Down’s danger lies in the unseen portals – the gates and rifts – that allow its monstrous inhabitants, like the Demogorgon and the Mind Flayer, to cross over and wreak havoc in the seemingly safe, familiar world of Hawkins. Today, nearly every business’s hidden reality is its extended attack surface. It’s the sprawling, complex, and often unmanaged network of IT, OT, IoT, medical, cloud systems and beyond that modern organizations rely on. ... For the CISO and security team, this translates directly to the need for full, continuous visibility across every single connected device and system to protect the entire attack surface and manage their organization’s cyber risk exposure in real time. Like the Dungeons and Dragons analogies the kids use to understand the creatures and their tactics, security teams rely on context and intelligence – risk scoring, vulnerability prioritization, and threat analysis – to understand how an asset is connected, why it is vulnerable, and what the most effective countermeasure is. ... First and foremost, cybersecurity requires teamwork, particularly through the fusion of IT, OT, security and business leadership so that they work from a unified view of any risks at hand. It also demands persistence from the dedicated security professionals protecting our digital infrastructure. Most of all, cybersecurity needs to be a proactive and preemptive effort where risk exposures are continuously monitored and threats can be stopped before they ever fully manifest.


Shadow AI: The emerging enterprise risk that can no longer be ignored

With regulatory frameworks tightening and emerging national standards, unsanctioned AI activity can quickly become a governance liability. Instead of reactive controls, organisations are now moving toward multi-layered visibility frameworks: monitoring external AI calls, classifying enterprise assets by sensitivity and tracking unmanaged AI usage. Forward-looking teams are even translating these metrics into financial exposure scores, linking AI misuse to operational, reputational and regulatory impact. Assigning monetary value to Shadow AI risk has proven effective for prioritising mitigation at leadership levels. ... A structured foundation is essential, comprised of trusted assessment frameworks, tested architectural blueprints and scalable AI operating models. Some organisations are pairing these with comprehensive training programs to build AI-literate leaders and teams, ensuring governance evolves alongside capability. This reflects a broader shift: responsible AI has now become the foundation of durable competitive advantage. ... Regulators, global partners and enterprise clients are seeking evidence of formal AI governance models, not just intent. For example, as per the Digital India Act, sectoral data localisation rules and global regulatory momentum are prompting enterprises to strengthen AI auditability, model documentation and workforce training. For many organisations, AI governance has moved from an operational task to a board-level agenda. 
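
One way such a financial exposure score can be assembled is a per-event sum over data sensitivity and volume, with a penalty for unsanctioned destinations. The weights below are entirely invented for illustration; real programs would calibrate them against regulatory fines and breach-cost data.

```python
# Illustrative per-record exposure cost by data classification (invented).
SENSITIVITY_COST = {"public": 0, "internal": 5, "confidential": 500, "regulated": 5000}

def exposure_score(events):
    """Each event records what left the building: the data classification,
    the record count, and whether the destination AI service was sanctioned.
    Unsanctioned use doubles the exposure, pushing it up the triage queue."""
    total = 0
    for e in events:
        cost = SENSITIVITY_COST[e["classification"]] * e["records"]
        total += cost * 2 if not e["sanctioned"] else cost
    return total
```

The value of the exercise is less the number itself than the conversation it enables: a monetary figure per business unit is something leadership can prioritize against other risks.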


Ireland to make age checks through government app mandatory for social media

The plan is unprecedented among governments legislating online safety, in that it makes downloading the app, designed by the Government’s chief information officer, mandatory for age assurance. Per the Extra report, “if adults refuse to download the digital wallet, they will no longer be able to access their existing social media accounts.” “Mr. O’Donovan said the process of downloading the app might inconvenience someone for ‘three or four minutes’ but this was a small ask in order to protect children online.” O’Donovan has called the harmful effects of social media and other online content on youth a “severe public health issue.” ... Concerns about age assurance technology persist among privacy rights activists. Since age verification and facial age estimation often involve the processing of biometrics, the potential for sensitive data to be exposed is high. And requiring the process to run through a government product is likely to agitate fears about mass surveillance. O’Donovan says the risk to Ireland’s youth is higher. ... “At the end of the day, if the companies have a social conscience and are interested in the protection of children online, I don’t see why anybody who wouldn’t be trading in Ireland, not just domiciled in Ireland, wouldn’t adopt the format that we’re proposing,” he says. “Some of them do have, you know, something bordering on a social conscience, which is to be welcomed. But others don’t.”


Secure networking: the foundation for the AI era

Global networks have been under siege for years, but recent attacks are more sophisticated and move at unprecedented speed. Many organizations are still relying on outdated infrastructure, with Cisco research revealing that 48% of network assets worldwide are aging or obsolete. This creates vulnerabilities that attackers eagerly exploit. It’s no longer enough to patch and maintain; a fundamental shift in strategy is required. ... Modern networks typically span solutions and services from a range of different vendors, creating layers of complexity that can quickly overwhelm even experienced IT teams. This complexity often translates into vulnerability, especially when secure configurations aren’t consistently implemented or maintained. For many, simplicity and automation are now mission critical. Businesses increasingly need networks where secure configurations, protocols, and features are enabled by default and adapt automatically. ... Organizations now face the challenge of not only detecting threats quickly, but also responding before vulnerabilities can be exploited. There is an urgent need to reduce the attack surface, remove legacy insecure features, and introduce advanced capabilities for detection and response. ... The next generation of security requires networks to seamlessly provide identity management, deep visibility, integrated detection and protection, and streamlined management, while also incorporating advanced technologies like post-quantum cryptography. 


Ransomware gang’s slip-up led to data recovery for 12 US firms

Researchers at Florida-based Cyber Centaurs said Thursday they took advantage of a lapse in operational security by the gang: They found artifacts left behind by Restic, a legitimate open-source backup utility the gang uses to encrypt and exfiltrate victim data into cloud storage environments it controls. The assumption that the gang regularly re-uses Restic-based infrastructure led investigators to an unnamed cloud storage provider where stolen data was dumped. ... While Restic wasn’t used for exfiltration in this particular attack, Cyber Centaurs suspected the gang regularly used it, based on patterns seen in other incidents. It also suspected the infrastructure the crooks used was unlikely to be dismantled even after negotiations ended or payments were made by corporate victims. With that in mind, the incident response team developed a custom enumeration script to identify patterns pointing to the S3-style cloud bucket infrastructure that the stolen data might be going to. The script ran through a curated list of candidate repository identifiers derived from previously observed Restic artifacts. For each candidate, environment variables were set to match the configuration style used by the threat actor, including the repository endpoint and encryption password. Restic was then instructed to list available snapshots in a structured format, enabling investigators to analyze results without interacting with the underlying data.
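
The enumeration approach described above can be sketched as follows. RESTIC_REPOSITORY and RESTIC_PASSWORD are Restic's real configuration variables and `restic snapshots --json` is a real command, but the endpoint, candidate repository identifiers, and password handling here are hypothetical stand-ins for the investigators' curated list, not their actual script.

```python
import json
import os
import subprocess

def restic_env(endpoint, repo_id, password):
    """Build the environment for probing one candidate repository,
    mirroring the actor's S3-style configuration."""
    env = dict(os.environ)
    env["RESTIC_REPOSITORY"] = "s3:" + endpoint + "/" + repo_id
    env["RESTIC_PASSWORD"] = password
    return env

def list_snapshots(env):
    """List a repository's snapshots as structured JSON, so results can be
    analyzed without interacting with the underlying stolen data. Returns
    None when the repo is absent or the password does not match."""
    proc = subprocess.run(["restic", "snapshots", "--json", "--no-lock"],
                          env=env, capture_output=True, text=True)
    return json.loads(proc.stdout) if proc.returncode == 0 else None

def enumerate_candidates(endpoint, candidates, password):
    """Probe a curated candidate list and yield only the repositories
    that exist and open with the observed password."""
    for repo_id in candidates:
        snaps = list_snapshots(restic_env(endpoint, repo_id, password))
        if snaps:
            yield repo_id, len(snaps)
```

The key design point, as in the incident, is read-only reconnaissance: listing snapshot metadata confirms which buckets hold victim data without pulling or modifying any of it.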


The Real Attack Surface Isn’t Code Anymore — It’s Business Users

Traditional AppSec programs are optimized for code stored in repositories, pushed through pipelines, and deployed through CI/CD, not for no-code apps, connectors, and automations created on platforms like Power Platform, ServiceNow, Salesforce, and UiPath. Meanwhile, most organizations assume business-user automations are simple, low-risk, and limited in scope. The reality is more complex. Citizen developers now outnumber traditional software developers by an order of magnitude. Plus, they are wiring together data sources, triggering multi-system workflows, and calling APIs, not just building basic macros or departmental utilities. Because these automations are created outside engineering governance, traditional monitoring tools never see them. ... What emerges is a shadow layer of business logic that sits entirely outside the boundaries of traditional AppSec, DevSecOps, and identity programs. As long as ownership remains fragmented and discovery elusive, security debt continues to grow unchecked. ... We’re entering an era where the most dangerous vulnerabilities aren’t in the code AppDev teams write, but in the thousands of workflows and automations business users build on their own. The sooner organizations recognize and confront the invisible no-code estate, the faster they can reduce the security debt accumulating inside their infrastructure.

Daily Tech Digest - January 19, 2026


Quote for the day:

"Stop judging people and start understanding people; everyone's got a story." -- @PilotSpeaker



Stop calling it 'The AI bubble': It's actually multiple bubbles, each with a different expiration date

The AI ecosystem is actually three distinct layers, each with different economics, defensibility and risk profiles. Understanding these layers is critical, because they won't all pop at once. ... The most vulnerable segment isn't building AI — it's repackaging it. These are the companies that take OpenAI's API, add a slick interface and some prompt engineering, then charge $49/month for what amounts to a glorified ChatGPT wrapper. Some have achieved rapid initial success, like Jasper.ai, which reached approximately $42 million in annual recurring revenue (ARR) in its first year by wrapping GPT models in a user-friendly interface for marketers. But the cracks are already showing. ... Economic researcher Richard Bernstein points to OpenAI as an example of the bubble dynamic, noting that the company has made around $1 trillion in AI deals, including a $500 billion data center buildout project, despite being set to generate only $13 billion in revenue. The divergence between investment and plausible earnings "certainly looks bubbly," Bernstein notes. ... But infrastructure has a critical characteristic: It retains value regardless of which specific applications succeed. The fiber optic cables laid during the dot-com bubble weren’t wasted — they enabled YouTube, Netflix and cloud computing. Twenty-five years ago, the original dot-com bubble burst after debt financing built out fiber-optic cables for a future that had not yet arrived, but that future eventually did arrive, and the infrastructure was there waiting.


Modernizing Network Defense: From Firewalls to Microsegmentation

For many years, network security has been based on the concept of a perimeter defense, likened to a fortified boundary. The network perimeter functioned as a protective barrier, with a firewall serving as the main point of access control. Individuals and devices within this secured perimeter were considered trustworthy, while those outside were viewed as potential threats. The "perimeter-centric" approach was highly effective when data, applications, and employees were all located within the physical boundaries of corporate headquarters. In the current environment, however, this model is considered not only obsolete but also poses significant risks. ... Microsegmentation significantly mitigates the impact of cyberattacks by transitioning from traditional perimeter-based security to detailed, policy-driven isolation at the level of individual workloads, applications, or containers. By establishing secure enclaves for each asset, it ensures that if a device is compromised, attackers are unable to traverse laterally to other systems. ... Microsegmentation solutions offer detailed insights into application dependencies and inter-server traffic flows, uncovering long-standing technical debt such as unplanned connections, outdated protocols, and potentially risky activities that may not be visible to perimeter-based defenses. ... One significant factor deterring organizations from implementing microsegmentation is the concern regarding increased complexity. 
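
Conceptually, the enforcement model reduces to an explicit allow-list evaluated per workload pair, with deny as the default. The labels and rules below are invented for illustration; real platforms derive them from the observed traffic flows and dependency maps the article mentions.

```python
# Illustrative microsegmentation policy: each rule names a source label,
# a destination label, and a port; anything not listed is denied.
POLICY = [
    {"src": "frontend", "dst": "orders-api", "port": 443},
    {"src": "orders-api", "dst": "orders-db", "port": 5432},
]

def is_allowed(src, dst, port):
    """Default-deny at workload granularity: a compromised frontend can
    still reach orders-api, but cannot hop laterally to orders-db."""
    return any(r["src"] == src and r["dst"] == dst and r["port"] == port
               for r in POLICY)
```

This inversion is the whole point: instead of enumerating what to block at a perimeter, the policy enumerates the small set of flows the application actually needs, and lateral movement falls out as a side effect of everything else being denied.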


Human-in-the-loop has hit the wall. It’s time for AI to oversee AI

This is not a hypothetical future problem. Human-centric oversight is already failing in production. When automated systems malfunction — flash crashes in financial markets, runaway digital advertising spend, automated account lockouts or viral content — failure cascades before humans even realize something went wrong. In many cases, humans were “in the loop,” but the loop was too slow, too fragmented or too late. The uncomfortable reality is that human review does not stop machine-speed failures. At best, it explains them after the damage is done. Agentic systems raise the stakes dramatically. Visualizing a multistep agent workflow with tens or hundreds of nodes often results in dense, miles-long action traces that humans cannot realistically interpret. As a result, manually identifying risks, behavior drift or unintended consequences becomes functionally impossible. ... Delegating monitoring tasks to AI does not eliminate human accountability. It redistributes it. This is where trust often breaks down. Critics worry that AI governing AI is like trusting the police to govern themselves. That analogy only holds if oversight is self-referential and opaque. The model that works is layered, with a clear separation of powers. ... Humans shift from reviewing outputs to designing systems. They focus on setting operating standards and policies, defining objectives and constraints, designing escalation paths and failure modes, and owning outcomes when systems fail.
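The "layered separation of powers" the excerpt describes can be illustrated as a machine-speed monitor that enforces human-set constraints on every agent action and escalates only the violations to a human-owned queue. This is a toy sketch under assumed names (`MAX_SPEND_PER_ACTION`, `monitor`, the action dict shape are all hypothetical), not a real governance framework.

```python
# Hypothetical sketch of layered AI oversight: humans set the constraints
# and own the escalation path; a fast automated monitor applies them to
# every action, so humans review exceptions, not raw action traces.

MAX_SPEND_PER_ACTION = 1_000  # hard constraint defined by humans

escalation_queue = []  # violations land here for human judgment

def monitor(action: dict) -> bool:
    """Return True if the action may proceed; block and escalate otherwise."""
    if action.get("spend", 0) > MAX_SPEND_PER_ACTION:
        escalation_queue.append({"action": action, "reason": "spend limit"})
        return False
    return True

assert monitor({"type": "bid", "spend": 250})          # proceeds silently
assert not monitor({"type": "bid", "spend": 50_000})   # blocked at machine speed
assert escalation_queue[0]["reason"] == "spend limit"  # humans see only this
```

The design choice this mirrors is the one the article argues for: the human role moves from reviewing every output to defining `MAX_SPEND_PER_ACTION`-style policies and owning what happens when the queue fills up.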


Building leaders in the age of AI

The leaders who end up thriving in the AI era will be those who blend human depth with digital fluency. They will use AI to think with them, not for them. And they will treat this AI moment not as a threat to their leadership but as an opportunity to focus on those elements of their portfolios that only humans can excel at. ... Leaders will need to give teams a set of guardrails (clear values and decision rights) and establish new definitions of quality while fostering a sense of trust and collaboration as new challenges emerge and business conditions evolve. ... Aspiration, judgment, and creativity are "only human" leadership traits—and the characteristics that can provide an irreplaceable competitive edge, especially when amplified using AI. It's therefore incumbent upon organizations to actively identify and develop the individuals who demonstrate critical intrinsics like resilience, eagerness to learn from mistakes, and the ability to work in teams that will increasingly include both humans and AI agents. ... Organizations must actively cultivate core leadership qualities such as wisdom, empathy, and trust—and they must give the development of these attributes the same attention they give to new IT systems or operating models. That will mean providing time for leaders to do the inner work required to lead others effectively—that is, reflecting, sharing insights with other C-suite leaders, and otherwise considering what success will mean for themselves and the organization.


The Rising Phoenix of Software Engineering

Software is undergoing a tectonic transformation. Modern applications are no longer hand-crafted from scratch. They are assembled from third-party components, APIs, open-source packages, machine-learning models, and now AI-generated snippets. Artificial intelligence, low-code tools, Open-Source Software (OSS), and reusable libraries have made the act of writing new code less central to building software than ever before. ... In this new era, the primary challenge is not building software faster, cheaper, or more feature-rich. It is engineering software safely and predictably in a hostile ecosystem. ... Software engineering, as a discipline, must rise again — not as a metaphor for resilience, but as a mandate for survival. ... The future does not eliminate developers or coders. Assembling, customizing, and scripting third-party components will remain critical. But the accountability layer must shift upward, to professionals trained to reason about system safety, dependencies, and security by design. In other words, software engineers must reemerge as true engineers responsible for understanding not only how their code works, but how and where it runs… and most critically how to secure it. ... To engineer software responsibly, practitioners must model threats, evaluate anti-tamper capabilities, and verify that each dependency meets a baseline of assurance. These tasks were historically reserved for penetration testers or quality assurance (QA) teams. 
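One concrete form of "verifying that each dependency meets a baseline of assurance" is pinning and checking cryptographic digests of fetched artifacts before use. The sketch below is a minimal illustration of that idea; the manifest contents and the `verify_dependency` helper are hypothetical, not a real packaging tool's API.

```python
import hashlib

# Illustrative dependency-assurance check: a build refuses any artifact
# whose bytes do not match a pinned SHA-256 digest. The manifest below is
# a made-up example, not a real lockfile format.

PINNED = {
    "left-pad-1.3.0.tgz": hashlib.sha256(b"trusted bytes").hexdigest(),
}

def verify_dependency(name: str, data: bytes) -> bool:
    """Accept the artifact only if its digest matches the pinned value."""
    expected = PINNED.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

assert verify_dependency("left-pad-1.3.0.tgz", b"trusted bytes")
assert not verify_dependency("left-pad-1.3.0.tgz", b"tampered bytes")
```

Real ecosystems implement this with lockfiles and signed attestations, but the underlying invariant is the same: the bytes you run must provably be the bytes you reviewed.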


The concerning cyber-physical security disconnect

The background of many physical security professionals is in the military and law enforcement, fields that change much more slowly but are known for extensive training. The nature of the threats they need to defend against is evolving at a slower pace, and destructive, kinetic threats remain a primary concern. ... The focus of cybersecurity is much more on the insides of an organization. Detection is supposed to catch attackers lurking on compromised devices. Response activities have to consider the entire infrastructure rather than individual hosts. Security measures are spread out across the network, taking a defense-in-depth approach. Physical security is much more outward looking, trying to prevent threats from entering. Detection systems exist within premises, but focus on the outer layers. Response activities are focused on evicting individual threats or denying their access. The majority of security efforts focuses on the perimeter. ... Companies often handle both topics in different teams. Conferences and publications may feature both topics, but often focus on one and rarely address their interdependence. Security assessments like pentests and red team exercises sometimes include a physical component that tends to focus on social engineering without involving deep physical security expertise. ... Risks, especially in the form of human threat actors, will always look for the easiest way to materialize. Therefore, they will attack physical assets via their digital components and vice versa, if these flanks are not protected.


Architecting Agility: Decoupled Banking Systems Will Be the Key to Success in 2026

The banking industry is undergoing an evolutionary and market-driven shift. Digital banking systems, once rigid and monolithic, are being reimagined through decoupled architecture, AI-driven intelligence, programmatic technology consumption, and fintech innovation and partnerships. ... Delay is no longer an option — the future of banking is already being built today. To capitalize on these innovations, tech leaders must prioritize digital core banking agility, ensuring integration with new innovations and adapting to evolving market demands. ... Identify suspicious patterns in real time. As illustrated in the figure, a decoupled risk analytics gateway and prompt engine streamlines regulatory reporting and ensures adherence to evolving rules (regtech). Whitney Morgan, vice president at Skaleet, a fintech provider, states that generative AI takes this a notch further by automating regulatory reporting and accelerating product development. ... AI-enabled risk management empowers banks to detect anomalies across large transaction datasets with the speed and accuracy that manual processes can't match. Risk modeling and stress testing will enhance credit risk scoring, market risk simulations, and scenario analysis that drive preemptive decisions and new revenue options. ... The banking and financial services innovation race, with challenges in adoption and capturing market advantages, beckons leaders to be nimble and, at the same time, stay focused on the fundamentals. CIOs, CTOs, and other tech leaders can take proactive steps to strike the right balance.


Key Management Testing: The Most Overlooked Pillar of Crypto Security

The majority of security testing in crypto projects focuses on code correctness or operational attacks. Key management, however, is mainly treated as a procedural issue rather than a technical problem. This is a dangerous false belief. Entropy sources, hardware integrity, and cryptographic soundness are central to key generation. Weak randomness, broken device software, or a corrupted environment may produce keys that seem valid but are alarmingly vulnerable to attack. When an exchange generates millions of new wallet addresses for users, the mechanisms that create them must be watertight. Testing should also cover key storage. ... The recovery process is one of the most vulnerable areas of key management, yet it is discussed least. Backup and restoration are prone to human error, improperly configured storage, or unsafe transmission. The unfortunate fact about crypto is that recovery mechanisms can be either a saviour or a disaster. Recovery phrases, encrypted backups, and distributed shares need to be repeatedly tested in a real-world, adversarial environment. ... End-to-end lifecycle testing, automatic verification of key states, automated attack simulations and automated recovery protocols that self-heal will be the order of the day. The industry has already reached the point where key management is no longer a concealed or merely supporting part of the security strategy. 
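Two of the checks the excerpt calls for — that a batch of generated keys contains no collisions, and that the randomness source behaves like one — can be automated cheaply. The sketch below is illustrative: the thresholds are assumptions, and a real pipeline would use a full statistical test suite rather than a single byte-entropy estimate.

```python
import math
import secrets
from collections import Counter

# Sketch of two key-generation sanity checks: (1) no duplicate keys in a
# batch, (2) byte-level Shannon entropy close to the 8 bits/byte expected
# of a sound randomness source. Batch size and thresholds are illustrative.

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

keys = [secrets.token_bytes(32) for _ in range(1000)]

assert len(set(keys)) == len(keys)          # no collisions in the batch
assert byte_entropy(b"".join(keys)) > 7.9   # near-uniform byte distribution

# A catastrophically broken generator fails instantly: constant output
# has zero entropy, even though each "key" is the right length.
assert byte_entropy(b"\x00" * 32_000) == 0.0
```

Checks like these catch only gross failures (stuck RNGs, duplicated seeds), which is exactly the class of defect that has historically drained wallets; subtler biases still require dedicated statistical and hardware testing.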


Inside the Chip: How Hardware Root of Trust Shifts the Odds Back to Cyber Defenders

Defenders often lack direct control or visibility into the hardware layer where workloads actually execute. This abstraction can obscure low-level threats, allowing attackers to manipulate telemetry, disable software protections, or persist beyond reboots. Crucially, modern attacks are not brute force attempts to break encryption or overwhelm defences. They exploit the assumptions built into how systems start, update, and prove what’s genuine. ... At the centre of this shift is Hardware Root of Trust (HRoT): a security architecture that embeds trust directly into the hardware layer of a device. US National Institute of Standards and Technology (NIST) defines it as “an inherently trusted combination of hardware and firmware that maintains the integrity of information.” In practice, HRoT serves as the anchor for system trust from the moment power is applied. ... For CISOs, HRoT represents an opportunity to strengthen resilience, meet regulatory demands, and finally realise true zero trust. From a resilience standpoint, it changes the balance between prevention and response. By validating integrity from power-on and continuously during operation, it reduces reliance on post-incident investigation and recovery. Compromised devices and systems are stopped early, limiting blast radius and disruption. Regulators are already reinforcing this direction. Frameworks such as the US Department of Defense’s CMMC explicitly highlight HRoT as a stronger foundation for assurance. 
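The "anchor for system trust from the moment power is applied" works by chaining measurements: each boot stage is hashed into a running value before it executes, so any tampered stage changes the final measurement and fails attestation. The sketch below imitates the TPM PCR-extend pattern in plain Python; the stage contents and function names are illustrative, not a real TPM interface.

```python
import hashlib

# Sketch of measured boot as anchored by a hardware root of trust: a
# running measurement is extended with the hash of each stage, in order,
# starting from a value fixed in hardware. Stage names are made up.

def extend(measurement: bytes, stage: bytes) -> bytes:
    """PCR-style extend: new = H(old || H(stage))."""
    return hashlib.sha256(measurement + hashlib.sha256(stage).digest()).digest()

def measure_boot(stages: list) -> bytes:
    m = b"\x00" * 32  # initial measurement, immutable in silicon
    for stage in stages:
        m = extend(m, stage)
    return m

golden = measure_boot([b"bootloader v2", b"kernel 6.8", b"initrd"])

# Tampering with any stage -- even one byte -- yields a different final
# value, so a verifier comparing against the golden measurement rejects it.
assert measure_boot([b"bootloader v2", b"kernel 6.8 EVIL", b"initrd"]) != golden
assert measure_boot([b"bootloader v2", b"kernel 6.8", b"initrd"]) == golden
```

Because the extend operation is one-way and order-sensitive, malware cannot "un-measure" itself after loading — which is why the excerpt frames HRoT as shifting detection from post-incident forensics to power-on verification.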


What AI skills job seekers need to develop in 2026

One of the earliest AI skills involved prompt engineering — crafting the right questions to get the AI-generated results you need. But that baseline skill is being pushed aside by "context engineering." Think of context engineering as prompt engineering on steroids; it involves developing prompts that can deliver consistent and predictable answers. Ideally, "every time you ask the same question, you always get the same answer," said Bekir Atahan, vice president at Experis Services, a division of Manpower Group. That skill is critical because AI models are changing quickly, and the answers they produce can differ from day to day. Context engineering is aimed at ensuring consistent outputs despite a rapidly evolving AI ecosystem. ... "Beyond algorithms and coding, the next wave of AI talent must bridge technology, governance and organizational change. The most valuable AI skill in 2026 isn't coding, it's building trust," Seth said. Along those lines, he recommended that job seekers immerse themselves in the technology beyond simply taking a class. "Instead of a course, go to any conference," Seth said. ... In hiring, genuine AI capability shows up through curiosity and real experience, Blackford said. "Strong candidates can talk honestly about something they tried, what did not work, and what they learned," he said ... "Things are evolving at such a fast pace that there will be no perfect set of skills," said Seth. "I would say more than skills, attitudes are more important — that adaptability to change, how quick you are to learn things."
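The difference between bare prompting and context engineering can be made concrete: the question is wrapped in pinned, versioned context (role, rules, reference data) plus deterministic decoding settings, so the same question always produces the same request. This is an illustrative sketch only — the context fields, `build_request` helper, and parameter names are assumptions, not any vendor's API.

```python
import json

# Sketch of context engineering: a versioned context bundle and fixed
# decoding settings are attached to every question, so identical questions
# yield byte-identical requests. All field names here are hypothetical.

CONTEXT_V3 = {
    "role": "You answer only from the provided policy excerpts.",
    "rules": ["Cite the excerpt ID.", "Answer 'unknown' if not covered."],
    "excerpts": {"POL-7": "Refunds are issued within 14 days."},
}

def build_request(question: str) -> dict:
    return {
        "system": json.dumps(CONTEXT_V3, sort_keys=True),  # pinned context
        "user": question,
        "temperature": 0,  # deterministic decoding
        "seed": 42,
    }

# Identical questions produce identical requests -- the engineered half of
# "same question, same answer." Model-side determinism is a separate matter.
assert build_request("When are refunds issued?") == build_request("When are refunds issued?")
```

The design point is that everything except the user's question is under version control (`CONTEXT_V3`), which is what lets outputs stay consistent even as the surrounding model ecosystem shifts.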