Daily Tech Digest - January 23, 2026


Quote for the day:

"Strong convictions precede great actions." -- James Freeman Clarke



90% of companies are woefully unprepared for quantum security threats

Companies shouldn't wait, Bain warned, pointing to rapid progress made by IBM, Google, and other industry leaders on this front. "At a certain threshold, quantum computing will be able to easily and quickly break asymmetric cryptography protocols such as Rivest-Shamir-Adleman (RSA), Diffie-Hellman (DH), and elliptic-curve cryptography (ECC) and reduce the time required, weakening symmetric cryptography such as advanced encryption standard (AES) and hashing functions," ... The highest impact will be on secure keys and tokens, digital certificates, authentication protocols, data encrypted at rest, and even network security and identity access management (IAM) tools. Essentially, anything currently relying on encryption. Beyond that, quantum computing could supercharge malware and make it easier to identify and weaponize "zero day" flaws, Bain warned. Another risk highlighted by security experts is "steal now, crack later" techniques, whereby threat actors harvest data now to decrypt later. ... Companies need a board-led – and funded – roadmap to consider post-quantum risks across their business decision making, ensuring quantum resilience across their own suppliers, existing technology, and even their products. But so far, the Bain survey revealed only 12% of companies are considering quantum readiness as a key factor in procurement and risk assessments.
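A readiness roadmap starts with knowing where quantum-vulnerable cryptography actually lives. Below is a minimal sketch of that kind of inventory pass, assuming Python's third-party cryptography package and a local directory of PEM certificates; the directory path and output format are illustrative only.

```python
# Minimal sketch of a crypto-inventory pass: flag certificates whose public
# keys rely on quantum-vulnerable asymmetric algorithms (RSA, DH, ECC).
# Assumes the third-party "cryptography" package and PEM files on disk.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import dh, ec, rsa

QUANTUM_VULNERABLE = (rsa.RSAPublicKey, dh.DHPublicKey, ec.EllipticCurvePublicKey)

def audit_certs(cert_dir: str) -> list[tuple[str, str]]:
    """Return (filename, key type) for every cert using a vulnerable key."""
    findings = []
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        key = cert.public_key()
        if isinstance(key, QUANTUM_VULNERABLE):
            findings.append((pem.name, type(key).__name__))
    return findings

if __name__ == "__main__":
    for name, key_type in audit_certs("./certs"):  # illustrative path
        print(f"{name}: {key_type} is not post-quantum safe")
```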


The New Rules of Work: What a global HR leader reveals about modern talent

The impact of AI on the workforce is a subject Sonia has thought deeply about, especially as it relates to entry-level talent. “There’s always been a question about repetitive engineering tasks—whether these should be done by engineers or by diploma holders. Now, with AI in the picture, many of these tasks will be automated,” she says. Rather than seeing this as a threat, Kutty believes it frees up human talent to focus on innovation and problem-solving. “Our true value at Quest Global comes from leveraging innovation to solve the toughest engineering problems. AI will allow us to do more of this meaningful work.” ... While the company offers AI-based courses and certifications, Kutty emphasises the importance of fostering a mindset of adaptability and systems thinking. “We call it nurturing ‘polymath engineers’—professionals who can think broadly, adapt to new challenges, and learn continuously,” she says. ... As the engineering and R&D sector prepares for rapid growth, Kutty identifies leadership development as her biggest challenge—and her greatest responsibility. “We need strong leaders who understand this industry and are ready to step up when the time comes. Planning for leadership succession keeps me up at night. It’s critical for our continued success.” On the other hand, client expectations have evolved alongside technological advances. “In the past, clients would tell us exactly what they wanted. Now, they expect us to tell them what’s possible with AI and technology. They see us as partners in innovation, not just service providers,” Kutty observes.


Work-from-office mandate? Expect top talent turnover, culture rot

There is value in cross-functional teams working together in person, says Lawrence Wolfe, CTO at marketing firm Converge. “When teams meet for architecture sessions, design sprints, or incident response, the pace of progress, as well as the level of clarity, may increase simply because being in-person caters to the way most people in the business interact,” he says. However, there are potential downsides for IT leaders, with strict work-from-office policies making it more difficult to attract and retain top IT talent. ... Despite possible resistance, it makes sense for some IT jobs to be tied to an office, says Lena McDearmid, founder and CEO of culture and leadership advisory firm Wryver. Some IT roles, including device provisioning, network operations, and conference room IT support, are better done in person, she notes. She sees some other benefits in specific situations. “In-person work is genuinely valuable for onboarding and mentoring early-career technologists, especially when learning how the organization actually operates, not just how the codebase works,” McDearmid says. “It’s also powerful when teams need to think together in high-bandwidth ways: whiteboards, war rooms, architecture reviews, incident response, or when solving messy, cross-functional problems.” ... IT leaders enforcing in-person work mandates can also focus on making the workplace a real place to collaborate, she adds. CIOs can align office space, meeting schedules, and in-office days so they reinforce the goals of collaboration and knowledge sharing, Wettemann adds.


Rethinking IT leadership to unlock the agility of ‘teamship’

Rather than waiting for the leader to set the pace, the best teams coach one another, challenge one another, co-elevate one another, and move faster, because they and their leaders have built cultures where candor is a shared responsibility. For CIOs navigating the messy middle of AI, modernization, and talent transformation, this shift from leadership to what Ferrazzi calls “teamship” may be the most important upgrade of all. ... The No. 1 shift is to move from leadership to teamship. That means stop thinking of leadership as a hub and spoke. Don’t think about what you need to give feedback on, how you need to hold people accountable, how you need to do this or that. Instead, think about how to get your team to step up and meet each other, to give each other feedback, and to hold each other’s energy up. Get out of the center and expect your team to step up. ... To be effective, stress testing needs to be positioned as a service to the person who’s giving the project update. We’re not trying to make them look bad or catch them in what they’re doing wrong. The feedback should be offered and received as data, with no presumption that they have to act on it. ... That fear is rooted in a misunderstanding of how high-performing teams actually work. In traditional leadership models, accountability flows upward: People worry about what the boss will think. In teamship, accountability flows sideways: People worry about letting their peers down.


The Upside Down is Real: What Stranger Things Teaches Us About Modern Cybersecurity

The Upside Down’s danger lies in the unseen portals – the gates and rifts – that allow its monstrous inhabitants, like the Demogorgon and the Mind Flayer, to cross over and wreak havoc in the seemingly safe, familiar world of Hawkins. Today, nearly every business’s hidden reality is its extended attack surface. It’s the sprawling, complex, and often unmanaged network of IT, OT, IoT, medical, cloud systems and beyond that modern organizations rely on. ... For the CISO and security team, this translates directly to the need for full, continuous visibility across every single connected device and system to protect the entire attack surface and manage their organization’s cyber risk exposure in real time. Like the Dungeons and Dragons analogies the kids use to understand the creatures and their tactics, security teams rely on context and intelligence – risk scoring, vulnerability prioritization, and threat analysis – to understand how an asset is connected, why it is vulnerable, and what the most effective countermeasure is. ... First and foremost, cybersecurity requires teamwork, particularly through the fusion of IT, OT, security and business leadership so that they work from a unified view of any risks at hand. It also demands persistence from the dedicated security professionals protecting our digital infrastructure. Most of all, cybersecurity needs to be a proactive and preemptive effort where risk exposures are continuously monitored and threats can be stopped before they ever fully manifest.
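As a toy illustration of that context-driven prioritization (not any vendor's actual model), the Python sketch below shows how a raw severity score reorders once exposure and business criticality enter the calculation; all weights and asset names are invented.

```python
# Toy risk-scoring sketch: the same CVSS severity ranks differently once
# exposure and asset criticality are factored in. Weights are illustrative.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    cvss: float            # raw vulnerability severity, 0-10
    internet_facing: bool  # exposure context
    criticality: float     # business impact weight, 0-1

def risk_score(a: Asset) -> float:
    exposure = 1.5 if a.internet_facing else 1.0
    return a.cvss * exposure * a.criticality

assets = [
    Asset("intranet-wiki", cvss=9.8, internet_facing=False, criticality=0.3),
    Asset("patient-monitor-gw", cvss=7.5, internet_facing=True, criticality=1.0),
]
# The lower-CVSS but exposed, business-critical asset rises to the top.
for a in sorted(assets, key=risk_score, reverse=True):
    print(f"{a.name}: {risk_score(a):.1f}")
```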


Shadow AI: The emerging enterprise risk that can no longer be ignored

With regulatory frameworks tightening and emerging national standards, unsanctioned AI activity can quickly become a governance liability. Instead of reactive controls, organisations are now moving toward multi-layered visibility frameworks: monitoring external AI calls, classifying enterprise assets by sensitivity and tracking unmanaged AI usage. Forward-looking teams are even translating these metrics into financial exposure scores, linking AI misuse to operational, reputational and regulatory impact. Assigning monetary value to Shadow AI risk has proven effective for prioritising mitigation at leadership levels. ... A structured foundation is essential, comprising trusted assessment frameworks, tested architectural blueprints and scalable AI operating models. Some organisations are pairing these with comprehensive training programs to build AI-literate leaders and teams, ensuring governance evolves alongside capability. This reflects a broader shift: responsible AI has now become the foundation of durable competitive advantage. ... Regulators, global partners and enterprise clients are seeking evidence of formal AI governance models, not just intent. For example, the Digital India Act, sectoral data localisation rules and global regulatory momentum are prompting enterprises to strengthen AI auditability, model documentation and workforce training. For many organisations, AI governance has moved from an operational task to a board-level agenda.
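As a rough illustration of how unmanaged AI usage metrics might be translated into a financial exposure figure, here is a toy Python sketch; the event fields, sensitivity weights and dollar values are assumptions made for illustration, not an established scoring standard.

```python
# Hypothetical sketch: convert Shadow AI usage telemetry into a monetary
# exposure score. Only unsanctioned usage counts; all weights are invented.
SENSITIVITY_COST = {"public": 0, "internal": 50, "confidential": 500, "regulated": 5000}

def exposure_score(events: list[dict]) -> float:
    """events: [{'tool', 'sanctioned', 'data_class', 'records'}, ...]"""
    total = 0.0
    for e in events:
        if e["sanctioned"]:
            continue  # managed usage is governed elsewhere
        total += SENSITIVITY_COST[e["data_class"]] * e["records"]
    return total

events = [
    {"tool": "public-llm-chat", "sanctioned": False, "data_class": "confidential", "records": 120},
    {"tool": "approved-copilot", "sanctioned": True, "data_class": "regulated", "records": 900},
]
print(f"Estimated Shadow AI exposure: ${exposure_score(events):,.0f}")  # $60,000
```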


Ireland to make age checks through government app mandatory for social media

The plan is unprecedented among governments legislating online safety, in that it makes downloading the app, designed by the Government’s chief information officer, mandatory for age assurance. Per the Extra report, “if adults refuse to download the digital wallet, they will no longer be able to access their existing social media accounts.” “Mr. O’Donovan said the process of downloading the app might inconvenience someone for ‘three or four minutes’ but this was a small ask in order to protect children online.” O’Donovan has called the harmful effects of social media and other online content on youth a “severe public health issue.” ... Concerns about age assurance technology persist among privacy rights activists. Since age verification and facial age estimation often involves the processing of biometrics, the potential for sensitive data to be exposed is high. And requiring the process to run through a government product is likely to agitate fears about mass surveillance. O’Donovan says the risk to Ireland’s youth is higher. ... “At the end of the day, if the companies have a social conscience and are interested in the protection of children online, I don’t see why anybody who wouldn’t be trading in Ireland, not just domiciled in Ireland, wouldn’t adopt the format that we’re proposing,” he says. “Some of them do have, you know, something bordering on a social conscience, which is to be welcomed. But ­others don’t.”


Secure networking: the foundation for the AI era

Global networks have been under siege for years, but recent attacks are more sophisticated and move at unprecedented speed. Many organizations are still relying on outdated infrastructure, with Cisco research revealing that 48% of network assets worldwide are aging or obsolete. This creates vulnerabilities that attackers eagerly exploit. It’s no longer enough to patch and maintain; a fundamental shift in strategy is required. ... Modern networks typically span solutions and services from a range of different vendors, creating layers of complexity that can quickly overwhelm even experienced IT teams. This complexity often translates into vulnerability, especially when secure configurations aren’t consistently implemented or maintained. For many, simplicity and automation are now mission critical. Businesses increasingly need networks where secure configurations, protocols, and features are enabled by default and adapt automatically. ... Organizations now face the challenge of not only detecting threats quickly, but also responding before vulnerabilities can be exploited. There is an urgent need to reduce the attack surface, remove legacy insecure features, and introduce advanced capabilities for detection and response. ... The next generation of security requires networks to seamlessly provide identity management, deep visibility, integrated detection and protection, and streamlined management, while also incorporating advanced technologies like post-quantum cryptography. 


Ransomware gang’s slip-up led to data recovery for 12 US firms

Researchers at Florida-based Cyber Centaurs said Thursday they took advantage of a lapse in operational security by the gang: They found artifacts left behind by Restic, a legitimate open source backup utility the gang uses to encrypt and exfiltrate victim data into cloud storage environments it controls. Assuming the gang regularly re-uses Restic-based infrastructure, the researchers were able to find an unnamed cloud storage provider where stolen data was dumped. ... While Restic wasn’t used for exfiltration in this particular attack, Cyber Centaurs suspected the gang regularly used it, based on patterns seen in other incidents. It also suspected the infrastructure the crooks used was unlikely to be dismantled even after negotiations ended or payments were made by corporate victims. With that in mind, the incident response team developed a custom enumeration script to identify patterns pointing to the S3-style cloud bucket infrastructure that the stolen data might be going to. The script ran through a curated list of candidate repository identifiers derived from previously observed Restic artifacts. For each candidate, environment variables were set to match the configuration style used by the threat actor, including the repository endpoint and encryption password. Restic was then instructed to list available snapshots in a structured format, enabling investigators to analyze results without interacting with the underlying data.
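The article doesn't publish the script itself, but the approach it describes maps directly onto Restic's standard environment variables and JSON output. A hypothetical Python reconstruction, with the endpoint, password, and candidate list purely illustrative:

```python
# Illustrative reconstruction (details assumed, not taken from the report):
# for each candidate repository, set Restic's environment variables and ask
# for snapshot metadata only, without touching the underlying file data.
import json
import os
import subprocess

def probe_repo(endpoint: str, password: str) -> list | None:
    env = os.environ.copy()
    env["RESTIC_REPOSITORY"] = endpoint  # e.g. "s3:https://host/bucket"
    env["RESTIC_PASSWORD"] = password    # derived from observed artifacts
    # "restic snapshots --json" lists snapshot metadata in structured form
    result = subprocess.run(
        ["restic", "snapshots", "--json"],
        env=env, capture_output=True, text=True, timeout=60,
    )
    if result.returncode != 0:
        return None  # wrong endpoint/password, or no such repository
    return json.loads(result.stdout)

candidates = ["s3:https://storage.example.net/bucket-a"]  # curated list
for endpoint in candidates:
    snaps = probe_repo(endpoint, "password-from-artifacts")
    if snaps:
        print(f"{endpoint}: {len(snaps)} snapshots visible")
```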


The Real Attack Surface Isn’t Code Anymore — It’s Business Users

Traditional AppSec programs are optimized for code stored in repositories, pushed through pipelines, and deployed through CI/CD, not for no-code apps, connectors, and automations created on platforms like Power Platform, ServiceNow, Salesforce, and UiPath. Meanwhile, most organizations assume business-user automations are simple, low-risk, and limited in scope. The reality is more complex. Citizen developers now outnumber traditional software developers by an order of magnitude. Plus, they are wiring together data sources, triggering multi-system workflows, and calling APIs, not just building basic macros or departmental utilities. Because these automations are created outside engineering governance, traditional monitoring tools never see them. ... What emerges is a shadow layer of business logic that sits entirely outside the boundaries of traditional AppSec, DevSecOps, and identity programs. As long as ownership remains fragmented and discovery elusive, security debt continues to grow unchecked. ... We’re entering an era where the most dangerous vulnerabilities aren’t in the code AppDev teams write, but in the thousands of workflows and automations business users build on their own. The sooner organizations recognize and confront the invisible no-code estate, the faster they can reduce the security debt accumulating inside their infrastructure.

Daily Tech Digest - January 22, 2026


Quote for the day:

"Lost money can be found. Lost time is lost forever. Protect what matters most." -- @ValaAfshar



PTP is the New NTP: How Data Centers Are Achieving Real-Time Precision

Precision Time Protocol (PTP) is an approach that is more complex to implement but worth the extra effort, enabling a whole new level of timing synchronization accuracy. ... Keeping network time in sync is important on any network. But it’s especially critical in data centers, which are typically home to large numbers of network-connected devices, and where small inconsistencies in network timing could snowball into major network synchronization problems. ... NTP works very well in situations where networks can tolerate timing inconsistencies of up to a few milliseconds (meaning thousandths of a second). But beyond this, NTP-based time syncing is less reliable due to limitations ... Unlike NTP, PTP doesn’t rely solely on a server-client model for syncing time across networked devices. Instead, it uses time servers in conjunction with a method called hardware timestamping on client devices. Hardware timestamping uses specialized hardware components, usually embedded in network interface cards (NICs), to track time. Central time servers still exist under PTP. But rather than having software on servers connect to the time servers, hardware devices optimized for the task do this work. The devices also include built-in clocks, allowing them to record time data faster than they could if they had to forward it to the generic clock on a server.
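To make the mechanism concrete, here is a short worked sketch of the offset and delay arithmetic behind PTP's sync/delay-request exchange. The nanosecond values are invented for illustration; in real deployments, t2 and t3 are captured by NIC hardware timestamps rather than by software, which is where PTP's precision advantage comes from.

```python
# Worked sketch of PTP's two-way timestamp exchange (values in nanoseconds).
# t1: master sends Sync; t2: client receives it;
# t3: client sends Delay_Req; t4: master receives it.
def ptp_offset_and_delay(t1: int, t2: int, t3: int, t4: int) -> tuple[float, float]:
    offset = ((t2 - t1) - (t4 - t3)) / 2  # client clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay

# Example: client clock runs 150 ns ahead over a symmetric 400 ns path.
offset, delay = ptp_offset_and_delay(t1=1_000, t2=1_550, t3=2_000, t4=2_250)
print(f"offset={offset} ns, path delay={delay} ns")  # offset=150.0, delay=400.0
```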


Why AI adoption requires a dedicated approach to cyber governance

Today, enterprises are facing unprecedented internal pressure to adopt AI tools at speed. Business units are demanding AI solutions to remain competitive, drive efficiency, and innovate faster. But existing cyber governance and third-party risk management processes were never designed to operate at this pace. ... Without modernized cyber governance and AI-ready risk management capabilities, organizations are forced to choose between speed and safety. To truly enable the business, governance frameworks must evolve to match the speed, scale, and dynamism of AI adoption – transforming security from a gatekeeper into a business enabler. ... What’s more, compliance doesn’t guarantee security. DORA, NIS2, and other regulatory frameworks set only minimum requirements and rely on reporting at specific points in time. While these reports are accurate when submitted, they capture only a snapshot of the organization’s security posture, so gaps such as human errors, legacy system weaknesses, or risks from fourth- and Nth-party vendors can still emerge afterward. Moreover, human weakness is always present, and legacy systems can fail at crucial moments. ... While there’s no magic wand, there are tried-and-tested approaches that resolve and mitigate the risks of AI vendors and solutions. Mapping the flow of data around the organization helps reveal how it’s used and resolve blind spots. Requiring AI tools to include references for their outputs ensures that risk decisions are trustworthy and reliable.


What CIOs get wrong about integration strategy and how to fix it

As Gartner advises, business and IT should be equal partners in the definition of integration strategy, representing a radical departure from the traditional IT delivery and business “project sponsorship” model. This close collaboration and shared accountability result in dramatically higher success rates ... A successful integration strategy starts by aligning with the organization’s business drivers and strategic objectives while identifying the integration capabilities that need to be developed. Clearly defining the goals of technology implementation, establishing governance frameworks and decision-making authority and setting standards and principles to guide integration choices are essential. Success metrics should be tied to business outcomes, and the integration approach should support broader digital transformation initiatives. ... Create cross-functional data stewardship teams with authority to make binding decisions about data standards and quality requirements. Document what data needs to be shared between systems, and which applications are the “source of truth.” Define and document any regulatory or performance requirements to guide your technical planning. ... Integrations that succeed in production are designed with clear system-of-record rules, traceable transactions, explicit recovery paths and well-defined operational ownership. Preemptive integration is not about reacting faster — it’s about ensuring failures never reach the business.


CFOs are now getting their own 'vibe coding' moment thanks to Datarails

For the modern CFO, the hardest part of the job often isn't the math—it's the storytelling. After the books are closed and the variances calculated, finance teams spend days, sometimes weeks, manually copy-pasting charts into PowerPoint slides to explain why the numbers moved. ... Datarails’ new agents sit on top of a unified data layer that connects these disparate systems. Because the AI is grounded in the company’s own unified internal data, it avoids the hallucinations common in generic LLMs while offering a level of privacy required for sensitive financial data. "If the CFO wants to leverage AI on the CFO level or the organization data, they need to consolidate the data," explained Datarails CEO and co-founder Didi Gurfinkel in an interview with VentureBeat. By solving that consolidation problem first, Datarails can now offer agents that understand the context of the business. "Now the CFO can use our agents to run analysis, get insights, create reports... because now the data is ready," Gurfinkel said. ... "Very soon, the CFO and the financial team themselves will be able to develop applications," Gurfinkel predicted. "The LLMs become so strong that in one prompt, they can replace full product runs." He described a workflow where a user could simply prompt: "That was my budget and my actual of the past year. Now build me the budget for the next year."


The internet’s oldest trust mechanism is still one of its weakest links

Attackers continue to rely on domain names as an entry point into enterprise systems. A CSC domain security study finds that large organizations leave this part of their attack surface underprotected, even as attacks become more frequent. ... Large companies continue to add baseline protections, though adoption remains uneven. Email authentication shows the most consistent improvement, driven by phishing activity and regulatory pressure. Organizations still leave email domains partially protected, which allows spoofing to persist. Other protections see much slower uptake. ... Consumer-oriented registrars tend to emphasize simplicity and cost. Organizations that rely on them often lack access to protections that limit the impact of account compromise or social engineering. Risk increases as domain portfolios grow and change. ... Brand impersonation through domain spoofing remains widespread. Lookalike domains tied to major brands are often owned by third parties. Some appear inactive while still supporting email activity. Inactive domains with mail records allow attackers to send phishing messages that appear associated with trusted brands. Others are parked with advertising networks or held for later use. A smaller portion hosts malicious content, though dormant domains can be activated quickly. ... Gaps appear in infrastructure-related areas. DNS redundancy and registry lock adoption lag, and many unicorns rely on consumer-grade registrars. These limitations become more pronounced as operations scale.
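One concrete check the findings imply: a domain that publishes MX records but no DMARC policy can be used to send convincingly spoofed mail. A minimal sketch, assuming the third-party dnspython package and illustrative domain names:

```python
# Minimal sketch: flag domains that accept mail (MX records present) but
# publish no DMARC policy, making them candidates for email spoofing.
# Assumes the third-party "dnspython" package.
import dns.resolver

def has_records(name: str, rtype: str) -> bool:
    try:
        dns.resolver.resolve(name, rtype)
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False

def spoofable(domain: str) -> bool:
    return has_records(domain, "MX") and not has_records(f"_dmarc.{domain}", "TXT")

for d in ["example.com", "example-brand-login.com"]:  # illustrative portfolio
    print(d, "-> spoofable" if spoofable(d) else "-> DMARC present or no mail")
```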


Misconfigured demo environments are turning into cloud backdoors to the enterprise

Internal testing, product demonstrations, and security training are critical practices in cybersecurity, giving defenders and everyday users the tools and wherewithal to prevent and respond to enterprise threats. However, according to new research from Pentera Labs, when left in default or misconfigured states, these “test” and “demo” environments are yet another entry point for attackers — and the issue even affects leading security companies and Fortune 500 companies that should know better. ... After identifying an exposed instance of Hackazon, a free, intentionally vulnerable test site developed by Deloitte, during a routine cloud security assessment for a client, Yaffe performed a five-step hunt for exposed apps. His team uncovered 1,926 “verified, live, and vulnerable applications,” more than half of which were running on enterprise-owned infrastructure on AWS, Azure, and Google Cloud platforms. They then discovered 109 exposed credential sets, many accessible via a low-priority lab environment, tied to overly privileged identity access management (IAM) roles. These often granted “far more access” than a ‘training’ app should, Yaffe explained, and provided attackers: administrator-level access to cloud accounts, as well as full access to S3 buckets, GCS, and Azure Blob Storage; the ability to launch and destroy compute resources and to read and write to secrets managers; and permissions to interact with container registries where images are stored, shared, and deployed.
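A rough sketch of the kind of audit that surfaces this problem: flag IAM roles whose names suggest demo, test, or training use but which carry administrator policies. It assumes boto3 with read-only IAM credentials; the name heuristics are an illustrative assumption, not Pentera's method.

```python
# Rough audit sketch: find "demo"-style IAM roles with administrator access.
# Assumes boto3 and credentials permitted to read IAM metadata.
import boto3

iam = boto3.client("iam")
SUSPECT_HINTS = ("demo", "test", "training", "lab")  # illustrative heuristics

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        name = role["RoleName"]
        if not any(hint in name.lower() for hint in SUSPECT_HINTS):
            continue
        attached = iam.list_attached_role_policies(RoleName=name)["AttachedPolicies"]
        for policy in attached:
            if "AdministratorAccess" in policy["PolicyArn"]:
                print(f"{name}: demo-style role carries {policy['PolicyName']}")
```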


Cyber Insights 2026: API Security – Harder to Secure, Impossible to Ignore

“We’re now entering a new API boom. The previous wave was driven by cloud adoption, mobile apps, and microservices. Now, the rise of AI agents is fueling a rapid proliferation of APIs, as these systems generate massive, dynamic, and unpredictable requests across enterprise applications and cloud services,” comments Jacob Ideskog ... The growing use of agentic AI systems and the way they act autonomously, making decisions and triggering workflows, is ballooning the number of APIs in play. “It isn’t just ‘I expose one billing API’,” he continues, “now there are dozens of APIs that feed data to LLMs or AI agents, accept decisions from AI agents, facilitate orchestration between services and micro-apps, and potentially expose ‘agentic’ endpoints ... APIs have been a major attack surface for years – the problem is ongoing. Starting in 2025 and accelerating through 2026 and beyond, the rapid escalation of enterprise agentic AI deployments will multiply the number of APIs and increase the attack surface. That alone suggests that attacks against APIs will grow in 2026. But the attacks themselves will scale and be more effective through adversaries’ use of their own agentic AI. Barr explains: “Agentic AI means that bad actors can automate reconnaissance, probe API endpoints, chain API calls, test business-logic abuse, and execute campaigns at machine scale. Possession of an API endpoint, particularly a self-service, unconstrained one, becomes a lucrative target. And AI can generate payloads, iterate quickly, bypass simple heuristics, and map dependencies between APIs.”


Complex VoidLink Linux Malware Created by AI

An advanced cloud-first malware framework targeting Linux systems was created almost entirely by artificial intelligence (AI), a move that signals significant evolution in the use of the technology to develop advanced malware. VoidLink — comprising various cloud-focused capabilities and modules and designed to maintain long-term persistent access to Linux systems — is the first case of wholly original malware being developed by AI, according to Check Point Research, which discovered and detailed the malware framework last week. While other AI-generated malware exists, it's typically "been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open-source malware tools," ... The malware framework, linked to a suspected, unspecified Chinese actor, includes custom loaders, implants, rootkits, and modular plug-ins. It also automates evasion as much as possible by profiling a Linux environment and intelligently choosing the best strategy for operating without detection. Indeed, as Check Point researchers tracked VoidLink in real time, they watched it transform quickly from what appeared to be a functional development build into a comprehensive, modular framework that became fully operational in a short timeframe. However, while the malware itself was high-functioning out of the gate, VoidLink's creator proved to be somewhat sloppy in their execution.


What’s causing the memory shortage?

Right now, the industry is suffering the worst memory shortage in history, and that’s with three core suppliers: Micron Technology, SK Hynix, and Samsung. TrendForce, a Taipei-based market researcher that specializes in the memory market, recently said it expects average DRAM memory prices to rise between 50% and 55% this quarter compared to the fourth quarter of 2025. Samsung recently issued a similar warning. So what caused this? Two letters: AI. The rush to build AI-oriented data centers has resulted in virtually all of the memory supply being consumed by data centers. AI requires massive amounts of memory to process its gigantic data sets. A traditional server would usually come with 32 GB to 64 GB of memory, while AI servers have 128 GB or more. ... There are other factors at play here, too, of course. The industry is in a transition period between DDR4 and DDR5, as DDR5 comes online and DDR4 fades away. These transitions to a new memory format are never quick or easy, and it usually takes years to make a full shift. There has also been increased demand from both client and server sides. With Microsoft ending support for Windows 10, a whole lot of laptops are being replaced with Windows 11 systems, and new laptops come with DDR5 memory — the same memory used in an AI server. ... “What’s likely to happen, from a market perspective, is we’ll see the market grow less in ’26 than we had anticipated, but ASPs are likely to stay or increase. ...” he said.


OpenAI CFO Comments Signal End of AI Hype Cycle

By focusing on “practical adoption,” OpenAI can close the gap between what AI now makes possible and how people, companies, and countries are using it day to day. “The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes,” she noted. “Infrastructure expands what we can deliver,” she continued. “Innovation expands what intelligence can do. Adoption expands who can use it. Revenue funds the next leap. This is how intelligence scales and becomes a foundation for the global economy.” The framing reflects a shift from big-picture AI promise to day-to-day deployment and measurable results. ... There’s also a gap between what AI can do and how people are actually using it in daily life, noted Natasha August, founder of RM11, a content monetization platform for creators in Carrollton, Texas. “AI tools are incredibly powerful, but for many people and businesses, it’s still unclear how to turn that power into something practical like saving time, making money, or improving how they work,” she told TechNewsWorld. In business, the gap lies between AI’s raw analytical capabilities and its ability to drive tangible, repeatable business outcomes, maintained Nithin Mummaneni ... “The winning play is less ‘AI that answers’ and more ‘AI that completes tasks safely and predictably,'” he continued. “Adoption happens when AI becomes part of the workflow, not a separate destination.”

Daily Tech Digest - January 21, 2026


Quote for the day:

"People ask the difference between a leader and a boss. The leader works in the open, and the boss in covert." -- Theodore Roosevelt



Why the future of security starts with who, not where

Traditional security assumed one thing: “If someone is inside the network, they can be trusted.” That assumption worked when offices were closed environments and systems lived behind a single controlled gateway. But as Microsoft highlights in its Digital Defense Report, attackers have moved almost entirely toward identity-based attacks because stealing credentials offers far more access than exploiting firewalls. In other words, attackers stopped trying to break in. They simply started logging in. ... Zero trust isn’t about paranoia. It’s about verification. Never trust, always verify only works if identity sits at the center of every access decision. That’s why CISA’s zero trust maturity model outlines identity as the foundation on which all other zero trust pillars rest — including network segmentation, data security, device posture and automation. ... When identity becomes the perimeter, it can’t be an afterthought. It needs to be treated like core infrastructure. ... Organizations that invest in strong identity foundations won’t just improve security — they’ll improve operations, compliance, resilience and trust. Because when identity is solid, everything else becomes clearer: who can access what, who is responsible for what and where risk actually lives. The companies that struggle will be the ones trying to secure a world that no longer exists — a perimeter that disappeared years ago.


Designing Consent Under India's DPDP Act: Why UX Is Now A Legal Compliance

The request for consent must be either accompanied by or preceded by a notice. The notice must specifically contain three things: the personal data and the purpose for which it is being collected; the manner in which he or she may withdraw consent or raise a grievance; and the manner in which a complaint may be made to the Board. ... “Free” consent also requires interfaces to avoid deceptive nudges or coercive UI design. Consider a consent banner implemented with a large “Accept All” button as the primary call-to-action while the “Reject” option is kept hidden behind a secondary link that opens multiple additional screens. This creates an asymmetric interaction cost where acceptance requires a single click and refusal demands several steps. If consent is obtained through such an interface, it cannot be regarded as voluntary or valid. ... A defensible consent record must capture the full interaction: which notice version was shown, what purposes were disclosed, the language of the notice, and the action of the user (click, toggle, checkbox). Standard operational logs might be disposed of after 30 or 90 days, but consent logs cannot follow the same cycle. Section 6(10) implicitly states that consent records must be retained as long as the data is being processed for the purposes shown in the notice. If the personal data was collected in 2024 and is still being processed in 2028, the Fiduciary must produce the 2024 consent logs as evidence.
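A minimal sketch of what such a record could look like in practice; the field names and structure below are illustrative assumptions, not a format prescribed by the DPDP Act or its rules.

```python
# Illustrative consent-record structure capturing the full interaction,
# per the description above. Field names are assumptions, not statutory.
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class ConsentRecord:
    user_id: str
    notice_version: str        # which notice version was shown
    purposes: tuple[str, ...]  # purposes disclosed in that notice
    notice_language: str       # language the notice was rendered in
    action: str                # "click" | "toggle" | "checkbox"
    granted: bool
    recorded_at: str           # UTC timestamp, ISO 8601

record = ConsentRecord(
    user_id="u-1024",
    notice_version="2024-03-v2",
    purposes=("order_fulfilment", "support"),
    notice_language="hi-IN",
    action="checkbox",
    granted=True,
    recorded_at=datetime.now(timezone.utc).isoformat(),
)
# Retained for as long as the data is processed (Section 6(10)), not on the
# 30/90-day cycle used for ordinary operational logs.
print(json.dumps(asdict(record)))
```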


The AI Skills Gap Is Not What Companies Think It Is

Employers often say they cannot find enough AI engineers or people with deep model expertise to keep pace with AI adoption. We can see that in job descriptions. Many blend responsibilities across model development, data engineering, analytics, and production deployment into a single role. These positions are meant to accelerate progress by reducing handoffs and simplifying ownership. And in an ideal world, the workforce would be ready for this. ... So when companies say they are struggling to fill the AI skills gap, what they are often missing is not raw technical ability. They are missing people who can operate inside imperfect environments and still move AI work forward. Most organizations do not need more model builders. ... For professionals trying to position themselves, the signal is similar. Career advantage increasingly comes from showing end-to-end exposure, not mastery of every AI tool. Experience with data pipelines, deployment constraints, and being able to monitor systems matter. Being good at stakeholder communication remains an important skill. The AI skills gap is not a shortage of talent. It is a shortage of alignment between what companies need and what they are actually hiring for. It’s also an opportunity for companies to understand what it really means, and finally close the gap. Professionals can also capitalize on this opportunity by demonstrating end-to-end, applied AI experience.


DevOps Didn’t Fail — We Just Finally Gave it the Tools it Deserved

Ask an Ops person what DevOps success looks like, and you’ll hear something very close to what Charity is advocating: Developers who care deeply about reliability, performance, and behavior in production. Ask security teams and you’ll get a different answer. For them, success is when everyone shares responsibility for security, when “shift left” actually shifts something besides PowerPoint slides. Ask developers, and many will tell you DevOps succeeded when it removed friction. When it let them automate the non-coding work so they could, you know, actually write code. Platform engineers will talk about internal developer platforms, golden paths, and guardrails that let teams move faster without blowing themselves up. SREs, data scientists, and release engineers all bring their own definitions to the table. That’s not a bug in DevOps. That’s the thing. DevOps has always been slippery. It resists clean definitions. It refuses to sit still long enough for a standards body to nail it down. At its core, DevOps was never about a single outcome. It was about breaking down silos, increasing communication, and getting more people aligned around delivering value. Success, in that sense, was always going to be plural, not singular. Charity is absolutely right about one thing that sits at the heart of her argument: Feedback loops matter. If developers don’t see what happens to their code in the wild, they can’t get better at building resilient systems. 


The sovereign algorithm – India’s DPDP act and the trilemma of innovation, rights, and sovereignty

At its core, the DPDP Act functions as a sophisticated product of governance engineering. Its architecture is a deliberate departure from punitive, post facto regulation towards a proactive, principles-based model designed to shape behavior and technological design from the ground up. Foundational principles such as purpose limitation, data minimization, and storage restriction are embedded as mandatory design constraints, compelling a fundamental rethink of how digital services are conceived and built. ... The true test of this legislative architecture will be its performance in the real world, measured across a matrix of tangible and intangible metrics that will determine its ultimate success or failure. The initial eighteen-month grace period for most rules constitutes a critical nationwide integration phase, a live stress test of the framework’s viability and the ecosystem’s adaptability. ... Geopolitically, the framework positions India as a normative leader for the developing world. It articulates a distinct third path between the United States’ predominantly market-oriented approach and China’s model of state-controlled cyber sovereignty. India’s alternative, which embeds individual rights within a democratic structure while reserving state authority for defined public interests, presents a compelling model for nations across the Global South navigating their own digital transitions.


Everyone Knows How to Model. So Why Doesn’t Anything Get Modeled?

One of the main reasons modeling feels difficult is not lack of competence, but lack of shared direction. There is no common understanding of what should be modeled, how it should be modeled, or for what purpose. In other words, there is no shared content framework or clear work plan. When it is missing, everyone defaults to their own perspective and experience. ... From the outside, it looks like architecture work is happening. In reality, there is discussion, theorizing, and a growing set of scattered diagrams, but little that forms a coherent, usable whole. At that point, modeling starts to feel heavy—not because it is technically difficult, but because the work lacks direction, a shared way of describing things, and clear boundaries. ... To be fair, tools do matter. A bad or poorly introduced tool can make modeling unnecessarily painful. An overly heavy tool kills motivation; one that is too lightweight does not support managing complexity. And if the tool rollout was left half-done, it is no surprise the work feels clumsy. At the same time, a good tool only enables better modeling—it does not automatically create it. The right tool can lower the threshold for producing and maintaining content, make relationships easier to see, and support reuse. ... Most architecture initiatives don’t fail because modeling is hard. They fail because no one has clearly decided what the modeling is for. ... These are not technical modeling problems. They are leadership and operating-model problems. 


ChatGPT Health Raises Big Security, Safety Concerns

ChatGPT Health's announcement touches on how conversations and files in ChatGPT as a whole are "encrypted by default at rest and in transit" and that there are some data controls such as multifactor authentication, but the specifics on how exactly health data will be protected at a technical and regulatory level were not clear. However, the announcement specifies that OpenAI partners with network health data firm b.well to enable access to medical records. ... While many security tentpoles remain in place, healthcare data must be held to the highest possible standard. It does not appear that ChatGPT Health conversations are end-to-end encrypted. Regulatory consumer protections are also unclear. Dark Reading asked OpenAI whether ChatGPT Health had to adhere to any HIPAA or regulatory protections for the consumer beyond OpenAI's own policies, and the spokesperson mentioned the coinciding announcement of OpenAI for Healthcare, which is OpenAI's product for healthcare organizations that do need to meet HIPAA requirements. ... even with privacy protections and promises, data breaches will happen and companies will generally comply with legal processes such as subpoenas and warrants as they come up. "If you give your data to any third party, you are inevitably giving up some control over it and people should be extremely cautious about doing that when it's their personal health information," she says.


From static workflows to intelligent automation: Architecting the self-driving enterprise

We often assume fragility only applies to bad code, but it also applies to our dependencies. Even the vanguard of the industry isn’t immune. In September 2024, OpenAI’s official newsroom account on X (formerly Twitter) was hijacked by scammers promoting a crypto token. Think about the irony: The company building the most sophisticated intelligence in human history was momentarily compromised not by a failure of their neural networks, but by the fragility of a third-party platform. This is the fragility tax in action. When you build your enterprise on deterministic connections to external platforms you don’t control, you inherit their vulnerabilities. ... Whenever we present this self-driving enterprise concept to clients, the immediate reaction is “You want an LLM to talk to our customers?” This is a valid fear. But the answer isn’t to ban AI; it is to architect confidence-based routing. We don’t hand over the keys blindly. We build governance directly into the code. In this pattern, the AI assesses its own confidence level before acting. This brings us back to the importance of verification. Why do we need humans in the loop? Because trusted endpoints don’t always stay trusted. Revisiting the security incident I mentioned earlier: If you had a fully autonomous sentient loop that automatically acted upon every post from a verified partner account, your enterprise would be at risk. A deterministic bot says: Signal comes from a trusted source -> execute. 
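A minimal sketch of that confidence-based routing pattern, under stated assumptions: the thresholds, action strings, and confidence values below are invented for illustration, and in practice the confidence signal would come from the model or a separate verifier.

```python
# Minimal confidence-based routing sketch: the agent acts autonomously only
# above a high-confidence bar; otherwise a human verifies. Thresholds are
# illustrative assumptions, not a recommended calibration.
AUTO_THRESHOLD = 0.90
REVIEW_THRESHOLD = 0.60

def route(action: str, confidence: float) -> str:
    if confidence >= AUTO_THRESHOLD:
        return f"EXECUTE: {action}"
    if confidence >= REVIEW_THRESHOLD:
        return f"HUMAN REVIEW: {action} (confidence {confidence:.2f})"
    return f"REJECT: {action} (confidence {confidence:.2f} too low)"

print(route("refund order #1142", 0.97))           # routine: executes
print(route("act on partner account post", 0.72))  # verified by a human first
```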


AI is rewriting the sustainability playbook

At first, greenops was mostly finops with a greener badge. Reduce waste, right-size instances, shut down idle resources, clean up zombie storage, and optimize data transfer. Those actions absolutely help, and many teams delivered real improvements by making energy and emissions a visible part of engineering decisions. ... Greenops was designed for incremental efficiency in a world where optimization could keep pace with growth. AI breaks that assumption. You can right-size your cloud instances all day long, but if your AI footprint grows by an order of magnitude, efficiency gains get swallowed by volume. It’s the classic rebound effect: When something (AI) becomes easier and more valuable, we do more of it, and total consumption climbs. ... Enterprises are simultaneously declaring sustainability leadership while budgeting for dramatically more compute, storage, networking, and always-on AI services. They tell stakeholders, “We’re reducing our footprint,” while telling internal teams, “Instrument everything, vectorize everything, add copilots everywhere, train custom models, and don’t fall behind.” This is hypocrisy and a governance failure. ... Greenops isn’t dead, but it is being stress-tested by a wave of AI demand that was not part of its original playbook. Optimization alone won’t save you if your consumption curve is vertical. Rather than treat greenness as just a brand attribute, enterprises that succeed will recognize greenops as an engineering and governance discipline, especially for AI.


Your AI strategy is just another form of technical debt

Modern software development has become riddled with indeterminable processes and long development chains. AI should be able to fix this problem, but it’s not actually doing so. Instead, chances are your current AI strategy is saddling your organisation with even more technical debt. The problem is fairly straightforward. As software development matures, longer and longer chains are being created from when a piece of software is envisioned until it’s delivered. Some of this is due to poor management practices, and some of it is unavoidable as programs become more complex. ... These tools can’t talk to each other, though; after all, they have just one purpose, and talking isn’t one of them. The results of all this, from the perspective of maintaining a coherent value chain, are pretty grim. Results are no longer predictable. Worse yet, they are not testable or reproducible. It’s just a set of random work. Coherence is missing, and lots of ends are left dangling. ... If this wasn’t bad enough, using all these different, single-purpose tools adds another problem, namely that you’re fragmenting all your data. Because these tools don’t talk to each other, you’re putting all the things your organisation knows into near-impenetrable silos. This further weakens your value chain as your workers, human and especially AI, need that data to function. ... Bolting AI onto existing systems won’t work. AIs aren’t human, and you can’t replace them one for one, or even five for one. It doesn’t work. 

Daily Tech Digest - January 20, 2026


Quote for the day:

"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox



The culture you can’t see is running your security operations

Non-observable culture is everything happening inside people’s heads. Their beliefs about cyber risk. Their attitudes toward security. Their values and priorities when security conflicts with convenience or speed. This is where the real decisions get made. You can’t see someone’s belief that “we’re too small to be targeted” or “security is IT’s job, not mine.” You can’t measure their assumption that compliance equals security. You can’t audit their gut feeling that reporting a mistake will hurt their career. But these invisible forces shape every security decision your people make. Non-observable culture includes beliefs about the likelihood and severity of threats. It includes how people weigh security against productivity. It includes their trust in leadership and their willingness to admit mistakes. It includes all the cognitive biases that distort risk perception. ... Implicit culture is the stuff nobody talks about because nobody even realizes it’s there. The unspoken assumptions. The invisible norms. The “way things are done here” that everyone knows but nobody questions. This is the most powerful layer because it operates below conscious awareness. People don’t choose to follow implicit norms. They do. Automatically. Without thinking. Implicit culture includes unspoken beliefs like “security slows us down” or “leadership doesn’t really care about this.” It contains hidden power dynamics that determine who can challenge security decisions and who can’t.


The top 6 project management mistakes — and what to do instead

Project managers are trained to solve project problems. Scope creep. Missed deadlines. Resource bottlenecks. ... Start by helping your teams understand the business context behind the work. What problem are we trying to solve? Why does this project matter to the organization? What outcome are we aiming for? Your teams can’t answer those questions unless you bring them into the strategy conversation. When they understand the business goals, not just the project goals, they can start making decisions differently. Their conversations change to ensure everyone knows why their work matters. ... Right from the start of the project, you need to define not just the business goal but how you’ll measure whether it was successful in business terms. Did the project reduce cost, increase revenue, improve the customer experience? That’s what you and your peers care about, but often that’s not the focus you ask the project people to drive toward. ... People don’t resist because they’re lazy or difficult. They resist because they don’t understand why it’s happening or what it means for them. And no amount of process will fix that. With an accelerated delivery plan designed to drive business value, your project teams can now turn their attention to bringing people with them through the change process. ... To keep people engaged in the project and help it keep accelerating toward business goals, you need purpose-driven communication designed to drive actions and decisions.


AI has static identity verification in its crosshairs. Now what?

Identity models based on “joiner–mover–leaver” workflows and static permission assignments cannot keep pace with the fluid and temporary nature of AI agents. These systems assume identities are created carefully, permissions are assigned deliberately, and changes rarely happen. AI changes all of that. An agent can be created, perform sensitive tasks, and terminate within seconds. If your verification model only checks identity at login, you’re leaving the entire session vulnerable. ... Securing AI-driven enterprises requires a shift similar to what we saw in the move from traditional firewalls to zero-trust architectures. We didn’t eliminate networks; we elevated policy and verification to operate continuously at runtime. Identity verification for AI must follow the same path. This means building a system that can: Assign verifiable identities to every human and machine actor; Evaluate permissions dynamically based on context and intent; Enforce least privilege at high velocity; Verify actions, not just entry points; ... This is why frameworks like SPIFFE and modern workload identity systems are receiving so much attention. They treat identity as a short-lived, cryptographically verifiable construct that can be created, used, and retired in seconds, exactly the model AI agents require. Human activity is becoming the minority as autonomous systems that can act faster than we can are being spun up and terminated before governance can keep up. That’s why identity verification must shift from a checkpoint to a real-time trust engine that evaluates every action from every actor, human or AI.
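To make that lifecycle concrete, here is a deliberately simplified Python sketch of a short-lived, verifiable machine identity. Real SPIFFE deployments issue X.509 or JWT SVIDs through a workload API rather than sharing a key; the HMAC token below is a stand-in to show the mint-verify-expire pattern, and every name and TTL is illustrative.

```python
# Simplified stand-in for SPIFFE-style short-lived workload identity:
# mint a signed, expiring token and verify it on every action.
# Real systems use X.509/JWT SVIDs from a workload API, not a shared key.
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"issuer-secret"  # illustrative; real issuers are CAs, not shared secrets

def mint(workload_id: str, ttl_s: int = 30) -> str:
    claims = {"sub": workload_id, "exp": time.time() + ttl_s}
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def verify(token: str) -> dict | None:
    body, sig = token.rsplit("|", 1)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    claims = json.loads(body)
    if claims["exp"] < time.time():
        return None  # expired: agent identities live seconds, not sessions
    return claims

token = mint("spiffe://corp/agent/invoice-bot", ttl_s=30)
print(verify(token))  # verify on every action, not just at session start
```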


AWS European cloud service launch raises questions over sovereignty

AWS established a new legal entity to operate the European Sovereign Cloud under a separate governance and operational model. The new company is incorporated in Germany and run exclusively by EU residents, AWS said. ... “This is the elephant in the room,” said Rene Buest, senior director analyst at Gartner. There are two main concerns regarding the operation of AWS’s European Sovereign Cloud for businesses in Europe. The first relates to the 2018 US Cloud Act, which could require AWS to disclose customer data stored in Europe to the United States, if requested by US authorities. The second involves the possibility of US government sanctions: If a business that uses AWS services is subject to such sanctions, AWS may be compelled to block that company’s access to its cloud services, even if its data and operations are based in Europe. ... It’s an open question at this stage, said Dario Maisto, senior analyst at Forrester. “Cases will have to be tested in court before we can have a definite answer,” he said. “The legal ownership does matter, and this is one of the points that may not be addressed by the current setup of the AWS sovereign cloud.” AWS’s European Sovereign Cloud represents one of several ways that European business can approach the challenge of digital sovereignty. Gartner identifies a spectrum that ranges from global hyperscaler public cloud services through to regional cloud services that are based on non-hyperscaler technology. 


Why peripheral automation is the missing link in end-to-end digital transformation?

While organisations have successfully modernized their digital cores, the “last mile” of business operations often remains fragmented, manual, and surprisingly analogue. This gap is why Peripheral Automation is emerging not merely as a tactical correction but as the critical missing link in achieving true, end-to-end digital transformation. ... Peripheral Automation offers a strategic resolution to this paradox. It’s an architectural philosophy that advocates “differential innovation.” Rather than disrupting stable cores to accommodate fleeting business needs, organisations build agile, tailored applications and workflows that sit on top of the core systems. This approach treats the enterprise as a layered ecosystem. The core remains the single source of truth, but the periphery becomes the “system of engagement”. By leveraging modern low-code platforms and composable architecture, leaders can deploy lightweight, purpose-built automation tools that address specific friction points without altering the underlying infrastructure. ... Peripheral automation reduces process latency, manual effort, and rework. By addressing specific pain points rather than attempting broad, multi-year system redesigns, companies unlock measurable efficiency in weeks. This precision improves throughput, reduces cycle times, and frees teams to focus on high-value work.


How does agentic ops transform IT troubleshooting?

AI Canvas introduces a fundamentally different user experience for network troubleshooting. Rather than navigating through multiple dashboards and CLI interfaces, engineers interact with a dynamic canvas that populates with relevant widgets as troubleshooting progresses. You could say that the ‘canvas’ part of the name AI Canvas is the most important part of it. That is, AI Canvas is actually a blank canvas every time you start troubleshooting. It fills the canvas with boxes and on the fly widgets, among other things, during the troubleshooting. Sampath confirms this: “When you ask a question, it’s using and picking the right types of tools that it can go and execute on a specific task and calls agents to be able to effectively take a task to completion and returns a response back.” The system can spin up monitoring agents that continuously provide updated information, creating a living troubleshooting environment rather than static reports. ... AI Canvas doesn’t exist in isolation. It builds on Cisco’s existing automation foundation. The company previously launched Workflows, a no-code network automation engine, and AI assistants with specific skills for network operations. “All of the automations that are already baked into the workflows, the skills that were built inside of the assistants, now manifest themselves inside of the canvas,” Sampath details. This creates a continuum from deterministic workflows to semi-autonomous assistants to fully autonomous agentic operations.


UK government launches industry 'ambassadors' scheme to champion software security improvements

"By acting as ambassadors, signatories are committing to a process of transparency, development and continuous improvement. The implementation of this code of practice will take time and, in doing so, may bring to light issues that need to be addressed," DSIT said in a statement confirming the announcement. "Signatories and policymakers will learn from these issues as well as the successes and challenges for each organization and, where appropriate, will share information to help develop and strengthen this government policy." ... The Software Security Code of Practice was unveiled by the NCSC in May last year, setting out a series of voluntary principles defining what good software security looks like across the entire software lifecycle. Aimed at technology providers and organizations that develop, sell, or procure software, the code offers best practices for secure design and development, build-environment security, and secure deployment and maintenance. The code also emphasizes the importance of transparent communication with customers on potential security risks and vulnerabilities. ... “The code moves software security beyond narrow compliance and elevates it to a board-level resilience priority. As supply chain attacks continue to grow in scale and impact, a shared baseline is essential and through our global community and expertise, ISC2 is committed to helping professionals build the skills needed to put secure-by-design principles into practice.”


Privacy teams feel the strain as AI, breaches, and budgets collide

Where boards prioritize privacy, AI use appears more frequently and follows defined direction. Larger enterprises, particularly those with broader risk and compliance functions, also report higher uptake. In smaller organizations, or those where privacy has limited visibility at the leadership level, AI adoption remains tentative. Teams that apply privacy principles throughout system development report higher use of AI for privacy tasks. In these environments, AI supports ongoing work rather than introducing new approaches. ... Respondents working in organizations where privacy has active board backing report more consistent use of privacy by design. Budget stability shows a similar pattern, with better-funded teams reporting stronger integration of privacy into design and engineering work. The study also shows that privacy by design on its own does not stop breaches. Organizations that experienced breaches report similar levels of design practice as those that did not. The data places privacy by design mainly in a governance and compliance role, with limited connection to incident prevention. ... Governance shapes how teams view that risk. Professionals in organizations where privacy lacks board priority report higher expectations of a breach in the coming year. Gaps between privacy strategy and broader business goals also appear alongside higher breach expectations, suggesting that structural alignment influences outlook as much as technical controls. Confidence remains common, even among organizations that have experienced breaches.


Cyber Insights 2026: Information Sharing

The sheer volume of cyber threat intelligence being generated today is overwhelming. “Information sharing channels often help condense inputs and highlight genuine signals amid industry noise,” says Caitlin Condon, VP of security research at VulnCheck. “The very nature of cyber threat intelligence demands validation, context, and comparison. Information sharing allows cybersecurity professionals to more rigorously assess rising threats, identify new trends and deviations, and develop technically comprehensive guidance.” ... “The importance of the Cybersecurity Information Sharing Act of 2015 for U.S. national security cannot be overstated,” says Crystal Morin, cybersecurity strategist at Sysdig. “Without legal protections, many legal departments would advise security teams to pull back from sharing threat intelligence, resulting in slower, more cautious processes. ...” CISOs have developed their own closed communities where they can discuss current incidents with other CISOs. This is done via channels such as Slack, WhatsApp and Signal. Security of the channels is a concern, but who better than multiple CISOs to monitor and control security? ... “Much of today’s threat intelligence remains reactive, driven by short-lived IoCs that do little to help agencies anticipate or disrupt cyberattacks,” comments BeyondTrust’s Greene. “We need to modernize our information-sharing framework to emphasize behavior-based analytics enriched with identity-centric context,” he continues.


Edge AI: The future of AI inference is smarter local compute

The bump in edge AI goes hand in hand with a broader shift in focus from AI training, the act of preparing machine learning (ML) models with the right data, to inference, the practice of actively using models to apply knowledge or make predictions in production. “Advancements in powerful, energy-efficient AI processors and the proliferation of IoT (internet of things) devices are also fueling this trend, enabling complex AI models to run directly on edge devices,” says Sumeet Agrawal ... “The primary driver behind the edge AI boom is the critical need for real-time data processing,” says David. The ability to analyze data on the edge, rather than using centralized cloud-based AI workloads, helps drive immediate decisions at the source. Others agree. “Interest in edge AI is experiencing massive growth,” says Informatica’s Agrawal. For him, reduced latency is a key factor, especially in industrial or automotive settings where split-second decisions are critical. There is also the desire to feed ML models personal or proprietary context without sending such data to the cloud. “Privacy is one powerful driver,” says Johann Schleier-Smith ... A smaller footprint for local AI is helpful for edge devices, where resources like processing capacity and bandwidth are constrained. As such, techniques to optimize SLMs (small language models) will be a key area for AI on the edge. One strategy is quantization, a model compression technique that reduces model size and processing requirements.
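
As a concrete example of that last point, here is a minimal PyTorch sketch of post-training dynamic quantization, which converts linear-layer weights to int8; the toy model is an illustrative stand-in for an edge-deployed network.

```python
import io

import torch
import torch.nn as nn

# Toy stand-in for a model destined for an edge device.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Post-training dynamic quantization: Linear weights become int8;
# activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_bytes(m: nn.Module) -> int:
    # Serialize the state dict to measure the storage footprint.
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"fp32: {size_bytes(model)} bytes, int8: {size_bytes(quantized)} bytes")
```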

Daily Tech Digest - January 19, 2026


Quote for the day:

"Stop Judging people and start understanding people everyone's got a story" -- @PilotSpeaker



Stop calling it 'The AI bubble': It's actually multiple bubbles, each with a different expiration date

The AI ecosystem is actually three distinct layers, each with different economics, defensibility and risk profiles. Understanding these layers is critical, because they won't all pop at once. ... The most vulnerable segment isn't building AI — it's repackaging it. These are the companies that take OpenAI's API, add a slick interface and some prompt engineering, then charge $49/month for what amounts to a glorified ChatGPT wrapper. Some have achieved rapid initial success, like Jasper.ai, which reached approximately $42 million in annual recurring revenue (ARR) in its first year by wrapping GPT models in a user-friendly interface for marketers. But the cracks are already showing. ... Economic researcher Richard Bernstein points to OpenAI as an example of the bubble dynamic, noting that the company has made around $1 trillion in AI deals, including a $500 billion data center buildout project, despite being set to generate only $13 billion in revenue. The divergence between investment and plausible earnings "certainly looks bubbly," Bernstein notes. ... But infrastructure has a critical characteristic: It retains value regardless of which specific applications succeed. The fiber optic cables laid during the dot-com bubble weren’t wasted — they enabled YouTube, Netflix and cloud computing. Twenty-five years ago, the original dot-com bubble burst after debt financing built out fiber-optic cables for a future that had not yet arrived, but that future eventually did arrive, and the infrastructure was there waiting.


Modernizing Network Defense: From Firewalls to Microsegmentation

For many years, network security has been based on the concept of a perimeter defense, likened to a fortified boundary. The network perimeter functioned as a protective barrier, with a firewall serving as the main point of access control. Individuals and devices within this secured perimeter were considered trustworthy, while those outside were viewed as potential threats. The "perimeter-centric" approach was highly effective when data, applications, and employees were all located within the physical boundaries of corporate headquarters. In the current environment, however, this model is not only obsolete but also poses significant risks. ... Microsegmentation significantly mitigates the impact of cyberattacks by transitioning from traditional perimeter-based security to detailed, policy-driven isolation at the level of individual workloads, applications, or containers. By establishing secure enclaves for each asset, it ensures that if a device is compromised, attackers are unable to traverse laterally to other systems. ... Microsegmentation solutions offer detailed insights into application dependencies and inter-server traffic flows, uncovering long-standing technical debt such as unplanned connections, outdated protocols, and potentially risky activities that may not be visible to perimeter-based defenses. ... One significant factor deterring organizations from implementing microsegmentation is the concern regarding increased complexity.
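
The isolation model is easiest to see as a default-deny allow-list keyed by workload: only explicitly permitted flows pass, so a compromised host has nowhere to move laterally. Below is a toy Python sketch of that idea; workload names and ports are invented, and real deployments express such rules in platform policy engines (for example, Kubernetes NetworkPolicy or vendor agents).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    # Each rule permits exactly one (source, destination, port) flow.
    src_workload: str
    dst_workload: str
    port: int

# Anything not listed here is denied (default-deny).
ALLOWED_FLOWS = {
    Rule("web-frontend", "orders-api", 8443),
    Rule("orders-api", "orders-db", 5432),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    return Rule(src, dst, port) in ALLOWED_FLOWS

# A compromised frontend cannot reach the database directly:
assert is_allowed("web-frontend", "orders-api", 8443)
assert not is_allowed("web-frontend", "orders-db", 5432)  # lateral move blocked
```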


Human-in-the-loop has hit the wall. It’s time for AI to oversee AI

This is not a hypothetical future problem. Human-centric oversight is already failing in production. When automated systems malfunction — flash crashes in financial markets, runaway digital advertising spend, automated account lockouts or viral content — failure cascades before humans even realize something went wrong. In many cases, humans were “in the loop,” but the loop was too slow, too fragmented or too late. The uncomfortable reality is that human review does not stop machine-speed failures. At best, it explains them after the damage is done. Agentic systems raise the stakes dramatically. Visualizing a multistep agent workflow with tens or hundreds of nodes often results in dense, miles-long action traces that humans cannot realistically interpret. As a result, manually identifying risks, behavior drift or unintended consequences becomes functionally impossible. ... Delegating monitoring tasks to AI does not eliminate human accountability. It redistributes it. This is where trust often breaks down. Critics worry that AI governing AI is like trusting the police to govern themselves. That analogy only holds if oversight is self-referential and opaque. The model that works is layered, with a clear separation of powers. ... Humans shift from reviewing outputs to designing systems. They focus on setting operating standards and policies, defining objectives and constraints, designing escalation paths and failure modes, and owning outcomes when systems fail.
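
A minimal sketch of that separation of powers, with the action types and limits invented for illustration: a machine-speed monitor enforces hard constraints that humans have defined, blocking or escalating in real time rather than waiting for a human to review every output.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # e.g. "trade", "spend"
    magnitude: float

# Layer 1: humans set the operating standards (the limits).
LIMITS = {"trade": 10_000.0, "spend": 5_000.0}

# Layer 2: a machine-speed monitor enforces them on every action.
def monitor(action: Action) -> str:
    limit = LIMITS.get(action.kind)
    if limit is None:
        return "escalate"        # unknown behavior goes to human review
    if action.magnitude > limit:
        return "block"           # hard constraint violated, stop immediately
    return "allow"

def handle(action: Action) -> None:
    verdict = monitor(action)
    print(f"{action.kind} ({action.magnitude}): {verdict}")

handle(Action("trade", 50_000.0))   # blocked before the failure cascades
handle(Action("unknown", 1.0))      # escalated to a human, asynchronously
handle(Action("spend", 120.0))      # allowed at machine speed
```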


Building leaders in the age of AI

The leaders who end up thriving in the AI era will be those who blend human depth with digital fluency. They will use AI to think with them, not for them. And they will treat this AI moment not as a threat to their leadership but as an opportunity to focus on those elements of their portfolios that only humans can excel at. ... Leaders will need to give teams a set of guardrails (clear values and decision rights) and establish new definitions of quality while fostering a sense of trust and collaboration as new challenges emerge and business conditions evolve. ... Aspiration, judgment, and creativity are “only human” leadership traits—and the characteristics that can provide an irreplaceable competitive edge, especially when amplified using AI. It’s therefore incumbent upon organizations to actively identify and develop the individuals who demonstrate critical intrinsics like resilience, eagerness to learn from mistakes, and the ability to work in teams that will increasingly include both humans and AI agents. ... Organizations must actively cultivate core leadership qualities such as wisdom, empathy, and trust—and they must give the development of these attributes the same attention they do to the development of new IT systems or operating models. That will mean providing time for leaders to do the inner work required to lead others effectively—that is, reflecting, sharing insights with other C-suite leaders, and otherwise considering what success will mean for themselves and the organization.


The Rising Phoenix of Software Engineering

Software is undergoing a tectonic transformation. Modern applications are no longer hand-crafted from scratch. They are assembled from third-party components, APIs, open-source packages, machine-learning models, and now AI-generated snippets. Artificial intelligence, low-code tools, open-source software (OSS), and reusable libraries have made the act of writing new code less central to building software than ever before. ... In this new era, the primary challenge is not building software faster, cheaper, or more feature-rich. It is engineering software safely and predictably in a hostile ecosystem. ... Software engineering, as a discipline, must rise again — not as a metaphor for resilience, but as a mandate for survival. ... The future does not eliminate developers or coders. Assembling, customizing, and scripting third-party components will remain critical. But the accountability layer must shift upward, to professionals trained to reason about system safety, dependencies, and security by design. In other words, software engineers must reemerge as true engineers responsible for understanding not only how their code works, but how and where it runs… and most critically how to secure it. ... To engineer software responsibly, practitioners must model threats, evaluate anti-tamper capabilities, and verify that each dependency meets a baseline of assurance. These tasks were historically reserved for penetration testers or quality assurance (QA) teams.
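
A baseline-of-assurance check can start very small. The sketch below (the file format and policy are assumptions) fails a build when any dependency in a requirements file is not pinned to an exact version; hash pinning and vulnerability scanning with tools such as pip-audit would be the next layers.

```python
import re
import sys

# Minimal policy: every dependency must be pinned as "name==version".
PINNED = re.compile(r"^[A-Za-z0-9._-]+==[A-Za-z0-9.]+")

def check_requirements(path: str) -> list[str]:
    violations = []
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()  # drop comments and blanks
            if not line:
                continue
            if not PINNED.match(line):
                violations.append(line)
    return violations

if __name__ == "__main__":
    bad = check_requirements(sys.argv[1])
    if bad:
        print("Unpinned dependencies:", *bad, sep="\n  ")
        sys.exit(1)  # fail the build
```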


The concerning cyber-physical security disconnect

Many physical security professionals come from military and law enforcement backgrounds, fields that change much more slowly but are known for extensive training. The nature of the threats they need to defend against is evolving at a slower pace, and destructive, kinetic threats remain a primary concern. ... The focus of cybersecurity is much more on the insides of an organization. Detection is supposed to catch attackers lurking on compromised devices. Response activities have to consider the entire infrastructure rather than individual hosts. Security measures are spread out across the network, taking a defense-in-depth approach. Physical security is much more outward looking, trying to prevent threats from entering. Detection systems exist within premises, but focus on the outer layers. Response activities are focused on evicting individual threats or denying their access. The majority of security efforts focus on the perimeter. ... Companies often handle both topics in different teams. Conferences and publications may feature both topics, but often focus on one and rarely address their interdependence. Security assessments like pentests and red team exercises sometimes include a physical component that tends to focus on social engineering without involving deep physical security expertise. ... Risks, especially in the form of human threat actors, will always look for the easiest way to materialize. Therefore, they will attack physical assets via their digital components and vice versa, if these flanks are not protected.


Architecting Agility: Decoupled Banking Systems Will Be the Key to Success in 2026

The banking industry is undergoing an evolutionary and market-driven shift. Digital banking systems, once rigid and monolithic, are being reimagined through decoupled architecture, AI-driven intelligence, programmatic technology consumption, and fintech innovation and partnerships. ... Delay is no longer an option — the future of banking is already being built today. To capitalize on these innovations, tech leaders must prioritize digital core banking agility, ensuring integration with new innovations and adapting to evolving market demands. ... AI can identify suspicious patterns in real time, while a decoupled risk analytics gateway and prompt engine streamlines regulatory reporting and ensures adherence to evolving rules (regtech). Whitney Morgan, vice president at Skaleet, a fintech provider, states that generative AI takes this a notch further by automating regulatory reporting and accelerating product development. ... AI-enabled risk management empowers banks to detect anomalies across large transaction datasets with the speed and accuracy that manual processes can’t match. Risk modeling and stress testing will enhance credit risk scoring, market risk simulations, and scenario analysis that drive both preemptive risk decisions and new revenue opportunities. ... The banking and financial services innovation race, with challenges in adoption and capturing market advantages, beckons leaders to be nimble and, at the same time, stay focused on the fundamentals. CIOs, CTOs, and other tech leaders can take proactive steps to strike the right balance.
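
As a toy illustration of anomaly detection over transaction data, the sketch below scores a new amount against a learned baseline with a z-score. Production systems use far richer features (velocity, geography, device) and learned models, but the shape of the check is the same; the figures here are invented.

```python
import statistics

def is_anomalous(amount: float, history: list[float], threshold: float = 3.0) -> bool:
    # Score a new transaction against the baseline of past amounts.
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) / stdev > threshold

history = [42.0, 37.5, 55.0, 48.2, 41.1, 39.9, 44.3, 50.7, 43.8]
print(is_anomalous(48.0, history))    # False: in line with the baseline
print(is_anomalous(9500.0, history))  # True: flagged for review
```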


Key Management Testing: The Most Overlooked Pillar of Crypto Security

The majority of security testing in crypto projects focuses on code correctness or operational attacks. Key management, however, is mainly treated as a procedural issue rather than a technical problem. This is a dangerous misconception. Entropy sources, hardware integrity, and cryptographic integrity are fundamental to key generation. Ineffective randomness, broken device software, or a corrupted environment may produce keys that seem valid but are alarmingly vulnerable to attack. When an exchange generates millions of new wallet addresses for users, the mechanisms that create them must be watertight. Key storage should be tested as well. ... The recovery process is one of the most vulnerable areas of key management, yet it is discussed least. Backup and restoration are prone to human error, improperly configured storage, or unsafe transmission. The unfortunate fact about crypto is that recovery mechanisms can be either a saviour or a disaster. Recovery phrases, encrypted backups, and distributed shares need to be tested repeatedly in real-world, adversarial conditions. ... End-to-end lifecycle testing, automatic verification of key states, automated attack simulations, and self-healing recovery protocols will be the order of the day. The industry has already reached the point where key management is no longer a concealed or merely supporting part of security strategy.
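
Two of those checks can be sketched in a few lines: an entropy sanity test on freshly generated key material, and a backup/restore round-trip. This is a minimal illustration, with plain hex standing in for a real encrypted backup.

```python
import math
import secrets
from collections import Counter

def shannon_entropy_bits_per_byte(data: bytes) -> float:
    # Empirical Shannon entropy; a healthy CSPRNG lands near 8 bits/byte.
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Generation-time check: a stuck or low-entropy source falls well below 8.
key_material = secrets.token_bytes(65536)
entropy = shannon_entropy_bits_per_byte(key_material)
assert entropy > 7.9, f"suspiciously low entropy: {entropy:.3f} bits/byte"

# Recovery round-trip: back up the key, restore it, verify equality.
backup = key_material.hex()            # stand-in for an encrypted backup
restored = bytes.fromhex(backup)
assert restored == key_material, "backup/restore round-trip failed"
```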


Inside the Chip: How Hardware Root of Trust Shifts the Odds Back to Cyber Defenders

Defenders often lack direct control or visibility into the hardware layer where workloads actually execute. This abstraction can obscure low-level threats, allowing attackers to manipulate telemetry, disable software protections, or persist beyond reboots. Crucially, modern attacks are not brute force attempts to break encryption or overwhelm defences. They exploit the assumptions built into how systems start, update, and prove what’s genuine. ... At the centre of this shift is Hardware Root of Trust (HRoT): a security architecture that embeds trust directly into the hardware layer of a device. The US National Institute of Standards and Technology (NIST) defines it as “an inherently trusted combination of hardware and firmware that maintains the integrity of information.” In practice, HRoT serves as the anchor for system trust from the moment power is applied. ... For CISOs, HRoT represents an opportunity to strengthen resilience, meet regulatory demands, and finally realise true zero trust. From a resilience standpoint, it changes the balance between prevention and response. By validating integrity from power-on and continuously during operation, it reduces reliance on post-incident investigation and recovery. Compromised devices and systems are stopped early, limiting blast radius and disruption. Regulators are already reinforcing this direction. Frameworks such as the US Department of Defense’s CMMC explicitly highlight HRoT as a stronger foundation for assurance.
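
A toy measured-boot chain shows the idea: each stage is measured (here with an HMAC over the image) and compared against known-good values anchored in the hardware root before execution continues. Real HRoT uses fused keys, signed manifests, and dedicated silicon; every value below is invented so the example is self-contained.

```python
import hashlib
import hmac

ROOT_KEY = b"fused-device-secret"   # stand-in for a hardware-fused key

def measure(image: bytes) -> bytes:
    # Measurement of a boot-stage image, keyed by the hardware root.
    return hmac.new(ROOT_KEY, image, hashlib.sha256).digest()

bootloader = b"bootloader-v2"
kernel = b"kernel-v5"

# Values the hardware root would hold for the known-good images
# (computed inline here purely for self-containment).
expected = {"bootloader": measure(bootloader), "kernel": measure(kernel)}

def verified_boot(stages: dict[str, bytes]) -> bool:
    for name, image in stages.items():
        if not hmac.compare_digest(measure(image), expected[name]):
            print(f"halt: {name} failed integrity check")
            return False
    return True

assert verified_boot({"bootloader": bootloader, "kernel": kernel})
assert not verified_boot({"bootloader": bootloader, "kernel": b"tampered"})
```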


What AI skills job seekers need to develop in 2026

One of the earliest AI skills involved prompt engineering — being able to get to the necessary AI-generated results by using the right questions. But that baseline skill is being pushed aside by “context engineering.” Think of context engineering as prompt engineering on steroids; it involves developing prompts that can deliver consistent and predictable answers. Ideally, “every time you ask the same question, you always get the same answer,” said Bekir Atahan, vice president at Experis Services, a division of Manpower Group. That skill is critical because AI models are changing quickly, and the answers they spout out can differ from day to day. Context engineering is aimed at ensuring consistent outputs despite a rapidly evolving AI ecosystem. ... “Beyond algorithms and coding, the next wave of AI talent must bridge technology, governance and organizational change. The most valuable AI skill in 2026 isn’t coding, it’s building trust,” Seth said. Along those lines, he recommended that job seekers immerse themselves in the technology beyond simply taking a class. “Instead of a course, go to any conference,” Seth said. ... In hiring, genuine AI capability shows up through curiosity and real experience, Blackford said. “Strong candidates can talk honestly about something they tried, what did not work, and what they learned,” he said ... “Things are evolving at such a fast pace that there will be no perfect set of skills,” said Seth. “I would say more than skills, attitudes are more important — that adaptability to change, how quick you are to learn things.”
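
A rough sketch of what context engineering can look like in practice, assuming a hypothetical call_model client: the question is wrapped in a fixed template of pinned instructions, an output schema, few-shot examples, and deterministic decoding settings, so repeated runs converge on the same answer.

```python
import json

# Pinned context: the same instructions, schema, and examples every run.
SYSTEM = "You are a support triage assistant. Answer ONLY in JSON."
SCHEMA = {"severity": "low|medium|high", "team": "string"}
EXAMPLES = [
    {"q": "Login page returns 500", "a": {"severity": "high", "team": "auth"}},
]

def build_context(question: str) -> dict:
    return {
        "system": SYSTEM,
        "schema": json.dumps(SCHEMA),
        "examples": EXAMPLES,
        "question": question,
        "temperature": 0,   # deterministic decoding
        "seed": 1234,       # where the provider supports it
    }

def call_model(context: dict) -> str:
    # Hypothetical client; swap in your provider's SDK here.
    raise NotImplementedError("plug in your model provider")

# call_model(build_context("Checkout button unresponsive on mobile"))
```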