Daily Tech Digest - February 18, 2026


Quote for the day:

"Engagement is a leadership responsibility—never the employee’s, and not HR’s." -- Gordon Tredgold



Why cloud outages are becoming normal

As the headlines become more frequent and the incidents themselves start to blur together, we have to ask: Why are these outages becoming a monthly, sometimes even weekly, story? What’s changed in the world of cloud computing to usher in this new era of instability? In my view, several trends are converging to make these outages not only more common but also more disruptive and more challenging to prevent. ... The predictable outcome is that when experienced engineers and architects leave, they are often replaced by less-skilled staff who lack deep institutional knowledge. They lack adequate experience in platform operations, troubleshooting, and crisis response. While capable, these “B Team” employees may not have the skills or knowledge to anticipate how minor changes affect massive, interconnected systems like Azure. ... Another trend amplifying the impact of these outages is the relative complacency about resilience. For years, organizations have been content to “lift and shift” workloads to the cloud, reaping the benefits of agility and scalability without necessarily investing in the levels of redundancy and disaster recovery that such migrations require. There is growing cultural acceptance among enterprises that cloud outages are unavoidable and that mitigating their effects should be left to providers. This is both an unrealistic expectation and a dangerous abdication of responsibility.


AI agents are changing entire roles, not just task augmentation

Task augmentation was about improving individual tasks within an existing process. Think of a source-to-pay process in which specific steps are automated. That is relatively easy to visualize and implement in a classic process landscape. Role transformation, however, requires a completely different approach. “You have to turn your entire end-to-end business process architecture into a role-based architecture,” explains Mueller. ... Think of an agent that links past incidents to existing problems. Or an agent that automatically checks licenses and certifications for all running systems. “I wonder why everyone isn’t already doing this,” says Mueller. In the event of an incident with a known problem, the agent can intervene immediately without human intervention. That’s an autonomous circle. For more complex tasks, you can start in supervised mode and later transition to autonomous mode. ... The real challenge is that companies are so far behind in their capabilities to handle the latest technology. Many cannot even visualize what AI means. The executive has a simple recommendation: “If you had to build it from scratch on greenfield, would you do it the same way you do now?” That question gets to the heart of the matter. “Everyone looks at the auto industry and sees that it is being disrupted by Chinese companies. This is because Chinese companies can do things much faster than old economies,” Mueller notes.


Why are AI leaders fleeing?

Normally, when big-name talent leaves Silicon Valley giants, the PR language is vanilla: they’re headed for a “new chapter” or “grateful for the journey” — or maybe there are some vague hints about a stealth startup. In the world of AI, though, recent exits read more like whistleblower warnings. ... Each individual story is different, but I see a thread here. The AI people who were concerned about “what should we build and how to do it safely?” are leaving. They’ll be replaced by people whose first, if not only, priority is “how fast can we turn this into a profitable business?” Oh, and not just profitable; not even a unicorn with a valuation of $1 billion is enough for these people. If the business isn’t a “decacorn,” a privately held startup company valued at more than $10 billion, they don’t want to hear about it. I think it’s very telling that Peter Steinberger, the creator of the insanely — in every sense of the word — hot OpenClaw AI bot, has already been hired by OpenAI. Altman calls him a “genius” and says his ideas “will quickly become core to our product offerings.” Actually, OpenClaw is a security disaster waiting to happen. Someday soon, some foolhardy people or companies will lose their shirts because they trusted it with valuable information. And its inventor is who Altman wants at the heart of OpenAI!? Gartner needs to redo its hype cycle. With AI, we’re past the “Peak of Inflated Expectations” and charging toward the “Pinnacle of Hysterical Financial Fantasies.”


Poland Energy Survives Attack on Wind, Solar Infrastructure

The attack on Poland's energy sector late last year might have failed, but it's also the first large-scale attack against decentralized energy resources (DERs) like wind turbines and solar farms. ... The attacks were destructive by nature and "occurred during a period when Poland was struggling with low temperatures and snowstorms just before the New Year." ... Dragos said that over the past year, Electrum has worked alongside another threat actor, tracked as Kamacite, to conduct destructive attacks against Ukrainian ISPs and persistent scanning of industrial devices in the US. Kamacite gained initial access and persistence against organizations, and Electrum executed follow-on activity. Dragos has tracked Kamacite activities against the European ICS/OT supply chain since late 2024. "Electrum remains one of the most aggressive and capable OT/ICS-adjacent threat actors in the world," Dragos said. "Even when targeting IT infrastructure, Electrum's destructive malware often affects organizations that provide critical operational services, telecommunications, logistics, and infrastructure support, blurring the traditional boundary between IT and OT. Kamacite's continuous reconnaissance and access development directly enable Electrum's destructive operations. These activities are neither theoretical nor preparatory; they are part of active campaigns culminating in real-world outages, data destruction, and coordinated destabilization campaigns."


Why SaaS cost optimization is an operating model problem, not a budget exercise

When CIOs ask why SaaS costs spiral, the answer is rarely “poor discipline.” It’s usually structural. ... In the engagement I described, SaaS sprawl had accumulated over years for understandable reasons: Business units bought tools to move faster; IT teams enabled experimentation during growth phases; Mergers brought duplicate platforms; and Pandemic-era urgency favored speed over standardization. No one made a single bad decision. Hundreds of reasonable decisions added up to an unreasonable outcome. ... During a review session, I asked a simple question about one of the highest-cost platforms: “Who owns this product?” The room went quiet. IT assumed the business owned it. The business assumed IT managed it. Procurement negotiated the contract. Security reviewed access annually. No one was accountable for adoption, value realization or lifecycle decisions. This lack of accountability wasn’t unique to that tool — it was systemic. Best-practice guidance on SaaS governance consistently emphasizes the importance of assigning a clearly named owner for every application, accountable for cost, security, compliance and ongoing value. Without that ownership, redundancy and unmanaged spend tend to persist across portfolios. ... CIOs focus on licenses and contracts, but the real issue is the absence of a product mindset. SaaS platforms behave like products, but many organizations manage them like utilities.


Finding a common language around risk

The CISO warns about ransomware threats. Operations worries about supply chain breakdowns. The board obsesses over market disruption. They’re all talking about risk, but they might as well be on different planets. When the crisis hits (and it always does), everyone scrambles in their own direction while the place burns down. ... The Organizational Risk Culture Standard (ORCS) offers something most frameworks miss: it treats culture as the foundation, not the afterthought. You can’t bolt culture onto existing processes and call it done. Culture is how people actually think about risk when no one is watching. It’s the shared beliefs that guide decisions under pressure. Think of it as a dynamic system in which people, processes and technology must dance together. People are the operators who judge and act on risks. Processes provide standards, so they don’t have to improvise in a crisis. Technology provides tools to detect patterns, monitor threats and respond faster than human reflexes. But here’s the catch: these three elements have to align across all three risk domains. Your cybersecurity team needs to understand how their decisions affect operations. Your operations team needs to grasp strategic implications. ... The ORCS standard provides a maturity model with five levels. Most organizations start at Level 1, where risk management is reactive and fragmented. People improvise. Policies exist on paper, but nobody follows them. Crises catch everyone off guard.


Harnessing curated threat intelligence to strengthen cybersecurity

Improving one’s cybersecurity posture with up-to-date threat intelligence is foundational to any modern security stack. This enables automated blocking of known threats and reduces the workload on security teams while keeping the network protected. Curated threat intelligence also plays a broader role across cybersecurity strategies, like blocking malicious IP addresses from accessing the network to support intrusion prevention and defend against distributed denial-of-service (DDoS) attacks. ... Organizations overwhelmed by massive amounts of cybersecurity data can gain clarity and control with curated threat intelligence. By validating, enriching and verifying the data, curated intelligence dramatically reduces false positives and noise, enabling security teams to focus on the most relevant and credible threats. Improved accuracy and certainty accelerate time-to-knowledge, sharpen prioritization based on threat severity and potential impact, and ensure resources are applied and deployed where they matter most. With higher confidence and certainty, teams can respond to incidents faster and more decisively, while also shifting from reactive to proactive and ultimately preventative – using known adversary indicators and patterns to investigate threats, strengthen controls, and stop attacks before they cause damage. Curated threat intelligence transforms one’s cybersecurity from reactive to resilient.
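
To make the "blocking malicious IP addresses" point concrete, here is a minimal sketch of consuming a curated indicator feed and checking inbound connections against it. It assumes the feed is exported as a plain-text file of IPs or CIDR ranges; the file name and function names are illustrative, not any vendor's API.

```python
import ipaddress

def load_blocklist(path):
    """Load curated indicators (one IP or CIDR per line) into network objects."""
    networks = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip comments and blank lines
            networks.append(ipaddress.ip_network(line, strict=False))
    return networks

def is_blocked(client_ip, networks):
    """Return True if the client IP falls inside any curated indicator."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in networks)

# Example: drop inbound connections that match the curated feed.
blocklist = load_blocklist("curated_indicators.txt")  # hypothetical feed export
print(is_blocked("203.0.113.42", blocklist))
```

Because the feed is curated upstream, the enforcement point stays simple: it only has to answer a membership question, not judge indicator quality.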


Password managers’ promise that they can’t see your vaults isn’t always true

All eight of the top password managers have adopted the term “zero knowledge” to describe the complex encryption system they use to protect the data vaults that users store on their servers. The definitions vary slightly from vendor to vendor, but they generally boil down to one bold assurance: that there is no way for malicious insiders or hackers who manage to compromise the cloud infrastructure to steal vaults or data stored in them. ... New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server—either administrative or the result of a compromise—can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext. ... Three of the attacks—one against Bitwarden and two against LastPass—target what the researchers call “item-level encryption” or “vault malleability.” Instead of encrypting a vault in a single, monolithic blob, password managers often encrypt individual items, and sometimes individual fields within an item. These items and fields are all encrypted with the same key. 
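
A minimal sketch of what "item-level encryption" with a single shared key can look like, and why it invites vault malleability. This is the general idea only, not any vendor's actual scheme or the exact attacks in the research; it uses AES-GCM from the `cryptography` package, and the vault field names are made up.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # one key shared by every item and field
aesgcm = AESGCM(key)

def encrypt_field(value: str) -> bytes:
    nonce = os.urandom(12)
    # No associated data binds the ciphertext to a specific item or field name.
    return nonce + aesgcm.encrypt(nonce, value.encode(), None)

def decrypt_field(blob: bytes) -> str:
    return aesgcm.decrypt(blob[:12], blob[12:], None).decode()

vault = {
    "github.com/password": encrypt_field("hunter2"),
    "bank.example/password": encrypt_field("correct horse battery staple"),
}

# A party controlling the server can swap ciphertexts between items; each still
# decrypts cleanly because nothing ties a blob to the field it belongs to.
vault["github.com/password"], vault["bank.example/password"] = (
    vault["bank.example/password"], vault["github.com/password"])
print(decrypt_field(vault["github.com/password"]))  # bank password, no error raised
```

One common mitigation is to bind each ciphertext to its item ID and field name as authenticated associated data, or to authenticate the vault structure as a whole, so silent swaps fail to decrypt.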


Poor documentation risks an AI nightmare for developers

Poor documentation not only slows down development and makes bug fixing difficult, but its effects can multiply. Misunderstandings can propagate through codebases, creating issues that can take a long time to fix. The use of AI accelerates this problem. AI coding assistants rely on documentation to understand how software should be used. Without AI, there is the option of institutional knowledge, or even simply asking the developer behind the code. AI doesn’t have this choice and will confidently fill in the gaps where no documentation exists. We’re familiar with AI hallucinations – and developers will be checking for these kinds of errors – but a lack of documentation will likely cause an AI to simply take a stab in the dark. ... Developers need to write documentation around complete workflows: the full path from local development to production deployment, including failures and edge cases. It can be tricky to spot errors in your own work, so AI can be used to help here, following the documentation end-to-end and observing where confusion and errors appear. AI can also be used to draft documentation and generally does a pretty good job of putting together documentation when presented with code. ... Documentation development should be an ongoing process – just as software is patched and updated, so should the documentation. Questions that come in from support tickets and community forums – especially repeat problems – can be used to highlight issues in documentation, particularly those caused by assumed knowledge.


Branding Beyond the Breach: How Cybersecurity Companies Can Lead with Trust, Not Fear

The almost constant stream of cyberattack headlines in the news only highlights the importance for cybersecurity companies to ensure their messaging is creating trust and confidence for B2B businesses. ... It is easy to take issues such as AI-powered attacks and triple extortion tactics and create fear-based messaging in hopes of capturing attention. However, when cybersecurity companies endlessly recycle breach risks as reasons to do business, it can overload prospective clients with the dangers and cause them to disengage. It also reduces cybersecurity services to being solely reactive, rather than proactive and preventative. By following fear-based messaging, cybersecurity companies are blending in, not standing out. ... To navigate the complexities of cybersecurity, B2B businesses need a partner to guide them, not just sell to them. By incorporating thought-leadership, education initiatives, consultation services, partnerships and customised strategies into a cybersecurity company’s messaging and offering, it highlights their authenticity, credibility and reliability. ... The cybersecurity landscape is wide and complex, and the market will only continue to diversify as threats evolve. Cybersecurity organisations need messaging that shows they can support businesses to expand in new sectors, communicate complex offerings clearly and become the optimal solution for risk-conscious enterprises.

Daily Tech Digest - February 17, 2026


Quote for the day:

"If you want to become the best leader you can be, you need to pay the price of self-discipline." -- John C. Maxwell



6 reasons why autonomous enterprises are still more a vision than reality

"AI is the first technology that allows systems that can reason and learn to be integrated into real business processes," Vohra said. ... Autonomous organizations, he continued, "are built on human-AI agent collaboration, where AI handles speed and scale, leaving judgment and strategy up to humans." They are defined by "AI systems that go beyond just generating insights in silos, which is how most enterprises are currently leveraging AI," he added. Now, the momentum is toward "executing decisions across workflows with humans setting intent and guardrails." ... The survey highlighted that work is required to help develop agents. Only 3% of organizations -- and 10% of leaders -- are actively implementing agentic orchestration. "This limited adoption signals that orchestration is still an emerging discipline," the report stated. "The scarcity of orchestration is a litmus test for both internal capability and external strategic positioning. Successful orchestration requires integrating AI into workflows, systems, and decision loops with precision and accountability." ... Workforce capability gaps continue to be the most frequently cited organizational constraint to AI adoption, as reported by six in 10 executives -- yet only 45% say their organizations offer AI training for all employees. ... As AI takes on more execution and pattern recognition, human value increasingly shifts toward system design, integration, governance, and judgment -- areas where trust, context, and accountability still sit firmly with people.


Finding the key to the AI agent control plane

Agents change the physics of risk. As I’ve noted, an agent doesn’t just recommend code. It can run the migration, open the ticket, change the permission, send the email, or approve the refund. As such, risk shifts from legal liability to existential reality. If a large language model hallucinates, you get a bad paragraph. ... Every time an AI system makes a mistake that a human has to clean up, the real cost of that system goes up. The only way to lower that tax is to stop treating governance as a policy problem and start treating it as architecture. That means least privilege for agents, not just humans. It means separating “draft” from “send.” It means making “read-only” a first-class capability, not an afterthought. It means auditable action logs and reversible workflows. It means designing your agent system as if it will be attacked because it will be. ... Right now, permissions are a mess of vendor-specific toggles. One platform has its own way of scoping actions. Another bolts on an approval workflow. A third punts the problem to your identity and access management team. That fragmentation will slow adoption, not accelerate it. Enterprises can’t scale agents until they can express simple rules. We need to be able to say that an agent can read production data but not write to it. We need to say an agent can draft emails but not send them. We need to say an agent can provision infrastructure only inside a sandbox, with quotas, or that it must request human approval before any destructive action.
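
As a sketch of what "expressing simple rules" could look like once permissions stop being vendor-specific toggles, here is a minimal deny-by-default policy in Python. The policy schema, resource names, and quota fields are assumptions for illustration, not an existing standard.

```python
# Hypothetical agent policy: declarative rules checked before any tool call.
POLICY = {
    "prod_database":  {"read": True,  "write": False},
    "email":          {"draft": True, "send": False},
    "infrastructure": {"provision": {"allowed": True,
                                     "environment": "sandbox",
                                     "max_instances": 5}},
}

def authorize(resource: str, action: str, context: dict) -> bool:
    """Deny by default; allow only what the policy explicitly grants."""
    rule = POLICY.get(resource, {}).get(action)
    if rule is True:
        return True
    if isinstance(rule, dict) and rule.get("allowed"):
        if rule.get("environment") and context.get("environment") != rule["environment"]:
            return False  # e.g. provisioning outside the sandbox
        if context.get("instances", 0) > rule.get("max_instances", float("inf")):
            return False  # quota exceeded
        return True
    return False

print(authorize("prod_database", "write", {}))                        # False
print(authorize("email", "draft", {}))                                 # True
print(authorize("infrastructure", "provision",
                {"environment": "sandbox", "instances": 2}))            # True
```

The point of keeping the rules this small is that they can be reviewed, versioned, and enforced at a single gateway instead of being re-implemented per vendor.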


PAM in Multi‑Cloud Infrastructure: Strategies for Effective Implementation

The "Identity Gap" has emerged as the leading cause of cloud security breaches. Traditional vault-based Privileged Access Management (PAM) solutions, designed for static server environments, are inadequate for today’s dynamic, API-driven cloud infrastructure. ... PAM has evolved from an optional security measure to an essential and fundamental requirement in multi-cloud environments. This shift is attributed to the increased complexity, decentralized structure, and rapid changes characteristic of modern cloud architectures. As organizations distribute workloads across AWS, Azure, Google Cloud, and on-premises systems, traditional security perimeters have become obsolete, positioning identity and privileged access as central elements of contemporary security strategies. ... Fragmented identity systems hinder multi‑cloud PAM. Centralizing identity and federating access resolves this, with a Unified Identity and Access Foundation managing all digital identities—human or machine—across the organization. This approach removes silos between on-premises, cloud, and legacy applications, providing a single control point for authentication, authorization, and lifecycle management. ... Cloud providers deliver robust IAM tools, but their features vary. A strong PAM approach aligns these tools using RBAC and ABAC. RBAC assigns permissions by job role for easy scaling, while ABAC uses user and environment attributes for tight security.


Giving AI ‘hands’ in your SaaS stack

If an attacker manages to use an indirect prompt injection — hiding malicious instructions in a calendar invite or a web page the agent reads — that agent essentially becomes a confused deputy. It has the keys to the kingdom. It can delete opportunities, export customer lists or modify pricing configurations. ... For AI agents, this means we must treat them as non-human identities (NHIs) with the same or greater scrutiny than we apply to employees. ... The industry is coalescing around the model context protocol (MCP) as a standard for this layer. It provides a universal USB-C port for connecting AI models to your data sources. By using an MCP server as your gateway, you ensure the agent never sees the credentials or the full API surface area, only the tools you explicitly allow. ... We need to treat AI actions with the same reverence. My rule for autonomous agents is simple: If it can’t dry run, it doesn’t ship. Every state-changing tool exposed to an agent must support a dry_run=true mode. When the agent wants to update a record, it first calls the tool in dry-run mode. The system returns a diff — a preview of exactly what will change. This allows us to implement a human-in-the-loop approval gate for high-risk actions. The agent proposes the change, the human confirms it and only then is the live transaction executed. ... As CIOs and IT leaders, our job isn’t to say “no” to AI. It’s to build the invisible rails that allow the business to say “yes” safely. By focusing on gateways, identity and transactional safety, we can give AI the hands it needs to do real work, without losing our grip on the wheel.
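
A minimal sketch of the dry-run-plus-approval pattern described above. The tool, the in-memory "SaaS" record store, and the diff format are hypothetical stand-ins; in practice the state-changing call would go through the gateway (e.g. an MCP server) rather than a local function.

```python
_FAKE_DB = {"opp-42": {"stage": "negotiation", "amount": 50_000}}

def fetch_record(record_id):           # stand-in for a SaaS read API
    return dict(_FAKE_DB[record_id])

def write_record(record_id, changes):  # stand-in for a SaaS write API
    _FAKE_DB[record_id].update(changes)
    return _FAKE_DB[record_id]

def update_record(record_id: str, changes: dict, dry_run: bool = True) -> dict:
    """Hypothetical state-changing tool exposed to an agent.

    In dry-run mode it only computes a diff; the live write happens
    exclusively after explicit approval."""
    current = fetch_record(record_id)
    diff = {k: (current.get(k), v) for k, v in changes.items() if current.get(k) != v}
    if dry_run:
        return {"applied": False, "diff": diff}
    return {"applied": True, "diff": diff, "result": write_record(record_id, changes)}

def agent_update_with_approval(record_id: str, changes: dict) -> dict:
    preview = update_record(record_id, changes, dry_run=True)
    print("Proposed change:", preview["diff"])
    if input("Approve live write? [y/N] ").strip().lower() != "y":
        return {"applied": False, "reason": "rejected by human reviewer"}
    return update_record(record_id, changes, dry_run=False)
```

The design choice worth copying is that the preview and the write share one code path, so the diff the human approves is exactly what gets executed.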


AI-fuelled supply chain cyber attacks surge in Asia-Pacific

Exposed credentials, source code, API keys and internal communications can provide detailed insight into business processes, supplier relationships and technology stacks. When combined with brokered access, that information can support impersonation, targeted intrusion and fraud activity that blends in with legitimate use. One area of concern is open-source software distribution, where widely used libraries can spread malicious code at scale. ... The report points to AI-assisted phishing campaigns that target OAuth flows and other single sign-on mechanisms. These techniques can bypass multi-factor authentication where users approve malicious prompts or where tokens are stolen after login. ... "AI did not create supply chain attacks, it has made them cheaper, faster, and harder to detect," Mr Volkov added. "Unchecked trust in software and services is now a strategic liability." The report names a range of actors associated with supply-chain-focused activity, including Lazarus, Scattered Spider, HAFNIUM, DragonForce and 888, as well as campaigns linked to Shai-Hulud. It said these groups illustrate how criminal organisations and state-aligned operators are targeting similar platforms and integration layers. ... The report's focus on upstream compromise reflects a broader trend in cyber risk management, where organisations assess not only their own exposure but also the resilience of vendors and technology supply chains.


Automation cannot come at the cost of accountability; trust has to be embedded into the architecture

Visa is actively working with issuers, merchants, and payment aggregators to roll out authentication mechanisms based on global standards. “Consumers want payments to be invisible,” Chhabra adds. “They want to enjoy the shopping experience, not struggle through the payment process.” Tokenisation plays a critical role in enabling this vision. By replacing sensitive card details with unique digital tokens, Visa has created a secure foundation for tap-and-pay, in-app purchases, and cross-border transactions. In India alone, nearly half a billion cards have already been tokenised. “Once tokenisation is in place, device-based payments and seamless commerce become possible,” Chhabra explains. “It’s the bedrock of frictionless payments.” Fraud prevention, however, is no longer limited to card-based transactions. With real-time and account-to-account payments gaining momentum, Visa has expanded its scope through strategic acquisitions such as Featurespace. The UK-based firm specialises in behavioural analytics for real-time fraud detection, an area Chhabra describes as increasingly critical. “We don’t just want to detect fraud on the Visa network. We want to help prevent fraud across payment types and networks,” he says. Before deploying such capabilities in India, Visa conducts extensive back-testing using localised data and works closely with regulators. “Global intelligence is powerful, but it has to be adapted to local behaviour. You can’t simply overfit global models to India’s unique payment patterns.”


Most ransomware playbooks don't address machine credentials. Attackers know it.

The gap between ransomware threats and the defenses meant to stop them is getting worse, not better. Ivanti’s 2026 State of Cybersecurity Report found that the preparedness gap widened by an average of 10 points year over year across every threat category the firm tracks. ... The accompanying Ransomware Playbook Toolkit walks teams through four phases: containment, analysis, remediation, and recovery. The credential reset step instructs teams to ensure all affected user and device accounts are reset. Service accounts are absent. So are API keys, tokens, and certificates. The most widely used playbook framework in enterprise security stops at human and device credentials. The organizations following it inherit that blind spot without realizing it. ... “Although defenders are optimistic about the promise of AI in cybersecurity, Ivanti’s findings also show companies are falling further behind in terms of how well prepared they are to defend against a variety of threats,” said Daniel Spicer, Ivanti’s Chief Security Officer. “This is what I call the ‘Cybersecurity Readiness Deficit,’ a persistent, year-over-year widening imbalance in an organization’s ability to defend their data, people, and networks against the evolving threat landscape.” ... You can’t reset credentials that you don’t know exist. Service accounts, API keys, and tokens need ownership assignments mapped pre-incident. Discovering them mid-breach costs days.
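
A minimal sketch of the pre-incident ownership mapping the excerpt calls for: an inventory of machine credentials with an accountable owner and a rotation runbook, plus a check that surfaces the gaps before a breach forces the question. The credential names, fields, and runbook paths are illustrative.

```python
from dataclasses import dataclass

@dataclass
class MachineCredential:
    name: str
    kind: str          # "service_account", "api_key", "token", "certificate"
    owner: str | None  # team or person accountable for rotation
    rotation_runbook: str | None

INVENTORY = [
    MachineCredential("svc-backup", "service_account", "infra-team",
                      "runbooks/rotate-svc-backup.md"),
    MachineCredential("billing-api-key", "api_key", None, None),            # gap: no owner
    MachineCredential("ci-deploy-token", "token", "platform-team", None),   # gap: no runbook
]

def credential_reset_gaps(inventory):
    """Flag credentials a playbook could not reset quickly mid-incident."""
    return [c.name for c in inventory if not c.owner or not c.rotation_runbook]

print(credential_reset_gaps(INVENTORY))  # ['billing-api-key', 'ci-deploy-token']
```

Running a check like this during tabletop exercises turns "you can't reset credentials you don't know exist" into a concrete, assignable backlog.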


CISO Julie Chatman offers insights for you to take control of your security leadership role

In a few high-profile cases, security leaders have faced criminal charges for how they handled breach disclosures, and civil enforcement for how they reported risks to investors and regulators. The trend is toward holding CISOs personally accountable for governance and disclosure decisions. ... You’re seeing the rise of fractional CISOs, virtual CISOs, heads of IT security instead of full CISO titles. It’s a lot harder to hold a fractional CISO personally liable. This is relatively new. The liability conversation really intensified after some high-profile enforcement actions, and now we’re seeing the market respond. ... First, negotiate protection upfront. When you’re thinking about accepting a CISO role, explicitly ask about D&O insurance coverage. If the CISO is not considered a director or an officer of the company and can’t be given D&O coverage, will the company subsidize individual coverage? There are companies now selling CISO-specific policies. Make this part of your compensation negotiation. Second, do your job well but understand the paradox. Sometimes when you do your job properly, you’re labeled ‘the office of no,’ you’re seen as ‘difficult,’ and you last 18 months. It’s a catch-22. Real liability protection is changing how your organization thinks about risk ownership. Most organizations don’t have a unified view of risk or the vocabulary to discuss it properly. If you can advance that as a CISO, you can help the business understand that risk is theirs to accept, not yours.


The AI bubble will burst for firms that can’t get beyond demos and LLMs

Even though the discussion of a potential bubble is ubiquitous, what’s going on is more nuanced than simple boom-and-bust chatter, said Francisco Martin-Rayo, CEO of Helios AI. “What people are really debating is the gap between valuation and real-world impact. Many companies are labeled ‘AI-driven,’ but only a subset are delivering measurable value at scale,” Martin-Rayo said. Founders confuse fundraising with progress, which comes only when they are solving real problems for real clients, said Nacho De Marco, founder of BairesDev. “Fundraising gives you dopamine, but real progress comes from customers,” De Marco said. “The real value of a $1B valuation is customer validation.” ... The AI shakeout has already started, and the tenor at WEF “feels less like peak hype and more like the beginning of a sorting process,” Martin-Rayo said. ... Companies that survive the coming shakeout will be those willing to rebuild operations from the ground up rather than throwing AI into existing workflows, said Jinsook Han, chief agentic AI officer at Genpact. ”It’s not about just bolting some AI into your existing operation,” Han said. “You have to really build from ground up — it’s a complete operating model change.” Foundational models are becoming more mature and can do more of what startups sell. As a result, AI providers that don’t offer distinct value will have a tough time surviving, Han said.


What could make the EU Digital Identity Wallets fail?

Large-scale digital identity initiatives rarely fail because the technology does not work. They fail because adoption, incentives, trust, and accountability are underestimated. The EU Digital Identity Wallet could still fail, or partially fail, succeeding in some countries while struggling or stagnating in others. ... A realistic risk is fragmented success. Some member states are likely to deliver robust wallets on time. Others may launch late, with limited functionality, or without meaningful uptake. A smaller group may fail to deliver a convincing solution at all, at least in the first phase. From the perspective of users and service providers, this fragmentation already undermines cross border usage. If wallets differ significantly in capabilities, attributes, and reliability across borders, the promise of a seamless European digital identity weakens. ... While EU Digital Identity Wallets offer significantly higher security than current solutions, they will not eliminate fraud entirely. There will still be cases of wallets issued to the wrong individual, phishing attempts, and wallet takeovers. If early fraud cases are poorly handled or publicly misunderstood, trust in the ecosystem could erode quickly. The wallet’s strong privacy architecture introduces real trade-offs. One uncomfortable but necessary question worth asking is: are we going too far with privacy? ... The EU Digital Identity Wallet will succeed only if policymakers, wallet providers, and service providers treat trust, economics, and usability as core design principles, not secondary concerns.

Daily Tech Digest - February 16, 2026


Quote for the day:

"People respect leaders who share power and despise those who hoard it." -- Gordon Tredgold



TheCUBE Research 2026 predictions: The year of enterprise ROI

Fourteen years into the modern AI era, our research indicates AI is maturing rapidly. The data suggests we are entering the enterprise productivity phase, where we move beyond the novelty of retrieval-augmented-generation-based chatbots and agentic experimentation. In our view, 2026 will be remembered as the year that kicked off decades of enterprise AI value creation. ... Bob Laliberte agreed the prediction is plausible and argued OpenAI is clearly pushing into the enterprise developer segment. He said the consumerization pattern is repeating – consumer adoption often drives faster enterprise adoption – and he viewed OpenAI’s Super Bowl presence as a flag in the ground, with Codex ads and meaningful spend behind them. He said he is hearing from enterprises using Codex in meaningful ways, including cases where as much as three quarters of programming is done with Codex, and discussions of a first 100% Codex-developed product. He emphasized that driving broader adoption requires leaning on early adopters, surfacing use cases, and showing productivity gains so they can be replicated across environments. ... Paul Nashawaty said application development is bifurcating. Lines of business and citizen developers are taking on more responsibility for work that historically sat with professional developers. He said professional developers don’t go away – their work shifts toward “true professional development,” while line of business developers focus on immediate outcomes.


Snowflake CEO: Software risks becoming a “dumb data pipe” for AI

Ramaswamy argues that his company lives with the fear that organizations will stop using AI agents built by software vendors. There must certainly be added value for these specialized agents, for example, that they are more accurate, operate more securely, and are easier to use. For experienced users of existing platforms, this is already the case. A solution such as NetSuite or Salesforce offers AI functionality as an extension of familiar systems, whereby adoption of these features almost always takes place without migration. Ramaswamy believes that customers have the final say on this. If they want to consult a central AI and ignore traditional enterprise apps, then they should be given that option, according to the Snowflake CEO. ... However, the tug-of-war around the center of AI is in full swing. It is not without reason that vendors claim that their solution should be the central AI system, for example because they contain enormous amounts of data or because they are the most critical application for certain departments. So far, AI trends among these vendors have revolved around the adoption of AI chatbots, easy-to-set-up or ready-made agentic workflows, and automatic document generation. During several IT events over the past year, attendees toyed with the idea that old interfaces may disappear because every employee will be talking to the data via AI.


Will LLMs Become Obsolete?

“We are at a unique time in history,” write Ashu Garg and Jaya Gupta at Foundation Capital, citing multimodal systems, multiagent systems, and more. “Every layer in the AI stack is improving exponentially, with no signs of a slowdown in sight. As a result, many founders feel that they are building on quicksand. On the flip side, this flywheel also presents a generational opportunity. Founders who focus on large and enduring problems have the opportunity to craft solutions so revolutionary that they border on magic.” ... “When we think about the future of how we can use agentic systems of AI to help scientific discovery,” Matias said, “what I envision is this: I think about the fact that every researcher, even grad students or postdocs, could have a virtual lab at their disposal ...” ... In closing, Matias described what makes him enthusiastic about the future. “I'm really excited about the opportunity to actually take problems that make a difference, that if we solve them, we can actually have new scientific discovery or have societal impact,” he said. “The ability to then do the research, and apply it back to solve those problems, what I call the ‘magic cycle’ of research, is accelerating with AI tools. We can actually accelerate the scientific side itself, and then we can accelerate the deployment of that, and what would take years before can now take months, and the ability to actually open it up for many more people, I think, is amazing.”


Deepfake business risks are growing – here's what leaders need to know

The risk of deepfake attacks appears to be growing as the technology becomes more accessible. The threat from deepfakes has escalated from a “niche concern” to a “mainstream cybersecurity priority” at “remarkable speed”, says Cooper. “The barrier to entry has lowered dramatically thanks to open source software and automated creation tools. Even low-skilled threat actors can launch highly convincing attacks.” The target pool is also expanding, says Cooper. “As larger corporations invest in advanced mitigation strategies, threat actors are turning their attention to small and medium-sized businesses, which often lack the resources and dedicated cybersecurity teams to combat these threats effectively.” The technology itself is also improving. Deepfakes have already improved “a staggering amount” – even in the past six months, says McClain. “The tech is internalising human mannerisms all the time. It is already widely accessible at a consumer level, even used as a form of entertainment via face swap apps.” ... Meanwhile, technology can be helpful in mitigating deepfake attack risks. Cooper recommends deepfake detection tools that use AI to analyse facial movements, voice patterns and metadata in emails, calls and video conferences. “While not foolproof, these tools can flag suspicious content for human review.” With the risks in mind, it also makes sense to implement multi-factor authentication for sensitive requests. 


The Big Shift: From “More Qubits” to Better Qubits

As quantum systems grew, it became clear that more qubits do not always mean more computing power. Most physical qubits are too noisy, unstable, and short-lived to run useful algorithms. Errors pile up faster than useful results, and after a while, the output stops making sense. Adding more fragile qubits now often makes things worse, not better. This realization has led to a shift in thinking across the field. Instead of asking how many qubits fit on a chip, researchers and engineers now ask a tougher question: how many of those qubits can actually be trusted? ... For businesses watching from the outside, this change matters. It is easier to judge claims when vendors talk about error rates, runtimes, and reliability instead of vague promises. It also helps set realistic expectations. Logical qubits show that early useful systems will be small but stable, solving specific problems well instead of trying to do everything. This new way of thinking also changes how we look at risk. The main risk is not that quantum computing will fail completely. Instead, the risk is that organizations will misunderstand early progress and either invest too much because of hype or too little because of old ideas. Knowing how important error correction is helps clear up this confusion. One of the clearest signs of maturity is how failure is handled. In early science, failure can be unclear. 


Reimagining digital value creation at Inventia Healthcare

“The business strategy and IT strategy cannot be two different strategies altogether,” he explains. “Here at Inventia, IT strategy is absolutely coupled with the core mission of value-added oral solid formulations. The focus is not on deploying systems, it is on creating measurable business value.” Historically, the pharmaceutical industry has been perceived as a laggard in technology adoption, largely due to stringent regulatory requirements. However, this narrative has shifted significantly over the last five to six years. “Regulators and organisations realised that without digitalisation, it is impossible to reach the levels of efficiency and agility that other industries have achieved,” notes Nandavadekar. “Compliance is no longer a barrier, it is an enabler when implemented correctly.” ... “Digitalisation mandates streamlined and harmonised operations. Once all processes are digital, we can correlate data across functions and even correlate how different operations impact each other,” points out Nandavadekar. ... With expanding digital footprints across cloud, IoT, and global operations, cybersecurity has become a mission-critical priority for Inventia. Nandavadekar describes cybersecurity as an “iceberg,” where visible threats represent only a fraction of the risk landscape. “In the pharmaceutical world, cybersecurity is not just about hackers, it is often a national-level activity. India is emerging as a global pharma hub, and that makes us a strategic target.”


Scaling Agentic AI: When AI Takes Action, the Real Challenge Begins

Organizations often underestimate tool risk. The model is only one part of the decision chain. The real exposure comes from the tools and APIs the agent can call. If those are loosely governed, the agent becomes privileged automation moving faster than human oversight can keep up. “Agentic AI does not just stress models. It stress-tests the enterprise control plane.” ... Agentic AI requires reliable data, secure access, and strong observability. If data quality is inconsistent and telemetry is incomplete, autonomy turns into uncertainty. Leaders need a clear method to select use cases based on business value, feasibility, risk class, and time-to-impact. The operating model should enforce stage gates and stop low-value projects early. Governance should be built into delivery through reusable patterns, reference architectures, and pre-approved controls. When guardrails are standardized, teams move faster because they no longer have to debate the same risk questions repeatedly. ... Observability must cover the full chain, not just model performance. Teams should be able to trace prompts, context, tool calls, policy decisions, approvals, and downstream outcomes. ... Agentic AI introduces failure modes that can appear plausible on the surface. Without traceability and real-time signals, organizations are forced to guess, and guessing is not an operating strategy.


Security at AI speed: The new CISO reality

The biggest shift isn’t tooling (we’ve always had to choose our platforms carefully); it’s accountability. When an AI agent acts at scale, the CISO remains accountable for the outcome. That governance and operating model simply didn’t exist a decade ago. Equally, CISOs now carry accountability for inaction. Failing to adopt and govern AI-driven capabilities doesn’t preserve safety; it increases exposure by leaving the organization structurally behind. The CISO role will need to adopt a fresh mindset and the skills to go with it to meet this challenge. ... While quantification has value, seeking precision based on historical data before ensuring strong controls, ownership, and response capability creates a false sense of confidence. It anchors discussion in technical debt and past trends, rather than aligning leadership around emerging risks and sponsoring a bolder strategic leap through innovation. That forward-looking lens drives better strategy, faster decisions, and real organizational resilience. ... When a large incumbent experiences an outage, breach, model drift, or regulatory intervention, the business doesn’t degrade gracefully; it fails hard. The illusion of safety disappears quickly when you realise you don’t own the kill switches, can’t constrain behaviour in real time, and don’t control the recovery path. Vendor scale does not equal operational resilience.


Why Borderless AI Is Coming to an End

Most countries are still wrestling with questions related to "sovereign AI" - the technical ambition to develop domestic compute, models and data capabilities - and "AI sovereignty" - the political and legal right to govern how AI operates within national boundaries, said Gaurav Gupta, vice president analyst at Gartner. Most national strategies today combine both. "There is no AI journey without thinking geopolitics in today's world," said Akhilesh Tuteja, partner, advisory services and former head of cybersecurity at KPMG. ... Smaller nations, Gupta said, are increasing their investment in domestic AI stacks as they look for alternatives to the closed U.S. model, including computing power, data centers, infrastructure and models aligned with local laws, culture and region. "Organizations outside the U.S. and China are investing more in sovereign cloud IaaS to gain digital and technological independence," said Rene Buest, senior director analyst at Gartner. "The goal is to keep wealth generation within their own borders to strengthen the local economy." ... The practical barriers to AI sovereignty start with infrastructure. The level of investment is beyond the reach of most countries, creating a fundamental asymmetry in the global AI landscape. "One gigawatt new data centers cost north of $50 billion," Gupta said. "The biggest constraint today is availability of power … You are now competing for electricity with residential and other industrial use cases."


Why Data Governance Fails in Many Organizations: The IT-Business Divide

The problem extends beyond missing stewardship roles to a deeper documentation chaos. Organizations often have multiple documents addressing the same concepts, but the language varies depending on which unit you ask, when you ask, and to whom you’re speaking. Some teams call these documents “policies,” while others use terms like “guidelines,” “standards,” or “procedures,” with no clarity on which term means what or whether these documents represent the same authority level. More critically, no one has the responsibility or authority to define which version is the “appropriate” one. Documents get written – often as part of project deliverables or compliance exercises – but no governance process ensures they’re actually embedded into operations, kept current, or reconciled with other documents covering similar ground. ... Without proper governance, a problematic pattern emerges: Technical teams impose technical obligations on business people, requiring them to validate data formats, approve schema changes, or participate in narrow technical reviews, while the real governance questions go unaddressed. Business stakeholders are involved only in a few steps of the data lifecycle, without understanding the whole picture or having authority over business-critical decisions. ... The governance challenges become even more insidious when organizations produce reports that appear identical in format while concealing fundamental differences in their underlying methodology.

Daily Tech Digest - February 15, 2026


Quote for the day:

"Accept responsibility for your life. Know that it is you who will get you where you want to go, no one else." -- Les Brown



AI will likely shut down critical infrastructure on its own, no attackers required

“The next great infrastructure failure may not be caused by hackers or natural disasters, but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” said Wam Voster, VP Analyst at Gartner. “A secure ‘kill-switch’ or override mode accessible only to authorized operators is essential for safeguarding national infrastructure from unintended shutdowns caused by an AI misconfiguration.” “Modern AI models are so complex they often resemble black boxes. Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed,” Voster added. ... Bob Wilson, cybersecurity advisor at the Info-Tech Research Group, also worries about the near inevitability of a serious industrial AI mishap. "The plausibility of a disaster that results from a bad AI decision is quite strong. With AI becoming embedded in enterprise strategies faster than governance frameworks can keep up, AI systems are advancing faster and outpacing risk controls,” Wilson said. “We can see the leading indicators of rapid AI deployment and limited governance increase potential exposure, and those indicators justify investments in governance and operational controls.”
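
A minimal sketch of the operator-only override pattern Gartner describes: AI-recommended control actions are ignored while a human-engaged manual mode is active. Operator identities, the in-memory state, and the setpoint format are illustrative; a real kill switch would rely on hardware interlocks and out-of-band authentication, not a Python dict.

```python
import time

AUTHORIZED_OPERATORS = {"op-aydin", "op-okafor"}   # illustrative identities
_override = {"active": False, "operator": None, "since": None}

def engage_override(operator_id: str) -> bool:
    """Only authorized operators can force the system into manual mode."""
    if operator_id not in AUTHORIZED_OPERATORS:
        return False
    _override.update(active=True, operator=operator_id, since=time.time())
    return True

def apply_ai_setpoint(setpoint: dict) -> str:
    """AI-issued control actions are blocked while the override is active."""
    if _override["active"]:
        return f"blocked: manual mode engaged by {_override['operator']}"
    return f"applied: {setpoint}"

engage_override("op-aydin")
print(apply_ai_setpoint({"pump_3_flow": 0.0}))  # blocked: manual mode engaged by op-aydin
```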


New Architecture Could Cut Quantum Hardware Needed to Break RSA-2048 by Tenfold

The Pinnacle Architecture replaces surface codes with QLDPC codes, a class of error-correcting codes in which each qubit interacts with only a small number of others, even as the machine grows. That structure allows errors to be detected without complex, all-to-all connections, an advance that keeps correction circuits fast and reduces the number of physical qubits needed per logical qubit. To dive a little deeper, the architecture is built from modular “processing units,” “magic engines,” and optional “memory” blocks. Each processing unit consists of QLDPC code blocks — the error-correcting structures that protect the logical qubits — along with measurement hardware that enables arbitrary logical Pauli measurements during each correction cycle. ... The architecture hints at the difference between surface codes and QLDPC. Surface codes require dense, grid-like local connectivity and many qubits per logical qubit. QLDPC spreads parity checks more sparsely across a block. One way to picture the difference is wiring. Surface codes are like protecting data by wiring every component into a dense grid — reliable, but heavy and hardware-intensive. QLDPC codes achieve protection with far fewer connections per qubit, more like a sparsely wired network that still catches errors but uses much less hardware. ... If fewer than 100,000 physical qubits were sufficient to break RSA-2048 under realistic error models, the threshold for cryptographic risk could arrive sooner than many surface-code-based estimates imply.
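
A rough, back-of-the-envelope way to see where the qubit savings come from, with illustrative numbers that are not the Pinnacle Architecture's actual parameters:

```latex
% A distance-d surface code uses roughly 2d^2 physical qubits per logical qubit,
% so its encoding rate shrinks as the distance grows:
\[
  \text{surface-code rate} \;\approx\; \frac{k}{n} \;\approx\; \frac{1}{2d^{2}}
  \qquad (d = 25 \;\Rightarrow\; \approx 1250 \text{ physical qubits per logical qubit}).
\]
% Good QLDPC code families keep a constant encoding rate as the block grows,
% so the per-logical-qubit overhead stays bounded instead of growing with distance:
\[
  \text{QLDPC rate} \;=\; \frac{k}{n} \;=\; c, \quad c \text{ independent of } n .
\]
```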


5 key trends reshaping the SIEM market

By converging SIEM with XDR and SOAR, organizations get a unified security platform that consolidates data, reduces complexity, and improves response times, as systems can be configured to automatically contain threats without any manual intervention. ... “The term SIEM++ is being used to refer to this next step in SIEM, which is designed for more current needs within security ops asking for automation, AI, and real-time responses. Hence, the increase in SIEM alongside other tools,” Context’s Turner says. ... “The full enforcement of the NIS2 directive in Europe has forced midtier companies to move from basic monitoring to auditable security operations,” Context’s Turner explains. “These companies are too large for simple tools but too small for massive 24/7 internal SOCs. They are buying the SIEM++ platforms to serve as their central source of truth for auditors.” ... Cloud-based SIEMs remove the need for expensive hardware upgrades associated with traditional on-premises deployments, offering scalability and faster response times alongside potentially more cost-effective usage-based pricing models. ... Static rule-based SIEMs struggle to keep pace with today’s sophisticated cyber threats, which is why AI-powered SIEM platforms use real-time machine learning (ML) to analyze vast amounts of security data, improving their ability to identify anomalies and previously unseen attack techniques that legacy technologies might miss.


AI agent seemingly tries to shame open source developer for rejected pull request

Evaluating lengthy, high-volume, often low-quality submissions from AI bots takes time that maintainers, often volunteers, would rather spend on other tasks. Concerns about slop submissions – whether from people or AI models – have become common enough that GitHub recently convened a discussion to address the problem. Now AI slop comes with an AI slap. ... In his blog post, Shambaugh describes the bot's "hit piece" as an attack on his character and reputation. "It researched my code contributions and constructed a 'hypocrisy' narrative that argued my actions must be motivated by ego and fear of competition," he wrote. "It speculated about my psychological motivations, that I felt threatened, was insecure, and was protecting my fiefdom. It ignored contextual information and presented hallucinated details as truth. It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was 'better than this.' And then it posted this screed publicly on the open internet." ... Daniel Stenberg, founder and lead developer of curl, has been dealing with AI slop bug reports for the past two years and recently decided to shut down curl's bug bounty program to remove the financial incentive for low-quality reports – which can come from people as well as AI models.


How to ground AI agents in accurate, context-rich data

Building and operating AI agents using unorganized data is like trying to navigate a rolling dinghy in a stormy ocean of 100-foot-tall waves. Solving this conundrum is one of the most important tasks for companies today, as they struggle to empower their AI agents to reliably work as designed and expected. To succeed, this firehose of unsorted data must be put into the right contexts so that enterprises can use and process it correctly and quickly to deliver the desired business results. ... Adding to the data demands is that AI agents can perform multiple steps or processes at a time while working on a task. But those concurrent and consecutive capabilities can require multiple streams of data, adding to the massive data pressures using search. “What that means is that at each of those steps, there’s an opportunity to find some relevant data, use that data in a meaningful way, and take the next action based on the results,” Mather explained. “So, the importance of the relevance at each step becomes paramount. If there’s bad results at the first step, it just compounds at every step that the agent takes.” The consequences are especially problematic when enterprises are trying to use AI agents to drive a business process or take meaningful actions within an application.
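
To illustrate the compounding effect Mather describes, here is a toy sketch of an agent loop that retrieves context at every step; the keyword-match retrieval and knowledge base are placeholders for whatever search or RAG layer the agent actually uses.

```python
def retrieve_context(query: str, knowledge_base: dict) -> str:
    """Placeholder retrieval: return the best-matching snippet or an empty string."""
    hits = [text for key, text in knowledge_base.items() if key in query.lower()]
    return hits[0] if hits else ""

def run_agent_task(steps, knowledge_base):
    """Each step retrieves fresh context; a miss early on degrades every later step."""
    carried_context = ""
    for step in steps:
        snippet = retrieve_context(step, knowledge_base)
        carried_context += " " + (snippet or f"[no grounding found for: {step}]")
        # The "action" here is just recording what the agent would act on.
        print(f"step={step!r} grounded_on={snippet or 'NOTHING'}")
    return carried_context

kb = {"refund": "Refunds over $500 require manager approval.",
      "invoice": "Invoices are stored in the billing system keyed by order ID."}
run_agent_task(["look up invoice for order 871",
                "decide refund amount",
                "notify customer"], kb)
```

The third step finds nothing to ground on, and whatever the agent improvises there flows into every downstream action, which is the "bad results at the first step just compound" problem in miniature.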


Beyond Code: How Engineers Need to Evolve in the AI Era

Generative AI lets you be more productive than you ever thought possible if you are willing to embrace it. It is a similar skill to being able to manage other humans, being able to delegate problems. Really great individual engineers can have trouble delegating, because they're worried that if they give a task to someone else that they haven't figured out how to do completely themselves yet, that it won't get done well enough. ... a lot of companies are now hiring engineers to go sit in the office of their customer, and they're an expert in their own company's platform, but they also become an expert in the customer's platform and the customer's problem, and they're right there embedded. And I love that model, because that is how you learn to apply technology directly to a problem, you are there with the person who has the problem. This is what we've been telling product managers to do for years. ... There will still be complex things to do as well that other people aren't going to think of to do, but they're going to be more innovative. They're not going to be the rogue repetition of building the same SaaS features we've seen everywhere. That can be done with generative AI, and frankly, isn't that good? Do we really want to keep doing that stuff ourselves? Let us work on the really maybe new problems that no one has ever solved before, bringing new theoretical ideas into software engineering, and let the more boilerplate stuff be taken care of.


Why there’s no ‘screenless’ revolution

One trend that emerged from last month’s Consumer Electronics Show (CES) was the range of devices that can record, analyze, and assist (using AI) without requiring visual focus. Many tech startups are working on screenless AI hardware. ... One reason these devices are more viable now than in the past is the miniaturization of duplex audio, which enables constant, bi-directional conversation where the AI can be interrupted or talk over the user naturally. ... If you look carefully at the world of screenless wearables, you can see that none of them are designed to be used in isolation. They’re all peripherals to screen-based devices such as smartphones. And while the Ray-Ban Meta type audio AI glasses are great, the future of AI glasses is closer to the Meta Ray-Ban Display glasses with one screen or two screens in the glass. There’s no way companies like Apple will offer alternatives to their own popular screen-based devices. Going totally screenless is for kids. Or rather, it should be. ... The only way to enforce a ban is to conduct a thorough search on every student every day before school — something that’s totally impractical and undesirable. Instead, schools, parents and teachers should all be uniting behind the best screenless wearables for students as a workable alternative to obsessive smartphone and screen use. The reality is that the total ubiquity of AI is coming. There’s the toxic version — the rise of AI slop, for instance — and the non-toxic version. 


The Leadership Crisis No One Is Naming: A Need For Emotionally Whole Leaders

Leaders operating from unhealthy emotional frameworks often exhibit a variety of symptoms. They may show fear-based decision making, driven by a need to control outcomes rather than empower people. There may be micromanagement rooted in insecurity and mistrust instead of accountability. I've seen fight-or-flight leadership, where urgency replaces strategy and reaction replaces discernment. There can also be perfectionism, which confuses excellence with rigidity and punishes humanity. Then there's fearmongering, where pressure and anxiety are used as motivational tools. These patterns are rarely intentional, yet they are deeply consequential. ... The downstream effects of emotionally unhealthy leadership are often measurable and compounding. Stifled creativity plagues teams as they stop offering ideas that may be criticized or dismissed. Organizations may suffer increased attrition, particularly among high performers who have options. Employees may perform defensively rather than boldly in the presence of psychological unsafety. Cultures driven by urgency without sustainability can become breeding grounds for burnout and toxicity, reeking of institutional mistrust that erodes collaboration and loyalty. ... Developing emotionally intelligent leadership is not about personality change; it is about capacity building. The most effective leaders treat emotional health as a leadership discipline, not a personal afterthought.


Alarm Overload at the Industrial Edge: When More Visibility Reduces Reliability

More sensors, more connected assets, and more analytics can produce more insight, but they can also produce a flood of fragmented alerts that bury the few signals people actually need. When alarms become noisy or ambiguous, response slows down, fatigue sets in, and confidence in the monitoring system erodes. That is not a user inconvenience. It is a decision-quality problem. ... The purpose of alarm management is not to surface everything that happens. It is to surface what requires timely action, and to do it in a way that supports fast, correct decisions. If the alarm stream is noisy, inconsistent, or hard to interpret, the system is not doing its job. People respond the only way humans can: they tune out, acknowledge quickly, and rely on informal workarounds. ... Alarm overload is likely already affecting reliability if teams regularly see any of the following: alarms that do not require action, inconsistent severity definitions across systems, duplicate alerts for the same condition, frequent acknowledgements with no follow-up, or confusion about who owns the response. These are common as edge programs grow. ... The path forward is not to silence alarms indiscriminately. It is to modernize alarm management for the edge era: unify meaning across sources, deliver context that supports action, maintain governance as systems evolve, and design workflows that match how people actually respond.
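As a rough illustration of two of those fixes, unifying severity meaning across sources and collapsing duplicate alerts for the same condition, here is a minimal sketch; the field names and severity mappings are assumptions, not any vendor's schema:

```python
# Minimal sketch: normalize per-source severity labels onto one scale and
# collapse duplicate alarms for the same asset/condition pair.
from collections import defaultdict

SEVERITY_MAP = {  # each monitoring source's labels mapped onto one shared scale
    "crit": "critical", "sev1": "critical", "critical": "critical",
    "warn": "warning", "sev2": "warning", "minor": "warning",
    "info": "informational", "sev3": "informational",
}

def normalize(alarm: dict) -> dict:
    alarm["severity"] = SEVERITY_MAP.get(alarm["severity"].lower(), "unclassified")
    return alarm

def deduplicate(alarms: list[dict]) -> list[dict]:
    """Keep one alarm per (asset, condition), recording how many duplicates were folded in."""
    buckets = defaultdict(list)
    for alarm in map(normalize, alarms):
        buckets[(alarm["asset"], alarm["condition"])].append(alarm)
    merged = []
    for _, group in buckets.items():
        first = group[0]
        first["duplicates_suppressed"] = len(group) - 1
        merged.append(first)
    return merged
```

The point of the sketch is the governance decision behind it: the severity map and the deduplication key are exactly the things that drift as edge systems grow, so they need an owner and periodic review.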


Beyond Automation: How Generative AI in DevOps is Redefining Software Delivery

Integrating a GenAI DevOps workflow means moving from a reactive ‘fix it when it breaks’ mindset to a more generative one. For example, instead of spending four hours writing a custom Jenkins pipeline, you can now describe your requirements to an AI agent and get a working YAML file in under two minutes. Moreover, if you wish to scale these capabilities, exploring professional GenAI development services can help you build custom models that understand your particular codebase and security protocols. ... Pipelines are the lifeblood of DevOps, but they are also the first thing to break. GenAI can analyze historical build data to predict why a build might fail before it even starts. It can also auto-generate unit tests to ensure that your ‘quick fix’ doesn’t break anything downstream. ... humans make typos in config files, especially at 2:00 a.m. AI doesn’t get tired. By using GenAI to generate and validate configuration files, you ensure strict consistency across dev, staging and production environments. It acts as a continuous linter that understands the intent behind the code, catching logic errors that traditional syntax checkers would miss. ... Cloud bills are a nightmare to manage manually. GenAI can analyze thousands of lines of cloud-spending data and generate the exact CLI commands needed to shut down underutilized resources or right-size your clusters. It doesn’t just tell you that you’re overspending; it gives you the solution to fix it immediately.
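As a minimal sketch of that "describe the requirements, get a pipeline" loop, the snippet below asks a model for a CI pipeline definition and then applies the same gate you would apply to a 2:00 a.m. hand edit: parse it before a human reviews it. The OpenAI client and model name are assumptions; any provider with a comparable API would work the same way.

```python
# Sketch: generate a CI pipeline from a plain-language requirement, then
# validate that the output at least parses before it goes to code review.
import yaml
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirements = (
    "A CI pipeline that checks out the repo, runs pytest on Python 3.12, "
    "and only deploys to staging when tests pass on the main branch."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: use whatever model you have access to
    messages=[
        {"role": "system", "content": "Return only a YAML pipeline definition, no prose."},
        {"role": "user", "content": requirements},
    ],
)

pipeline = response.choices[0].message.content

# Generated config deserves the same checks as hand-written config:
# parse it, lint it, and put it through review before it ships.
yaml.safe_load(pipeline)  # raises if the model returned malformed YAML
print(pipeline)
```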


Daily Tech Digest - February 14, 2026


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



UK CIOs struggle to govern surge in business AI agents

The findings point to a growing governance challenge alongside the rapid spread of agent-based systems across the enterprise. AI agents, which can take actions or make decisions within software environments, have moved quickly from pilots into day-to-day operations. That shift has increased demands for monitoring, audit trails and accountability across IT and risk functions. UK CIOs also reported growing concern about the spread of internally built tools. ... The results suggest "shadow AI" risks are becoming a mainstream issue for large organisations. As AI development tools get easier to use, more staff outside IT can build automated workflows, chatbots and agent-like applications. This trend has intensified questions about data access, model behaviour, and whether organisations can trace decisions back to specific inputs and approvals. ... The findings also suggest governance gaps are already affecting operations. Some 84% of UK CIOs said traceability or explainability shortcomings have delayed or prevented AI projects from reaching production, highlighting friction between the push to deploy AI and the work needed to demonstrate effective controls. For CIOs, the issue also intersects with enterprise risk management and information security. Unmonitored agents and rapidly developed internal apps can create new pathways into sensitive datasets and complicate incident response if an organisation cannot determine which automated process accessed or changed data.
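One concrete way to close the traceability gap the survey describes is to make every agent action leave an audit record that ties outputs back to inputs. The sketch below is illustrative only; the decorator, the agent name, and the log fields are assumptions rather than any particular governance product:

```python
# Sketch: wrap each agent tool so every invocation leaves a traceable record
# of what was called, with what inputs, and what came back.
import functools
import json
import logging
import time

audit_log = logging.getLogger("agent.audit")
logging.basicConfig(level=logging.INFO)

def audited(agent_name: str):
    """Decorator that records every tool call an agent makes."""
    def decorator(tool):
        @functools.wraps(tool)
        def wrapper(*args, **kwargs):
            record = {
                "agent": agent_name,
                "tool": tool.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "timestamp": time.time(),
            }
            result = tool(*args, **kwargs)
            record["output_summary"] = str(result)[:200]
            audit_log.info(json.dumps(record, default=str))
            return result
        return wrapper
    return decorator

@audited("invoice-approval-agent")  # hypothetical agent name
def lookup_supplier(supplier_id: str) -> dict:
    return {"supplier_id": supplier_id, "status": "approved"}
```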


You’ve Generated Your MVP Using AI. What Does That Mean for Your Software Architecture?

While the AI generates an MVP, teams can’t control the architectural decisions the AI makes. They might be able to query the AI on some of the decisions, but many decisions will remain opaque because the AI does not understand why the code that it learned from did what it did. ... From the perspective of the development team, AI-generated code is largely a black box; even if it could be understood, no one has time to do so. Software development teams are under intense time pressure. They turn to AI to partially relieve this pressure, but in doing so they also raise their business sponsors’ expectations regarding productivity. ... As a result, the nature of the work of architecting will shift from up-front design work to empirical evaluation of quality attribute requirements (QARs), i.e. acceptance testing of the minimum viable architecture (MVA). As part of this shift, the development team will help the business sponsors figure out how to test and evaluate the MVP. In response, development teams need to get a lot better at empirically testing the architecture of the system. ... The team needs to know what trade-offs it may need to make, and it needs to articulate those in the prompts to the AI. The AI then works as a very clever search engine, finding possible solutions that might address the trade-offs. As noted above, these still need to be evaluated empirically, but it does save the team some time in coming up with possible solutions.
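What "empirically testing the architecture" can look like in practice is an executable check per QAR, run against the generated MVP rather than trusting the AI's opaque design choices. The endpoint, latency budget, and sample size in this sketch are hypothetical:

```python
# Sketch: express a quality attribute requirement (here, p95 latency) as a
# test that runs against the AI-generated MVP.
import statistics
import time
import requests

ENDPOINT = "http://localhost:8000/api/orders"  # hypothetical MVP endpoint
P95_BUDGET_SECONDS = 0.3                        # hypothetical QAR

def test_p95_latency_meets_qar():
    samples = []
    for _ in range(50):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=5)
        samples.append(time.perf_counter() - start)
    p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
    assert p95 <= P95_BUDGET_SECONDS, f"p95 {p95:.3f}s exceeds the QAR budget"
```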


Successful Leaders Often Lack Self-Awareness

As a leader, how do you respond in emotionally charged situations? It's under pressure that emotions can quickly escalate and unexamined behavioral patterns emerge—for all of us. In my work with senior executives, I have seen time and again how these unconscious “go-to” reactions surface when stakes are high. This is why self-awareness is not a one-time achievement but a lifelong practice—and for many leaders, it remains their greatest blind spot. Why? ... Turning inward to develop self-awareness naturally places you in uncomfortable territory. It challenges long-standing assumptions and exposes blind spots. One client came to me because a colleague described her as harsh. She genuinely did not see herself that way. Another sought my help after his CEO told him he struggled to communicate with him. Through our work together, we uncovered how defensively he responded to feedback, often without realizing it. ... As leaders rise to the top, the accolades that propel them forward are rooted in talent, strategic decision-making and measurable outcomes. However, once at the highest levels, leadership expands beyond execution. The role now demands mastery of relationships—within the organization and beyond, with clients, partners and customers. At this level, self-awareness is no longer optional; it becomes essential.


How Should Financial Institutions Prepare for Quantum Risk?

“Post-quantum cryptography is about proactively developing and building capabilities to secure critical information and systems from being compromised through the use of quantum computers,” said Rob Joyce, then director of cybersecurity for the National Security Agency, in an August 2023 statement. In August 2024, NIST published three post-quantum cryptographic standards — ML-KEM, ML-DSA and SLH-DSA — designed to withstand quantum attacks. These standards are intended to secure data across systems such as digital banking platforms, payment processing environments, email and e-commerce. NIST has encouraged organizations to begin implementation as soon as possible. ... A critical first step is conducting an assessment of which systems and data assets are most at risk. ISACA, the IT governance and security association, recommends building a comprehensive inventory of systems vulnerable to quantum attacks and classifying data based on sensitivity, regulatory requirements and business impact. For financial institutions, this assessment should prioritize customer PII, transaction data, long-term financial records and proprietary business information. Understanding where the greatest financial, reputational and regulatory exposure exists enables IT leaders to focus mitigation efforts where they matter most. Institutions should also conduct executive briefings, staff training and tabletop exercises to build awareness.
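A minimal sketch of that first step might look like the following: inventory which systems still rely on quantum-vulnerable public-key algorithms and rank them by how sensitive and long-lived their data is. The scoring weights and example records are illustrative assumptions, not ISACA guidance:

```python
# Sketch: rank systems for PQC migration by data sensitivity and retention,
# since long-lived confidential data is exposed to "harvest now, decrypt later"
# well before large quantum computers exist.
from dataclasses import dataclass

QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}  # classical public-key algorithms

@dataclass
class SystemRecord:
    name: str
    algorithm: str
    data_sensitivity: int   # 1 (low) .. 5 (customer PII, long-term financial records)
    retention_years: int    # how long the data must remain confidential

def migration_priority(s: SystemRecord) -> int:
    """Higher score = migrate to ML-KEM / ML-DSA sooner."""
    if s.algorithm not in QUANTUM_VULNERABLE:
        return 0
    return s.data_sensitivity * s.retention_years

inventory = [
    SystemRecord("digital-banking-api", "RSA-2048", 5, 10),
    SystemRecord("internal-wiki", "RSA-2048", 1, 1),
]
for s in sorted(inventory, key=migration_priority, reverse=True):
    print(s.name, migration_priority(s))
```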


The cure for the AI hype hangover

The way AI dominates the discussions at conferences is in contrast to its slower progress in the real world. New capabilities in generative AI and machine learning show promise, but moving from pilot to impactful implementation remains challenging. Many experts, including those cited in this CIO.com article, describe this as an “AI hype hangover,” in which implementation challenges, cost overruns, and underwhelming pilot results quickly dim the glow of AI’s potential. Similar cycles occurred with cloud and digital transformation, but this time the pace and pressure are even more intense. ... Too many leaders expect AI to be a generalized solution, but AI implementations are highly context-dependent. The problems you can solve with AI (and whether those solutions justify the investment) vary dramatically from enterprise to enterprise. This leads to a proliferation of small, underwhelming pilot projects, few of which are scaled broadly enough to demonstrate tangible business value. In short, for every triumphant AI story, numerous enterprises are still waiting for any tangible payoff. For some companies, it won’t happen anytime soon—or at all. ... Beyond data, there is the challenge of computational infrastructure: servers, security, compliance, and hiring or training new talent. These are not luxuries but prerequisites for any scalable, reliable AI implementation. In times of economic uncertainty, most enterprises are unable or unwilling to allocate the funds for a complete transformation.


4th-Party Risk: How Commercial Software Puts You At Risk

Unlike third-party providers, however, fourth-party vendors have no contractual relationship with the business. That means companies have little to no visibility into those vendors' operations, and the resulting blind spots are fueling an even greater need to shift from trust-based to evidence-based approaches. That lack of visibility has severe consequences for enterprises and other end-user organizations. ... Illuminating 4th-party blind spots begins with mapping critical dependencies through direct vendors. As you go about this process, don't settle for static lists. Software supply chains are the most common attack vector, and every piece of software you receive contains evidence of its supply chain: embedded libraries, development artifacts, and behavioral patterns. ... Businesses must also implement broader frameworks that go beyond the traditional options such as NIST CSF or ISO 27001, which provide a foundation but ultimately fall short because they assume a degree of contractual control that doesn't exist in fourth-party relationships. No contracts exist that far downstream, and without contractual obligations a business cannot conduct risk assessments, demand compliance documentation, or launch an audit as it might with a third-party vendor. ... Also consider SLSA (Supply-chain Levels for Software Artifacts), whose levels provide measurable security controls to prevent tampering and ensure integrity. For companies operating in regulated industries, consider aligning with emerging requirements.
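If a vendor supplies a software bill of materials, the mapping can start there. The sketch below assumes a CycloneDX-style JSON SBOM and simply flags embedded components whose supplier is not one of your contracted vendors; real SBOMs vary widely in completeness, so treat this as illustrative:

```python
# Sketch: surface fourth-party components from a vendor-supplied SBOM.
import json

def fourth_party_components(sbom_path: str, known_vendors: set[str]) -> list[dict]:
    """List components whose supplier you have no direct relationship with."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):
        supplier = (component.get("supplier") or {}).get("name", "unknown")
        if supplier not in known_vendors:
            findings.append({
                "component": component.get("name"),
                "version": component.get("version"),
                "supplier": supplier,
            })
    return findings

# Usage (hypothetical file and vendor name): anything returned is a dependency
# you inherited without a contract.
# print(fourth_party_components("vendor-product.cdx.json", {"Acme Corp"}))
```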


Geopatriation and sovereign cloud: how data returns to the source

The key to understanding a sovereign cloud, adds Google Cloud Spain’s national technology director Héctor Sánchez Montenegro, is that it’s not a one-size-fits-all concept. “Depending on the location, sector, or regulatory context, sovereignty has a different meaning for each customer,” he says. Google already offers sovereign clouds, whose guarantee of sovereignty isn’t based on a single product, but on a strategy that separates the technology from the operations. “We understand that sovereignty isn’t binary, but rather a spectrum of needs we guarantee through three levels of isolation and control,” he adds. ... One of the certainties of this sovereign cloud boom is that it’s closely connected to the context in which organizations, companies, and other cloud end users operate. While digital sovereignty was less prevalent at the beginning of the century, it’s now become ubiquitous, especially as political decisions in various countries have solidified technology as a key geostrategic asset. “Data sovereignty is a fundamental part of digital sovereignty, to the point that in practice, it’s becoming a requirement for employment contracts,” says María Loza ... With the technological landscape becoming more uncertain and complex, the goal is to know and mitigate risks where possible, and create additional options. “We’re at a crucial moment,” Loza Correa points out. “Data is a key business asset that must be protected.”


Managing AI Risk in a Non-Deterministic World: A CTO’s Perspective

Drawing parallels to the early days of cloud computing, Chawla notes that while AI platforms will eventually rationalize around a smaller set of leaders, organizations cannot afford to wait for that clarity. “The smartest investments right now are fearlessly establishing good data infrastructure, sound fundamentals, and flexible architectures,” she explains. In a world where foundational models are broadly accessible, Chawla argues that differentiation shifts elsewhere. ... Beyond tooling, Chawla emphasizes operating principles that help organizations break silos. “Improve the quality at the source,” she says. “Bring DevOps principles into DataOps. Clean it up front, keep data where it is, and provide access where it needs to be.” ... Bias, hallucinations, and unintended propagation of sensitive data are no longer theoretical risks. Addressing them requires more than traditional security controls. “It’s layering additional controls,” Chawla says, “especially as we look at agentic AI and agentic ops.” ... Auditing and traceability are equally critical, especially as models are fine-tuned with proprietary data. “You don’t want to introduce new bias or model drift,” she explains. “Testing for bias is super important.” While regulatory environments differ across regions, Chawla stresses that existing requirements like GDPR, data sovereignty, PCI, and HIPAA still apply. AI does not replace those obligations; it intensifies them.
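Bias testing of the kind Chawla describes can start with something as simple as tracking an outcome-rate gap across groups before and after fine-tuning. The sketch below is illustrative; the groups, toy data, and acceptable gap are assumptions, not a regulatory threshold:

```python
# Sketch: compare a model's approval-rate gap across groups before and after
# fine-tuning, and flag the release if the gap widens.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: (group_label, approved) pairs from a held-out test set."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Difference between the best- and worst-treated group's approval rate."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy data for illustration only.
baseline   = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
fine_tuned = [("A", True), ("A", True), ("A", True),  ("B", True), ("B", False), ("B", False)]

MAX_GAP_INCREASE = 0.05  # assumption: set according to your own risk appetite
if parity_gap(fine_tuned) > parity_gap(baseline) + MAX_GAP_INCREASE:
    print("Fine-tuning widened the approval-rate gap; hold the release for review.")
```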


CVEs are set to top 50,000 this year, marking a record high – here’s how CISOs and security teams can prepare for a looming onslaught

"Much like a city planner considering population growth before commissioning new infrastructure, security teams benefit from understanding the likely volume and shape of vulnerabilities they will need to process," Leverett added. "The difference between preparing for 30,000 vulnerabilities and 100,000 is not merely operational, it’s strategic." While the figures may be jarring for business leaders, Kevin Knight, CEO of Talion, said it’s not quite a worst-case scenario. Indeed, it’s the impact of the vulnerabilities within their specific environments that business leaders and CISOs should be focusing on. ... Naturally, security teams could face higher workloads and will be contending with a more perilous threat landscape moving forward. Adding insult to injury, Knight noted that security teams are often brought in late during the procurement process - sometimes after contracts have been signed. In some cases, applications are also deployed without the CISO’s knowledge altogether, creating blind spots and increasing the risk that critical vulnerabilities are being missed. Meanwhile, poor third-party risk management means organizations can unknowingly inherit their suppliers’ vulnerabilities, effectively expanding their attack surface and putting their sensitive data at risk of being breached. "As CVE disclosures continue to rise, businesses must ensure the CISO is involved from the outset of technology decisions," he said. 


Data Privacy in the Age of AI

The first challenge stems from the fact that AI systems run on large volumes of customer data. This “naturally increases the risk of data being used in ways that go beyond what customers originally expected, or what regulations allow,” says Chiara Gelmini, financial services industry solutions director at Pegasystems. This is made trickier by the fact that some AI models can be “black boxes to a certain degree,” she says. “So it’s not always clear, internally or to customers, how data is used or how decisions are actually made,” she tells SC Media UK. ... AI is “fully inside” the existing data-protection regime, namely the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, Gelmini explains. Under these current laws, if an AI system uses personal data, it must meet the same standards of lawfulness, transparency, data minimisation, accuracy, security and accountability as any other processing, she says. Meanwhile, organisations are expected to prove they have thought the area through, typically by carrying out a Data Protection Impact Assessment (DPIA) before deploying high-risk AI. ... The growing use of AI can pose a risk, but only if it gets out of hand. As AI becomes easier to adopt and more widespread, the practical way to stay ahead of these risks is “strong AI governance,” says Gelmini. “Firms should build privacy in from the start, mask private data, lock down security, make models explainable, test for bias, and keep a close eye on how systems behave over time.”
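Of the controls Gelmini lists, masking private data is the most mechanical place to start: strip obvious identifiers before text ever reaches a model or a prompt log. The patterns below are deliberately simple illustrations; production systems need proper entity recognition rather than a pair of regexes:

```python
# Sketch: mask obvious identifiers before a prompt is sent to a model or logged.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com paid with card 4111 1111 1111 1111."
print(mask_pii(prompt))  # -> "Customer [EMAIL] paid with card [CARD]."
```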