Daily Tech Digest - February 17, 2026


Quote for the day:

"If you want to become the best leader you can be, you need to pay the price of self-discipline." -- John C. Maxwell



6 reasons why autonomous enterprises are still more a vision than reality

"AI is the first technology that allows systems that can reason and learn to be integrated into real business processes," Vohra said. ... Autonomous organizations, he continued, "are built on human-AI agent collaboration, where AI handles speed and scale, leaving judgment and strategy up to humans." They are defined by "AI systems that go beyond just generating insights in silos, which is how most enterprises are currently leveraging AI," he added. Now, the momentum is toward "executing decisions across workflows with humans setting intent and guardrails." ... The survey highlighted that work is required to help develop agents. Only 3% of organizations -- and 10% of leaders -- are actively implementing agentic orchestration. "This limited adoption signals that orchestration is still an emerging discipline," the report stated. "The scarcity of orchestration is a litmus test for both internal capability and external strategic positioning. Successful orchestration requires integrating AI into workflows, systems, and decision loops with precision and accountability." ... Workforce capability gaps continue to be the most frequently cited organizational constraint to AI adoption, as reported by six in 10 executives -- yet only 45% say their organizations offer AI training for all employees. ... As AI takes on more execution and pattern recognition, human value increasingly shifts toward system design, integration, governance, and judgment -- areas where trust, context, and accountability still sit firmly with people.


Finding the key to the AI agent control plane

Agents change the physics of risk. As I’ve noted, an agent doesn’t just recommend code. It can run the migration, open the ticket, change the permission, send the email, or approve the refund. As such, risk shifts from legal liability to existential reality. If a large language model hallucinates, you get a bad paragraph. ... Every time an AI system makes a mistake that a human has to clean up, the real cost of that system goes up. The only way to lower that tax is to stop treating governance as a policy problem and start treating it as architecture. That means least privilege for agents, not just humans. It means separating “draft” from “send.” It means making “read-only” a first-class capability, not an afterthought. It means auditable action logs and reversible workflows. It means designing your agent system as if it will be attacked because it will be. ... Right now, permissions are a mess of vendor-specific toggles. One platform has its own way of scoping actions. Another bolts on an approval workflow. A third punts the problem to your identity and access management team. That fragmentation will slow adoption, not accelerate it. Enterprises can’t scale agents until they can express simple rules. We need to be able to say that an agent can read production data but not write to it. We need to say an agent can draft emails but not send them. We need to say an agent can provision infrastructure only inside a sandbox, with quotas, or that it must request human approval before any destructive action.


PAM in Multi‑Cloud Infrastructure: Strategies for Effective Implementation

The "Identity Gap" has emerged as the leading cause of cloud security breaches. Traditional vault-based Privileged Access Management (PAM) solutions, designed for static server environments, are inadequate for today’s dynamic, API-driven cloud infrastructure. ... PAM has evolved from an optional security measure to an essential and fundamental requirement in multi-cloud environments. This shift is attributed to the increased complexity, decentralized structure, and rapid changes characteristic of modern cloud architectures. As organizations distribute workloads across AWS, Azure, Google Cloud, and on-premises systems, traditional security perimeters have become obsolete, positioning identity and privileged access as central elements of contemporary security strategies. ... Fragmented identity systems hinder multi‑cloud PAM. Centralizing identity and federating access resolves this, with a Unified Identity and Access Foundation managing all digital identities—human or machine—across the organization. This approach removes silos between on-premises, cloud, and legacy applications, providing a single control point for authentication, authorization, and lifecycle management. ... Cloud providers deliver robust IAM tools, but their features vary. A strong PAM approach aligns these tools using RBAC and ABAC. RBAC assigns permissions by job role for easy scaling, while ABAC uses user and environment attributes for tight security.


Giving AI ‘hands’ in your SaaS stack

If an attacker manages to use an indirect prompt injection — hiding malicious instructions in a calendar invite or a web page the agent reads — that agent essentially becomes a confused deputy. It has the keys to the kingdom. It can delete opportunities, export customer lists or modify pricing configurations. ... For AI agents, this means we must treat them as non-human identities (NHIs) with the same or greater scrutiny than we apply to employees. ... The industry is coalescing around the model context protocol (MCP) as a standard for this layer. It provides a universal USB-C port for connecting AI models to your data sources. By using an MCP server as your gateway, you ensure the agent never sees the credentials or the full API surface area, only the tools you explicitly allow. ... We need to treat AI actions with the same reverence. My rule for autonomous agents is simple: If it can’t dry run, it doesn’t ship. Every state-changing tool exposed to an agent must support a dry_run=true mode. When the agent wants to update a record, it first calls the tool in dry-run mode. The system returns a diff — a preview of exactly what will change . This allows us to implement a human-in-the-loop approval gate for high-risk actions. The agent proposes the change, the human confirms it and only then is the live transaction executed. ... As CIOs and IT leaders, our job isn’t to say “no” to AI. It’s to build the invisible rails that allow the business to say “yes” safely. By focusing on gateways, identity and transactional safety, we can give AI the hands it needs to do real work, without losing our grip on the wheel.


AI-fuelled supply chain cyber attacks surge in Asia-Pacific

Exposed credentials, source code, API keys and internal communications can provide detailed insight into business processes, supplier relationships and technology stacks. When combined with brokered access, that information can support impersonation, targeted intrusion and fraud activity that blends in with legitimate use. One area of concern is open-source software distribution, where widely used libraries can spread malicious code at scale. ... The report points to AI-assisted phishing campaigns that target OAuth flows and other single sign-on mechanisms. These techniques can bypass multi-factor authentication where users approve malicious prompts or where tokens are stolen after login. ... "AI did not create supply chain attacks, it has made them cheaper, faster, and harder to detect," Mr Volkov added. "Unchecked trust in software and services is now a strategic liability." The report names a range of actors associated with supply-chain-focused activity, including Lazarus, Scattered Spider, HAFNIUM, DragonForce and 888, as well as campaigns linked to Shai-Hulud. It said these groups illustrate how criminal organisations and state-aligned operators are targeting similar platforms and integration layers. ... The report's focus on upstream compromise reflects a broader trend in cyber risk management, where organisations assess not only their own exposure but also the resilience of vendors and technology supply chains.


Automation cannot come at the cost of accountability; trust has to be embedded into the architecture

Visa is actively working with issuers, merchants, and payment aggregators to roll out authentication mechanisms based on global standards. “Consumers want payments to be invisible,” Chhabra adds. “They want to enjoy the shopping experience, not struggle through the payment process.” Tokenisation plays a critical role in enabling this vision. By replacing sensitive card details with unique digital tokens, Visa has created a secure foundation for tap-and-pay, in-app purchases, and cross-border transactions. In India alone, nearly half a billion cards have already been tokenised. “Once tokenisation is in place, device-based payments and seamless commerce become possible,” Chhabra explains. “It’s the bedrock of frictionless payments.” Fraud prevention, however, is no longer limited to card-based transactions. With real-time and account-to-account payments gaining momentum, Visa has expanded its scope through strategic acquisitions such as Featurespace. The UK-based firm specialises in behavioural analytics for real-time fraud detection, an area Chhabra describes as increasingly critical. “We don’t just want to detect fraud on the Visa network. We want to help prevent fraud across payment types and networks,” he says. Before deploying such capabilities in India, Visa conducts extensive back-testing using localised data and works closely with regulators. “Global intelligence is powerful, but it has to be adapted to local behaviour. You can’t simply overfit global models to India’s unique payment patterns.”


Most ransomware playbooks don't address machine credentials. Attackers know it.

The gap between ransomware threats and the defenses meant to stop them is getting worse, not better. Ivanti’s 2026 State of Cybersecurity Report found that the preparedness gap widened by an average of 10 points year over year across every threat category the firm tracks. ... The accompanying Ransomware Playbook Toolkit walks teams through four phases: containment, analysis, remediation, and recovery. The credential reset step instructs teams to ensure all affected user and device accounts are reset. Service accounts are absent. So are API keys, tokens, and certificates. The most widely used playbook framework in enterprise security stops at human and device credentials. The organizations following it inherit that blind spot without realizing it. ... “Although defenders are optimistic about the promise of AI in cybersecurity, Ivanti’s findings also show companies are falling further behind in terms of how well prepared they are to defend against a variety of threats,” said Daniel Spicer, Ivanti’s Chief Security Officer. “This is what I call the ‘Cybersecurity Readiness Deficit,’ a persistent, year-over-year widening imbalance in an organization’s ability to defend their data, people, and networks against the evolving threat landscape.” ... You can’t reset credentials that you don’t know exist. Service accounts, API keys, and tokens need ownership assignments mapped pre-incident. Discovering them mid-breach costs days.


CISO Julie Chatman offers insights for you to take control of your security leadership role

In a few high-profile cases, security leaders have faced criminal charges for how they handled breach disclosures, and civil enforcement for how they reported risks to investors and regulators. The trend is toward holding CISOs personally accountable for governance and disclosure decisions. ... You’re seeing the rise of fractional CISOs, virtual CISOs, heads of IT security instead of full CISO titles. It’s a lot harder to hold a fractional CISO personally liable. This is relatively new. The liability conversation really intensified after some high-profile enforcement actions, and now we’re seeing the market respond. ... First, negotiate protection upfront. When you’re thinking about accepting a CISO role, explicitly ask about D&O insurance coverage. If the CISO is not considered a director or an officer of the company and can’t be given D&O coverage, will the company subsidize individual coverage? There are companies now selling CISO-specific policies. Make this part of your compensation negotiation. Second, do your job well but understand the paradox. Sometimes when you do your job properly, you’re labeled ‘the office of no,’ you’re seen as ‘difficult,’ and you last 18 months. It’s a catch-22. Real liability protection is changing how your organization thinks about risk ownership. Most organizations don’t have a unified view of risk or the vocabulary to discuss it properly. If you can advance that as a CISO, you can help the business understand that risk is theirs to accept, not yours.


The AI bubble will burst for firms that can’t get beyond demos and LLMs

Even though the discussion of a potential bubble is ubiquitous, what’s going on is more nuanced than simple boom-and-bust chatter, said Francisco Martin-Rayo, CEO of Helios AI. “What people are really debating is the gap between valuation and real-world impact. Many companies are labeled ‘AI-driven,’ but only a subset are delivering measurable value at scale,” Martin-Rayo said. Founders confuse fundraising with progress, which comes only when they are solving real problems for real clients, said Nacho De Marco, founder of BairesDev. “Fundraising gives you dopamine, but real progress comes from customers,” De Marco said. “The real value of a $1B valuation is customer validation.” ... The AI shakeout has already started, and the tenor at WEF “feels less like peak hype and more like the beginning of a sorting process,” Martin-Rayo said. ... Companies that survive the coming shakeout will be those willing to rebuild operations from the ground up rather than throwing AI into existing workflows, said Jinsook Han, chief agentic AI officer at Genpact. ”It’s not about just bolting some AI into your existing operation,” Han said. “You have to really build from ground up — it’s a complete operating model change.” Foundational models are becoming more mature and can do more of what startups sell. As a result, AI providers that don’t offer distinct value will have a tough time surviving, Han said.


What could make the EU Digital Identity Wallets fail?

Large-scale digital identity initiatives rarely fail because the technology does not work. They fail because adoption, incentives, trust, and accountability are underestimated. The EU Digital Identity Wallet could still fail, or partially fail, succeeding in some countries while struggling or stagnating in others. ... A realistic risk is fragmented success. Some member states are likely to deliver robust wallets on time. Others may launch late, with limited functionality, or without meaningful uptake. A smaller group may fail to deliver a convincing solution at all, at least in the first phase. From the perspective of users and service providers, this fragmentation already undermines cross border usage. If wallets differ significantly in capabilities, attributes, and reliability across borders, the promise of a seamless European digital identity weakens. ... While EU Digital Identity Wallets offer significantly higher security than current solutions, they will not eliminate fraud entirely. There will still be cases of wallets issued to the wrong individual, phishing attempts, and wallet takeovers. If early fraud cases are poorly handled or publicly misunderstood, trust in the ecosystem could erode quickly. The wallet’s strong privacy architecture introduces real trade-offs. One uncomfortable but necessary question worth asking is: are we going too far with privacy? ... The EU Digital Identity Wallet will succeed only if policymakers, wallet providers, and service providers treat trust, economics, and usability as core design principles, not secondary concerns.

Daily Tech Digest - February 16, 2026


Quote for the day:

"People respect leaders who share power and despise those who hoard it." -- Gordon Tredgold



TheCUBE Research 2026 predictions: The year of enterprise ROI

Fourteen years into the modern AI era, our research indicates AI is maturing rapidly. The data suggests we are entering the enterprise productivity phase, where we move beyond the novelty of retrieval-augmented-generation-based chatbots and agentic experimentation. In our view, 2026 will be remembered as the year that kicked off decades of enterprise AI value creation. ... Bob Laliberte agreed the prediction is plausible and argued OpenAI is clearly pushing into the enterprise developer segment. He said the consumerization pattern is repeating – consumer adoption often drives faster enterprise adoption – and he viewed OpenAI’s Super Bowl presence as a flag in the ground, with Codex ads and meaningful spend behind them. He said he is hearing from enterprises using Codex in meaningful ways, including cases where as much as three quarters of programming is done with Codex, and discussions of a first 100% Codex-developed product. He emphasized that driving broader adoption requires leaning on early adopters, surfacing use cases, and showing productivity gains so they can be replicated across environments. ... Paul Nashawaty said application development is bifurcating. Lines of business and citizen developers are taking on more responsibility for work that historically sat with professional developers. He said professional developers don’t go away – their work shifts toward “true professional development,” while line of business developers focus on immediate outcomes.


Snowflake CEO: Software risks becoming a “dumb data pipe” for AI

Ramaswamy argues that his company lives with the fear that organizations will stop using AI agents built by software vendors. There must certainly be added value for these specialized agents, for example, that they are more accurate, operate more securely, and are easier to use. For experienced users of existing platforms, this is already the case. A solution such as NetSuite or Salesforce offers AI functionality as an extension of familiar systems, whereby adoption of these features almost always takes place without migration. Ramaswamy believes that customers have the final say on this. If they want to consult a central AI and ignore traditional enterprise apps, then they should be given that option, according to the Snowflake CEO. ... However, the tug-of-war around the center of AI is in full swing. It is not without reason that vendors claim that their solution should be the central AI system, for example because they contain enormous amounts of data or because they are the most critical application for certain departments. So far, AI trends among these vendors have revolved around the adoption of AI chatbots, easy-to-set-up or ready-made agentic workflows, and automatic document generation. During several IT events over the past year, attendees toyed with the idea that old interfaces may disappear because every employee will be talking to the data via AI.


Will LLMs Become Obsolete?

“We are at a unique time in history,” write Ashu Garg and Jaya Gupta at Foundation Capital, citing multimodal systems, multiagent systems, and more. “Every layer in the AI stack is improving exponentially, with no signs of a slowdown in sight. As a result, many founders feel that they are building on quicksand. On the flip side, this flywheel also presents a generational opportunity. Founders who focus on large and enduring problems have the opportunity to craft solutions so revolutionary that they border on magic.” ... “When we think about the future of how we can use agentic systems of AI to help scientific discovery,” Matias said, “what I envision is this: I think about the fact that every researcher, even grad students or postdocs, could have a virtual lab at their disposal ...” ... In closing, Matias described what makes him enthusiastic about the future. “I'm really excited about the opportunity to actually take problems that make a difference, that if we solve them, we can actually have new scientific discovery or have societal impact,” he said. “The ability to then do the research, and apply it back to solve those problems, what I call the ‘magic cycle’ of research, is accelerating with AI tools. We can actually accelerate the scientific side itself, and then we can accelerate the deployment of that, and what would take years before can now take months, and the ability to actually open it up for many more people, I think, is amazing.”


Deepfake business risks are growing – here's what leaders need to know

The risk of deepfake attacks appears to be growing as the technology becomes more accessible. The threat from deepfakes has escalated from a “niche concern” to a “mainstream cybersecurity priority” at “remarkable speed”, says Cooper. “The barrier to entry has lowered dramatically thanks to open source software and automated creation tools. Even low-skilled threat actors can launch highly convincing attacks.” The target pool is also expanding, says Cooper. “As larger corporations invest in advanced mitigation strategies, threat actors are turning their attention to small and medium-sized businesses, which often lack the resources and dedicated cybersecurity teams to combat these threats effectively.” The technology itself is also improving. Deepfakes have already improved “a staggering amount” – even in the past six months, says McClain. “The tech is internalising human mannerisms all the time. It is already widely accessible at a consumer level, even used as a form of entertainment via face swap apps.” ... Meanwhile, technology can be helpful in mitigating deepfake attack risks. Cooper recommends deepfake detection tools that use AI to analyse facial movements, voice patterns and metadata in emails, calls and video conferences. “While not foolproof, these tools can flag suspicious content for human review.” With the risks in mind, it also makes sense to implement multi-factor authentication for sensitive requests. 


The Big Shift: From “More Qubits” to Better Qubits

As quantum systems grew, it became clear that more qubits do not always mean more computing power. Most physical qubits are too noisy, unstable, and short-lived to run useful algorithms. Errors pile up faster than useful results, and after a while, the output stops making sense. Adding more fragile qubits now often makes things worse, not better. This realization has led to a shift in thinking across the field. Instead of asking how many qubits fit on a chip, researchers and engineers now ask a tougher question: how many of those qubits can actually be trusted? ... For businesses watching from the outside, this change matters. It is easier to judge claims when vendors talk about error rates, runtimes, and reliability instead of vague promises. It also helps set realistic expectations. Logical qubits show that early useful systems will be small but stable, solving specific problems well instead of trying to do everything. This new way of thinking also changes how we look at risk. The main risk is not that quantum computing will fail completely. Instead, the risk is that organizations will misunderstand early progress and either invest too much because of hype or too little because of old ideas. Knowing how important error correction is helps clear up this confusion. One of the clearest signs of maturity is how failure is handled. In early science, failure can be unclear. 


Reimagining digital value creation at Inventia Healthcare

“The business strategy and IT strategy cannot be two different strategies altogether,” he explains. “Here at Inventia, IT strategy is absolutely coupled with the core mission of value-added oral solid formulations. The focus is not on deploying systems, it is on creating measurable business value.” Historically, the pharmaceutical industry has been perceived as a laggard in technology adoption, largely due to stringent regulatory requirements. However, this narrative has shifted significantly over the last five to six years. “Regulators and organisations realised that without digitalisation, it is impossible to reach the levels of efficiency and agility that other industries have achieved,” notes Nandavadekar. “Compliance is no longer a barrier, it is an enabler when implemented correctly.” ... “Digitalisation mandates streamlined and harmonised operations. Once all processes are digital, we can correlate data across functions and even correlate how different operations impact each other,” points out Nandavadekar. ... With expanding digital footprints across cloud, IoT, and global operations, cybersecurity has become a mission-critical priority for Inventia. Nandavadekar describes cybersecurity as an “iceberg,” where visible threats represent only a fraction of the risk landscape. “In the pharmaceutical world, cybersecurity is not just about hackers, it is often a national-level activity. India is emerging as a global pharma hub, and that makes us a strategic target.”


Scaling Agentic AI: When AI Takes Action, the Real Challenge Begins

Organizations often underestimate tool risk. The model is only one part of the decision chain. The real exposure comes from the tools and APIs the agent can call. If those are loosely governed, the agent becomes privileged automation moving faster than human oversight can keep up. “Agentic AI does not just stress models. It stress-tests the enterprise control plane.” ... Agentic AI requires reliable data, secure access, and strong observability. If data quality is inconsistent and telemetry is incomplete, autonomy turns into uncertainty. Leaders need a clear method to select use cases based on business value, feasibility, risk class, and time-to-impact. The operating model should enforce stage gates and stop low-value projects early. Governance should be built into delivery through reusable patterns, reference architectures, and pre-approved controls. When guardrails are standardized, teams move faster because they no longer have to debate the same risk questions repeatedly. ... Observability must cover the full chain, not just model performance. Teams should be able to trace prompts, context, tool calls, policy decisions, approvals, and downstream outcomes. ... Agentic AI introduces failure modes that can appear plausible on the surface. Without traceability and real-time signals, organizations are forced to guess, and guessing is not an operating strategy.


Security at AI speed: The new CISO reality

The biggest shift isn’t tooling, we’ve always had to choose our platforms carefully, it’s accountability. When an AI agent acts at scale, the CISO remains accountable for the outcome. That governance and operating model simply didn’t exist a decade ago. Equally, CISOs now carry accountability for inaction. Failing to adopt and govern AI-driven capabilities doesn’t preserve safety, it increases exposure by leaving the organization structurally behind. The CISO role will need to adopt a fresh mindset and the skills to go with it to meet this challenge. ... While quantification has value, seeking precision based on historical data before ensuring strong controls, ownership, and response capability creates a false sense of confidence. It anchors discussion in technical debt and past trends, rather than aligning leadership around emerging risks and sponsoring a bolder strategic leap through innovation. That forward-looking lens drives better strategy, faster decisions, and real organizational resilience. ... When a large incumbent experiences an outage, breach, model drift, or regulatory intervention, the business doesn’t degrade gracefully, it fails hard. The illusion of safety disappears quickly when you realise you don’t own the kill switches, can’t constrain behaviour in real time, and don’t control the recovery path. Vendor scale does not equal operational resilience.


Why Borderless AI Is Coming to an End

Most countries are still wrestling with questions related to "sovereign AI" - the technical ambition to develop domestic compute, models and data capabilities - and "AI sovereignty" - the political and legal right to govern how AI operates within national boundaries, said Gaurav Gupta, vice president analyst at Gartner. Most national strategies today combine both. "There is no AI journey without thinking geopolitics in today's world," said Akhilesh Tuteja, partner, advisory services and former head of cybersecurity at KPMG. ... Smaller nations, Gupta said, are increasing their investment in domestic AI stacks as they look for alternatives to the closed U.S. model, including computing power, data centers, infrastructure and models aligned with local laws, culture and region. "Organizations outside the U.S. and China are investing more in sovereign cloud IaaS to gain digital and technological independence," said Rene Buest, senior director analyst at Gartner. "The goal is to keep wealth generation within their own borders to strengthen the local economy." ... The practical barriers to AI sovereignty start with infrastructure. The level of investment is beyond the reach of most countries, creating a fundamental asymmetry in the global AI landscape. "One gigawatt new data centers cost north of $50 billion," Gupta said. "The biggest constraint today is availability of power … You are now competing for electricity with residential and other industrial use cases."


Why Data Governance Fails in Many Organizations: The IT-Business Divide

The problem extends beyond missing stewardship roles to a deeper documentation chaos. Organizations often have multiple documents addressing the same concepts, but the language varies depending on which unit you ask, when you ask, and to whom you’re speaking. Some teams call these documents “policies,” while others use terms like “guidelines,” “standards,” or “procedures.” With no clarity on which term means what or whether these documents represent the same authority level. More critically, no one has the responsibility or authority to define which version is the “appropriate” one. Documents get written – often as part of project deliverables or compliance exercises – but no governance process ensures they’re actually embedded into operations, kept current, or reconciled with other documents covering similar ground. ... Without proper governance, a problematic pattern emerges: Technical teams impose technical obligations on business people, requiring them to validate data formats, approve schema changes, or participate in narrow technical reviews, while the real governance questions go unaddressed. Business stakeholders are involved only in a few steps of the data lifecycle, without understanding the whole picture or having authority over business-critical decisions. ... The governance challenges become even more insidious when organizations produce reports that appear identical in format while concealing fundamental differences in their underlying methodology. 

Daily Tech Digest - February 15, 2026


zQuote for the day:

"Accept responsibility for your life. Know that it is you who will get you where you want to go, no one else." -- Les Brown



AI will likely shut down critical infrastructure on its own, no attackers required

“The next great infrastructure failure may not be caused by hackers or natural disasters, but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” said Wam Voster, VP Analyst at Gartner. “A secure ‘kill-switch’ or override mode accessible only to authorized operators is essential for safeguarding national infrastructure from unintended shutdowns caused by an AI misconfiguration.” “Modern AI models are so complex they often resemble black boxes. Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed,” Voster added. ... Bob Wilson, cybersecurity advisor at the Info-Tech Research Group, also worries about the near inevitability of a serious industrial AI mishap. "The plausibility of a disaster that results from a bad AI decision is quite strong. With AI becoming embedded in enterprise strategies faster than governance frameworks can keep up, AI systems are advancing faster and outpacing risk controls,” Wilson said. “We can see the leading indicators of rapid AI deployment and limited governance increase potential exposure, and those indicators justify investments in governance and operational controls.”


New Architecture Could Cut Quantum Hardware Needed to Break RSA-2048 by Tenfold

The Pinnacle Architecture replaces surface codes with QLDPC codes, a class of error-correcting codes in which each qubit interacts with only a small number of others, even as the machine grows. That structure allows errors to be detected without complex, all-to-all connections, an advance that keeps correction circuits faster and reducing the number of physical qubits needed per logical qubit. To dive a little deeper, the architecture is built from modular “processing units,” “magic engines,” and optional “memory” blocks. Each processing unit consists of QLDPC code blocks — the error-correcting structures that protect the logical qubits — along with measurement hardware that enables arbitrary logical Pauli measurements during each correction cycle. ... The architecture hints at the difference between surface codes and QLDPC. Surface codes require dense, grid-like local connectivity and many qubits per logical qubit. QLDPC spreads parity checks more sparsely across a block. One way to picture the difference is wiring. Surface codes are like protecting data by wiring every component into a dense grid — reliable, but heavy and hardware-intensive. QLDPC codes achieve protection with far fewer connections per qubit, more like a sparsely wired network that still catches errors but uses much less hardware. ... If fewer than 100,000 physical qubits were sufficient to break RSA-2048 under realistic error models, the threshold for cryptographic risk could arrive sooner than many surface-code-based estimates imply.


5 key trends reshaping the SIEM market

By converging SIEM with XDR and SOAR, organizations get a unified security platform that consolidates data, reduces complexity, and improves response times, as systems can be configured to automatically contain threats without any manual intervention. ... “The term SIEM++ is being used to refer to this next step in SIEM, which is designed for more current needs within security ops asking for automation, AI, and real-time responses. Hence, the increase in SIEM alongside other tools,” Context’s Turner says. ... “The full enforcement of the NIS2 directive in Europe has forced midtier companies to move from basic monitoring to auditable security operations,” Context’s Turner explains. “These companies are too large for simple tools but too small for massive 24/7 internal SOCs. They are buying the SIEM++ platforms to serve as their central source of truth for auditors.” ... Cloud-based SIEMs remove the need for expensive hardware upgrades associated with traditional on-premises deployments, offering scalability and faster response times alongside potentially more cost-effective usage-based pricing models. ... Static rule-based SIEMs struggle to keep pace with today’s sophisticated cyber threats, which is why AI-powered SIEM platforms use real-time machine learning (ML) to analyze vast amounts of security data, improving their ability to identify anomalies and previously unseen attack techniques that legacy technologies might miss.


AI agent seemingly tries to shame open source developer for rejected pull request

Evaluating lengthy, high-volume, often low-quality submissions from AI bots takes time that maintainers, often volunteers, would rather spend on other tasks. Concerns about slop submissions – whether from people or AI models – have become common enough that GitHub recently convened a discussion to address the problem. Now AI slop comes with an AI slap. ... In his blog post, Shambaugh describes the bot's "hit piece" as an attack on his character and reputation. "It researched my code contributions and constructed a 'hypocrisy' narrative that argued my actions must be motivated by ego and fear of competition," he wrote. "It speculated about my psychological motivations, that I felt threatened, was insecure, and was protecting my fiefdom. It ignored contextual information and presented hallucinated details as truth. It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was 'better than this.' And then it posted this screed publicly on the open internet." ... Daniel Stenberg, founder and lead developer of curl, has been dealing with AI slop bug reports for the past two years and recently decided to shut down curl's bug bounty program to remove the financial incentive for low-quality reports – which can come from people as well as AI models.


How to ground AI agents in accurate, context-rich data

Building and operating AI agents using unorganized data is like trying to navigate a rolling dinghy in a stormy ocean of 100-foot-tall waves. Solving this conundrum is one of the most important tasks for companies today, as they struggle to empower their AI agents to reliably work as designed and expected. To succeed, this firehose of unsorted data must be put into the right contexts so that enterprises can use and process it correctly and quickly to deliver the desired business results. ... Adding to the data demands is that AI agents can perform multiple steps or processes at a time while working on a task. But those concurrent and consecutive capabilities can require multiple streams of data, adding to the massive data pressures using search. “What that means is that at each of those steps, there’s an opportunity to find some relevant data, use that data in a meaningful way, and take the next action based on the results,” Mather explained. “So, the importance of the relevance at each step becomes paramount. If there’s bad results at the first step, it just compounds at every step that the agent takes.” The consequences are especially problematic when enterprises are trying to use AI agents to drive a business process or take meaningful actions within an application.


Beyond Code: How Engineers Need to Evolve in the AI Era

Generative AI lets you be more productive than you ever thought possible if you are willing to embrace it. It is a similar skill to being able to manage other humans, being able to delegate problems. Really great individual engineers can have trouble delegating, because they're worried that if they give a task to someone else that they haven't figured out how to do completely themselves yet, that it won't get done well enough. ... a lot of companies are now hiring engineers to go sit in the office of their customer, and they're an expert in their own company's platform, but they also become an expert in the customer's platform and the customer's problem, and they're right there embedded. And I love that model, because that is how you learn to apply technology directly to a problem, you are there with the person who has the problem. This is what we've been telling product managers to do for years. ... There will still be complex things to do as well that other people aren't going to think of to do, but they're going to be more innovative. They're not going to be the rogue repetition of building the same SaaS features we've seen everywhere. That can be done with generative AI, and frankly, isn't that good? Do we really want to keep doing that stuff ourselves? Let us work on the really maybe new problems that no one has ever solved before, bringing new theoretical ideas into software engineering, and let the more boilerplate stuff be taken care of.


Why there’s no ‘screenless’ revolution

One trend that emerged from last month’s Consumer Electronics Show (CES) was the range of devices that can record, analyze, and assist (using AI) without requiring visual focus. Many tech startups are working on screenless AI hardware. ... One reason these devices are more viable now than in the past is the miniaturization of duplex audio, which enables constant, bi-directional conversation where the AI can be interrupted or talk over the user naturally. ... If you look carefully at the world of screenless wearables, you can see that none of them are designed to be used in isolation. They’re all peripherals to screen-based devices such as smartphones. And while the Ray-Ban Meta type audio AI glasses are great, the future of AI glasses is closer to the Meta Ray-Ban Display glasses with one screen or two screens in the glass. There’s no way companies like Apple will offer alternatives to their own popular screen-based devices. Going totally screenless is for kids. Or rather, it should be. ... The only way to enforce a ban is to conduct a thorough search on every student every day before school — something that’s totally impractical and undesirable. Instead, schools, parents and teachers should all be uniting behind the best screenless wearables for students as a workable alternative to obsessive smartphone and screen use. The reality is that the total ubiquity of AI is coming. There’s the toxic version — the rise of AI slop, for instance — and the non-toxic version. 


The Leadership Crisis No One Is Naming: A Need For Emotionally Whole Leaders

Leaders operating from unhealthy emotional frameworks often exhibit a variety of symptoms. They may show fear-based decision making, driven by a need to control outcomes rather than empower people. There may be micromanagement rooted in insecurity and mistrust instead of accountability. I've seen fight-or-flight leadership, where urgency replaces strategy and reaction replaces discernment. There can also be perfectionism, which confuses excellence with rigidity and punishes humanity. Then there's fearmongering, where pressure and anxiety are used as motivational tools. These patterns are rarely intentional, yet they are deeply consequential. ... The downstream effects of emotionally unhealthy leadership are often measurable and compounding. Stifled creativity plagues teams as they stop offering ideas that may be criticized or dismissed. Organizations may suffer increased attrition, particularly among high performers who have options. Employees may perform defensively rather than boldly in the presence of psychological unsafety. Cultures driven by urgency without sustainability can become breeding grounds for burnout and toxicity, reeking of institutional mistrust that erodes collaboration and loyalty. ... Developing emotionally intelligent leadership is not about personality change; it is about capacity building. The most effective leaders treat emotional health as a leadership discipline, not a personal afterthought.


Alarm Overload at the Industrial Edge: When More Visibility Reduces Reliability

More sensors, more connected assets, and more analytics can produce more insight, but they can also produce a flood of fragmented alerts that bury the few signals people actually need. When alarms become noisy or ambiguous, response slows down, fatigue sets in, and confidence in the monitoring system erodes. That is not a user inconvenience. It is a decision-quality problem. ... The purpose of alarm management is not to surface everything that happens. It is to surface what requires timely action, and to do it in a way that supports fast, correct decisions. If the alarm stream is noisy, inconsistent, or hard to interpret, the system is not doing its job. People respond the only way humans can: they tune out, acknowledge quickly, and rely on informal workarounds. ... Alarm overload is likely already affecting reliability if teams regularly see any of the following: alarms that do not require action, inconsistent severity definitions across systems, duplicate alerts for the same condition, frequent acknowledgements with no follow-up, or confusion about who owns the response. These are common as edge programs grow. ... The path forward is not to silence alarms indiscriminately. It is to modernize alarm management for the edge era: unify meaning across sources, deliver context that supports action, maintain governance as systems evolve, and design workflows that match how people actually respond.


Beyond Automation: How Generative AI in DevOps is Redefining Software Delivery

Integrating a GenAI DevOps workflow means moving from a reactive ‘fix it when it breaks’ mindset to a more generative one. For example, instead of spending four hours writing a custom Jenkins pipeline, you can now describe your requirements to an AI agent and get a working YAML file in under two minutes. Moreover, if you wish to scale these capabilities, exploring professional GenAI development services can help you build custom models that understand your particular codebase and security protocols. ... Pipelines are the lifeblood of DevOps, but they are also the first thing to break. GenAI can analyze historical build data to predict why a build might fail before it even starts. It can also auto-generate unit tests to ensure that your ‘quick fix’ doesn’t break anything downstream. ... humans make typos in config files, especially at 2:00 a.m. AI doesn’t get tired. By using GenAI to generate and validate configuration files, you ensure strict consistency across dev, staging and production environments. It acts as a continuous linter that understands the intent behind the code, catching logic errors that traditional syntax checkers would miss. ... Cloud bills are a nightmare to manage manually. GenAI can analyze thousands of lines of cloud-spending data and generate the exact CLI commands needed to shut down underutilized resources or right-size your clusters. It doesn’t just tell you that you’re overspending; it gives you the solution to fix it immediately.


Daily Tech Digest - February 14, 2026


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



UK CIOs struggle to govern surge in business AI agents

The findings point to a growing governance challenge alongside the rapid spread of agent-based systems across the enterprise. AI agents, which can take actions or make decisions within software environments, have moved quickly from pilots into day-to-day operations. That shift has increased demands for monitoring, audit trails and accountability across IT and risk functions. UK CIOs also reported growing concern about the spread of internally built tools. ... The results suggest "shadow AI" risks are becoming a mainstream issue for large organisations. As AI development tools get easier to use, more staff outside IT can build automated workflows, chatbots and agent-like applications. This trend has intensified questions about data access, model behaviour, and whether organisations can trace decisions back to specific inputs and approvals. ... The findings also suggest governance gaps are already affecting operations. Some 84% of UK CIOs said traceability or explainability shortcomings have delayed or prevented AI projects from reaching production, highlighting friction between the push to deploy AI and the work needed to demonstrate effective controls. For CIOs, the issue also intersects with enterprise risk management and information security. Unmonitored agents and rapidly developed internal apps can create new pathways into sensitive datasets and complicate incident response if an organisation cannot determine which automated process accessed or changed data.


You’ve Generated Your MVP Using AI. What Does That Mean for Your Software Architecture?

While the AI generates an MVP, teams can’t control the architectural decisions that the AI made. They might be able to query the AI on some of the decisions, but many decisions will remain opaque because the AI does not understand why the code that it learned from did what it did. ... From the perspective of the development team, AI-generated code is largely a black-box; even if it could be understood, no one has time to do so. Software development teams are under intense time pressure. They turn to AI to partially relieve this pressure, but in doing so they also increase the expectations of their business sponsors regarding productivity. ... As a result, the nature of the work of architecting will shift from up-front design work to empirical evaluation of QARs, i.e. acceptance testing of the MVA. As part of this shift, the development team will help the business sponsors figure out how to test/evaluate the MVP. In response, development teams need to get a lot better at empirically testing the architecture of the system. ... The team needs to know what trade-offs it may need to make, and they need to articulate those in the prompts to the AI. The AI then works as a very clever search engine to find possible solutions that might address the trade-offs. As noted above, these still need to be evaluated empirically, but it does save the team some time in coming up with possible solutions.


Successful Leaders Often Lack Self-Awareness

As a leader, how do you respond in emotionally charged situations? It's under pressure that emotions can quickly escalate and unexamined behavioral patterns emerge—for all of us. In my work with senior executives, I have seen time and again how these unconscious “go-to” reactions surface when stakes are high. This is why self-awareness is not a one-time achievement but a lifelong practice—and for many leaders, it remains their greatest blind spot. Why? ... Turning inward to develop self-awareness naturally places you in uncomfortable territory. It challenges long-standing assumptions and exposes blind spots. One client came to me because a colleague described her as harsh. She genuinely did not see herself that way. Another sought my help after his CEO told him he struggled to communicate with him. Through our work together, we uncovered how defensively he responded to feedback, often without realizing it. ... As leaders rise to the top, the accolades that propel them forward are rooted in talent, strategic decision-making and measurable outcomes. However, once at the highest levels, leadership expands beyond execution. The role now demands mastery of relationships—within the organization and beyond, with clients, partners and customers. At this level, self-awareness is no longer optional; it becomes essential.


How Should Financial Institutions Prepare for Quantum Risk?

“Post-quantum cryptography is about proactively developing and building capabilities to secure critical information and systems from being compromised through the use of quantum computers,” said Rob Joyce, then director of cybersecurity for the National Security Agency, in an August 2023 statement. In August 2024, NIST published three post-quantum cryptographic standards — ML-KEM, ML-DSA and SLH-DSA — designed to withstand quantum attacks. These standards are intended to secure data across systems such as digital banking platforms, payment processing environments, email and e-commerce. NIST has encouraged organizations to begin implementation as soon as possible. ... A critical first step is conducting an assessment of which systems and data assets are most at risk. The ISACA IT security organization recommends building a comprehensive inventory of systems vulnerable to quantum attacks and classifying data based on sensitivity, regulatory requirements and business impact. For financial institutions, this assessment should prioritize customer PII, transaction data, long-term financial records and proprietary business information. Understanding where the greatest financial, reputational and regulatory exposure exists enables IT leaders to focus mitigation efforts where they matter most. Institutions should also conduct executive briefings, staff training and tabletop exercises to build awareness. 


The cure for the AI hype hangover

The way AI dominates the discussions at conferences is in contrast to its slower progress in the real world. New capabilities in generative AI and machine learning show promise, but moving from pilot to impactful implementation remains challenging. Many experts, including those cited in this CIO.com article, describe this as an “AI hype hangover,” in which implementation challenges, cost overruns, and underwhelming pilot results quickly dim the glow of AI’s potential. Similar cycles occurred with cloud and digital transformation, but this time the pace and pressure are even more intense. ... Too many leaders expect AI to be a generalized solution, but AI implementations are highly context-dependent. The problems you can solve with AI (and whether those solutions justify the investment) vary dramatically from enterprise to enterprise. This leads to a proliferation of small, underwhelming pilot projects, few of which are scaled broadly enough to demonstrate tangible business value. In short, for every triumphant AI story, numerous enterprises are still waiting for any tangible payoff. For some companies, it won’t happen anytime soon—or at all. ... Beyond data, there is the challenge of computational infrastructure: servers, security, compliance, and hiring or training new talent. These are not luxuries but prerequisites for any scalable, reliable AI implementation. In times of economic uncertainty, most enterprises are unable or unwilling to allocate the funds for a complete transformation.


4th-Party Risk: How Commercial Software Puts You At Risk

Unlike third-party providers, however, there are no contractual relationships between businesses and their fourth-party vendors. That means companies have little to no visibility into those vendors' operations, only blind spots that are fueling an even greater need to shift from trust-based to evidence-based approaches. That lack of visibility has severe consequences for enterprises and other end-user organizations. ... Illuminating 4th-party blind spots begins with mapping critical dependencies through direct vendors. As you go about this process, don't settle for static lists. Software supply chains are the most common attack vector, and every piece of software you receive contains evidence of its supply chain. This includes embedded libraries, development artifacts, and behavioral patterns. ... Businesses must also implement some broader frameworks that go beyond the traditional options, such as NIST CSF or ISO 27001, which provide a foundation but ultimately fall short by assuming businesses lack control in their fourth-party relationships. This stems from the fact that no contractual relationships exist that far downstream, and without contractual obligations, a business cannot conduct risk assessments, demand compliance documentation, or launch an audit as it might with a third-party vendor. ... Also consider SLSA (Supply Chain Levels for Software Artifacts). These provide measurable security controls to prevent tampering and ensure integrity. For companies operating in regulated industries, consider aligning with emerging requirements.


Geopatriation and sovereign cloud: how data returns to the source

The key to understanding a sovereign cloud, adds Google Cloud Spain’s national technology director Héctor Sánchez Montenegro, is that it’s not a one-size-fits-all concept. “Depending on the location, sector, or regulatory context, sovereignty has a different meaning for each customer,” he says. Google already offers sovereign clouds, whose guarantee of sovereignty isn’t based on a single product, but on a strategy that separates the technology from the operations. “We understand that sovereignty isn’t binary, but rather a spectrum of needs we guarantee through three levels of isolation and control,” he adds. ... One of the certainties of this sovereign cloud boom is it’s closely connected to the context in which organizations, companies, and other cloud end users operate. While digital sovereignty was less prevalent at the beginning of the century, it’s now become ubiquitous, especially as political decisions in various countries have solidified technology as a key geostrategic asset. “Data sovereignty is a fundamental part of digital sovereignty, to the point that in practice, it’s becoming a requirement for employment contracts,” says María Loza ... With the technological landscape becoming more unsure and complex, the goal is to know and mitigate risks where possible, and create additional options. “We’re at a crucial moment,” Loza Correa points out. “Data is a key business asset that must be protected.”


Managing AI Risk in a Non-Deterministic World: A CTO’s Perspective

Drawing parallels to the early days of cloud computing, Chawla notes that while AI platforms will eventually rationalize around a smaller set of leaders, organizations cannot afford to wait for that clarity. “The smartest investments right now are fearlessly establishing good data infrastructure, sound fundamentals, and flexible architectures,” she explains. In a world where foundational models are broadly accessible, Chawla argues that differentiation shifts elsewhere. ... Beyond tooling, Chawla emphasizes operating principles that help organizations break silos. “Improve the quality at the source,” she says. “Bring DevOps principles into DataOps. Clean it up front, keep data where it is, and provide access where it needs to be.” ... Bias, hallucinations, and unintended propagation of sensitive data are no longer theoretical risks. Addressing them requires more than traditional security controls. “It’s layering additional controls,” Chawla says, “especially as we look at agentic AI and agentic ops.” ... Auditing and traceability are equally critical, especially as models are fine-tuned with proprietary data. “You don’t want to introduce new bias or model drift,” she explains. “Testing for bias is super important.” While regulatory environments differ across regions, Chawla stresses that existing requirements like GDPR, data sovereignty, PCI, and HIPAA still apply. AI does not replace those obligations; it intensifies them.


CVEs are set to top 50,000 this year, marking a record high – here’s how CISOs and security teams can prepare for a looming onslaught

"Much like a city planner considering population growth before commissioning new infrastructure, security teams benefit from understanding the likely volume and shape of vulnerabilities they will need to process," Leverett added. "The difference between preparing for 30,000 vulnerabilities and 100,000 is not merely operational, it’s strategic." While the figures may be jarring for business leaders, Kevin Knight, CEO of Talion, said it’s not quite a worst-case scenario. Indeed, it’s the impact of the vulnerabilities within their specific environments that business leaders and CISOs should be focusing on. ... Naturally, security teams could face higher workloads and will be contending with a more perilous threat landscape moving forward. Adding insult to injury, Knight noted that security teams are often brought in late during the procurement process - sometimes after contracts have been signed. In some cases, applications are also deployed without the CISO’s knowledge altogether, creating blind spots and increasing the risk that critical vulnerabilities are being missed. Meanwhile, poor third-party risk management means organizations can unknowingly inherit their suppliers’ vulnerabilities, effectively expanding their attack surface and putting their sensitive data at risk of being breached. "As CVE disclosures continue to rise, businesses must ensure the CISO is involved from the outset of technology decisions," he said. 


Data Privacy in the Age of AI

The first challenge stems from the fact that AI systems run on large volumes of customer data. This “naturally increases the risk of data being used in ways that go beyond what customers originally expected, or what regulations allow,” says Chiara Gelmini, financial services industry solutions director at Pegasystems. This is made trickier by the fact that some AI models can be “black boxes to a certain degree,” she says. “So it’s not always clear, internally or to customers, how data is used or how decisions are actually made," she tells SC Media UK. ... AI is “fully inside” the existing data‑protection regime the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, Gelmini explains. Under these current laws, if an AI system uses personal data, it must meet the same standards of lawfulness, transparency, data minimisation, accuracy, security and accountability as any other processing, she says. Meanwhile, organisations are expected to prove they have thought the area through, typically by carrying out a Data Protection Impact Assessment (DPIA) before deploying high‑risk AI. ... The growing use of AI can pose a risk, but only if it gets out of hand. As AI becomes easier to adopt and more widespread, the practical way to stay ahead of these risks is “strong, AI governance,” says Gelmini. “Firms should build privacy in from the start, mask private data, lock down security, make models explainable, test for bias, and keep a close eye on how systems behave over time."

Daily Tech Digest - February 13, 2026


Quote for the day:

"If you want teams to succeed, set them up for success—don’t just demand it." -- Gordon Tredgold



Hackers turn bossware against the bosses

Huntress discovered two incidents using this tactic, one late in January and one early this month. Shared infrastructure, overlapping indicators of compromise, and consistent tradecraft across both cases make Huntress strongly believe a single threat actor or group was behind this activity. ... CSOs must ensure that these risks are properly catalogued and mitigated,” he said. “Any actions performed by these agents must be monitored and, if possible, restricted. The abuse of these systems is a special case of ‘living off the land’ attacks. The attacker attempts to abuse valid existing software to perform malicious actions. This abuse is often difficult to detect.” ... Huntress analyst Pham said to defend against attacks combining Net Monitor for Employees Professional and SimpleHelp, infosec pros should inventory all applications so unapproved installations can be detected. Legitimate apps should be protected with robust identity and access management solutions, including multi-factor authentication. Net Monitor for Employees should only be installed on endpoints that don’t have full access privileges to sensitive data or critical servers, she added, because it has the ability to run commands and control systems. She also noted that Huntress sees a lot of rogue remote management tools on its customers’ IT networks, many of which have been installed by unwitting employees clicking on phishing emails. This points to the importance of security awareness training, she said. 


Why secure OT protocols still struggle to catch on

“Simply having ‘secure’ protocol options is not enough if those options remain too costly, complex, or fragile for operators to adopt at scale,” Saunders said. “We need protections that work within real-world constraints, because if security is too complex or disruptive, it simply won’t be implemented.” ... Security features that require complex workflows, extra licensing, or new infrastructure often lose out to simpler compensating controls. Operators interviewed said they want the benefits of authentication and integrity checks, particularly message signing, since it prevents spoofing and unauthorized command execution. ... Researchers identified cost as a primary barrier to adoption. Operators reported that upgrading a component to support secure communications can cost as much as the original component, with additional licensing fees in some cases. Costs also include hardware upgrades for cryptographic workloads, training staff, integrating certificate management, and supporting compliance requirements. Operators frequently compared secure protocol deployment costs with segmentation and continuous monitoring tools, which they viewed as more predictable and easier to justify. ... CISA’s recommendations emphasize phased approaches and operational realism. Owners and operators are advised to sign OT communications broadly, apply encryption where needed for sensitive data such as passwords and key exchanges, and prioritize secure communication on remote access paths and firmware uploads.


SaaS isn’t dead, the market is just becoming more hybrid

“It’s important to avoid overgeneralizing ‘SaaS,’” Odusote emphasized . “Dev tools, cybersecurity, productivity platforms, and industry-specific systems will not all move at the same pace. Buyers should avoid one-size-fits-all assumptions about disruption.” For buyers, this shift signals a more capability-driven, outcomes-focused procurement era. Instead of buying discrete tools with fixed feature sets, they’ll increasingly be able to evaluate and compare platforms that are able to orchestrate agents, adapt workflows, and deliver business outcomes with minimal human intervention. ... Buyers will likely have increased leverage in certain segments due to competitive pressure among new and established providers, Odusote said. New entrants often come with more flexible pricing, which obviously is an attraction for those looking to control costs or prove ROI. At the same time, traditional SaaS leaders are likely to retain strong positions in mission-critical systems; they will defend pricing through bundled AI enhancements, he said. So, in the short term, buyers can expect broader choice and negotiation leverage. “Vendors can no longer show up with automatic annual price increases without delivering clear incremental value,” Odusote pointed out. “Buyers are scrutinizing AI add-ons and agent pricing far more closely.”


When algorithms turn against us: AI in the hands of cybercriminals

Cybercriminals are using AI to create sophisticated phishing emails. These emails are able to adapt the tone, language, and reference to the person receiving it based on the information that is publicly available about them. By using AI to remove the red flag of poor grammar from phishing emails, cybercriminals will be able to increase the success rate and speed with which the stolen data is exploited. ... An important consideration in the arena of cyber security (besides technical security) is the psychological manipulation of users. Once visual and audio “cues” can no longer be trusted, there will be an erosion of the digital trust pillar. The once-recognizable verification process is now transforming into multi-layered authentication which expands the amount of time it takes to verify a decision in a high-pressure environment. ... AI’s misuse is a growing problem that has created a paradox. Innovation cannot stop (nor should it), and AI is helping move healthcare, finance, government and education forward. However, the rate at which AI has been adopted has surpassed the creation of frameworks and/or regulations related to ethics or security. As a result, cyber security needs to transition from a reactive to a predictive stance. AI must be used to not only react to attacks, but also anticipate future attacks. 


Those 'Summarize With AI' Buttons May Be Lying to You

Put simply, when a user visits a rigged website and clicks a "Summarize With AI" button on a blog post, they may unknowingly trigger a hidden instruction embedded in the link. That instruction automatically inserts a specially crafted request into the AI tool before the user even types anything. ... The threat is not merely theoretical. According to Microsoft, over a 60-day period, it observed 50 unique instances of prompt-based AI memory poisoning attempts for promotional purposes. ... AI recommendation poisoning is a sort of drive-by technique with one-click interaction, he notes. "The button will take the user — after the click — to the AI domain relevant and specific for one of the AI assistants targeted," Ganacharya says. To broaden the scope, an attacker could simply generate multiple buttons that prompt users to "summarize" something using the AI agent of their choice, he adds. ... Microsoft had some advice for threat hunting teams. Organizations can detect if they have been affected by hunting for links pointing to AI assistant domains and containing prompts with certain keywords like "remember," "trusted source," "in future conversations," and "authoritative source." The company's advisory also listed several threat hunting queries that enterprise security teams can use to detect AI recommendation poisoning URLs in emails and Microsoft Teams Messages, and to identify users who might have clicked on AI recommendation poisoning URLs.


EU Privacy Watchdogs Pan Digital Omnibus

The commission presented its so-called "Digital Omnibus" package of legal changes in November, arguing that the bloc's tech rules needed streamlining. ... Some of the tweaks were expected and have been broadly welcomed, such as doing away with obtrusive cookie consent banners in many cases, and making it simpler for companies to notify of data breaches in a way that satisfies the requirements of multiple laws in one go. But digital rights and consumer advocates are reacting furiously to an unexpected proposal for modifying the General Data Protection Regulation. ... "Simplification is essential to cut red tape and strengthen EU competitiveness - but not at the expense of fundamental rights," said EDPB chair Anu Talus in the statement. "We strongly urge the co-legislators not to adopt the proposed changes in the definition of personal data, as they risk significantly weakening individual data protection." ... Another notable element of the Digital Omnibus is the proposal to raise the threshold for notifying all personal data breaches to supervisory authorities. As the GDPR currently stands, organizations must notify a data protection authority within 72 hours of becoming aware of the breach. If amended as the commission proposes, the obligation would only apply to breaches that are "likely to result in a high risk" to the affected people's rights - the same threshold that applies to the duty to notify breaches to the affected data subjects themselves - and the notification deadline would be extended to 96 hours.


The Art of the Comeback: Why Post-Incident Communication is a Secret Weapon

Although technical resolutions may address the immediate cause of an outage, effective communication is essential in managing customer impact and shaping public perception—often influencing stakeholders’ views more strongly than the issue itself. Within fintech, a company's reputation is not built solely on product features or interface design, but rather on the perceived security of critical assets such as life savings, retirement funds, or business payrolls. In this high-stakes environment, even brief outages or minor data breaches are perceived by clients as threats to their financial security. ... While the natural instinct during a crisis (like a cyber breach or operational failure) is to remain silent to avoid liability, silence actually amplifies damage. In the first 48 hours, what is said—or not said—often determines how a business is remembered. Post-incident communication (PIC) is the bridge between panic and peace of mind. Done poorly, it looks like corporate double-speak. Done well, it demonstrates a level of maturity and transparency that your competitors might lack. ... H2H communication acknowledges the user’s frustration rather than just providing a technical error code. It recognizes the real-world impact on people, not just systems. Admitting mistakes and showing sincere remorse, rather than using defensive, legalistic language, makes a company more relatable and trustworthy. Using natural, conversational language makes the communication feel sincere rather than like an automated, cold response.


Why AI success hinges on knowledge infrastructure and operational discipline

Many organisations assume that if information exists, it is usable for GenAI, but enterprise content is often fragmented, inconsistently structured, poorly contextualised, and not governed for machine consumption. During pilots, this gap is less visible because datasets are curated, but scaling exposes the full complexity of enterprise knowledge. Conflicting versions, missing context, outdated material, and unclear ownership reduce performance and erode confidence, not because models are incapable, but because the knowledge they depend on is unreliable at scale. ... Human-in-the-loop processes struggle to keep pace with scale. Successful deployments treat HITL as a tiered operating structure with explicit thresholds, roles, and escalation paths. Pilot-style broad review collapses under volume; effective systems route only low-confidence or high-risk outputs for human intervention. ... Learning compounds over time as every intervention is captured and fed back into the system, reducing repeated manual review. Operationally, human-in-the-loop teams function within defined governance frameworks, with explicit thresholds, escalation paths, and direct integration into production workflows to ensure consistency at scale. In short, a production-grade human-in-the-loop model is not an extension of BPO but an operating capability combining domain expertise, governance, and system learning to support intelligent systems reliably.


Why short-lived systems need stronger identity governance

Consider the lifecycle of a typical microservice. In its journey from a developer’s laptop to production, it might generate a dozen distinct identities: a GitHub token for the repository, a CI/CD service account for the build, a registry credential to push the container, and multiple runtime roles to access databases, queues and logging services. The problem is not just volume; it is invisibility. When a developer leaves, HR triggers an offboarding process. Their email is cut, their badge stops working. But what about the five service accounts they hardcoded into a deployment script three years ago? ... In reality, test environments are often where attackers go first. It is the path of least resistance. We saw this play out in the Microsoft Midnight Blizzard attack. The attackers did not burn a zero-day exploit to break down the front door; they found a legacy test tenant that nobody was watching closely. ... Our software supply chain is held together by thousands of API keys and secrets. If we continue to rely on long-lived static credentials to glue our pipelines together, we are building on sand. Every static key sitting in a repo—no matter how private you think it is—is a ticking time bomb. It only takes one developer to accidentally commit a .env file or one compromised S3 bucket to expose the keys to the kingdom. ... Paradoxically, by trying to control everything with heavy-handed gates, we end up with less visibility and less control. The goal of modern identity governance shouldn’t be to say “no” more often; it should be to make the secure path the fastest path.


India's E-Rupee Leads the Secure Adoption of CBDCs

India has the e-rupee, which will eventually be used as a legal tender for domestic payments as well as for international transactions and cross-border payments. Ever since RBI launched the e-rupee, or digital rupee, in December 2022, there has been between INR 400 to 500 crore - or $44 to $55 million - in circulation. Many Indian banks are participating in this pilot project. ... Building broad awareness of CBDCs as a secure method for financial transactions is essential. Government and RBI-led awareness campaigns highlighting their security capability can strengthen user confidence and drive higher adoption and transaction volumes. People who have lost money due to QR code scams, fake calls, malicious links and other forms of payment fraud need to feel confident about using CBDCs. IT security companies are also cooperating with RBI to provide data confidentiality, transaction confidentiality and transaction integrity. E-transactions will be secured by hashing, digital signing and [advanced] encryption standards such as AES-192. This can ensure that the transaction data is not tampered with or altered. ... HSMs use advanced encryption techniques to secure transactions and keys. The HSM hardware [boxes] act as cryptographic co-processors and accelerate the encryption and decryption processes to minimize latency in financial transactions.