Daily Tech Digest - December 20, 2025


Quote for the day:

"The bad news is time flies, The good news is you're the pilot." -- Elizabeth McCormick



Europe’s AI Challenge Runs Deeper Than Regulation

European firms may welcome a lessening of their regulatory burden. But Europe's problem isn’t merely regulatory drag; it's the structural gulf between what modern AI development requires and what Europe currently has the capacity to deliver. The Omnibus, helpful as it may be for legal alignment, cannot close those gaps. ... Europe has only a handful of companies, such as Aleph Alpha and Mistral, developing large-scale generative AI models domestically. Even these firms face steep structural disadvantages. A European Commission analysis has warned that such companies "require massive investment to avoid losing the race to U.S. competitors," while acknowledging that European capital markets "do not meet this need, forcing European firms to seek funding abroad." The result is a persistent leakage of ownership, control and strategic direction at precisely the moment scale matters most. ... This capital asymmetry produces powerful second-order effects. It determines who can absorb the high costs of large-scale model training, sustain loss-leading platform expansion and iterate continuously at the frontier of AI development. Over time, these dynamics create self-reinforcing structural advantages for capital-rich ecosystems. Those advantages compound and remain largely beyond the corrective reach of regulation. These gaps are not regulatory problems. 


How to Pivot When Digital Ambitions Crash into Operational Realities

Transformation usually begins with ambition. Leaders imagine a future where the bank operates more efficiently and interacts with customers the way modern platforms do. But the more I speak with people running these programs, the more I see that banks are trying to build the future without fully understanding the present. They push forward with new digital products, new interfaces, new journeys, while the actual work happening across branches, operations centers and back offices remains something of a mystery, even to the teams responsible for changing it. ... what’s less widely discussed is that banks do not fail because change is impossible; they fail because too much of the real work remains invisible. Many institutions still rely on assumptions about how processes run, assumptions based on documentation that no longer reflects reality. And when a transformation is built on assumptions, the project begins to drift. What banks need is an honest picture of their operational baseline. Once leaders see how their organization works today (not how it was designed years ago and not how it is described in flowcharts) the conversation changes. Priorities become clearer. Bottlenecks reveal themselves. Entire categories of work turn out to be more manual than anyone expected. And what looked like a technology problem often turns out to be a process problem that has been accumulating for years.


Six Lessons Learned Building RAG Systems in Production

Something ships quickly, the demo looks fine, leadership is satisfied. Then real users start asking real questions. The answers are vague. Sometimes wrong. Occasionally confident and completely nonsensical. That’s usually the end of it. Trust disappears fast, and once users decide a system can’t be trusted, they don’t keep checking back to see if it has improved. They simply stop using it. In this case, the real failure is not technical but human. People will tolerate slow tools and clunky interfaces. What they won’t tolerate is being misled. When a system gives you the wrong answer with confidence, it feels deceptive. Recovering from that, even after months of work, is extremely hard. ... Many teams rush their RAG development, and to be honest, a simple MVP can be achieved very quickly if we aren’t focused on performance. But RAG is not a quick prototype; it’s a huge infrastructure project. The moment you start stressing your system with real, evolving data in production, the weaknesses in your pipeline will begin to surface. ... When we talk about data preparation, we’re not just talking about clean data; we’re talking about meaningful context. That brings us to chunking. Chunking refers to breaking down a source document, perhaps a PDF or internal document, into smaller chunks before encoding each one into vector form and storing it in a database.
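A minimal chunking sketch in Python, assuming simple fixed-size character chunks with overlap. Production pipelines often split on semantic boundaries (headings, paragraphs) instead, and the size and overlap values here are illustrative only:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character-based chunks."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # overlap keeps context across boundaries
    return chunks

document = "Lorem ipsum " * 200  # stand-in for text extracted from a PDF
for chunk in chunk_text(document):
    ...  # encode each chunk into a vector and store it in the database
```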


Enterprise reactions to cloud and internet outages

Those in the C-suite, not surprisingly, “examined” or “explored” or “assessed” their companies’ vulnerability to cloud and internet problems after the news. So what did they find? Are enterprises fleeing the cloud they now see as risky instead of protective? ... All the enterprises thought the dire comments they’d read about cloud abandonment were exaggerations, or reflected an incomplete understanding of the cloud and alternatives to cloud dependence. And the internet? “What’s our alternative there?” one executive asked me. ... The enterprise experts pointed out that the network piece of this cake had special challenges. It’s critical to keep the two other layers separated, at least to ensure that nothing from the user-facing layer could see the resource layer, which of course would be supporting other applications and, in the case of the cloud, other companies. It’s also critical in exposing the features of the cloud to customers. The network layer, of course, includes the Domain Name System (DNS) that converts our familiar URLs to actual IP addresses for traffic routing; it’s the system that played a key role in the AWS problem, and as I’ve noted, it’s run by a different team. ... Enterprises don’t see the notion of a combined team or an overlay, every-layer team, as the solution. None of the enterprises had a view of what would be needed to fix the internet, and only a quarter of even the virtualization experts expressed an opinion on what the answer is for the cloud. 


Offering more AI tools can't guarantee better adoption -- so what can?

After multiple years of relentless hype around AI and its promises, it's no surprise that companies have high expectations for their AI investments. But the measurable results have left a lot to be desired, with studies repeatedly showing most organizations aren't seeing the ROI they'd hoped for; in a Deloitte research report from October, only 10% of 1,854 respondents using agentic AI said they were realizing significant ROI on that investment, despite 85% increasing their spend on AI over the last 12 months. ... At face value, it seems obvious that the IT leadership team should be responsible for all things AI, since it is a technical product deployed at scale. In practice, this approach creates unnecessary hurdles to effective adoption, isolating technical decision-making from daily department workflows. And since many AI deployments are focused on equipping the workforce with new capabilities, excluding the human resources department is likely to constrain the effort. ... "If you focus on the tool, it's going to become procedural," Weed-Schertzer warned. "'Here's how to log in. This is your account.'" While technically useful, she added that she sees the biggest rewards coming from training employees on specific applications and having managers demonstrate the utility of an AI program for their teams, so that workers have a clear model from which to work. Seeing the utility is what will prompt long-term adoption, as opposed to a demo of basic tool functionality.


Why Cybersecurity Awareness Month Should Include Personal Privacy

Cybersecurity awareness campaigns tend to focus on email hygiene, secure logins, and network defense. These are key, but the boundary between internal threats and external exposure isn’t clear. An executive’s phone number leaked on a data broker’s site can become the first step in a targeted spear-phishing attack. A social media post about a trip can tip off a burglar. Forward-thinking entities know this. They tie personal privacy to enterprise risk. They integrate privacy checks into executive protection, threat monitoring, and insider-risk programs. Employees’ digital identities are treated as part of the attack surface. ... Removing data from your social profiles is only half the fight. The real struggle lives in data broker databases. These brokers compile, package, and resell personal data (addresses, phone numbers, demographics), feeding dozens of downstream systems. Together, they extend your data’s reach into places you never directly visited. Most individuals never see their names there, never ask for removal, and never know about the pathways. Because every broker has its own rules, opt-outs require patience and effort. One broker demands forms, another wants ID, and a third ignores requests entirely. ... Awareness without action fades. However, when employees internalize privacy practices, they extend protection during their off hours and weekends. That’s when bad actors strike, during perceived downtime.


How CIOs can break free from reactive IT

Invisible IT is emerging as a practical way for CIOs to minimize disruption and improve the performance of the digital workplace. At its simplest, it’s an approach that prevents many issues from becoming problems in the first place, reducing the need for users to raise tickets or wait for help. As ecosystems scale, the gap between what organizations expect and what legacy workflows can deliver continues to widen. Lenovo’s latest research highlights invisible IT as a strategic shift toward proactive, personalized support that strengthens the performance of the digital workplace. ... In a workplace where devices, applications and services operate across different locations and conditions, this approach leaves CIOs without the early signals needed to prevent interruption. Faults often emerge gradually through performance drift or configuration inconsistencies, but traditional workflows only respond once the impact is visible to users. ... Invisible IT draws on AI to interpret device health, behavioral patterns and performance signals across the organization, giving CIOs earlier awareness of degradation and emerging risks. ... Invisible IT gives CIOs a clearer path to shaping a digital workplace that strengthens productivity and resilience by design. By shifting from user-reported issues to signal-driven insight, CIOs gain earlier visibility into risks and greater control over how disruptions are managed.
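The kind of signal-driven check described above can be illustrated with a rolling baseline. A minimal sketch, with the metric, window, and threshold all hypothetical: flag a device metric (here, boot time) that drifts well outside its recent history before a user ever files a ticket.

```python
from collections import deque
from statistics import mean, stdev

class DriftDetector:
    """Flag gradual drift in a device metric against a rolling baseline."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # recent history only
        self.threshold = threshold            # z-score cutoff (illustrative)

    def observe(self, value: float) -> bool:
        """Return True if the reading deviates sharply from the baseline."""
        drifted = False
        if len(self.readings) >= 10:  # wait for a usable baseline
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                drifted = True
        self.readings.append(value)
        return drifted

detector = DriftDetector()
for boot_seconds in [22, 24, 23, 25, 22, 23, 24, 22, 23, 24, 41]:
    if detector.observe(boot_seconds):
        print("possible degradation: boot time", boot_seconds, "s")
```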


AI isn’t one system, and your threat model shouldn’t be either

The right way to partition a modern AI stack for threat modeling is not to treat “AI systems” as a monolithic risk category. Instead, we should return to security fundamentals and segment the stack by what the system does, how it is used, the sensitivity of the data it touches, and the impact its failure or breach could have. This distinguishes low-risk internal productivity tools from models embedded in mission-critical workflows or those representing core intellectual property, and it ensures AI is evaluated in context rather than by label. ... Threat modeling is a driver of higher quality that extends beyond security, and the best way to convey this to business leaders is through analogies rooted in their own domain. For example, in a car dealership, no one would allow a new salesperson to sign off on an 80 percent discount. The general manager instantly understands why that safeguard exists because it protects revenue, reputation, and operational stability. ... Tool calling patterns are one key area to incorporate into threat modeling. Most modern LLM implementations rely on external tool calls, such as web search or internal MCPs (some server-side, some client-side). Unless these are tightly defined and constrained, they can drive the model to behave in unexpected or partially malicious ways. Changes in the frequency, sequence, or parameters of tool calls can indicate misuse, model confusion, or an attempted escalation path.
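One way to operationalize that tool-calling point is to baseline each agent's normal tool usage and alert on tools never seen before or on volume spikes. A minimal sketch under an assumed event format; the tool names, counts, and spike factor are all illustrative:

```python
from collections import Counter

# Baseline: how often this agent normally calls each tool per day.
baseline_calls = Counter({"web_search": 120, "crm_lookup": 80})

def audit(day_calls: Counter, spike_factor: float = 5.0) -> list[str]:
    """Flag never-seen tools and large frequency deviations."""
    alerts = []
    for tool, count in day_calls.items():
        expected = baseline_calls.get(tool, 0)
        if expected == 0:
            alerts.append(f"new tool invoked: {tool} ({count} calls)")
        elif count > spike_factor * expected:
            alerts.append(f"call spike on {tool}: {count} vs ~{expected}")
    return alerts

today = Counter({"web_search": 130, "crm_lookup": 900, "shell_exec": 3})
for alert in audit(today):
    print(alert)  # flags the crm_lookup spike and the unexpected shell_exec
```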


The Convergence Challenge: Architecture, Risk, and the Urgency for Assurance

If there was a single topic that drew the sharpest concern, it was the way organizations are adopting AI. Hayes described AI as a new threat vector that many companies have rushed into without architectural planning or governance. In his view, the industry is creating a new category of debt that may exceed what already exists in legacy systems. “AI is being adopted haphazardly in many organizations,” Hayes said. Marketing teams connect tools to mail systems. Staff paste corporate content into public models. Guardrails are light or nonexistent. In many cases no one has defined how to test models, how to check for poisoning, or how to verify that outputs remain reliable over time. Hayes argued that the field has done a poor job securing software in general, and is now repeating the same mistakes with AI, only faster. The difference is that AI systems can act and adapt at a pace human attackers cannot match. Swanson added that boards and senior leaders still struggle with their role in major technology shifts. They do not want to manage details, but they are responsible for strategy and oversight. With AI, as with earlier changes, many boards have not yet decided how to oversee investments that fundamentally reshape business operations. Ominski put a fine point on it. “We are moving into risks we have not fully imagined,” he said. “The pace alone forces us to rethink how we govern technology.”


AI Coding Agents and Domain-Specific Languages: Challenges and Practical Mitigation Strategies

DSLs are deliberately narrow, domain-targeted languages with unique syntax rules, semantics, and execution models. They often have little representation in public datasets, evolve quickly, and include concepts that resemble no mainstream programming language. For these reasons, DSLs expose the fundamental weaknesses of large language models when used as code generators. ... Many DSLs, especially new ones, lack mature Language Server Protocol (LSP) support, which provides syntax and error highlighting in the code editor. Without structured domain data for Copilot to query, the model cannot check its guesses against a canonical schema. ... Because the problem stems from missing knowledge and structure, the solution is to supply knowledge and impose structure. Copilot’s extensibility, particularly Custom Agents, project-level instruction files, and the Model Context Protocol (MCP), makes this possible. ... Structure matters: AI systems chunk documentation for retrieval. Keep related information proximate – constraints mentioned three paragraphs after a concept may never appear in the same retrieval context. Each section should be self-contained with necessary context included. ... AI coding agents are powerful, but they are pattern-driven tools. DSLs, by definition, lack the broad pattern exposure that enables LLMs to behave reliably.
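That structure advice lends itself to a small illustration: when chunking DSL documentation for retrieval, carry each section's heading path with it so constraints stay attached to the concepts they qualify even when a chunk is retrieved alone. A minimal sketch; the DSL name and doc contents are invented:

```python
def contextualize_sections(doc: dict[str, str], dsl: str) -> list[str]:
    """Prefix each section with its heading path so chunks are self-contained."""
    return [f"[{dsl} > {heading}]\n{body}" for heading, body in doc.items()]

# Hypothetical DSL documentation, keyed by section heading.
dsl_docs = {
    "pipeline block": "Declares a processing stage. Constraint: at most "
                      "one 'source' step per pipeline; it must appear first.",
    "retry policy": "Constraint: 'max_attempts' must be between 1 and 5.",
}
for chunk in contextualize_sections(dsl_docs, "AcmeFlow DSL"):
    print(chunk, "\n")  # each chunk now names the DSL and the concept it covers
```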

Daily Tech Digest - December 19, 2025


Quote for the day:

"A leader's dynamic does not come from special powers. It comes from a strong belief in a purpose and a willingness to express that conviction." -- Kouzes & Posner



AI tops CEO earnings calls as bubble fears intensify

Research by Hamburg-based IoT Analytics examined around 10,000 earnings calls from about 5,000 global companies listed in the US. The firm's latest quarterly study found that AI rose to the top of CEO agendas for the first time in the period, while concerns about a possible AI-related asset bubble also increased sharply. Mentions of an "AI bubble" climbed 64% compared with the previous quarter. IoT Analytics said executives often paired announcements of new AI investments with comments that questioned the sustainability of current market valuations and the pace of capital inflows into the sector. ... While the number of AI-related references reached a new high, comments that explicitly mentioned a "bubble" in connection with technology or financial markets grew even faster in percentage terms. The study recorded the strongest quarter-on-quarter jump in bubble-related language since it began tracking the metric. Executives used the term "bubble" in several contexts. Some discussed venture funding and valuations for private AI companies. Others raised questions about the level of spending on compute infrastructure and the potential for overcapacity. A smaller group linked bubble concerns to individual asset classes such as AI-related equities. The increase in bubble-related discussion came alongside continued announcements of long-term AI spending plans. 


AI governance becomes a board mandate as operational reality lags

Executives have clearly moved fast to formalize oversight. But the foundations needed to operationalize those frameworks—processes, controls, tooling, and skills embedded in day-to-day work—have not kept pace, according to the report. ... Many organizations still lack a comprehensive view of where AI is being used across their business, Singh explained. Shadow AI and unsanctioned tools proliferate, while sanctioned projects are not always cataloged in a central inventory. Without this map of AI systems and use cases, governance bodies are effectively trying to manage risk they cannot fully see. The second gap is conceptual. “There’s a myth that governance is the same as regulation,” Singh said. “Unfortunately, it’s not.” Governance, she argued, is much broader: It includes understanding and mitigating risk, but also proving out product quality, reliability, and alignment with organizational values. Treating governance as a compliance checkbox leaves major gaps in how AI actually behaves in production. The final one is AI literacy. “You can’t govern something you don’t use or understand,” Singh said. If only a small AI team truly grasps the technology while the rest of the organization is buying or deploying AI-enabled tools, governance frameworks will not translate into responsible decisions on the ground. ... What good governance looks like, Singh argued, is highly contextual. Organizations need to anchor governance in what they care about most. 


Legal Issues for Data Professionals: Data Centers in Space

If data is processed, copied, or stored on satellites, courts may be forced to decide whether space-based computing falls outside the scope of a “worldwide” license. A licensor could argue that the licensee exceeded the grant by moving data “off-planet,” creating an unintended new use. Moreover, even defining the equivalent of “territory” as “throughout the universe” raises as many questions as it answers. The legal issues and regulatory rules involving data governance and legal rights in data centers in orbit have antecedents. ... Satellite-based data centers raise new questions: Where, for legal purposes, is an unauthorized copy of copyrighted material made, and which jurisdiction’s laws apply? A location in space complicates these legal issues and has implications for data governance. ... On Earth, IP enforcement against infringement relies on tools like forensic imaging, seizure of hard drives, discovery of server logs, and on-site inspections. Space breaks these tools. A court cannot easily order the seizure of a satellite. Inspecting hardware in orbit is not possible without specialized spacecraft. From a user’s perspective, retrieving logs may depend entirely on a vendor’s operations. ... Most cloud contracts and cyber insurance policies assume all processing happens on Earth. They do not address such things as satellite collisions, radiation damage, solar storms, loss of access due to orbital debris, or the failure of a satellite-to-Earth data link.


DNS as a Threat Vector: Detection and Mitigation Strategies

DNS is a critical control plane for modern digital infrastructure — resolving billions of queries per second, enabling content delivery, SaaS access, and virtually every online transaction. Its ubiquity and trust assumptions make it a high-value target for attackers and a frequent root cause of outages. Unfortunately, this essential service can be exploited as a DoS vector. Attackers can harness misconfigured authoritative DNS servers, open DNS resolvers, or the networks that support such activities to initiate a flood of traffic to a target, impacting service availability and causing large-scale disruptions. This misuse of DNS capabilities makes it a potent tool in the hands of cybercriminals. ... DNS detection strategies focus on analyzing traffic patterns and query content for anomalies (like long/random subdomains, high volume, rare record types) to spot threats like tunneling, Domain Generation Algorithms, or malware, using AI/ML, threat intel, and SIEMs for real-time monitoring, payload analysis, and traffic analysis, complemented by DNSSEC and rate limiting for prevention. Legacy security tools often miss DNS threats. ... DNS mitigation strategies involve securing servers, controlling access (MFA, strong passwords), monitoring traffic for anomalies, rate-limiting queries, hardening configurations, and using specialized DDoS protection services to prevent amplification, hijacking, and spoofing attacks, ensuring domain integrity and availability.
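One of the detection heuristics named above, long or random-looking subdomains, can be sketched with a simple length and Shannon-entropy check. The thresholds below are illustrative and would need tuning against real query traffic:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character; random-looking labels score higher."""
    counts = Counter(label)
    return -sum(c / len(label) * math.log2(c / len(label))
                for c in counts.values())

def looks_suspicious(qname: str, max_len: int = 40,
                     max_entropy: float = 3.8) -> bool:
    """Flag queries whose leftmost label is unusually long or high-entropy."""
    subdomain = qname.split(".")[0]
    return len(subdomain) > max_len or shannon_entropy(subdomain) > max_entropy

print(looks_suspicious("www.example.com"))                        # False
print(looks_suspicious("aGVsbG8gd29ybGQgZXhmaWw0.evil.example"))  # likely True
```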


The ‘chassis strategy’: How to build an innovation system that compounds value

The chassis strategy starts with a simple principle: centralize what must be common and decentralize what should evolve. You don’t need a monolithic innovation platform. You need a spine — a shared foundation of data, models and governance — that everything else plugs into. That spine ensures no matter who builds the next great idea — your team, a startup or a strategic partner — the learning, data and IP stay inside your system. ... You don’t need five years or an enterprise overhaul. A minimal but functional chassis can be built in nine months. The first three months are about framing and simplification. Pick three or four innovation domains — formulation, packaging, pricing or supply chain. Define the shared spine: your data schema, APIs and key metrics. Draw a bright line between what you’ll own (core) and what you’ll source (modules). The next three months are about building the core. Set up a unified data layer, model registry, API gateway and an experimentation sandbox. Keep it lightweight. No monoliths, no “innovation cloud.” Just the essentials that make reuse possible. The final three months are about plugging and proving. Integrate a few external modules — a supplier-insight engine, a generative packaging designer, a formulation optimizer. Track time to activation and reuse rate. The goal isn’t more features; it’s showing that vendors can connect fast, share data safely and strengthen the system.


AI is creating more software flaws – and they're getting worse

The CodeRabbit study found an average of 10.83 issues in AI-assisted pull requests versus 6.45 in human-only ones, adding that AI pull requests were far more likely to have critical or major issues. "Even more striking: high-issue outliers were much more common in AI PRs, creating heavy review workloads," Loker said. Logic and correctness was the worst area for AI code, followed by code quality and maintainability, then security. Because of that, CodeRabbit advised reviewers to watch out for those types of errors in AI code. ... "These include business logic mistakes, incorrect dependencies, flawed control flow, and misconfigurations," Loker wrote. "Logic errors are among the most expensive to fix and most likely to cause downstream incidents." AI code was also spotted omitting null checks, guardrails, and other error checking, which Loker noted are issues that can lead to outages in the real world. When it came to security, the most common mistakes by AI were improper password handling and insecure object references, Loker noted, with security issues 2.74 times more common in AI code than in that written by humans. Another major difference between AI code and human-written code was readability. "AI-produced code often looks consistent but violates local patterns around naming, clarity, and structure," Loker added.


Identity risk is changing faster than most security teams expect

Two forces are expected to influence trust systems in 2026. The first is the rise of autonomous AI agents. These agents run onboarding attempts, learn from rejection, and retry with improved tactics. Their speed compresses the window for detecting weaknesses and demands faster defensive responses. The second force comes from the long tail of quantum disruption. Growing quantum capability is putting pressure on classical cryptographic methods, which lose strength once computation reaches certain thresholds. Data encrypted today can be harvested and unlocked in the future. In response, some organizations are adopting quantum-resilient hashing and beginning the transition toward post-quantum cryptography that can withstand newer forms of computational power. ... A three-part structure is emerging as a practical response. Hashing establishes integrity that cannot be altered. Encryption protects data while standards evolve. Predictive analysis identifies early drift and synthetic behavior before it scales. Together these elements support a continuous trust posture that strengthens as it absorbs more identity events. This model also addresses rising threats such as presentation spoofing, identity drift, and credential replay. All three are expected to increase in 2026 based on observed anomaly patterns. Since these vectors rely on repeated behaviors, long-term monitoring is essential.
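The integrity role hashing plays here can be illustrated with a hash chain over identity events: each record commits to its predecessor, so altering any past event invalidates every later hash. A minimal sketch of the integrity idea only, using SHA-256, not the quantum-resilient schemes the article refers to:

```python
import hashlib
import json

def append_event(chain: list[dict], event: dict) -> None:
    """Append an identity event that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any tampering breaks the chain from that point."""
    prev_hash = "0" * 64
    for record in chain:
        payload = json.dumps({"event": record["event"], "prev": prev_hash},
                             sort_keys=True)
        if record["prev"] != prev_hash or \
           record["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = record["hash"]
    return True

chain: list[dict] = []
append_event(chain, {"user": "u123", "action": "onboarding_attempt"})
append_event(chain, {"user": "u123", "action": "document_upload"})
print(verify(chain))                   # True
chain[0]["event"]["user"] = "u999"     # tamper with history
print(verify(chain))                   # False: the chain no longer verifies
```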


D&O liability protection rising for security leaders — unless you’re a midtier CISO

CISOs have the potential for more than one safety net, the first of which is a company’s indemnification provisions — rules typically embedded in the company’s articles of incorporation and bylaws. “The language of a company’s indemnification provisions must be properly worded — typically achieved by the general counsel and a board vote — to provide indemnification for a CISO equal to every other director or officer of a company,” explains John Peterson of World Insurance Associates, a provider of employment practice liability insurance. The second safety net for a CISO is the D&O liability insurance policy procured by the CISO’s company through an insurance broker. Even when a company has D&O insurance in place, Peterson advises CISOs to review those policies to make sure they are covered as an “insured person.” ... While enterprise CISOs often have access to legal teams and crisis PR advisors to help shield them, a midrange firm often has one or two people — possibly more — wearing multiple hats, like compliance, IT, and security all rolled into one. This can become an issue because “regulators, customers, and even the courts won’t lower the expectations just because the company is smaller,” Bagnall says. “Without legal protection, CISOs face significant personal and professional risk,” Bagnall said. 


The CIO Conundrum: Balancing Security and Innovation in the Age of AI SaaS

AI tools are now accessible, inexpensive, and often solve workflow friction that teams have lived with for years. The business is moving fast because the barrier to entry is low. This pace raises important questions for CIOs: Are we creating unnecessary friction where teams expect velocity? Have we made the “right path” faster than the workaround? Do our processes match how people work today? Shadow IT grows when official paths feel slow or unclear. Not because teams want to hide things, but because they feel innovation can’t wait. Governance must evolve to match that reality. ... Security should accelerate productivity, not constrain it. With strong identity controls, clear data boundaries, and automated configuration standards, we can introduce new tools without adding friction. These guardrails reduce the workload on security teams and create a predictable environment for employees. The business moves faster. IT gains visibility. The organization avoids the drift that creates risk and inefficiency. ... The question isn’t whether teams will continue exploring new tools, it’s whether we provide a responsible, scalable path forward. When intake is transparent, vetting is calibrated, and guardrails are embedded, the organization can innovate with confidence. The CIO’s job is to design frameworks that keep pace with the business, not frameworks the business waits on.


From hype to reality: The three forces defining security in 2026

Organisations should stop asking “what might agentic AI do” and start identifying the repeatable security workflows they want automated; for example: incident triage, patrol optimisation, evidence packaging; then measure agent performance against those KPIs. The winners in 2026 will be platforms that expose safe, auditable agent APIs and vendors who integrate them into end-to-end operational playbooks. ... Looking ahead, the widespread adoption of digital twins is poised to reshape the security industry’s approach to risk management and operational planning. With a unified, real-time view of complex environments, digital twins enable proactive decision-making, allowing security teams to anticipate threats, optimise resource allocation and continuously refine standard operating procedures. Over time, this capability will shift the industry from reactive incident response to predictive and preventative security strategies, where investment in training, infrastructure and technology is guided through simulated outcomes rather than historical events. ... AR and wearables have had a turbulent history, but their resurgence in 2026 will be different — and AI is the reason. AI transforms wearables from simple capture devices into intelligent companions. It elevates AR from a visual overlay to a real-time, context-aware guidance layer. 

Daily Tech Digest - December 17, 2025


Quote for the day:

"Don't worry about being successful but work toward being significant and the success will naturally follow." -- Oprah Winfrey



5 key agenticops practices to start building now

“AI agents in production need a different playbook because, unlike traditional apps, their outputs vary, so teams must track outcomes like containment, cost per action, and escalation rates, not just uptime,” says Rajeev Butani, chairman and CEO of MediaMint. ... Architects, devops engineers, and security leaders should collaborate on standards for IAM and digital certificates for the initial rollout of AI agents. But expect capabilities to evolve, especially as the number of AI agents scales. As the agent workforce grows, specialized tools and configurations may be needed. ... Devops teams will need to define the minimally required configurations and standards for platform engineering, observability, and monitoring for the first AI agents deployed to production. Then, teams should monitor their vendor capabilities and review new tools as AI agent development becomes mainstream. ... Select tools and train SREs on the concepts of data lineage, provenance, and data quality. These areas will be critical to up-skilling IT operations to support incident and problem management related to AI agents. ... Leaders should define a holistic model of operational metrics for AI agents, which can be implemented using third-party agents from SaaS vendors and proprietary ones developed in-house. ... User feedback is essential operational data that shouldn’t be left out of scope in AIops and incident management. This data not only helps to resolve issues with AI agents, but is critical for feeding back into AI agent language and reasoning models.
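The outcome metrics Butani lists (containment, cost per action, escalation rates) can be tracked with a small per-agent structure. A minimal sketch; the field names and cost figures are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AgentOutcomes:
    """Outcome metrics for one AI agent: containment, escalation, cost."""
    actions: int = 0
    contained: int = 0       # resolved without a human handoff
    escalated: int = 0
    total_cost: float = 0.0  # e.g., model plus tool spend in dollars

    def record(self, cost: float, handed_off: bool) -> None:
        self.actions += 1
        self.total_cost += cost
        if handed_off:
            self.escalated += 1
        else:
            self.contained += 1

    def report(self) -> dict:
        denom = max(self.actions, 1)  # avoid division by zero before any data
        return {
            "containment_rate": self.contained / denom,
            "escalation_rate": self.escalated / denom,
            "cost_per_action": self.total_cost / denom,
        }

support_agent = AgentOutcomes()
support_agent.record(cost=0.04, handed_off=False)
support_agent.record(cost=0.09, handed_off=True)
print(support_agent.report())
```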


The great AI hype correction of 2025

The pendulum from hype to anti-hype can swing too far. It would be rash to dismiss this technology just because it has been oversold. The knee-jerk response when AI fails to live up to its hype is to say that progress has hit a wall. But that misunderstands how research and innovation in tech work. Progress has always moved in fits and starts. There are ways over, around, and under walls. Take a step back from the GPT-5 launch. It came hot on the heels of a series of remarkable models that OpenAI had shipped in the previous months, including o1 and o3 (first-of-their-kind reasoning models that introduced the industry to a whole new paradigm) and Sora 2, which raised the bar for video generation once again. That doesn’t sound like hitting a wall to me. ... Even an AGI evangelist like Ilya Sutskever, chief scientist and cofounder at the AI startup Safe Superintelligence and former chief scientist and cofounder at OpenAI, now highlights the limitations of LLMs, a technology he had a huge hand in creating. LLMs are very good at learning how to do a lot of specific tasks, but they do not seem to learn the principles behind those tasks, Sutskever said in an interview with Dwarkesh Patel in November. It’s the difference between learning how to solve a thousand different algebra problems and learning how to solve any algebra problem. “The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people,” Sutskever said.


The future of responsible AI: Balancing innovation with ethics

Trust begins with explainability. When teams understand the reasons for a model’s behavior — the reasons behind certain code being generated, a certain test being selected, a certain dataset being prioritized — they can validate it and fix it. Explainability matters to customers as well. Research shows that when customers are clear on when and how AI is influencing decisions, they trust the brand more. This does not require sharing the proprietary model architectures; it simply requires transparency around AI in the flow of decision-making. Another emerging pillar of trust is the responsible use of synthetic data. In privacy-sensitive environments, companies are generating domain-specific synthetic datasets for experimentation. Agents powered by large language models (LLMs) can be used in multi-agent pipelines to filter the outputs for regulatory compliance, thematic compliance and accuracy of structure — all of which help teams train and fine-tune the model without compromising data privacy. ... Responsible AI is no longer just the last step in the workflow. It’s becoming a blueprint for how teams build it, release it, and iterate on it. The future will belong to organizations that think of responsibility as a design choice, not a compliance checkbox. The goal is the same whether it’s about using synthetic data safely, validating generative code, or raising overall explainability in workflows: to create AI systems that people trust and that teams can depend on.
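The filtering pipeline described here can be sketched as a chain of checks over synthetic records. In the sketch below the "agents" are plain rule functions standing in for LLM-powered reviewers, and the record fields and regex are invented for illustration:

```python
import re

def structural_check(record: dict) -> bool:
    """Accuracy of structure: required fields must be present."""
    return {"age", "diagnosis_code"} <= record.keys()

def compliance_check(record: dict) -> bool:
    """Regulatory check: reject anything resembling a real SSN slipping in."""
    return not any(re.search(r"\b\d{3}-\d{2}-\d{4}\b", str(v))
                   for v in record.values())

PIPELINE = [structural_check, compliance_check]  # each stage could be an agent

def filter_synthetic(records: list[dict]) -> list[dict]:
    """Keep only records that pass every stage of the pipeline."""
    return [r for r in records if all(check(r) for check in PIPELINE)]

batch = [
    {"age": 54, "diagnosis_code": "E11.9"},
    {"age": 61, "diagnosis_code": "I10", "note": "SSN 123-45-6789"},
]
print(filter_synthetic(batch))  # only the first record survives
```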


Thriving in the unknown future

To navigate this successfully, we understood that our first challenge was one of mindset. How could we maintain agility of thinking and resilience, while also meeting our customers’ anticipated needs for a specific, defined product on target deadlines? Since the core of our offering is technological excellence, which ensures unmatched data accuracy, depth of insight and business predictions, how could we insist on this high level of authority amid the swirling changes all around us? We approach our work from a new point of view, and with a great deal of curiosity and imagination. ... With all the hype around AI, it is easy for our customers and our organizations to expect it to achieve… everything. But, as professionals building these tools, we know this is not the case. Many internal stakeholders and customers might not understand the difference between predictive analytics, machine learning, and generative AI, leading to misaligned expectations. ... Although our product, R&D, data science, project management and customer success teams are each independent, we work cross-functionally to foster the ability for swift action and change, when needed. Engineers, data scientists and product managers work together for holistic problem-solving. These collaborations are less formalized, instituted per project or issue, so colleagues feel free to turn to each other for assistance and can still remain focused on individual projects.


Tokenization takes the lead in the fight for data security

Because tokenization preserves the structure and ordinality of the original data, it can still be used for modeling and analytics, turning protection into a business enabler. Take private health data governed by HIPAA for example: tokenization means that data can be used to build pricing models or for gene therapy research, while remaining compliant. "If your data is already protected, you can then proliferate the usage of data across the entire enterprise and have everybody creating more and more value out of the data," Raghu said. "Conversely, if you don’t have that, there’s a lot of reticence for enterprises today to have more people access it, or have more and more AI agents access their data. Ironically, they’re limiting the blast radius of innovation. The tokenization impact is massive, and there are many metrics you could use to measure that – operational impact, revenue impact, and obviously the peace of mind from a security standpoint." ... While conventional tokenization methods can involve some complexity and slow down operations, Databolt seamlessly integrates with encrypted data warehouses, allowing businesses to maintain robust security without slowing performance or operations. Tokenization occurs in the customer’s environment, removing the need to communicate with an external network to perform tokenization operations, which can also slow performance.
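Why tokenized data stays usable can be seen in a minimal vault-based sketch: the token keeps the original value's length and character structure, while the real value lives only in a protected mapping. This illustrates the format-preserving idea generically (it preserves structure but not ordinality, and skips collision handling); it is not how Databolt or any particular product implements tokenization:

```python
import secrets
import string

VAULT: dict[str, str] = {}  # protected token -> original mapping

def tokenize(value: str) -> str:
    """Replace each digit with a random digit, preserving length and format."""
    token = "".join(secrets.choice(string.digits) if ch.isdigit() else ch
                    for ch in value)
    VAULT[token] = value
    return token

def detokenize(token: str) -> str:
    """Authorized lookup of the original value."""
    return VAULT[token]

ssn = "123-45-6789"
tok = tokenize(ssn)
print(tok)                      # e.g. "804-17-2395": same shape, fake digits
print(detokenize(tok) == ssn)   # True for authorized systems
```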


Enterprises to prioritize infrastructure modernization in 2026

The rise of AI has heightened the importance of IT modernization, as many organizations are still reliant on outdated, legacy infrastructure that is ill-equipped to handle modern workload requirements, says tech solutions provider World Wide Technology (WWT). ... A move to modernize data center infrastructure has many organizations looking at private cloud models, according to the WWT report: “The drive toward private cloud is fueled by several needs, with one primary driver being greater data security and privacy. Industries like finance and government, which handle sensitive information, often find private cloud architectures better suited for meeting strict compliance requirements. ... There is also a move to build up network and compute abilities at the edge, Anderson noted. “Customers are not going to be able to home run all that AI data to their data center and in real time get the answers they need. They will have to have edge compute, and to make that happen, it’s going to be agents sitting out there that are talking to other agents in your central cluster. It’s going to be a very distributed, hybrid architecture, and that will require a very high-speed network,” Anderson said. ... Such modernization needs to take into consideration power and cooling needs much more than ever, Anderson said. “Most of our customers are not sitting there with a lot of excess data center power; rather, most people are out of power or need to be doing more power projects to prepare for the near future,” he said.


How researchers are teaching AI agents to ask for permission the right way

Under-permissioning appeared mostly with highly sensitive information. Social Security numbers, bank account details, and children’s names fell into this category. Participants withheld Social Security numbers almost half the time, even in tasks where the number would be necessary. The researchers noted that people often stayed cautious when the data touched on financial or identity-related matters. This tension between convenience and caution opens the door to new risks when such systems move from controlled studies into production environments. Brian Sathianathan, CTO at Iterate.ai, said the risk extends far beyond the model itself. “Arguably the biggest vulnerability isn’t so much the permission system itself but the infrastructure that it all runs on. ... Accuracy alone will not solve security concerns in sensitive fields. Sathianathan said organizations need to treat permission inference as protected infrastructure. “Mitigation here, in practice, means running permission inference behind your firewall and on your hardware. You should treat it like your SIEM where things are isolated, auditable, and never outsourced to shared infrastructure. You can’t let the permission system learn from unvetted data.” ... “The paper shows that collaborative filtering can predict user preferences with high accuracy, which is good, but the challenge for regulated industries is more in ensuring that compliance requirements take precedence over learned patterns even when users would prefer otherwise.”
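The collaborative-filtering approach referenced in the closing quote can be sketched as user-similarity over past allow/deny decisions. The matrix below is invented, and, as the quote stresses, a real deployment would let hard compliance rules override any learned preference:

```python
import numpy as np

# Allow/deny history: rows are users, columns are data types.
# 1 = user allowed the agent to share it, 0 = user withheld it.
#                   email  address  ssn
history = np.array([[1,     1,      0],   # user A
                    [1,     1,      0],   # user B
                    [1,     0,      0]])  # user C

def predict(user_vec: np.ndarray, target_col: int) -> float:
    """Similarity-weighted average of other users' decisions on one data type."""
    sims = history @ user_vec / (
        np.linalg.norm(history, axis=1) * np.linalg.norm(user_vec) + 1e-9)
    return float(sims @ history[:, target_col] / (sims.sum() + 1e-9))

new_user = np.array([1, 1, 0])  # known decisions for email and address
print(predict(new_user, target_col=2))  # near 0: ask before touching the SSN
```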


Bank Tech Planning 2026: What’s Real and What’s Hype?

Cybersecurity issues underpin every aspect of modern banking. With digital channels, cloud platforms and open APIs, financial institutions are exposed to increasingly sophisticated attacks, including ransomware, phishing and systemic fraud. Strong cybersecurity frameworks protect customer data, ensure regulatory compliance, and maintain operational continuity. ... Legacy core systems constrain banks’ ability to innovate, integrate with partners, and scale efficiently. Cloud-native or hybrid-core architectures provide flexibility, reduce maintenance burdens, and accelerate product delivery. By decoupling core functions from hardware limitations, banks gain resilience and the agility to respond quickly to market changes. ... Real-time payment infrastructure allows immediate settlement of transactions, eliminating delays inherent in batch processing. This capability is critical for consumer expectations, B2B cash flow, and operational efficiency. It also supports modern business needs, such as instant payroll, vendor disbursement, and high-frequency transfers. ... Modern banks rely on consolidated data platforms and advanced analytics to make timely, informed decisions. Predictive modeling, fraud detection and customer insights depend on high-quality, integrated data. Analytics also enables proactive risk management, operational efficiency and personalized customer experiences.


Are You a Modern Professional?

Professionals’ concerns include an overreliance on tech that could crimp professional development and lead to job losses, as well as a tendency to hold AI to a higher standard of accuracy and ROI. “More than 90% of professionals said they believe computers should be held to higher standards of accuracy than humans,” the report notes. “About 40% said AI outputs would need to be 100% accurate before they could be used without human review, meaning that it’s still critical that humans continue to review AI-generated outputs.” ... Professionals are involved across the AI landscape—as developers, providers, deployers and users—as defined by the EU AI Act. “While this provides opportunities, it also exposes professionals to risks at every stage—from biases, hallucinations, dependencies, misuse and more,” notes Dr Florence G’Sell, professor of private law at the Cyber Policy Center at Stanford University. “Opacity complicates the situation, as it makes assessing model performance difficult. To mitigate these risks, organizations could seek independent external assessment. But developers are reluctant to provide auditors access to data sources, model weights and code. This limits the ability to evaluate and ensure compliance with responsible AI principles.” ... Uncertain regulatory issues are already taking a toll on professionals, with more than 60% of enterprises in the Asia-Pacific experiencing moderate to significant disruption to their IT operations. 


Why The Ability To Focus Will Be Crucial For Future Leaders

Focus has become a fundamental value, as noise and excess have taken over our daily routines. Every notification, interruption or sense of urgency activates our brain’s alert system, diverting energy from the prefrontal cortex, the region responsible for decision making, planning and strategic thinking. In the process, strategic vision gives way to the micro decisions of the day-to-day. This is what some neuroscientists call a "fragmented attention" state, in which the brain reacts more than it creates. For leaders, this means you become reactive rather than innovative. ... Leaders who learn to regulate their own mental operating system can gain a decisive advantage and the ability to sustain clarity amid chaos. You can start with intentional pauses throughout the day—simple practices such as deep breathing, brief walks or moments of silence. Equally important is noticing when your mind drifts and deliberately working to bring it back. ... Modern leaders often overvalue expression and undervalue absorption. Yet, from a neurobiological standpoint, silence is not the absence of thought; it’s the synchronization of neural rhythms. One study found that periods of intentional quiet—no input, no analysis, no output—can activate the prefrontal cortex and strengthen the brain’s capacity for integration. Put another way: The mind reorganizes fragments into coherence only when it’s not forced to produce. In a culture addicted to immediacy, mental silence, time to recover and intentional breaks become a competitive advantage.

Daily Tech Digest - December 16, 2025


Quote for the day:

"Worry less, smile more. Don't regret, just learn and grow." -- @Pilotspeaker


The battle for agent connectivity: Can MCP survive the enterprise?

"MCP is the UI for agents. The future of asking ChatGPT to book an Uber and have a pizza available when you arrive at the hotel only works if we have the connectivity," said Dag Calafell III, director of Technology Innovation at MCA Connect, an IT consultancy for manufacturers. But while seamless connectivity might be the Holy Grail for consumer apps, critics argue that it is irrelevant -- or even dangerous -- for the enterprise. ... Notably, MCP has significant backing from prominent companies, including Google, OpenAI, Microsoft and its creator, Anthropic. Indeed, Calafell argued that while there are competitors out there, "MCP is winning" precisely because it has seen significant adoption by large software providers. Still, MCP clearly has significant issues -- mostly because it's in its infancy. MCP's rapidly evolving specification, uneven tooling, unclear security and governance controls, and lack of standardized memory, debugging, and orchestration make it better for experimentation than reliable enterprise use today. ... "There is room to innovate with a security-first 'MCP-like' standard that is resource aware, with trusted catalogues, privileges, scopes, etc. These would either be built on top of MCP, a sort of MCP v2, or introduced as part of a new protocol," said Liav Caspi, co-founder and CTO at Legit Security. And, of course, there remains an evolving trend that the AI industry will take an entirely different direction.


Digital Twin in Railways: A Practical Solution to Managing Complex Rail Systems

In the context of railways, digital twins are being deployed to improve asset lifecycle management, predictive maintenance, and infrastructure planning. By integrating inputs from IoT devices and advanced analytics platforms, these models help engineers monitor structural health, detect anomalies, and plan maintenance before failures occur. ... As the scale and complexity of rail networks continue to grow, the use of digital twins offers a unified, comprehensive view of interconnected assets, which empowers rail operators with faster decision-making and better coordination across departments. This technology is gradually becoming a core component of smart railway ecosystems. ... The architecture of a digital twin in railway systems is built upon the integration of multiple digital technologies, including Building Information Modelling (BIM), the Internet of Things (IoT), Geographic Information Systems (GIS), and data analytics platforms. Together, these technologies create a unified framework that connects the physical and digital environments of railway infrastructure and operations. ... The integration of operational data, including train movements, energy consumption, and passenger flows, allows operators to simulate different scenarios and optimise timetables, headways, and energy use. In dense networks such as urban metro systems, this contributes to improved punctuality and efficient energy utilisation.


Stop mimicking and start anchoring

It’s a fundamental truth that most CIOs are ignoring in their rush to emulate Big Tech playbooks. The result is a systematic misallocation of resources based on a fundamental misunderstanding of how value creation works across industries. ... the strategic value of IT should be measured by how effectively it addresses industry-specific value creation. Different industries have vastly different technology intensity and value-creation dynamics. In our view, CIOs must therefore resist trend-driven decisions and view IT investment through their industry’s value-creation lens to sharpen competitive edge. To understand why IT strategies diverge across industries shaped by sectoral realities and maturity differences, we need to examine how business models shape the role of technology. ... funding business outcomes rather than chasing technology fads is easier said than done. It’s difficult to unravel the maze created by the relentless march of technological hype versus the grounded reality of business. But the role of IT is not universal; its business relevance changes from one industry to another. ... Long-term value from emerging technologies comes from grounded application, not blind adoption. In the race to transform, the wisest CIOs will be those who understand that the best technology decisions are often the ones that honour, rather than abandon, the fundamental nature of their business. The future belongs not to those who adopt the most tech, but to those who adopt the right tech for the right reasons.


Build vs buy is dead — AI just killed it

Something fundamental has changed: AI has made building accessible to everyone. What used to take weeks now takes hours, and what used to require fluency in a programming language now requires fluency in plain English. When the cost and complexity of building collapse this dramatically, the old framework goes down with them. It’s not build versus buy anymore. It’s something stranger that we haven't quite found the right words for. ... And it's not some future state. This is already happening. Right now, somewhere, a customer rep is using AI to fix a product issue they spotted minutes ago. Somewhere else, a finance team is prototyping their own analytical tools because they've realized they can iterate faster than they can write up requirements for engineering. Somewhere, a team is realizing that the boundary between technical and non-technical was always more cultural than fundamental. The companies that embrace this shift will move faster and spend smarter. They’ll know their operations more deeply than any vendor ever could. They'll make fewer expensive mistakes, and buy better tools because they actually understand what makes tools good. The companies that stick to the old playbook will keep sitting through vendor pitches, nodding along at budget-friendly proposals. They’ll debate timelines, and keep mistaking professional decks for actual solutions. Until someone on their own team pops open their laptop and says, “I built a version of this last night. Want to check it out?”


Quantum Tech Hits Its “Transistor Moment,” Scientists Say

“This transformative moment in quantum technology is reminiscent of the transistor’s earliest days,” said lead author David Awschalom, the Liew Family Professor of molecular engineering and physics at the University of Chicago, and director of the Chicago Quantum Exchange and the Chicago Quantum Institute. “The foundational physics concepts are established, functional systems exist, and now we must nurture the partnerships and coordinated efforts necessary to achieve the technology’s full, utility-scale potential. How will we meet the challenges of scaling and modular quantum architectures?” ... Although advanced prototypes have demonstrated system operation and public cloud access, their raw performance remains early in development. For example, many meaningful applications, including large-scale quantum chemistry simulations, could require millions of physical qubits with error performance far beyond what is technologically viable today. ... “While semiconductor chips in the 1970s were TRL-9 for that time, they could do very little compared with today’s advanced integrated circuits,” he said. “Similarly, a high TRL for quantum technologies today does not indicate that the end goal has been achieved, nor does it indicate that the science is done and only engineering remains. Rather, it reflects that a significant, yet relatively modest, system-level demonstration has been achieved—one that still must be substantially improved and scaled to realize the full promise.”


Before you build your first enterprise AI app

Model weights are becoming undifferentiated heavy lifting, the boring infrastructure that everyone needs but no one wants to manage. Whether you use Anthropic, OpenAI, or an open weights model like Llama, you are getting a level of intelligence that is good enough for 90% of enterprise tasks. The differences are marginal for a first version. The “best” model is usually just the one you can actually access securely and reliably. ... We used to obsess over the massive cost of training models. But for the enterprise, that is largely irrelevant. AI is all about inference now, or the application of knowledge to power applications. In other words, AI will become truly useful within the enterprise as we apply models to governed enterprise data. The best place to build up your AI muscle isn’t with some moonshot agentic system. It’s a simple retrieval-augmented generation (RAG) pipeline. What does this mean in practice? Find a corpus of boring, messy documents, such as HR policies, technical documentation, or customer support logs, and build a system that allows a user to ask a question and get an answer based only on that data. This forces you to solve the hard problems that actually build a moat for your company. ... When you build your first application, design it to keep the human in the loop. Don’t try to automate the entire process. Use the AI to generate the first draft of a report or the first pass at a SQL query, and then force a human to review and execute it. 
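The retrieval half of such a pipeline can be sketched in a few lines. The example below uses TF-IDF instead of embeddings to stay dependency-light (scikit-learn only), assembles a prompt constrained to the retrieved context, and leaves the provider-specific LLM call out; the corpus and question are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A small stand-in corpus of "boring, messy" internal documents.
corpus = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Expense reports must be filed within 30 days of purchase.",
    "Remote work requires manager approval and a signed agreement.",
]
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

question = "How fast do I need to submit an expense report?"
context = "\n".join(retrieve(question))
prompt = (f"Answer using ONLY the context below. If the answer is not "
          f"in the context, say so.\n\nContext:\n{context}\n\n"
          f"Question: {question}")
print(prompt)  # pass this grounded prompt to whichever model you can access
```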


Cloudflare reveals AI surge & Internet ‘bot wars’ in 2025

Cloudflare reported that use of AI models and AI crawling activity increased sharply. It said crawling for model training accounted for the majority of AI crawler traffic during the year. Training-related crawlers generated traffic that reached as much as seven to eight times the level of retrieval-augmented generation and search crawlers at peak. Traffic from training crawlers was also as much as 25 times higher than AI crawlers tied to direct user actions. The company said Meta’s llama-3-8b-instruct model was the most widely used on its network. It was used by more than three times as many accounts as the next most popular models from providers such as OpenAI and Stability AI. Cloudflare added that Google’s crawling bot remained the dominant automated actor on the Internet. It said Googlebot’s crawl volume exceeded that of all other leading AI bots by a wide margin and was the largest single source of automated traffic it observed. ... Cloudflare reported a notable shift in the sectors that face the highest volume of cyber attacks. Civil society and non-profit organisations became the most attacked group for the first time. The company linked this trend to the sensitivity and financial value of the data held by such organisations. This includes personal information about donors, volunteers and beneficiaries. Cloudflare’s data also showed changes in the causes of major Internet outages. 


Who Owns AI Risk? Why Governance Begins with Architecture

But as AI systems grow more complex, so do their risks. Bias, opacity, data misuse, model drift, or even overreliance on AI outputs can all cause serious business, ethical, and reputational damage. This raises an uncomfortable question: who actually owns the risk of AI? ... AI doesn’t live in isolation. It consumes enterprise data, depends on cloud services, interacts with APIs, and influences real business processes. Governance, therefore, can’t rely on policies alone; it must be designed, structured, and embedded into the architecture itself. For instance, companies like Microsoft and Google have embedded AI governance directly into their architectural blueprints, creating internal AI Ethics and Risk Committees that review model design before deployment. This proactive structure ensures compliance and builds trust long before a model reaches production. ... In other words, AI governance is not a department; it’s an ecosystem of shared responsibility. Enterprise Architects connect the dots, Business Owners set the direction, Data Scientists implement, and Governance Boards oversee. But the real maturity comes when everyone in the organization, from the C-suite to the operational level, understands that AI is a shared asset and a shared risk. ... Modern enterprise architecture is no longer only about connecting systems. It’s about connecting responsibility. The moment artificial intelligence becomes part of the business fabric, architecture must evolve to ensure that governance isn’t something external or reactive; it’s embedded in the very design of every AI-enabled solution.


The 5 power skills every CISO needs to master in the AI era

According to the World Economic Forum’s Future of Jobs Report, nearly 40% of core job skills will change by 2030, driven primarily by AI, data and automation. For security professionals, this means that expertise in network defense, forensics and patching — while still essential — is no longer enough to create value. The real impact comes from how we interpret, communicate and apply what AI enables. ... The biggest myth in security is that technical mastery equals longevity. In truth, the more we automate, the more we value human differentiation. Success in the next decade won’t depend on how much code you can write — but on how effectively you can connect, translate and lead across systems and silos. When I look at the most resilient organizations today, they share one trait: They see cybersecurity not as a control function, but as a strategic enabler. And their leaders? They’re fluent in both algorithms and empathy. The future of cybersecurity belongs to those who build bridges — not just firewalls. Cybersecurity is no longer a war between humans and machines — it’s a collaboration between both. The organizations that succeed will be the ones that combine AI’s precision with human empathy and creative foresight. As AI handles scale, leaders must handle meaning. And that’s the true essence of power skills.


Manufacturing is becoming a test bed for ransomware shifts

“Manufacturing depends on interconnected systems where even brief downtime can stop production and ripple across supply chains,” said Alexandra Rose, Director of Threat Research, Sophos Counter Threat Unit. “Attackers exploit this pressure: despite encryption rates falling to 40%, the median ransom paid still reached $1 million. While half of manufacturers stopped attacks before encryption, recovery costs average $1.3 million and leadership stress remains high. Layered defenses, continuous visibility, and well-rehearsed response plans are essential to reduce both operational impact and financial risk,” Rose continued. Teams were able to stop attacks before encryption in a larger share of cases, which likely contributed to the decline in encryption rates. Early detection helped reduce disruption, although strong detection did not guarantee a smooth recovery. ... IT and security leaders in manufacturing see progress in some areas but ongoing gaps in others. Detection appears to be improving. Recovery is becoming steadier. Payment rates are declining. But operational weaknesses persist. Skills shortages, aging protections, and limited visibility into vulnerabilities continue to contribute to compromises. These factors shape outcomes as much as attacker capability. The findings also show a need for stronger internal support. Security teams are absorbing organizational and emotional strain that can affect long-term performance. Manufacturing operations depend on stable systems, and teams cannot maintain stability without workloads they can manage.

Daily Tech Digest - December 14, 2025


Quote for the day:

“It is never too late to be what you might have been.” -- George Eliot


Six questions to ask when crafting an AI enablement plan

As we near the end of 2025, there are two inconvenient truths about AI that every CISO needs to take to heart. Truth #1: Every employee who can is using generative AI tools for their job. Even when your company doesn’t provide an account for them, even when your policy forbids it, even when the employee has to pay out of pocket. Truth #2: Every employee who uses generative AI will provide (or likely already has provided) that AI with internal and confidential company information. ... In the case of AI, this refers to the difference between the approved business apps that are trusted to access company data and the growing number of untrusted and unmanaged apps that have access to that data without the knowledge of IT or security teams. Essentially, employees are using unmonitored devices, which can hold any number of unknown AI apps, and each of those apps can introduce considerable risk to sensitive corporate data. ... Simply put, organizations cannot afford to wait any longer to get a handle on AI governance. ... So now, the job is to craft an AI enablement plan that promotes productive use and throttles reckless behaviors. ... Think back to the mid-2000s, when SaaS crept into the enterprise through expense reports and project trackers. IT tried to blacklist unvetted domains, finance balked at credit-card sprawl, and legal wondered whether customer data belonged on “someone else’s computer.” Eventually, we accepted that the workplace had evolved, and SaaS became essential to modern business.
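
The trusted-versus-untrusted split is easy to operationalize as a first pass. Here is a toy Python sketch that triages observed AI destinations against an approved-app list; the domains, users, and log entries are all invented for illustration.

# Toy triage of observed AI traffic against an approved-app list.
# Domains, users, and log entries are invented for illustration.
APPROVED = {"chat.approved-ai.example", "copilot.internal.example"}

observed = [
    ("alice", "chat.approved-ai.example"),
    ("bob", "free-ai-summarizer.example"),    # unmanaged: flag for review
    ("carol", "copilot.internal.example"),
]

for user, host in observed:
    if host not in APPROVED:
        print(f"unsanctioned AI app: {host} (user {user}) -> route to enablement review")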


Why most enterprise AI coding pilots underperform (Hint: It's not the model)

When organizations introduce agentic tools without addressing workflow and environment, productivity can decline. A randomized controlled study this year showed that developers who used AI assistance in unchanged workflows completed tasks more slowly, largely due to verification, rework and confusion around intent. The lesson is straightforward: Autonomy without orchestration rarely yields efficiency. ... Security and governance, too, demand a shift in mindset. AI-generated code introduces new forms of risk: Unvetted dependencies, subtle license violations and undocumented modules that escape peer review. Mature teams are beginning to integrate agentic activity directly into their CI/CD pipelines, treating agents as autonomous contributors whose work must pass the same static analysis, audit logging and approval gates as any human developer. GitHub’s own documentation highlights this trajectory, positioning Copilot Agents not as replacements for engineers but as orchestrated participants in secure, reviewable workflows. ... Under the hood, agentic coding is less a tooling problem than a data problem. Every context snapshot, test iteration and code revision becomes a form of structured data that must be stored, indexed and reused. As these agents proliferate, enterprises will find themselves managing an entirely new data layer: One that captures not just what was built, but how it was reasoned about.
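
As a concrete illustration of that gating idea, here is a minimal policy check in Python: agent-authored changes face the same static-analysis bar as human code, plus a mandatory human approval. The Change fields are assumptions made for the sketch, not a real GitHub or CI schema.

from dataclasses import dataclass

@dataclass
class Change:
    author: str
    agent_generated: bool
    static_analysis_passed: bool
    human_approvals: int

def gate(change: Change) -> bool:
    # Same bar for humans and agents: static analysis is non-negotiable.
    if not change.static_analysis_passed:
        return False
    # Agent-authored work additionally requires a human approval; no self-merge.
    if change.agent_generated and change.human_approvals < 1:
        return False
    return True

print(gate(Change("copilot-agent", True, True, 0)))   # False: awaiting human review
print(gate(Change("copilot-agent", True, True, 1)))   # True: reviewed and clean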


Enabling small language models to solve complex reasoning tasks

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a collaborative approach where an LLM does the planning, then divvies up the legwork of that strategy among smaller ones. Their method helps small LMs provide more accurate responses than leading LLMs like OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as o1, while being more efficient than both. Their framework, called “Distributional Constraints by Inference Programming with Language Models” (or “DisCIPL”), has a large model steer smaller “follower” models toward precise responses when writing things like text blurbs, grocery lists with budgets, and travel itineraries. ... You may think that larger-scale LMs are “better” at complex prompts than smaller ones when it comes to accuracy and efficiency. DisCIPL suggests a surprising counterpoint for these tasks: If you can combine the strengths of smaller models instead, you may just see an efficiency bump with similar results. The researchers note that, in theory, you can plug in dozens of LMs to work together in the DisCIPL framework, regardless of size. In writing and reasoning experiments, they went with GPT-4o as their “planner LM,” which is one of the models that helps ChatGPT generate responses. 
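
To make the planner/follower division of labor concrete, here is a conceptual Python sketch in the spirit of that setup. It is not the DisCIPL implementation; both model calls are stubs standing in for real LM requests.

def planner(task):
    # Stub for a large "planner LM": decompose the task into constrained subtasks.
    return [{"subtask": "choose items", "constraint": "total cost <= $20"},
            {"subtask": "write blurb", "constraint": "exactly eight words"}]

def follower(step):
    # Stub for a small "follower LM": solve one tightly constrained subtask.
    return f"<output for {step['subtask']!r} satisfying {step['constraint']!r}>"

def solve(task):
    plan = planner(task)                      # one large-model call
    return [follower(step) for step in plan]  # many small-model calls, parallelizable

for part in solve("grocery list under a $20 budget, plus a short blurb"):
    print(part)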


Key trends accelerating Industrial Secure Remote Access (ISRA) Adoption

As essential maintenance and diagnostic activities continue to shift toward remote and digital execution, they become exposed to cyber risks that were not present when plants, fleets, and factories operated as isolated, closed systems. Compounding the challenge, many industrial organizations still lack the expertise and skill sets to select and operate the technologies that establish secure remote connections efficiently. This, unfortunately, results in operational delays and slower response in critical or emergency situations. Industrial Cyber emphasizes that controlled, identity-bound, and fully auditable access to critical tasks is key to ensuring secure remote access functions as an operational and business enabler—without introducing new pathways for malicious actors. ... Compounding the risk, OT environments frequently rely on legacy hardware that lacks modern encryption capabilities, leaving these connections especially vulnerable. By centralizing access governance, securely managing vendor credentials, streamlining access-request workflows, and maintaining consistent audit trails, industrial organizations can regain control over third-party access. ... Industrial Cyber recognizes two solutions from SSH. 1) PrivX OT is purpose-built for industrial environments. The solution provides passwordless, keyless, and just-in-time industrial secure remote access using short-lived certificates and micro-segmentation to reduce risk. 2) NQX delivers quantum-safe, high-speed network encryption for site-to-site connectivity.
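
The just-in-time, identity-bound access model is straightforward to picture in code. A toy Python illustration (not PrivX OT’s actual API): a grant is bound to one identity and one target, expires after a short TTL, and every check lands in an audit trail.

import time, secrets

def issue_grant(identity, target, ttl_seconds=900):
    # Short-lived, identity-bound grant for exactly one target system.
    return {"id": secrets.token_hex(8), "identity": identity,
            "target": target, "expires": time.time() + ttl_seconds}

def check(grant, identity, target, audit_log):
    ok = (grant["identity"] == identity and grant["target"] == target
          and time.time() < grant["expires"])
    audit_log.append((time.time(), identity, target, ok))   # every attempt is logged
    return ok

log = []
g = issue_grant("vendor-tech-42", "plc-line-3")
print(check(g, "vendor-tech-42", "plc-line-3", log))   # True while unexpired
print(check(g, "vendor-tech-42", "plc-line-7", log))   # False: wrong target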


Navigating AI Liability: What Businesses That Utilize AI Need to Know

Cybercriminals can now use generative AI to create extremely convincing deepfakes. These deepfakes can then be used for corporate espionage, identity theft and phishing scams. AI software may end up automatically aggregating and analyzing huge amounts of data from multiple sources. This can increase privacy invasion risks when comprehensive profiles of people are compiled without their awareness or consent. AI systems that glitch or malfunction, permit unauthorized access, or lack robust security can expose sensitive data. ... It is risky for your business to publish AI-generated content because AI models are trained on vast amounts of copyrighted material. The models therefore do not always create original material, and sometimes produce output that is identical or extremely similar to copyrighted content. “It was the AI’s fault” will not be a valid argument in court if this happens to your business. Ignorance is not a defense in a copyright infringement claim. ... Content that is fully generated by AI has no copyright protection. AI-generated content that is significantly edited by humans may receive copyright protection, but the situation is murky. Original content that is created by humans and is then slightly edited or optimized by AI will usually receive full copyright protection. A lot of businesses now document the process of content creation to prove that humans created the content and preserve copyright protection.


When the Cloud Comes Home: What DBAs Need to Know About Cloud Repatriation

One of the main drivers for cloud repatriation is cost. Early cloud migrations were often justified by projected savings because there would be no more hardware to maintain. Furthermore, the cloud promised flexible scaling and pay-as-you-go pricing. Nevertheless, for many enterprises, those savings have proven elusive. Data-intensive workloads, in particular, can rack up significant cloud bills. Every I/O operation, network transfer, and storage request adds up. When workloads are steady and predictable, the cloud’s on-demand elasticity can actually become more expensive than on-prem capacity. DBAs, who often have a front-row seat to performance and utilization metrics, can play a crucial role in identifying when cloud costs are out of alignment with business value. ... In highly regulated industries, compliance concerns are another driver. Regulations such as HIPAA, PCI DSS, and GDPR require your applications and the data they access to be secure and controlled. Organizations may find that managing sensitive data in the cloud introduces risk, especially when data residency, auditability, or encryption requirements evolve. Repatriating workloads can restore a sense of control and predictability—key traits valued by DBAs. ... Today’s computing needs demand an IT architecture that embraces the cloud, but also on-premises workloads, including the mainframe. Remember, data gravity attracts applications to where the data resides.
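
The cost argument is easy to make concrete with back-of-the-envelope numbers. Every figure in the Python sketch below is an illustrative assumption, not a benchmark or quoted price.

# Back-of-the-envelope break-even for a steady, data-heavy workload.
monthly_cloud = (
    2.50 * 730 * 4      # compute: assumed $/hour x 730 hours x 4 instances
    + 0.10 * 50_000     # storage: assumed $/GB-month x GB stored
    + 0.09 * 30_000     # egress: assumed $/GB x GB transferred out
)
onprem_capex = 400_000                        # hardware, amortized over 5 years
monthly_onprem = onprem_capex / 60 + 4_000    # plus power, space, and admin

print(f"cloud:   ${monthly_cloud:,.0f}/month")    # ~$15,000
print(f"on-prem: ${monthly_onprem:,.0f}/month")   # ~$10,667
# Steady utilization favors the fixed cost; spiky workloads flip the comparison.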


SaaS price hikes put CIOs’ budgets in a bind

Subscription prices from major SaaS vendors have risen sharply in recent months, putting many CIOs in a bind as they struggle to stay within their IT budgets. ... While inflation may have driven some cost increases in past months, rates have since stabilized, meaning there are other factors at play, Tucciarone says. Vendors are justifying subscription price hikes with frequent product repackaging schemes, consumption-based subscription models, regional pricing adjustments, and evolving generative AI offerings, he adds. “Vendors are rationalizing this as the cost of innovation and gen AI development,” he says. ... SaaS data platforms fall into a similar category as other mission-critical applications, Aymé adds, because the cost of moving an organization’s data can be prohibitively expensive, in addition to the price of a new SaaS tool. Kunal Agarwal, CEO and cofounder of data observability platform Unravel Data, also pointed to price increases for data-related SaaS tools. Data infrastructure costs, including cloud data warehouses, lakehouses, and analytics platforms, have risen 30% to 50% in the past year, he says. Several factors are driving cost increases, including the proliferation of computing-intensive gen AI workloads and a lack of visibility into organizational consumption, he adds. “Unlike traditional SaaS, where you’re paying for seats, these platforms bill based on consumption, making costs highly variable and difficult to predict,” Agarwal says.
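
A tiny numeric illustration of why consumption billing resists budgeting, with all prices and usage figures invented:

# Same team, two pricing models; all numbers are invented for illustration.
seats, seat_price = 40, 30                   # seat-based: 40 seats at $30/month
monthly_credits = [9e6, 14e6, 31e6, 12e6]    # a gen AI workload spikes in month 3
credit_price = 0.0001                        # assumed $ per compute credit

print("seat-based: ", [seats * seat_price] * 4)                             # flat: $1,200 every month
print("consumption:", [round(c * credit_price) for c in monthly_credits])   # swings from $900 to $3,100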


How to simplify enterprise cybersecurity through effective identity management

“It is challenging for a lot of organizations to get a complete picture of what their assets are and what controls apply to those assets,” Persaud says. He explains that Deloitte’s identity solution assisted the customer in connecting users with the assets they utilized. As they discovered these assets, they were able to fine-tune the security controls applied to each asset. “If the system is going to [process] financial data and other private information, we need to put the right controls in place on the identity side,” he says. “We’ve been able to bring those two pieces together by correlating discovery of assets with discovery of identity and lining that up with controls from the IT asset management system.” ... “If you think from a broader risk management perspective, this has been fundamental to our security model,” he says. The ability to simply track the locations of employees and assign risk accordingly is a significant advancement in risk monitoring for a company growing its international presence. The company watches for instances of impossible travel: if an employee signs in from one location and then from a distant location they could not possibly have reached in the intervening period, an alert is raised. Security analysts also use the software to scan for risky sign-ins. If a user logs in from an IP that has been blacklisted, an alert is raised. They have also come to rely increasingly on conditional access policies driven by monitoring of user behavior.
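
The impossible-travel check reduces to simple arithmetic: great-circle distance between consecutive sign-ins divided by elapsed time. A self-contained Python sketch, where the 900 km/h speed threshold is an illustrative assumption:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance; Earth radius ~6371 km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev, curr, max_kmh=900):
    dist = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    hours = (curr["t"] - prev["t"]) / 3600
    return hours > 0 and dist / hours > max_kmh

a = {"lat": 40.71, "lon": -74.01, "t": 0}      # sign-in: New York
b = {"lat": 51.51, "lon": -0.13, "t": 3600}    # sign-in: London, one hour later
print(impossible_travel(a, b))                 # True: ~5,570 km in one hour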


When an AI Agent Says ‘I Agree,’ Who’s Consenting?

The most autonomous agents can execute a chain of actions related to a transaction—such as comparing, booking, paying, forwarding the invoice. The broader the autonomy, the tighter the frame: precise contractual rules, allow-lists, budgets, a kill-switch, clear user notices, and, where required, electronic signatures. At this point the question stops being technical and becomes legal: under what framework does each agent-made click have effect, on whose authority, and with what safeguards? European law and national laws already offer solid anchors—agency and online contracting, signatures and secure payments, fair disclosure—now joined by the newer eIDAS 2 and the AI Act. ... Under European law, an AI agent has no will of its own. It is a means of expressing—or failing to express—someone’s will. Legally, someone always consents: the user (consumer) or a representative in the civil law sense. If an agent “accepts” an offer, we are back to agency: the act binds the principal only within the authority granted; beyond that, it is unenforceable. The agent is not a new subject of law. ... Who is on the hook if consent is tainted? First, the business that designs the onboarding. Europe’s Digital Services Act (DSA) bans deceptive interfaces (“dark patterns”) that materially impair a user’s ability to make a free, informed choice. A pushy interface can support a finding of civil fraud and a regulatory breach. Second, the principal is bound only within the mandate. 
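
The guardrails listed above (allow-lists, budgets, a kill-switch) map naturally onto code. A toy Python sketch of a transaction mandate, with every merchant name and limit invented:

class AgentMandate:
    # Allow-list, budget cap, and kill-switch, checked before every action.
    def __init__(self, allowed_merchants, budget):
        self.allowed = set(allowed_merchants)
        self.budget = budget
        self.killed = False     # the principal can revoke authority at any time

    def authorize(self, merchant, amount):
        if self.killed:
            return False, "kill-switch engaged"
        if merchant not in self.allowed:
            return False, "merchant not on allow-list"
        if amount > self.budget:
            return False, "exceeds remaining mandate budget"
        self.budget -= amount   # the act binds the principal only within the mandate
        return True, "authorized"

m = AgentMandate({"rail-operator.example", "hotel-chain.example"}, budget=500)
print(m.authorize("rail-operator.example", 120))    # (True, 'authorized')
print(m.authorize("crypto-exchange.example", 50))   # (False, 'merchant not on allow-list')
print(m.authorize("hotel-chain.example", 600))      # (False, 'exceeds remaining mandate budget')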


AI cybercrime agents will strike in 2026: Are defenders ready?

The prediction itself isn’t novel. What’s sobering is the math behind it—and the widening gap between how fast organisations can defend versus how quickly they’re being attacked. “The groups that convert intelligence into monetisation the fastest will set the tempo,” Rashish Pandey, VP of marketing & communications for APAC at Fortinet, told journalists at a media briefing earlier this week. “Throughput defines impact.” This isn’t about whether AI will be weaponised—that’s already happening. The urgent question is whether defenders can close what Fortinet calls the “tempo differential” before autonomous AI agents fundamentally alter the economics of cybercrime. ... The evolution extends beyond speed. Fortinet’s predictions highlight how attackers are weaponising generative AI for rapid data analysis—sifting through stolen information to identify the most valuable targets and optimal extortion strategies before defenders even detect the breach. This aligns with broader attack trends: ransomware operations increasingly blend system disruption with data theft and multi-stage extortion. Critical infrastructure sectors—healthcare, manufacturing, utilities—face heightened risk as operational technology systems become targets. ... “The ‘skills gap’ is less about scarcity and more about alignment—matching expertise to the reality of machine-speed, data-driven operations,” Pandey noted during the briefing.