Daily Tech Digest - November 26, 2025


Quote for the day:

“There is only one thing that makes a dream impossible to achieve: the fear of failure.” -- Paulo Coelho



7 signs your cybersecurity framework needs rebuilding

The biggest mistake, Pearlson says, is failing to recognize that the current plan is out of date or simply not working. Breaches happen, but that doesn’t always mean your cyber framework needs rebuilding. It does, however, indicate that the framework needs to be rethought and redesigned. ... “If your framework hasn’t kept pace with evolving threats or business needs, it’s time for a rebuild.” Cyber threats are always evolving, so staying proactive with regular reviews and fostering a culture of cybersecurity awareness will help catch issues before they become crises, Bucher says. ... “The cybersecurity landscape has evolved rapidly, especially with the rise of generative AI — your framework should reflect these shifts.” McLeod recommends a complete biannual framework review combined with a cursory review in the intervening years. “This helps to ensure that the framework stays aligned with evolving threats, business changes, and regulatory requirements.” Ideally, security leaders should always have their security framework in mind while maintaining a rough, running list of areas that could be improved, streamlined, or clarified, McLeod suggests. ... If an organization is stuck in a cycle of continually chasing alerts and incidents, as well as reporting events after the fact instead of performing predictive threat assessments, data analysis, and forward planning, it’s time for a change, Baiati advises.


Your Million-Dollar IIoT Strategy is Being Sabotaged by Hundred-Dollar Radios

The ambition is clear: to create hyper-efficient, data-driven operations in a market expected to exceed $1.6 billion by 2030. Yet, a fundamental paradox lies at the heart of this transformation. While we architect complex digital twins and deploy sophisticated AI models, the foundational tools entrusted to our most valuable asset—the frontline workforce—are often decades old, disconnected, and failing at an alarming rate. ... Data shows that one in four organizations loses more than an entire day of productivity every month simply dealing with broken technology. The primary culprits are as predictable as they are preventable: nearly half of workers cite battery problems (48.4%) and physical damage (46.8%) as the most common causes of failure. ... While conversations about this crisis often focus on pay and career paths, Relay’s research reveals a more immediate, tangible cause: the daily frustration of using broken tools. One in four frontline workers already feel their equipment is second-class compared to what their corporate counterparts use, and a staggering 43% say they’d be less likely to quit if guaranteed access to modern, automatically upgraded devices. ... Beyond reliability, it’s important to address the data black hole created by legacy, disconnected tools. Every day, frontline teams generate thousands of hours of spoken communication—a rich stream of unstructured data filled with maintenance alerts, safety concerns, and process bottlenecks.


Ask the Experts: Validate, don't just migrate

"Refactoring code is certainly a big undertaking. And if you start before you have good hygiene and governance, then you're just setting yourself up for failure. Similarly, if you haven't tagged properly, you have no way to attribute it to the project, and that becomes a cost problem." ... "If you do conclude [that migration is necessary], then you really must make sure the application is architected right. A lot of times, these workloads weren't designed for the cloud world, so you must adapt them and deliberately architect them for a cloud workload. "[To prepare a mission-critical application], it's key to look at the appropriateness, operating system [and] licenses. Sometimes, there are licenses tied to CPUs or other things that might introduce issues for you as well, so regression, latency and performance testing will be mandatory." ... "[IT leaders must also understand] the risks and costs associated with taking things into the cloud, and the pros and cons of that versus leaving it alone. Because old stuff, whether it was [procured] yesterday or five years ago, is inherently going to be vulnerable from a cybersecurity standpoint. Risk No. 2 is interoperability and compatibility, because old stuff doesn't talk to new stuff. And the third one is supportability, because it's hard to find old people to support old systems. ... "Sometimes, people have the false sense that if it's in cloud, then I'm all set. Everything is available, and everything is highly redundant. And it is, if you design [the application] with those things in mind."


Heineken CISO champions a new risk mindset to unlock innovation

I started as an auditor and later led a cyber defense team. It’s easy to fall into the black-and-white trap of being the function that always says “no” or speaks in cryptic tech jargon. It’s a scary world out there with so many attacks happening in every industry. The classical reaction of most security professionals is to tighten defences and impose even more rules. ... CISOs need to shift the mindset from pure compliance to asking: How does our cyber strategy support the business and its values? What calculated risks do we want the business to take? Where do we need their attention and help to embed security into the DNA of our people and our company? ... Be visible and approachable. Share the lessons that shaped you as a leader, what worked, what didn’t, and the principles that guide your decisions. I’m passionate about building diverse teams where everyone gets the same opportunities, no matter age, gender, or background. Diversity makes us stronger, and when there’s trust and openness, it sparks mentoring, coaching, and knowledge sharing. Make coaching and mentoring non-negotiable, and carve out time for it. It’s easy to push aside when you’re busy putting out security fires, but neglecting people’s growth and well-being is a big miss. Be authentic and vulnerable, walk the talk. Share the real stories, including failures and what made you stronger. Too often, people focus only on titles, certifications, and tech skills.


Data-Driven Enterprise: How Companies Turn Data into Strategic Advantage

A data-driven enterprise is not defined by the number of dashboards or analytics tools it owns. It’s defined by its ability to turn raw information into intelligent action. True data-driven organizations embed data thinking into every level of decision-making, from boardroom strategy to day-to-day operations. ... A modern data architecture is not a single platform, but an interconnected ecosystem designed to balance agility, governance, and scalability. ... As organizations mature in their data journey, they are moving away from rigid, centralized models that rely on a single source of truth. While centralization once ensured control, it often created bottlenecks, slowing down innovation and limiting agility. ... We are entering an era of data agents: self-learning systems capable of autonomously detecting anomalies, assessing risks, and forecasting trends in real time. These intelligent agents will soon become the invisible workforce of the enterprise, operating across domains: predicting supply chain disruptions, optimizing IT performance, personalizing customer journeys, and ensuring compliance through continuous monitoring. Their actions will reshape not only operations but also how organizations think about governance, accountability, and human oversight. For architects, this shift represents both a challenge and an extraordinary opportunity. The role is evolving from that of a data custodian focused on structure and governance to an ecosystem designer who engineers environments where data and AI can coexist, learn, and continuously create value.


10 benefits of an optimized third-party IT services portfolio

By entrusting day-to-day IT operations to trusted providers, organizations can reallocate internal resources toward higher-value initiatives such as digital transformation, automation, and product innovation. This accelerates adoption of emerging technologies, and allows internal teams to deepen business expertise, strengthen cross-functional collaboration, and focus on driving growth where it matters most. ... A well-structured third-party IT services portfolio can provide flexibility to scale up or down based on business needs. This is particularly valuable for CEOs who need to adapt to changing market conditions and seize growth opportunities. Securing talent in the market today is challenging and time consuming, so tapping into the talent pools of your strategic IT services partner base allows organizations to leverage their bench strength to fill immediate needs for talent. ... IT service providers continuously invest in advanced tech and talent development, enabling clients to benefit from cutting-edge innovations without bearing the full cost of adoption. As AI, automation, and cybersecurity evolve, providers offer the subject matter expertise and tools organizations need to stay ahead of disruption. ... With operational stability ensured through a balance of internal talent and trusted third parties, CIOs can dedicate more focus to long-term strategic initiatives that fuel growth and innovation. 


Modernizing SOCs with Agentic AI and Human-in-the-Loop: A Guide for CISOs

Traditional SOCs were not built for today’s speed and scale. Alert fatigue, manual investigations, disconnected tools, and talent shortages all contribute to the operational drag. Many security leaders are stuck in a reactive loop with no clear path to improvement. ... Legacy SOCs rely heavily on outdated technologies and rule-based detection, generating high volumes of alerts, many of which are false positives, leading to analyst burnout. Analysts are compelled to manually inspect and triage a deluge of meaningless signals, making the entire effort unsustainable. ... Before transformation can happen, one needs to understand where one stands. This can be accomplished with key benchmarking metrics for SOC performance, such as MTTD (Mean time to detect), MTTR (Mean time to respond), case closure rates, and tool effectiveness. ... Agentic AI represents the next evolution of AI-powered cybersecurity, which is modular, explainable, and autonomous. Through a coordinated system of AI agents, the Agentic SOC continuously responds and adapts to the evolving security environment in real time. It is designed to accelerate threat detection, investigation, and response by 10x, bringing speed, precision, and clarity to every function of SecOps. Agentic AI is the technology shift that changes the game. Unlike traditional automation, Agentic AI is decision-oriented, self-improving, and always operating with human-in-the-loop for oversight.
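The benchmarking step above can be made concrete with a minimal Python sketch. The incident records, field names, and timestamps here are invented for illustration: MTTD is computed as the average gap between when an incident occurred and when it was detected, and MTTR as the gap between detection and response.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the intrusion began, when it was
# detected, and when it was contained. Timestamps are ISO 8601 strings.
incidents = [
    {"occurred": "2025-11-01T02:00", "detected": "2025-11-01T06:30", "responded": "2025-11-01T09:00"},
    {"occurred": "2025-11-05T14:00", "detected": "2025-11-05T14:45", "responded": "2025-11-05T16:15"},
]

def _ts(s):
    return datetime.fromisoformat(s)

def mean_hours(records, start_key, end_key):
    """Average gap in hours between two timestamps across incidents."""
    gaps = [(_ts(r[end_key]) - _ts(r[start_key])).total_seconds() / 3600 for r in records]
    return mean(gaps)

mttd = mean_hours(incidents, "occurred", "detected")   # mean time to detect
mttr = mean_hours(incidents, "detected", "responded")  # mean time to respond
print(f"MTTD: {mttd:.2f} h, MTTR: {mttr:.2f} h")
```

Tracking these two numbers over time, alongside case closure rates, gives the before/after baseline the article says a transformation effort needs.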


3 SOC Challenges You Need to Solve Before 2026

2026 will mark a pivotal shift in cybersecurity. Threat actors are moving from experimenting with AI to making it their primary weapon, using it to scale attacks, automate reconnaissance, and craft hyper-realistic social engineering campaigns. ... Attackers have mastered evasion. ClickFix campaigns trick employees into pasting malicious PowerShell commands themselves. LOLBins are abused to hide malicious behavior. Multi-stage phishing hides behind QR codes, CAPTCHAs, rewritten URLs, and fake installers. Traditional sandboxes stall because they can't click "Next," solve challenges, or follow human-dependent flows. Result? Low detection rates for the exact threats exploding in 2025 and beyond. ... Thousands of daily alerts, mostly false positives. An average SOC handles 11,000 alerts daily, with only 19% worth investigating, according to the 2024 SANS SOC Survey. Tier 1 analysts drown in noise, escalating everything because they lack context. Every alert becomes a research project. Every investigation starts from zero. Burnout hits hard. Turnover doubles, morale tanks, and real threats hide in the backlog. By 2026, AI-orchestrated attacks will flood systems even faster, turning alert fatigue into a full-blown crisis. ... From a financial leadership perspective, security spending often feels like a black hole: money is spent, but risk reduction is hard to quantify. SOCs are challenged to justify investments, especially when security teams seem to be a cost center without clear profit or business-driving impact.
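The "escalating everything because they lack context" problem is often attacked by enriching alerts before they reach an analyst. The sketch below is a hypothetical illustration, not a prescribed method: the severity weights, field names, and enrichment sources are invented, but it shows how a small amount of context (asset criticality, threat-intel matches) turns a flat alert feed into a ranked queue.

```python
# Illustrative severity weights; a real SOC would tune these.
SEVERITY = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage_score(alert, crown_jewels, intel_iocs):
    """Score an alert using severity plus two enrichment signals."""
    score = SEVERITY[alert["severity"]]
    if alert["asset"] in crown_jewels:
        score += 5              # hits a business-critical asset
    if alert.get("ioc") in intel_iocs:
        score += 5              # matches a known-bad indicator
    return score

alerts = [
    {"id": 1, "severity": "low", "asset": "kiosk-3", "ioc": None},
    {"id": 2, "severity": "medium", "asset": "pay-db", "ioc": "evil.example"},
]
crown_jewels = {"pay-db"}           # assumed asset inventory
intel_iocs = {"evil.example"}       # assumed threat-intel feed

queue = sorted(alerts, key=lambda a: triage_score(a, crown_jewels, intel_iocs), reverse=True)
print([a["id"] for a in queue])  # [2, 1]: the enriched alert jumps the queue
```

Even this crude ranking means the Tier 1 analyst starts with the medium-severity alert on a payment database with a known-bad indicator, rather than working the feed in arrival order.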


Digital surveillance tools are reshaping workplace privacy, GAO warns

Privacy concerns intensify when surveillance data feeds into automated systems that evaluate performance, set productivity metrics, or flag workers for potential discipline. GAO found that employers often rely on flawed benchmarks and incomplete measurements. Tools rarely capture the full range of work performed, such as research, mentoring, reading, or off-screen tasks, and frequently misinterpret normal behavior as inefficiency. When employers trust these tools “at face value,” the report notes, workers can be unfairly labeled unproductive or noncompliant despite doing their jobs well. ... Meanwhile, past federal efforts to issue guidance on reducing surveillance-related harms, covering transparency practices, human oversight, and safeguards against discriminatory impacts, have been rescinded or paused since January by the Trump administration as agencies reassess their policy priorities. GAO also notes that existing federal privacy protections are narrow. The Electronic Communications Privacy Act restricts covert interception of communications, but it does not cover most forms of digital monitoring, such as keystroke logging, location tracking, biometric data collection, or algorithmic productivity scoring. ... The report concludes that while digital surveillance can improve safety, efficiency, and health monitoring, its benefits depend wholly on how employers use it.


How to avoid becoming an “AI-first” company with zero real AI usage

A competitor declared they’re going AI-first. Another publishes a case study about replacing support with LLMs. And a third shares a graph showing productivity gains. Within days, boardrooms everywhere start echoing the same message: “We should be doing this. Everyone else already is, and we can’t fall behind.” So the work begins. Then come the task forces, the town halls, the strategy docs and the targets. Teams are asked to contribute initiatives. But if you’ve been through this before, you know there’s often a difference between what companies announce and what they actually do. Because press releases don’t mention the pilots that stall, or the teams that quietly revert to the old way, or even the tools that get used once and abandoned. ... By then, your company’s AI-first mandate will have set into motion departmental initiatives, vendor contracts and maybe even some new hires with “AI” in their titles. The dashboards will be green, and the board deck will have a whole slide on AI. But in the quiet spaces where your actual work happens, what will have meaningfully changed? Maybe you'll be like the teams that never stopped their quiet experiments. ... That’s the invisible architecture of genuine progress: patient, and completely uninterested in performance. It doesn't make for great LinkedIn posts, and it resists grand narratives. But it transforms companies in ways that truly last. Every organization is standing at the same crossroads right now: Look like you’re innovating, or create a culture that fosters real innovation.

Daily Tech Digest - November 25, 2025


Quote for the day:

“Being kind to those who hate you isn’t weakness, it’s a different level of strength.” -- Dr. Jimson S


Invisible battles: How cybersecurity work erodes mental health in silence and what we can do about it

You’re not just solving puzzles. You’re responsible for keeping a digital fortress from collapsing under relentless siege. That kind of pressure reshapes your brain and not in a good way. ... One missed patch. One misconfigured access role. One phishing click. That’s all it takes to trigger a million-dollar disaster or worse: erode trust. You carry that weight. When something goes wrong, the guilt cuts deep. ... The business sees you as the blocker. The board sees you after the breach. And if you’re the lone cyber lead in an SME? You’re on an island, with no lifeboat. No peer to talk to, no outlet to decompress. Just mounting expectations and a growing feeling that nobody really gets what you do. ... The hero narrative still reigns; if you’re not burning out, you’re not trying hard enough. Speak up about being overwhelmed? You risk looking weak. Or worse, replaceable. So you hide it. You overcompensate. And eventually, you break, quietly. ... They expect you to know it all, yesterday. Certifications become survival badges. And with the wrong culture, they become the only form of recognition you get. Systemic chaos builds personal crisis. The toll isn’t abstract. It’s physical, emotional and measurable. ... Cybersecurity professionals are fighting two battles. One is against adversaries. The other is against a system that expects perfection, rewards self-sacrifice and punishes vulnerability.


How to Build Engineering Teams That Drive Outcomes, not Outputs

Aligning teams around clear outcomes reframes what success looks like. They go from saying “this is what we shipped” to “this is what changed” as their role evolves from delivering features to meaningful solutions. ... One way is by changing how teams refer to themselves. This might sound overly simple, but a shift in team name acts as a constant reminder that their impact is tethered to customer and business outcomes. ... Leaders should treat outcome-based teams as dynamic investments. Rigid predictions are the enemy of innovation. Instead, teams should regularly reevaluate goals, empower adaptation, and allow KPIs to evolve organically from real-world learnings. The desired outcomes don’t necessarily change, but how they are achieved can be fluid. This is how team priorities are defined, new business challenges are solved and evolving customer expectations are met. ... Breaking down engineering silos means reappraising what ownership looks like. If your team’s focus has evolved from “bug fixing” to “continually excellent user experience,” then success is no longer the domain of engineers alone. It’s a collective effort across product, design, and tech — working together as one team. ... Moving to outcome-based teams goes beyond a structural change — it’s a mindset shift. By challenging teams to focus on delivering impact, to stay aligned with evolving needs, and to collaborate more effectively, organizations can build durable, customer-centric teams that can grow, adapt, and never sit still.


Guardrails and governance: A CIO’s blueprint for responsible generative and agentic AI

Many in the industry are confusing the function of guardrails and thinking they’re a flimsy substitute for true oversight. This is a critical misconception that must be addressed. Guardrails and governance are not interchangeable; they are two essential parts of a single system of control. ... AI governance is the blueprint and the organization. It’s the framework of policies, roles, committees and processes that define what is acceptable, who is accountable and how you will monitor and audit all AI systems across the enterprise. Governance is the strategy and the chain of command. AI guardrails are the physical controls and the rules in the code. These are the technical mechanisms embedded directly into the AI system’s architecture, APIs and interfaces to enforce the governance policies in real-time. Guardrails are the enforcement layer. ... While we must distinguish between governance and guardrails, the reality of agentic AI has revealed a critical flaw: current soft guardrails are failing catastrophically. These controls are often probabilistic, pattern-based or rely on LLM self-evaluation, which is easily bypassed by an agent’s core capabilities: autonomy and composability. ... Generative AI creates; agentic AI acts. When an autonomous AI agent is making decisions, executing transactions or interacting with customers, the stakes escalate dramatically. Regulators, auditors and even internal stakeholders will demand to know why an agent took a particular action.


Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. ... Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you. This might include: Analyzing your face through a video selfie or photo; Examining your voice; Looking at your online behavior—what you watch, what you like, what you post; Checking your existing profile data. Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, an algorithm analyzes your face and spits out an estimated age range. Sounds convenient, right? ... Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.


Aircraft cabin IoT leaves vendor and passenger data exposed

The cabin network works by having devices send updates to a central system, and other devices are allowed to receive only certain updates. In this system an authorized subscriber is any approved participant on the cabin network, usually a device or a software component that is allowed to receive a certain type of data. The privacy issue begins after the data arrives. Information is protected while it travels, but once it reaches a device that is allowed to read it, that device can view the entire message, including details it does not need for its task. The system controls who receives a message, but it does not control how much those devices can learn from it. The study finds that this creates the biggest risk inside the cabin. Trusted devices have valid credentials and follow all the rules, and they can examine messages closely enough to infer raw sensor readings that were never meant to be exposed. This internal risk matters because it influences how different suppliers share data and trust each other. Someone in the cabin might also try to capture wireless traffic, but the protections on the wireless link prevent them from reading the data as it travels.  ... The researchers found that these raw motion readings can carry extra clues such as small shifts linked to breathing, slight tremors or hints about a person’s body shape. Details like these show why movement data needs protection before it is shared across the cabin network.
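A common mitigation for this kind of over-sharing is to minimize what the publisher emits in the first place. The sketch below is a toy illustration with hypothetical field names and topics: the publisher derives only the coarse aggregate each subscriber class needs, so raw motion samples never leave the publishing device and trusted endpoints cannot mine them for breathing or body-shape cues.

```python
# Hypothetical cabin update: raw accelerometer samples plus seat state.
RAW_UPDATE = {
    "seat": "14C",
    "accel_samples": [0.012, 0.015, 0.011, 0.014],  # raw motion readings
    "occupied": True,
}

# Per-topic allowlists: which derived fields each subscriber type may see.
TOPIC_FIELDS = {
    "seatbelt_monitor": {"seat", "occupied"},
    "maintenance": {"seat", "vibration_rms"},
}

def minimize(update, topic):
    """Publish only the derived fields a topic's subscribers are allowed."""
    derived = dict(update)                      # leave the original intact
    samples = derived.pop("accel_samples")      # raw samples never published
    # Replace raw samples with a coarse aggregate (root mean square).
    derived["vibration_rms"] = (sum(s * s for s in samples) / len(samples)) ** 0.5
    allowed = TOPIC_FIELDS[topic]
    return {k: v for k, v in derived.items() if k in allowed}

print(minimize(RAW_UPDATE, "seatbelt_monitor"))  # {'seat': '14C', 'occupied': True}
```

The message-level access control the study describes decides who receives an update; a filter like this decides how much any recipient can learn from it, which is exactly the gap the researchers identify.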


Build Resilient cloudops That Shrug Off 99.95% Outages

If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration. We encode risk controls in Terraform so they’re enforced before a resource even exists. Tagging, encryption, backup retention, network egress—these are all policy. We don’t rely on code reviews to catch missing encryption on a bucket; the pipeline fails the plan. That’s how cloudops scales across teams without nag threads. ... Observability isn’t a pile of graphs; it’s a way to answer questions. We want traceability from request to database and back, structured logs that actually structure, and metrics that reflect user experience. ... Most teams benefit from a small set of “stop asking, here it is” dashboards: request volume and latency by endpoint, error rate by version, resource saturation by service, and database health with connection pools and slow query counts. We also wire deploy markers into traces and logs, so “What changed?” doesn’t require Slack archaeology. ... We don’t win medals for shipping fast; we win trust for shipping safely. Progressive delivery lets us test the actual change, in production, on a small slice before we blast everyone. We like canaries and feature flags together: canary catches systemic issues; flags let us disable risky code paths within a version. Every deployment should come with a baked-in rollback that doesn’t require a council meeting. ... Reliability with no cost controls is just a nicer way to miss your margin. We give cost the same respect as latency: we define a monthly budget per product and a change budget per release.
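The "pipeline fails the plan" guardrail can be sketched as a small policy check over Terraform's JSON plan output (`terraform show -json`). This is only an illustrative sketch; real setups typically use purpose-built policy engines such as OPA or Sentinel, and the required-tag list and plan fragment below are invented assumptions.

```python
REQUIRED_TAGS = {"owner", "cost-center"}  # assumed org tagging convention

def violations(plan):
    """Scan a Terraform JSON plan for S3 buckets missing required tags."""
    out = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_s3_bucket":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        missing = REQUIRED_TAGS - set(after.get("tags") or {})
        if missing:
            out.append(f"{rc.get('address', '?')}: missing tags {sorted(missing)}")
    return out

# Hypothetical plan fragment, shaped like Terraform's JSON plan output.
plan = {"resource_changes": [
    {"address": "aws_s3_bucket.logs", "type": "aws_s3_bucket",
     "change": {"after": {"tags": {"owner": "data-eng"}}}},
]}

problems = violations(plan)
print(problems)
# In CI, a wrapper would exit non-zero here, failing the plan before apply.
```

The point of the pattern is exactly what the excerpt says: the check runs before the resource exists, so no reviewer has to remember to look for a missing tag or encryption setting.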


Anatomy of an AI agent knowledge base

“An internal knowledge base is essential for coordinating multiple AI agents,” says James Urquhart, field CTO and technology evangelist at Kamiwaza AI, maker of a distributed AI orchestration platform. “When agents specialize in different roles, they must share context, memory, and observations to act effectively as a collective.” Designed well, a knowledge base ensures agents have access to up-to-date and comprehensive organizational knowledge. Ultimately, this improves the consistency, accuracy, responsiveness, and governance of agentic responses and actions. ... Most knowledge bases include procedures and policies for agents to follow, such as style guides, coding conventions, and compliance rules. They might also document escalation paths, defining how to respond to user inquiries. ... Lastly, persistent memory helps agents retain context across sessions. Access to past prompts, customer interactions, or support tickets helps continuity and improves decision-making, because it enables agents to recognize patterns. But importantly, most experts agree you should make explicit connections between data, instead of just storing raw data chunks. ... At the core of an agentic knowledge base are two main components: an object store and a vector database for embeddings. Whereas a vector database is essential for semantic search, an object store checks multiple boxes for AI workloads: massive scalability without performance bottlenecks, rich metadata for each object, and immutability for auditability and compliance.
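The object-store-plus-vector-index pairing can be illustrated with a toy retrieval step. The documents and three-dimensional "embeddings" below are invented purely to show the mechanics; a real system would use model-generated embeddings and a proper vector database rather than in-memory dicts.

```python
from math import sqrt

# Toy knowledge base: the object store holds the chunks themselves,
# the vector index holds one (made-up) embedding per chunk.
object_store = {
    "doc-1": "Escalation path for P1 incidents",
    "doc-2": "Coding conventions for Python services",
    "doc-3": "Style guide for customer emails",
}
vector_index = {
    "doc-1": [0.9, 0.1, 0.0],
    "doc-2": [0.1, 0.9, 0.1],
    "doc-3": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, k=1):
    """Return the k chunks whose embeddings are closest to the query."""
    ranked = sorted(vector_index, key=lambda d: cosine(query_vec, vector_index[d]), reverse=True)
    return [(d, object_store[d]) for d in ranked[:k]]

print(retrieve([0.8, 0.2, 0.0]))  # doc-1 ranks first
```

The split mirrors the two components the excerpt names: semantic search happens against the vector index, while the object store keeps the authoritative, auditable copy of each chunk plus its metadata.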


Trust, Governance, and AI Decision Making

Issues like bias, privacy, and explainability aren’t just technical problems requiring technical solutions. They have to be understood by everyone in the business. That said, the ideal governance structure depends on each company’s business model. ... The word ethics can feel very far from a developer’s everyday world. It can feel like a philosophical thing, whereas they need to write code and build solutions. Also, many of these issues weren’t part of their academic training, so we have to help them understand. ... Kahneman’s idea is that humans use two different cognitive modes when we make decisions. For everyday decisions and small, familiar problems—like riding a bicycle—we use what he called System One, or Thinking Fast, which is automatic and almost unconscious. In System Two, or Thinking Slow, we have this other way of making decisions that requires a lot of time and attention, either because we are confronted with a problem that’s not familiar to us or because we don’t want to make a mistake. ... We compare Thinking Fast to the data-driven machine learning approach—just give me a lot of data, and I will give you the solution without showing you how I got there or even being able to explain it. Thinking Slow, on the other hand, corresponds to a more traditional, rule-based approach to solving problems. ... It’s similar to what we see with agentic AI systems—the focus is not on any one solver, agent, or tool but rather in the governance of the whole system. 


The Global Race for Digital Trust: Where Does India Stand?

In the modern hyperconnected world, trust has replaced convenience as the true currency of digital engagement. Every transaction, whether on a banking app or an e-governance portal, is based on an unspoken belief: systems are secure and intentions are transparent. Nevertheless, this belief remains under constant pressure. ... India’s digital trust framework is further significantly reinforced with the inauguration of the National Centre for Digital Trust (NCDT) in July 2025. Established by the Ministry of Electronics and Information Technology (MeitY), this Centre serves as the national hub for digital assurance. It unites key elements, including public key infrastructure, authentication as well as post-quantum cryptography under a unified mission. This, in turn, signals the country’s commitment to treating trust as a public good. ... For firms and government agencies alike, compliance signals maturity. It reassures citizens that the systems they rely on, from hospital monitoring networks to smart city command centres, are governed by clear, ethical and verifiable standards. It also encourages global partners that India’s digital infrastructure can operate efficiently throughout jurisdictions. In the long run, this “compliance premium” could well define which countries earn the confidence to lead the global digital economy. ... The world will measure digital strength not by how fast technology advances, but by how deeply trust is embedded within it.


The privacy paradox is turning into a data centre weak point

While consumers’ failure to adopt basic cyber hygiene might seem like a personal problem, it has wide-reaching implications for infrastructure providers. As cloud services, hosted applications and mobile endpoints interact with backend systems, poor user behaviour becomes an attack vector. Insecure credentials, password reuse and unsecured mobile devices all provide potential entry points, especially in hybrid or multi-tenant environments. ... Putting data centres on an equal footing with water, energy and emergency services systems will mean the data centre sector can now expect greater Government support in anticipating and recording critical incidents. This designation reflects their strategic importance but also brings greater regulatory scrutiny. It also comes against the backdrop of the UK Government’s Cyber Security Breaches Survey in 2024, which reported that 50% of businesses experienced some form of cyber breach in the past 12 months, with phishing accounting for 84% of incidents. This underscores how easily compromised direct or indirect endpoints can threaten core infrastructure. ... The privacy paradox may begin at the consumer level, but its consequences are absorbed by the entire digital ecosystem. Recognising this is the first step. Acting on it through better design, stronger defaults, and user-focused education allows data centre operators to safeguard not just their infrastructure, but the trust that underpins it.

Daily Tech Digest - November 24, 2025


Quote for the day:

"Give whatever you are doing and whoever you are with the gift of your attention." -- Jim Rohn



The incredible shrinking shelf life of IT skills

IT workers have seen the half-life of IT skills compressed even more dramatically, with researchers saying some skills today go from hot to not in less than two years — sometimes mere months. It’s putting a lot of pressure on IT teams. As Anand says, “Technology is developing faster than tech workers can upskill.” Ever-quickening churn in the IT skills market is upending more than individuals’ career plans, too. It is impacting the entire IT function and the organization as a whole. That in turn is forcing CIOs, HR leaders, and other executives to devise strategies to create an environment where workers are capable of reinvention at a rapid clip. ... CIOs and IT advisers also say the shortening shelf life of skills is not experienced universally, as some organizations still have a lot of legacy tech in place. Data from the 2025 Tech Salary Report from Dice, a job-searching platform for tech professionals, hints at these dual realities. ... “Certain skills will come up very quickly and then go away very quickly, so now that person has to be seen as someone who can build up skills quickly,” he adds. Info-Tech Research Group’s Leier-Murray says CIOs must free up time for their staffers to upskill and provide more coaching to their team members to ensure they keep pace with the work demands of a modern IT shop. She and others advise CIOs to hire workers with or cultivate in existing staffers a growth mindset. ... “The way that everybody is working is continuously being redefined,” Jones says.


Are Organizations Overinvesting in an AI Bubble? - Part 1

Demand for generative AI reasoning is driving investment, said Arun Chandrasekaran, distinguished vice president analyst at Gartner. "These partnerships signal the model providers' insatiable need for compute to satisfy the enormous growth and usage, mainly in the consumer AI space." When asked to confirm an AI bubble, Chandrasekaran said, "It is hard to predict if there is a bubble and when it will burst. But we'll likely see a correction and shake-out among players that can't deliver value to users and build profitable growth strategies." Continuous investment of large sums at high valuations for AI companies "is unsustainable," Umesh Padval, investor, entrepreneur and former managing director of Thomvest Ventures, told Information Security Media Group. ... "Enterprises are excited about gen AI's speed of delivery. However, the punitively high cost of maintaining, fixing or replacing AI-generated artifacts such as code, content and design can erode gen AI's promised return on investments," Chandrasekaran said. "By establishing clear standards for reviewing and documenting AI-generated assets and tracking technical debt metrics in IT dashboards, enterprises can take proactive steps to prevent costly disruptions." Chandrasekaran warns about overinvestment without determining the "value path." He said organizations should realize that the expected payoff, including ROI, is much more long term, which can lead to risks.


The CISO’s greatest risk? Department leaders quitting

The trend of talented and dedicated functional security leaders quietly eyeing the exit is not an anomaly — it’s a predictable outcome of systemic issues that have been building within the profession for years, says Brandyn Fisher, V-CISO capability lead at Centric Consulting. “As CISOs, we are seeing our most critical layer of management, our directors and senior managers, burn out,” Fisher says. “This isn’t happening in a vacuum. It’s the result of a dangerous convergence of unrealistic expectations, resource starvation, and a fundamentally broken career model.” Security leaders operate on an unsustainable premise, Fisher says. “We expect our leaders to be right every single time, while an attacker only needs to be right once. This creates a culture of hyper-vigilance that is simply not sustainable 24/7/365.” ... Another issue is tool creep: 40-plus security tools managing the same alerts, with poor integrations between them, Malik says. There is also “role overload and context switching” on projects, as well as relentless audit cycles, reviews, and meetings, which Malik says leaves little time for career development. “Many organizations have a CISO plus a flat layer of ‘heads of X’” who don’t always have a clear path to moving into higher levels, she says. And CISOs are constantly asking their leaders to do more with less, Fisher adds. “As cybersecurity is still widely viewed as a cost center rather than a business enabler, budgets are the first to be slashed while the threat landscape grows exponentially,” he says.


Preparing for the Next Wave of AI: Agentic Workflows

Agentic AI blends intelligence and automation into a single operational layer that can manage outcomes rather than just execute steps. Instead of relying on humans to define every possible rule, agentic systems understand goals and context. They can reason through multiple inputs, choose the best path forward, and adapt as conditions change. ... Optimizing for agentic AI isn’t just about adding smarter tools; it begins with re-architecting the environment those tools inhabit. Organizations that thrive will have integrated, high-quality data foundations and unified workflows. Fragmented systems or poor data hygiene can cripple an AI agent’s ability to reason effectively. For many enterprises, this means modernizing their systems of record – CRMs, ERPs, and HR platforms – that make up digital operations. Equally important is the need for well-defined guardrails. Businesses must define what good decisions look like, the limits of an agent’s autonomy, and the ethical or compliance constraints that must be followed. This balance between freedom and control is critical. Too many restrictions, and the AI can’t act usefully, but too few and it risks acting outside the organization’s intentions. ... On the flip side, unclear use cases/business value was the top answer for other respondents. While both groups cited risk and compliance concerns as a top challenge, it’s clear there’s a divide on where employees fit into the agentic AI puzzle.
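
To make "limits of an agent's autonomy" concrete, here is a minimal, hypothetical sketch of a guardrail layer that reviews each proposed action before execution. The tool names, spending limit and PII rule are invented for illustration, not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str          # which tool the agent wants to invoke
    cost_usd: float    # estimated cost of the action
    touches_pii: bool  # whether the action reads or writes personal data

class GuardrailPolicy:
    """Encodes the limits of an agent's autonomy as explicit, auditable rules."""
    ALLOWED_TOOLS = {"crm_lookup", "create_ticket", "send_draft_email"}
    MAX_COST_USD = 50.0

    def review(self, action: AgentAction) -> tuple[bool, str]:
        if action.tool not in self.ALLOWED_TOOLS:
            return False, f"tool '{action.tool}' is outside the agent's mandate"
        if action.cost_usd > self.MAX_COST_USD:
            return False, "cost exceeds autonomous spending limit; escalate to a human"
        if action.touches_pii:
            return False, "PII access requires human approval (compliance constraint)"
        return True, "approved"

policy = GuardrailPolicy()
ok, reason = policy.review(AgentAction("create_ticket", 2.0, False))
blocked, why = policy.review(AgentAction("wire_transfer", 900.0, False))
```

The point of the sketch is the shape, not the specific rules: autonomy limits live in one reviewable place, so tightening or loosening the balance between freedom and control is a policy change, not a code rewrite.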


The privacy tension driving the medical data shift nobody wants to talk about

Current frameworks lock data into silos. These isolated systems make it difficult to combine information across hospitals, labs, and research groups. This limits what can be learned from real-world evidence, which is especially important for improving treatments, studying outcomes, and reducing costs. ... Outdated rules can worsen inequities by limiting access to new tools and restricting research to well-funded institutions. This contradicts the principle of justice, which is meant to promote fairness and access. The authors emphasize that privacy still matters. They write that, “privacy protections exist for many reasons, addressing risks to individual patients as well as the public at large.” But they argue that privacy cannot stand alone as the primary value in a system where data powers both scientific progress and new forms of risk. ... The most significant proposal in the research is a gradual move toward an open data model. In this approach, healthcare data would be treated as a shared resource rather than locked property. Access would come with responsibilities and consequences for misuse instead of blanket restrictions on legitimate use. ... A key argument is that penalties should target bad behavior rather than access. Current rules assume data must be kept behind walls to prevent harm, even though perfect anonymization is no longer possible. The researchers argue that the system should focus on preventing malicious reidentification and unethical use. This approach, they say, is more realistic and gives space for innovation. 


The expanding role of the CISO

New research from HackerOne has revealed that 84 per cent of CISOs are now responsible for AI security, while 82 per cent are charged with protecting data privacy. The result is an already burdened CISO being asked to monitor and secure technologies that are evolving at breakneck speed. New technology is constantly being implemented across businesses, and when complex technologies such as AI are adopted by 78 per cent of organisations – a 23 per cent increase from the previous year – the scale and intensity of the task become clear. This rapid adoption, often driven by different parts of the business eager for a competitive edge, creates entirely new attack surfaces which must remain under constant surveillance to ensure no security risks go unnoticed. For a CISO, this task can seem insurmountable – even the most skilled internal teams will struggle if they lack the specialised knowledge. Faced with a variety of unique vulnerabilities, CISOs will need the right tools and support in order to keep the business safe. ... Unfortunately, the lack of talent and resources serves as a significant barrier to adopting this full-scale offensive security programme, with 39 per cent of CISOs highlighting this lack of skilled personnel as a major challenge. On a global scale, the cybersecurity industry urgently needs around four million more professionals to bridge the current gap in key roles. However, taking a crowdsourced security approach offers a powerful, scalable solution for businesses to tackle this problem. 


A Day in the Life of a Connected Patient: How Real-Time Data Is Powering Smarter Care

Health data arrives in bursts and fragments. It comes from different tools, moves at different speeds, and rarely follows the same format. Making sense of it all takes more than storage. It takes design that expects disorder—and knows how to organize it. Data pipelines help bridge this complexity. They link together systems like EHRs, insurance claims, wearables, and diagnostic tools—so that the information can move securely and consistently. Standards like HL7 and FHIR help make these handoffs work, even across aging platforms. As the data moves, it’s shaped into something usable. Behind the scenes, it’s cleaned, structured, and enriched before reaching analytics teams or clinical systems. The work happens in moments, but its impact is lasting. ... Discharge no longer means disconnection. For patients managing chronic conditions, remote care programs have changed what happens after they leave the hospital. One such initiative pulled continuous data from wearables, implants, and diagnostic devices into a secure cloud system. Care teams could monitor trends, identify risks early, and step in before issues got worse. In patients with chronic conditions, timely support made a measurable difference. Readmissions dropped by almost 40%. Simple check-ins and reminders helped people stay on course—not through pressure, but with steady, well-timed guidance. At scale, the results were even clearer. For every 10,000 patients, the program saved more than USD 1 million a year. 
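
As a small illustration of the "cleaned, structured, and enriched" step, the sketch below flattens a FHIR-style Observation resource into a record an analytics system could consume. The field paths follow the public FHIR Observation structure; the output record layout is our own invention, not a standard:

```python
import json

raw = json.dumps({
    "resourceType": "Observation",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                         "display": "Heart rate"}]},
    "valueQuantity": {"value": 74, "unit": "beats/minute"},
    "effectiveDateTime": "2025-11-26T08:30:00Z",
})

def normalize_observation(payload: str) -> dict:
    """Flatten one FHIR-style Observation into an analytics-friendly record."""
    obs = json.loads(payload)
    if obs.get("resourceType") != "Observation":
        raise ValueError("expected a FHIR Observation resource")
    coding = obs["code"]["coding"][0]
    qty = obs["valueQuantity"]
    return {
        "code": coding["code"],          # LOINC code identifying the measurement
        "display": coding["display"],
        "value": qty["value"],
        "unit": qty["unit"],
        "timestamp": obs["effectiveDateTime"],
    }

record = normalize_observation(raw)
```

A real pipeline would add validation, terminology mapping and provenance tracking, but the handoff pattern — standard resource in, consistent flat record out — is the core of what makes data from EHRs, wearables and claims systems combinable.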


Micro-Frontends: A Sociotechnical Journey Toward a Modern Frontend Architecture

As organisations demand faster delivery, greater autonomy, and continuous modernisation, our frontend architectures must evolve in step with our teams. The distributed frontend era is here, but it’s not defined by new frameworks or fancy tooling. It’s defined by the way we align people, processes, and architecture around a shared goal: delivering value faster without losing control. ... Micro-frontends are often introduced as a technical pattern - a way to break a large frontend into smaller, independently deployable pieces. But that framing misses the point. Micro-frontends are not a new stack; they are a new way of structuring work. They represent a sociotechnical shift - one that mirrors Conway’s Law, which tells us that system design reflects communication structures. When teams are forced to coordinate through a single release train, decision-making slows. When every change requires syncing across multiple domains, creativity fades. The result is not just technical debt but organisational inertia. Micro-frontends reverse that dynamic. They allow teams to own slices of the product end-to-end - domain, design, delivery - without waiting for centralised approval. ... But micro-frontends are not a silver bullet. For small teams or products with limited complexity, the overhead might outweigh the benefits. The goal is not to adopt a pattern for its own sake but to solve concrete problems: delivery bottlenecks, scaling limits, and the inability to modernise safely.


Software Testing in the AI Era - Evolving Beyond the Pyramid

The past few years have seen a radical departure from the previous approach with the shift to LLM-based tools. Ideally, each approach to automation should not only meet code coverage goals but also, as a practical matter, integrate seamlessly with industrial-scale continuous deployment workflows. The latter wasn’t really the case until AI came along. ... Whatever the underlying strategy, search algorithms contain a key component - the “fitness function,” i.e., the goal criteria used to guide the algorithm towards better solutions. Code coverage, though simplistic, is an often-used metric to gauge how good a software testing suite is, and is therefore a commonly used fitness function when generating tests using search algorithmic approaches. In practical applications of this technique, several open source tools have been developed, with EvoSuite being a popular option that uses a genetic-algorithm approach to generate unit tests for Java code. ... Test generation can be considered a subfield of LLM-based software engineering (LLMSE), with the key components of an LLM-based test generation strategy including inputs such as the code under test, prompt generators, test validation, and prompt refiners to tune and refine the generated tests in a feedback loop. Compared to search-based strategies, this technique is still in its infancy, but it has gained traction because prompt refinement over model output yields human-readable tests that require little post-processing.
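
The fitness-function idea is easy to see in miniature. The toy sketch below (all names invented; far simpler than EvoSuite's actual instrumentation) scores a candidate test suite by the fraction of branches it covers in a tiny program with two decision points, which is exactly the signal a genetic algorithm would use to select and mutate suites:

```python
import random

BRANCHES = 4  # the toy program under test has four branches

def branches_hit(x: int) -> set[int]:
    """Which branches of the toy program does input x exercise?"""
    hit = {0} if x < 0 else {1}        # sign branch
    hit |= {2} if x % 2 == 0 else {3}  # parity branch
    return hit

def fitness(suite: list[int]) -> float:
    """Fraction of branches the suite covers -- higher is fitter."""
    covered = set()
    for x in suite:
        covered |= branches_hit(x)
    return len(covered) / BRANCHES

# A trivial "generation 0": random suites, keep the fittest one.
random.seed(0)
population = [[random.randint(-10, 10) for _ in range(3)] for _ in range(20)]
best = max(population, key=fitness)
```

A full genetic algorithm would then crossover and mutate the fittest suites over many generations; the fitness function is the only part that encodes what "better tests" means.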


The rise (and fall?) of shadow AI

“The security surface extends far beyond traditional concerns. For AI systems, the model and data become the primary attack vectors,” said Meerah Rajavel, chief information officer at Palo Alto Networks, on the company’s own blog. “While frontier models from providers like Google and OpenAI carry lower risk due to extensive testing, most AI applications incorporate multiple specialised models.” ... “Organisations must scan models for vulnerabilities, manage permissions appropriately and protect data access. Runtime security becomes critical because prompts function like code and the LLM acts as an operating system. That has to be protected like a software supply chain,” said Rajavel. ... Shadow AI detection and control is a growing marketplace. Other vendors that operate here include Netskope with its Netskope One platform, which includes AI security capabilities to detect shadow AI usage. Not exactly a like-for-like competitor but still in the same core operational arena, the SaaS management toolset from Zylo is built to discover and manage all their SaaS applications, including unauthorised AI tools, by centralising data, risk scores and usage. “To address the risk [of shadow AI], CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” said Arun Chandrasekaran at magical analyst house Gartner.

Daily Tech Digest - November 23, 2025


Quote for the day:

“Let no feeling of discouragement prey upon you, and in the end you are sure to succeed.” -- Abraham Lincoln



Lean4: How the theorem prover works and why it's the new competitive edge in AI

Lean4 is both a programming language and a proof assistant designed for formal verification. Every theorem or program written in Lean4 must pass a strict type-checking by Lean’s trusted kernel, yielding a binary verdict: A statement either checks out as correct or it doesn’t. This all-or-nothing verification means there’s no room for ambiguity – a property or result is proven true or it fails. ... Lean4’s value isn’t confined to pure reasoning tasks; it’s also poised to revolutionize software security and reliability in the age of AI. Bugs and vulnerabilities in software are essentially small logic errors that slip through human testing. What if AI-assisted programming could eliminate those by using Lean4 to verify code correctness? ... Beyond software bugs, Lean4 can encode and verify domain-specific safety rules. For instance, consider AI systems that design engineering projects. A LessWrong forum discussion on AI safety gives the example of bridge design: An AI could propose a bridge structure, and formal systems like Lean can certify that the design obeys all the mechanical engineering safety criteria. ... For enterprise decision-makers, the message is clear: It’s time to watch this space closely. Incorporating formal verification via Lean4 could become a competitive advantage in delivering AI products that customers and regulators trust. We are witnessing the early steps of AI’s evolution from an intuitive apprentice to a formally validated expert. 
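
Because verification in Lean4 is binary, even a toy example shows the contract: the kernel either accepts a proof term or rejects the file, with no partially-correct middle ground. A minimal sketch in standard Lean4, using the core lemma Nat.add_comm:

```lean
-- The kernel checks these by type-checking the proof terms.
-- A definitional equality is closed by rfl (reflexivity).
example : 2 + 2 = 4 := rfl

-- Reusing an existing core lemma as the proof term.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If either proof term were wrong — say, claiming 2 + 2 = 5 — the file would simply fail to check; there is no notion of a proof that "mostly" goes through.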


How pairing SAST with AI dramatically reduces false positives in code security

In our opinion, the path to next-generation code security is not choosing one over the other, but integrating their strengths. So, along with Kiarash Ahi, founder of Virelya Intelligence Research Labs and co-author of the framework, I decided to do exactly that. Our novel hybrid framework combines the deterministic rigor and speed of traditional SAST with the contextual reasoning of a fine-tuned LLM to deliver a system that doesn’t just find vulnerabilities, but also validates them. ... The framework embeds the relevant code snippet, the data flow path and surrounding contextual information into a structured JSON prompt for a fine-tuned LLM. We fine-tuned Llama 3 8B on a high-quality dataset of vetted false positives and true vulnerabilities, specifically covering major flaw categories like those in the OWASP Top 10 to form the core of the intelligent triage layer. Based on the relevant security issue flagged, the prompt then asks a clear, focused question, such as, “Does this user input lead to an exploitable SQL injection?” ... A SAST and LLM synergy marks a necessary evolution in static code security. By integrating deterministic analysis with intelligent, context-aware reasoning, we can finally move past the false positive crisis and equip developers with a tool that provides high signal security feedback at the pace of modern development with LLMs.
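
The handoff described above — SAST finding in, structured JSON prompt out — can be sketched as follows. The field names and schema are illustrative assumptions, not the authors' exact format:

```python
import json

# A hypothetical SAST finding: snippet, data-flow path, and context.
finding = {
    "rule": "sql-injection",
    "snippet": 'query = "SELECT * FROM users WHERE id = " + request.args["id"]',
    "data_flow": ["request.args['id']", "query", "cursor.execute(query)"],
    "context": "Flask route handler; no parameterization or sanitization seen.",
}

def build_triage_prompt(f: dict) -> str:
    """Package a SAST finding into a structured JSON prompt for the LLM judge."""
    return json.dumps({
        "task": "vulnerability_triage",
        "question": "Does this user input lead to an exploitable SQL injection?",
        "code_under_test": f["snippet"],
        "data_flow_path": f["data_flow"],
        "surrounding_context": f["context"],
        "answer_format": {"verdict": "true_positive | false_positive",
                          "rationale": "string"},
    }, indent=2)

prompt = build_triage_prompt(finding)
```

Constraining the answer format is what makes the LLM's verdict machine-consumable: the pipeline can suppress findings the model judges false positives while preserving its rationale for auditors.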


Quantum Progress Demands Manufacturing Revolution, Martinis Says

Quantum computing’s next breakthroughs will come from factories, not physics labs, according to John Martinis ... He argued that a general-purpose quantum computer will require at least a million physical qubits, a number that is far beyond today’s devices and out of reach without a fundamental shift in how the hardware is built. ... Current machines rely on dense tangles of wires, components and cooling structures that dwarf the tiny chip at the bottom of the machine. He writes that “The complexity of the plumbing completely overwhelms the quantum device itself.” Martinis said the solution is to abandon today’s hand-built, research-lab approach and move to fully integrated chips similar to the transformation that turned 1960s mainframes into the microchips inside smartphones. The field, he argued, must invest in cryogenic integrated circuits that can operate at the ultra-low temperatures required for superconducting qubits. Using that approach, Martinis suggests that engineers could place about 20,000 qubits on a single wafer and reach the million-qubit scale by linking wafers together. That level of integration would also require abandoning manufacturing methods that date back more than half a century. He singled out the “lift-off” process still used in many quantum labs as too dirty and too limited for industrial-scale production.


Dream of quantum internet inches closer after breakthrough helps beam information over fiber-optic networks

"By demonstrating the versatility of these erbium molecular qubits, we're taking another step toward scalable quantum networks that can plug directly into today's optical infrastructure,” David Awschalom, the study's principal investigator and a professor of molecular engineering and physics at the University of Chicago, said in the statement. ... That's largely where the comparison ends, though. Whereas classical bits compute in binary 1s and 0s, qubits behave according to the weird rules of quantum physics, allowing them to exist in multiple states at once — a property known as superposition. A pair of qubits could, therefore, be 0-0, 0-1, 1-0 and 1-1 simultaneously. Qubits typically come in three forms: superconducting qubits, which are made from tiny electrical circuits; trapped ion qubits, which store information in charged atoms held in place by electromagnetic fields; and photonic qubits, which encode quantum states in particles of light. ... Operating at telecom wavelengths provides two key advantages, the first being that signals can travel long distances with minimal loss — vital for transmitting quantum data across fiber networks. The second is that light at fiber-optic wavelengths passes easily through silicon. If it didn't, any data encoded in the optical signal would be absorbed and lost. Because the optical signal can pass through silicon to detectors or other photonic components embedded beneath, the erbium-based qubit is ideal for chip-based hardware, the researchers said.
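
The "multiple states at once" claim can be made concrete with a toy calculation: a two-qubit state is a vector of four amplitudes over the basis states 00, 01, 10 and 11, and in an equal superposition all four outcomes coexist until measurement. This is textbook quantum mechanics, not anything specific to the erbium work:

```python
# Equal two-qubit superposition: each basis state has amplitude 1/2.
amplitudes = {"00": 0.5, "01": 0.5, "10": 0.5, "11": 0.5}

# Measurement probabilities are squared amplitude magnitudes (Born rule).
probabilities = {basis: abs(a) ** 2 for basis, a in amplitudes.items()}

total = sum(probabilities.values())  # a valid state's probabilities sum to 1
```

Each outcome here has probability 1/4; a classical pair of bits, by contrast, would be in exactly one of the four states at any moment.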


AWS Outage Fallout: Lessons In Resilience

The impact of the AWS outage has led to multiple warnings about the issues when relying on one cloud provider. But experts warn it’s important to keep in mind that moving to multi-cloud can also cause problems. Multi-cloud is “not the default answer,” says Ryan Gracey, partner and technology lawyer at law firm Gordons. “For a few crown jewel services, splitting across providers can reduce single-supplier risk and satisfy regulators, but it also raises cost and complexity, and opens new ways to fail. Chasing a lowest common denominator setup often means giving up the very features that make cloud attractive.” ... The takeaway from the latest outage is not just to buy more redundancy, says Gracey. “It’s about designing systems that bend, not break. They should slow down gracefully, drop non-essential features and protect the most important customer tasks when things go wrong. A part of this is running drills so teams know who decides what actions to take, what to say to customers and what to do first.” For the cloud service provider, it’s important to recognise where a potential single point of failure – or “race condition” in the case of AWS – may exist, says Jones. “AWS will be looking at its architecture to ensure single points of failure are eliminated and the potential blast radius of any incident is dramatically reduced.” Maintaining operations during outages requires “architectural and operational preparation,” says Nazir.
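
Gracey's "bend, not break" advice has a simple structural core: decide in advance which customer tasks are essential, and shed everything else when a dependency is unhealthy. A minimal sketch, with feature names invented for illustration:

```python
# Core features must survive an outage; optional ones can be dropped.
CORE_FEATURES = {"checkout"}
OPTIONAL_FEATURES = {"recommendations", "live_chat", "analytics_beacon"}

def plan_response(dependency_healthy: bool, requested: set[str]) -> dict:
    """Decide what to serve given dependency health: degrade, don't fail."""
    if dependency_healthy:
        return {"serve": requested & (CORE_FEATURES | OPTIONAL_FEATURES),
                "degraded": False}
    # Degraded mode: drop optional features, keep the essential path alive.
    return {"serve": requested & CORE_FEATURES, "degraded": True}

normal = plan_response(True, {"checkout", "recommendations"})
outage = plan_response(False, {"checkout", "recommendations", "live_chat"})
```

The hard part in practice is not this branch but the drills Gracey describes: agreeing before the incident on what counts as core, who flips the switch, and what customers are told.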


AI Is Not Just a Tool

At some point in every panel, someone leans into the microphone and says it: “AI is just a tool, like a camera.” It’s meant to end the argument, a warm blanket for anxious minds. Art survived photography; we’ll survive this. But it is wrong. A camera points at the world and harvests what’s already there. A modern AI system points at us and proposes a world — filling gaps, making claims, deciding what should come next. That difference is not semantics. It’s jurisdiction. ... A photo is protectable because a human author made it. Purely AI-generated material, absent sufficient human control, isn’t. The law refuses to pretend the prompt is the picture. That alone should retire the analogy. That doesn’t mean the output is “authorless”; it means the law refuses to pretend the user’s prompt equals human creative control. Cameras yield photographs authored by people; models yield artifacts whose legal status relies on the extent to which a human actually contributed. Different authorship rules = different things. ... The model is not a person, but it isn’t an empty pipe. It embodies choices that will be made (over and over) at human scale, with the same confidence we misread as competence. That’s why generative AI feels creative without being human. It performs composition: not presence, but pattern. It produces objects that look like testimony. Cameras can lie (through framing), but models conjecture. They create the very thing we then argue about.


Are Small Businesses at Risk by Outsourcing Parts of Their Operations?

When you outsource a function or department, you're doing more than simply delegating tasks. Every third-party vendor, managed service provider, virtual assistant, or consultant who requires access to your critical systems carries an element of risk; they're effectively a potential entry point into your business. ... Some organizations are bound by specific, stringent regulatory frameworks and standards, depending on their sector(s) of operation. Some remote-working IT or marketing contractors may not be subject to the same data privacy laws that govern your organization, for example. Similarly, an HR outsourcing provider may store employee information in cloud servers that are deemed security-compliant in some jurisdictions but not in others. These compliance gaps create additional security vulnerabilities that threat actors would actively exploit without hesitation if the opportunity arose. ... As AI becomes more ingrained into business operations, the process of outsourcing becomes increasingly gray. According to recent statistics, more than half of businesses have experienced AI-related security vulnerabilities. What's more, cybercriminals are harnessing generative AI technology to escalate and amplify their attacks. ... The biggest danger that SMBs face when outsourcing is the assumption that someone else is now responsible for upholding security standards. 


Why AI Integration in DevOps is so Important

Traditional DevOps pipelines rely heavily on automated testing and monitoring. The drawback is that they often lack the machine intelligence needed to recognize new or evolving threats. AI addresses this gap by introducing learning-based security systems capable of real-time behavioral analysis. Instead of waiting for known vulnerabilities to appear or be actively exploited, these systems recognize precursor behaviors and suspicious code activity. Once such activity is detected, engineers are alerted before an incident occurs. Within DevOps, AI is able to fortify each stage of the process: reviewing commits for suspicious or vulnerable code, monitoring container environment integrity and evaluating system logs for anomalies that may have escaped real-time recognition. Insights like these help teams locate weak spots and reduce the impact of human error over time. ... AI integration with existing CI/CD workflows gives DevOps teams real-time visibility into security risks. AI-powered automated scanners analyze components automatically. Source code, dependencies and container images are all scanned for hidden vulnerabilities before the build phase is complete. This helps identify issues that could otherwise slip through manual reviews. AI-driven monitoring tools also track activity across the entire delivery pipeline, identifying potential attacks such as credential theft, code injection or dependency poisoning. As these tools learn over time, they adapt to new threat behaviors that traditional scanners might overlook.
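
The "scan dependencies before the build phase completes" step boils down to a pipeline gate like the sketch below. Package names, versions and the known-bad list are all made up; a real gate would query a vulnerability feed rather than a hard-coded set:

```python
# Hypothetical known-vulnerable (package, version) pairs.
KNOWN_VULNERABLE = {("leftpadx", "1.0.2"), ("yamlparse", "3.1.0")}

def scan_dependencies(deps: dict[str, str]) -> list[str]:
    """Return a finding for any declared dependency matching a known-bad version."""
    return [f"{name}=={ver} is known-vulnerable"
            for name, ver in deps.items()
            if (name, ver) in KNOWN_VULNERABLE]

# Gate the build: any finding fails the pipeline before artifacts ship.
findings = scan_dependencies({"leftpadx": "1.0.2", "requestsx": "2.0.0"})
build_should_fail = bool(findings)
```

What the AI-driven tools in the article add on top of this deterministic check is behavioral judgment: flagging a dependency that is not yet on any known-bad list but whose install-time activity looks like poisoning.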


NTT: How Japan Leads in Cybersecurity Amid Rising Threats

The Active Cyber Defense Law passed in May 2025 is intended to minimise the damage caused by substantive cyberattacks that can compromise national security, while Japan has also established new requirements for critical infrastructure companies to enhance their cybersecurity practices under the revised Economic Security Promotion Act. ... Gen AI has lowered the bar for adversaries to launch cyberattacks, meaning defenders have no choice but to empower themselves to at least partially automate their tasks, including log and phishing analysis, threat detection, behavioural analysis and incident report drafting. This is crucial for minimising burnout risk among defenders who are overwhelmed by ever-increasing, around-the-clock work. ... As Japanese companies are increasingly expanding their businesses globally, multiple firms have reported their overseas subsidiaries being hit by ransomware attacks in the United States, Vietnam, Thailand, Singapore and Taiwan. To manage supply chain risks and ensure business continuity, it is becoming more crucial than ever to ensure global governance in cybersecurity and maintain proper data backups, least-privilege access and network segmentation. Surprisingly, Japan has the lowest ransomware infection ratio amongst 15 major countries, including the United States, the United Kingdom, France and Germany. 


From Data Bottlenecks to Data Products: Building for Speed and Scale

As it stands now, the central data team oversees data quality only at the final stage, and this process is not working: the domain teams who create the data are the only ones with the full context needed to judge its accuracy and integrity. If businesses shift left with their approach, app developers themselves will take responsibility for the data created by applications. By giving the producer ownership of the quality, ongoing issues can be stopped before trickling down into data dashboards or machine-learning models. Ultimately, this is more than just a technical change. Shifting left will be a culture change that moves toward Data Mesh principles. By embedding ownership and quality within the domains that produce and use data, organisations replace central gatekeeping with shared accountability. Each domain now becomes a creator and protector of reliable data, ensuring governance is built in from the start rather than enforced later. ... Understandably, giving ownership of data to the teams creating it may seem chaotic. But it isn’t about losing control over it; rather, it is about giving teams the freedom and tools to work faster and smarter. At the end stands the lighthouse vision of a self-service data platform where every consumer can independently generate insights for standard questions and only reach out for support when tackling more advanced analyses.
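
In practice, "shifting left" often means the producing team validates records against its own data contract at creation time, rather than a central team catching problems downstream. A minimal sketch with an invented contract (the fields and rules are illustrative):

```python
from dataclasses import dataclass

@dataclass
class OrderEvent:
    """An illustrative domain event emitted by the producing application."""
    order_id: str
    amount: float
    currency: str

def validate(event: OrderEvent) -> list[str]:
    """Producer-side contract checks; an empty list means the event ships."""
    errors = []
    if not event.order_id:
        errors.append("order_id must be non-empty")
    if event.amount <= 0:
        errors.append("amount must be positive")
    if event.currency not in {"GBP", "EUR", "USD"}:
        errors.append(f"unknown currency '{event.currency}'")
    return errors

good = validate(OrderEvent("o-123", 49.99, "GBP"))
bad = validate(OrderEvent("", -1.0, "XYZ"))
```

Because the checks run where the data is created, a bad event is rejected with full domain context instead of surfacing weeks later as a broken dashboard or a skewed model.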

Daily Tech Digest - November 22, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone



How CIOs can get a better handle on budgets as AI spend soars

Everyone wants to become AI-centric or AI-native, says West Monroe’s Greenstein. “But nobody has extra buckets of money to do this unless it’s existential to their company,” he says. So moving money from legacy projects to AI is a popular strategy. “It’s a shift of priorities within companies,” he says. “They look at their investments and ask how many are no longer needed because of AI, or how many can be done with AI. Plus, they’re putting pressure on vendors to drive down costs. They’re definitely squeezing existing suppliers.” Even large, tech-forward companies might have to do this kind of juggling. ... “AI is in a self-funding model at the moment,” he says. “We’re shifting investment from legacy technologies to AI.” ... Another challenge to budgeting is the demands that AI places on people, systems, and data. One of the most significant challenges to managing AI costs is talent, says Principal’s Arora. “Skill gaps and cross-team dependencies can slow deliveries and drive up costs,” he says. Then there’s the problem of evolving regulations, and the need to continuously adapt governance frameworks to stay resilient in the face of these changes. Organizations also often underestimate how much money will be needed to train employees, and to bring data and other foundational systems in line with what’s needed for AI. “Legacy environments add complexity and expense,” he adds. “These one-time costs are heavy but essential to avoid long-term inefficiencies.”


AI agent evaluation replaces data labeling as the critical path to production deployment

It's a fundamental shift in what enterprises need validated: not whether their model correctly classified an image, but whether their AI agent made good decisions across a complex, multi-step task involving reasoning, tool usage and code generation. If evaluation is just data labeling for AI outputs, then the shift from models to agents represents a step change in what needs to be labeled. Where traditional data labeling might involve marking images or categorizing text, agent evaluation requires judging multi-step reasoning chains, tool selection decisions and multi-modal outputs — all within a single interaction. "There is this very strong need for not just human in the loop anymore, but expert in the loop," Malyuk said. He pointed to high-stakes applications like healthcare and legal advice as examples where the cost of errors remains prohibitively high. ... The challenge with evaluating agents isn't just the volume of data, it's the complexity of what needs to be assessed. Agents don't produce simple text outputs; they generate reasoning chains, make tool selections, and produce artifacts across multiple modalities. ... While monitoring what AI systems do remains important, observability tools measure activity, not quality. Enterprises require dedicated evaluation infrastructure to assess outputs and drive improvement. These are distinct problems requiring different capabilities.
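
The difference between labeling a single output and evaluating an agent can be sketched as a rubric over the whole trace: each step (reasoning, tool selection, final artifact) gets its own judgment, and the interaction is scored as a weighted whole. The step names, scores and weights below are invented for illustration:

```python
# A judged agent trace: one score per step, from a human or expert judge.
trace = [
    {"step": "reasoning", "judge_score": 0.9},       # was the plan sound?
    {"step": "tool_selection", "judge_score": 1.0},  # was the right tool called?
    {"step": "final_artifact", "judge_score": 0.6},  # quality of the output
]

# Illustrative rubric weights; a real rubric would be domain-specific.
WEIGHTS = {"reasoning": 0.3, "tool_selection": 0.2, "final_artifact": 0.5}

def evaluate_trace(steps: list[dict]) -> float:
    """Weighted score across the whole interaction, not a single label."""
    return sum(WEIGHTS[s["step"]] * s["judge_score"] for s in steps)

score = evaluate_trace(trace)
```

A per-step breakdown like this is also what makes the results actionable: a low aggregate score with perfect tool selection points the team at reasoning or output quality, not at the toolchain.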


How IT leaders can build successful AI strategies — the VC view

It’s clear now that AI is transforming existing business structures, operational layers, organizational charts, and processes. “As a CIO, if you look at long term, you get better visibility of the outcomes of AI,” said Sandhya Venkatachalam, founder and partner at Axiom Partners. “Today, a lot of these net new capabilities are taking the form of AI performing the work or producing the outcomes that humans do, versus emulating or automating software tools,” Venkatachalam said. The shift will inevitably displace legacy systems and processes. She cited customer support as an early area ripe for upheaval. ... VCs typically don’t look at what buyers need right now; they look ahead. Similarly, IT leaders should look at how AI can transform their industry in the future. The real value of AI is in displacing legacy stacks and processes, and short wins or scattered AI initiatives mean nothing, Venkatachalam said. Adding AI to existing workflows — like building an internal large language model (LLM) — is often a waste. Enterprises are also wasting time building proprietary tools and infrastructures, which duplicates work already commoditized by big research labs, Venkatachalam said. ... AI strategies link IT directly to core products, which dictates market survival. IT decision-makers should align AI strategies to their vertical markets. In some areas, physical AI is considered the next big AI technology after agents. 


Could AI transparency backfire for businesses?

Work is underway to devise common ways to disclose the use of AI in content creation. The British Standards Institution’s (BSI) common standard (BS ISO/IEC 42001:2023) provides a framework for organisations to establish, implement, maintain, and continually improve an AI management system (AIMS), ensuring AI applications are developed and operated ethically, transparently, and in alignment with regulatory standards. It helps manage AI-specific risks such as bias and lack of transparency. Mark Thirwell, the BSI’s global digital director, says that such standards are critical for building trust in AI. For his part, Thirwell is mainly focused on the transparency of underlying training data rather than on whether content is disclosed as AI-generated. “You wouldn’t buy a toaster if someone hadn’t checked it to make sure it wasn’t going to set the kitchen on fire,” he argues. Thirwell posits that common standards can, and must, interrogate the trustworthiness of AI. Does it do what it says it’s going to do? Does it do that every time? Does it not do anything else – as hallucination and misinformation become increasingly problematic? Does it keep your data secure? Does it have integrity? And unique to AI, is it ethical? “If it’s detecting cancers or sifting through CVs,” he says, “is there going to be a bias based on the data it holds?” This is where transparency of the underlying data becomes key. 


The Importance of Having and Maintaining a Data Asset List and How to Create One

The explosive growth of structured and unstructured data has made it increasingly difficult for organizations to track what information they hold across networks, devices, SaaS applications, and cloud platforms. Without clear visibility, businesses face higher risks, including security gaps, audit failures, regulatory penalties, and rising storage costs. ... Before we get into how to build a data asset inventory, it’s important to understand why regulators now expect organizations to maintain one. The compliance landscape in 2025 is more demanding than ever, and nearly every major framework explicitly or implicitly requires data mapping and data inventory management. ... A data asset inventory is a structured, centralized record of all the data types and systems that power your organization. The goal is to gain full visibility into what data exists, where it’s stored, who manages it, and how it flows, while also capturing any compliance obligations tied to that data. ... Many organizations rely on third-party providers to manage or process sensitive data, which can improve efficiency but also introduce new risks. External partnerships expand your organization’s digital footprint, increase the potential attack surface, and add complexity to data governance. ... A data asset inventory isn’t a one-time task; it’s a living, evolving document. As your organization adopts new tools, expands into new markets, or grows its teams, your inventory should evolve to reflect these changes. 
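Concretely, a minimal inventory entry can be modeled with a handful of fields; the field names below are illustrative, not drawn from any particular compliance framework:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    """One row of a data asset inventory (fields are illustrative)."""
    name: str
    location: str          # system or store holding the data
    owner: str             # accountable person or team
    classification: str    # e.g. "public", "internal", "restricted"
    compliance: tuple = () # frameworks that apply, e.g. ("GDPR",)
    third_party: bool = False

inventory = [
    DataAsset("customer_emails", "crm-saas", "Marketing", "restricted",
              ("GDPR",), third_party=True),
    DataAsset("build_logs", "ci-server", "Platform", "internal"),
]

# A typical inventory query: which restricted data sits with vendors?
exposed = [a.name for a in inventory
           if a.third_party and a.classification == "restricted"]
assert exposed == ["customer_emails"]
```

Because the inventory is a living document, entries like these would be reviewed and updated as systems, vendors, and obligations change.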


Building and Implementing Cyber Resilience Strategies

Currently, there is no unified standard for managing cyber resilience. Although many vendors offer their own solutions and some general standardization efforts are underway, a clear and consistent framework has yet to be established. As a result, organizations are forced to develop their own methods based on internal priorities and interpretations. The main challenge is that cyberattacks have become unavoidable and frequent. Traditional protective measures alone are no longer sufficient to fight modern threats. Another problem is the lack of coordination between IT, information security, and business units. ... In practice, however, its implementation largely depends on the organization’s maturity, scale, and specific infrastructure characteristics. The main difference lies in the level of detail: as a company grows, its infrastructure becomes more complex, the number of stakeholders increases, and each stage of analysis requires greater depth. In small organizations, identifying critical services is relatively quick, while in large enterprises, the process may involve analyzing hundreds of interconnected operations. Likewise, the scope of security measures varies—from basic hardening of key systems to multi-layered protection across distributed environments. At the same time, core principles such as threat analysis, incident response planning, and regular audits remain largely unchanged across all organizations.


Security researchers develop first-ever functional defense against cyberattacks on AI models

Researchers now warn that the most advanced of these attacks, called cryptanalytic extraction, can rebuild a model by asking it thousands of carefully chosen questions. Each answer helps reveal tiny clues about the model’s internal structure. Over time, those clues form a detailed map that exposes the model’s weights and biases. These attacks work surprisingly well when used on neural networks that rely on ReLU activation functions. Because these networks behave like piecewise linear systems, attackers can hunt for points where a neuron’s output flips between active and inactive and use those moments to uncover the neuron’s signature. ... Early methods could only recover partial information, but newer techniques can figure out both the size and the direction of the weights. Some even work using nothing more than the model’s predicted labels. All rely on the same core assumption. Neurons in a given layer behave differently enough that their signals can be separated. When that is true, the attack can cluster each neuron’s critical points and rebuild the entire network with surprising accuracy. ... The team tested this defense on neural networks that previous studies had broken in just a few hours. One of the clearest results comes from a model trained on the MNIST digit dataset with two small hidden layers. 
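The critical-point hunt these attacks rely on can be illustrated on a toy network. The sketch below (a simplified illustration, not the researchers' actual attack or defense) bisects along a line in input space until the model's directional slope jumps, which pins down a point where one ReLU sits exactly at its activation boundary, leaking that neuron's hyperplane:

```python
import numpy as np

# Hand-built one-hidden-layer ReLU network: f(x) = w2 @ relu(W1 @ x + b1)
W1 = np.array([[1.0,  0.0],
               [0.0,  1.0],
               [1.0,  1.0],
               [1.0, -1.0]])
b1 = np.array([-0.5, 0.3, 0.1, -0.2])
w2 = np.array([1.0, -2.0, 0.5, 1.5])

def f(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

def find_critical_point(x0, x1, tol=1e-9):
    """Bisect along the segment x0 -> x1 for the first point where the
    network's directional slope jumps, i.e. where some ReLU flips."""
    d = x1 - x0
    def slope(t, eps=1e-6):
        return (f(x0 + (t + eps) * d) - f(x0 + t * d)) / eps
    lo, hi = 0.0, 1.0
    s_lo = slope(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        s_mid = slope(mid)
        if abs(s_mid - s_lo) < 1e-4:   # still in lo's linear region
            lo, s_lo = mid, s_mid
        else:                          # a kink lies in (lo, mid]
            hi = mid
    return x0 + 0.5 * (lo + hi) * d

x_star = find_critical_point(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
# At x_star, one neuron's pre-activation is (nearly) zero: the attacker
# has located a point on that neuron's decision hyperplane.
```

Real extraction repeats this search along many directions, then clusters the recovered critical points per neuron to solve for weights, which is why the excerpt stresses the assumption that neurons' signals are separable.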


Draft Trump executive order signals new battle ahead over state AI powers

By eliminating that federal framework, the Trump White House positions itself not simply as preempting state authority, but also as reversing its immediate federal predecessor’s regulatory approach. The draft EO further states that the U.S. must sustain AI leadership through a “balanced, minimal regulatory environment,” language that signals a clear ideological orientation against safety-first or rights-protective models of AI governance. The administration wants the Department of Justice to challenge state AI laws it views as obstructive; the Department of Commerce to catalogue and publicly criticize state statutes deemed “burdensome”; and agencies like the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) to establish national standards that would override state requirements. ... The move immediately raises questions not only about the future of AI governance but also about the structure of American federalism. For years, states have been the primary actors experimenting with AI regulation. They have advanced bills aimed at biometric privacy, algorithmic fairness, deepfake disclosure, automated decision-making transparency, and even restrictions on government use of facial recognition. These experiments, often more aggressive than anything contemplated in Congress, have become the country’s de facto laboratories of AI oversight. 


Engineering the Perfect Product Launch: Lessons from Prototype to Production

Rushing a product to market without a strong quality framework is a gamble most companies regret. Recalls, warranty claims and reputational damage cost far more than investing in quality upfront. The smarter approach is to build quality into the process from the start rather than bolting it on at the end. ... During the product rollout I supported, we built proactive quality checkpoints at every stage of assembly. This meant small defects were caught early, long before they reached final testing. In one instance, a supplier batch with a minor material inconsistency was identified at the first inspection step, preventing what could have been a costly recall. Conversely, I’ve also seen how skipping just one validation step resulted in weeks of rework. ... When all three elements (development, quality, and ERP) work in harmony, product launches move faster and run more smoothly. Costs are kept in check because inefficiencies are addressed early. Time-to-market accelerates because bottlenecks are anticipated. Manufacturing excellence becomes the standard from the first unit shipped, not something achieved after painful trial and error. ... Engineering a product launch is about orchestrating dozens of small, interconnected decisions across design, quality and enterprise systems. The companies that consistently succeed treat the launch as an engineering challenge, not just a marketing deadline.


Organisations struggle with non-human identity risks & AI demands

Growth in digital identities, both human and non-human, continues to strain legacy identity and access management practices. This identity sprawl raises the risk of credential-based threats and increases the attack surface for cybercriminals. "With organizations struggling to govern an expanding mesh of digital identities across human, machine, and AI entities, over-permissioned roles, shadow identities, and disconnected IAM systems will continue to expose organizations to credential-based attacks and lateral movement. AI will also reshape traditional social engineering: synthetic voices, deepfakes, and adaptive phishing will erode the reliability of static authentication, forcing organizations to adopt continuous and context-aware verification as the new baseline," said Benoit Grange ... "The NIS2 directive has ushered in stricter cybersecurity measures and reporting for a wider range of critical infrastructure and essential services across the European Union. For industries newly brought under this directive, including manufacturing, logistics and certain digital services, 2026 will bring new growing pains. The sectors, many long accustomed to minimal compliance oversight, now face strict governance and reporting requirements. In contrast, mature sectors like finance and healthcare will adapt more smoothly. The disparity will expose structural weaknesses in organizations unfamiliar with continuous compliance, making them attractive targets for attackers exploiting regulatory confusion," said Niels Fenger.
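As a toy illustration of what "continuous and context-aware verification" can mean in code (the signal names and thresholds here are invented for the example), each request is scored against its context rather than trusted on a static credential alone:

```python
def auth_decision(signals: dict) -> str:
    """Toy context-aware check: accumulate risk from request context
    and map it to allow / step-up / deny. Thresholds are illustrative."""
    risk = 0
    if signals.get("new_device"):
        risk += 2
    if signals.get("impossible_travel"):
        risk += 3
    if signals.get("non_human_identity") and signals.get("interactive_login"):
        risk += 3   # service credentials used interactively are suspect
    if signals.get("off_hours"):
        risk += 1
    if risk >= 5:
        return "deny"
    if risk >= 2:
        return "step_up"   # e.g. require phishing-resistant MFA
    return "allow"

assert auth_decision({}) == "allow"
assert auth_decision({"new_device": True}) == "step_up"
assert auth_decision({"new_device": True,
                      "impossible_travel": True}) == "deny"
```

Production systems derive such signals continuously across a session rather than once at login, which is the shift from static authentication the quote describes.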