Daily Tech Digest - November 16, 2025


Quote for the day:

"Life is 10% what happens to me and 90% of how I react to it." -- Charles Swindoll


Hybrid AI: The future of certifiable and trustworthy intelligence

An emerging approach in AI innovation is hybrid AI, which combines the scalability of machine learning (ML) with the constraint-checking and provenance of symbolic models. Hybrid AI forms a foundation for system-level certification and helps CIOs balance the pursuit of performance with the need for accountability. ... Clustering, a core unsupervised learning technique, organizes unlabeled data into groups based on similarity. It’s widely used to segment customers, group documents or analyze sensor data by measuring distances in a numeric feature space. But conventional clustering works on similarity alone and has no grasp of meaning. This can group items by coincidence rather than concept. ... For enterprise leaders, verifiability isn’t optional; it’s a governance requirement. Systems that support strategic or regulatory decisions must show constraint conformance and leave a traceable decision path. Ontology-driven clustering provides that foundation, creating an auditable chain of logic aligned with frameworks such as the NIST AI Risk Management Framework. In both government and industry, this hybrid approach makes AI more accountable and reliable. Trustworthiness is not a checkbox but an assurance case that connects data science, compliance and oversight. An organization that cannot trace what was allowed into a model or which constraints were applied does not truly control the decision.
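To make the contrast concrete, here is a minimal, self-contained Python sketch of the idea (the item names, feature values, and `incompatible` concept pairs are illustrative assumptions, not from the article): similarity-based clustering happily merges numerically close items, and an ontology-style constraint pass then flags groupings that mix incompatible concepts, leaving an auditable record of what was checked.

```python
from itertools import combinations

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_by_similarity(items, threshold):
    """Greedy single-link clustering: an item joins the first cluster
    containing a point within `threshold`, else starts a new cluster."""
    clusters = []
    for name, vec, kind in items:
        for cluster in clusters:
            if any(euclidean(vec, v) <= threshold for _, v, _ in cluster):
                cluster.append((name, vec, kind))
                break
        else:
            clusters.append([(name, vec, kind)])
    return clusters

def constraint_violations(clusters, incompatible):
    """Ontology-style check: flag clusters mixing incompatible concepts."""
    violations = []
    for i, cluster in enumerate(clusters):
        for (n1, _, k1), (n2, _, k2) in combinations(cluster, 2):
            if frozenset((k1, k2)) in incompatible:
                violations.append((i, n1, n2))
    return violations

# Hypothetical data: refunds look numerically like invoices but are
# conceptually distinct, so similarity alone groups them together.
items = [
    ("invoice_a", (1.0, 1.1), "invoice"),
    ("invoice_b", (1.2, 0.9), "invoice"),
    ("refund_x",  (1.1, 1.0), "refund"),
    ("sensor_1",  (9.0, 9.5), "telemetry"),
]
incompatible = {frozenset(("invoice", "refund"))}

clusters = cluster_by_similarity(items, threshold=0.5)
flagged = constraint_violations(clusters, incompatible)
```

The violation list is the seed of the "traceable decision path" the article calls for: each flagged pair records which constraint fired and on which items.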


Upwork study shows AI agents excel with human partners but fail independently

The research challenges both the hype around fully autonomous AI agents and fears that such technology will imminently replace knowledge workers. "AI agents aren't that agentic, meaning they aren't that good," Andrew Rabinovich, Upwork's chief technology officer and head of AI and machine learning, said in an exclusive interview with VentureBeat. "However, when paired with expert human professionals, project completion rates improve dramatically, supporting our firm belief that the future of work will be defined by humans and AI collaborating to get more work done, with human intuition and domain expertise playing a critical role." ... The research reveals stark differences in how AI agents perform with and without human guidance across different types of work. For data science and analytics projects, Claude Sonnet 4 achieved a 64% completion rate working alone but jumped to 93% after receiving feedback from a human expert. In sales and marketing work, Gemini 2.5 Pro's completion rate rose from 17% independently to 31% with human input. OpenAI's GPT-5 showed similarly dramatic improvements in engineering and architecture tasks, climbing from 30% to 50% completion. The pattern held across virtually all categories, with agents responding particularly well to human feedback on qualitative, creative work requiring editorial judgment — areas like writing, translation, and marketing — where completion rates increased by up to 17 percentage points per feedback cycle.


Debunking AI Security Myths for State and Local Governments

As state and local governments adopt AI, they must return to cybersecurity basics and strengthen core principles to help build resilience and earn public trust. For AI workloads, governments should apply zero-trust principles; for example, continuously verifying identities, limiting access by role and segmenting system components. Clear data policies for access, protection and backups help safeguard sensitive information and keep systems resilient. Perhaps most important, security teams need to be involved early in AI design conversations to build in security from the start. ... As state and local governments deploy more sophisticated AI systems, it’s crucial to view the technology as a partner, not a replacement for human intelligence. There is a misconception that advanced AI — particularly agentic AI, which can make its own decisions — eliminates the need for human oversight. The truth is, responsible AI deployment hinges on human oversight and strong governance. The more autonomous an AI system becomes, the more essential human governance is. ... Securing AI is not a one-time milestone. It’s an ongoing process of preparation and adaptation as the threat landscape evolves. For state and local governments advancing their AI initiatives, the path forward centers on building resilience and confidence. And the good news is, they don’t need to start from scratch. The tools and strategies already exist.


When Open Source Meets Enterprise: A Fragile Alliance

The answer is by no means simple; it is determined by a number of factors, of which the vendor’s ethos is one of the most important. Some vendors genuinely give back to the open-source communities from which they gain value. Others are more extractive, building closed proprietary layers atop open foundations and pushing little back to the community. The difference matters enormously. Organisations hold true optionality when a vendor actively maintains the open-source core, while keeping its proprietary features genuinely additive rather than substitutive. In theory, they could shift to another provider or take the open-source components in-house should the relationship sour. ... Commercial open-source vendors can provide training, certification, and managed services to fill this gap, for a fee, naturally. Then there is innovation velocity. Open-source communities can move incredibly quickly, with contributions from numerous sources, enabling organisations to adopt cutting-edge features faster than conventional enterprise procurement cycles allow. Conversely, vital security patches can stall if a project lacks maintainers, creating unacceptable exposure for risk-averse organisations. ... Ultimately, the question is not whether open source should exist within the enterprise; that debate has been resolved. The challenge lies in thoughtfully incorporating open-source components into broader technology strategies that balance innovation, resilience, sovereignty, and pragmatic risk management.


The Hidden Cost of Technical Debt in Databases

At its core, technical debt represents the trade-off between speed and quality. When a development team chooses a “quick and dirty” path to meet a deadline, debt is incurred. The database world sees the same phenomenon. ... The first step to eliminating technical debt is recognition. DBAs must adopt a mindset that managing technical debt is part of the job. Although it can be enticing to quickly fix a problem and move on, they should always reflect on the potential future impact of any change they make. ... Importantly, DBAs also sit at the crossroads between technical staff and business stakeholders. They can explain how technical debt translates into business impact: lost productivity, slower application delivery, higher infrastructure costs, and greater operational risk. This ability to connect database health to business outcomes is essential for winning support to tackle debt. In practice, the DBA’s role involves three things: identification, communication, and advocacy. DBAs must identify where debt exists, communicate its impact clearly, and advocate for resources to remediate it. Sometimes that means lobbying for time to redesign a schema, other times it means convincing leadership that archiving inactive data will save more money than buying new storage. Yet other times it may involve championing a new tool or process to be put in place to automate required tasks to thwart technical debt.
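As one concrete instance of the archiving argument, here is a minimal sketch using SQLite from Python (the `orders` schema, table names, and cutoff date are hypothetical): inactive rows are copied to an archive table and removed from the hot table inside a single transaction, so a failure leaves both tables consistent.

```python
import sqlite3

def archive_inactive_rows(conn, cutoff):
    """Copy rows older than `cutoff` into an archive table, then delete
    them from the hot table, all in one transaction."""
    with conn:  # commits on success, rolls back on exception
        conn.execute(
            "INSERT INTO orders_archive SELECT * FROM orders WHERE created < ?",
            (cutoff,),
        )
        cur = conn.execute("DELETE FROM orders WHERE created < ?", (cutoff,))
    return cur.rowcount  # number of rows moved

# In-memory demo with a hypothetical orders table (ISO dates compare
# correctly as strings).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, created TEXT)")
conn.execute("CREATE TABLE orders_archive (id INTEGER, created TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "2020-01-01"), (2, "2021-06-01"), (3, "2025-11-01")],
)
moved = archive_inactive_rows(conn, "2024-01-01")
```

A production version would also verify row counts before deleting and schedule the job off-peak, but the shape of the argument to leadership is the same: cold rows leave expensive hot storage without being lost.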


Seek Skills, Not Titles

Titles feel good—at first. They make your resume and LinkedIn profile look prettier. But when you confuse your title for your identity, you’re setting yourself up for a rude awakening. Titles can be taken away. Or they just expire, like milk in the back of the fridge. Your skills, on the other hand? No one can take those away from you. ... Some roles taught me how to work hard and build trust. Some taught me to communicate clearly and adapt quickly. Others taught me to see the big picture and act decisively. The titles didn’t teach me those skills; the experience did. ... It’s easy to let your job title become your identity, especially when you’re leading at a high level. Everyone wants something from you. Board members, investors, employees. They project their version of who they think you should be. You must have clarity on your core values. Not the company’s core values, but your own. Otherwise, you’ll find yourself playing a dozen different roles without knowing which one is actually you. ... Don’t wait for the title to teach you a skill. Start now. The best way to grow is to pursue skills that will open up opportunities, especially the ones that align with your personal values. Because when your values and skills match, your impact multiplies, regardless of the title. When has pursuing a title led you away from the skills you truly needed? What impact have you seen when your skills are aligned with your values? How might you need to detour to get back on the right track?


Strategic Autarky for the AI Age

AI is still emerging. Overspecifying rules, enforcing rigid certification pathways, or creating sector-wise chokepoints too early can stifle the very innovation we aim to promote. Burdensome compliance layers, mandated algorithmic disclosures, prescriptive model testing protocols, and fragmented approval processes can all create friction. Overregulation can discourage experimentation, elevate the cost of market entry, and drain our fastest growing startups. The risk is simple. Innovation flight. Loss of competitive edge. A domestic ecosystem slowed down before it reaches maturity. Balancing sovereignty and innovation, therefore, becomes the central task. India cannot afford to remain dependent, but it also cannot smother its own technological growth. India’s new AI Governance Framework addresses this balance directly. It follows seven guiding principles built around trust, accountability, transparency, privacy, security, human-centricity, and collaboration. The standout feature is its “light touch” approach. Instead of imposing rigid controls, the framework sets high level principles that can evolve with technology. It relies on India’s existing legal foundation, including the Digital Personal Data Protection Act and the Information Technology Act, and is supported by institutional structures like the AI Governance Group and the AI Safety Institute. The framework contains several strong provisions. It encourages voluntary risk assessments rather than mandatory rigid audits for most systems.


Google Brain founder Andrew Ng thinks you should still learn to code - here's why

"Because AI coding has lowered the bar to entry so much, I hope we can encourage everyone to learn to code -- not just software engineers," Ng said during his keynote. How AI will impact jobs and the future of work is still unfolding. Regardless, Ng told ZDNET in an interview that he thinks everyone should know the basics of how to use AI to code, equivalent to knowing "a little bit of math" -- still a hard skill, but applied more generally to many careers for whatever you may need. "One of the most important skills of the future is the ability to tell a computer exactly what you want it to do for you," he said, noting that everyone should know enough to speak a computer's language, without needing to write code yourself. "Syntax, the arcane incantations we use, that's less important." ... The new challenge for developers, Ng said during the panel, will be coming up with the concept of what they want. Hedin agreed, adding that if AI is doing the coding in the future, developers should focus on their intuition when building a product or tool. "The thing that AI will be worst at is understanding humans," he said. ... He cited the overhiring sprees tech companies went on -- and then ultimately reversed -- during the COVID-19 pandemic as the primary reason entry-level coding jobs are hard to come by. Beyond that, though, it's a question of grads having the right kind of coding skills.


How Development Teams Are Rethinking the Way They Build Software

While low-code/no-code platforms accelerate development, they can become challenging when trying to achieve high levels of customization or when dealing with complex systems. Custom solutions might be more cost-effective for highly specialized applications. Low-code and no-code platforms must provide clear guidance to users within a structured framework to minimize mistakes, and they may offer less flexibility compared to traditional coding. AI tools can be easily used to generate code, suggest optimizations, or even create entire applications based on natural language prompts. However, they work best when integrated into a broader development ecosystem, not as standalone solutions. ... The future of software development appears to be a blended approach, where traditional programming, low-code/no-code platforms, and AI each play a role. The key to success in this dynamic landscape is understanding when to use each method, ensuring C-level executives, team leaders, and team members are versatile and leverage technology to enhance, rather than replace, human ingenuity. Let me share my firsthand experience. When I asked my developers a year ago how they thought using AI tools at work would evolve, many said: “I expect that as the tools improve, I’ll shift from mostly writing code to mostly reviewing AI-generated code.” Fast forward a year, and when we posed the same question, a common theme emerged: “We are spending less time writing the mundane stuff.”


Businesses must bolster cyber resilience, now more than ever

Cyber upskilling must be built into daily work for both technical and non-technical employees. It’s not a one-off training exercise; it’s part of how people perform their roles confidently and securely. For technical teams, staying current on certifications and practising hands-on defence is essential. Labs and sandboxes that simulate real-world attacks give them the experience needed to respond effectively when incidents happen. For everyone else, the focus should be on clarity and relevance. Employees need to understand exactly what’s expected of them and how their individual decisions contribute to the organisation’s resilience. ... Boards aren’t expected to manage technical defences, but they are responsible for ensuring the organisation can withstand, recover from, and learn after a cyber disruption. Cyber incidents have evolved into full business continuity events, affecting operations, supply chains, and reputation. Resilience should now sit alongside financial performance and sustainability as a core board KPI. That means directors receiving regular updates not only on threat trends and audit findings, but also on recovery readiness, incident transparency, and the cultural maturity of the organisation’s response. Re-engaging boards on this agenda isn’t about assigning blame—it’s about enabling smarter oversight. When leaders understand how resilience protects trust, continuity, and brand, cybersecurity stops being a technical issue and becomes what it truly is: a measure of business strength.

Daily Tech Digest - November 15, 2025


Quote for the day:

“Be content to act, and leave the talking to others.” -- Baltasar Gracián



Why engineering culture should be your top priority, not your last

Most engineering leaders treat culture like an HR checkbox, something to address after the roadmap is set and the features are prioritized. That’s backwards. Culture directly affects how fast your team ships code, how often bugs make it to production, and whether your best developers are still around when the next major project kicks off. ... Many engineering leaders are Boomers or Gen X. They built their careers in environments where you kept your head down, shipped your code, and assumed no news was good news. That approach worked for them. It doesn’t work for the developers they’re managing now. This creates a perception problem that compounds the engagement gap. Most C-suite leaders say they put employee well-being first. Most employees don’t see it that way. Only 60% agree their employer actually prioritizes their well-being. The gap matters because employees who think their company cares more about output than people feel overwhelmed nearly three-quarters of the time. When employees feel supported, that number drops to just over half. That difference is where attrition starts. ... Most engineering teams try to fix retention with the same approach that worked decades ago, when people stayed at companies for years and stability mattered more than engagement. That’s not how careers work anymore. The typical response is to roll out generic culture programs designed for large enterprises. 


Integrated deployment must become the default

It’s intuitive that off-site and modular construction models reduce on-site build timelines in general construction, but we are observing the benefits within the data center space being amplified due to the increased density of services catering to larger rack loads. One of the main deterrents to modular adoption has been the perception of limited scalability and design repetition, combined with the inefficiency of transporting large volumes of unused space, essentially “shipping air.” As a result, traditional stick-build methods have long remained the default approach. But that’s all changing. The services, be it telecom, electrical, or cooling, are getting bigger, heavier, and more densely packed, and the timeframe needed is being whittled down, so naturally the emphasis has moved towards fully integrated solutions. These systems are assembled and commissioned offsite wherever possible, then delivered ready for installation with minimal site work required. Offsite integration also negates a lot of the complexities of trade-to-trade sequencing and handover of areas, which absorb site resources and hinder programme delivery. When systems arrive pre-aligned, factory-tested, and installation-ready on-site, activity shifts from coordination and correction to simple assembly. The cumulative impact is significant: reduced project timelines, fewer site dependencies, and greater confidence in delivery schedules.


The Myth Of Executive Alignment: Why Top Teams Need Honesty, Not Harmony

The idea that executive teams should think alike is comforting but unrealistic. Direction needs coherence, but total agreement usually means someone stopped speaking up. Lencioni has said that real clarity can’t be manufactured through slogans or slide decks. “Alignment and clarity,” he wrote, “cannot be achieved in one fell swoop with a series of generic buzzwords and aspirational phrases crammed together.” The strongest teams I’ve seen operate through visible, respected tension. Finance pushes for discipline. Strategy pushes for expansion. Risk pushes for protection. Culture pushes for capacity. Together they form an internal ecosystem of checks and balances. Call it necessary misalignment or structured divergence—it’s what keeps a company honest. The work isn’t to erase difference but to make it safe. ... Executive behavior multiplies downward. When the top team loses coherence, the entire system learns to mimic its caution. Lencioni has often written that when trust is strong, conflict transforms. “When there is trust,” he explained, “conflict becomes nothing but the pursuit of truth.” And the reward for that truth, he reminds us, is organizational health. “The single greatest advantage any company can achieve,” Lencioni wrote, “is organizational health.” Those two ideas—truth and health—connect directly with Gallup’s research. They’re not soft metrics; they’re what make trust and accountability visible.


Why Cybersecurity Jobs Are Likely To Resist AI Layoff Pressures: Experts

The bottom line is that there will “always” be a need for a significant number of cybersecurity professionals, Edross said. “I do not believe this technology will ever make the human obsolete.” The notion that SOC analyst jobs and other roles requiring security expertise might be at risk would have been unthinkable just a few years ago — making the sudden shift to discussions around AI-driven redundancy for humans in the SOC all the more startling. “If you go back about two years ago, there’s this constant hum in the industry that we have a few million less cybersecurity professionals than we need,” Palo Alto Networks CEO Nikesh Arora said. ... “AI still has a significant propensity to make mistakes, which in the security world is quite problematic,” said Boaz Gelbord, senior vice president and chief security officer of Akamai. “So you’re always going to need a human check on that.” At the same time, human orchestration of the AI systems will be an ongoing necessity as well, according to experts. “You need that creativity. You need to understand and piece together and review the LLM’s work,” said Dov Yoran, co-founder and CEO of Command Zero, a startup offering an LLM-powered cyber investigation platform. “I don’t see how the human goes away.” And while entry-level security analysts may find parts of their roles becoming redundant due to AI, most organizations will want to continue employing them, if only to prepare them to become higher-tier analysts over time, Yoran said.


MCP doesn’t move data. It moves trust

Many assume MCP will replace APIs, but it can’t and shouldn’t. MCP defines how AI models can safely call tools; APIs remain the mechanisms that connect those tools to the real world. Without APIs, an MCP-enabled AI can think, reason and recommend, but it can’t act. Without MCP, those same APIs remain open highways with no traffic rules. Autonomy requires both. MCP will give rise to a new class of enterprise software: AI control planes that sit between reasoning and execution. These systems will combine access policy, auditing, explainability and version control — the governance scaffolding for safe autonomy. But governance alone isn’t enough. Logging requests does not make them effective. Without APIs, MCP remains a supervisory layer, not an operational one. The future belongs to systems that can both decide responsibly and act reliably. ... MCP will not eliminate complexity. It will simply move it — from data management to decision management. The challenge ahead is to make that complexity visible, traceable and accountable. In enterprise AI, the real challenge is no longer technical feasibility; it’s moral architecture. The question is shifting from what AI can do to what it should be allowed to do. ... MCP represents the architecture of restraint, a new language of control between reasoning and reality. APIs will keep moving data. MCP will govern how intelligence uses it. And when those two layers work in harmony, enterprises will finally move from systems that record what happened to systems that make things happen.
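MCP itself is a JSON-RPC-based protocol, and the sketch below is not MCP code; it is a toy Python illustration of the control-plane idea the article describes, in which a policy check and an audit entry sit between the model's proposed tool call and the API that actually executes it. The tool names, policy contents, and data are all hypothetical.

```python
import time

# Hypothetical registry of API-backed tools the model may request.
TOOLS = {
    "get_balance": lambda account: {"account": account, "balance": 1250},
}

# Hypothetical access policy and audit trail: the governance scaffolding.
POLICY = {"allowed_tools": {"get_balance"}, "role": "analyst"}
AUDIT_LOG = []

def handle_tool_call(request, policy=POLICY):
    """Control-plane gate: check the model's proposed call against policy,
    record an audit entry either way, then dispatch to the underlying API."""
    name, args = request["tool"], request.get("args", {})
    allowed = name in policy["allowed_tools"]
    AUDIT_LOG.append({"tool": name, "allowed": allowed, "ts": time.time()})
    if not allowed:
        return {"error": f"tool '{name}' denied by policy"}
    return {"result": TOOLS[name](**args)}

ok = handle_tool_call({"tool": "get_balance", "args": {"account": "acme"}})
denied = handle_tool_call({"tool": "delete_account", "args": {"account": "acme"}})
```

Note that the denied call still produces an audit entry: logging both decisions and actions is what turns the gate from a mere filter into the traceable "decision management" layer the article anticipates.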


AI Copilots for Good Governance and Efficient Public Service Delivery

While AI copilots hold immense potential for public service delivery, and India’s digital and policy landscape provides fertile ground for them, several challenges must be addressed to ensure their responsible and effective adoption at scale. One of the foremost concerns is data privacy and security. Copilots in governance will inevitably process large volumes of sensitive personal and financial data from citizens and businesses. Without adequate safeguards, this raises risks of misuse, unauthorised access, or surveillance overreach. The Digital Personal Data Protection Act, 2023, establishes a strong legal framework for data fiduciaries. Yet, its principles must be operationalised through privacy-preserving sandboxes, anonymised training datasets, and clear consent mechanisms tailored for AI-driven interfaces. ... Equally pressing is the challenge of algorithmic bias and fairness. AI copilots, if trained on unbalanced or non-representative datasets, can perpetuate linguistic, gender, or regional biases, disadvantaging marginalised users. To prevent such inequities, India’s AI governance could mandate fairness audits, algorithmic transparency, and explainability in all government-deployed copilots. This may be complemented by inclusive design standards that ensure accessibility across India’s diverse languages and digital contexts.


Fighting AI with AI: Adversarial bots vs. autonomous threat hunters

Attackers already have systemic advantages that AI amplifies dramatically. While there are some great examples of how AI can be used for defense, these methods, if used against us, could be devastating. ... It’s hard to gain context at that scale. Most companies have multiple defensive layers — and they all have flaws. Using weaknesses in those layers, attackers weave through them and create attack paths. The question is: How are we finding those paths before they do? ... The use of AI bots within a digital twin enables continuous, multi-threaded threat hunting and attack path validation without impacting production environments. This addresses the prioritization challenges that security and IT teams struggle with in a meaningful way. Really, digital twins offer the same benefits to security teams as physical twins provided to NASA scientists more than 55 years ago: accurate simulations of how a given change might impact large, complex and highly dynamic attack surfaces. Plus, it’s exciting to imagine how the UX might evolve to help defenders visualize what’s happening in unprecedented ways. ... AI is a truly transformational technology and it’s exciting to think about how AI defense can evolve over the next few years. I encourage product builders to think big. Why not draw inspiration from science fiction? 


AI is shaking up IT work, careers, and businesses - and here's how to prepare

"AI opened a whole new can of worms for security," said Tsai. "Overall, the demand for IT jobs is going to increase at three times the rate of all jobs." This generally presents a positive outlook for the IT industry, but it's also fueling a shift in how companies conduct hiring and what they are looking for. Spiceworks previewed its 2026 State of IT report, a survey that gathers insights from over 800 IT professionals at small and medium-sized companies on current trends, and found that the skills most in demand are reflecting the growth of AI. ... "If you are in IT, perhaps upleveling your skills, learning about AI is a very smart thing to do now. It can make you very productive, and it can help you do more or less," said Tsai. Taking it upon yourself to do this work is especially important because, as I cited during the panel, companies are investing a lot of money into AI solutions, but training is increasingly left behind or not prioritized. ... "When it comes to AI, whether it is bringing in completely and maybe doing a small language model to AI, or doing inferencing, or you can run many of the LLMs internally," said Rapozza. "Businesses are building up your construction to support those kinds of things." Does this level of investment mean companies are seeing an immediate ROI? Not exactly, but there is progress being made in that direction. As Rodrigo Gazzaneo, senior GTM Specialist, generative AI, Amazon Web Services (AWS), noted, companies are already seeing positive outcomes.


A developer’s Hippocratic Oath: Prioritizing quality and security with the fast pace of AI-generated coding

In the context of the medical field, physicians are taught ‘do no harm,’ and what that means is their highest duty of care is to make sure that the patient is first, and that they do not conduct any sort of treatments on the patient without first validating that that’s what’s best for the patient, ... The responsibility for software engineers is similar: when they’re asked to make a change to the codebase, they need to first understand what they’re being asked to do and make sure that’s the best course of action for the codebase. “We’re inundated with requests,” Johnson said. “Product managers, business partners, customers are demanding that we make changes to applications, and that’s our job, right? It’s our job to build things that provide humanity and our customers and our businesses value, but we have to understand what is the impact of that change. How is it going to impact other systems? Is it going to be secure? Is it going to be maintainable? Is it going to be performant? Is it ultimately going to help the customer?” ... “We all love speed, right? But faster coding is not actually producing a high quality product being shipped. In fact, we’re seeing bottlenecks and lower quality code.” He went on to say that testing is the discipline that could be most transformed by generative AI. It is really good at studying the code and determining what tests you’re missing and how to improve test coverage.
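As a tiny illustration of "what tests you're missing": a first-pass suite for a hypothetical `clamp` helper often covers only the happy path, while the out-of-range and boundary cases a reviewer (human or AI assistant) would flag are exactly the ones that catch regressions. The function and tests below are an invented example, not from the article.

```python
def clamp(value, low, high):
    """Constrain `value` to the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# A typical first-pass test covers only the happy path...
assert clamp(5, 0, 10) == 5

# ...while a coverage review flags the untested branches and boundaries:
assert clamp(-3, 0, 10) == 0    # below range: lower branch
assert clamp(42, 0, 10) == 10   # above range: upper branch
assert clamp(10, 0, 10) == 10   # boundary value stays in range
```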


API Key Security: 7 Enterprise-Proven Methods to Prevent Costly Data Breaches

To prevent API keys from leaking, the first and foremost rule is, as you guessed, never store them in the code. Embedding API keys directly in client-side code or committing them to version control systems is, no doubt, a recipe for disaster: Anyone who can access the code or the repository can steal the keys. ... Implementing an API key storage system yourself? Out of the question, because securely storing and managing API keys brings tremendous operational overhead across storage, management, usage, and distribution. ... API Gateways, like AWS API Gateway, Kong, etc., are designed to solve these problems, simplifying and centralizing the management of all APIs, providing a single entry point for all requests. Features like limiting, throttling, and DDoS protection are baked in; API gateways can also provide centralized logging and monitoring; they even provide more features like input validation, data masking, and response filtering. ... All the above practices enhance API security in either the usage/storage or production environment, but there is another area where API keys could be compromised: the continuous integration/continuous deployment systems and pipelines. By nature, CI/CD involves running automation scripts and executing commands in a non-interactive way, which sometimes requires API keys, and this means the keys need to be stored somewhere and passed to the pipelines at runtime.
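A minimal Python sketch of the first rule, keeping keys out of source: read the key from the environment at runtime, fail fast when it is absent, and mask it whenever it appears in logs. The variable name `PAYMENTS_API_KEY` and the demo value are hypothetical; in production the environment would be populated by a secrets manager or the CI/CD system, never a committed file.

```python
import os

def load_api_key(env_var="PAYMENTS_API_KEY"):
    """Read an API key from the environment at runtime instead of
    embedding it in source; fail fast if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return key

def masked(key):
    """Show only the last 4 characters when logging, never the full key."""
    return "*" * max(len(key) - 4, 0) + key[-4:]

# Demo only: a real deployment injects this via its secret store.
os.environ.setdefault("PAYMENTS_API_KEY", "sk-test-1234abcd")
key = load_api_key()
```

The fail-fast check matters as much as the lookup itself: a service that silently starts without credentials tends to surface the problem much later, as a confusing authentication error deep in a request path.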

Daily Tech Digest - November 14, 2025


Quote for the day:

"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh



When will browser agents do real work?

Vision-based agents treat the browser as a visual canvas. They look at screenshots, interpret them using multimodal models, and output low-level actions like “click (210,260)” or “type “Peter Pan”.” This mimics how a human would use a computer—reading visible text, locating buttons visually, and clicking where needed. ... DOM-based agents, by contrast, operate directly on the Document Object Model (DOM), the structured tree that defines every webpage. Instead of interpreting pixels, they reason over textual representations of the page: element tags, attributes, ARIA roles, and labels. ... Running a browser agent once successfully doesn’t mean it can repeat the task reliably. The next frontier is learning from exploration: transforming first-time behaviors into reusable automations. A promising strategy starting to be deployed more and more is to let agents explore workflows visually, then encode those paths into structured representations like DOM selectors or code. ... With new large language models excelling at writing and editing code, these agents can self-generate and improve their own scripts, creating a cycle of self-optimization. Over time, the system becomes similar to a skilled worker: slower on the first task, but exponentially faster on repeat executions. This hybrid, self-improving approach—combining vision, structure, and code synthesis—is what makes browser automation increasingly robust. 
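The "encode explored paths into structured representations" idea above can be sketched in a few lines of Python. This is a toy illustration, not any real agent framework: after a vision-based first run, each step is recorded as a DOM selector plus an action, so repeat executions can replay deterministically, and a missing selector signals that the page changed and visual exploration is needed again. The selectors and the fake DOM are invented for the example.

```python
# Hypothetical structured encoding of an explored workflow: one dict
# per step, with a DOM selector, an action, and optional input text.
WORKFLOW = [
    {"selector": "#search",     "action": "type",  "value": "Peter Pan"},
    {"selector": "#search-btn", "action": "click"},
    {"selector": ".result a",   "action": "click"},
]

def replay(workflow, dom):
    """Replay recorded steps against a page. Here `dom` is a stand-in
    mapping selectors to fake elements; a real agent would drive a
    browser. A missing selector means the cached script is stale."""
    trace = []
    for step in workflow:
        if dom.get(step["selector"]) is None:
            raise LookupError(
                f"selector {step['selector']!r} not found; "
                "fall back to visual exploration"
            )
        trace.append((step["action"], step["selector"], step.get("value")))
    return trace

fake_dom = {"#search": {}, "#search-btn": {}, ".result a": {}}
trace = replay(WORKFLOW, fake_dom)
```

The design point is the division of labor: the slow, model-driven first run pays for exploration once, and the cheap structured replay handles every repeat, with the `LookupError` path routing the agent back to exploration only when the page actually changes.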


Security Degradation in AI-Generated Code: A Threat Vector CISOs Can’t Ignore

LLMs have been a boon for developers since OpenAI’s ChatGPT was publicly released in November 2022, followed by other AI models. Developers were quick to utilize the tools, which significantly increased productivity for overtaxed development teams. However, that productivity boost came with security concerns, such as AI models trained on flawed code from internal or publicly available repositories. Those models introduced vulnerabilities that sometimes spread throughout the entire software ecosystem. One way to address the problem was by using LLMs to make iterative improvements to code-level security during the development process, under the assumption that LLMs, given the job of correcting mistakes, would amend them. The study, however, turns that assumption on its head. Although previous studies (and extensive real-world experience, including our own data) have demonstrated that an LLM can introduce vulnerabilities in the code it generates, this study went a step further, finding that iterative refinement of code can introduce new errors. ... The security degradation introduced in the feedback loop raises troubling questions for developers, tool designers and AI safety researchers. The answer to those questions, the authors write, involves human intervention. Developers, for instance, must maintain control of the development process, viewing AI as a collaborative assistant rather than an autonomous tool.


Are We in the Quantum Decade?

It would be prohibitively expensive even for a Fortune 100 company to own, operate and maintain its own quantum computer. It would require a quantum ecosystem that includes government, academia and industry entities to make it accessible to an enterprise. In most cases, the push and funding could come from the government or through cooperation among nations. Historically, new computing technology was rented and used as a service. Compute resources financed by government were booked in advance. Processing occurred in batches using resource-sharing techniques such as time slicing. Equivalent models are expected for quantum processing. ... The era of quantum computing looms large, but enterprises and IT teams should be thinking about it today. Infrastructure needs to be deployed and algorithms need to be written for executing business use cases. "For several years to come, CIOs may not have much to do with quantum computing. But they need to know what it is, what it can do and how much it costs," said Lawrence Gasman, president of Communications Industry Researchers. "Quantum networks and cybersecurity will become necessary for secure communications by 2030 or even earlier." Quantum computing will not replace classical computing, but data center providers need to be thinking about how they will integrate the two architectures using interconnects like co-packaged optics.


When Data Gravity Meets Disaster Recovery

The more data aggregates in one place, the more it pulls everything else toward it: apps, analytics, integrations, even people and processes. Over time, that environment becomes a tightly woven web of dependencies. It may be fine for day-to-day operations, but it becomes a nightmare when something breaks. At that point, DR turns into the delicate task of relocating an entire ecosystem, not simply copying files. You have to think about relationships: which systems rely on which datasets, how permissions are mapped, and how applications expect to find what they need. Of course, the bigger that web gets, the heavier the “gravitational field.” Moving petabytes of interconnected data across regions or clouds isn’t fast or easy. It takes time, bandwidth, and planning, and every extra gigabyte adds friction; in other words, the more gravity your data has, the harder it is to recover from disaster quickly. ... To push back against gravity, organizations are rethinking their architectures. Instead of forcing all data into one environment, they’re distributing it intelligently, keeping mission-critical workloads close to where they’re created while replicating copies to nearby or complementary environments for protection. Hybrid and multi-cloud DR strategies have become the go-to solution for this. They blend the best of both worlds: the low-latency performance of local infrastructure with the flexibility and geographic reach of cloud storage.
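The friction described above is easy to quantify with a back-of-envelope calculation (figures are illustrative and assume perfectly sustained throughput, which real links never achieve):

```python
def transfer_hours(data_tb: float, bandwidth_gbps: float) -> float:
    """Hours to move `data_tb` terabytes over a `bandwidth_gbps` link,
    assuming ideal, sustained throughput (a best case)."""
    bits = data_tb * 1e12 * 8               # terabytes -> bits
    seconds = bits / (bandwidth_gbps * 1e9)  # bits / bits-per-second
    return seconds / 3600

# 1 PB (1000 TB) over a dedicated 10 Gbps link:
# 8e15 bits / 1e10 bps = 8e5 s, roughly 222 hours, i.e. over nine days
```

Nine-plus days of best-case copy time for a single petabyte is exactly why "just fail over to another region" is not a plan when data gravity has concentrated everything in one place.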


What’s Driving the EU’s AI Act Shake-Up?

The move to revise the AI Act follows sustained lobbying from US tech giants. In October, the Computer and Communications Industry Association (CCIA), whose members include Apple, Meta, and Amazon, launched a campaign pushing for simplification not only of the AI Act but of the EU’s entire digital rulebook. Meanwhile, EU officials have reportedly engaged with the Trump administration on these issues. ... The potential delay reflects pressure from national authorities. Denmark and Germany have both pushed for a one-year extension. A spokesperson from Germany’s Federal Ministry for Digital Transformation and Government Modernization said that a delay “would allow sufficient time for the practical application of European standards by AI providers, with standards still currently being elaborated.” ... Another major reform under consideration is expanding and centralizing oversight powers within the Commission’s AI Office. Currently responsible for general-purpose AI models (GPAI), the office would gain new authority to oversee all AI systems based on GPAI and conduct conformity assessments for certain high-risk systems. The Commission would also gain new authority to perform conformity assessments for certain high-risk systems and supervise online services deemed to pose “systemic risk” under the Digital Services Act. This would shift more power to Brussels and expand the mandate of the Commission’s AI Office beyond its current role supervising GPAI.


BITS & BYTES : The Foundational Lens for Enterprise Transformation

BITS serves as high-level strategic governance—ensuring balanced maturity assessments across business alignment, information-centric decision-making, technology enablement, and security resilience—while leveraging BDAT’s detailed sub-domains (layers and components) for tactical implementation and operational oversight. This allows organizations to maintain BDAT’s precision in decomposing complex IT landscapes (e.g., mapping specific data architectures or application portfolios) within BITS’s overarching pillars, fostering adaptive governance that scales from atomic “bits” of change to enterprise-wide transformations ... If BITS defines what must be managed, BYTES (Balanced Yearly Transformation to Enhance Services) defines how change must be processed. BYTES is more than a set of principles; it is a derivative of the core architectural lifecycle: Plan (Balanced Yearly), Design & Build (Transformation Enhancing), and Run (Services). Each component of BYTES directly maps to the mandatory stages of a continuous transformation framework, enabling architects to manage change at its source. ... The BITS & BYTES framework is not intended to replace existing architecture frameworks (e.g., TOGAF, Zachman, DAMA, IT4IT, SAFe). Instead, it acts as a meta-framework—a simplified, high-level matrix that accommodates and contextualizes the applicability of all existing models.


Unlocking GenAI and Cloud Effectiveness With Intelligent Archiving

Unlike tiering, which functions like a permanent librarian selectively fetching individual files from deep storage, true archiving is a one-time event that moves files based on defined policies, such as last access or modification date. Once archived, files are stored on a long-term platform and remain accessible without reliance on any intermediary system or application. In this context, one of the main challenges is that most enterprise data is unstructured, including everything from images and videos to emails and social media content. Collectively, these vast and diverse data lakes present a formidable management challenge, and without rigorous control, organizations risk falling victim to the classic “garbage in, garbage out” problem. ... Modern archiving technologies that connect directly to both primary and archive storage platforms eliminate the need for a middleman, drastically improving migration speed, accuracy, and long-term data accessibility. This means organizations can migrate only what’s necessary, ensuring high-value data is cloud-ready while offloading cold data to cost-efficient archival platforms. This not only reduces cloud storage costs but also supports the adoption of cloud-native formats, enabling greater scalability and performance for active workloads. ... In modern enterprises, more than 60% of data is typically inactive and often goes untouched for years, yet it still sits on high-performance (and high-cost) storage.
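The policy-driven selection described above can be sketched minimally: flag files whose last modification is older than a cutoff as candidates for the one-time move to the archive tier. The in-memory `files` mapping is an assumption for the sketch; in practice the timestamps would come from `os.stat().st_mtime` or a storage platform's metadata API.

```python
import time

def archive_candidates(files, max_age_days=365, now=None):
    """Select files not modified within `max_age_days`.

    `files` maps path -> last-modified POSIX timestamp. Returns the
    paths a policy-driven archiving pass would move to long-term
    storage, leaving active data on the primary tier untouched.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    return [path for path, mtime in files.items() if mtime < cutoff]
```

Real deployments layer more policy on top (last access rather than modification, size floors, exclusion lists for records under legal hold), but the core is the same: a declarative rule, applied once, rather than a tiering daemon permanently in the data path.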


Why 60% of BI Initiatives Fail (and How Enterprises Can Avoid It)

Many BI projects fail because goals and outcomes aren’t clearly defined. While enterprises may be confident that they understand BI gaps, their goals are often vague, lacking proper detail and internal consensus. ... Poor project management practices, vague processes, and changing responsibilities create even more confusion. In many failed BI projects, BI is viewed as “just another IT initiative,” whereas it should be treated as part of a business transformation program. Without active sponsorship and accountability, the technology may be delivered, but its adoption and impact suffer. ... Agile and iterative methods are often preferred because they are effective for BI. The waterfall method, by contrast, is not recommended for BI projects: it lacks the agility needed to adapt to changing requirements, iterative data exploration, and continuous business feedback. Under the waterfall approach, users are engaged only at the beginning and end of the project, which leaves gaps when development or data-analysis issues arise. ... A system is only as good as the people who use it; research has shown that 55% of users lack confidence in BI tools due to insufficient training. Enterprises often expend considerable resources on deployment but neglect enablement. If employees can’t work out how to navigate dashboards, understand data quality and visualizations, or use insights to make daily decisions, adoption rates suffer.


Authentication in the age of AI spoofing

Unlike traditional malware, which may find its way into networks through a compromised software update or download, AI-powered threats use machine learning to analyze how employees authenticate to networks: when they log in, which devices they use, their typing patterns and even mouse movements. The AI learns to mimic legitimate behavior while collecting login credentials and is ultimately deployed to evade basic detection. ... Beyond the statistics, AI’s effectiveness is driven by its rapidly improving ability to socially engineer humans — replicating writing style, voice cadence, facial expressions or speech with subtle nuance and adding realistic context by scanning social media and other publicly available references. The data is striking and reflects the crucial need for a multi-layer approach to counter AI’s escalating ability to trick humans. ... Cryptographic protection complements biometric authentication: biometrics verify “Is this the right person?” at the device level, while passkeys verify “Is this the right website or service?” at the network level. Multi-modal biometrics, such as facial recognition plus fingerprint scanning or biometrics plus behavioral patterns, further strengthen this approach. As AI makes credential theft and impersonation attacks more sophisticated, the only sustainable line of defense is a form of authentication that cannot be tricked because it can be cryptographically verified.


Why your security strategy is failing before it even starts

The biggest mistake I see among organizations is initiating cybersecurity efforts with technology rather than prioritizing risk and business alignment. Cybersecurity is often mischaracterized as a technical issue, when in reality it’s a business risk management function. Failure to establish this connection early often results in fragmented decision-making and limited executive engagement. Effective cybersecurity strategies should be embedded into business objectives from the outset. This requires identifying the business’s critical assets, assessing potential threats and motivations, and evaluating the impact if those assets are compromised. Too often, CISOs jump straight into acquiring cybersecurity tools without addressing these questions. ... First, the threat landscape shifted dramatically. Cybersecurity attacks today target OT and ICS. In food manufacturing, those systems run production lines, refrigeration, and safety processes. A cyber incident in these areas extends beyond data loss; it can disrupt production and even compromise food safety, introducing a far more complex level of risk. Second, it became evident to me that cybersecurity cannot operate in isolation. It must support and enable business operations and growth. Today, my approach is risk-based and aligned with our business priorities, while still built on zero trust principles. We focus on resilience, not just compliance, and OT security is a core pillar of that strategy.

Daily Tech Digest - November 13, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



Does your chatbot have 'brain rot'? 4 ways to tell

Oxford University Press, publisher of the Oxford English Dictionary, named "brain rot" as its 2024 Word of the Year, defining it as "the supposed deterioration of a person's mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging." ... Trying to draw exact connections between human cognition and AI is always tricky, despite the fact that neural networks -- the digital architecture upon which modern AI chatbots are based -- were modeled upon networks of organic neurons in the brain. ... That said, there are some clear parallels: as the researchers note in the new paper, for example, models are prone to "overfitting" data and getting caught in attentional biases ... If the ideal AI chatbot is designed to be a completely objective and morally upstanding professional assistant, these junk-poisoned models were like hateful teenagers living in a dark basement who had drunk way too much Red Bull and watched way too many conspiracy theory videos on YouTube. Obviously, not the kind of technology we want to proliferate. ... Obviously, most of us don't have a say in what kind of data gets used to train the models that are becoming increasingly unavoidable in our day-to-day lives. AI developers themselves are notoriously tight-lipped about where they source their training data from, which means it's difficult to rank consumer-facing models


7 behaviors of the AI-Savvy CIO

"The single most critical takeaway for CIOs is that a strong data foundation isn't optional -- it's critical for AI success. AI has made it easy to build prototypes, but unless you have your data in a single place, up to date, secured, and well governed, you'll struggle to put those prototypes into production. The team laying the groundwork for that foundation and getting enterprises' data AI-ready is data engineering. CIOs who still see data engineering as a back-office function are already five years behind, and probably training their future competitors. ... "Your data will never be perfect. And it doesn't have to be. It needs to be indicative of your company's reality. But your data will get a lot better if you first use AI to improve the UX. Then people will use your systems more, and in the way intended, creating better data. That better data will enable better AI. And the virtuous cycle will have begun. But it starts with the human side of the equation, not the technological." ... CIOs don't need deep technical mastery such as coding in Python or tuning neural networks -- but they must understand AI fundamentals. This includes grasping core AI principles, machine learning concepts, statistical modeling, and ethical implications. Mastery starts with CIOs understanding AI as an umbrella of technologies that automate different things. With this foundational fluency, they can ask the right questions, interpret insights effectively, and make informed strategic decisions. Let's look at the three AI domains.


The economics of the software development business

Some software companies quietly tolerated piracy, figuring that the more their software spread—even illegally—the more legitimate sales would follow in the long run. The argument was that if students and hobbyists pirated the software, it would lead to business sales when those people entered the workforce. The catchphrase here was “piracy is cheaper than marketing.” This was never an official position, but piracy was often quietly tolerated. ... Over the years, the boxes got thinner and the documentation went onto the Internet. For a time, though, “online help” meant a *.hlp file on your hard drive. People fought hard to keep that type of online help well into the Internet age. “What if I’m on an airplane? What if I get stranded in northern British Columbia?” Eventually, the physical delivery of software went away as Internet bandwidth allowed for bigger and bigger downloads. ... SaaS too has interesting economic implications for software companies. The marketplace generally expects a free tier for a SaaS product, and delivering free services can become costly if not done carefully. The compute costs money, after all. An additional problem is making sure that your free tier is good enough to be useful, but not so good that no one wants to move up to the paid tiers. ... The economics of software have always been a bit peculiar because the product is maddeningly costly to design and produce, yet incredibly easy to replicate and distribute. The years go by, but the problem remains the same: how to turn ideas and code into a profitable business?


Beyond the checklist: Shifting from compliance frameworks to real-time risk assessments

One of the most overlooked aspects of risk assessments is cadence. While gap analyses are sometimes done yearly or to prepare for large-scale audits, risk assessments need to be continuous or performed on a regular schedule. Threats do not respect calendar cycles. Major changes, including new technologies, mergers, regulatory changes or implementing AI, need to trigger reassessments. ... Risk assessments should culminate in outputs that business leaders can act on. This includes a concise risk heat map, a prioritized remediation roadmap and clear asks, such as budget, ownership and timelines. These deliverables convert technical findings into strategic decisions. They also help build trust with stakeholders, especially in organizations that may be new to formal risk management. ... Targeted risk assessments can be viewed as a low-cost, fundamental option. They are best suited to companies that have limited budget or are not prepared for a full review of the framework. With reduced scope, shorter turnaround and transparent business value, such assessments enable rapid establishment of trust, delivering prioritized outcomes. ... Risk assessments are not just checkboxes. They are tools for making decisions. The best programs are aligned with the business, focused, consistent and made to change over time.
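The "concise risk heat map" deliverable mentioned above can be sketched as a likelihood-times-impact scoring pass; the 1-5 scales and band thresholds here are illustrative conventions, not a prescribed methodology.

```python
def heat_map(risks):
    """Bucket risks into heat-map bands by likelihood x impact.

    `risks` is a list of (name, likelihood, impact) tuples, each
    factor scored 1-5. The sorted bands feed a prioritized
    remediation roadmap, most severe risks first.
    """
    bands = {"high": [], "medium": [], "low": []}
    for name, likelihood, impact in risks:
        score = likelihood * impact
        if score >= 15:
            bands["high"].append((name, score))
        elif score >= 6:
            bands["medium"].append((name, score))
        else:
            bands["low"].append((name, score))
    for band in bands.values():
        band.sort(key=lambda r: -r[1])  # most severe first
    return bands
```

The value of the exercise is less in the arithmetic than in the conversation it forces: leaders agree on likelihood and impact scores per risk, and the bands then translate directly into budget, ownership, and timeline asks.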


Legitimate Interest as a Lawful Basis: Pros, Cons and the Indian DPDP Act’s Stance

Under the EU’s General Data Protection Regulation (GDPR), for example, a company can process data if it is “necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (Article 6(1)(f) GDPR), so long as those interests are not overridden by the individual’s rights. However, India’s new Digital Personal Data Protection Act, 2023 (DPDP Act) pointedly does not include legitimate interest as a standalone lawful ground for processing. Instead, the Indian law relies primarily on consent and a limited set of “legitimate uses” explicitly enumerated in the statute. This divergence raises important questions about the pros and cons of the legitimate interest basis, its impact on the free flow of data, and whether India might benefit from adopting a similar concept. ... India’s decision to omit a general legitimate interest clause has sparked debate. There are advantages and disadvantages to this approach, and its impact on data flows and innovation is a key consideration. Pros / Rationale for Omission: From a privacy rights perspective, the absence of an open-ended legitimate interest basis means stronger individual control and legal certainty. The law explicitly tells citizens and businesses what the non-consensual exceptions are (mostly common-sense or public-interest scenarios) and that everything else by default requires consent.


CIOs: Collect the right data now for future AI-enabled services

In its Technology Trends Outlook for 2025 report, McKinsey suggests the technology landscape continues to undergo significant innovation-sponsored shifts. The consultant says success will depend on executives identifying high-impact areas where their businesses can use AI, while addressing external factors such as regulatory shifts and ecosystem readiness. CIOs, as the guardians of enterprise technology, will be expected to embrace this challenge, but how? For Steve Lucas, CEO at technology specialist Boomi, digital leaders must start with a recognition that the surfeit of data held in modern enterprises is simply a starting point for what comes next. “There’s plenty of data,” he says. “In fact, there’s too much of it. We worry about collecting, storing, and accessing data. I think a successful approach is about determining the data that matters. As a CIO, do you understand what data matters today and what emerging technologies will matter tomorrow?” ... While it can be tough to see the wood for the trees, Corbridge suggests CIOs should search for established data roots within the enterprise. “It’s about going back to the huge volumes of data you’ve got already and working out how you put that information in the right place so it can be used in the right way for your AI projects,” he says. Focusing on the fine details is an approach that chimes with Ian Ruffle, head of data and insight at UK breakdown specialist RAC.


How TTP-based Defenses Outperform Traditional IoC Hunting

To fight modern ransomware, organizations must shift from chasing IoCs to detecting attacker behaviors — known as Tactics, Techniques, and Procedures (TTPs). The MITRE ATT&CK framework provides a detailed overview of these behaviors throughout the attack lifecycle, from initial access to impact. TTPs are challenging for attackers to modify because they represent core behavioral patterns and strategic approaches, unlike IoCs, which are surface-level elements that can be easily altered. This shift is reinforced by the so-called ‘Pyramid of Pain’ – a conceptual model that ranks indicators by how difficult they are for adversaries to alter. At the base are easily changed elements like hash values and IP addresses. At the top are TTPs, which represent the attacker’s core behaviors and strategies. Disrupting TTPs forces adversaries to change their entire strategy, which makes behavior-based detection the most effective approach for defenders and the most resource-consuming one for attackers to evade. ... When security and networking are natively integrated, policy enforcement is consistent, micro-segmentation is practical, and containment actions can be executed inline without stitching together multiple consoles. The cloud model also enables continuous, global updates to prevention logic and the ability to apply AI/ML on aggregated, high‑fidelity data feeds to reduce noise and improve detection quality. All this reminds me of the OODA military model that can help speed up incident response.
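A toy sketch makes the contrast concrete: a hash-based IoC check breaks as soon as the payload is recompiled, while a behavioral rule keys on the sequence of actions. The example below flags an Office process spawning a shell that deletes volume shadow copies, a well-documented ransomware behavior (ATT&CK's Inhibit System Recovery); the event format is an assumption for illustration, not any product's schema.

```python
def matches_ttp(events):
    """Flag a process event stream showing ransomware-like behavior:
    a document editor spawning a shell that tampers with backups.

    Events are (parent_process, child_process, command_line) tuples,
    as a simplified stand-in for EDR telemetry.
    """
    office_apps = {"winword.exe", "excel.exe"}
    shells = {"cmd.exe", "powershell.exe"}
    for parent, child, cmdline in events:
        if (parent in office_apps and child in shells
                and "vssadmin delete shadows" in cmdline.lower()):
            return True
    return False
```

Note what the rule does not depend on: the payload's hash, its filename, or its C2 address. An attacker can rotate all of those for free; abandoning the shadow-copy deletion step means abandoning part of the strategy, which is exactly the top of the Pyramid of Pain.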


Healthcare security is broken because its systems can’t talk to each other

To maintain effectiveness, healthcare organizations should continually evaluate their security toolset for relevance, integration potential, and overall value to the security program. Prioritizing solutions that support open standards and seamless integration helps minimize context switching and alert fatigue, while ensuring that the security team remains engaged and productive. Ultimately, the decision to balance specialized point solutions with broader integrated platforms must be guided by strategic priorities, resource capacity, and the need to support both operational efficiency and clinical excellence. ... A critical consideration is the interoperability of security tools across both cloud and on-premises environments. Healthcare organizations must assess whether their security solutions need to span multiple cloud providers, support on-premises systems, or both, and determine how long on-premises support will be necessary as applications gradually shift to the cloud. Cloud providers are increasingly acquiring and integrating advanced security technologies, offering unified solutions that reduce the need for multiple monitoring platforms. This consolidation not only lessens alert fatigue but also enhances real-time visibility into security threats, an important advantage for healthcare, where timely detection is vital for protecting patient data and ensuring clinical continuity.


The Hard Truth About Trying to AI-ify the Enterprise

Every company has a few proofs of concept running. The problem isn't experimentation, it's scalability. How do you take those POCs and embed them into your business fabric? Many enterprises get trapped in "PowerPoint transformation": ambitious visions that never cross into operational workflows. "I've seen organizations invest millions in analytics platforms and data lakes. But when you ask how that's translating into faster underwriting, better risk models or smarter supply chains, there's often silence," Sen said. "That's because AI doesn't fail at the technology layer - it fails at the architecture of adoption." ... The central challenge, Sen argues, is that most enterprises treat AI as an overlay rather than an integral part of their operational core. "You can't bolt it on top of outdated systems and expect it to transform decision-making. The technology stack, data flow and governance model all need to evolve together," he said. That evolution is what Gartner describes as the pivot from "defending AI pilots to expanding into agentic AI ROI." Organizations that mastered generative AI in 2024 are now moving toward autonomous, interconnected systems that can execute tasks without human micromanagement. Sen points to his own experience at Linde plc as an early example. His team's first gen AI deployment at Linde was built for the audit department. "Our internal audit head wanted a semantic search database and a generative model to cut audit report generation time by half," he said.


Designing Edge AI That Works Where the Mission Happens

The fastest way to make federal AI deployments more reliable is to build on existing systems rather than start from scratch. Every mission already generates valuable data – drone imagery, satellite feeds, radar signals, logistics updates — that tells part of the operational story. AI at the edge helps teams interpret that data faster and more accurately. Instead of rebuilding infrastructure, agencies can embed lightweight, mission-specific AI models directly into their existing systems. ... When AI leaves the data center, its security model must accompany it. Systems operating at the edge face distinct risks, including limited oversight, contested networks, and the potential for physical compromise. Protection has to be built into every layer of the system, from the silicon up, to ensure full-scale security. That starts with end-to-end encryption, protecting data at rest, in transit, and during inference. Hardware-based features, such as secure boot and Trusted Execution Environments, prevent unauthorized code from running, while confidential computing keeps information encrypted even as it’s being processed. ... A decade ago, deploying AI in remote or contested locations required racks of hardware and constant connectivity. Today, a laptop or a single sensor array can deliver that same intelligence locally, securely, and autonomously. The power of AI and edge computing isn’t measured in size or speed; it’s in relevance.

Daily Tech Digest - November 12, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



Agentic AI and Solution Architects

Agentic AI tools are intelligent systems designed to operate with autonomy, agency, and authority—three foundational concepts that define their ability to act independently, pursue goals on behalf of users, and make impactful decisions within defined boundaries. These systems are often built using a multi-agent architecture, where multiple specialized or generalist agents collaborate, either in centralized or decentralized environments. ... As (IT) architects we drive change that creates business opportunities through technical innovation. One of the key activities of a Solution Architect is to design solutions by applying methods and techniques combined with technical and business expertise. The actual solution design process will follow a similar pattern to that of a creative technology design process. An architect will combine and group the different components together according to stakeholder group and will, over several sessions, develop concept views related to key architectural components, establishing different options. Deciding the “right” option will mean balancing the various criteria like functionality, value for money, compliance, quality, and sustainability. IT architecture design involves complex decision-making, planning, and problem-solving that require human expertise and experience. That is where most of the architect’s work is focused on – using knowledge and experience to research a particular subject, to apply design thinking and to solve problems to establish a solution. 


Shadow AI risk: Navigating the growing threat of ungoverned AI adoption

Only half (52%) of global organizations claim to have comprehensive controls in place, with smaller companies lagging even further behind. This lack of robust governance and visibility leaves organizations vulnerable to data breaches, compliance failures, and security risks. For many organizations, AI controls are lacking. ... As AI systems become more autonomous and capable of acting on behalf of users, the risks grow even more complex. The rise of agentic AI, which can make decisions and take independent action within systems, amplifies the impact of weak identity security controls. As these advanced AI systems are given more control over critical systems and data, the potential risk of security breaches and compliance failures grows exponentially. To keep pace, security teams must evolve their identity security strategies to include these emerging machine entities, treating them with the same rigor as human identities. ... To effectively mitigate the risks associated with shadow AI and ungoverned AI adoption, organizations need to start with a solid foundation of governance and visibility. That means implementing clear acceptable use guidelines, access controls, activity logging and auditing, and identity governance for AI entities. By treating AI entities as identities that are subject to the same authentication, authorization, and monitoring as human users, organizations can safely harness the benefits of AI without compromising security.


Secure Product Development Framework: More Than Just Compliance

Security risk assessment is a key SPDF activity that starts early in development and continues throughout the product life cycle, including on-market support and eventual product retirement. FDA guidance references AAMI SW96, “Standard for medical device security - Security risk management for device manufacturers,” as a recommended standard for a security risk assessment process. Security risk assessment considers both safety and business security risks ... Implementing a clear and consistent security risk assessment process within the SPDF can also save time (and money). Focus can be placed on the areas of the design with the highest security risk, instead of on areas with little to no security risk. Decisions on whether patches need to be applied in the field are easier to make when based on security risk. Leveraging the same security risk process across products and business areas allows teams to focus on execution rather than designing a new process. Once a product is launched, an SPDF can assist with managing that product. Postmarket SPDF activities include vulnerability monitoring/disclosure, patch management, and incident response. A critical component of vulnerability monitoring is the maintenance and continuous use of a software bill of materials (SBOM). The SBOM provides a machine-readable inventory of all custom, commercial, open-source, and third-party software components within the device.
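To make the SBOM's role in vulnerability monitoring concrete, here is a minimal sketch that scans a simplified CycloneDX-style component list against a hypothetical advisory feed keyed by package name and version. The advisory ID and component data are invented; a real workflow would use full SBOM tooling and a live vulnerability database.

```python
# Minimal sketch of SBOM-driven vulnerability monitoring. The SBOM below
# follows the CycloneDX shape (components with name/version); the
# advisory feed and its ID are hypothetical examples.

sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"type": "library", "name": "openssl", "version": "1.1.1k"},
        {"type": "library", "name": "zlib", "version": "1.3.1"},
    ],
}

# Hypothetical advisories: affected (name, version) -> advisory ID
advisories = {("openssl", "1.1.1k"): "EXAMPLE-2025-0001"}

def affected_components(sbom, advisories):
    """Return (name, version, advisory) for each component that matches
    a known advisory; untouched components are simply not returned."""
    return [
        (c["name"], c["version"], advisories[(c["name"], c["version"])])
        for c in sbom["components"]
        if (c["name"], c["version"]) in advisories
    ]

print(affected_components(sbom, advisories))
# flags the openssl component; zlib passes
```

Because the check is mechanical, it can run continuously as new advisories arrive, which is exactly why the article stresses maintenance and continuous use of the SBOM rather than a one-time export.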


Vibe Coding Can Create Unseen Vulnerabilities

Vibe coding does accelerate app prototyping and makes software collaboration easier, but it also has several shortcomings. Security is a serious concern. Large language models (LLMs) are inherently prone to producing vulnerable code when used by those without sufficient security experience. Moreover, the risk is amplified because AI is so flexible that there are no simple, universal rules for making it write secure code. LLMs may use outdated libraries, lack input validation, or fail to follow secure practices. AI code generators also lack an understanding of trust boundaries and system architectures. When using vibe coding, programmer oversight and review are necessary to prevent these issues from entering production code. Working with black-box code also makes it difficult to provide context about the app. For example, improper configurations may expose internal logic by sending sensitive code snippets to external APIs. This can be a real problem in highly regulated industries with strict rules about code handling. Vibe coding also tends to add technical debt, accumulating unreviewed or unexplained blocks of code. Over time, these blocks proliferate, making the codebase harder to maintain. And because less experienced developers tend to rely on vibe coding, they can overlook security issues. Consider the recent Tea Dating Advice hack. A hacker was able to access 72,000 images stored in a public …
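The "lack input validation" failure mode above is easiest to see side by side. The sketch below contrasts a string-built SQL query, a pattern AI assistants sometimes emit, with a parameterized one; the schema and data are made up for illustration.

```python
# Illustration of the input-validation gap: string-built SQL is
# injectable, while a parameterized query treats input strictly as data.
# Table and data are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the query text.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized: the driver binds the value, so it cannot alter the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: injection succeeded
print(find_user_safe(payload))    # returns nothing: input treated as data
```

Both functions look equally plausible at a glance, which is the article's point: without review by someone who knows what to look for, the unsafe version sails into production.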


The state of cloud-native computing in 2025

“We’ve reached a level of maturity in the cloud-native ecosystem that people might think that things are now a bit boring. While AI is a natural extension of Kubernetes and cloud-native architectures, there are changes required in the architecture to support AI workloads compared to previous workloads. Platform engineering continues to have strong customer interest… and new AI enhancements allow for even greater productivity for developers and operators. ...” said Miniman ... “However, runaway complexity and cost threaten to derail mass enterprise success. The modern observability stack has become a black hole, delivering insufficient value for its exorbitant cost, and demands a fundamental rethink of data management. Simultaneously, the data lakehouse gamble failed, proving too complex and expensive. The imperative is clear: pull workloads back onto central platforms through democratized data management,” said Zilka. ... “The focus has shifted from how quickly I can deploy, to how I can get a handle on costs and how resilient my platform is to changes or outages like we saw recently with AWS. Teams are recognising the overhead these technologies have introduced for developers and are centralising that work. We’re seeing more platform teams set best practices, use tooling to enforce them and move from “adoption mode” to “operational excellence,” said Rajabi.


Insurability now a core test for boardroom AI & climate strategy

Organisations face growing threats from data poisoning and cyber-attacks, prompting insurers to play a more decisive role in risk management. Levent Ergin, Chief Climate, Sustainability & AI Strategist at Informatica, highlighted the increasing scrutiny on what businesses can insure against. ... AI is now a fixture at board meetings due to its direct impact on company valuation. However, he observes a gap between current boardroom discussions and the transformative potential of AI. "AI is now a standing item in every board meeting because it directly shapes valuation. Investors see it as a signal of how forward-thinking a company really is. But many boards are still asking the wrong question: 'How can we use AI to automate or augment our existing processes?' when they should be asking 'What's possible?' It's not just about automating what already exists; it's about reimagining how things are done. ..." said Hanson. ... "Too many businesses still treat AI projects like any other investment, where the return has to be quantified against a specific outcome. In truth, they should be budgeting for failure. The best innovators plan for things not to work first time, just as pharmaceutical companies or tech giants do, because even a 98% failure rate can still produce world-changing results. The moment we stop fearing failure and start funding it, we'll see genuine AI innovation break through," said Hanson.


Are we in a cyber awareness crisis?

To improve cyber awareness, organizations need to move beyond box-ticking exercises and build engagement through relevance and creativity. This is the advice of Simon Backwell, a member of the Emerging Trends Working Group at professional association ISACA and head of information security at software company Benefex. He advocates for interactive rather than static training, where employees can explore why something was suspicious, learning by doing rather than guessing the right answer and moving on. ... Not only does AI present new risks from its use within the business, but also from the way criminals are using it. “Email phishing attacks frequently use gen AI chatbots, and vishing attacks, such as robocall scams, now use deepfakes,” notes Candrick. “AI puts social engineering on steroids, yet cybersecurity leaders are still using the same awareness measures that were already insufficient.” While regulatory pressure will play a role in improving AI-related cybersecurity, regulations will always struggle to keep pace, especially in the UK where the process takes time. For example, the EU’s AI Act and Data Act are only now filtering through, much like GDPR did back in 2018, says Backwell. But with how fast AI is advancing – almost weekly – these rules risk becoming outdated as soon as they’re released. ... “As board alignment weakens, CISOs have to work harder to translate cyber risk into business impact, because boards now rank business valuation as their top post-incident concern,” says Cooke.


How to build a supercomputer

When it comes to Hunter’s architecture, Utz-Uwe Haus, head of the HPC/AI EMEA research lab at HPE, describes the Cray EX design as “the architecture that HPE, with its great heritage, builds for the top systems.” A single cabinet in an EX4000 system can hold up to 64 compute blades – high-density modular servers that share power, cooling, and network resources – within eight compute chassis, all of which are cooled by direct-attached liquid-cooled cold plates supported by a cooling distribution unit (CDU). “It's super integrated,” he says. “The back part, which is the whole network infrastructure (HPE Slingshot), matches the front part, which contains the blades.” For Hunter, HLRS has selected AMD hardware, but Haus explains that with Cray EX systems, customers can, more or less, select their processing unit of choice from whichever vendor they want, and the compute infrastructure can be slotted into the system without the need for total reconfiguration. “Should HLRS decide at some point to swap [Hunter’s] AMD plates for the next generation, or use another competitor’s, the rest of the system stays the same. They could have also decided not to use our network – keep the plates and put a different network in, if we have that in the form factor. [HPE Cray EX architecture] is really tightly matched, but at the same time, it’s flexible," he says. Hunter itself is intended as a transitional system ahead of the Herder exascale supercomputer, which is due to go online in 2027.


The AI Reskilling Imperative: Bridging India's talent and gender gap

Policy should shift from broad, general measures to specific interventions. Initiatives such as Digital India and Skill India need to be bolstered with AI-specific courses available online in local languages. The government can: sponsor scholarships and mentorships for women in AI; develop financial incentives for companies that reach gender diversity in their AI teams; and introduce AI literacy and ethics into the national education system, beginning at the secondary school level. ... As the main consumer of AI talent, the private sector should be at the forefront. The first step is a skills-first approach to hiring, but treating reskilling as an ongoing investment is no longer optional. Companies should: devote a significant proportion of CSR budgets to basic AI and digital literacy efforts, especially among women in low-income and rural communities; launch internal reskilling programs to shift existing workers out of positions at risk of automation (e.g., manual software testing, simple data entry) and into new roles, such as AI integrators or data annotators; and embrace explicit ethical standards for the application of AI, including a workforce transition and support strategy. ... Universities will need to redesign courses to combine AI's technical foundations with ethics, critical thinking, and domain knowledge. Collaboration between industry and academia is important to ensure courses are practical and incorporate real-world projects.


Enterprises to focus AI spend on cost savings & data control

"CIOs will move from experimenting with AI to orchestrating it, governing outcomes, agents, and data. AI leadership will evolve from pilots to performance. CIOs will be accountable for tangible business outcomes, defining clear frameworks that connect AI investments to enterprise KPIs and ROI. That means managing a new hybrid workforce of humans and digital agents, complete with job descriptions, correlated KPIs and measurement standards, and governance guardrails. Yet none of this will succeed without secure information management, ensuring that the data fueling and training these agents is accurate, compliant, and trustworthy. Simply put, good data results in good AI outcomes. As AI accelerates, traditional network and security operations will be reimagined for an always-on, agent-driven enterprise, where value is derived as much from data discipline as from innovation itself," said Bell. ... "A major brand fallout will force AI accountability. In the next year, we'll likely see a major brand face real damage from AI misuse. It won't be a cyberattack in the traditional sense but something more subtle, like a plain text prompt injection that manipulates a model into acting against intent. These attacks can force hallucinations, expose proprietary or sensitive information, or break customer trust in seconds. Enterprises will need to verify AI behavior the same way they secure their networks, by checking every input and output. The companies that build AI systems with accountability and transparency at the core will be those that keep their reputations intact," said Berry.
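The "check every input and output" discipline Berry describes can be sketched as a thin guard around the model call: screen incoming prompts for obvious injection markers, and filter outgoing text for tagged sensitive content. The patterns, the model stub, and the secret tag below are all illustrative assumptions; real deployments would use purpose-built classifiers rather than regexes.

```python
# Sketch of input/output verification around a model call. The injection
# patterns, the [SECRET] tag, and call_model are illustrative stand-ins,
# not a production defense.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]
SECRET = re.compile(r"\[SECRET\].*?\[/SECRET\]")

def call_model(prompt):
    # Stand-in for a real model call; just echoes its input here.
    return f"echo: {prompt}"

def guarded_call(prompt):
    # Input check: reject prompts that match known injection markers.
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        raise ValueError("input rejected: possible prompt injection")
    # Output check: redact anything tagged as sensitive before returning.
    return SECRET.sub("[REDACTED]", call_model(prompt))

print(guarded_call("summarize Q3 results"))
```

Pattern lists like this are easy to evade, which is why the quote frames verification as an ongoing operational practice, akin to network security, rather than a one-time filter.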