Daily Tech Digest - September 24, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


Managing Technical Debt the Right Way

Here’s the uncomfortable truth: most executives don’t care about technical purity, but they do care about value leakage. If your team can’t deliver new features fast enough, if outages are too frequent, if security holes are piling up, that is financial debt—just wearing a hoodie instead of a suit. The BTABoK approach is to make debt visible in the same way accountants handle real liabilities. Use canvases, views, and roadmaps to connect the hidden cost of debt to business outcomes. Translate debt into velocity lost, time to market, and risk exposure. Then prioritize it just like any other investment. ... If your architects can’t tie debt decisions to value, risk, and strategy, then they’re not yet professionals. Training and certification are not about passing an exam. They are about proving you can handle debt like a surgeon handles risk—deliberately, transparently, and with the trust of society. ... Let’s not sugarcoat it: some executives will always see debt as “nerd whining.” But when you put it into the lifecycle, into the transformation plan, and onto the balance sheet, it becomes a business issue. This is the same lesson learned in finance: debt can be a powerful tool if managed, or a silent killer if ignored. BTABoK doesn’t give you magic bullets. It gives you a discipline and a language to make debt a first-class concern in architectural practice. The rest is courage—the courage to say no to shortcuts that aren’t really shortcuts, to show leadership the cost of delay, and to treat architectural decisions with the seriousness they deserve.


How National AI Clouds Undermine Democracy

The rapid spread of sovereign AI clouds unintentionally creates a new form of unchecked power. It combines state authority with corporate technology in unclear public-private partnerships. This combination centralizes surveillance and decision-making power, extending far beyond effective democratic oversight. The pursuit of national sovereignty undermines the civic sovereignty of individuals. ... The unique and overlooked danger is the rise of a permanent, unelected techno-bureaucracy. Unlike traditional government agencies, these hybrid entities are shielded from democratic pressures. Their technical complexity acts as a barrier against public understanding and journalistic inquiry. ... no sovereign cloud should operate without a corresponding legislative data charter. This charter, passed by the national legislature, must clearly define citizens' rights against algorithmic discrimination, set explicit limits on data use, and create transparent processes for individuals harmed by the system. It should recognize data portability as an essential right, not just a technical feature. ... every sovereign AI initiative should be mandated to serve the public good. These systems must legally demonstrate that they fulfill publicly defined goals, with their performance measured and reported openly. This directs the significant power of AI toward applications that benefit the public, such as enhancing healthcare outcomes or building climate resilience.


IT’s renaissance risks losing steam

IT-enabled value creation will etiolate without the sustained light of stakeholder attention. CIOs need to manage IT signals, symbols, and suppositions with an eye toward recapturing stakeholder headspace. Every IT employee needs to get busy defanging the devouring demons of apathy and ignorance surrounding IT operations today. ... We need to move beyond our “hero on horseback” obsession with single actors. Instead we need to return our efforts forcefully to l’histoire des mentalités — the study of the mental universe of ordinary people. How is l’homme moyen sensuel (the man on the street) dealing with the technological choices arrayed before him? ... The IT pundits’ much discussed promise of “technology transformation” will never materialize if appropriate exothermic — i.e., behavior-inducing and energy-creating — IT ideas have no mass following among those working at the screens around the world. ... As CIO, have you articulated a clear vision of what you want IT to achieve during your tenure? Have you calmed the anger of unmet expectations, repaired the wounds of system outages, alleviated the doubts about career paths, charted a filled-with-benefits road forward and embodied the hopes of all stakeholders? ... The cognitive elephant in the room that no one appears willing to talk about is the widespread technological illiteracy of the world’s population.


How One Bad Password Ended a 158-Year-Old Business

KNP's story illustrates a weakness that continues to plague organizations across the globe. Research from Kaspersky analyzing 193 million compromised passwords found that 45% could be cracked by hackers within a minute. And when attackers can simply guess or quickly crack credentials, even the most established businesses become vulnerable. Individual security lapses can have organization-wide consequences that extend far beyond the person who chose "Password123" or left their birthday as their login credential. ... KNP's collapse demonstrates that ransomware attacks create consequences far beyond an immediate financial loss. Seven hundred families lost their primary income source. A company with more than a century and a half of history disappeared overnight. And Northamptonshire's economy lost a significant employer and service provider. For companies that survive ransomware attacks, reputational damage often compounds the initial blow. Organizations face ongoing scrutiny from customers, partners, and regulators who question their security practices. Stakeholders seek accountability for data breaches and operational failures, leading to legal liabilities. ... KNP joins an estimated 19,000 UK businesses that suffered ransomware attacks last year, according to government surveys. High-profile victims have included major retailers like M&S, Co-op, and Harrods, demonstrating that no organization is too large or established to be targeted.
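To make the "cracked within a minute" figure concrete, here is a minimal sketch of how the gulf between a dictionary password and a long random one can be estimated. The word list, the guess rate, and the brute-force model are all illustrative assumptions, not the methodology Kaspersky used; real attackers combine dictionaries, leaked-credential lists, and GPU rigs, so weak passwords fall far faster than a naive exhaustive-search estimate suggests.

```python
import string

# Tiny stand-in for a real cracking dictionary (illustrative only).
COMMON = {"password", "password123", "123456", "qwerty", "letmein"}

def charset_size(pw: str) -> int:
    """Size of the smallest standard character set covering the password."""
    size = 0
    if any(c.islower() for c in pw): size += 26
    if any(c.isupper() for c in pw): size += 26
    if any(c.isdigit() for c in pw): size += 10
    if any(c in string.punctuation for c in pw): size += len(string.punctuation)
    return size or 1

def seconds_to_crack(pw: str, guesses_per_sec: float = 1e10) -> float:
    """Dictionary hits fall instantly; otherwise assume worst-case
    exhaustive search at an assumed 10 billion guesses per second."""
    if pw.lower() in COMMON:
        return 0.0
    return charset_size(pw) ** len(pw) / guesses_per_sec

print(seconds_to_crack("Password123"))   # 0.0 -- it's in every dictionary
```

Under these assumptions, "Password123" is effectively free to crack, while a 12-character random mix of all four character classes would take tens of thousands of years even at ten billion guesses per second.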


Has the UK’s Cyber Essentials scheme failed?

There are several reasons why larger organisations may steer clear of CE in its current form, explains Kearns. “They typically operate complex, often geographically dispersed networks, where basic technical controls driven by CE do not satisfy organisational appetite to drive down risk and improve resilience,” she says. “The CE control set is also ‘absolute’ and does not allow for the use of compensating controls. Large complex environments, on the other hand, often operate legacy systems that require compensating controls to reduce risk, which prevents compliance with CE.” The point-in-time nature of assessment is also a poor fit for today’s dynamic IT infrastructure and threat environments, argues Pierre Noel, field CISO EMEA at security vendor Expel. ... “For large enterprises with complex IT environments, CE may not be comprehensive enough to address their specific security needs,” says Andy Kays, CEO of MSSP Socura. “Despite these limitations, it still serves a valuable purpose as a baseline, especially for supply chain assurance where larger companies want to ensure their smaller partners have a minimum level of security.” Richard Starnes is an experienced CISO and chair of the WCIT security panel. He agrees that large enterprises should require CE+ certification in their supplier contracts, where it makes sense. “This requirement should also include a contract flow-down to ensure that their suppliers’ downstream partners are also certified,” says Starnes.


Is Your Data Generating Value or Collecting Digital Dust?

Economic uncertainty is prompting many companies to think about how to do more with less. But what if they’re actually positioned to do more with more and just don’t realize it? Many organizations already have the resources they need to improve efficiency and resilience in challenging times. Close to two-thirds of organizations manage 1 petabyte or more of data, which represents enough data to cover 500 billion standard pages of text. More than 40% of companies store even more data. Much of that data sits unanalyzed while it incurs costs related to collection, compliance, and storage. It also poses data breach risks that require expensive security measures to prevent. ... Engaging with too many apps often makes employees less efficient than they could be. In 2024, companies used an average of 21 apps just for HR tasks. Multiply that across different functions, and it’s easy to see how finding ways to reduce the total could bring down costs. Trimming the number of apps can also increase productivity by reducing employee overwhelm. Constantly switching between different apps and systems has been shown to distract employees while increasing their levels of stress and frustration. Across the organization, switching among tasks and apps consumes 9% of the average employee’s time at work, chipping away at attention and the ability to focus a few seconds at a time across the hundreds of task switches employees perform every day.
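The "1 petabyte covers 500 billion pages" claim is easy to sanity-check with back-of-the-envelope arithmetic (the decimal petabyte convention below is an assumption; the article does not say which it used):

```python
# Sanity-check: how many bytes per "standard page" does the claim imply?
PETABYTE = 10**15          # bytes, decimal (SI) convention
PAGES = 500 * 10**9        # pages claimed per petabyte

bytes_per_page = PETABYTE / PAGES
print(bytes_per_page)      # 2000.0
```

Two kilobytes is roughly a single-spaced page of plain ASCII text, so the figure is plausible as an order-of-magnitude illustration.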


The history and future of software development

For any significant piece of software back then, you needed stacks of punch cards. Yes, 1000 lines of code needed 1000 cards. And you needed to have them in order. Now, imagine dropping that stack of 1000 cards! It would take me ages to get them back in order. Devs back then experienced this a lot—so some of them devised creative ways of indicating the order of these cards. ... By the mid-1970s, affordable home computers were starting to become a reality. Instead of a computer just being a work thing, hobbyists started using computers for personal things—maybe we can call these, I don't know...personal computers. ... Assembler and assembly tend to be used interchangeably, but they are in reality two different things. Assembly is the actual language (the syntax and instructions being used) and is tightly coupled to the architecture, while the assembler is the piece of software that assembles your assembly code into machine code—the thing your computer knows how to execute. ... What about writing the software? Did they use git back then? No, git only came out in 2005, so back then software version control was quite the manual effort, from developers having their own ways of managing source code locally to wall charts where developers could "claim" ownership of certain source code files. For those that were able to work on a shared (multi-user) system, or had an early version of some networked storage, source code sharing was as easy as handing out floppy disks.
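The assembler-versus-assembly distinction above can be made concrete with a toy example. The two-instruction "ISA" below (LOAD and ADD, with made-up opcodes) is entirely hypothetical; it exists only to show the translation step an assembler performs, sketched in Python rather than as a real systems program:

```python
# A hypothetical two-instruction ISA, for illustration only.
OPCODES = {"LOAD": 0x01, "ADD": 0x02}

def assemble(source: str) -> bytes:
    """Translate assembly text (mnemonic + immediate operand per line)
    into machine code bytes -- the job an assembler does."""
    out = bytearray()
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        out.append(OPCODES[mnemonic])   # opcode byte
        out.append(int(operand))        # operand byte
    return bytes(out)

program = "LOAD 7\nADD 35"             # the assembly (the language)
print(assemble(program).hex())         # 01070223 -- the machine code
```

The text `program` is assembly; the `assemble` function plays the role of the assembler; the bytes it emits are what the (imaginary) CPU would actually execute.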


Why the operating system is no longer just plumbing

Many enterprises still think of the operating system as a “static” or background layer that doesn’t need active evolution. The reality is that modern operating systems like Red Hat Enterprise Linux (RHEL) are dynamic, intelligent platforms that actively enable and optimize everything running on top of them. Whether you're training AI models, deploying cloud-native applications, or managing edge devices, the OS is making thousands of critical decisions every second about resource allocation, security enforcement, and performance optimization. ... With image mode deployments, zero-downtime updates, and optimized container support, RHEL ensures that even resource-constrained environments can maintain enterprise-grade reliability. We’ve also focused heavily on security—confidential computing, quantum-resistant cryptography, and compliance automation—because edge environments are often exposed to greater risk. These choices allow RHEL to deliver resilience in conditions where compute power, space, and connectivity are limited. ... We don't just take community code and ship it — we validate, harden, and test everything extensively. Red Hat bridges this gap by being an active contributor upstream while serving as an enterprise-grade curator downstream. Our ecosystem partnerships ensure that when new technologies emerge, they work reliably with RHEL from day one.


Ransomware now targeting backups, warns Google’s APAC security chief

Backups often contain sensitive data such as personal information, intellectual property, and financial records. Pereira warned that attackers can use this data as extra leverage or sell it on the dark web. The shift in focus to backup systems underscores how ransomware has become less about disruption and more about business pressure. If an organisation cannot restore its systems independently, it has little choice but to consider paying a ransom. ... Another troubling trend is “cloud-native extortion,” where attackers abuse built-in cloud features, such as encryption or storage snapshots, to hold systems hostage. Pereira explained that many organisations in the region are adapting by shifting to identity-focused security models. “Cloud environments have become the new perimeter, and attackers have been weaponising cloud-native tools,” he said. “We now need to enforce strict cloud security hygiene, such as robust MFA, least privilege access, proactive monitoring of role access changes or credential leaks, using automation to detect and remediate misconfigurations, and anomaly detection tools for cloud activities.” He pointed to rising investments in identity and access management tools, with organisations recognising their role in cutting down the risk of identity-based attacks. For APAC businesses, this means moving away from legacy perimeter defences and embracing cloud-native safeguards that assume breaches are inevitable but limit the damage.
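The "automation to detect misconfigurations" hygiene mentioned above can be as simple as a scheduled policy audit. The sketch below is a toy, not a real cloud SDK integration: the policy fields (`actions`, `mfa_required`) and the two rules are hypothetical stand-ins for the checks a real tool would run against AWS, GCP, or Azure IAM data.

```python
def audit_iam_policies(policies):
    """Flag least-privilege violations. Field names and rules are
    hypothetical; real cloud IAM schemas differ per provider."""
    findings = []
    for name, policy in policies.items():
        if "*" in policy.get("actions", []):
            findings.append((name, "wildcard action grants excessive privilege"))
        if not policy.get("mfa_required", False):
            findings.append((name, "MFA not enforced"))
    return findings

sample = {
    "admin-legacy": {"actions": ["*"], "mfa_required": False},
    "readonly":     {"actions": ["s3:GetObject"], "mfa_required": True},
}
print(audit_iam_policies(sample))
```

Run on a cadence (and on every role change), even a rule set this crude catches the over-privileged legacy roles that identity-based attacks most often exploit.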


AI Won't Replace Developers, It Will Make the Best Ones Indispensable

The replacement theory assumes AI can work independently, but it can't. Today's AI coding tools don't run themselves, they need active steering. Most AI tools today operate on a "prompt and pray" model: give the AI instructions, get code back, hope it works. That's fine for demos or side projects, but production environments are far less forgiving. ... AI doesn't level the playing field between developers, it widens it. Using AI effectively requires the same skills that make great developers great: understanding system architecture, recognizing security implications, writing maintainable code. ... Tomorrow's junior developers will need to get productive in a different way. Instead of spending months learning basic syntax and patterns, they'll start by learning to collaborate with AI agents effectively. Those who can adapt will find opportunities, and those who can't might struggle to break in. This shift actually creates more demand for senior engineers, because someone needs to train these AI-assisted junior developers, architect systems that can handle AI-generated code at scale, and establish the processes and standards that keep AI tools from creating chaos. ... The teams succeeding with AI coding treat agents like exceptionally capable junior teammates who need oversight. They provide detailed context, review generated code, and test thoroughly before deployment rather than optimizing purely for speed.

Daily Tech Digest - September 21, 2025


Quote for the day:

"The world's most deadly disease is hardening of the attitudes." -- Zig Ziglar



AI sharpens threat detection — but could it dull human analyst skills?

While AI offers clear advantages, there are real risks when used without caution. Blind trust in AI-generated recommendations can lead to missed threats or incorrect actions, especially when professionals rely too heavily on prebuilt threat scores or automated responses. A lack of curiosity to validate findings weakens analysis and limits learning opportunities from edge cases or anomalies. This mirrors patterns seen in internet search behavior, where users often skim for quick answers rather than dig deeper. It bypasses critical thinking that strengthens neural connections and sparks new ideas. In cybersecurity — where stakes are high and threats evolve fast — human validation and healthy skepticism remain essential. ... AI literacy is becoming a must-have skill for cybersecurity teams, especially as more organizations adopt automation to handle growing threat volumes. Incorporating AI education into security training and tabletop exercises helps professionals stay sharp and confident when working alongside intelligent tools. When teams can spot AI bias or recognize hallucinated outputs, they’re less likely to take automated insights at face value. This kind of awareness supports better judgment and more effective responses. It also pays off, as organizations that use security AI and automation extensively save an average of $2.22 million in prevention costs. 


Repatriation games: the mid-market reevaluates its public cloud consumption

Many IT decision-makers were quick to blame public cloud service providers. But it’s more likely that the applications and workloads were never intended for public cloud environments. Or that cloud-enabled applications and workloads were incorrectly configured. Either way, poor application and workload performance meant that the expected efficiency gains and cost savings from public cloud adoption did not materialize. This led to budgeting and resourcing problems, as well as friction between IT management, senior leadership teams, and other stakeholders. ... Concerns over data sovereignty and compliance have also influenced decisions to repatriate public cloud workloads and adopt a hybrid cloud model, particularly due to worries about DORA, GDPR and the US Cloud Act compliance. DORA and GDPR both place greater emphasis on data sovereignty, so organizations need to have greater control over where their data resides. This makes a strong case for repatriation of specific workloads to maintain compliance with both sets of regulations – especially within highly regulated industries or for sensitive information such as HR or financial data. ... Nearly a third of respondents say cybersecurity specialists are the most difficult roles to hire or retain. Some mid-market organizations may lack the in-house skills to configure and manage cybersecurity in public cloud environments or even understand their default settings. 


A guide to de-risking enterprise-wide financial transformation

Distilling the lessons from these large-scale initiatives, a clear blueprint emerges for leaders embarking on their own transformation journeys:

Define a data-driven vision: A successful transformation begins with a clear vision for how data will function as a strategic asset. The goal should be to create a single source of truth that is granular, accessible and enables a shift from reactive reporting to proactive analysis.

Lead with process, not technology: Technology is an enabler, not the solution itself. Invest heavily in understanding and harmonizing end-to-end business processes before a single line of code is written. This effort is the foundation for a sustainable, low-customization system.

De-risk with a phased, modular approach: Avoid the “big bang.” Break the program into logical phases, delivering tangible business value at each step. This builds momentum, facilitates organizational learning and significantly reduces the risk of catastrophic failure.

Prioritize the user experience: Even the most powerful system will fail if it is not adopted. Engage end users throughout the design and implementation process. Build intuitive tools, like the FIRST microsite, and invest in robust training and change management to drive adoption and proficiency.

... Such forums are critical for breaking down silos and ensuring the end-to-end process is optimized. ... Transforming the financial core of a global technology leader is not merely a technical undertaking, it is a strategic imperative for enabling scale, agility and insight.


5 things IT managers get wrong about upskilling tech teams

One of the most pervasive issues in IT upskilling is what Patrice Williams-Lindo, CEO at career coaching service Career Nomad, called the “training-and-forgetting” approach. “Many managers send teams to training without any plan for application,” she said. “Employees return to overloaded sprints” with no guidance on how to incorporate what they’ve learned. Without application in their work, “new skills atrophy fast.” This problem is rooted in basic learning science.  ... Another major pitfall is the overemphasis on certifications as proof of capability. Managers often assume that a certification is going to solve a problem without considering whether it fits the day-to-day job, said Tim Beerman, CTO at managed service provider Ensono. What’s more, certification alone doesn’t equal real-world capability and doesn’t necessarily indicate that a person is competent, according to CGS’ Stephen. While a certification shows that someone has the capability to obtain learned knowledge, he said, it doesn’t guarantee practical application skills. ... Many IT managers fall into the trap of pursuing trendy technologies without connecting them to actual business needs. Williams-Lindo warned that focusing on hype skills without business alignment backfires. While AI, cloud, and blockchain sound strategic, she said, if they aren’t tied to current or near-future business objectives, teams will spend time learning irrelevant tools while core needs are ignored.


Gen AI risks are getting clearer. How much would you pay for digital trust?

“As AI becomes more pervasive and kind of invades various dimensions of our lives and our work, how we interact with it and how safe and trustworthy it is, has become paramount,” said Dan Hays ... What do trust and safety issues look like, when it comes to AI agents in customer interactions? Hays gave several examples: Should AI agents remember everything that a particular customer says to them, or should they “forget” interactions, particularly as years or decades pass? The memory capabilities of bots also relate to the question of what parameters should be placed on how AI agents are allowed to interact with customers. ... “As organizations across nearly all industries dive head-first into AI and digital transformations, they’re running into new risks that could undermine the trust they’ve built with consumers. Right now, many don’t have the guardrails or experience to handle these evolving threats — and the ripple effects are being felt across entire companies and industries,” the PwC report said. However, it seems that people who can are willing to pay for digital environments and services that they can trust — much like subscribers to paywalled content sites can generally trust what they are getting, while those looking for free news might end up reading information that is garbled or deliberately twisted with the help of AI.


Object Storage: The Last Line of Defense Against Ransomware

Object storage provides intrinsic advantages in immutability: it does not offer the “edit in place” functionality of file systems, which are designed to allow direct file modification. Unlike traditional file or block storage, object storage interacts through “get and put” access and write APIs, which means malware and ransomware actors have to attempt to write (or overwrite modified objects) via the API to the object store. ... As ransomware continues to evolve, organizations must design storage strategies that protect at every level. Cyber resilience in the storage layer involves a layered defense that spans architecture, APIs, and operational practices. ... A successful data center attack not only disrupts service but also undermines the partner’s reputation for reliability. Technology partners must demonstrate their infrastructure can isolate tenants, withstand attacks, and deliver continuous availability even in adverse conditions. In both cases, cyber-resilient storage is no longer optional. ... Business continuity leaders should prioritize S3-compatible object storage with ransomware-proof capabilities such as object locking, versioning, and multi-layered access controls. Just as importantly, they should evaluate whether their current storage platforms deliver end-to-end cyber resilience that spans both technology and process.
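The get/put-plus-object-lock semantics described above can be sketched in a few lines. This toy class is only an analogy for how locking and versioning frustrate ransomware; real implementations such as S3 Object Lock differ in important details (locks apply per object version, versioned buckets accept new puts even under lock, and retention modes have legal/compliance distinctions).

```python
import time

class ObjectStore:
    """Toy object-store sketch: append-only versions via put, reads via
    get, and a retention lock that blocks overwrites. Illustrative only."""

    def __init__(self):
        self._versions = {}   # key -> list of payloads, append-only
        self._locks = {}      # key -> retention expiry (epoch seconds)

    def put(self, key, data, lock_seconds=0.0):
        if time.time() < self._locks.get(key, 0.0):
            raise PermissionError(f"{key} is locked until retention expires")
        self._versions.setdefault(key, []).append(data)
        if lock_seconds:
            self._locks[key] = time.time() + lock_seconds

    def get(self, key, version=-1):
        return self._versions[key][version]

store = ObjectStore()
store.put("backup.tar", b"clean snapshot", lock_seconds=3600)
try:
    store.put("backup.tar", b"ENCRYPTED BY RANSOMWARE")  # blocked by lock
except PermissionError:
    pass
print(store.get("backup.tar"))   # b'clean snapshot'
```

Because there is no "edit in place" path, the attacker's only move is a new put through the API, and the retention lock refuses it; the clean snapshot survives.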


Time to Embrace Offensive Security for True Resilience

Offensive engagements utilize an attacker mindset to focus on truly exploitable weaknesses, weeding out the noise of unprioritized lists of vulnerabilities. Through remediation of high-impact findings, organizations prevent spreading resources over low-impact issues. Additionally, offloading sophisticated simulations to specialized teams or utilizing automated penetration testing speeds testing cycles and maximizes security investments. Essentially, each dollar invested in offensive testing can pre-empt multiples of breach response, legal penalties, lost productivity, and reputational loss. Successful security testing takes more than shallow scans; it needs fully immersed, real-world simulations that mimic the methods employed by actual threat actors to test your systems. Below is an overview of the most effective methods: ... Red teaming exercises go beyond standard testing by simulating skilled threat actors with secretive, multi-step attack scenarios. These exercises check not just technical weaknesses but also the organization’s ability to notice, respond to, and recover from real security breaches. Red teams often use methods like social engineering, lateral movement, and privilege escalation to test incident response teams. This uncovers flaws in technology and human procedures during realistic attack simulations.


7 Enterprise Architecture Best Practices for 2025

The foundational principle of effective enterprise architecture is its direct and unbreakable link to business strategy. This alignment ensures that every technological decision, architectural blueprint, and IT investment serves a clear business purpose. It transforms the EA function from a cost center focused on technical standards into a strategic partner that drives business value, innovation, and competitive advantage. ... Adopting a framework establishes a shared understanding among stakeholders, from IT teams to business leaders. It provides a standardized set of tools, templates, and terminologies, which reduces ambiguity and improves communication. This structured approach is fundamental to creating a holistic and integrated view of the enterprise, allowing architects to manage complexity, mitigate risks, and align technology initiatives with strategic goals in a systematic way. ... While a strong strategy provides the direction for enterprise architecture, robust governance provides the necessary guardrails and decision-making framework to keep it on track. EA governance establishes the processes, standards, and controls that ensure architectural decisions align with business objectives and are implemented consistently across the organization. It transforms architecture from a set of recommendations into an enforceable, value-driven discipline. 


Why Cloud Repatriation is Critical Post-VMware Exit

What began as a tactical necessity evolved into an expensive operational habit, with monthly bills that continue climbing without corresponding business value. The rush to cloud often bypassed careful workload assessment, resulting in applications running in expensive public cloud environments that would be more cost-effective on-premises. ... Equally important, the technology landscape has evolved since the initial cloud migration wave. We now have universal infrastructure-wide operating platforms that deliver cloud-like experiences on-premises, eliminating the operational gaps that initially drove workloads to public cloud. Combined with universal migration capabilities that can move workloads seamlessly from any source—whether VMware, other hypervisors, or major cloud providers—organizations finally have the tools needed to make cloud repatriation both technically feasible and economically compelling. ... The forced VMware migration creates the perfect opportunity to reassess the entire infrastructure portfolio holistically rather than making isolated platform decisions. ... This infrastructure reset enables IT teams to ask fundamental questions that operational inertia prevents: Which workloads benefit from cloud deployment? What applications could run more affordably on modern on-premises infrastructure? How can we optimize our total infrastructure spend across both on-premises and cloud environments?


4 Ways AI Revolutionizes Modern Cybersecurity Strategy

AI's true value doesn't lie in marketing promises, but in concrete results, such as reducing false positives, cutting detection time, and reducing operational costs. These are documented results from organizations that have implemented AI-human collaboration models balancing automation with expert judgment. This capability significantly exceeds the efficiency of human security teams, fundamentally transforming threat detection and response. Imagine a zero-day exploit detected and contained within minutes, not days, drastically reducing the window of vulnerability. ... Accelerating the transformation of legacy code represents one of the most impactful ways organizations are using AI to mitigate vulnerabilities. Legacy code accounts for a staggering 70% of identified vulnerabilities, but manually overhauling monolithic code bases is rarely feasible. Security teams know these vulnerabilities exist, but often lack the resources to address them. ... Manual SBOM creation cannot scale, not even for a 10-person startup. DevSecOps teams already stretched thin can't reasonably be expected to monitor the thousands of components in modern software stacks. Any sustainable approach to SBOM management for software-producing organizations must necessarily include automation. ... Compliance remains one of security's greatest frictions.
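As a taste of what automated SBOM generation looks like, the sketch below inventories the running Python environment into a minimal CycloneDX-shaped document using only the standard library. It is illustrative, not production tooling: real SBOM generators (e.g. Syft, cyclonedx-py) also capture component hashes, licenses, and transitive dependency graphs.

```python
import json
from importlib import metadata

def generate_sbom():
    """Minimal CycloneDX-style inventory of installed Python packages.
    Illustrative sketch only -- not a complete or compliant SBOM."""
    components = [
        {"type": "library", "name": dist.metadata["Name"], "version": dist.version}
        for dist in metadata.distributions()
        if dist.metadata["Name"]          # skip malformed metadata
    ]
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": sorted(components, key=lambda c: c["name"]),
    }

print(json.dumps(generate_sbom(), indent=2)[:120])
```

Even this crude version makes the scaling point: regenerating it on every build costs nothing, while maintaining the same list by hand across thousands of components is hopeless.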

Daily Tech Digest - September 20, 2025


Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer


Five forces shaping the next wave of quantum innovation

Quantum computers are expected to solve problems currently intractable for even the world’s fastest supercomputers. Their core strengths — efficiently finding hidden patterns in complex datasets and navigating vast optimization challenges — will enable the design of novel drugs and materials, the creation of superior financial algorithms and open new frontiers in cryptography and cybersecurity. ... The quantum ecosystem now largely agrees that simply scaling up today’s computers, which suffer from significant noise and errors that prevent fault-tolerant operation, won’t unlock the most valuable commercial applications. The industry’s focus has shifted to quantum error correction as the key to building robust and scalable fault-tolerant machines. ... Most early quantum computing companies tried a full-stack approach. Now that the industry is maturing, a rich ecosystem of middle-of-the-stack players has emerged. This evolution allows companies to focus on what they do best and buy components and capabilities as needed, such as control systems from Quantum Machines and quantum software development from firms ... recent innovations in quantum networking technology have made a scale-out approach a serious contender. 
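The intuition behind error correction can be shown with the classical 3-bit repetition code: redundancy plus majority voting suppresses the error rate from p to roughly 3p². This is only an analogy for quantum error correction, not an implementation of it. Real QEC cannot copy qubit states (the no-cloning theorem) and instead measures error syndromes across entangled ancilla qubits, but the scaling benefit it buys is of the same flavor.

```python
import random

def encode(bit):
    """Repetition code: protect one logical bit with three physical bits."""
    return [bit] * 3

def noisy_channel(codeword, flip_prob, rng):
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in codeword]

def decode(codeword):
    """Majority vote: corrects any single bit flip."""
    return int(sum(codeword) >= 2)

rng = random.Random(0)
trials = 10_000
logical_errors = sum(
    decode(noisy_channel(encode(0), 0.05, rng)) != 0 for _ in range(trials)
)
print(logical_errors / trials)   # roughly 0.007, versus 0.05 per raw bit
```

With a 5% physical flip rate, the logical error rate drops to about 3(0.05)² ≈ 0.7%; adding more redundancy drives it lower still, which is the same "scale the code, suppress the error" logic behind fault-tolerant quantum machines.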


Post-Modern Ransomware: When Exfiltration Replaces Encryption

Exfiltration-first attacks have re-written the rules, with stolen data providing criminals with a faster, more reliable payday than the complex mechanics of encryption ever could. The threat of leaking data like financial records, intellectual property, and customer and employee details delivers instant leverage. Unlike encryption, if the victim stands firm and refuses to pay up, criminal groups can always sell their digital loot on the dark web or use it to fuel more targeted attacks. ... Phishing emails, once known for being riddled with tell-tale grammar and spelling mistakes, are now polished, personalized and delivered in perfect English. AI-powered deepfake voices and videos are providing convincing impersonations of executives or trusted colleagues that have defrauded companies for millions. At the same time, attackers are deploying custom chatbots to manage ransom negotiations across multiple victims simultaneously, applying pressure with the relentless efficiency of machines. ... Yet resilience is not simply a matter of dashboards and detection thresholds – it is equally about supporting those on the frontlines. Security leaders already working punishing hours under relentless scrutiny cannot be expected to withstand endless fatigue and a culture of blame without consequence. Organizations must also embed support for their teams into their response frameworks, from clear lines of communication and decompression time to wellbeing checks. 


The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time

The uncertainty is driving concern. “There's been a lot more talk around, ‘Should we be managing sovereign cloud, should we be using on-premises more, should we be relying on our non-North American public contractors?’” said Tracy Woo, a principal analyst with researcher and advisory firm Forrester. Ditching a major public cloud provider over sovereignty concerns, however, is not a practical option. These providers often underpin expansive global workloads, so migrating to a new architecture would be time-consuming, costly, and complex. There also isn’t a simple direct switch that companies can make if they’re looking to avoid public cloud; sourcing alternatives must be done thoughtfully, not just in reaction to one challenge. ... “There's a nervousness around deployment of AI, and I think that nervousness comes from -- definitely in conversations with other CIOs -- not knowing the data,” said Bell. Although decoupling from the major cloud providers is impractical on many fronts, issues of sovereignty as well as cost could still push CIOs to embrace a more localized approach, Woo said. “People are realizing that we don't necessarily need all the bells and whistles of the public cloud providers, whether that's for latency or performance reasons, or whether it's for cost or whether that's for sovereignty reasons,” explained Woo.


Enterprise AI enters the age of agency, but autonomy must be governed

Agentic AI systems don’t just predict or recommend, they act. These intelligent software agents operate with autonomy toward defined business goals, planning, learning, and executing across enterprise workflows. This is not the next version of traditional automation or static bots. It’s a fundamentally different operating paradigm, one that will shape the future of digital enterprises. ... For many enterprises, the last decade of AI investment has focused on surfacing insights: detecting fraud, forecasting demand, and predicting churn. These are valuable outcomes, but they still require humans or rigid automation to respond. Agentic AI closes that gap. These agents combine machine learning, contextual awareness, planning, and decision logic to take goal-directed action. They can process ambiguity, work across systems, resolve exceptions, and adapt over time. ... Agentic AI will not simply automate tasks. It will reshape how work is designed, measured, and managed. As autonomous agents take on operational responsibility, human teams will move toward supervision, exception resolution, and strategic oversight. New KPIs will emerge, not just around cost or cycle time, but around agent quality, business impact, and compliance resilience. This shift will also demand new talent models. Enterprises must upskill teams to manage AI systems, not just processes. 


Cybersecurity in smart cities under scrutiny

The digital transformation of public services involves “an accelerated convergence between IT and OT systems, as well as the massive incorporation of connected IoT devices,” she explains, which gives rise to challenges such as an expanding attack surface or the coexistence of obsolete infrastructure with modern ones, in addition to a lack of visibility and control over devices deployed by multiple providers. ... “According to the European Cyber Security Organisation, 86% of European local governments with IoT deployments have suffered some security breach related to these devices,” she says. Accenture’s Domínguez adds that the challenge is to consider “the fragmentation of responsibilities between administrations, concessionaires, and third parties, which complicates cybersecurity governance and requires advanced coordination models.” De la Cuesta also emphasizes the siloed nature of project development, which significantly hinders the development of an active cybersecurity strategy. ... In the integration of new tools, despite Spain holding a leading position in areas such as 5G, “technology moves much faster than the government’s ability to react,” he says. “It’s not like a private company, which has a certain agility to make investments,” he explains. “Public administration is much slower. Budgets are different. Administrative procedures are extremely long. From the moment a project is first discussed until it is actually executed, many years pass.”


Your SDLC Has an Evil Twin — and AI Built It

Welcome to the shadow SDLC — the one your team built with AI when you weren't looking: It generates code, dependencies, configs, and even tests at machine speed, but without any of your governance, review processes, or security guardrails. ... It’s not just about insecure code sneaking into production, but rather about losing ownership of the very processes you’ve worked to streamline. Your “evil twin” SDLC comes with: Unknown provenance → You can’t always trace where AI-generated code or dependencies came from. Inconsistent reliability → AI may generate tests or configs that look fine but fail in production. Invisible vulnerabilities → Flaws that never hit a backlog because they bypass reviews entirely. ... AI assistants are now pulling in OSS dependencies you didn’t choose — sometimes outdated, sometimes insecure, sometimes flat-out malicious. While your team already uses hygiene tools like Dependabot or Renovate, they’re only table stakes that don’t provide governance. ... The “evil twin” of your SDLC isn’t going away. It’s already here, writing code, pulling dependencies, and shaping workflows. The question is whether you’ll treat it as an uncontrolled shadow pipeline — or bring it under the same governance and accountability as your human-led one. Because in today’s environment, you don’t just own the SDLC you designed. You also own the one AI is building — whether you control it or not.
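The governance gap described above can be narrowed with something as simple as an allowlist gate in CI. Below is a minimal, illustrative sketch — the `approved` set and the file path are assumptions, not part of any real tool — that flags manifest entries nobody reviewed:

```python
import json

def audit_dependencies(package_json_path, approved):
    """Return declared dependencies that are not on the approved list.

    `approved` is a hypothetical allowlist maintained by your review
    process; the input file is a standard package.json.
    """
    with open(package_json_path) as f:
        manifest = json.load(f)

    declared = {}
    for section in ("dependencies", "devDependencies"):
        declared.update(manifest.get(section, {}))

    # Anything an AI assistant (or anyone else) added without review
    # shows up here for a human to approve or remove.
    return sorted(name for name in declared if name not in approved)
```

Run as a CI step, a non-empty result fails the build until a human signs off — governance, not just hygiene.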


'ShadowLeak' ChatGPT Attack Allows Hackers to Invisibly Steal Emails

Researchers at Radware discovered the issue earlier this spring, when they figured out a way of stealing anything they wanted from Gmail users who integrate ChatGPT. Not only was their trick devilishly simple, but it left no trace on an end user's network — not even an iota of the suspicious Web traffic typical of data exfiltration attacks. As such, the user had no way of detecting the attack, let alone stopping it. ... To perform a ShadowLeak attack, attackers send an outwardly normal-looking email to their target. They surreptitiously embed code in the body of the message, in a format that the recipient will not notice — for example, in extremely tiny text, or white text on a white background. The code should be written in HTML, being standard for email and therefore less suspicious than other, more powerful languages would be. ... The malicious code can instruct the AI to communicate the contents of the victim's emails, or anything else the target has granted ChatGPT access to, to an attacker-controlled server. ... Organizations can try to compensate with their own security controls — for example, by vetting incoming emails with their own tools. However, Geenens points out, "You need something that is smarter than just the regular-expression engines and the state machines that we've built. Those will not work anymore, because there are an infinite number of permutations with which you can write an attack in natural language."
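The vetting Geenens alludes to can at least start with a first-pass filter for the styling tricks named above: near-invisible font sizes and white-on-white text. The sketch below, using only Python's standard library, is a crude illustration of that filter, not a production detector — real mail HTML is far messier, and attackers have many more ways to hide text:

```python
from html.parser import HTMLParser
import re

class HiddenTextScanner(HTMLParser):
    """Collect text rendered with inline styles commonly used to hide
    prompt-injection payloads in email bodies."""

    def __init__(self):
        super().__init__()
        self._hidden_depth = 0  # how deep inside a hidden element we are
        self.hidden_text = []

    @staticmethod
    def _looks_hidden(style):
        s = style.lower()
        # extremely tiny text (2px or smaller)
        size = re.search(r"font-size\s*:\s*([\d.]+)px", s)
        if size and float(size.group(1)) <= 2:
            return True
        # crude white-on-white check
        compact = s.replace(" ", "")
        return "color:#fff" in compact and "background" in compact

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        if self._hidden_depth or self._looks_hidden(style):
            self._hidden_depth += 1

    def handle_endtag(self, tag):
        if self._hidden_depth:
            self._hidden_depth -= 1

    def handle_data(self, data):
        if self._hidden_depth and data.strip():
            self.hidden_text.append(data.strip())

def find_hidden_text(html_body):
    """Return text fragments that a human reader would never see."""
    scanner = HiddenTextScanner()
    scanner.feed(html_body)
    return scanner.hidden_text
```

Anything this surfaces warrants quarantine and review before an AI agent is ever allowed to read the message.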


UK: World’s first quantum computer built using standard silicon chips launched

This is reportedly the first quantum computer to be built using the standard complementary metal-oxide-semiconductor (CMOS) chip fabrication process which is the same transistor technology used in conventional computers. A key part of this approach is building cryoelectronics that connect qubits with control circuits that work at very low temperatures, making it possible to scale up quantum processors greatly. “This is quantum computing’s silicon moment,” James Palles‑Dimmock, Quantum Motion’s CEO, stated. ... In contrast to other quantum computing approaches, the startup used high-volume industrial 300 millimeter chipmaking processes from commercial foundries to produce qubits. The architecture, control stack, and manufacturing approach are all built to scale to host millions of qubits and pave the way for fault-tolerant, utility-scale, and commercially viable quantum computing. “With the delivery of this system, Quantum Motion is on track to bring commercially useful quantum computers to market this decade,” Hugo Saleh, Quantum Motion’s CEO and president, revealed. ... The system’s underlying QPU is built on a tile-based architecture, integrating all compute, readout, and control components into a dense, repeatable array. This design enables future expansion to millions of qubits per chip, with no changes to the system’s physical footprint.


Key strategies to reduce IT complexity

The cloud has multiplied the fragmentation of solutions within companies, expanding the number of environments, vendors, APIs, and integration approaches, which has raised skill requirements, necessitated more complex governance, and prompted the emergence of cross-functional roles between IT and business. Cybersecurity also introduces further levels of complexity, introducing new platforms, monitoring tools, regulatory requirements, and risk management approaches that must be overseen by expert personnel. And then there’s shadow IT. With the ease of access to cloud technologies, it’s not uncommon for business units to independently activate services without involving IT, generating further risks. ... “Structured upskilling and reskilling programs are needed to prepare people to manage new technologies,” says Massara. “So is an organizational model capable of managing a growing number of projects, which can no longer be handled in a one-off manner. The approach to project management is changing because the project portfolio has expanded significantly, and a structured PMO is required, with project managers who often no longer reside solely in IT, but directly within the business.” ... While it’s true that an IT system with disparate systems leads to greater complexity, companies are still very cost-conscious and wary about heavily investing in unification right away. But as older systems reach obsolescence and are replaced, the landscape gradually becomes more harmonized.


Unshackling IT: Why Third-Party Support Is a Strategic Imperative, Especially for AI

One of the most compelling arguments for independent third-party support is its inherent vendor neutrality. When a company relies solely on a software vendor for support, that vendor naturally has a vested interest in promoting its latest upgrades, cloud migrations, and proprietary solutions. This can create a conflict of interest, potentially pushing customers towards expensive, unnecessary upgrades or discouraging them from exploring alternatives that might be a better fit for their unique needs. ... The recent acquisition of VMware by Broadcom provides a compelling and timely illustration of why third-party support is becoming increasingly critical. Following the merger, many VMware customers have expressed significant dissatisfaction with changes to licensing models, product roadmaps, and, crucially, support. Broadcom has been criticized for restructuring VMware’s offerings and reportedly reducing support for smaller customers, pushing them towards bundled, more expensive solutions. ... The shift towards third-party support isn’t just about cost savings; it’s about regaining control, accessing unbiased expertise, and ensuring business continuity in a rapidly changing technological landscape. For companies making critical decisions about AI integration and managing complex enterprise systems, providers like Spinnaker Support offer a strategic advantage.

Daily Tech Digest - September 19, 2025


Quote for the day:

"The whole secret of a successful life is to find out what is one's destiny to do, and then do it." -- Henry Ford


How CISOs Can Drive Effective AI Governance

For CISOs, finding that balance between security and speed is critical in the age of AI. This technology simultaneously represents the greatest opportunity and greatest risk enterprises have faced since the dawn of the internet. Move too fast without guardrails, and sensitive data leaks into prompts, shadow AI proliferates, or regulatory gaps become liabilities. Move too slow, and competitors pull ahead with transformative efficiencies that are too powerful to compete with. Either path comes with ramifications that can cost CISOs their job. In turn, they cannot lead a "department of no" where AI adoption initiatives are stymied by the organization's security function. It is crucial to instead find a path to yes, mapping governance to organizational risk tolerance and business priorities so that the security function serves as a true revenue enabler. ... Even with strong policies and roadmaps in place, employees will continue to use AI in ways that aren't formally approved. The goal for security leaders shouldn't be to ban AI, but to make responsible use the easiest and most attractive option. That means equipping employees with enterprise-grade AI tools, whether purchased or homegrown, so they do not need to reach for insecure alternatives. In addition, it means highlighting and reinforcing positive behaviors so that employees see value in following the guardrails rather than bypassing them.


AI developer certifications tech companies want

Certifications help ensure developers understand AI governance, security, and responsible use, Hinchcliffe says. Certifications from vendors such as Microsoft and Google, along with OpenAI partner programs, are driving uptake, he says. “Strategic CIOs see certifications less as long-term guarantees of expertise and more as a short-term control and competency mechanism during rapid change,” he says. ... While certifications aren’t the sole deciding factor in landing a job, they often help candidates stand out in competitive roles where AI literacy is becoming a crucial factor, Taplin says. “This is especially true for new software engineers, who can gain a leg up by focusing on certifications early to enhance their career prospects,” he says. ... “The real demand is for AI skills, and certifications are simply one way to build those skills in a structured manner,” says Kyle Elliott, technology career coach and hiring expert. “Hiring managers are not necessarily looking for candidates with AI certifications,” Elliott says. “However, an AI certification, especially if completed in the last year or currently in progress, can signal to a hiring manager that you are well-versed in the latest AI trends. In other words, it’s a quick way to show that you speak the language of AI.” Software developers should not expect AI certifications to be a “silver bullet for landing a job or earning a promotion,” Elliott says. 


How important is data analytics in cycling?

Beyond recovery and nutrition, data analytics plays a pivotal role in shaping race-day decisions. The team combines structured data like power outputs, route elevation, and weather forecasts with unstructured data gathered from online posts by cycling enthusiasts. These data streams are fed into predictive models that anticipate race dynamics and help fine-tune equipment selection, down to tire pressure and aerodynamic adjustments. Metrics like Training Stress Score (TSS) and Heart Rate Variability (HRV) help monitor each rider’s fatigue and readiness, ensuring that training plans are both challenging and sustainable. “We analyze how environmental conditions affect each rider’s output and recovery,” Ryder says. ... The team’s data-driven strategy even extends to post-race analysis. At their hub, they evaluate power output, rider positioning, and performance variances. ... Looking ahead, Ryder sees artificial intelligence playing a greater role. The team is exploring machine learning models that predict tactical behavior from opponents and identify when riders are close to burnout. Through conversational analytics in Qlik, they envision proactive alerts such as, “This rider may not be fit to race tomorrow,” based on cumulative stress and recovery data. The team’s ethos is clear. Success doesn’t only come from racing harder. It comes from racing smarter. 
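For readers unfamiliar with TSS: it is TrainingPeaks' standard training-load metric, defined so that an hour ridden exactly at functional threshold power (FTP) scores 100. A quick sketch of the published formula follows — the team's internal tooling may of course compute it differently:

```python
def intensity_factor(norm_power, ftp):
    """IF: how hard the ride was relative to the rider's threshold."""
    return norm_power / ftp

def training_stress_score(duration_s, norm_power, ftp):
    """Standard TSS formula. By construction, one hour (3600 s)
    at exactly FTP yields a score of 100."""
    if_ = intensity_factor(norm_power, ftp)
    return (duration_s * norm_power * if_) / (ftp * 3600) * 100
```

So a two-hour ride at a normalized power of 200 W for a rider with a 250 W FTP (IF = 0.8) accumulates a TSS of 128 — numbers like these, tracked day over day, are what feed the fatigue and readiness models described above.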


Balancing Growth and Sustainability: How Data Centers Can Navigate the Energy Demands of the AI Era

Given the systemic limitations on reliable power sources, practical solutions are needed. We must address power sustainability, upstream power infrastructure, new data center equipment and trained labor to deliver it all. By being proactive, we can “bend” the energy growth curve by decoupling data center growth from AI computing’s energy consumption. ... Before the AI boom, large data centers could grin and bear longer lead times for utilities; however, the immediate and skyrocketing demand for data centers to power AI applications calls for creative solutions. Data center developers and designers planning to build in energy-constrained regions need to consider deploying alternative prime power sources and/or energy storage systems to launch new data centers. This includes natural gas turbines, HVO-fueled generators, wind, solar, fuel cells, battery energy storage systems (BESS), and to a limited degree, small modular reactors. ... The utility company and grid operator’s intimate knowledge of the grid and local regulatory, governmental and political landscape makes them critical partners in the site selection, design, permitting, and construction of new data centers. Utilities provide critical insights on power capacity, costs, carbon intensity, power quality, grid stability and load management to ensure sustainable and reliable operations. 


LLMs can boost cybersecurity decisions, but not for everyone

Resilience played a major role in the results. High-resilience individuals performed well with or without LLM support, and they were better at using AI guidance without becoming over-reliant on it. Low-resilience participants did not gain as much from LLMs. In some cases, their performance did not improve or even declined. This creates a risk of uneven outcomes. Teams could see gaps widen between those who can critically evaluate AI suggestions and those who cannot. Over time, this may lead to over-reliance on models, reduced independent thinking, and a loss of diversity in how problems are approached. According to Lanyado, security leaders need to plan for these differences when building teams and training programs. “Not every organization and/or employee interacts with automation in the same way, and differences in team readiness can widen security risks,” he said. ... The findings suggest that organizations cannot assume adding an LLM will raise everyone’s performance equally. Without design, these tools could make some team members more effective while leaving others behind. The researchers recommend designing AI systems that adapt to the user. High-resilience individuals may benefit from open-ended suggestions. Lower-resilience users might need guidance, confidence indicators, or prompts that encourage them to consider alternative viewpoints.


Augment or Automate? Two Competing Visions for AI’s Economic Future

Looked at more critically, ChatGPT has become a supercharged Google search that leaps from finding information to synthesizing and judging it, a clear homogenization of human capacity that might lead to a world of grey-zone AI slop. ... While ChatGPT follows the people, Claude is following the money, hoping to capitalize on business needs to improve efficiency and productivity. By focusing on complex, high-value work, the company is signaling it believes the future of AI lies not in making everyone more productive, but in automating knowledge work that once required specialized human expertise. ... These divergent strategies result in different financial trajectories. OpenAI enjoys massive scale, with hundreds of millions of users providing a broad funnel for subscriptions. It generates an overwhelming amount of traffic that is of relatively lower value. OpenAI is betting the real money will flow through licensing its tools to Microsoft, where it can be embedded in Copilot and Office products to generate recurring revenue streams to offset its infrastructure and operating costs. Anthropic has fewer users but stronger unit economics. Its focus on enterprise use means customers are better positioned to purchase more expensive premium services that can demonstrate strong return-on-investment.


Four ways to overcome the skills crisis and prepare your workforce for the age of AI

Orla Daly, CIO at Skillsoft, told ZDNET that the research shows business leaders must keep pace with the changing requirements for capabilities in different operational areas. "Significant percentages of skills are no longer relevant. The skills that we'll need in 2030 are only just evolving now," she said. "If you're not making upskilling and learning part of your core business strategy, then you're going to ultimately become uncompetitive in terms of retaining talent and delivering on your organizational outcomes." ... Daly said companies must pay more attention to the skills of their employees, including measuring and testing those proficiencies. "That's about using a combination of benchmarks, which we use at Skillsoft, that allow you, through testing, to understand the skills that you have," she said. "It's also about how you understand that capability in terms of real-world applications and measuring those skills in the context of the jobs that are being done." ... "You need to make measurement central to the business strategy, and have a program around learning, so it's part of the everyday culture of the business," she said. "From the executive level down, you need to say learning is a core part of the organization. Learning then turns up in all of your business operating frameworks in terms of how you track and measure the outcomes of programs, similar to other investments that you would make."


Sovereign AI meets Stockholm’s data center future

Sovereign AI refers to the ability of a nation to develop and operate AI platforms within its own borders, under its own laws and energy systems. ... By ensuring that sensitive data and critical compute resources remain local, sovereign AI reduces exposure to geopolitical risk, supports regulatory compliance and builds trust among both public and private stakeholders. Recent initiatives in Stockholm highlight how sovereign AI can be embedded into existing data center ecosystems. Purpose-built AI compute clusters, equipped with the latest GPU architectures, are being deployed on renewable power and integrated into local district heating networks, where excess server heat is recycled back into the city grid. These facilities are designed not only for high-performance workloads but also for long-term sustainability, aligning with Sweden’s climate and digital sovereignty goals. The strategy is clear: pair advanced AI infrastructure with domestic control and clean energy. By doing so, Stockholm can position itself as a European leader in sovereign AI, where innovation, security and sustainability converge in a way that few other markets can match. ... Stockholm’s ecosystem radiates gravitational pull. With more green, efficient and sovereign-capable data centers emerging, they attract additional clients and investments and reinforce the region’s dominance.


Agentic AI poised to pioneer the future of cybersecurity in the BFSI sector

Enter agentic AI systems that represent a network of intelligent agents having the capability for independent decision-making and adaptive learning. This extends the capabilities of traditional AI systems by incorporating autonomous decision-making and execution, while adopting proactive security measures. It is poised to revolutionise cybersecurity in the banking and financial services sector while bridging the gap between the speed of cyber-attacks and the slow, human-driven incident response. ... Agentic AI will proactively and autonomously hunt for threats across the IT systems within the financial institution by actively looking for vulnerabilities and possible threat vectors before they are exploited by threat actors. Agentic AI systems leverage their capabilities in simulation, where potential attack scenarios are modeled to identify vulnerabilities in the security posture. Data from logs, network traffic, and activities from endpoints are correlated to spot attack vectors as a part of the threat hunting process. ... AI agents have to be deployed into both customer-facing systems, to improve customer experience, and internal systems. By establishing an agentic AI ecosystem, agents can collaborate across functions. Risk management, compliance monitoring, operational efficiency, and fraud detection functions can be streamlined, too.
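The correlation step described here — joining telemetry from logs, network traffic, and endpoints — can be caricatured in a few lines. The sketch below is purely illustrative (the source names, threshold, and time window are invented): it flags hosts that surface in several independent telemetry sources within a short window, a crude stand-in for what a real threat-hunting agent would do:

```python
from collections import defaultdict

def correlate(events, window_s=300, min_sources=3):
    """Flag hosts seen in at least `min_sources` distinct telemetry
    sources within a `window_s`-second window.

    Each event is a (timestamp_s, source, host) tuple; sources might
    be "auth_log", "netflow", "edr" in a real pipeline.
    """
    by_host = defaultdict(list)
    for ts, source, host in events:
        by_host[host].append((ts, source))

    flagged = []
    for host, items in by_host.items():
        items.sort()  # order by timestamp
        for i, (ts, _) in enumerate(items):
            # distinct sources observed within the window starting at ts
            in_window = {src for t, src in items[i:] if t - ts <= window_s}
            if len(in_window) >= min_sources:
                flagged.append(host)
                break
    return sorted(set(flagged))
```

An agentic system would go further — enriching each flagged host, modeling likely attack paths, and opening a response playbook — but the cross-source join is where the hunt begins.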


Shai-Hulud Attacks Shake Software Supply Chain Security Confidence

This isn’t the first time NPM’s reputation has been put to the test. The JavaScript community has seen a trio of supply chain attacks in rapid succession. Just recently, we saw the “manifest confusion” exploit, which tricked dependency trackers, and prior to that, a series of typosquatting and account-takeover incidents—remember the infamous “coa” and “rc” package hijacks? Now comes the latest beast from the sand: the Shai-Hulud supply chain attack. This is, depending on how you count, the third major NPM incident in recent memory—and arguably the most insidious. ... According to the detailed analysis by JFrog, attackers compromised multiple popular packages, including several that mimicked or targeted legitimate CrowdStrike modules. Before you panic: this wasn’t a direct attack on CrowdStrike itself, but the attackers were clever—by using names like “crowdstrike” and latching onto a trusted security vendor’s brand, they hoped to worm their payloads into unsuspecting production environments. ... What makes these attacks so damaging is less about the technical sophistication (though, don’t get me wrong, this one is clever) and more about how they shake our trust in the very idea of open collaboration. Every dev who’s ever typed `npm install` had to trust not just the original author, but every maintainer, every transitive dependency, and the opaque process of package publishing itself.
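One concrete, if blunt, defense against this class of worm is auditing the lockfile for packages that declare lifecycle install scripts — the hook such payloads ride in on at `npm install` time. npm's package-lock.json (v2/v3) records a `hasInstallScript` flag per resolved package; the sketch below surfaces those entries for human review. Treating every install script as suspicious is too coarse for production, but it makes the review queue visible:

```python
import json

def packages_with_install_scripts(lockfile_path):
    """List locked packages that declare lifecycle install scripts,
    based on the hasInstallScript flag in package-lock.json v2/v3."""
    with open(lockfile_path) as f:
        lock = json.load(f)

    suspicious = []
    for path, meta in lock.get("packages", {}).items():
        if meta.get("hasInstallScript"):
            # strip the node_modules/ prefix for readability
            name = path.split("node_modules/")[-1] or "(root)"
            suspicious.append((name, meta.get("version")))
    return sorted(suspicious)
```

Pairing a check like this with `ignore-scripts=true` in CI, so that nothing runs at install time without an explicit exception, shrinks the blast radius considerably — though it does nothing about trust in the packages' runtime code itself.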

Daily Tech Digest - September 18, 2025


Quote for the day:

"When your life flashes before your eyes, make sure you’ve got plenty to watch.” -- Anonymous


The new IT operating model: cloud-managed networking as a strategic lever

Enterprises are navigating an environment where the complexity of IT is increasing exponentially. Hybrid work requires consistent connectivity across homes, offices, and campuses. Edge computing and IoT generate massive volumes of data at distributed sites. Security risks escalate as the attack surface grows. Traditional, hardware-centric approaches leave IT teams struggling to keep up. Managing dozens or hundreds of controllers, patching firmware manually, and troubleshooting issues site by site is not sustainable. Cloud-managed networking changes that equation. By centralizing management, applying AI-driven intelligence, and extending visibility across distributed environments, it enables IT to shift from reactive firefighting to proactive strategy. ... Enterprises adopting cloud-managed networking are making a decisive shift from complexity to clarity. Success requires more than technology alone. It demands a partner that understands how to translate advanced capabilities into measurable business outcomes. ... Cloud-managed networking is not just another IT trend. It is the operating model that will define enterprise technology for the next decade. By elevating the network from infrastructure to strategy, it enables organizations to move faster, stay secure, and innovate with confidence.


Why Shadow AI Is the Next Big Governance Challenge for CISOs

In many respects, shadow AI is a subset of a broader shadow IT problem. Shadow IT is an issue that emerged more than a decade ago, largely emanating from employee use of unauthorized cloud apps, including SaaS. Lohrmann noted that cloud access security broker (CASB) solutions were developed to deal with the shadow IT issue. These tools are designed to provide organizations with full visibility of what employees are doing on the network and on protected devices, while only allowing access to authorized instances. However, shadow AI presents distinct challenges that CASB tools are unable to adequately address. “Organizations still need to address other questions related to licensing, application sprawl, security and privacy policies, procedures and more…,” Lohrmann noted. A key difference between IT and AI is the nature of data, the speed of adoption and the complexity of the underlying technology. In addition, AI is often integrated into existing IT systems, including cloud applications, making these tools more difficult to identify. Chuvakin added, “With shadow IT, unauthorized tools often leave recognizable traces – unapproved applications on devices, unusual network traffic or access attempts to restricted services. Shadow AI interactions, however, often occur entirely within a web browser or personal device, blending seamlessly with regular online activity or not leaving any trace on any corporate system at all.”


Cisco strengthens integrated IT/OT network and security controls

Melding IT and OT networking and security is not a new idea, but it’s one that has seen growing attention from Cisco. ... Cisco also added a new technology called AI-powered asset clustering to its Cyber Vision OT management suite. Cyber Vision keeps track of devices connected to an industrial network, builds a real-time map of how these devices talk to each other and to IT systems, and can detect abnormal behavior, vulnerabilities, or policy violations that could signal malware, misconfigurations, or insider threats, Cisco says. ... Another significant move that will help IT/OT integration is the planned integration of the management console for Cisco’s Catalyst and Meraki networks. That combination will allow IT and OT teams to see the same dashboard for industrial OT and IT enterprise/campus networks. Cyber Vision will feed into the dashboard along with other Cisco management offerings such as ThousandEyes, which gives customers a shared inventory of assets, traffic flows and security. “What we are focusing on is helping our customers have the secure networking foundation and architecture that lets IT teams and operational teams kind of have one fabric, one architecture, that goes from the carpeted spaces all the way to the far reaches of their OT network,” Butaney said.


Global hiring risks: What you need to know about identity fraud and screening trends

Most organizations globally include criminal record checks in their pre-employment screening. Employment and education verifications are also common, especially in EMEA and APAC. ... “Employers that fail to strengthen their identity verification processes or overlook recurring discrepancy patterns could face costly consequences, from compliance failures to reputational harm,” said Euan Menzies, President and CEO of HireRight. ... More than three-quarters of businesses globally found at least one discrepancy in a candidate’s background over the past year. Thirteen percent reported finding one discrepancy for every five candidates screened. Employment verification remains the area where most inconsistencies are discovered, especially in APAC and EMEA. These discrepancies range from minor errors like incorrect dates to more serious issues such as fabricated job histories. ... Companies are increasingly adopting post-hire screening to address risks that emerge after someone is hired. In North America, only 38 percent of companies now say they do no post-hire screening, a sharp drop from 57 percent last year. Common post-hire checks include driver monitoring and periodic rescreening for regulated roles. These efforts help companies catch new issues such as undisclosed criminal activity, changes in legal eligibility to work, or evolving insider threats.


Doomprompting: Endless tinkering with AI outputs can cripple IT results

Some LLMs appear to be designed to encourage long-lasting conversation loops, with answers often spurring another prompt. ... “When an individual engineer is prompting an AI, they get a pretty good response pretty quick,” he says. “It gets in your head, ‘That’s pretty good; surely, I could get to perfect.’ And you get to the point where it’s the classic sunk-cost fallacy, where the engineer is like, ‘I’ve spent all this time prompting, surely I can prompt myself out of this hole.’” The problem often happens when the project lacks definitions of what a good result looks like, he adds. “Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.” ... Govindarajan has seen some IT teams get stuck in “doom loops” as they add more and more instructions to agents to refine the outputs. As organizations deploy multiple agents, constant tinkering with outputs can slow down deployments and burn through staff time, he says. “The whole idea of doomprompting is basically putting that instruction down and hoping that it works as you set more and more instructions, some of them contradicting with each other,” he adds. “It comes at the sacrifice of system intelligence.”
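The antidote Farmer describes, defining "done" before you start prompting, can be made concrete: wrap the LLM call in a loop with an explicit quality target and a round cap, so the sunk-cost spiral has a hard exit. The `refine` helper and the `fake_generate`/`fake_score` stand-ins below are hypothetical, not any vendor's API.

```python
def refine(prompt, generate, score, target=0.9, max_rounds=3):
    """Iterate on a prompt, but stop at 'good enough' or a round cap
    instead of doom-looping toward an imagined perfect output."""
    best_out, best_score = None, -1.0
    for _ in range(max_rounds):
        out = generate(prompt)
        s = score(out)
        if s > best_score:
            best_out, best_score = out, s
        if s >= target:            # explicit definition of done
            break
        prompt += " Be more specific."  # hypothetical refinement tweak
    return best_out, best_score

# Hypothetical stand-ins for an LLM call and an output evaluator.
def fake_generate(prompt):
    return f"answer ({len(prompt)} chars of prompt)"

def fake_score(output):
    return 0.95  # pretend the first draft already meets the bar

out, s = refine("Summarize the incident report.", fake_generate, fake_score)
print(s)  # 0.95 -- loop exits after one round because the target was met
```

The important part is not the loop itself but that `target` and `max_rounds` are agreed on before anyone starts prompting.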


Vanishing Public Record Makes Enterprise Data a Strategic Asset

“We are rapidly running out of public data that is credible and usable. More and more enterprises will start to assign value to their data and go beyond partnerships to monetize it. For example, wind measurements captured by a wind turbine company could be helpful to many businesses that are not competitors,” said Olga Kupriyanova, principal consultant of AI and data engineering at ISG. ... “We’re entering a defining moment in AI where access to reliable, scalable, and ethical data is quickly becoming the central bottleneck, and also the most valuable asset. As legal and regulatory pressure tightens access to public data, due to copyright lawsuits, privacy concerns, or manipulation of open data repositories, enterprises are being forced to rethink where their AI advantage will come from,” said Farshid Sabet, CEO and co-founder at Corvic AI, developer of a GenAI management platform. ... The economic consequences of such data loss are already visible. Analysts estimate that U.S. public data underpinned nearly $750 billion of business activity as recently as 2022, according to the Department of Commerce. The loss of such data blinds companies that build models for everything from supply chain forecasting to investment strategy and predictions.


The Architecture of Responsible AI: Balancing Innovation and Accountability

The field of AI governance suffers from what Mackenzie et al. reaffirm as the “principal-agent problem,” where one party (the principal) delegates tasks to another party (the agent). But their interests are not perfectly aligned, leading to potential conflicts and inefficiencies. ... Architects occupy a unique position in this landscape. Unlike regulators who may impose constraints post-design, architects work at the intersection of possibility and constraint. They must balance competing requirements, such as performance and privacy, efficiency and equity, speed and safety, within coherent system designs. Every architectural decision embeds values, priorities, and assumptions about how systems should behave. ... current AI guidance suffers from systematic weaknesses: evidence quality is sacrificed for speed, commercial interests masquerade as objective advice, and some perspectives dominate while broader stakeholder voices remain unheard ... Architects, being well-placed to bridge the gap between strategy and technology, hold a key role in establishing the principles that govern how systems behave, interact, and evolve. In the context of AI, this principle set extends beyond technical design. It encompasses the ethical, social, and legal aspects as well.


AI will make workers ‘busier in the future’ – so what’s the point exactly?

“I have to admit that I’m afraid to say that we are going to be busier in the future than now,” he told host Liz Claman. “And the reason for that is because a lot of different things that take a long time to do are now faster to do. I’m always waiting for work to get done because I’ve got more ideas.” ... “The more productive we are, the more opportunity we get to pursue new ideas,” Huang continued. Reading between the lines here, it seems the so-called efficiency gains afforded by AI will mean workers have more work dumped in their laps – onto the next task, no rest for the wicked, etc. Huang’s comments run counter to the prevailing sentiment among big tech executives on exactly what AI will deliver for both enterprises and individual workers. ... We’ve all read the marketing copy and heard it regurgitated by tech leaders on podcasts and keynote stages – AI will allow us to focus on the “more rewarding” aspects of our jobs. They’ve never fully explained what this entails, or how it will pan out in the workplace. To be quite honest, I don’t think they know what it means. Marketing probably made it up and they’ve stuck with it. ... Will we be busier spending time on those rewarding aspects of our jobs? I have to say, I’m doubtful. The reality is that workers will be pulled into other tasks and merely end up drowning in the same cumbersome workloads they’ve been dealing with since the pandemic.


Building Safer Digital Experiences Through Robust Testing Practices

Secure software testing forms the bedrock of resilient applications, proactively uncovering flaws before they become critical. Early testing practices can significantly reduce risks, costs, and exposure to threats. According to Global Market Insights, the growing number and size of data breaches have increased the need for security testing services. Organizations that heavily use security AI and automation save an average of USD 1.76 million compared to those that don’t. About 51% plan to increase their security spending. Early integration of techniques like Static Application Security Testing (SAST) can detect vulnerabilities in existing code. It can also help to fix bugs during development. ... Organizations must verify that their systems handle personal data securely and comply with global regulations like GDPR and CCPA. Testing ensures sensitive information is protected from leaks or unauthorized use. Americans are highly concerned about how companies use their private data. ... Stress testing evaluates how applications perform under extreme loads. It helps identify potential failures in scalability, response times, and resource management. Vulnerability assessments concentrate on uncovering security gaps. Verified Market Reports notes that, after recent financial crises, governments are putting stronger emphasis on stress testing.
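A stress test of the kind described, pushing a system with concurrent load and measuring success rate and response time, can be sketched in a few lines. The `handler` below is a stand-in for the real system under test (an actual run would hit an HTTP endpoint or service); the request counts and worker pool size are arbitrary illustrative values.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handler(req_id):
    """Stand-in for the system under test; a real stress test
    would issue a network request here instead."""
    time.sleep(0.01)  # simulated per-request processing time
    return 200

def stress(n_requests=50, workers=10):
    """Fire n_requests through a worker pool; report success rate and wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as ex:
        codes = list(ex.map(handler, range(n_requests)))
    elapsed = time.perf_counter() - start
    ok_rate = codes.count(200) / n_requests
    return ok_rate, elapsed

ok_rate, elapsed = stress()
print(f"{ok_rate:.0%} success in {elapsed:.2f}s")
```

Ramping `n_requests` and `workers` upward between runs, while watching where the success rate or latency degrades, is the essence of finding the scalability failures the article mentions.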


Prompt Engineering Is Dead – Long Live PromptOps

PromptOps is gaining traction rapidly because it has the potential to address major challenges in the use of LLMs, such as prompt drift and suboptimal output. Yet incorporating PromptOps effectively into an organization is far from simple, requiring a structured and clear process, the right tools, and a mindset that enables collaboration and effective centralization. Digging deeper into what PromptOps is, why it is needed, and how it can be implemented effectively can help companies to find the right approach when incorporating this methodology for improving their LLM applications usage. ... Before PromptOps is implemented, an organization typically has prompts scattered across multiple teams and tools, with no structured management in place. The first stage of implementing PromptOps involves gathering every detail on LLM applications usage within an organization. It is essential to understand precisely which prompts are being used, by which teams, and with which models. The next stage is to build consistency into this practice by incorporating versioning and testing. Adding secure access control at this stage is also important, in order to ensure only those who need it have access to prompts. With these practices in place, organizations will be well-positioned to introduce cross-model design and embed core compliance and security practices into all prompt crafting.
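The versioning-and-testing stage described above can be pictured as a central prompt registry: each registered prompt gets a content-derived version identifier, and a minimal "prompt test" checks that required guardrail phrases survive each revision. This is a toy sketch under assumed names (`PromptRegistry`, the `summarizer` prompt), not a description of any particular PromptOps tool.

```python
import hashlib

class PromptRegistry:
    """Toy central prompt store: versioned entries plus a smoke test per prompt.
    (Illustrative sketch of PromptOps versioning, not a real product.)"""

    def __init__(self):
        self.store = {}  # prompt name -> list of (version_id, text)

    def register(self, name, text):
        """Add a new prompt version; the id is derived from the text itself."""
        version = hashlib.sha256(text.encode()).hexdigest()[:8]
        self.store.setdefault(name, []).append((version, text))
        return version

    def latest(self, name):
        return self.store[name][-1][1]

    def check(self, name, required_phrases):
        """Minimal prompt test: required guardrail phrases must be present."""
        text = self.latest(name)
        return all(p in text for p in required_phrases)

reg = PromptRegistry()
v1 = reg.register("summarizer", "Summarize the ticket. Do not include PII.")
print(reg.check("summarizer", ["Do not include PII"]))  # True
```

Because versions are content-addressed, any drift in a prompt produces a new identifier, which is what makes regressions traceable when outputs start to degrade.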