Daily Tech Digest - September 26, 2025


Quote for the day:

“You may be disappointed if you fail, but you are doomed if you don’t try.” -- Beverly Sills



Moving Beyond Compliance to True Resilience

Organisations that treat compliance as the finish line are missing the bigger picture. Compliance frameworks such as HIPAA, GDPR, and PCI-DSS provide critical guidelines, but they are not designed to cover the full spectrum of evolving cyber threats. Cybercriminals today use AI-driven reconnaissance, deepfake impersonations, and polymorphic phishing techniques to bypass traditional defences. Meanwhile, businesses face growing attack surfaces from hybrid work models and interconnected systems. A lack of leadership commitment, underfunded security programs, and inadequate employee training exacerbate the problem. ... Building resilience requires more than reactive policies, it calls for layered, proactive defence mechanisms such as threat intelligence, endpoint detection and response (EDR), and intrusion prevention systems (IPS). These are essential in identifying and stopping threats before they can cause damage which should be at the front line of defence. Ultimately reducing exposure and giving teams the visibility they need to act swiftly. ... True cyber resilience means moving beyond regulatory compliance to develop strategic capabilities that protect against, respond to, and recover from evolving threats. This includes implementing both offensive and defensive security layers, such as penetration testing and real-time intrusion prevention, to identify weaknesses before attackers do.


Architecture Debt vs Technical Debt: Why Companies Confuse Them and What It Costs Business

The contrast is clear: technical debt reflects inefficiencies at the system level — poorly structured code, outdated infrastructure, or quick fixes that pile up over time. Architecture debt emerges at the enterprise level — structural weaknesses across applications, data, and processes that manifest as duplication, fragmentation, and misalignment. One constrains IT efficiency; the other constrains business competitiveness. Recognizing this difference is the first step toward making the right strategic investments. ... The difference lies in visibility: technical debt is tangible for developers, showing up in unstable code, infrastructure issues, and delayed releases. Architecture debt, by contrast, hides in organizational complexity: duplicated platforms, fragmented data, and misaligned processes. When CIOs and business leaders hear the word “debt,” they often assume it refers to the same challenge. It does not. ... Recognizing this distinction is critical because it determines where investments should be made. Addressing technical debt improves efficiency within systems; addressing architecture debt strengthens the foundations of the enterprise. One enables smoother operations, while the other ensures long-term competitiveness and resilience. Leaders who fail to separate the two-risk solving local problems while leaving the structural weaknesses that undermine the organization’s future unchallenged.


Data Fitness in the Age of Emerging Privacy Regulations

Enter the concept of Data Fitness: a multidimensional measure of how well data aligns with privacy principles, business objectives, and operational resilience. Much like physical fitness, data fitness is not a one-time achievement but a continuous discipline. Data fitness is not just about having high-quality data, but also about ensuring that data is managed in a way that is compliant, secure, and aligned with business objectives. ... The emerging privacy regulations have also introduced a new layer of complexity to data management. They shift the focus from simply collecting and monetizing data to a more responsible and transparent approach, which call for sweeping review and redesign of all applications and processes that handles data. ... The days of storing customer data forever are over. New regulations often specify that personal data can only be retained for as long as it's needed for the purpose for which it was collected. This requires companies to implement robust data lifecycle management and automated deletion policies. ... Data privacy isn't just an IT or legal issue; it's a shared responsibility. Organizations must educate and train all employees on the importance of data protection and the specific policies they need to follow. A strong privacy culture can be a competitive advantage, building customer trust and loyalty. ... It's no longer just about leveraging data for profit; it's about being a responsible steward of personal information. 


Independent Management of Cloud Secrets

An independent approach to NHI management can empower DevOps teams by automating the lifecycle of secrets and identities, thus ensuring that security doesn’t compromise speed or agility. By embedding secrets management into the development pipeline, teams can preemptively address potential overlaps and misconfigurations, as highlighted in the resource on common secrets security misconfigurations. Moreover, NHIs’ automation capabilities can assist DevOps enterprises in meeting regulatory audit requirements without derailing their agile processes. This harmonious blend of compliance and agility allows for a framework that effectively bridges the gap between speed and security. ... Automation of NHI lifecycle processes not only saves time but also fortifies systems by means of stringent access control. This is critical in large-scale cloud deployments, automated renewal and revocation of secrets ensure uninterrupted and secure operations. More insightful strategies can be explored in Secrets Security Management During Development. ... While the integration of systems provides comprehensive security benefits, there is an inherent risk in over-relying on interconnected solutions. Enterprises need a balanced approach that allows for collaboration between systems without compromising individual segment vulnerabilities. A delicate balance is found by maintaining independent secrets management systems, which operate cohesively but remain distinct from operational systems. 


Why cloud repatriation is back on the CIO agenda

Cost pressure often stems from workload shape. Steady, always-on services do not benefit from pay-as-you-go pricing. Rightsizing, reservations and architecture optimization will often close the gap, yet some services still carry a higher unit cost when they remain in public cloud. A placement change then becomes a sensible option. Three observations support a measurement-first approach. Many organizations report that managing cloud spend is their top challenge; egress fees and associated patterns affect a growing share of firms, and the finops community places unit economics and allocation at the centre of cost accountability. ... Public cloud remains viable for many regulated workloads, assisted by sovereign configurations. Examples include the AWS European Sovereign Cloud (scheduled to be released at the end of 2025), the Microsoft EU Data Boundary and Google’s sovereign controls and partner offerings. These options have scope limits that should be assessed during design. Public cloud remains viable for many regulated workloads when sovereign configurations meet requirements. ... Repatriation tends to underperform where workloads are inherently elastic or seasonal, where high-value managed services would need to be replicated at significant opportunity cost, where the organization lacks the run maturity for private platforms, or where the cost issues relate primarily to tagging, idle resources or discount coverage that a FinOps reset can address.


Colocation meets regulation

While there have been many instances of behind-the-meter agreements in the data center sector, the AWS-Talen agreement differed in both scale and choice of energy. Unlike previous instances, often utilizing onsite renewables, the AWS deal involved a regional key generation asset, which provides consistent and reliable power to the grid. As a result, to secure the go-ahead, PJM Interconnection, the regional transmission operator in charge of the utility services in the state, had to apply for an amendment to the plant's existing Interconnection Service Agreement (ISA), permitting the increased power supply. However, rather than the swift approval the companies hoped for, two major utilities that operate in the region, Exelon and American Electric Power (AEP), vehemently opposed the amended ISA, submitting a formal objection to its provisions. ... Since the rejection by FERC, Talen and AWS have reimagined the agreement, with it moving from behind to an in-front-of-the-meter arrangement. The 17-year PPA will see Talen supply AWS with 1.92GW of power, ramped up over the next seven years, with the power provided through PJM. This reflects a broader move within the sector, with both Talen and nuclear energy generator Constellation indicating their intention to focus on grid-based arrangements going forward. Despite this, Phillips still believes that under the correct circumstances, colocation can be a powerful tool, especially for AI and hyperscale cloud deployments seeking to scale quickly.


Employees learn nothing from phishing security training, and this is why

Phishing training programs are a popular tactic aimed at reducing the risk of a successful phishing attack. They may be performed annually or over time, and typically, employees will be asked to watch and learn from instructional materials. They may also receive fake phishing emails sent by a training partner over time, and if they click on suspicious links within them, these failures to spot a phishing email are recorded. ... "Taken together, our results suggest that anti-phishing training programs, in their current and commonly deployed forms, are unlikely to offer significant practical value in reducing phishing risks," the researchers said. According to the researchers, a lack of engagement in modern cybersecurity training programs is to blame, with engagement rates often recorded as less than a minute or none at all. When there is no engagement with learning materials, it's unsurprising that there is no impact. ... To combat this problem, the team suggests that, for a better return on investment in phishing protection, a pivot to more technical help could work. For example, imposing two or multi-factor authentication (2FA/MFA) on endpoint devices, and enforcing credential sharing and use on only trusted domains. That's not to say that phishing programs don't have a place in the corporate world. We should also go back to the basics of engaging learners. 


SOC teams face 51-second breach reality—Manual response times are officially dead

When it takes just 51 seconds for attackers to breach and move laterally, SOC teams need more help. ... Most SOC teams first aim to extend ROI from existing operations investments. Gartner's 2025 Hype Cycle for Security Operations notes that organizations want more value from current tools while enhancing them with AI to handle an expansive threat landscape. William Blair & Company's Sept. 18 note on CrowdStrike predicts that "agentic AI potentially represents a 100x opportunity in terms of the number of assets to secure," with TAM projected to grow from $140 billion this year to $300 billion by 2030. ... Kurtz's observation reflects concerns among SOC leaders and CISOs across industries. VentureBeat sees enterprises experimenting with differentiated architectures to solve governance challenges. Shlomo Kramer, co-founder and CEO of Cato Networks, offered a complementary view in a VentureBeat interview: "Cato uses AI extensively… But AI alone can't solve the range of problems facing IT teams. The right architecture is important both for gathering the data needed to drive AI engines, but also to tackle challenges like agility, connecting enterprise edges, and user experience." Kramer added, "Good AI starts with good data. Cato logs petabytes weekly, capturing metadata from every transaction across the SASE Cloud Platform. We enrich that data lake with hundreds of threat feeds, enabling threat hunting, anomaly detection, and network degradation detection."


Timeless inclusive design techniques for a world of agentic AI

Progressive enhancement and inclusive design allow us to design for as many users as possible. They are core components of user-centered design. The word "user" often hides the complex magnificence of the human being using your product, in all their beautiful diversity. And it’s this rich diversity that makes inclusive design so important. We are all different, and use things differently. While you enjoy that sense of marvel at the richness and wonder of your users' lives, there is no need to feel it for AI agents. These agents are essentially just super-charged "stochastic parrots" (to borrow a phrase from esteemed AI ethicist and professor of Computational Linguistics Emily M. Bender) guessing the next token. ... Every breakthrough since we learnt to make fire has been built on what came before. Isaac Newton said he could only see so far because he was "standing on the shoulders of giants". The techniques and approaches needed to enable this new wave of agent-powered AI devices have been around for a long time. But they haven't always been used. In our desire to ship the shiniest features, we often forget to make our products work for people who rely on accessibility features. ... Patterns are things like adding a "skip to content link" and implementing form validation in a way that makes it easier to recover from errors. Alongside patterns, there are a wealth of freely available accessibility testing tools that can tell you if your product is meeting necessary standards.


Stronger Resilience Starts with Better Dependency Mapping

As recent disruptions made painfully clear, you cannot manage what you cannot see. When a single upstream failure ripples through eligibility checks, billing, scheduling, or clinical systems, executives need answers in minutes, not months. Who is impacted? What services are degraded? Which applications are truly critical? What are our fourth-party exposures? In too many organizations, those answers require a scavenger hunt. ... Modern operations rely on external platforms for authorizations, payments, data enrichment, analytics, and communications, yet many organizations stop their mapping at the data center boundary. That blind spot creates serious risk, since a single vendor outage can ripple across multiple critical services. Regulators are responding. In the U.S., the OCC, Federal Reserve, and FDIC’s 2023 Interagency Guidance on Third-Party Risk Management requires banks to identify and monitor critical vendor relationships, including subcontractors and concentration risks. ... Dependency data without impact data is trivia. Mapping is only valuable when assets and services are tied to business impact analysis (BIA) outputs like recovery time objectives and maximum tolerable downtime. Without this, leaders face a flat picture of connections but no way to prioritize what to restore first, or how long they can operate without a service before consequences cascade.

Daily Tech Digest - September 24, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


Managing Technical Debt the Right Way

Here’s the uncomfortable truth: most executives don’t care about technical purity, but they do care about value leakage. If your team can’t deliver new features fast enough, if outages are too frequent, if security holes are piling up, that is financial debt—just wearing a hoodie instead of a suit. The BTABoK approach is to make debt visible in the same way accountants handle real liabilities. Use canvases, views, and roadmaps to connect the hidden cost of debt to business outcomes. Translate debt into velocity lost, time to market, and risk exposure. Then prioritize it just like any other investment. ... If your architects can’t tie debt decisions to value, risk, and strategy, then they’re not yet professionals. Training and certification are not about passing an exam. They are about proving you can handle debt like a surgeon handles risk—deliberately, transparently, and with the trust of society. ... Let’s not sugarcoat it: some executives will always see debt as “nerd whining.” But when you put it into the lifecycle, into the transformation plan, and onto the balance sheet, it becomes a business issue. This is the same lesson learned in finance: debt can be a powerful tool if managed, or a silent killer if ignored. BTABoK doesn’t give you magic bullets. It gives you a discipline and a language to make debt a first-class concern in architectural practice. The rest is courage—the courage to say no to shortcuts that aren’t really shortcuts, to show leadership the cost of delay, and to treat architectural decisions with the seriousness they deserve.


How National AI Clouds Undermine Democracy

The rapid spread of sovereign AI clouds unintentionally creates a new form of unchecked power. It combines state authority with corporate technology in unclear public-private partnerships. This combination centralizes surveillance and decision-making power, extending far beyond effective democratic oversight. The pursuit of national sovereignty undermines the civic sovereignty of individuals. ... The unique and overlooked danger is the rise of a permanent, unelected techno-bureaucracy. Unlike traditional government agencies, these hybrid entities are shielded from democratic pressures. Their technical complexity acts as a barrier against public understanding and journalistic inquiry. ... no sovereign cloud should operate without a corresponding legislative data charter. This charter, passed by the national legislature, must clearly define citizens' rights against algorithmic discrimination, set explicit limits on data use, and create transparent processes for individuals harmed by the system. It should recognize data portability as an essential right, not just a technical feature. ... every sovereign AI initiative should be mandated to serve the public good. These systems must legally demonstrate that they fulfill publicly defined goals, with their performance measured and reported openly. This directs the significant power of AI toward applications that benefit the public, such as enhancing healthcare outcomes or building climate resilience.


IT’s renaissance risks losing steam

IT-enabled value creation will etiolate without the sustained light of stakeholder attention. CIOs need to manage IT signals, symbols, and suppositions with an eye toward recapturing stakeholder headspace. Every IT employee needs to get busy defanging the devouring demons of apathy and ignorance surrounding IT operations today. ... We need to move beyond our “hero on horseback” obsession with single actors. Instead we need to return our efforts forcefully to l’histoire des mentalités — the study of the mental universe of ordinary people. How is l’homme moyen sensual (the man on the street) dealing with the technological choices arrayed before him? ... The IT pundits’ much discussed promise of “technology transformation” will never materialize if appropriate exothermic — i.e., behavior-inducing and energy creating — IT ideas have no mass following among those working at the screens around the world. ... As CIO, have you articulated a clear vision of what you want IT to achieve during your tenure? Have you calmed the anger of unmet expectations, repaired the wounds of system outages, alleviated the doubts about career paths, charted a filled-with-benefits road forward and embodied the hopes of all stakeholders? ... The cognitive elephant in the room that no one appears willing to talk about is the widespread technological illiteracy of the world’s population. 


How One Bad Password Ended a 158-Year-Old Business

KNP's story illustrates a weakness that continues to plague organizations across the globe. Research from Kaspersky analyzing 193 million compromised passwords found that 45% could be cracked by hackers within a minute. And when attackers can simply guess or quickly crack credentials, even the most established businesses become vulnerable. Individual security lapses can have organization-wide consequences that extend far beyond the person who chose "Password123" or left their birthday as their login credential. ... KNP's collapse demonstrates that ransomware attacks create consequences far beyond an immediate financial loss. Seven hundred families lost their primary income source. A company with nearly two centuries of history disappeared overnight. And Northamptonshire's economy lost a significant employer and service provider. For companies that survive ransomware attacks, reputational damage often compounds the initial blow. Organizations face ongoing scrutiny from customers, partners, and regulators who question their security practices. Stakeholders seek accountability for data breaches and operational failures, leading to legal liabilities. ... KNP joins an estimated 19,000 UK businesses that suffered ransomware attacks last year, according to government surveys. High-profile victims have included major retailers like M&S, Co-op, and Harrods, demonstrating that no organization is too large or established to be targeted.


Has the UK’s Cyber Essentials scheme failed?

There are several reasons why larger organisations may steer clear of CE in its current form, explains Kearns. “They typically operate complex, often geographically dispersed networks, where basic technical controls driven by CE do not satisfy organisational appetite to drive down risk and improve resilience,” she says. “The CE control set is also ‘absolute’ and does not allow for the use of compensating controls. Large complex environments, on the other hand, often operate legacy systems that require compensating controls to reduce risk, which prevents compliance with CE.” The point-in-time nature of assessment is also a poor fit for today’s dynamic IT infrastructure and threat environments, argues Pierre Noel, field CISO EMEA at security vendor Expel. ... “For large enterprises with complex IT environments, CE may not be comprehensive enough to address their specific security needs,” says Andy Kays, CEO of MSSP Socura. “Despite these limitations, it still serves a valuable purpose as a baseline, especially for supply chain assurance where larger companies want to ensure their smaller partners have a minimum level of security.” Richard Starnes is an experienced CISO and chair of the WCIT security panel. He agrees that large enterprises should require CE+ certification in their supplier contracts, where it makes sense. “This requirement should also include a contract flow-down to ensure that their suppliers’ downstream partners are also certified,” says Starnes.


Is Your Data Generating Value or Collecting Digital Dust?

Economic uncertainty is prompting many com­panies to think about how to do more with less. But what if they’re actually positioned to do more with more and just don’t realize it? Many organizations already have the resources they need to improve efficiency and resilience in challenging times. Close to two-thirds of organi­zations manage 1 petabyte or more of data, which represents enough data to cover 500 billion standard pages of text. More than 40% of companies store even more data. Much of that data sits unanalyzed while it incurs costs related to collection, compliance, and storage. It also poses data breach risks that require expensive security measures to prevent. ... Engaging with too many apps often makes employees less efficient than they could be. In 2024, companies used an average of 21 apps just for HR tasks. Multiply that across different functions, and it’s easy to see how finding ways to reduce the total could bring down costs. Trimming the number of apps can also increase productivity by reducing employee overwhelm. Constantly switching between different apps and systems has been shown to distract employees while increasing their levels of stress and frustration. Across the orga­nization, switching among tasks and apps consumes 9% of the average employee’s time at work by chipping away at their atten­tion and ability to focus a few seconds at a time with each of the hundreds of tasks switches they perform every day.


The history and future of software development

For any significant piece of software back then, you needed stacks of punch cards. Yes, 1000 lines of code needed 1000 cards. And you needed to have them in order. Now, imagine dropping that stack of 1000 cards! It would take me ages to get them back in order. Devs back then experienced this a lot—so some of them went ahead and had creative ways of indicating the order of these cards. ... y the mid 1970s affordable home computers were starting to become a reality. Instead of a computer just being a work thing, hobbyists started using computers for personal things—maybe we can call these, I don't know...personal computers. ... Assembler and assembly tend to be used interchangeably. But are in reality two different things. Assembly would be the actual language, syntax—instructions being used and would be tightly coupled to the architecture. While the assembler is the piece of software that assembles your assembly code into machine code—the thing your computer knows how to execute. ... What about writing the software? Did they use git back then? No, git only came out in 2005, so back then software version control was quite the manual effort. From developers having their own way of managing source code locally to even having wall charts where developers can "claim" ownership of certain source code files. For those that were able to work on a shared (multi-user) system, or have an early version of some networked storage—Source code sharing was as easy as handing out floppy disks.


Why the operating system is no longer just plumbing

Many enterprises still think of the operating system as a “static” or background layer that doesn’t need active evolution. The reality is that modern operating systems like Red Hat Enterprise Linux (RHEL) are dynamic, intelligent platforms that actively enable and optimize everything running on top of them. Whether you're training AI models, deploying cloud-native applications, or managing edge devices, the OS is making thousands of critical decisions every second about resource allocation, security enforcement, and performance optimization. ... With image mode deployments, zero-downtime updates, and optimized container support, RHEL ensures that even resource-constrained environments can maintain enterprise-grade reliability. We’ve also focused heavily on security—confidential computing, quantum-resistant cryptography, and compliance automation—because edge environments are often exposed to greater risk. These choices allow RHEL to deliver resilience in conditions where compute power, space, and connectivity are limited. ... We don't just take community code and ship it — we validate, harden, and test everything extensively. Red Hat bridges this gap by being an active contributor upstream while serving as an enterprise-grade curator downstream. Our ecosystem partnerships ensure that when new technologies emerge, they work reliably with RHEL from day one.


Ransomware now targeting backups, warns Google’s APAC security chief

Backups often contain sensitive data such as personal information, intellectual property, and financial records. Pereira warned that attackers can use this data as extra leverage or sell it on the dark web. The shift in focus to backup systems underscores how ransomware has become less about disruption and more about business pressure. If an organisation cannot restore its systems independently, it has little choice but to consider paying a ransom. ... Another troubling trend is “cloud-native extortion,” where attackers abuse built-in cloud features, such as encryption or storage snapshots, to hold systems hostage. Pereira explained that many organisations in the region are adapting by shifting to identity-focused security models. “Cloud environments have become the new perimeter, and attackers have been weaponising cloud-native tools,” he said. “We now need to enforce strict cloud security hygiene, such as robust MFA, least privilege access, proactively monitoring of role access changes or credential leaks, using automation to detect and remediate misconfigurations, and anomaly detection tools for cloud activities.” He pointed to rising investments in identity and access management tools, with organisations recognising their role in cutting down the risk of identity-based attacks. For APAC businesses, this means moving away from legacy perimeter defences and embracing cloud-native safeguards that assume breaches are inevitable but limit the damage.


AI Won't Replace Developers, It Will Make the Best Ones Indispensable

The replacement theory assumes AI can work independently, but it can't. Today's AI coding tools don't run themselves, they need active steering. Most AI tools today operate on a "prompt and pray" model: give the AI instructions, get code back, hope it works. That's fine for demos or side projects, but production environments are far less forgiving. ... AI doesn't level the playing field between developers, it widens it. Using AI effectively requires the same skills that make great developers great: understanding system architecture, recognizing security implications, writing maintainable code. ... Tomorrow's junior developers will need to get productive in a different way. Instead of spending months learning basic syntax and patterns, they'll start by learning to collaborate with AI agents effectively. Those who can adapt will find opportunities, and those who can't might struggle to break in. This shift actually creates more demand for senior engineers, because someone needs to train these AI-assisted junior developers, architect systems that can handle AI-generated code at scale, and establish the processes and standards that keep AI tools from creating chaos. ... The teams succeeding with AI coding treat agents like exceptionally capable junior teammates who need oversight. They provide detailed context, review generated code, and test thoroughly before deployment rather than optimizing purely for speed.

Daily Tech Digest - September 21, 2025


Quote for the day:

"The world's most deadly disease is hardening of the attitudes." -- Zig Ziglar



AI sharpens threat detection — but could it dull human analyst skills?

While AI offers clear advantages, there are real risks when used without caution. Blind trust in AI-generated recommendations can lead to missed threats or incorrect actions, especially when professionals rely too heavily on prebuilt threat scores or automated responses. A lack of curiosity to validate findings weakens analysis and limits learning opportunities from edge cases or anomalies. This mirrors patterns seen in internet search behavior, where users often skim for quick answers rather than dig deeper. It bypasses critical thinking that strengthens neural connections and sparks new ideas. In cybersecurity — where stakes are high and threats evolve fast — human validation and healthy skepticism remain essential. ... AI literacy is becoming a must-have skill for cybersecurity teams, especially as more organizations adopt automation to handle growing threat volumes. Incorporating AI education into security training and tabletop exercises helps professionals stay sharp and confident when working alongside intelligent tools. When teams can spot AI bias or recognize hallucinated outputs, they’re less likely to take automated insights at face value. This kind of awareness supports better judgment and more effective responses. It also pays off, as organizations that use security AI and automation extensively save an average of $2.22 million in prevention costs. 


Repatriation games: the mid-market reevaluates its public cloud consumption

Many IT decision-makers were quick to blame public cloud service providers. But it’s more likely that the applications and workloads were never intended for public cloud environments. Or that cloud-enabled applications and workloads were incorrectly configured. Either way, poor application and workload performance meant that the expected efficiency gains and cost savings from public cloud adoption did not materialize. This led to budgeting and resourcing problems, as well as friction between IT management, senior leadership teams, and other stakeholders. ... Concerns over data sovereignty and compliance have also influenced decisions to repatriate public cloud workloads and adopt a hybrid cloud model, particularly due to worries about DORA, GDPR and the US Cloud Act compliance. DORA and GDPR both place greater emphasis on data sovereignty, so organizations need to have greater control over where their data resides. This makes a strong case for repatriation of specific workloads to maintain compliance with both sets of regulations – especially within highly regulated industries or for sensitive information such as HR or financial data. ... Nearly a third of respondents say cybersecurity specialists are the most difficult roles to hire or retain. Some mid-market organizations may lack the in-house skills to configure and manage cybersecurity in public cloud environments or even understand their default settings. 


A guide to de-risking enterprise-wide financial transformation

Distilling the lessons from these large-scale initiatives, a clear blueprint emerges for leaders embarking on their own transformation journeys:Define a data-driven vision: A successful transformation begins with a clear vision for how data will function as a strategic asset. The goal should be to create a single source of truth that is granular, accessible and enables a shift from reactive reporting to proactive analysis. Lead with process, not technology: Technology is an enabler, not the solution itself. Invest heavily in understanding and harmonizing end-to-end business processes before a single line of code is written. This effort is the foundation for a sustainable, low-customization system. De-risk with a phased, modular approach: Avoid the “big bang.” Break the program into logical phases, delivering tangible business value at each step. This builds momentum, facilitates organizational learning and significantly reduces the risk of catastrophic failure. Prioritize the user experience: Even the most powerful system will fail if it is not adopted. Engage end users throughout the design and implementation process. Build intuitive tools, like the FIRST microsite, and invest in robust training and change management to drive adoption and proficiency. ... Such forums are critical for breaking down silos and ensuring the end-to-end process is optimized.  ... Transforming the financial core of a global technology leader is not merely a technical undertaking, it is a strategic imperative for enabling scale, agility and insight.


5 things IT managers get wrong about upskilling tech teams

One of the most pervasive issues in IT upskilling is what Patrice Williams-Lindo, CEO at career coaching service Career Nomad, called the “training-and-forgetting” approach. “Many managers send teams to training without any plan for application,” she said. “Employees return to overloaded sprints” with no guidance on how to incorporate what they’ve learned. Without application in their work, “new skills atrophy fast.” This problem is rooted in basic learning science.  ... Another major pitfall is the overemphasis on certifications as proof of capability. Managers often assume that a certification is going to solve a problem without considering whether it fits the day-to-day job, said Tim Beerman, CTO at managed service provider Ensono. What’s more, certification alone doesn’t equal real-world capability and doesn’t necessarily indicate that a person is competent, according to CGS’ Stephen. While a certification shows that someone has the capability to obtain learned knowledge, he said, it doesn’t guarantee practical application skills. ... Many IT managers fall into the trap of pursuing trendy technologies without connecting them to actual business needs. Williams-Lindo warned that focusing on hype skills without business alignment backfires. While AI, cloud, and blockchain sound strategic, she said, if they aren’t tied to current or near-future business objectives, teams will spend time learning irrelevant tools while core needs are ignored.


Gen AI risks are getting clearer. How much would you pay for digital trust?

“As AI becomes more pervasive and kind of invades various dimensions of our lives and our work, how we interact with it and how safe and trustworthy it is, has become paramount,” said Dan Hays ... What do trust and safety issues look like, when it comes to AI agents in customer interactions? Hays gave several examples: Should AI agents remember everything that a particular customer says to them, or should it “forget” interactions, particularly as years or decades pass? The memory capabilities of bots also relate to the question of, what parameters should be placed on how AI agents are allowed to interact with customers? ... “As organizations across nearly all industries dive head-first into AI and digital transformations, they’re running into new risks that could undermine the trust they’ve built with consumers. Right now, many don’t have the guardrails or experience to handle these evolving threats — and the ripple effects are being felt across entire companies and industries,” the PwC report said. However, it seems that people who can, are willing to pay for digital environments and services that they can trust — much like subscribers to paywalled content sites can generally trust what they are getting, while those looking for free news might end up reading information that is garbled or deliberately twisted with the help of AI.


Object Storage: The Last Line of Defense Against Ransomware

Object storage provides intrinsic advantages in immutability, as it does not provide “edit in place” functionality as with file systems which are designed to allow direct file modifications. Unlike traditional file or block storage, object storage interacts through “get and put” access and write APIs, which means malware and ransomware actors have to attempt to write (or overwrite modified objects) via the API to the object store. ... As ransomware continues to evolve, organizations must design storage strategies that protect at every level. Cyber resilience in the storage layer involves a layered defense that spans architecture, APIs, and operational practices. ... A successful data center attack not only disrupts service but also undermines the partner’s reputation for reliability. Technology partners must demonstrate their infrastructure can isolate tenants, withstand attacks, and deliver continuous availability even in adverse conditions. In both cases, cyber-resilient storage is no longer optional. ... Business continuity leaders should prioritize S3-compatible object storage with ransomware-proof capabilities such as object locking, versioning, and multi-layered access controls. Just as importantly, they should evaluate whether their current storage platforms deliver end-to-end cyber resilience that spans both technology and process.


Time to Embrace Offensive Security for True Resilience

Offensive engagements utilize an attacker mindset to focus on truly exploitable weaknesses, weeding out the noise of unprioritized lists of vulnerabilities. Through remediation of high-impact findings, organizations prevent spreading resources over low-impact issues. Additionally, offloading sophisticated simulations to specialized teams or utilizing automated penetration testing speeds testing cycles and maximizes security investments. Essentially, each dollar invested in offensive testing can pre-empt multiples of breach response, legal penalties, lost productivity, and reputational loss. Successful security testing takes more than shallow scans; it needs fully immersed, real-world simulations that mimic the methods employed by actual threat actors to test your systems. Below is an overview of the most effective methods: ... Red teaming exercises goes beyond standard testing by simulating skilled threat actors with secretive, multi-step attack scenarios. These exercises check not just technical weaknesses but also the organization’s ability to notice, respond to, and recover from real security breaches. Red teams often use methods like social engineering, lateral movement, and privilege escalation to test incident response teams. This uncovers flaws in technology and human procedures during realistic attack simulations.


7 Enterprise Architecture Best Practices for 2025

The foundational principle of effective enterprise architecture is its direct and unbreakable link to business strategy. This alignment ensures that every technological decision, architectural blueprint, and IT investment serves a clear business purpose. It transforms the EA function from a cost center focused on technical standards into a strategic partner that drives business value, innovation, and competitive advantage. ... Adopting a framework establishes a shared understanding among stakeholders, from IT teams to business leaders. It provides a standardized set of tools, templates, and terminologies, which reduces ambiguity and improves communication. This structured approach is fundamental to creating a holistic and integrated view of the enterprise, allowing architects to manage complexity, mitigate risks, and align technology initiatives with strategic goals in a systematic way. ... While a strong strategy provides the direction for enterprise architecture, robust governance provides the necessary guardrails and decision-making framework to keep it on track. EA governance establishes the processes, standards, and controls that ensure architectural decisions align with business objectives and are implemented consistently across the organization. It transforms architecture from a set of recommendations into an enforceable, value-driven discipline. 


Why Cloud Repatriation is Critical Post-VMware Exit

What began as a tactical necessity evolved into an expensive operational habit, with monthly bills that continue climbing without corresponding business value. The rush to cloud often bypassed careful workload assessment, resulting in applications running in expensive public cloud environments that would be more cost-effective on-premises. ... Equally important, the technology landscape has evolved since the initial cloud migration wave. We now have universal infrastructure-wide operating platforms that deliver cloud-like experiences on-premises, eliminating the operational gaps that initially drove workloads to public cloud. Combined with universal migration capabilities that can move workloads seamlessly from any source—whether VMware, other hypervisors, or major cloud providers—organizations finally have the tools needed to make cloud repatriation both technically feasible and economically compelling. ... The forced VMware migration creates the perfect opportunity to reassess the entire infrastructure portfolio holistically rather than making isolated platform decisions. ... This infrastructure reset enables IT teams to ask fundamental questions that operational inertia prevents: Which workloads benefit from cloud deployment? What applications could run more affordably on modern on-premises infrastructure? How can we optimize our total infrastructure spend across both on-premises and cloud environments?


4 Ways AI Revolutionizes Modern Cybersecurity Strategy

AI's true value doesn't lie in marketing promises, but in concrete results(link is external), such as reducing false positives, cutting detection time, and reducing operational costs. These are documented results from organizations that have implemented AI-human collaboration models balancing automation with expert judgment. This capability significantly exceeds the efficiency of human security teams, fundamentally transforming threat detection and response. Imagine a zero-day exploit detected and contained within minutes, not days, drastically reducing the window of vulnerability. ... Accelerating the transformation of legacy code represents one of the most impactful ways organizations are using AI to mitigate vulnerabilities. Legacy code accounts for a staggering 70% of identified vulnerabilities(link is external), but manually overhauling monolithic code bases is rarely feasible. Security teams know these vulnerabilities exist, but often lack the resources to address them. ... Manual SBOM creation cannot scale, not even for a 10-person startup. DevSecOps teams already stretched thin can't reasonably be expected to monitor the thousands of components in modern software stacks. Any sustainable approach to SBOM management for software-producing organizations must necessarily include automation. ... Compliance remains one of security's greatest frictions. 

Daily Tech Digest - September 20, 2025


Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer


Five forces shaping the next wave of quantum innovation

Quantum computers are expected to solve problems currently intractable for even the world’s fastest supercomputers. Their core strengths — efficiently finding hidden patterns in complex datasets and navigating vast optimization challenges — will enable the design of novel drugs and materials, the creation of superior financial algorithms and open new frontiers in cryptography and cybersecurity. ... The quantum ecosystem now largely agrees that simply scaling up today’s computers, which suffer from significant noise and errors that prevent fault-tolerant operation, won’t unlock the most valuable commercial applications. The industry’s focus has shifted to quantum error correction as the key to building robust and scalable fault-tolerant machines. ... Most early quantum computing companies tried a full-stack approach. Now that the industry is maturing, a rich ecosystem of middle-of-the-stack players has emerged. This evolution allows companies to focus on what they do best and buy components and capabilities as needed, such as control systems from Quantum Machines and quantum software development from firms ... recent innovations in quantum networking technology have made a scale-out approach a serious contender. 


Post-Modern Ransomware: When Exfiltration Replaces Encryption

Exfiltration-first attacks have re-written the rules, with stolen data providing criminals with a faster, more reliable payday than the complex mechanics of encryption ever could. The threat of leaking data like financial records, intellectual property, and customer and employee details delivers instant leverage. Unlike encryption, if the victim stands firm and refuses to pay up, criminal groups can always sell their digital loot on the dark web or use it to fuel more targeted attacks. ... Phishing emails, once known for being riddled with tell-tale grammar and spelling mistakes, are now polished, personalized and delivered in perfect English. AI-powered deepfake voices and videos are providing convincing impersonations of executives or trusted colleagues that have defrauded companies for millions. At the same time, attackers are deploying custom chatbots to manage ransom negotiations across multiple victims simultaneously, applying pressure with the relentless efficiency of machines. ... Yet resilience is not simply a matter of dashboards and detection thresholds – it is equally about supporting those on the frontlines. Security leaders already working punishing hours under relentless scrutiny cannot be expected to withstand endless fatigue and a culture of blame without consequence. Organizations must also embed support for their teams into their response frameworks, from clear lines of communication and decompression time to wellbeing checks. 


The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time

The uncertainty is driving concern. “There's been a lot more talk around, ‘Should we be managing sovereign cloud, should we be using on-premises more, should we be relying on our non-North American public contractors?” said Tracy Woo, a principal analyst with researcher and advisory firm Forrester. Ditching a major public cloud provider over sovereignty concerns, however, is not a practical option. These providers often underpin expansive global workloads, so migrating to a new architecture would be time-consuming, costly, and complex. There also isn’t a simple direct switch that companies can make if they’re looking to avoid public cloud; sourcing alternatives must be done thoughtfully, not just in reaction to one challenge. ... “There's a nervousness around deployment of AI, and I think that nervousness comes from -- definitely in conversations with other CIOs -- not knowing the data,” said Bell. Although decoupling from the major cloud providers is impractical on many fronts, issues of sovereignty as well as cost could still push CIOs to embrace a more localized approach, Woo said. “People are realizing that we don't necessarily need all the bells and whistles of the public cloud providers, whether that's for latency or performance reasons, or whether it's for cost or whether that's for sovereignty reasons,” explained Woo. 


Enterprise AI enters the age of agency, but autonomy must be governed

Agentic AI systems don’t just predict or recommend, they act. These intelligent software agents operate with autonomy toward defined business goals, planning, learning, and executing across enterprise workflows. This is not the next version of traditional automation or static bots. It’s a fundamentally different operating paradigm, one that will shape the future of digital enterprises. ... For many enterprises, the last decade of AI investment has focused on surfacing insights: detecting fraud, forecasting demand, and predicting churn. These are valuable outcomes, but they still require humans or rigid automation to respond. Agentic AI closes that gap. These agents combine machine learning, contextual awareness, planning, and decision logic to take goal-directed action. They can process ambiguity, work across systems, resolve exceptions, and adapt over time. ... Agentic AI will not simply automate tasks. It will reshape how work is designed, measured, and managed. As autonomous agents take on operational responsibility, human teams will move toward supervision, exception resolution, and strategic oversight. New KPIs will emerge, not just around cost or cycle time, but around agent quality, business impact, and compliance resilience. This shift will also demand new talent models. Enterprises must upskill teams to manage AI systems, not just processes. 


Cybersecurity in smart cities under scrutiny

The digital transformation of public services involves “an accelerated convergence between IT and OT systems, as well as the massive incorporation of connected IoT devices,” she explains, which gives rise to challenges such as an expanding attack surface or the coexistence of obsolete infrastructure with modern ones, in addition to a lack of visibility and control over devices deployed by multiple providers. ... “According to the European Cyber ​​Security Organisation, 86% of European local governments with IoT deployments have suffered some security breach related to these devices,” she says. Accenture’s Domínguez adds that the challenge is to consider “the fragmentation of responsibilities between administrations, concessionaires, and third parties, which complicates cybersecurity governance and requires advanced coordination models.” De la Cuesta also emphasizes the siloed nature of project development, which significantly hinders the development of an active cybersecurity strategy. ... In the integration of new tools, despite Spain holding a leading position in areas such as 5G, “technology moves much faster than the government’s ability to react,” he says. “It’s not like a private company, which has a certain agility to make investments,” he explains. “Public administration is much slower. Budgets are different. Administrative procedures are extremely long. From the moment a project is first discussed until it is actually executed, many years pass.”


Your SDLC Has an Evil Twin — and AI Built It

Welcome to the shadow SDLC — the one your team built with AI when you weren't looking: It generates code, dependencies, configs, and even tests at machine speed, but without any of your governance, review processes, or security guardrails. ... It’s not just about insecure code sneaking into production, but rather about losing ownership of the very processes you’ve worked to streamline. Your “evil twin” SDLC comes with: Unknown provenance → You can’t always trace where AI-generated code or dependencies came from. Inconsistent reliability → AI may generate tests or configs that look fine but fail in production. Invisible vulnerabilities → Flaws that never hit a backlog because they bypass reviews entirely. ... AI assistants are now pulling in OSS dependencies you didn’t choose — sometimes outdated, sometimes insecure, sometimes flat-out malicious. While your team already uses hygiene tools like Dependabot or Renovate, they’re only table stakes that don’t provide governance. ... The “evil twin” of your SDLC isn’t going away. It’s already here, writing code, pulling dependencies, and shaping workflows. The question is whether you’ll treat it as an uncontrolled shadow pipeline — or bring it under the same governance and accountability as your human-led one. Because in today’s environment, you don’t just own the SDLC you designed. You also own the one AI is building — whether you control it or not.


'ShadowLeak' ChatGPT Attack Allows Hackers to Invisibly Steal Emails

Researchers at Radware realized the issue earlier this spring, when they figured out a way of stealing anything they wanted from Gmail users who integrate ChatGPT. Not only was their trick devilishly simple, but it left no trace on an end user's network — not even an iota of the suspicious Web traffic typical of data exfiltration attacks. As such, the user had no way of detecting the attack, let alone stopping it. ... To perform a ShadowLeak attack, attackers send an outwardly normal-looking email to their target. They surreptitiously embed code in the body of the message, in a format that the recipient will not notice — for example, in extremely tiny text, or white text on a white background. The code should be written in HTML, being standard for email and therefore less suspicious than other, more powerful languages would be. ... The malicious code can instruct the AI to communicate the contents of the victim's emails, or anything else the target has granted ChatGPT access to, to an attacker-controlled server. ... Organizations can try to compensate with their own security controls — for example, by vetting incoming emails with their own tools. However, Geenens points out, "You need something that is smarter than just the regular-expression engines and the state machines that we've built. Those will not work anymore, because there are an infinite number of permutations with which you can write an attack in natural language." 


UK: World’s first quantum computer built using standard silicon chips launched

This is reportedly the first quantum computer to be built using the standard complementary metal-oxide-semiconductor (CMOS) chip fabrication process which is the same transistor technology used in conventional computers. A key part of this approach is building cryoelectronics that connect qubits with control circuits that work at very low temperatures, making it possible to scale up quantum processors greatly. “This is quantum computing’s silicon moment,” James Palles‑Dimmock, Quantum Motion’s CEO, stated. ... In contrast to other quantum computing approaches, the startup used high-volume industrial 300 millimeter chipmaking processes from commercial foundries to produce qubits. The architecture, control stack, and manufacturing approach are all built to scale to host millions of qubits and pave the way for fault-tolerant, utility-scale, and commercially viable quantum computing. “With the delivery of this system, Quantum Motion is on track to bring commercially useful quantum computers to market this decade,” Hugo Saleh, Quantum Motion’s CEO and president, revealed. ... The system’s underlying QPU is built on a tile-based architecture, integrating all compute, readout, and control components into a dense, repeatable array. This design enables future expansion to millions of qubits per chip, with no changes to the system’s physical footprint.


Key strategies to reduce IT complexity

The cloud has multiplied the fragmentation of solutions within companies, expanding the number of environments, vendors, APIs, and integration approaches, which has raised the skill set, necessitated more complex governance, and prompted the emergence of cross-functional roles between IT and business. Cybersecurity also introduces further levels of complexity, introducing new platforms, monitoring tools, regulatory requirements, and risk management approaches that must be overseen by expert personnel. And then there’s shadow IT. With the ease of access to cloud technologies, it’s not uncommon for business units to independently activate services without involving IT, generating further risks. ... “Structured upskilling and reskilling programs are needed to prepare people to manage new technologies,” says Massara. “So is an organizational model capable of managing a growing number of projects, which can no longer be handled in a one-off manner. The approach to project management is changing because the project portfolio has expanded significantly, and a structured PMO is required, with project managers who often no longer reside solely in IT, but directly within the business.” ... While it’s true that an IT system with disparate systems leads to greater complexity, companies are still very cost-conscious and wary about heavily investing in unification right away. But as systems become obsolete, they become more harmonized.


Unshackling IT: Why Third-Party Support Is a Strategic Imperative, Especially for AI

One of the most compelling arguments for independent third-party support is its inherent vendor neutrality. When a company relies solely on a software vendor for support, that vendor naturally has a vested interest in promoting its latest upgrades, cloud migrations, and proprietary solutions. This can create a conflict of interest, potentially pushing customers towards expensive, unnecessary upgrades or discouraging them from exploring alternatives that might be a better fit for their unique needs. ... The recent acquisition of VMware by Broadcom provides a compelling and timely illustration of why third-party support is becoming increasingly critical. Following the merger, many VMware customers have expressed significant dissatisfaction with changes to licensing models, product roadmaps, and, crucially, support. Broadcom has been criticized for restructuring VMware’s offerings and reportedly reducing support for smaller customers, pushing them towards bundled, more expensive solutions. ... The shift towards third-party support isn’t just about cost savings; it’s about regaining control, accessing unbiased expertise, and ensuring business continuity in a rapidly changing technological landscape. For companies making critical decisions about AI integration and managing complex enterprise systems, providers like Spinnaker Support offer a strategic advantage.

Daily Tech Digest - September 19, 2025


Quote for the day:

"The whole secret of a successful life is to find out what is one's destiny to do, and then do it." -- Henry Ford


How CISOs Can Drive Effective AI Governance

For CISOs, finding that balance between security and speed is critical in the age of AI. This technology simultaneously represents the greatest opportunity and greatest risk enterprises have faced since the dawn of the internet. Move too fast without guardrails, and sensitive data leaks into prompts, shadow AI proliferates, or regulatory gaps become liabilities. Move too slow, and competitors pull ahead with transformative efficiencies that are too powerful to compete with. Either path comes with ramifications that can cost CISOs their job. In turn, they cannot lead a "department of no" where AI adoption initiatives are stymied by the organization's security function. It is crucial to instead find a path to yes, mapping governance to organizational risk tolerance and business priorities so that the security function serves as a true revenue enabler. ... Even with strong policies and roadmaps in place, employees will continue to use AI in ways that aren't formally approved. The goal for security leaders shouldn't be to ban AI, but to make responsible use the easiest and most attractive option. That means equipping employees with enterprise-grade AI tools, whether purchased or homegrown, so they do not need to reach for insecure alternatives. In addition, it means highlighting and reinforcing positive behaviors so that employees see value in following the guardrails rather than bypassing them.


AI developer certifications tech companies want

Certifications help ensure developers understand AI governance, security, and responsible use, Hinchcliffe says. Certifications from vendors such as Microsoft and Google, along with OpenAI partner programs, are driving uptake, he says. “Strategic CIOs see certifications less as long-term guarantees of expertise and more as a short-term control and competency mechanism during rapid change,” he says. ... While certifications aren’t the sole deciding factor in landing a job, they often help candidates stand out in competitive roles where AI literacy is becoming a crucial factor, Taplin says. “This is especially true for new software engineers, who can gain a leg up by focusing on certifications early to enhance their career prospects,” he says. ... “The real demand is for AI skills, and certifications are simply one way to build those skills in a structured manner,” says Kyle Elliott, technology career coach and hiring expert. “Hiring managers are not necessarily looking for candidates with AI certifications,” Elliott says. “However, an AI certification, especially if completed in the last year or currently in progress, can signal to a hiring manager that you are well-versed in the latest AI trends. In other words, it’s a quick way to show that you speak the language of AI.” Software developers should not expect AI certifications to be a “silver bullet for landing a job or earning a promotion,” Elliott says. 


How important is data analytics in cycling?

Beyond recovery and nutrition, data analytics plays a pivotal role in shaping race-day decisions. The team combines structured data like power outputs, route elevation, and weather forecasts with unstructured data gathered from online posts by cycling enthusiasts. These data streams are fed into predictive models that anticipate race dynamics and help fine-tune equipment selection, down to tire pressure and aerodynamic adjustments. Metrics like Training Stress Score (TSS) and Heart Rate Variability (HRV) help monitor each rider’s fatigue and readiness, ensuring that training plans are both challenging and sustainable. “We analyze how environmental conditions affect each rider’s output and recovery,” Ryder says. ... The team’s data-driven strategy even extends to post-race analysis. At their hub, they evaluate power output, rider positioning, and performance variances. ... Looking ahead, Ryder sees artificial intelligence playing a greater role. The team is exploring machine learning models that predict tactical behavior from opponents and identify when riders are close to burnout. Through conversational analytics in Qlik, they envision proactive alerts such as, “This rider may not be fit to race tomorrow,” based on cumulative stress and recovery data. The team’s ethos is clear. Success doesn’t only come from racing harder. It comes from racing smarter. 


Balancing Growth and Sustainability: How Data Centers Can Navigate the Energy Demands of the AI Era

Given the systemic limitations on reliable power sources, practical solutions are needed. We must address power sustainability, upstream power infrastructure, new data center equipment and trained labor to deliver it all. By being proactive, we can “bend” the energy growth curve by decoupling data center growth from AI computing’s energy consumption. ... Before the AI boom, large data centers could grin and bear longer lead times for utilities; however, the immediate and skyrocketing demand for data centers to power AI applications calls for creative solutions. Data center developers and designers planning to build in energy-constrained regions need to consider deploying alternative prime power sources and/or energy storage systems to launch new data centers. This includes natural gas turbines, HVO-fueled generators, wind, solar, fuel cells, battery energy storage systems (BESS), and to a limited degree, small modular reactors. ... The utility company and grid operator’s intimate knowledge of the grid and local regulatory, governmental and political landscape makes them critical partners in the site selection, design, permitting, and construction of new data centers. Utilities provide critical insights on power capacity, costs, carbon intensity, power quality, grid stability and load management to ensure sustainable and reliable operations. 


LLMs can boost cybersecurity decisions, but not for everyone

Resilience played a major role in the results. High-resilience individuals performed well with or without LLM support, and they were better at using AI guidance without becoming over-reliant on it. Low-resilience participants did not gain as much from LLMs. In some cases, their performance did not improve or even declined. This creates a risk of uneven outcomes. Teams could see gaps widen between those who can critically evaluate AI suggestions and those who cannot. Over time, this may lead to over-reliance on models, reduced independent thinking, and a loss of diversity in how problems are approached. According to Lanyado, security leaders need to plan for these differences when building teams and training programs. “Not every organization and/or employee interacts with automation in the same way, and differences in team readiness can widen security risks,” he said. ... The findings suggest that organizations cannot assume adding an LLM will raise everyone’s performance equally. Without design, these tools could make some team members more effective while leaving others behind. The researchers recommend designing AI systems that adapt to the user. High-resilience individuals may benefit from open-ended suggestions. Lower-resilience users might need guidance, confidence indicators, or prompts that encourage them to consider alternative viewpoints.


Augment or Automate? Two Competing Visions for AI’s Economic Future

Looked at more critically, ChatGPT has become a supercharged Google search that leaps from finding information to synthesizing and judging it, a clear homogenization of human capacity that might lead to a world of grey-zone AI slop. ... While ChatGPT follows the people, Claude is following the money, hoping to capitalize on business needs to improve efficiency and productivity. By focusing on complex, high-value work, the company is signaling it believes the future of AI lies not in making everyone more productive, but in automating knowledge work that once required specialized human expertise. ... These divergent strategies result in different financial trajectories. OpenAI enjoys massive scale, with hundreds of millions of users providing a broad funnel for subscriptions. It generates an overwhelming amount of traffic that is of relatively lower value. OpenAI is betting the real money will flow through licensing its tools to Microsoft, where it can be embedded in Copilot and Office products to generate recurring revenue streams to offset its infrastructure and operating costs. Anthropic has fewer users but stronger unit economics. Its focus on enterprise use means customers are better positioned to purchase more expensive premium services that can demonstrate strong return-on-investment.


4 four ways to overcome the skills crisis and prepare your workforce for the age of AI

Orla Daly, CIO at Skillsoft, told ZDNET that the research shows business leaders must keep pace with the changing requirements for capabilities in different operational areas. "Significant percentages of skills are no longer relevant. The skills that we'll need in 2030 are only just evolving now," she said. "If you're not making upskilling and learning part of your core business strategy, then you're going to ultimately become uncompetitive in terms of retaining talent and delivering on your organizational outcomes." ... Daly said companies must pay more attention to the skills of their employees, including measuring and testing those proficiencies. "That's about using a combination of benchmarks, which we use at Skillsoft, that allow you, through testing, to understand the skills that you have," she said. "It's also about how you understand that capability in terms of real-world applications and measuring those skills in the context of the jobs that are being done." ... "You need to make measurement central to the business strategy, and have a program around learning, so it's part of the everyday culture of the business," she said. "From the executive level down, you need to say learning is a core part of the organization. Learning then turns up in all of your business operating frameworks in terms of how you track and measure the outcomes of programs, similar to other investments that you would make."


Sovereign AI meets Stockholm’s data center future

Sovereign AI refers to the ability of a nation to develop and operate AI platforms within its own borders, under its own laws and energy systems. ... By ensuring that sensitive data and critical compute resources remain local, sovereign AI reduces exposure to geopolitical risk, supports regulatory compliance and builds trust among both public and private stakeholders. Recent initiatives in Stockholm highlight how sovereign AI can be embedded into existing data center ecosystems. Purpose-built AI compute clusters, equipped with the latest GPU architectures, are being deployed on renewable power and integrated into local district heating networks, where excess server heat is recycled back into the city grid. These facilities are designed not only for high-performance workloads but also for long-term sustainability, aligning with Sweden’s climate and digital sovereignty goals. The strategy is clear: pair advanced AI infrastructure with domestic control and clean energy. By doing so, Stockholm can position itself as a European leader in sovereign AI, where innovation, security and sustainability converge in a way that few other markets can match. ... Stockholm’s ecosystem radiates gravitational pull. With more green, efficient and sovereign-capable data centers emerging, they attract additional clients and investments and reinforce the region’s dominance.


Agentic AI poised to pioneer the future of cybersecurity in the BFSI sector

Enter agentic AI systems that represent a network of intelligent agents having the capability for independent decision-making and adaptive learning. This extends the capabilities of traditional AI systems by incorporating autonomous decision-making and execution, while adopting proactive security measures. It is poised to revolutionise cybersecurity in the banking and financial services sector while bridging the gap between the speed of cyber-attacks and the slow, human-driven incident response. ... Agentic AI will proactively and autonomously hunt for threats across the IT systems within the financial institution by actively looking for vulnerabilities and possible threat vectors before they are exploited by threat actors. Agentic AI systems leverage their capabilities in simulation, where potential attack scenarios are modeled to identify vulnerabilities in the security posture. Data from logs, network traffic, and activities from endpoints are correlated to spot attack vectors as a part of the threat hunting process. ... AI agents have to be deployed into both customer-facing for better customer experience as well as internal systems. By establishing an agentic AI ecosystem, agents can collaborate across functions. Risk management, compliance monitoring, operational efficiency, and fraud detection functions can be streamlined, too. 


Shai-Hulud Attacks Shake Software Supply Chain Security Confidence

This isn’t the first time NPM’s reputation has been put to the test. The JavaScript community has seen a trio of supply chain attacks in rapid succession. Just recently, we saw the “manifest confusion” exploit, which tricked dependency trackers, and prior to that, a series of typosquatting and account-takeover incidents—remember the infamous “coa” and “rc” package hijacks? Now comes the latest beast from the sand: the Shai-Hulud supply chain attack. This is, depending on how you count, the third major NPM incident in recent memory—and arguably the most insidious. ... According to the detailed analysis by JFrog, attackers compromised multiple popular packages, including several that mimicked or targeted legitimate CrowdStrike modules. Before you panic: this wasn’t a direct attack on CrowdStrike itself, but the attackers were clever—by using names like “crowdstrike” and latching onto a trusted security vendor’s brand, they hoped to worm their payloads into unsuspecting production environments. ... What makes these attacks so damaging is less about the technical sophistication (though, don’t get me wrong, this one is clever) and more about how they shake our trust in the very idea of open collaboration. Every dev who’s ever typed `npm install` had to trust not just the original author, but every maintainer, every transitive dependency, and the opaque process of package publishing itself.