Daily Tech Digest - June 02, 2025


Quote for the day:

"The best way to predict the future is to create it." -- Peter Drucker


Doing nothing is still doing something

Here's the uncomfortable truth: doing nothing is still doing something – and very often, it's the wrong thing. We saw this play out at the start of the year when Donald Trump's likely return to the White House and the prospect of fresh tariffs sent ripples through global markets. Investors froze, and while the tariffs have been shelved (for now), the real damage had already been done – not to portfolios, but to behaviour. This is decision paralysis in action. And in my experience, it's most acute among entrepreneurs and high-net-worth individuals post-exit, many of whom are navigating wealth independently for the first time. It's human nature to crave certainty, especially when it comes to money, but if you're waiting for a time when everything is calm, clear, and safe before investing or making a financial decision, I've got bad news – that day is never going to arrive. Markets move, the political climate is noisy, and the global economy is always in flux. If you're frozen by fear, your money isn't standing still – it's slipping backwards. ... Entrepreneurs are used to taking calculated risks, but when it comes to managing post-exit wealth or personal finances, many find themselves out of their depth. A little knowledge can be a dangerous thing – and half-understanding the tax system, the economy, or the markets can lead to costly mistakes.


The Future of Agile Isn’t ‘agile’

One reason is that agilists introduced too many conflicting and divergent approaches that fragmented the market. "Agile" meant so many things to different people that hiring managers could never predict what they were getting when a candidate's resume indicated s/he was "experienced in agile development." Another reason organizations failed to generate value with "agile" was that too many agile approaches focused on changing practices or culture while ignoring the larger delivery system in which the practices operate, reinforcing a culture that is resistant to change. This shouldn't be a surprise to people following our industry: my colleague and LeadingAgile CEO Mike Cottmeyer has been talking about why agile fails for over a decade – see his Agile 2014 presentation, Why is Agile Failing in Large Enterprises… …and what you can do about it. The final reason that led "agile" to its current state of disfavor is that early in the agile movement there was too much money to be made in training and certifications. The industry's focus on certifications had the effect, over time, of misaligning the goals of the methodology / training companies and their customers. "Train everyone. Launch trains" may be a short-term success pattern for a methodology purveyor, but it is ultimately unsustainable because the training and practices are too disconnected from the tangible results senior executives need to compete and win in the market.


CIOs get serious about closing the skills gap — mainly from within

Staffing and talent issues are affecting CIOs' ability to double down on strategic and innovation objectives, according to 54% of this year's respondents. As a result, closing the skills gap has become a huge priority. "What's driving it in some CIOs' minds is tied back to their AI deployments," says Mark Moccia, a vice president and research director at Forrester. "They're under a lot of cost pressure … to get the most out of AI deployments" to increase operational efficiencies and lower costs, he says. "It's driving more of a need to close the skills gap and find people who have deployed AI successfully." AI, generative AI, and cybersecurity top the list of skills gaps preventing organizations from achieving objectives, according to an April Gartner report. Nine out of 10 organizations have adopted or plan to adopt skills-based talent growth to address those challenges. ... The best approach, Karnati says, is developing talent from within. "We're equipping our existing teams with the space, tools, and support needed to explore genAI through practical application, including rapid prototyping, internal hackathons, and proof-of-concept sprints," Karnati says. "These aren't just technical exercises — they're structured opportunities for cross-functional learning, where engineers, product leads, and domain experts collaborate to test real use cases."


The Critical Quantum Timeline: Where Are We Now And Where Are We Heading?

Technically, the term is fault-tolerant quantum computing. The qubits that quantum computers use to process data have to be kept in a delicate state – sometimes frozen to temperatures very close to absolute zero – in order to stay stable and not “decohere”. Keeping them in this state for longer periods of time requires large amounts of energy but is necessary for more complex calculations. Recent research by Google, among others, is pointing the way towards developing more robust and resilient quantum methods. ... One of the most exciting prospects ahead of us involves applying quantum computing to AI. Firstly, many AI algorithms involve solving the types of problems that quantum computers excel at, such as optimization problems. Secondly, with its ability to more accurately simulate and model the physical world, it will generate huge amounts of synthetic data. ... Looking beyond the next two decades, quantum computing will be changing the world in ways we can’t even imagine yet, just as the leap to transistors and microchips enabled the digital world and the internet of today. It will tackle currently impossible problems, help us create fantastic new materials with amazing properties and medicines that affect our bodies in new ways, and help us tackle huge problems like climate change and cleaning the oceans.


6 hard truths security pros must learn to live with

Every technological leap will be used against you - Information technology is a discipline built largely on rapid advances. Some of these technological leaps can help improve your ability to secure the enterprise. But every last one of them brings new challenges from a security perspective, not the least of which is how they will be used to attack your systems, networks, and data. ... No matter how good you are, your organization will be victimized - This is a hard one to swallow, but if we take the “five stages of grief” approach to cybersecurity, it’s better to reach the “acceptance” level than to remain in denial because much of what happens is simply out of your control. A global survey of 1,309 IT and security professionals found that 79% of organizations suffered a cyberattack within the past 12 months, up from 68% just a year ago, according to cybersecurity vendor Netwrix’s Hybrid Security Trends Report. ... Breach blame will fall on you — and the fallout could include personal liability - As if getting victimized by a security breach isn’t enough, new Securities and Exchange Commission (SEC) rules put CISOs in the crosshairs for potential criminal prosecution. The new rules, which went into effect in 2023, require publicly listed companies to report any material cybersecurity incident within four business days.


Are you an A(I)ction man?

Whilst an individual AI-generated action figure has a small impact – a drop in the ocean, you could say – trends like this exemplify how easy it is to use AI en masse and collectively create an ocean of demand. Seeing the number of individuals, even those with knowledge of AI's considerable resource consumption, partaking in the creation of these avatars makes me wonder if we need greater awareness of the collective impact of GenAI. Now, I want to take a moment to clarify that this is not a criticism of those producing AI-generated content, or of anyone who has taken part in the 'action figure' trend. I've certainly had many goes with DALL-E for fun, and taken part in various trends in my time, but the volume of these recent images caught my attention. Many of the conversations I had at Connect New York a few weeks ago addressed sustainability and the need for industry collaboration, but perhaps we should also be instilling more awareness from an end-user point of view. After all, ChatGPT, according to the Washington Post, consumes 39.8 million kWh per day. I'd be fascinated to see the full picture of power and water consumption from the AI-generated action figures. Whilst it will only account for a tiny fraction of overall demand, these drops have a tendency to accumulate.


The MVP Dilemma: Scale Now or Scale Later?

Teams often have few concrete requirements about scalability. The business may not be a reliable source of information but, as we noted above, it does have a business case with implicit scalability needs. It's easy for teams to focus on functional needs early on and ignore these implicit scaling requirements. They may hope that scaling won't be a problem or that they can solve the problem by throwing more computing resources at it. They have a legitimate concern about overbuilding and increasing costs, but hoping that scaling problems won't happen is not a good scaling strategy. Teams need to consider scaling from the start. ... The MVP often has implicit scalability requirements, such as "in order for this idea to be successful we need to recruit ten thousand new customers". Asking the right questions and engaging in collaborative dialogue can often uncover these; often they relate to success criteria for the MVP experiment. ... Some people see asynchronous communication as another scaling panacea because it allows work to proceed independently of the task that initiated the work. The theory is that the main task can do other things while work is happening in the background. So long as the initiating task does not, at some point, need the results of the asynchronous task to proceed, asynchronous processing can help a system to scale, as the sketch below illustrates.
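
That caveat is easier to see in code. Below is a minimal Python sketch of the idea using asyncio, with a hypothetical order-and-invoice workflow (the names and scenario are illustrative, not from the article): the initiating task fires off background work and responds without waiting on its result.

```python
import asyncio

async def generate_invoice(order_id: int) -> None:
    """Slow background work whose result the caller never waits on."""
    await asyncio.sleep(2)  # stand-in for I/O-bound processing
    print(f"invoice for order {order_id} written")

async def handle_order(order_id: int) -> tuple[str, asyncio.Task]:
    # Fire off the background task; the request path does not block on it.
    # Keep a reference to the task so it isn't garbage-collected early.
    task = asyncio.create_task(generate_invoice(order_id))
    return f"order {order_id} accepted", task

async def main() -> None:
    confirmation, invoice_task = await handle_order(42)
    print(confirmation)   # the caller is unblocked immediately
    await invoice_task    # awaited here only so the demo exits cleanly

asyncio.run(main())
```

The moment handle_order needs the invoice before it can respond, the await moves into the request path and the scaling benefit evaporates – exactly the condition flagged above.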


Data Integrity: What It Is and Why It Matters

By contrast, data quality builds on methods for confirming the integrity of the data and also considers the data's uniqueness, timeliness, accuracy, and consistency. Data is considered "high quality" when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization's data validation rules. The benefits of data integrity and data quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company's compliance efforts. By confirming the quality of their data, businesses improve the efficiency of their data operations, increase the value of their data, and enhance collaboration and decision-making. Data quality efforts also help companies reduce their costs, enhance employee productivity, and establish closer relationships with their customers. Implementing a data integrity strategy begins by identifying the sources of potential data corruption in your organization. These include human error, system malfunctions, unauthorized access, failure to validate and test, and lack of governance. A data integrity plan operates at both the database level and the business level.
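
To make the idea of data validation rules concrete, here is a minimal Python sketch that checks three of the dimensions named above – uniqueness, accuracy (as a range check), and timeliness – over some hypothetical records. The schema and thresholds are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical records; the fields are illustrative only.
records = [
    {"id": 1, "email": "a@example.com", "age": 34, "updated": datetime(2025, 5, 30)},
    {"id": 2, "email": "b@example.com", "age": -3, "updated": datetime(2024, 1, 2)},
    {"id": 2, "email": "c@example.com", "age": 51, "updated": datetime(2025, 6, 1)},
]

def validate(rows, max_staleness_days=180):
    issues = []
    seen_ids = set()
    for row in rows:
        if row["id"] in seen_ids:                        # uniqueness
            issues.append((row["id"], "duplicate id"))
        seen_ids.add(row["id"])
        if not 0 <= row["age"] <= 120:                   # accuracy (range check)
            issues.append((row["id"], "age out of range"))
        if datetime.now() - row["updated"] > timedelta(days=max_staleness_days):
            issues.append((row["id"], "stale record"))   # timeliness
    return issues

for record_id, problem in validate(records):
    print(f"record {record_id}: {problem}")
```

Consistency and the other dimensions would get analogous rules; the point is that "high quality" is a verdict produced by explicit, testable checks rather than by intuition.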


Backup-as-a-service explained: Your guide to cloud data protection

With BaaS, enterprises have quick, easy access to their data. Providers store multiple copies of backups in different locations so that data can be recovered when lost due to outages, failures or accidental deletion. BaaS also features geographic distribution and automatic failover, in which data handling is automatically moved to a different server or system in the event of an incident to ensure that data is safe and readily available. ... With BaaS, the provider uses its own cloud infrastructure and expertise to handle the entire backup and restoration process. Enterprises simply connect to the backup engine, set their preferences and the platform handles file transfer, encryption and maintenance. Automation is the engine that drives BaaS, helping ensure that data is continuously backed up without slowing down network performance or interrupting day-to-day work. Enterprises first select the data they need backed up — whether it be simple files or complex apps — as well as backup frequency and data retention times. ... Enterprises shouldn't just jump right into BaaS — proper preparation is critical. Firstly, it is important to define a backup policy that identifies the organization's critical data that must be backed up. This policy should also include backup frequency, storage location and how long copies should be retained.
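
To make the policy-definition step concrete, here is a minimal Python sketch of what a backup policy might look like as structured data. The field names are assumptions for illustration, not any BaaS vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    """Illustrative policy record, not a real provider's schema."""
    dataset: str               # what to back up
    frequency_hours: int       # how often copies are taken
    retention_days: int        # how long copies are kept
    regions: tuple[str, ...]   # where copies live (geographic distribution)

policies = [
    BackupPolicy("customer-db", frequency_hours=4,  retention_days=90,
                 regions=("eu-west", "eu-central")),
    BackupPolicy("file-share",  frequency_hours=24, retention_days=30,
                 regions=("eu-west",)),
]

for p in policies:
    print(f"{p.dataset}: every {p.frequency_hours}h, "
          f"keep {p.retention_days}d, copies in {', '.join(p.regions)}")
```

A real platform would consume a policy like this to schedule transfers, encrypt copies and expire them once the retention window lapses.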


CISO 3.0: Leading AI governance and security in the boardroom

AI is expanding the CISO's required skillset beyond cybersecurity to include fluency in data science, machine learning fundamentals, and understanding how to evaluate AI models – not just technically, but from a governance and risk perspective. Understanding how AI works and how to use it responsibly is essential. Fortunately, AI has also evolved how we train our teams. For example, adaptive learning platforms that personalize content and simulate real-world scenarios are assisting in closing the skills gap more effectively. Ultimately, to become successful in the AI space, both CISOs and their teams will need to grasp how AI models are trained, the data they rely on, and the risks they may introduce. CISOs should always prioritize accountability and transparency. Red flags to look out for include a lack of explainability or insufficient auditing capabilities, both of which leave companies vulnerable. When vetting an AI tool, it's important to understand how it handles sensitive data and whether it has proven success in similar environments. Beyond that, it's also vital to evaluate how well the tool aligns with your governance model, that it can be audited, and that it integrates well into your existing systems. Lastly, overpromising capabilities or providing an unclear roadmap for support are signs to proceed with caution.

Daily Tech Digest - June 01, 2025


Quote for the day:

"You are never too old to set another goal or to dream a new dream." -- C.S. Lewis


A wake-up call for real cloud ROI

To make cloud spending work for you, the first step is to stop, assess, and plan. Do not assume the cloud will save money automatically. Establish a meticulous strategy that matches workloads to the right environments, considering both current and future needs. Take the time to analyze which applications genuinely benefit from the public cloud versus alternative options. This is essential for achieving real savings and optimal performance. ... Enterprises should rigorously review their existing usage, streamline environments, and identify optimization opportunities. Invest in cloud management platforms that can automate the discovery of inefficiencies, recommend continuous improvements, and forecast future spending patterns with greater accuracy. Optimization isn’t a one-time exercise—it must be an ongoing process, with automation and accountability as central themes. Enterprises are facing mounting pressure to justify their escalating cloud spend and recapture true business value from their investments. Without decisive action, waste will continue to erode any promised benefits. ... In the end, cloud’s potential for delivering economic and business value is real, but only for organizations willing to put in the planning, discipline, and governance that cloud demands. 


Why IT-OT convergence is a gamechanger for cybersecurity

The combination of IT and OT is a powerful one. It promises real-time visibility into industrial systems, predictive maintenance that limits downtime and data-driven decision making that gives everything from supply chain efficiency to energy usage a boost. When IT systems communicate directly with OT devices, businesses gain a unified view of operations – leading to faster problem solving, fewer breakdowns, smarter automation and better resource planning. This convergence also supports cost reduction through more accurate forecasting, optimised maintenance and the elimination of redundant technologies. And with seamless collaboration, IT and OT teams can now innovate together, breaking down silos that once slowed progress. Cybersecurity maturity is another major win. OT systems, often built without security in mind, can benefit from established IT protections like centralised monitoring, zero-trust architectures and strong access controls. Concurrently, this integration lays the foundation for Industry 4.0 – where smart factories, autonomous systems and AI-driven insights thrive on seamless IT-OT collaboration. ... The convergence of IT and OT isn’t just a tech upgrade – it’s a transformation of how we operate, secure and grow in our interconnected world. But this new frontier demands a new playbook that combines industrial knowhow with cybersecurity discipline.


How To Measure AI Efficiency and Productivity Gains

Measuring AI efficiency is a little like a "chicken or the egg" discussion, says Tim Gaus, smart manufacturing business leader at Deloitte Consulting. "A prerequisite for AI adoption is access to quality data, but data is also needed to show the adoption’s success," he advises in an online interview. ... The challenge in measuring AI efficiency depends on the type of AI and how it's ultimately used, Gaus says. Manufacturers, for example, have long used AI for predictive maintenance and quality control. "This can be easier to measure, since you can simply look at changes in breakdown or product defect frequencies," he notes. "However, for more complex AI use cases -- including using GenAI to train workers or serve as a form of knowledge retention -- it can be harder to nail down impact metrics and how they can be obtained." ... Measuring any emerging technology's impact on efficiency and productivity often takes time, but impacts are always among the top priorities for business leaders when evaluating any new technology, says Dan Spurling, senior vice president of product management at multi-cloud data platform provider Teradata. "Businesses should continue to use proven frameworks for measurement rather than create net-new frameworks," he advises in an online interview. 
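
For the simpler manufacturing cases Gaus describes, the measurement really is just a difference in frequencies. A tiny Python sketch with hypothetical numbers (not from the article):

```python
def pct_change(before: float, after: float) -> float:
    """Relative change between two rates; negative means improvement here."""
    return (after - before) / before

# Hypothetical defect rates per 1,000 units, before and after AI-driven QC
defects_before, defects_after = 4.2, 2.9
print(f"defect rate change: {pct_change(defects_before, defects_after):+.1%}")
```

The harder GenAI cases resist this kind of arithmetic because there is no single rate to difference; the before-and-after metric itself has to be designed first.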


The discipline we never trained for: Why spiritual quotient is the missing link in leadership

Spiritual Quotient (SQ) is the intelligence that governs how we lead from within. Unlike IQ or EQ, SQ is not about skill—it is about state. It reflects a leader’s ability to operate from deep alignment with their values, to stay centred amid volatility and to make decisions rooted in clarity rather than compulsion. It shows up in moments when the metrics don’t tell the full story, when stakeholders pull in conflicting directions. When the team is watching not just what you decide, but who you are while deciding it. It’s not about belief systems or spirituality in a religious sense; it’s about coherence between who you are, what you value, and how you lead. At its core, SQ is composed of several interwoven capacities: deep self-awareness, alignment with purpose, the ability to remain still and present amid volatility, moral discernment when the right path isn’t obvious, and the maturity to lead beyond ego. ... The workplace in 2025 is not just hybrid—it is holographic. Layers of culture, technology, generational values and business expectations now converge in real time. AI challenges what humans should do. Global disruptions challenge why businesses exist. Employees are no longer looking for charismatic heroes. They’re looking for leaders who are real, reflective and rooted.


Microsoft Confirms Password Deletion—Now Just 8 Weeks Away

The company's solution is to first move autofill and then any form of password management to Edge. "Your saved passwords (but not your generated password history) and addresses are securely synced to your Microsoft account, and you can continue to access them and enjoy seamless autofill functionality with Microsoft Edge." Microsoft has added an Authenticator splash screen with a "Turn on Edge" button as its ongoing campaign to switch users to its own browser continues. It's not just with passwords, of course: there are the endless warnings and nags within Windows and even pointers within security advisories to switch to Edge for safety and security. ... Microsoft wants users to delete passwords once that's done, so no legacy vulnerability remains, albeit Google has not gone quite that far as yet. You do need to remove SMS 2FA, though, and use an app or key-based code at a minimum. ... Notwithstanding these Authenticator changes, Microsoft users should use this as a prompt to delete passwords and replace them with passkeys, per the Windows maker's advice. This is especially true given increasing reports of two-factor authentication (2FA) bypasses that are rendering basic forms of 2FA redundant.


Sustainable cyber risk management emerges as industrial imperative as manufacturers face mounting threats

The ability of a business to adjust, absorb, and continue operating under pressure is becoming a performance metric in and of itself, measured in more than uptime or safety statistics. It's not a technical checkbox; it's a strategic commitment that is becoming the new baseline for industrial trust and continuity. At the heart of this change lies security by design. Organizations are working to integrate security into OT environments, working their way up from system architecture to vendor procurement and lifecycle management, rather than bolting on protections after deployment. ... The path is made more difficult by the acute lack of OT cyber skills, which could be overcome by employing specialists and establishing long-term pipelines through internal reskilling, knowledge transfer procedures, and partnerships with universities. Building sustainable industrial cyber risk management can be made more organized using the ISA/IEC 62443 industrial cybersecurity standards. Thanks to these widely recognized models, cyber defense is now a continuous, sustainable discipline rather than an after-the-fact response; they also allow industries to link risk mitigation to real industrial processes, guarantee system interoperability, and measure progress against common benchmarks.


Design Sprint vs Design Thinking: When to Use Each Framework for Maximum Impact

The Design Sprint is a structured five-day process created by Jake Knapp during his time at Google Ventures. It condenses months of work into a single workweek, allowing teams to rapidly solve challenges, create prototypes, and test ideas with real users to get clear data and insights before committing to a full-scale development effort. Unlike the more flexible Design Thinking approach, a Design Sprint follows a precise schedule with specific activities allocated to each day ...
The Design Sprint operates on the principle of "together alone" – team members work collaboratively during discussions and decision-making, but do individual work during ideation phases to ensure diverse thinking and prevent groupthink. ... Design Thinking is well-suited for broadly exploring problem spaces, particularly when the challenge is complex, ill-defined, or requires extensive user research. It excels at uncovering unmet needs and generating innovative solutions for "wicked problems" that don't have obvious answers. The Design Sprint works best when there's a specific, well-defined challenge that needs rapid resolution. It's particularly effective when a team needs to validate a concept quickly, align stakeholders around a direction, or break through decision paralysis.


Broadcom’s VMware Financial Model Is ‘Ethically Flawed’: European Report

Some of the biggest issues facing VMware cloud partners and customers in Europe include the company increasing prices after Broadcom axed VMware's former perpetual licenses and pay-as-you-go monthly pricing models. Another big issue was VMware cutting its product portfolio from thousands of offerings to just a few large bundles that are only available via subscription with a multi-year minimum commitment. "The current VMware licensing model appears to rely on practices that breach EU competition regulations which, in addition to imposing harm on its customers and the European cloud ecosystem, creates a material risk for the company," said the ECCO in its report. "Their shareholders should investigate and challenge the legality of such model." Additionally, the ECCO said Broadcom recently made changes to its partnership program that forced partners to choose between being a cloud service provider or a reseller. "It is common in Europe for CSP to play both [service provider and reseller] roles, thus these new requirements are a further harmful restriction on European cloud service providers' ability to compete and serve European customers," the ECCO report said.


Protecting Supply Chains from AI-Driven Risks in Manufacturing

Cybercriminals are notorious for exploiting AI and have set their sights on supply chains. Supply chain attacks are surging, with current analyses indicating a 70% likelihood of cybersecurity incidents stemming from supplier vulnerabilities. Additionally, Gartner projects that by the end of 2025, nearly half of all global organizations will have faced software supply chain attacks. Attackers manipulate data inputs to mislead algorithms, disrupt operations or steal proprietary information. Hackers targeting AI-enabled inventory systems can compromise demand forecasting, causing significant production disruptions and financial losses. ... Continuous validation of AI-generated data and forecasts ensures that AI systems remain reliable and accurate. The “black-box” nature of most AI products, where internal processes remain hidden, demands innovative auditing approaches to guarantee reliable outputs. Organizations should implement continuous data validation, scenario-based testing and expert human review to mitigate the risks of bias and inaccuracies. While black-box methods like functional testing offer some evaluation, they are inherently limited compared to audits of transparent systems, highlighting the importance of open AI development.
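
Continuous validation can start simply. Below is a minimal, hedged Python sketch of one such check – flagging AI-generated demand forecasts that fall far outside historical experience. The data and threshold are invented for illustration; a real deployment would layer scenario-based testing and expert human review on top.

```python
import statistics

def flag_outliers(history: list[float], forecast: list[float], k: float = 3.0) -> list[int]:
    """Return indices of forecast points more than k standard deviations
    from the historical mean - a crude guard against manipulated or
    drifting model output."""
    mean = statistics.fmean(history)
    spread = statistics.stdev(history)
    lo, hi = mean - k * spread, mean + k * spread
    return [i for i, value in enumerate(forecast) if not lo <= value <= hi]

# Hypothetical daily demand history and an AI-generated forecast
history = [102, 98, 110, 105, 95, 101, 99, 104]
forecast = [100, 97, 380, 103]   # 380 looks like corrupted or poisoned output

for i in flag_outliers(history, forecast):
    print(f"forecast point {i} ({forecast[i]}) outside expected range; hold for review")
```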


What's the State of AI Costs in 2025?

This year's report revealed that 44% of respondents plan to invest in improving AI explainability. Their goals are to increase accountability and transparency in AI systems as well as to clarify how decisions are made so that AI models are more understandable to users. Juxtaposed with uncertainty around ROI, this statistic signals further disparity between organizations' usage of AI and accurate understanding of it. ... Of the companies that use third-party platforms, over 90% reported high awareness of AI-driven revenue. That awareness empowers them to confidently compare revenue and cost, leading to very reliable ROI calculations. Conversely, companies that don't have a formal cost-tracking system have much less confidence that they can correctly determine the ROI of their AI initiatives. ... Even the best-planned AI projects can become unexpectedly expensive if organizations lack effective cost governance. This report highlights the need for companies to not merely track AI spend but optimize it via real-time visibility, cost attribution, and useful insights. Cloud-based AI tools account for almost two-thirds of AI budgets, so cloud cost optimization is essential if companies want to stop overspending. Cost is more than a metric; it's the most strategic measure of whether AI growth is sustainable. As companies implement better cost management practices and tools, they will be able to scale AI in a fiscally responsible way, confidently measure ROI, and prevent financial waste.