Quote for the day:
"The best way to predict the future is
to create it." -- Peter Drucker

Here's the uncomfortable truth: doing nothing is still doing something – and
very often, it's the wrong thing. We saw this play out at the start of the year
when Donald Trump's likely return to the White House and the prospect of fresh
tariffs sent ripples through global markets. Investors froze, and while the
tariffs have been shelved (for now), the real damage had already been done – not
to portfolios, but to behaviour. This is decision paralysis in action. And in my
experience, it's most acute among entrepreneurs and high-net-worth individuals
post-exit, many of whom are navigating wealth independently for the first time.
It's human nature to crave certainty, especially when it comes to money, but if
you're waiting for a time when everything is calm, clear, and safe before
investing or making a financial decision, I've got bad news – that day is never
going to arrive. Markets move, the political climate is noisy, the global
economy is always in flux. If you're frozen by fear, your money isn't standing
still – it's slipping backwards. ... Entrepreneurs are used to taking calculated
risks, but when it comes to managing post-exit wealth or personal finances, many
find themselves out of their depth. A little knowledge can be a dangerous thing
– and half-understanding the tax system, the economy, or the markets can lead to
costly mistakes.

One reason is that agilists introduced too many conflicting and divergent
approaches that fragmented the market. “Agile” meant so many things to different
people that hiring managers could never predict what they were getting when a
candidate’s resume indicated they were “experienced in agile development.”
Another reason organizations failed to generate value with “agile” was that too
many agile approaches focused on changing practices or culture while ignoring
the larger delivery system in which the practices operate, reinforcing a culture
that is resistant to change. This shouldn’t be a surprise to people following
our industry, as my colleague and LeadingAgile CEO Mike Cottmeyer has been
talking about why agile fails for over a decade, such as his Agile 2014
presentation, Why is Agile Failing in Large Enterprises… and what you can do
about it. The final reason that led “agile” to its current state of disfavor is
that early in the agile movement there was too much money to be made in training
and certifications. The industry’s focus on certifications had the effect over
time of misaligning the goals of the methodology / training companies and their
customers. “Train everyone. Launch trains” may be a short-term success pattern
for a methodology purveyor, but it is ultimately unsustainable because the
training and practices are too disconnected from tangible results senior
executives need to compete and win in the market.

Staffing and talent issues are affecting CIOs’ ability to double down on
strategic and innovation objectives, according to 54% of this year’s
respondents. As a result, closing the skills gap has become a huge priority.
“What’s driving it in some CIOs’ minds is tied back to their AI deployments,”
says Mark Moccia, a vice president and research director at Forrester. “They’re
under a lot of cost pressure … to get the most out of AI deployments” to
increase operational efficiencies and lower costs, he says. “It’s driving more
of a need to close the skills gap and find people who have deployed AI
successfully.” AI, generative AI, and cybersecurity top the list of skills gaps
preventing organizations from achieving objectives, according to an April
Gartner report. Nine out of 10 organizations have adopted or plan to adopt
skills-based talent growth to address those challenges. ... The best
approach, Karnati says, is developing talent from within. “We’re equipping our
existing teams with the space, tools, and support needed to explore genAI
through practical application, including rapid prototyping, internal hackathons,
and proof-of-concept sprints,” Karnati says. “These aren’t just technical
exercises — they’re structured opportunities for cross-functional learning,
where engineers, product leads, and domain experts collaborate to test real use
cases.”

Technically, the term is fault-tolerant quantum computing. The qubits that
quantum computers use to process data have to be kept in a delicate state –
sometimes frozen to temperatures very close to absolute zero – in order to stay
stable and not “decohere”. Keeping them in this state for longer periods of time
requires large amounts of energy but is necessary for more complex calculations.
Recent research by Google, among others, is pointing the way towards developing
more robust and resilient quantum methods. ... One of the most exciting
prospects ahead of us involves applying quantum computing to AI. Firstly, many
AI algorithms involve solving the types of problems that quantum computers excel
at, such as optimization problems. Secondly, with its ability to more accurately
simulate and model the physical world, quantum computing will generate huge
amounts of
synthetic data. ... Looking beyond the next two decades, quantum computing
will change the world in ways we can’t even imagine yet, just as the leap
to transistors and microchips enabled the digital world and the internet of
today. It will tackle currently impossible problems, help us create fantastic
new materials with amazing properties and medicines that affect our bodies in
new ways, and help us tackle huge problems like climate change and cleaning the
oceans.
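
To make the stakes of decoherence concrete, here is a minimal sketch in plain
numpy, assuming nothing about any particular vendor's hardware: a single
simulated qubit in superposition loses coherence under repeated
amplitude-damping noise, the decay that fault-tolerant designs must outpace.
The damping rate is an illustrative, made-up figure.

```python
import numpy as np

gamma = 0.02  # hypothetical per-step decay probability (illustrative only)

# Kraus operators for the amplitude-damping channel
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)
rho = np.outer(plus, plus)                 # density matrix of |+>

for step in range(1, 101):
    rho = K0 @ rho @ K0.T + K1 @ rho @ K1.T   # real matrices, so .T == dagger
    if step % 25 == 0:
        # |rho[0,1]| is the qubit's coherence; it decays toward zero, which
        # is the "decoherence" that error correction has to outpace.
        print(f"step {step:3d}: coherence = {abs(rho[0, 1]):.4f}")
```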

Every technological leap will be used against you - Information technology is a
discipline built largely on rapid advances. Some of these technological leaps
can help improve your ability to secure the enterprise. But every last one of
them brings new challenges from a security perspective, not the least of which
is how they will be used to attack your systems, networks, and data. ... No
matter how good you are, your organization will be victimized - This is a hard
one to swallow, but if we take the “five stages of grief” approach to
cybersecurity, it’s better to reach the “acceptance” level than to remain in
denial because much of what happens is simply out of your control. A global
survey of 1,309 IT and security professionals found that 79% of organizations
suffered a cyberattack within the past 12 months, up from 68% just a year ago,
according to cybersecurity vendor Netwrix’s Hybrid Security Trends Report. ...
Breach blame will fall on you — and the fallout could include personal liability
- As if getting victimized by a security breach isn’t enough, new Securities and
Exchange Commission (SEC) rules put CISOs in the crosshairs for potential
criminal prosecution. The new rules, which went into effect in 2023, require
publicly listed companies to report any material cybersecurity incident within
four business days.

Whilst AI-generated action figures individually have a small impact - a drop in
the ocean, you could say - trends like this exemplify how easy it is to use AI
en masse, and collectively create an ocean of demand. Seeing the number of
individuals, even those with knowledge of AI’s lofty resource consumption,
partaking in the creation of these avatars, makes me wonder if we need greater
awareness of the collective impact of GenAI. Now, I want to take a moment to
clarify this is not a criticism of those producing AI-generated content, or of
anyone who has taken part in the ‘action figure’ trend. I’ve certainly had many
goes with DALL-E for fun, and taken part in various trends in my time, but the
volume of these recent images caught my attention. Many of the conversations I
had at Connect New York a few weeks ago addressed sustainability and the need
for industry collaboration, but perhaps we should also be instilling more
awareness from an end-user point of view. After all, ChatGPT, according to the
Washington Post, consumes 39.8 million kWh per day. I’d be fascinated to see the
full picture of power and water consumption from the AI-generated action
figures. Whilst it will only account for a tiny fraction of overall demand,
these drops can have a tendency to accumulate.
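
As a rough illustration of how those drops add up, here is a back-of-envelope
sketch. The per-image energy and image-count figures are invented assumptions;
only the 39.8 million kWh per day number comes from the Washington Post
estimate cited above.

```python
CHATGPT_KWH_PER_DAY = 39_800_000      # Washington Post estimate cited above
WH_PER_IMAGE = 3.0                    # assumed energy per generated image (Wh)
IMAGES_IN_TREND = 50_000_000          # assumed number of trend images

trend_kwh = IMAGES_IN_TREND * WH_PER_IMAGE / 1000   # Wh -> kWh
share = trend_kwh / CHATGPT_KWH_PER_DAY
print(f"Trend total: {trend_kwh:,.0f} kWh ({share:.2%} of one day of ChatGPT)")
# Individually negligible, but 150,000 kWh is roughly the daily electricity
# use of a few thousand homes once the drops are added together.
```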

Teams often have few concrete requirements about scalability. The business may
not be a reliable source of information but, as we noted above, it does have a
business case with implicit scalability needs. It's easy for teams to focus on
functional needs early on and ignore these implicit scaling requirements. They
may hope that scaling won't be a problem or that they can
solve the problem by throwing more computing resources at it. They have a
legitimate concern about overbuilding and increasing costs, but hoping that
scaling problems won't happen is not a good scaling strategy. Teams need to
consider scaling from the start. ... The MVP often has implicit scalability
requirements, such as "in order for this idea to be successful we need to
recruit ten thousand new customers". Asking the right questions and engaging
in collaborative dialogue can often uncover these. Often these relate to
success criteria for the MVP experiment. ... Some people see asynchronous
communication as another scaling panacea because it allows work to proceed
independently of the task that initiated the work. The theory is that the main
task can do other things while work is happening in the background. So long as
the initiating task does not, at some point, need the results of the
asynchronous task to proceed, asynchronous processing can help a system to
scale.
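
A minimal sketch of that pattern using Python's asyncio, with illustrative
names that are not from the article: the sign-up task schedules the background
work and returns immediately, which only aids scaling because it never waits
on the result.

```python
import asyncio

background_tasks: set[asyncio.Task] = set()   # hold references so tasks aren't GC'd

async def send_welcome_email(user_id: int) -> None:
    await asyncio.sleep(1.0)          # stand-in for slow I/O-bound work
    print(f"welcome email sent to user {user_id}")

async def sign_up(user_id: int) -> str:
    # Fire-and-forget: schedule the background task but don't await its
    # result, so the initiating task can respond immediately.
    task = asyncio.create_task(send_welcome_email(user_id))
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
    return f"user {user_id} created"

async def main() -> None:
    print(await sign_up(42))          # returns at once
    await asyncio.sleep(1.5)          # keep the loop alive for the demo

asyncio.run(main())
```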

By contrast, data quality builds on methods for confirming the integrity of
the data and also considers the data’s uniqueness, timeliness, accuracy, and
consistency. Data is considered “high quality” when it ranks high in all these
areas based on the assessment of data analysts. High-quality data is
considered trustworthy and reliable for its intended applications based on the
organization’s data validation rules. The benefits of data integrity and data
quality are distinct, despite some overlap. Data integrity allows a business
to recover quickly and completely in the event of a system failure, prevent
unauthorized access to or modification of the data, and support the company’s
compliance efforts. By confirming the quality of their data, businesses
improve the efficiency of their data operations, increase the value of their
data, and enhance collaboration and decision-making. Data Quality efforts also
help companies reduce their costs, enhance employee productivity, and
establish closer relationships with their customers. Implementing a data
integrity strategy begins by identifying the sources of potential data
corruption in your organization. These include human error, system
malfunctions, unauthorized access, failure to validate and test, and lack of
governance. A data integrity plan operates at both the database level and
business level.
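
As a toy illustration, the dimensions above can be expressed as simple
validation rules; the field names, rules, and thresholds here are assumptions,
not from the article.

```python
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)
records = [
    {"id": 1, "email": "a@example.com", "updated": NOW},
    {"id": 2, "email": "a@example.com", "updated": NOW - timedelta(days=400)},
    {"id": 3, "email": "not-an-email", "updated": NOW},
]

def uniqueness(rows, key):
    """Share of values for `key` that are not duplicated."""
    values = [r[key] for r in rows]
    return len(set(values)) / len(values)

def timeliness(rows, max_age_days=365):
    """Share of rows updated within the allowed window."""
    cutoff = NOW - timedelta(days=max_age_days)
    return sum(r["updated"] >= cutoff for r in rows) / len(rows)

def accuracy(rows):
    """Share of rows passing a (crude) validity rule for email format."""
    return sum("@" in r["email"] for r in rows) / len(rows)

for dimension, score in [("uniqueness", uniqueness(records, "email")),
                         ("timeliness", timeliness(records)),
                         ("accuracy", accuracy(records))]:
    print(f"{dimension}: {score:.0%}")   # compare against the org's threshold
```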

With BaaS, enterprises have quick, easy access to their data. Providers store
multiple copies of backups in different locations so that data can be recovered
when lost due to outages, failures or accidental deletion. BaaS also features
geographic distribution and automatic failover, whereby data handling is
automatically moved to a different server or system in the event of an incident
to ensure that it is safe and readily available. ... With BaaS, the provider
uses its own cloud infrastructure and expertise to handle the entire backup and
restoration process. Enterprises simply connect to the backup engine, set their
preferences and the platform handles file transfer, encryption and maintenance.
Automation is the engine that drives BaaS, helping ensure that data is
continuously backed up without slowing down network performance or interrupting
day-to-day work. Enterprises first select the data they need backed up, whether
simple files or complex apps, along with backup frequency and data retention
times.
... Enterprises shouldn’t just jump right into BaaS — proper preparation is
critical. Firstly, it is important to define a backup policy that identifies the
organization’s critical data that must be backed up. This policy should also
include backup frequency, storage location and how long copies should be
retained.
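
A backup policy like the one described might be captured along these lines;
this is a hypothetical schema for illustration, not any provider's actual
configuration format.

```python
from dataclasses import dataclass, field

@dataclass
class BackupPolicy:
    name: str
    sources: list[str]                # critical data covered by this policy
    frequency_hours: int              # how often backups run
    retention_days: int               # how long copies are kept
    storage_locations: list[str] = field(
        default_factory=lambda: ["region-a", "region-b"])  # geographic spread

policy = BackupPolicy(
    name="customer-db",
    sources=["postgres://prod/customers", "/srv/files/contracts"],
    frequency_hours=4,
    retention_days=90,
)
print(policy)
```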

AI is expanding the CISO’s required skillset beyond cybersecurity to include
fluency in data science, machine learning fundamentals, and understanding how to
evaluate AI models – not just technically, but from a governance and risk
perspective. Understanding how AI works and how to use it responsibly is
essential. Fortunately, AI has also evolved how we train our teams. For example,
adaptive learning platforms that personalize content and simulate real-world
scenarios are assisting in closing the skills gap more effectively. Ultimately,
to become successful in the AI space, both CISOs and their teams will need to
grasp how AI models are trained, the data they rely on, and the risks they may
introduce. CISOs should always prioritize accountability and transparency. Red
flags to look out for include a lack of explainability or insufficient auditing
capabilities, both of which leave companies vulnerable. When evaluating a tool,
it’s important to understand how it handles sensitive data and whether it has
proven success in similar environments. Beyond that, it’s also vital to evaluate
how well the tool aligns with your governance model, whether it can be audited,
and whether it integrates well into your existing systems. Lastly, overpromising
capabilities
or providing an unclear roadmap for support are signs to proceed with caution.
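
One way to operationalise those red flags is a simple checklist; the criteria
below merely restate the guidance above as hypothetical code.

```python
CHECKS = {
    "explainability": "Can the vendor explain individual model decisions?",
    "auditability": "Are decisions and data flows logged and auditable?",
    "data_handling": "Is sensitive data handled per your governance model?",
    "track_record": "Has the tool proven itself in similar environments?",
    "roadmap": "Is the support roadmap clear and realistic?",
}

def red_flags(answers: dict[str, bool]) -> list[str]:
    """Return the questions a candidate tool failed."""
    return [q for key, q in CHECKS.items() if not answers.get(key, False)]

flags = red_flags({"explainability": False, "auditability": True,
                   "data_handling": True, "track_record": False,
                   "roadmap": True})
print("Proceed with caution:" if flags else "No red flags found.")
for question in flags:
    print(" -", question)
```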