Quote for the day:
"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer
Data, creativity, AI, and marketing: where do we go from here?

While causes of inefficient data coordination vary, silos remain the most
frequent offender. There is still a widespread tendency to collect and store
data in isolated buckets, a situation often made all the more challenging by
lingering reliance on manual processing, as underscored by the fact that four in
ten cross-industry employees cite structuring, preparing and manipulating
information among their top data difficulties. Therefore, a sizable number of
organizations are working with fragmented and inconsistent data that requires
time-consuming wrangling and is often subject to human error. The obvious
problem this poses is a lack of the comprehensive data needed to inform sound
decisions. At the AI-assisted marketing level, faulty data has a high potential
to jeopardise creative efforts, resulting in irrelevant ads that miss the mark
with target audiences and brand goals, and in misguided strategic moves based on
skewed analysis. Of course, there are no quick fixes to tackle these
complications. But businesses can reach greater data maturity and efficacy by
reconfiguring their orchestration methods. With a streamlined system that
persistently delivers consolidated data, marketers will be equipped to extract
key performance and consumer insights that steer refined and precise AI-enhanced
activity.
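To make the orchestration point concrete, here is a minimal sketch of a consolidation pass over fragmented marketing data. The file names and column handling are assumptions for illustration; the idea is simply that siloed exports are normalised, merged, and deduplicated before any AI-assisted analysis runs on them.

# Minimal sketch: consolidating fragmented marketing data (hypothetical file names).
import pandas as pd

SOURCES = ["crm_export.csv", "ad_platform_export.csv", "web_analytics.csv"]  # assumed

def consolidate(paths):
    frames = []
    for path in paths:
        df = pd.read_csv(path)
        # Normalise column names so the same field is comparable across silos.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        df["source"] = path  # keep lineage for later auditing
        frames.append(df)
    combined = pd.concat(frames, ignore_index=True, sort=False)
    # Drop exact duplicates created when the same record lives in two silos.
    return combined.drop_duplicates()

if __name__ == "__main__":
    print(consolidate(SOURCES).head())

Even a small routine like this replaces a chunk of the manual wrangling described above, and it gives downstream AI tooling a single consolidated table to work from.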
How AI is transforming strategy development
Beyond these well-understood risks, gen AI presents five additional
considerations for strategists. First, it elevates the importance of access to
proprietary data. Gen AI is accelerating a long-term trend: the democratization
of insights. It has never been easier to leverage off-the-shelf tools to rapidly
generate insights that are the building blocks of any strategy. As the adoption
of AI models spreads, so do the consequences of relying on commoditized
insights. After all, companies that use generic inputs will produce generic
outputs, which yield generic strategies that, almost by definition, lead to
generic performance or worse. As a result, the importance of curating
proprietary data ecosystems (more on these below) that incorporate quantitative
and qualitative inputs will only increase. Second, the proliferation of data and
insights elevates the importance of separating signal from noise. This has long
been a challenge, but gen AI compounds it. We believe that as the technology
matures, it will be able to effectively pull out the signals that matter, but it
is not there yet. Third, as the ease of insight generation grows, so does the
value of executive-level synthesis. Business leaders—particularly those charged
with making strategic decisions—cannot operate effectively if they are buried in
data, even if that data is nothing but signal.
Why Cybersecurity Is Everyone’s Responsibility

Ultimately, cybersecurity is everyone’s responsibility because the fallout
affects us all when something goes wrong. When a company goes through a data
breach – say it’s ransomware – a number of people are taken to task, and even
more are impacted. First, the CEO and CISO will rightly be held accountable.
Next, security managers will bear their share of the blame and be scrutinized
for how they handled the situation. Then, laws and lawmakers will be audited to
see if the proper rules were in place. The organization will be investigated for
compliance violations and, if found guilty, will pay regulatory fines and legal
costs, and may even lose professional licenses. If the company cannot recover from
the reputational damage, revenue will be lost, and jobs will be cut. Lastly, and
most importantly, the users who lost their data will likely be impacted for
years, even a lifetime. Bank accounts and credit cards will need to be changed,
identity theft will be a pressing risk, and in the case of healthcare data
breaches, sensitive, unchangeable information could be leaked or used as
blackmail against the victims. ... The burden of cybersecurity rests with us
all. There is an old saying attributed to Dale Carnegie: “Here lies the body of
William Jay, who died maintaining his right of way— He was right, dead right, as
he sped along, but he’s just as dead as if he were wrong.”
Spy vs spy: Security agencies help secure the network edge

“Products designed with Secure by Design principles prioritize the security of
customers as a core business requirement, rather than merely treating it as a
technical feature,” the introductory web page said. “During the design phase
of a product’s development lifecycle, companies should implement Secure by
Design principles to significantly decrease the number of exploitable flaws
before introducing them to the market for widespread use or consumption.
Out-of-the-box, products should be secure with additional security features
such as multi-factor authentication (MFA), logging, and single sign-on (SSO)
available at no extra cost.” ... However, she doesn’t feel that lumping
together internet connected firewalls, routers, IoT devices, and OT systems in
an advisory is helpful to the community, and “neither is calling them ‘edge
devices,’ because it assumes that enterprise IT is the center of the universe
and the ‘edge’ is out there.” “That may be true for firewalls, routers, and
VPN gateways, but not for OT systems,” she continued. ... Many are internet
connected to support remote operations and maintenance, she noted, so “the
goal there should be to give advice on how to remote into those systems
securely, and the tone of the advisories should be targeted to the production
realities where IT security tools and processes are not always a good
idea.”
Will the end of Windows 10 accelerate CIO interest in AI PCs?

“The vision around AI PCs is that, over time, more of the models, starting
with small language models, and then quantized large language models … more of
those workloads will happen locally, faster, with lower latency, and you won’t
need to be connected to the internet and it should be less expensive,” the IDC
analyst adds. “You’ll pay a bit more for an AI PC but [the AI workload is] not
on the cloud and then arguably there’s more profit and it’s more secure.” ...
“It’s smart for CIOs to consider some early deployments of these to bring the
AI closer to the employees and processes,” Melby says. “A side benefit is that
it keeps the compute local and reduces cyber risk to a degree. But it takes a
strategic view and precision targeting. The costs of AI PCs/laptops are at a
premium right now, so we really need a compelling business case, and the
potential for reduced cloud costs could help break loose those kinds of
justifications.” Not all IT leaders are on board with running AI on PCs
and laptops. “Unfortunately, there are many downsides to this approach,
including being locked into the solution, upgrades becoming more difficult,
and not being able to benefit from any incremental improvements,” says Tony
Marron, managing director of Liberty IT at Liberty Mutual.
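To ground the local-inference argument, here is a minimal sketch of running a quantized small language model entirely on the device, with no cloud round trip. It assumes the open-source llama-cpp-python package and a quantized GGUF model already downloaded to disk; the model path and prompt are placeholders.

# Minimal sketch: on-device inference with a quantized model (no network call).
# Assumes: pip install llama-cpp-python, plus a quantized GGUF model on local disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model-q4.gguf",  # placeholder path to a quantized model
    n_ctx=2048,  # modest context window to fit laptop memory
)

# Everything below runs locally: the prompt and output never leave the machine.
result = llm(
    "Summarise the key risks of delaying a Windows 10 migration in two sentences.",
    max_tokens=128,
    temperature=0.2,
)
print(result["choices"][0]["text"])

The trade-offs described above show up directly here: inference speed depends on the PC's NPU/CPU/GPU, but there is no per-token cloud cost and the data stays local.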
Self-sovereign identity could transform fraud prevention, but…

Despite these challenges, SSI has the potential to be a powerful tool in the
fight against fraud. Consider the growing use of mobile driver’s licenses
(mDLs). These digital credentials allow users to prove their identity quickly
and securely without exposing unnecessary personal information. Unlike
traditional forms of identification, which often reveal more data than needed,
SSI-based credentials operate on the principle of minimal disclosure, only
sharing the required details. This limits the amount of exploitable
information in circulation and reduces identity theft risk. Another promising
area is passwordless authentication. For years, we’ve talked about the death
of the password, yet reliance on weak, easily compromised credentials
persists. SSI could accelerate the transition to more secure authentication
mechanisms, using biometrics and cryptographic certificates instead of
passwords. By eliminating centralized repositories of login credentials,
businesses can significantly reduce the risk of credential-stuffing attacks
and phishing attempts. However, the likelihood of a fully realized SSI wallet
that consolidates identity documents, payment credentials and other sensitive
information remains low, at least in the near future. The convenience factor
isn’t there yet, and without significant consumer demand, businesses have
little motivation to push for mass adoption.
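The minimal-disclosure principle can be shown with a deliberately simplified sketch. This is not a real SSI, mDL, or SD-JWT implementation: an HMAC stands in for the issuer's digital signature purely to keep the example self-contained. The issuer commits to salted hashes of each attribute, the holder later reveals only one attribute plus its salt, and the verifier checks that value against the signed commitments without seeing anything else.

# Simplified illustration of selective (minimal) disclosure; not a production SSI scheme.
import hashlib, hmac, json, os

ISSUER_KEY = os.urandom(32)  # stands in for the issuer's signing key

def commit(value: str, salt: bytes) -> str:
    return hashlib.sha256(salt + value.encode()).hexdigest()

# Issuance: the issuer commits to every attribute and signs the set of commitments.
attributes = {"name": "Alice Example", "date_of_birth": "1990-01-01", "over_18": "true"}
salts = {k: os.urandom(16) for k in attributes}
commitments = {k: commit(v, salts[k]) for k, v in attributes.items()}
signature = hmac.new(ISSUER_KEY, json.dumps(commitments, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()

# Presentation: the holder reveals only the attribute the verifier actually needs.
disclosed = {"over_18": (attributes["over_18"], salts["over_18"].hex())}

# Verification: check the signature over all commitments, then check the one
# disclosed value against its commitment. Name and date of birth stay hidden.
expected = hmac.new(ISSUER_KEY, json.dumps(commitments, sort_keys=True).encode(),
                    hashlib.sha256).hexdigest()
assert hmac.compare_digest(signature, expected)
value, salt_hex = disclosed["over_18"]
assert commit(value, bytes.fromhex(salt_hex)) == commitments["over_18"]
print("Verified over_18 =", value, "without learning any other attribute")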
The Staging Bottleneck: Microservices Testing in FinTech

Two common scaling strategies exist: mocking dependencies, which sacrifices
fidelity and risks failures in critical integrations, and duplicating staging
environments, which is costly and complex due to compliance needs. Teams often
resort to shared environments, causing bottlenecks, interference and missed
bugs — slowing development and increasing QA overhead. ... By multiplexing the
baseline staging setup, sandboxes provide tailored environments for individual
engineers or QA teams without adding compliance risks or increasing
maintenance burdens, as they inherit the same compliance and configuration
frameworks as production. These environments allow teams to work independently
while maintaining fidelity to production conditions. Sandboxes integrate
seamlessly with external APIs and dependencies, replicating real-world
scenarios such as rate limits, timeouts and edge cases. This enables robust
testing of workflows and edge cases while preserving isolation to avoid
disruptions across teams or systems. ... By adopting sandboxes, FinTech
organizations can enable high-quality, efficient development cycles, ensuring
compliance while unlocking innovation at scale. This paradigm shift away from
monolithic staging environments toward dynamic, scalable sandboxes gives
FinTech companies a critical competitive advantage.
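One common way to multiplex a shared baseline environment, as described above, is header-based request routing: each sandbox gets an identifier, and a routing layer sends a request to a sandboxed service instance only when one exists for that identifier, falling back to the shared baseline otherwise. The sketch below is a hypothetical illustration of that routing rule, not any specific vendor's implementation.

# Hypothetical sketch of sandbox-aware routing over a shared baseline environment.
from typing import Dict, Optional

BASELINE = {"payments": "http://payments.baseline.internal",
            "ledger":   "http://ledger.baseline.internal"}

# Only the services an engineer has changed are deployed into their sandbox.
SANDBOXES: Dict[str, Dict[str, str]] = {
    "alice-feature-x": {"payments": "http://payments.alice-feature-x.internal"},
}

def resolve(service: str, sandbox_id: Optional[str]) -> str:
    """Route to a sandboxed instance when one exists, else fall back to baseline."""
    if sandbox_id and service in SANDBOXES.get(sandbox_id, {}):
        return SANDBOXES[sandbox_id][service]
    return BASELINE[service]

# A request tagged with the sandbox identifier hits Alice's payments build but
# still uses the shared baseline ledger, so nothing has to be duplicated.
print(resolve("payments", "alice-feature-x"))  # sandboxed instance
print(resolve("ledger", "alice-feature-x"))    # shared baseline

Because the sandbox inherits everything it does not override from the baseline, teams get isolation for the services under test without the cost and compliance overhead of cloning the whole environment.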
From Code to Culture: Adapting Workplaces to the AI Era
As AI reshapes industries, it also exposes a critical gap in workforce
readiness. The skills required to excel in an AI-driven world are evolving
rapidly, and many employees find their current capabilities misaligned with
these new demands. In this context, reskilling is not just a response to
technological disruption; it is a strategic necessity for ensuring long-term
organisational resilience. Today’s workforce is broadening its skillset at an
unprecedented pace. Professionals are acquiring 40% more diverse skills compared
to five years ago, reflecting the growing need to adapt to the complexities of
AI-integrated workplaces. AI literacy has emerged as a crucial area of focus,
encompassing abilities like prompt engineering and proficiency with tools. ...
Beyond its operational benefits, AI is reimagining innovation and strategic
decision-making in a volatile business environment characterised by economic
uncertainty and rapid technological shifts. However, organisations must tread
carefully. AI is not a panacea, and its effectiveness depends on thoughtful
implementation. Ethical considerations like data privacy, algorithmic bias, and
the potential for job displacement must be addressed to ensure that AI augments
rather than undermines human potential. Transparent communication about AI’s
role in the workplace can foster trust and help employees understand its
benefits.
CIOs and CISOs grapple with DORA: Key challenges, compliance complexities

“As often happens with such ambitious regulations, the path to compliance is
particularly complex,” says Giuseppe Ridulfo, deputy head of the organization
department and head of IS at Banca Etica. “This is especially true for smaller
entities, such as Banca Etica, which find themselves having to face significant
structural challenges. DORA, although having shared objectives, lacks a
principle of proportionality that takes into account the differences between
large institutions and smaller banks.” This is compounded for smaller
organizations due to the prevalence of outsourcing for these firms, Ridulfo
explains. “This operating model, which allows access to advanced technologies
and skills, clashes with the stringent requirements of the regulation, in
particular those that impose rigorous control over third-party suppliers and
complex management of contracts relating to essential or important functions,”
he says. ... The complexity of DORA, therefore, is not in the text itself,
although substantial, but in the work it entails for compliance. As Davide
Baldini, lawyer and partner of the ICT Legal Consulting firm, points out, “DORA
is a very clear law, as it is a regulation, which is applied equally in all EU
countries and contains very detailed provisions.”
True Data Freedom Starts with Data Integrity
Data integrity is essential to ensuring business continuity, and the movement of
data poses a significant risk. A lack of pre-migration testing is the main cause
of issues such as data corruption and data loss during the movement of data.
These issues lead to unexpected downtime, reputational damage, and loss of
essential information. As seen in this year’s global incident, one fault, no
matter how small, can result in a significant negative impact on the business
and its stakeholders. This incident sends a clear message – testing before
implementation is essential. Without proper testing, organizations cannot
identify potential issues and implement corrective measures. ... This includes
testing for both functionality, or how well the system operates after migration,
and economics, the cost-effectiveness of the system or application.
Functionality testing ensures a system continues to meet expectations. Economics
testing involves examining resource consumption, service costs and overall
scalability to ascertain whether the solution is economically sustainable for
the business. This is particularly important with cloud-based migrations. While
organizations can manually conduct these audits, tools on the market can also
help conduct regular automated data integrity audits.
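As a small, concrete example of such an automated pre/post-migration audit, the sketch below compares row counts and per-row checksums between a source and a target table. The SQLite backend, table name, key column, and file names are assumptions made purely for illustration.

# Minimal sketch of an automated data integrity audit for a migration (assumed schema).
import hashlib
import sqlite3

def table_fingerprint(conn, table: str, key: str):
    """Return (row_count, {key_value: checksum}) for every row in the table."""
    cols = [d[0] for d in conn.execute(f"SELECT * FROM {table} LIMIT 0").description]
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY {key}").fetchall()
    checksums = {}
    for row in rows:
        record = "|".join(str(v) for v in row)
        checksums[row[cols.index(key)]] = hashlib.sha256(record.encode()).hexdigest()
    return len(rows), checksums

def audit(source_db: str, target_db: str, table: str = "customers", key: str = "id"):
    src = sqlite3.connect(source_db)
    dst = sqlite3.connect(target_db)
    src_count, src_sums = table_fingerprint(src, table, key)
    dst_count, dst_sums = table_fingerprint(dst, table, key)
    missing = set(src_sums) - set(dst_sums)
    corrupted = {k for k in src_sums.keys() & dst_sums.keys() if src_sums[k] != dst_sums[k]}
    print(f"rows: source={src_count} target={dst_count}")
    print(f"missing after migration: {sorted(missing)}")
    print(f"checksum mismatches: {sorted(corrupted)}")

# audit("pre_migration.db", "post_migration.db")  # hypothetical file names

Run against a pre-migration snapshot and the migrated target, a check like this surfaces dropped or corrupted records before they turn into the downtime and data loss described above; the economic side of testing still needs separate cost and scalability analysis.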