Quote for the day:
"Leadership is like beauty; it's hard to define, but you know it when you see it." -- Warren Bennis
Why CIOs need to master the art of adaptation
Adaptability sounds simple in theory, but when and how CIOs should walk away
from tested tools and procedures is another matter. ... “If those criteria are
clear, then saying ‘no’ to a vendor or ‘not yet’ to a CEO is measurable and people
can see the reasoning, rather than it feeling arbitrary,” says Dimitri Osler ...
Not every piece of wisdom about adaptability deserves to be followed. Mantras
like ‘fail fast’ sound inspiring but can lead CIOs astray. The risk is spreading
teams too thin, chasing fads, and losing sight of real priorities. “The most
overrated advice is this idea you immediately have to adopt everything new or
risk being left behind,” says Osler. “In practice, reckless adoption just
creates technical and cultural debt that slows you down later.” Another piece of
advice he’d challenge is the idea of constant reorganization. “Change for the
sake of change doesn’t make teams more adaptive,” he says. “It destabilizes
them.” Real adaptability comes from anchored adjustments, where every shift is
tied to a purpose; otherwise, you’re just creating motion without progress,
Osler adds. ... A powerful way to build adaptability is to create a culture of
constant learning, in which employees at all levels are expected to grow. This
can be achieved by seeing change as an opportunity, not a disruption. Structures
like flatter hierarchies can also play a role because they can enable fast
decision-making and give people the confidence to respond to shifting
circumstances, Madanchian adds.
Building Responsible Agentic AI Architecture
The architecture of agentic AI with guardrails defines how intelligent systems
progress from understanding intent to taking action—all while being
continuously monitored for compliance, contextual accuracy, and ethical
safety. At its core, this architecture is not just about enabling autonomy but
about establishing structured accountability. Each layer builds upon the
previous one to ensure that the AI system functions within defined
operational, ethical, and regulatory boundaries. ... Implementing agentic
guardrails requires a combination of technical, architectural, and governance
components that work together to ensure AI systems operate safely and
reliably. These components span across multiple layers — from data ingestion
and prompt handling to reasoning validation and continuous monitoring —
forming a cohesive control infrastructure for responsible AI behavior. ...
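To make the layering concrete, here is a minimal sketch of how such guardrails might be chained in code. It is illustrative only, not drawn from the article: the layer functions, request fields, and the GuardrailViolation exception are all hypothetical.

from typing import Callable, List

class GuardrailViolation(Exception):
    """Raised when any guardrail layer vetoes a request."""

def validate_input(request: dict) -> None:
    # Data-ingestion / prompt-handling layer: reject malformed or empty intents.
    if not str(request.get("intent", "")).strip():
        raise GuardrailViolation("input layer: empty or missing intent")

def check_policy(request: dict) -> None:
    # Reasoning-validation layer: keep the proposed action inside the allowed boundary.
    if request.get("action") not in request.get("allowed_actions", set()):
        raise GuardrailViolation(f"policy layer: action {request.get('action')!r} not permitted")

def audit_log(request: dict) -> None:
    # Continuous-monitoring layer: record every decision for later review.
    print(f"AUDIT intent={request.get('intent')!r} action={request.get('action')!r}")

GUARDRAILS: List[Callable[[dict], None]] = [validate_input, check_policy, audit_log]

def run_with_guardrails(request: dict, execute: Callable[[dict], str]) -> str:
    # Each layer can veto by raising; the agent acts only if every layer passes.
    for layer in GUARDRAILS:
        layer(request)
    return execute(request)

# Example: a permitted action passes all layers; anything outside the boundary is blocked.
req = {"intent": "refund order 42", "action": "issue_refund",
       "allowed_actions": {"issue_refund", "send_email"}}
print(run_with_guardrails(req, lambda r: f"executed {r['action']}"))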
The deployment of AI guardrails spans nearly every major industry where
automation, decision-making, and compliance intersect. Guardrails act as the
architectural assurance layer that ensures AI systems operate safely,
ethically, and within regulatory and operational constraints. ... While
agentic AI holds extraordinary potential, recent failures across industries
underscore the need for comprehensive governance frameworks, robust
integration strategies, and explicit success criteria.
Decoding Black Box AI: The Global Push for Explainability and Transparency
The relationship between regulatory requirements and standards development
highlights the connection between legal, technical, and institutional domains.
Regulations like the AI Act can guide standardization, while standards help
put regulatory principles into practice across different regions. Yet, on a
global level, we mostly see recognition of the importance of explainability
and encouragement of standards, rather than detailed or universally adopted
rules. To bridge this gap, further research and global coordination are needed
to harmonize emerging standards with regulatory frameworks, ultimately
ensuring that explainability is effectively addressed as AI technologies
proliferate across borders. ... However, in practice, several of these
strategies tend to equate explainability primarily with technical
transparency. They often frame solutions in terms of making AI systems’ inner
workings more accessible to technical experts, rather than addressing broader
societal or ethical dimensions. ... Transparency initiatives are
increasingly recognized for fostering stakeholder trust and promoting the
adoption of AI technologies, especially where clear regulatory directives on AI
explainability have not yet been developed. By providing stakeholders with
visibility into the underlying algorithms and data usage, these initiatives
demystify AI systems and serve as foundational elements for building
credibility and accountability within organizations.
How neighbors could spy on smart homes
Even with strong wireless encryption, privacy in connected homes may be
thinner than expected. A new study from Leipzig University shows that someone
in an adjacent apartment could learn personal details about a household
without breaking any encryption. ... the analysis focused on what leaks
through side channels, the parts of communication that remain visible even
when payloads are protected. Every wireless packet exposes timing, size, and
signal strength. By watching these details over time, the researcher could map
out daily routines. ... “Given the black box nature of this passive monitoring,
even if the CSI was accurate, you would have no ground truth to ‘decode’ the
readings to assign them to human behavior. So technically it would be
advantageous, but you would have a hard time in classifying this data.” Once
these patterns were established, a passive observer could tell when someone
was awake, working, cooking, or relaxing. Activity peaks from a smart speaker
or streaming box pointed to media consumption, while long quiet periods
matched sleeping hours. None of this required access to the home’s WiFi
network. ... The findings show that privacy exposure in smart homes goes
beyond traditional hacking. Even with WPA2 or WPA3 encryption, network traffic
leaks enough side information for outsiders to make inferences about
occupants. A determined observer could build profiles of daily schedules,
detect absences, and learn which devices are in use.
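Purely to illustrate the side-channel idea, the sketch below shows how coarse routines could be inferred from nothing more than packet timestamps and sizes. It is not the Leipzig study's actual tooling, and the capture data is invented.

from collections import defaultdict

# Hypothetical capture: (time_of_day_in_seconds, encrypted_frame_size_in_bytes).
# No payloads are decrypted; only metadata visible to any nearby receiver is used.
packets = [
    (7 * 3600 + 12, 1400), (7 * 3600 + 15, 1400),   # morning burst (e.g. streaming)
    (13 * 3600 + 2, 200),                            # sparse midday traffic
    (22 * 3600 + 40, 1500), (22 * 3600 + 41, 1500),  # evening activity
]

def bytes_per_hour(capture):
    """Total observed bytes per hour of day; peaks and quiet stretches expose routines."""
    hourly = defaultdict(int)
    for timestamp, size in capture:
        hourly[timestamp // 3600] += size
    return dict(hourly)

def label_hours(hourly, quiet_threshold=1000):
    """Crude labelling: sustained quiet hours look like sleep or absence."""
    return {hour: ("quiet" if volume < quiet_threshold else "active")
            for hour, volume in sorted(hourly.items())}

print(label_hours(bytes_per_hour(packets)))   # e.g. {7: 'active', 13: 'quiet', 22: 'active'}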
Ransom payment rates drop to historic low as attackers adapt
The economics of ransomware are changing rapidly. Historically, attackers
relied on broad access through vulnerabilities and credentials, operating with
low overheads. The introduction of the RaaS model allowed for greater
scalability, but also brought increased costs associated with access brokers,
data storage, and operational logistics. Over time, this has eroded profit
margins and fractured trust among affiliates, leading some groups to abandon
ransomware in favour of data-theft-only operations. Recent industry upheaval,
including the collapse of prominent RaaS brands in 2024, has further
destabilised the market. ... In Q3 2025, both the average ransom payment
(US$376,941) and the median payment (US$140,000) dropped sharply, by 66% and 65%
respectively compared with the previous quarter. Payment rates also fell to a
historic low of 23% across incidents involving encryption, data exfiltration,
and other forms of extortion, underlining the challenges faced by ransomware
groups in securing financial rewards. This trend reflects two predominant
factors: Large enterprises are increasingly refusing to pay ransoms, and
attacks on smaller organisations, which are more likely to pay, generally
result in lower sums. The drop in payment rates is even more pronounced in
data exfiltration-only incidents, with just 19% resulting in a payout in Q3,
down to another record low.
Shadow AI’s Role in Data Breaches
The adoption barrier is nearly zero: no procurement process, no integration meetings, no IT tickets. All it takes is curiosity and an internet connection. Employees see immediate productivity gains (faster answers, better drafts, cleaner code), and the risks feel abstract. Even when policies prohibit certain AI tools, enforcement is tricky. Blocking sites might prevent direct access, but it won’t stop someone from using their phone or personal laptop. The reality is that AI tools are designed for frictionless use, and that very frictionlessness is what makes them so hard to contain. ... For regulated industries, the compliance fallout can be severe. Healthcare providers risk HIPAA violations if patient information is exposed. Financial institutions face penalties for breaking data residency laws. In competitive sectors, leaked product designs or proprietary algorithms can hand rivals an unearned advantage. The reputational hit can be just as damaging, and once customers or partners lose confidence in your data handling, restoring trust becomes a long-term uphill climb. Unlike a breach caused by a known vulnerability, the root cause in shadow AI incidents is often harder to patch because it stems from behavior, not just infrastructure. ... The first instinct might be to ban unapproved AI outright. That approach rarely works long-term. Employees will either find workarounds or disengage from productivity gains entirely, fostering frustration and eroding trust in leadership.
Deepfake Attacks Are Happening. Here’s How Firms Should Respond
The quality of deepfake technology is increasing “at a dramatic rate,” agrees Will Richmond-Coggan, partner and head of cyber disputes at Freeths LLP. “The result is that there can be less confidence that real-time audio deepfakes, or even video, will be detectable through artefacts and errors as it has been in the past.” Adding to the risk, many people share images and audio recordings of themselves via social media, while some host vlogs or podcasts. ... As the technology develops, Tigges predicts fake Zoom meetings will become more compelling and interactive. “Interviews with prospective employees and third-party vendors may be malicious, and conventional employees will find themselves battling state sponsored threat actors more regularly in pursuit of their daily remit.” ... User scepticism is critical, agrees Tigges. He recommends “out of band authentication.” “If someone asks to make an IT-related change, ask that person in another communication method. If you’re in a Zoom meeting, shoot them a Slack message.” To avoid being caught out by deepfakes, it is also important that employees are willing to challenge authority, says Richmond-Coggan. “Even in an emergency it will be better for someone in leadership to be challenged and made to verify their identity, than the organisation being brought down because someone blindly followed instructions that didn’t make sense to them, or which they were too afraid to challenge.”
Obsidian: SaaS Vendors Must Adopt Security Standards as Threats Grow
The problem, he wrote, is that SaaS vendors tend to set their own rules: security
settings and permissions differ from app to app, hampering risk management;
posture management is hobbled by limited security APIs that restrict visibility
into configurations; and poor logs and data telemetry make threats difficult to
detect, investigate, and respond to. “For years, SaaS
security has been a one-way street,” Tran wrote. “SaaS vendors cite the shared
responsibility model, while customers struggle to secure hundreds of unique
applications, each with limited, inconsistent security controls and blind
spots.” ... Obsidian’s Tran pointed to the recent breaches of hundreds of
Salesforce customers, in which OAuth tokens associated with a third party,
Salesloft and its Drift AI chat agent, were compromised, giving the threat
actors access to both Salesforce and Google Workspace instances. The incidents
illustrated the need for strong security in SaaS environments. “The same
cascading risks apply to misconfigured AI agents,” Tran wrote. “We’ve witnessed
one agent download over 16 million files while every other user and app combined
accounted for just one million. AI agents not only move unprecedented amounts of
data, they are often overprivileged. Our data shows 90% of AI agents are
over-permissioned in SaaS.” ... Given the rising threats, “SaaS customers are
sounding the alarm and demanding greater visibility, guardrails and
accountability from vendors to curb these risks,” he wrote.
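As a rough sketch of the over-permissioning problem Tran describes (not Obsidian's product, and with invented agents, scope names, and data), one simple control is to compare the scopes an agent has been granted against the scopes it actually exercises:

# Hypothetical granted vs. exercised OAuth scopes for SaaS-connected agents.
granted = {
    "chat-agent":  {"files.read", "files.write", "mail.read", "admin.users"},
    "report-bot":  {"files.read"},
}
exercised = {
    "chat-agent":  {"files.read"},
    "report-bot":  {"files.read"},
}

def unused_scopes(agent: str) -> set:
    """Scopes held but never used: candidates for removal under least privilege."""
    return granted.get(agent, set()) - exercised.get(agent, set())

for agent in granted:
    extra = unused_scopes(agent)
    if extra:
        print(f"{agent} looks over-permissioned; unused scopes: {sorted(extra)}")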
Why your Technology Spend isn’t Delivering the Productivity you Expected
Firms essentially spend years building technical debt faster than they can pay
it down. Even after modernisation projects, they can’t bring themselves to
decommission old systems. So they end up running both. This is the vicious
cycle. You keep spending to maintain what you have, building more debt, paying
what amounts to a complexity tax in time and money. This problem compounds in
asset management because most firms are running fragmented systems for different
asset classes, with siloed data environments and no comprehensive platform.
Integrating anything becomes a nightmare. ... Here’s where it gets interesting,
and where most firms stop short. Virtualisation gives you access to data
wherever it lives. That’s the foundation. But the real power comes when you
layer on a modern investment management platform that maintains bi-temporal
records (which track both when something happened and when it was recorded) as
well as full audit trails. Now you can query data as it existed at any point in
time. Understand exactly how positions and valuations evolved. ... The best data
strategy is often the simplest one: connect, don’t copy; govern, then
operationalise. This may sound almost too straightforward given the complexity
most firms are dealing with. But that’s precisely the point. We’ve
overcomplicated data architecture to the point where 80 per cent of our budget
goes to maintenance instead of innovation.
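To make the bi-temporal point above concrete, here is a minimal sketch, illustrative only and not any particular vendor's platform, of position records that carry both a valid date and a recorded date, together with an 'as-of' query over them:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Position:
    asset: str
    quantity: int
    valid_date: date      # when the position was true in the real world
    recorded_date: date   # when the system learned about it

history = [
    Position("XYZ", 100, date(2025, 1, 10), date(2025, 1, 10)),
    Position("XYZ", 120, date(2025, 1, 10), date(2025, 1, 15)),  # later correction of the same day
]

def as_of(asset: str, valid: date, known_by: date) -> Optional[Position]:
    """What did we believe, as of `known_by`, the position was on `valid`?"""
    candidates = [p for p in history
                  if p.asset == asset and p.valid_date <= valid and p.recorded_date <= known_by]
    return max(candidates, key=lambda p: (p.valid_date, p.recorded_date), default=None)

# Before the correction was recorded the system shows 100; afterwards it shows 120.
print(as_of("XYZ", date(2025, 1, 10), date(2025, 1, 12)).quantity)  # -> 100
print(as_of("XYZ", date(2025, 1, 10), date(2025, 1, 20)).quantity)  # -> 120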