Quote for the day:
"You don't build business, you build people, and then people build the business." -- Zig Ziglar
CIOs shift from ‘cloud-first’ to ‘cloud-smart’
The cloud-smart trend is being influenced by better on-prem technology, longer
hardware cycles, ultra-high margins with hyperscale cloud providers, and the
typical hype cycles of the industry, according to McElroy. All favor hybrid
infrastructure approaches. However, “AI has added another major wrinkle with
siloed data and compute,” he adds. “Many organizations aren’t interested in or
able to build high-performance GPU datacenters, and need to use the cloud. But
if they’ve been conservative or cost-averse, their data may be in the on-prem
component of their hybrid infrastructure.” These variables have led to
complexity or unanticipated costs, either through migration or data egress
charges, McElroy says. ... IT has parsed out what should be in a private cloud
and what goes into a public cloud. “Training and fine-tuning large models
requires strong control over customer and telemetry data,” Kale explains. “So we
increasingly favor hybrid architectures where inference and data processing
happen within secure, private environments, while orchestration and
non-sensitive services stay in the public cloud.” Cisco’s cloud-smart strategy
starts with data classification and workload profiling. Anything with
customer-identifiable information, diagnostic traces, and model feedback loops
is processed within regionally compliant private clouds, he says. ...
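Cisco's actual profiling process isn't described in detail, but the classification-driven routing above can be sketched roughly as follows (the tag names and routing rule are illustrative assumptions, not Cisco's implementation):

```python
# Illustrative sketch: route a workload to the private or public side of a
# hybrid cloud based on the sensitivity of the data it touches.

SENSITIVE_TAGS = {"customer_pii", "diagnostic_trace", "model_feedback"}

def classify_workload(workload: dict) -> str:
    """Return 'private' if the workload handles sensitive data, else 'public'."""
    data_tags = set(workload.get("data_tags", []))
    if data_tags & SENSITIVE_TAGS:
        return "private"   # regionally compliant private cloud
    return "public"        # orchestration and non-sensitive services

inference_job = {"name": "fine-tune", "data_tags": ["customer_pii"]}
dashboard_job = {"name": "status-page", "data_tags": ["public_metrics"]}

assert classify_workload(inference_job) == "private"
assert classify_workload(dashboard_job) == "public"
```

In practice the tag set would come from a data-classification inventory rather than being hard-coded, but the shape of the decision is the same: sensitive data pins the workload to the private environment.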
“Many organizations are wrestling with cloud costs they know instinctively are
too high, but there are few incentives to take on the risky work of repatriation
when a CFO doesn’t know what savings they’re missing out on,” he says.
Harmonizing EU's Expanding Cybersecurity Regulations
Aligning NIS2, GDPR and DORA is difficult, since each framework approaches risk differently, creating overlapping obligations for reporting, controls and vendor oversight and leaving areas that require careful interpretation. Given these overlapping requirements, organizations should establish an integrated governance model that consolidates risk management, reporting workflows and third-party oversight across all relevant EU frameworks. Strengthening internal coordination - especially between legal, compliance, cybersecurity and executive teams - helps ensure consistent interpretation of obligations and reduces fragmentation in implementation. ... Developers must build safeguards into AI systems, including adversarial testing, robust access controls and monitoring for unexpected behavior. Transparent development practices and collaboration with cybersecurity teams help prevent AI models from being exploited for malicious purposes. ... A trust-based ecosystem depends on transparency, consistent governance and strong cybersecurity practices across all stakeholders. Key elements still missing include harmonized standards, comprehensive regulatory guidance, and mechanisms to verify compliance and foster confidence among users and businesses. ... Ethical frameworks guide responsible decision-making by balancing societal impact, individual rights and technological innovation. Organizations can apply them through policies, AI oversight and risk assessments that incorporate principles from deontology, utilitarianism, virtue ethics and care ethics into everyday operations and strategic planning.
Invisible IT is becoming the next workplace priority
Lenovo defines invisible IT as support that runs in the background and prevents
problems before employees notice them. The report highlights two areas that
bring this approach to life. The first is predictive and proactive support.
Eighty-three percent of leaders say this approach is essential, but only 21
percent have achieved it. With AI tools that monitor telemetry data across
devices, support teams can detect early signs of failure and trigger automated
fixes. If a fix requires human involvement, the repair can happen before the
user experiences downtime. This reduces disruptions and shifts support teams
away from repetitive tasks that slow down operations. The second area is
hyper-personalization. Many organizations personalize support by role or
seniority,
but the study argues this does not reflect how people work. AI systems can now
create personas based on individual usage patterns. This lets support teams
tailor responses and rollouts to real conditions rather than assumptions. ...
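The predictive-support loop described above can be illustrated with a minimal sketch. The telemetry field names and thresholds here are assumptions for illustration, not figures from the Lenovo report:

```python
# Minimal sketch of predictive, proactive device support: flag devices whose
# telemetry shows early failure signs so a fix can happen before downtime.

def needs_proactive_fix(telemetry: dict) -> bool:
    """Heuristic early-warning check on one device's telemetry sample."""
    return (
        telemetry.get("disk_reallocated_sectors", 0) > 50   # SMART warning sign
        or telemetry.get("battery_health_pct", 100) < 60    # degrading battery
        or telemetry.get("thermal_shutdowns_7d", 0) >= 2    # repeated overheating
    )

fleet = [
    {"device": "laptop-001", "battery_health_pct": 55},
    {"device": "laptop-002", "disk_reallocated_sectors": 3},
]
to_repair = [d["device"] for d in fleet if needs_proactive_fix(d)]
# laptop-001 is queued for repair before its user ever sees a failure
```

A real system would replace these fixed thresholds with learned per-persona baselines, which is where the report's hyper-personalization angle comes in.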
Although interest in invisible IT is high, most companies are still using manual
processes. Sixty-five percent detect issues only when users contact support.
Fifty-five percent resolve them through manual interventions.
Hyper-personalization is also limited, with 51 percent of organizations offering
standard support for all employees. Barriers are widespread. Fifty-one percent
cite fragmented systems as their top challenge. Another 47 percent point to cost
concerns or uncertain return on investment. Limited AI capabilities and skills
gaps also slow progress, along with slow upgrade cycles and a lack of time for
planning.
Why AI coding agents aren’t production-ready: Brittle context windows, broken refactors, missing operational awareness
AI agents have demonstrated a critical lack of awareness of the operating
system, command-line shell and installed environment. This deficiency can lead to
frustrating experiences, such as the agent attempting to execute Linux
commands on PowerShell, which can consistently result in ‘unrecognized
command’ errors. Furthermore, agents frequently exhibit inconsistent ‘wait
tolerance’ on reading command outputs, prematurely declaring an inability to
read results before a command has even finished, especially on slower
machines. ... Working with AI coding agents often presents a longstanding
challenge of hallucinations, or incorrect or incomplete pieces of information
(such as small code snippets) within a larger set of changes, expected to be
fixed by a developer with trivial-to-low effort. However, what becomes
particularly problematic is when incorrect behavior is repeated within a
single thread, forcing users to either start a new thread and re-provide all
context, or intervene manually to “unblock” the agent. ... Agents may not
consistently leverage the latest SDK methods, instead generating more verbose
and harder-to-maintain implementations. ... Despite the allure of autonomous
coding, the reality of AI agents in enterprise development often demands
constant human vigilance. Instances like an agent attempting to execute Linux
commands on PowerShell, raising false-positive safety flags, or introducing
inaccuracies for domain-specific reasons highlight critical gaps; developers
simply cannot step away.
Offensive security takes center stage in the AI era
Now a growing percentage of CISOs see offensive security as a must-have and, as
such, are building up offensive capabilities and integrating them into their
security processes to ensure the information revealed during offensive exercises
leads to improvements in their overall security posture. ... Mellen sees several
buckets of activities involved in offensive security, starting with
vulnerability management at the bottom end of the maturity scale, and then
moving up to attack surface management and penetration testing, to threat
hunting and adversarial simulations, such as tabletop exercises. “Then there’s
the concept of purple teaming where the organization looks at an attack scenario
and what were the defenses that should have alerted but didn’t and how to
rectify those,” he says. ... Many CISOs also have had team members with specific
offensive security skills for many years. In fact, the Offensive Security
Certified Professional (OSCP), the Offensive Security Experienced Penetration
Tester (OSEP), and the Offensive Security Certified Expert (OSCE) certifications
from OffSec are all credentials that have been in demand for years. ...
Another factor that keeps CISOs from incorporating more offensive security into
their strategies is concern about exposing vulnerabilities they don’t have the
ability to address, Mellen adds. “They can’t unknow that they have those
vulnerabilities if they’re not able to do something about them, although the
hackers are going to find them whether or not you identify them,” he says.
Securing AI for Cyber Resilience: Building Trustworthy and Secure AI Systems
Attackers increasingly target the AI supply chain - poisoning training data, manipulating models, or exploiting vulnerabilities during deployment and operations. When an AI system or model is compromised, it can quietly skew decisions. This poses significant risks for autonomous systems or analytics engines. Thus, it is important that we embed security and resilience into our AI systems, ensuring robust protection from design to deployment and operations. ... Visibility is key. You can’t protect what you can’t see. Without visibility into data flows, model behavior and system interactions, threats can remain undetected until it is too late. Continuous validation and monitoring help surface anomalies and adversarial manipulations early, enabling timely interventions. Explainability is just as pivotal. Detecting an anomaly is one thing, but understanding why it happened drives true resilience. Explainability clarifies the reasoning behind AI systems and their decisions, helps verify threats, traces manipulations, makes AI systems auditable, and strengthens trust. Assurance must be continuous. ... Attackers are exploiting AI-specific security weaknesses, such as data poisoning, model inversion, and adversarial manipulations. As AI adoption accelerates, its threats will follow in equal sophistication and scale. The rapid proliferation of AI systems across industries not only drives innovation but also expands the attack surface, drawing attention from both state-sponsored and criminal actors.
From silos to strategy: What the era of cloud 'coopetition' means for CIOs
This week, historic competitors AWS and Google Cloud announced the launch of a
cross-cloud interconnect service, effectively tearing down the digital iron
curtain that once separated their ecosystems. With Microsoft Azure expected to
join this framework in 2026, the cloud industry is pivoting toward
"coopetition" -- a strategic truce driven by the modern enterprise's embrace of
multi-cloud. ... One of the primary drivers accelerating AWS and Google's
cross-cloud interconnect service is AI. The potential of enterprise AI has been
hampered by data silos, with fragmented pockets of information trapped in
different systems, which then prevents the training of comprehensive models.
MuleSoft's 2025 Connectivity Benchmark Report found that integration challenges
are a leading cause of stalled AI initiatives, with nearly 95% of 1,050 IT
leaders surveyed citing connectivity issues as a major hurdle. A cross-cloud
partnership is a critical tool for dismantling these barriers -- one that could
even eliminate the challenge of data silos, according to Ahuja. ... However,
coopetition is not a silver bullet. It also introduces new friction points where
the complexity of managing multiple environments can outweigh the benefits if
not addressed properly. Peterson warned that there may not be sufficient value
when workloads are "highly dependent and intertwined, requiring low-latency
communication across different providers".
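Peterson's caveat is testable in practice: before splitting tightly coupled components across providers, a team can measure the cross-cloud round-trip time against the workload's latency budget. A rough sketch (the budget value and hostnames are hypothetical placeholders):

```python
# Rough sketch: time a TCP handshake to an endpoint in the other cloud as a
# crude round-trip estimate, and compare it to the workload's latency budget.

import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a single TCP connection setup, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

LATENCY_BUDGET_MS = 5.0  # hypothetical budget for chatty, intertwined services

def safe_to_split(host_in_other_cloud: str) -> bool:
    try:
        return tcp_rtt_ms(host_in_other_cloud) <= LATENCY_BUDGET_MS
    except OSError:
        return False  # unreachable: keep the components co-located
```

A handshake time understates steady-state application latency, so a real assessment would also measure request-level round trips under load; the point is simply that "highly dependent and intertwined" workloads need this number before a cross-provider split.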
Simplicity, speed & scalability are the key pillars of our AI strategy: Siddharth Sureka, Motilal Oswal Financial Services
AI is here to stay, and will transform all industries. Naturally, the BFSI sector tends to be on the leading edge of this journey, following closely behind pure technology companies. However, rather than viewing this purely through a technology lens, we approached it from an end-to-end organisational transformation lens. ... The first pillar is simplicity. To reach tier two, three, and four cities, we must make the financial experience intuitive. Simplicity is driven by personalisation, which means how we curate the information delivered to clients and ensure their digital journey is frictionless. The second pillar is speed. We are in the business of providing the right insights at the speed of the market. As an event occurs, we must be able to serve our clients with immediate insights. A prime example of this is our ‘News Agent’ product. As news arrives, the system measures the sentiment, analyses how it may impact the market, and then serves that insight to the client instantly. The third pillar is scalability. Once we have achieved simplicity and speed, our focus is to scale this architecture to reach the deeper pockets of the country. This scalability is essential for the financial inclusion journey we have embarked upon, ensuring that investors in tier three and four cities can take full advantage of the markets. ... In software engineering, you are delivering a deterministic output. However, when you move into the domain of AI, the outcomes become stochastic or probabilistic in nature. As leaders, we must understand the use cases we are working on and, crucially, the ‘cost of getting it wrong’.
Observability at the Edge: A Quiet Shift in Reliability Thinking
Most organizations still don’t really know what’s happening inside their own
digital systems. A survey found that 84% of companies struggle with
observability, the basic ability to understand if their systems are working as
they should. The reasons are familiar: monitoring tools are expensive, their
architectures clumsy, and when scaled across thousands of locations, the
complexity often overwhelms the promise. The cost of that opacity is not
abstract. Every minute of downtime is lost revenue. Every unnoticed glitch is a
frustrated customer. And every delay in diagnosis erodes trust. In this sense,
observability is not just a matter for engineers; it’s central to how modern
businesses function. ... When systems fail, the speed of diagnosis becomes
critical. In fact, organizations can lose an average of $1 million per hour
during unplanned downtime, a striking testament to the high cost of delays. The
standard approach, engineers combing through logs, traces, and deployment
histories, often slows response when time is most precious. ... What stands out
is not only the design of these solutions but their uptake elsewhere. The edge
observability model first proven in retail has been mirrored in other
industries, including banking. The Core Web Vitals approach has been picked up
by financial services firms seeking to sharpen digital performance. And the
Incident Copilot reflects a broader shift toward embedding AI into reliability
practices. Industry peers have described the edge observability work as
“innovative, cost-effective, and cloud-native.”
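The manual bottleneck the article describes, engineers combing through logs while downtime costs accrue, is exactly what incident-triage automation targets. As a minimal sketch (the log format and field positions are assumptions for illustration): surface the minute with the largest burst of errors as the first place to look.

```python
# Illustrative sketch: instead of reading logs line by line, find the minute
# with the most ERROR entries to narrow down where diagnosis should start.

from collections import Counter

def busiest_error_minute(log_lines: list[str]) -> tuple[str, int]:
    """Return (minute, error_count) for the minute with the most ERROR lines.

    Assumes ISO-timestamped lines like '2024-05-01T12:03:44 ERROR payment timeout',
    so the first 16 characters identify the minute.
    """
    errors = Counter(line[:16] for line in log_lines if " ERROR " in line)
    return errors.most_common(1)[0]

logs = [
    "2024-05-01T12:03:44 ERROR payment timeout",
    "2024-05-01T12:03:59 ERROR payment timeout",
    "2024-05-01T12:10:02 INFO deploy finished",
    "2024-05-01T12:03:12 ERROR upstream 503",
]
assert busiest_error_minute(logs) == ("2024-05-01T12:03", 3)
```

A production "Incident Copilot" would correlate this burst with traces and deployment history rather than stop at counting, but even this crude narrowing is the kind of automation that shortens the diagnosis window the article prices at $1 million per hour.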