Quote for the day:
"Sheep are always looking for a new
shepherd when the terrain gets rocky." -- Karen Marie Moning

“The AI explosion and how quickly it has come upon us is the top issue for me,”
says Mark Sherwood, executive vice president and CIO of Wolters Kluwer, a global
professional services and software firm. “In my experience, AI has changed and
progressed faster than anything I’ve ever seen.” To keep up with that rapid
evolution, Sherwood says he is focused on making innovation part of everyday
work for his engineering team. ... “Modern digital platforms generate staggering
volumes of telemetry, logs, and metrics across an increasingly complex and
distributed architecture. Without intelligent systems, IT teams drown in alert
fatigue or miss critical signals amid the noise,” he explains. “What was once a
manageable rules-based monitoring challenge has evolved into a big data and
machine learning problem.” He continues: “This shift requires IT
organizations to rethink how they ingest, manage, and act upon operational data.
It’s not just about observability; it’s about interpretability and actionability
at scale.” ... CIOs today are also paying closer attention to geopolitical news
and determining what it means for them, their IT departments, and their
organizations. “These are uncertain times geopolitically, and CIOs are asking
how that will affect IT portfolios and budgets and initiatives,” Squeo says.
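
To see why rules-based monitoring stops scaling the way Sherwood describes, compare a fixed threshold with an adaptive baseline. The sketch below is a toy illustration only; the latency metric, window sizes, and cutoffs are assumptions, not anything Wolters Kluwer actually uses:

```python
from collections import deque
from statistics import mean, stdev

# Static rules-based check: one fixed threshold for every service and hour.
def rule_based_alert(latency_ms, threshold=500.0):
    return latency_ms > threshold

# Adaptive baseline: flags points far outside the recent distribution --
# a toy stand-in for the ML-driven detection described above.
class RollingAnomalyDetector:
    def __init__(self, window=100, min_history=8, z_cutoff=3.0):
        self.values = deque(maxlen=window)
        self.min_history = min_history
        self.z_cutoff = z_cutoff

    def observe(self, value):
        anomalous = False
        if len(self.values) >= self.min_history:
            mu, sigma = mean(self.values), stdev(self.values)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.z_cutoff
        self.values.append(value)
        return anomalous

detector = RollingAnomalyDetector()
samples = [120, 130, 125, 118, 122, 127, 119, 124, 900]  # hypothetical latencies (ms)
for latency in samples:
    if detector.observe(latency):
        print(f"anomaly: {latency} ms")  # fires only on the 900 ms spike
```

The adaptive version learns what "normal" looks like per signal, which is the property that lets it cut through alert fatigue where a single static rule cannot.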

While the findings reflect growing concern, they also highlight a strategic
shift, with 78% of leaders now considering digital sovereignty when selecting
tech partners, and 68% saying they will only adopt AI services where they have
full certainty over data ownership. For some, the answer is to take back
control. Cloud repatriation is gaining some traction, at least in terms of
mindset, but it has yet to translate into a mass exodus from the hyperscalers.
Even so, calls for digital sovereignty are getting louder. In Europe, the
Euro-Stack open letter has reignited the debate, urging policymakers to champion
a competitive, sovereign digital infrastructure. But while politics might be a
trigger, the key question is not whether businesses are abandoning cloud (most
aren’t) but whether the balance of cloud usage is changing, driven as much by
cost as by performance needs and rising regulatory risks. ... “Despite
access to cloud cost-optimisation teams, there was limited room to reduce
expenses,” says Jonny Huxtable, CEO of LinkPool. After assessing bare-metal and
colocation options, LinkPool decided to move fully to Pulsant’s colocation
service. The company claims the move achieved a 90% to 95% cost reduction
alongside major performance improvements and enhanced disaster recovery
capabilities.

Effective cookie management under the DPDP Act, as detailed in the BRDCMS,
requires real-time updates to user preferences. Users must have access to a
dedicated cookie preferences interface that allows them to modify or revoke
their consent without undue complexity or delay. This interface should be easily
accessible, typically through privacy settings or a dedicated cookie management
dashboard. The real-time nature of these updates is crucial for maintaining
compliance with the principles of consent enshrined in the DPDP Act. When
a user withdraws consent for specific cookie categories, the system must
immediately cease the collection and processing of data through those cookies,
ensuring that the user’s privacy preferences are respected without delay.
Transparency is one of the fundamental pillars of the DPDP Act and extends to
cookie usage disclosure. While the DPDP Act itself remains silent on specific
cookie policies, the BRDCMS mandates a clear and accessible cookie policy
outlining the purposes of cookie usage, data-sharing practices, and the
implications of different consent choices. The cookie policy serves as a
comprehensive resource, enabling users to make informed decisions about their
consent preferences.
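
To make the real-time withdrawal requirement concrete, here is a minimal sketch of a consent store that applies withdrawals immediately and keeps an audit trail. The category names, the exemption for essential cookies, and the `ConsentStore` interface are assumptions for illustration, not text from the DPDP Act or the BRDCMS:

```python
from datetime import datetime, timezone
from enum import Enum

class CookieCategory(Enum):  # hypothetical category names
    ESSENTIAL = "essential"
    ANALYTICS = "analytics"
    MARKETING = "marketing"

class ConsentStore:
    """Tracks per-user consent and applies withdrawals immediately."""

    def __init__(self):
        self._consents = {}   # user_id -> {CookieCategory: bool}
        self._audit_log = []  # (user_id, category, event) tuples

    def set_consent(self, user_id, category, granted):
        self._consents.setdefault(user_id, {})[category] = granted
        event = "granted" if granted else "withdrawn"
        # Timestamped log so the real-time change is demonstrable later.
        self._audit_log.append(
            (user_id, category.value,
             f"{event} at {datetime.now(timezone.utc).isoformat()}"))
        if not granted:
            self._stop_processing(user_id, category)

    def may_process(self, user_id, category):
        if category is CookieCategory.ESSENTIAL:
            return True  # assumed exempt here; verify against the actual rules
        return self._consents.get(user_id, {}).get(category, False)

    def _stop_processing(self, user_id, category):
        # In a real system: delete the cookies, halt downstream pipelines,
        # and propagate the withdrawal to processors without delay.
        print(f"ceasing {category.value} processing for {user_id}")

store = ConsentStore()
store.set_consent("user-1", CookieCategory.ANALYTICS, True)
store.set_consent("user-1", CookieCategory.ANALYTICS, False)  # effective at once
print(store.may_process("user-1", CookieCategory.ANALYTICS))  # -> False
```

Handling the withdrawal synchronously, before the call returns, is what keeps the behaviour aligned with the "without delay" language above.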

According to the report, the majority of workers are ready to embrace agents for
the automation of low-stakes and repetitive tasks, "even after reflecting on
potential job loss concerns and work enjoyment." Respondents said they hoped to
focus on more engaging and important tasks, mirroring what's become something of
a marketing mantra among big tech companies pushing AI agents: that these
systems will free workers and businesses from drudgery, so they can focus on
more meaningful work. The authors also noted "critical mismatches" between the
tasks that AI agents are being deployed to handle -- such as software
development and business analysis -- and the tasks that workers are actually
looking to automate. ... The study could have big implications for the future of
human-AI collaboration in the workplace. Using a metric that they call the Human
Agency Scale (HAS), the authors found "that workers generally prefer higher
levels of human agency than what experts deem technologically necessary." ...
The report further showed that the rise of AI automation is causing a shift in
the human skills that are most valued in the workplace: information-processing
and analysis skills, the authors said, are becoming less valuable as machines
become increasingly competent in these domains, while interpersonal skills --
including "assisting and caring for others" -- are more important than ever.

The traditional methods for integrating databases are complex and not suited to
AI, Xin said. The challenge lies in integrating analytics and AI with
transactional workloads. Consider what developers would do when adding a feature
to a code base, Xin said in his keynote address at the Data + AI Summit. They’d
create a new branch of the codebase and make changes to the new branch. They’d
use that branch to check for bugs, perform testing, and so on. Xin said creating
a new branch is an instant operation. What’s the equivalent for databases? Your
only option is to clone your production database, and that might take days. How
do you set up
secure networking? How do you create ETL pipelines and log data from one to
another? ... Streaming is now a first-class citizen in the enterprise, Mohan
told me. The separation of compute and storage makes a difference. We are
approaching an era when applications will scale infinitely, both in terms of the
number of instances and their scale-out capabilities. And that leads us to new
questions about how we start to think about evaluation, observability and
semantics. Accuracy matters. ... ADP may have the world’s best payroll data,
Mohan said, but then that data has to be processed through ETL into an analytics
solution like Databricks. Then comes the analytics and the data science work.
The customer has to perform a significant amount of data engineering work and
preparation.
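
Xin's branching analogy can be made concrete. A code branch is instant because it copies pointers rather than data; the sketch below is a purely illustrative copy-on-write "branch" of an in-memory table (not Databricks' actual mechanism), contrasted with the deep copy whose cost grows with the data:

```python
import copy

class Table:
    """A toy table: a name plus a list of row dicts."""
    def __init__(self, name, rows):
        self.name, self.rows = name, rows

def deep_clone(table, new_name):
    # The traditional path: copy every row. For a production database,
    # this is the operation that "might take days".
    return Table(new_name, copy.deepcopy(table.rows))

class Branch:
    """Copy-on-write branch: shares the parent's rows until written to."""
    def __init__(self, parent, name):
        self.parent, self.name = parent, name
        self.overrides = {}  # row index -> replacement row

    def read(self, i):
        return self.overrides.get(i, self.parent.rows[i])

    def write(self, i, row):
        self.overrides[i] = row  # parent stays untouched

prod = Table("prod", [{"id": n} for n in range(1_000_000)])
dev = Branch(prod, "dev")                 # instant: no rows copied
dev.write(0, {"id": 0, "flag": True})
print(dev.read(0), prod.rows[0])          # branch diverges, prod is intact
```

The branch costs nothing until it diverges, which is exactly the property code branches have and traditional database clones lack.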

Reluctant executives and budget hawks can shoulder some of the responsibility
for slow AI adoption, but they’re hardly the only barriers. Increasingly,
employees are voicing legitimate concerns about surveillance, privacy and the
long-term impact of automation on job security. At the same time, enterprises
may face structural issues when it comes to integration: fragmented systems, a
lack of data inventory and access controls, and other legacy architectures can
also hinder the secure integration and scalability of AI-driven security
solutions. Meanwhile, bad actors face none of these considerations. They have
immediate, unfettered access to open-source AI tools, which can enhance the
speed and force of an attack. They operate without AI tool guardrails,
governance, oversight or ethical constraints. ... Insider threat detection is
also maturing. AI models can detect suspicious behavior, such as unusual access
to data, privilege changes or timing inconsistencies, that may indicate a
compromised account or insider threat. Early adopters, such as financial
institutions, are using behavioral AI to flag synthetic identities by spotting
subtle deviations that traditional tools often miss. They can also monitor
behavioral intent signals, such as a worker researching resignation policies
before initiating mass file downloads, providing early warnings of potential
data exfiltration.
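
As a sketch of how such behavioral detection can work, the snippet below trains an unsupervised anomaly detector on hypothetical per-session features; the feature set, the data, and the contamination setting are all invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [files_accessed, privilege_changes,
# login_hour_offset_from_norm]. Real deployments use far richer signals.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.poisson(20, 500),    # typical file-access counts
    rng.poisson(0.1, 500),   # privilege changes are rare
    rng.normal(0, 1, 500),   # logins near the user's usual hours
])
suspicious = np.array([[400, 3, 6.0]])  # mass downloads, escalation, odd hours

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 flags the session as anomalous
```

The point is that no rule says "400 files is too many"; the model learns the shape of normal behavior and flags the deviation, which is how the intent signals described above become actionable.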

“In cellular communications on the ground, this was solved a few decades ago.
But doing it in space, you have to have the computing horsepower to do those
handoffs as well as the throughput capability.” This additional compute needs to
be in "a radiation tolerant form, and in such a way that they don't consume too
much power and generate too much heat to cause massive thermal problems on the
satellites." In LEO, satellites face a barrage of radiation. "It's an
environment that's very rich in protons," O'Neill says. "And protons can cause
upsets in configuration registers, they can even cause latch-ups in certain
integrated circuits." The need to be more radiation tolerant has also pushed the
industry towards newer hardware: the smaller the process node, the lower the
operating voltage. "Reducing operating voltage makes you less susceptible to
destructive effects," O'Neill explains. One issue, a single-event latch-up, sees
the satellite conduct a large current from power to ground through the
integrated circuit, potentially frying it. ... Modern integrated circuits are a
lot less susceptible to these single-event latch-ups, but are not completely
immune. "While the core of the circuit may be operating at a very low voltage,
0.7 or 0.8 volts, you still have I/O circuits in the integrated circuit that may
be required to interoperate with other ICs at 3.3 volts or 2.5 volts," O'Neill
adds.

A common challenge we see is the absence of a formal ERM program, or the
fragmentation of risk functions, where enterprise, cybersecurity, and
third-party risks are evaluated using different impact criteria. This lack of
alignment makes it difficult for CISOs to communicate effectively with the
C-suite and board. Standardizing risk programs and using consistent impact
criteria enables clearer risk comparisons, shared understanding, and more
strategic decision-making. This challenge is further exacerbated by the rise of
AI-specific regulations and frameworks, including the NIST AI Risk Management
Framework, the EU AI Act, the NYC Bias Audit Law, and the Colorado Artificial
Intelligence Act. ... Communicating security investments in clear,
business-aligned risk terms—such as High, Medium, or Low—using agreed-upon
impact criteria like financial exposure, operational disruption, reputational
harm, and customer impact makes it significantly easier to justify spending and
align with enterprise priorities. ... In our Virtual CISO engagements, we’ve
found that a risk-based, outcome-driven approach is highly effective with
executive leadership. We frame cyber risk tolerance in financial and operational
terms, quantify the business value of proposed investments, and tie security
initiatives directly to strategic objectives.
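
As an illustration of what "consistent impact criteria" can look like in practice, here is a minimal sketch; the four criteria mirror those named above, but the 1-to-3 rubric and the max-wins rule are assumptions for demonstration, not a standard:

```python
# Hypothetical rubric: each criterion rated 1-3; the maximum drives the
# overall rating so a single severe impact can't be averaged away.
IMPACT_CRITERIA = ("financial_exposure", "operational_disruption",
                   "reputational_harm", "customer_impact")
LABELS = {1: "Low", 2: "Medium", 3: "High"}

def risk_rating(scores):
    missing = set(IMPACT_CRITERIA) - scores.keys()
    if missing:
        raise ValueError(f"unscored criteria: {missing}")
    return LABELS[max(scores[c] for c in IMPACT_CRITERIA)]

# The same rubric applied to an enterprise, cyber, or third-party risk
# makes all three directly comparable in board reporting.
print(risk_rating({"financial_exposure": 3, "operational_disruption": 2,
                   "reputational_harm": 1, "customer_impact": 2}))  # High
```

Forcing every risk function through one scoring path is what makes the High/Medium/Low labels mean the same thing across the portfolio.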

In the past, teams had time to adapt to new technologies. Operating systems or
enterprise resource planning (ERP) tools evolved over years, giving users more
room to learn these platforms and acquire the skills to use them. Unlike
previous tech shifts, the move to AI doesn’t come with a long runway. Change
arrives overnight, and expectations follow just as fast. Many employees feel
like they’re being asked to keep pace with systems they haven’t had time to
learn, let alone trust. Consider ChatGPT, which reached 100 million monthly
active users just two months after launch. ... This underlines the
emotional and behavioral complexity of adoption. Some people are naturally
curious and quick to experiment with new technology, while others are skeptical,
risk-averse or anxious about job security. ... Adopting AI is not just a
technical initiative; it’s a cultural reset, one that challenges leaders to show
up with more empathy and not just expertise. Success depends on how well leaders
can inspire trust and empathy across their organizations. The 4 E’s of adoption
offer more than a framework. They reflect a leadership mindset rooted in
inclusion, clarity and care. By embedding empathy into structure and using
metrics to illuminate progress rather than pressure outcomes, teams become more
adaptable and resilient.

Predictive Analytics – a key capability of AIOps – forecasts future network
performance and problems, enabling early intervention and proactive maintenance.
Further, early prediction of bottlenecks or additional requirements helps to
optimise the management of network resources. For example, when organisations
have advance warning about traffic surges, they can allocate capacity to prevent
congestion and outages, and enhance overall network performance. A range of
mundane tasks, from incident response to work order generation to network
configuration to proactive IT health checks and maintenance scheduling, can be
automated with AIOps to reduce the load on IT staff and free them up to
concentrate on more strategic activities. ... When traditional monitoring tools
were unable to identify bottlenecks in a healthcare provider’s network that was
seeing a slowdown in its electronic health records (EHR) system during busy
hours, a switch to AIOps resolved the problem. By enabling observability across
domains, the system highlighted that performance dipped when users logged in
during shift changes. It also predicted slowdowns half an hour in advance and
automatically provisioned additional resources to handle the surge in activity.
The result was a 70 percent reduction in critical EHR slowdowns, improved
system responsiveness, and IT staff freed up for other work.
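
As a rough illustration of the predict-then-provision loop described above, here is a toy sketch; the linear-trend forecast, the 30-minute horizon, and the capacity numbers are assumptions standing in for the far richer models a real AIOps platform would use:

```python
import numpy as np

def forecast_load(history, steps_ahead):
    """Fit a linear trend to recent samples and extrapolate it forward."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * (len(history) - 1 + steps_ahead) + intercept

# Hypothetical: one sample per minute, capacity in sessions, 30-min horizon.
CAPACITY = 1000
recent = np.array([400, 430, 470, 520, 580, 650, 730])  # shift change begins

predicted = forecast_load(recent, steps_ahead=30)
if predicted > 0.8 * CAPACITY:
    # In production this would call the orchestrator's scale-out API.
    print(f"predicted load {predicted:.0f}: provisioning extra capacity now")
```

Acting on the forecast before the surge arrives, rather than on an alert after it, is the difference between the proactive posture described here and traditional threshold monitoring.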