Quote for the day:
"Don't let yesterday take up too much of today." -- Will Rogers
When AI Meets DevOps To Build Self-Healing Systems
Self-healing systems do not just react to events and incidents — they analyse
historic data, identify early triggers or symptoms of failures, and act. For
example, if a service is known to crash when it runs out of memory, a
self-healing system can observe metrics like memory consumption, predict when
the service is likely to fail as memory runs low, and take action to fix the issue—like
restarting the service or allocating more memory—without human intervention. In
AIOps, self-healing systems are powered by data science in the form of machine
learning models, real-time analytics, and automated workflows. ... Self-healing
systems don’t just rely on static rules and manual checks; they utilise
real-time data streams and apply pattern and anomaly detection through machine
learning to ascertain the state of the environment. A self-healing system is
trying to gauge its own health all the time — CPU utilisation, latency, memory,
throughput, traffic, security anomalies, etc — to preemptively address an
impending failure. The key component of every self-healing system is a cycle
that reflects the process followed by intelligent agents: Detect → Diagnose →
Act. ... The integration of artificial intelligence and DevOps signifies an
important change in the way modern IT systems are built, managed, and evolved.
As we have discussed here, AIOps is not just an extension of automation — it is
changing the way operations are modelled, from reactive processes to
intelligent, self-healing ecosystems.
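To make the Detect → Diagnose → Act cycle concrete, here is a minimal sketch in
Python — assuming a Linux host with the psutil package, a hypothetical systemd
unit name, and a fixed memory threshold standing in for the trained prediction
model a real AIOps pipeline would use:

# Minimal Detect -> Diagnose -> Act loop. The service name and threshold are
# illustrative; a production system would consume streaming metrics and use a
# learned model to predict failures rather than a static cutoff.
import subprocess
import time

import psutil

SERVICE = "payments.service"   # hypothetical systemd unit
MEMORY_THRESHOLD = 90.0        # percent of host memory in use


def detect() -> float:
    """Detect: sample a health metric (here, host memory utilisation)."""
    return psutil.virtual_memory().percent


def diagnose(memory_pct: float) -> bool:
    """Diagnose: decide whether the symptom signals an impending failure."""
    return memory_pct >= MEMORY_THRESHOLD


def act() -> None:
    """Act: remediate without human intervention by restarting the service."""
    subprocess.run(["systemctl", "restart", SERVICE], check=True)


while True:
    if diagnose(detect()):
        act()
    time.sleep(30)   # polling interval; real systems react to event streams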
Building a product roadmap: From high-level vision to concrete plans
A roadmap provides the anchor to keep everyone aligned amid constant flux. Yet
many organizations still treat roadmaps as static artifacts — a one-and-done
exercise intended to appease executives or investors. That’s a mistake. The
most effective roadmaps are living documents evolving with the product and
market realities. ... If strategy defines direction, milestones are the engine
that keeps the train moving. Too often, teams treat milestones as arbitrary
checkpoints or internal deadlines. Done right, these can become powerful tools
for motivation, alignment and storytelling. ... The best roadmaps aren’t
written by PMs — they’re co-authored by teams. That’s why I advocate for
bottom-up collaboration anchored in executive alignment. Before any roadmap
offsite, sync with the CEO or leadership team. Understand what they care about
and why. If they disagree with priorities, resolve those conflicts early. Then
bring that context into a team workshop. During the session, identify
technical leads — those trusted voices who can translate into action.
Encourage them to pre-think tradeoffs and dependencies before the group
session. ... The perfect roadmap doesn’t exist and that’s the point. Remember,
the goal isn’t to build a flawless plan, but a resilient one. As President
Dwight D. Eisenhower said, “Plans are useless, but planning is indispensable.”
... Vision without execution is hallucination. But execution without vision is
chaos. The magic of product leadership lies in balancing both: crafting a
roadmap that’s both inspiring and achievable.
Scattered network data impedes automation efforts
As IT organizations mature their network automation strategies, it’s becoming
clear that network intent data is an essential foundation. They need reliable
documentation of network inventory, IP address space, topology and connectivity,
policies, and more. This requirement often kicks off a network source of truth
(NSoT) project, which involves network teams discovering, validating, and
consolidating disparate data in a tool that can model network intent and provide
programmatic access to data for network automation tools and other systems. ...
Some IT leaders do not understand the value of NSoT solutions. The data is
already available, they reason, even if it’s scattered and of dubious quality,
so why spend money on a product or even extra engineers to consolidate it?
“Part of the issue
is that we’ve got leadership that are not infrastructure people,” said a network
engineer with a global automobile manufacturer. “It’s kind of a heavy lift to
get them to buy into it, because they see that applications are running fine
over the network. ‘Why do I need to spend money on this?’ And we tell them
that the network is running fine, but there will be failures at some point and
it’s worth preventing that.” ... NSoT isn’t a magic bullet for solving the
problems IT organizations have with poor network documentation and scattered
operational data. Network engineering teams will need to discover, validate,
reconcile, and import data from multiple repositories. This process can be
challenging and time-consuming. Some of this data will be difficult to find.
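As a rough illustration of what “programmatic access to data” can look like,
here is a sketch of an automation tool pulling validated inventory from a
NetBox-style NSoT REST API — the endpoint URL, token, and filters below are
hypothetical placeholders:

# Sketch: query a NetBox-style network source of truth for active devices at a
# site, so downstream automation works from documented intent rather than
# scattered spreadsheets. All identifiers here are placeholders.
import requests

NSOT_URL = "https://nsot.example.com/api/dcim/devices/"   # hypothetical endpoint
TOKEN = "redacted-api-token"                               # hypothetical token

response = requests.get(
    NSOT_URL,
    headers={"Authorization": f"Token {TOKEN}"},
    params={"site": "plant-01", "status": "active"},       # hypothetical filter
    timeout=30,
)
response.raise_for_status()

for device in response.json()["results"]:
    # Each record reflects intent data: name, role, primary IP, and so on.
    print(device["name"], device.get("primary_ip"))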
What insurers expect from cyber risk in 2026
Cyber insurers are beginning to use LLMs to translate internet-scale data into
structured inputs for underwriting and portfolio analysis. These applications
target specific pain points such as data gaps and processing delays. Broader
change across pricing or risk selection remains gradual. ... AI-supported
workflows begin to reduce repetitive tasks across those stages. Automation
supports data entry, document review, and routine verification. Human oversight
remains central for judgment-based decisions. The research links this shift to
measurable operational effects. Fewer manual touches per claim reduce processing
time and error rates. Claims teams gain capacity without proportional increases
in staffing. ... Age verification and online safety legislation introduce
unintended cyber risk. Requirements that reduce online anonymity create
high-value identity datasets that attract attackers. The research highlights
rising exposure to identity-based coercion, insider compromise, and extortion.
Once personal identity data is leaked, attackers gain leverage that can
translate into access to corporate systems. This dynamic supports long-term
campaigns by organized groups and state-aligned actors. ... Data orchestration
becomes a core
capability. Insurers and reinsurers integrate signals including security
posture, threat activity, and loss experience into shared models. Consistent
views across teams and regions support portfolio governance. This shift places
emphasis on actionability. Data value depends on timing and relevance within
workflows rather than volume alone.
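For illustration only, here is a sketch of the kind of workflow described above
— an LLM turning an unstructured broker submission into structured underwriting
inputs, with a human review flag. The schema and the call_llm placeholder are
assumptions for the sketch, not the specific systems studied in the research:

# Sketch: convert unstructured submission text into structured underwriting
# fields via an LLM, keeping a human in the loop. `call_llm` stands in for
# whatever model client or internal gateway an insurer actually uses.
import json

SCHEMA_HINT = (
    "Return JSON with keys: industry, employee_count, mfa_enabled, "
    "backup_frequency, prior_incidents (list of {year, type, loss_usd})."
)


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; replace with the vendor SDK in use."""
    raise NotImplementedError


def extract_underwriting_fields(submission_text: str) -> dict:
    prompt = (
        "Extract cyber underwriting inputs from the submission below.\n"
        f"{SCHEMA_HINT}\n\nSubmission:\n{submission_text}"
    )
    fields = json.loads(call_llm(prompt))
    # Judgment-based decisions stay with people: flag for review, don't auto-price.
    fields["needs_review"] = True
    return fields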
Human + AI Will Define the Future of Work by 2027: Nasscom-Indeed Report
This emerging model of Humans + AI working together is reported as the next
phase of transformation, where success depends on how effectively AI will
augment human capabilities, empower employees, and align with organizational
purpose. The report highlights that the most effective human–AI partnerships are
emerging across higher-order activities such as scope definition, system
architecture, and data model design. At the same time, more routine and
repeatable tasks, including boilerplate code generation and unit test creation,
are expected to be increasingly automated by AI over the next two to three
years. ... To stay relevant in a Human + AI workplace, the report emphasizes
that individuals should build capability, adaptability, and continuous learning.
This includes experience with using AI tools (prompting, critical review of
output, combining AI speed with human judgment), moving up the value chain
(e.g., developers from coding to architecture thinking), building
multidisciplinary skills (tech + domain + professional skills), and focusing on
outcomes over credentials by creating repositories of work samples showing
measurable impact. ... Organizations have already started taking measures to
address these challenges. Seven in ten HR leaders are focusing on upskilling,
and more than half are focusing on modernizing systems. With respect to AI
adoption, 79% prioritize internal reskilling as a dominant strategy.
From vulnerability whack-a-mole to strategic risk operations
“Software bills of materials are just an ingredients list,” he notes. “That’s
helpful because the idea is that through transparency we will have a shared
understanding. The problem is that they don’t deliver a shared understanding
because the expectation of anyone in security who reads the SBOM is the first
job they’ll do is run those versions against vulnerability databases.” This
creates a predictable problem: security teams receive SBOMs, scan them for
vulnerabilities, and generate alerts for every CVE match, regardless of whether
those vulnerabilities actually affect the product. ... To make SBOMs truly
useful, Kreilein introduces VEX (Vulnerability Exploitability Exchange), an
open standards framework that addresses the context problem. VEX provides four
status messages: affected, not affected, under investigation, and fixed. “What
we want to start doing is using a project called VEX that gives four possible
status messages,” Kreilein explains. ... Developers aren’t refusing to patch
because they don’t care about security. They’re worried that upgrading a
component will break the application. “If my application is brittle and can’t
take change, I cannot upgrade to the non-vulnerable version,” Kreilein
explains. “If I don’t have effective test automation and integration and unit
testing, I can’t guarantee that this upgrade won’t break the application.” This
reframing shifts the security conversation from compliance and mandates to
engineering fundamentals. Better test coverage, better reference architectures,
and better secure-by-design practices become security initiatives.
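To make those four status values concrete, here is a minimal sketch that emits
an OpenVEX-style statement for one SBOM component. The package URL and CVE are
illustrative, and real documents follow the OpenVEX or CSAF VEX specifications
rather than this simplified shape:

# Sketch: attach exploitability context to an SBOM entry using the four VEX
# statuses. Identifiers are illustrative placeholders.
import json

VEX_STATUSES = {"affected", "not_affected", "under_investigation", "fixed"}


def vex_statement(product: str, cve: str, status: str, justification: str | None = None) -> dict:
    if status not in VEX_STATUSES:
        raise ValueError(f"unknown VEX status: {status}")
    statement = {
        "vulnerability": {"name": cve},
        "products": [{"@id": product}],
        "status": status,
    }
    if justification:
        # e.g. "vulnerable_code_not_in_execute_path" -- the context a raw SBOM scan lacks
        statement["justification"] = justification
    return statement


document = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "statements": [
        vex_statement(
            "pkg:maven/com.example/app@1.4.2",
            "CVE-2021-44228",
            "not_affected",
            "vulnerable_code_not_in_execute_path",
        ),
    ],
}
print(json.dumps(document, indent=2))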
AI backlash forces a reality check: humans are as important as ever
Companies are now moving beyond the hype and waking up to the consequences of AI
slop, underperforming tools, fragmented systems, and wasted budgets, said Brooke
Johnson, chief legal officer at Ivanti. “The early rush to adopt AI prioritized
speed over strategy, leaving many organizations with little to show for their
investments,” Johnson said. Organizations now need to balance AI, workforce
empowerment and cybersecurity at the same time they’re still formulating strategies.
That’s where people come in. ... AI is becoming less a tech problem and more of
an adoption hurdle, Depa said. “What we’re seeing now more and more is less of a
technology challenge, more of a change management, people, and process challenge
— and that’s going to continue as those technologies continue to evolve,” he
said. DXC Technology is taking a similar approach, designing tools where human
insight, judgment, and collaboration create value that AI can’t deliver alone, said
Dan Gray, vice president of global technical customer operations at the company.
... Companies might have to accept underutilizing some of the AI gains in the
near term. AI could help workers complete their tasks in half the time and enjoy
a leisurely pace. Alternatively, employees might burn out quickly by getting more
work. “If you try to lay them off, you don’t have a good workforce left. If you
let them be, why are you paying them? So that’s a paradox,” Seth said.
Physical AI is the next frontier - and it's already all around you
Physical AI can be generally defined as AI implemented in hardware that can
perceive the world around it and then reason to perform or orchestrate actions.
Popular examples include autonomous vehicles and robots -- but robots that
utilize AI to perform tasks have existed for decades. So what's the difference?
... Saxena adds that while humanoid robots will be useful in instances where
humans don't want to perform a task, either because it is too tedious or too
risky, they will not replace humans. That's where AI wearables, such as smart
glasses, play an important role, as they can augment human capabilities. But
beyond that, AI wearables might actually be able to feed back into other
physical AI devices, such as robots, by providing a high-quality dataset based
on real-life perspectives and examples. "Why are LLMs so great? Because there is
a ton of data on the internet, for a lot of the contextual information and
whatnot, but physical data does not exist," said Saxena. ... Given the privacy
concerns that may come from having your everyday data used to train robots,
Saxena highlighted that the data from your wearables should always be kept at
the highest level of privacy. As a result, the data -- which should already be
anonymized by the wearable company -- could be very helpful in training robots.
That robot can then create more data, resulting in a healthy ecosystem. "This
sharing of context, this sharing of AI between that robot and the wearable AI
devices that you have around you is, I think, the benefit that you are going to
be able to accrue," added Asghar.
Unlocking the Power of Geospatial Artificial Intelligence (GeoAI)
GeoAI is more than sophisticated map analytics. It is a strategic technology
that blends AI with the physical world, allowing tech experts to see,
understand, and act on patterns that were previously invisible. From planning
sustainable cities to protecting wildlife, it’s helping experts tackle
significant challenges with precision and speed. As the world generates more
location-based data every day, GeoAI is becoming a must-have tool. It’s not just
tech – it’s a way to make the world work better. ... To make it simpler: machine
learning spots trends, computer vision interprets images, GIS organizes it all,
and knowledge graphs tie it together. The result? GeoAI can take a chaotic pile
of data and deliver clear answers, like telling a city where to build a new park
or warning about a wildfire risk. It’s a powerhouse that’s making location-based
decisions faster and smarter. In all, GeoAI is transforming the speed at which
we extract meaning from complex datasets, thereby enabling us to address the
Earth’s most pressing challenges. ... Though powerful, GeoAI is not without
challenges. Effective implementation requires careful attention to data privacy,
technical infrastructure, and organizational change management. ... Leaders who
take GeoAI seriously stand to gain more than just incremental improvements. With
the right systems in place, they can respond faster, make smarter decisions, and
get better results from every field team in the network.
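As a toy sketch of the pattern-detection step in that pipeline — clustering
geotagged reports to flag where a city might intervene — assuming scikit-learn
and purely hypothetical coordinates:

# Toy GeoAI-style step: cluster geotagged incident reports to surface hotspots
# worth attention (e.g. heat complaints pointing at where shade or a park is
# needed). Real pipelines layer GIS data, imagery, and knowledge graphs on top.
import numpy as np
from sklearn.cluster import DBSCAN

# (latitude, longitude) of hypothetical reports
reports = np.array([
    [37.7749, -122.4194], [37.7752, -122.4189], [37.7747, -122.4201],
    [37.8044, -122.2712], [37.8050, -122.2708],
    [37.6879, -122.4702],
])

# eps of ~0.001 degrees is roughly 100 m; min_samples filters isolated reports
labels = DBSCAN(eps=0.001, min_samples=2).fit_predict(reports)

for label in sorted(set(labels) - {-1}):   # -1 marks noise points
    centroid = reports[labels == label].mean(axis=0)
    print(f"hotspot {label}: centroid near {centroid.round(4)}")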
For application security: SCA, SAST, DAST and MAST. What next?
If you think SAST and SCA are enough, you’re already behind. The future of app
security is posture, provenance and proof, not alerts. ... Posture is the
‘what’. Provenance is the ‘how’. The SLSA framework gives us a shared vocabulary
and verifiable controls to prove that artifacts were built by hardened,
tamper‑resistant pipelines with signed attestations that downstream consumers
can trust. When I insist on SLSA Level 2 for most services and Level 3 for
critical paths, I am not chasing compliance theater; I am buying integrity that
survives audit and incident. Proof is where SBOMs finally grow up. Binding SBOM
generation to the build that emits the deployable bits, signing them and
validating at deploy time moves SBOMs from “ingredient lists” to enforceable
controls. The CNCF TAG‑Security best practices v2 paper is my practical map:
personas, VEX for exploitability, cryptographic verification to ensure tests
actually ran, and prescriptive guidance for cloud‑native factories. ... Among
the nexts, AI is the most mercurial. NIST’s final 2025 guidance on adversarial
ML split threats across PredAI and GenAI and called out prompt injection in
direct and indirect form as the dominant exploit in agentic systems where
trusted instructions commingle with untrusted data. The U.S. AI Safety Institute
published work on agent hijacking evaluations, which I treat as required
red‑team reading for anyone delegating actions to tools.
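As one minimal sketch of the “proof” idea — refusing to deploy unless a signed
SBOM attestation on the image verifies — assuming a recent cosign CLI is
installed; the image digest, workflow identity, and issuer are placeholders, and
flag names can vary between cosign versions:

# Sketch of a deploy-time gate: verify a signed SBOM attestation before rollout.
# All identifiers below are placeholders for the real image and build identity.
import subprocess
import sys

IMAGE = "registry.example.com/payments@sha256:placeholder-digest"
IDENTITY = "https://github.com/example/payments/.github/workflows/release.yml@refs/tags/v1.4.2"
ISSUER = "https://token.actions.githubusercontent.com"

result = subprocess.run(
    [
        "cosign", "verify-attestation",
        "--type", "spdxjson",                  # the SBOM bound to this exact build
        "--certificate-identity", IDENTITY,    # the workflow allowed to build it
        "--certificate-oidc-issuer", ISSUER,
        IMAGE,
    ],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    print("SBOM attestation failed verification; refusing to deploy", file=sys.stderr)
    sys.exit(1)

print("SBOM attestation verified; proceeding with deploy")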