Quote for the day:
"Failure is the condiment that gives
success its flavor." -- Truman Capote

Attackers actively target user credentials because they offer the most direct
route or foothold into a targeted organization’s network. Once inside, attackers
can move laterally across systems, searching for other user accounts to
compromise, or attempt to escalate their privileges and gain administrative
control. This hunt for credentials extends beyond user accounts to include code
repositories, where developers may have hard-coded access keys and other secrets
into application source code. Attacks using valid credentials were successful
98% of the time, according to Picus Security. ... “CISOs and security teams
should focus on enforcing strong, unique passwords, using MFA everywhere,
managing privileged accounts rigorously and testing identity controls
regularly,” Curran says. “Combined with well-tuned DLP and continuous
monitoring that can detect abnormal patterns quickly, these measures can help
limit the impact of stolen or cracked credentials.” Picus Security’s latest
findings reveal a concerning gap between the perceived protection of security
tools and their actual performance. An overall protection effectiveness score of
62% contrasts with a shockingly low 3% prevention rate for data exfiltration.
“Failures in detection rule configuration, logging gaps and system integration
continue to undermine visibility across security operations,” according to Picus
Security.

In an age of escalating cyber threats and expanding digital footprints, security
can no longer be layered on; it must be architected in from the start. With the
rise of AI, IoT and even quantum computing on the horizon, the threat landscape
is more dynamic than ever. Security-embedded architectures prioritize
identity-first access control, continuous monitoring and zero-trust principles
as baseline capabilities. ... Sustainability is no longer a side initiative;
it’s becoming a first principle of enterprise architecture. As organizations
face pressure from regulators, investors and customers to lower their carbon
footprint, digital sustainability is gaining traction as a measurable design
objective. From energy-efficient data centers to cloud optimization strategies
and greener software development practices, architects are now responsible for
minimizing the environmental impact of IT systems. The Green Software Foundation
has emerged as a key ecosystem partner, offering measurement standards like
software carbon intensity (SCI) and tooling for emissions-aware development
pipelines. ... Technology leaders must now foster a culture of innovation, build
interdisciplinary partnerships and enable experimentation while ensuring
alignment with long-term architectural principles. They must guide the
enterprise through both transformation and stability, navigating short-term
pressures and long-term horizons simultaneously.
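The SCI standard mentioned above boils down to one published formula: operational emissions (energy consumed times grid carbon intensity) plus a share of embodied hardware emissions, divided by a functional unit. A minimal sketch in Python, with every input value an illustrative placeholder rather than a measurement:

```python
# Minimal sketch of a Software Carbon Intensity (SCI) calculation:
# SCI = ((E * I) + M) per R, following the Green Software Foundation spec.
# All numbers below are illustrative placeholders, not measured values.

def sci_score(energy_kwh: float, grid_intensity_g_per_kwh: float,
              embodied_g: float, functional_units: float) -> float:
    """Return grams of CO2-equivalent per functional unit (e.g. per request)."""
    operational_g = energy_kwh * grid_intensity_g_per_kwh   # E * I
    return (operational_g + embodied_g) / functional_units  # (E*I + M) per R

# Example: a service that handled one million requests over the period measured.
score = sci_score(
    energy_kwh=120.0,              # E: energy consumed within the software boundary
    grid_intensity_g_per_kwh=450,  # I: regional grid carbon intensity (gCO2e/kWh)
    embodied_g=15_000,             # M: apportioned embodied hardware emissions
    functional_units=1_000_000,    # R: functional unit, here one request
)
print(f"SCI: {score:.4f} gCO2e per request")
```

Wiring a calculation like this into a delivery pipeline is one way to treat emissions as a tracked design objective rather than an after-the-fact report.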

Modern architectures dissolve the boundary between core and digital. The digital
banking solution is no longer a bolt-on to the core; the core and digital come
together to form the accountholder experience. That user experience is delivered
through the digital channel, but when done correctly, it’s enabled by the modern
core. Among other things, the core transformation requires robust use of shared
APIs, consistent data structures, and unified development teams. Leading
financial institutions are coming to realize that core evaluations must now
include an assessment of the core’s capability to enable the digital experience.
Criteria such as availability, reliability, real-time processing, speed, and
security are emerging as foundational requirements for a core that enables the
digital experience. "If your core can’t keep up with your digital, you’re stuck playing
catch-up forever," said Jack Henry’s Paul Wiggins, Director of Sales, Digital
Engineers. ... Many institutions still operate with digital siloed in one
department, while marketing, product, and operations pursue separate agendas.
This leads to mismatched priorities — products that aren’t promoted effectively,
campaigns that promise features operations can’t support, and technical fixes
that don’t address the root cause of customer and member pain points. ...
Small-business services are a case in point. Jack Henry’s Strategy Benchmark
study found that 80% of CEOs plan to expand these services over the next two
years.
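The "shared APIs, consistent data structures" requirement above is easiest to see in a contract both sides consume. The sketch below is purely hypothetical – the field names and serializer are illustrative, not any vendor's actual API – but it shows the idea: the core and the digital channel share one account representation and one place where it is serialized.

```python
# Hypothetical shared data contract: the same account structure is produced by
# the core and rendered by the digital channel, so neither side re-maps fields.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AccountSnapshot:
    account_id: str
    available_balance: float   # a real system would use Decimal or minor units
    ledger_balance: float
    currency: str
    as_of: datetime            # real-time core: populated at read time, not batch

def to_api_payload(snapshot: AccountSnapshot) -> dict:
    """Serialize once, in one place, so core and digital stay consistent."""
    return {
        "accountId": snapshot.account_id,
        "availableBalance": snapshot.available_balance,
        "ledgerBalance": snapshot.ledger_balance,
        "currency": snapshot.currency,
        "asOf": snapshot.as_of.isoformat(),
    }
```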

The thing that’s really important for a CIO to be thinking about is that we are
a microcosm for how all of the business functions are trying to execute the
tactics against the strategy. What we can do across the portfolio is represent
the strategy in real terms back to the business. We can say: These are all of
the different places where we're thinking about investing. Does that match with
the strategy we thought we were setting for ourselves? And where is there a
delta and a difference? ... When I got my first CIO role, there was all of this
conversation about business process. That was the part that I had to learn and
figure out how to map into these broader, strategic conversations. I had my
first internal IT role at Deutsche Bank, where we really talked about product
model a lot -- thinking about our internal IT deliverables as products. When I
moved to Lenovo, we had very rich business process and transformation
conversations because we were taking the whole business through such a
foundational change. I was able to put those two things together. It was a
marriage of several things: running a product organization; marrying that to the
classic IT way of thinking about business process; and then determining how that
becomes representative of the business strategy.

Active metadata addresses the shortcomings of passive approaches by
automatically updating the metadata whenever an important aspect of the
information changes. Defining active metadata and understanding why it matters
begins by looking at the shift in organizations’ data strategies from a focus
on data acquisition to data consumption. The goal of active metadata is to
promote the discoverability of information resources as they are acquired,
adapted, and applied over time. ... From a data consumer’s perspective, active
metadata adds depth and breadth to their perception of the data that fuels
their decision-making. By highlighting connections between data elements that
would otherwise be hidden, active metadata promotes logical reasoning about
data assets. This is especially so when working on complex problems that
involve a large number of disconnected business and technical entities. The
active metadata analytics workflow orchestrates metadata management across
platforms to enhance application integration, resource management, and quality
monitoring. It provides a single, comprehensive snapshot of the current status
of all data assets involved in business decision-making. The technology
augments metadata with information gleaned from business processes and
information systems.
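As a rough illustration of metadata that updates itself "whenever an important aspect of the information changes", the sketch below refreshes a catalog entry as a side effect of every dataset write rather than relying on manual documentation. The catalog, dataset, and field names are all hypothetical:

```python
# Sketch of "active" metadata: every write refreshes the catalog entry as a
# side effect, instead of relying on someone to update documentation later.
import hashlib
import json
from datetime import datetime, timezone

CATALOG: dict[str, dict] = {}  # stand-in for a real metadata store

def write_dataset(name: str, rows: list[dict], source: str) -> None:
    # ... persist rows to the actual storage layer here ...
    schema = sorted(rows[0].keys()) if rows else []
    CATALOG[name] = {
        "row_count": len(rows),
        "schema": schema,
        "schema_hash": hashlib.sha256(json.dumps(schema).encode()).hexdigest()[:12],
        "updated_at": datetime.now(timezone.utc).isoformat(),
        "lineage": source,  # which process or upstream asset produced the data
    }

write_dataset("orders_daily", [{"order_id": 1, "amount": 42.0}], source="erp_extract_job")
print(CATALOG["orders_daily"])
```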

“Digital readiness at Godrej Enterprises Group is about empowering every
employee to thrive in an ever-evolving landscape,” Kaur said. “It’s not just
about technology adoption. It’s about building a workforce that is agile,
continuously learning, and empowered to innovate.” This reframing reflects a
broader trend across Indian industry, where digital transformation is no longer
confined to IT departments but runs through every layer of an organisation. For
Godrej Enterprises Group, this means designing a workplace where
intrapreneurship is rewarded, innovation is constant, and employees are trained
to think beyond immediate functions. ... “We’ve moved away from one-off training
sessions to creating a dynamic ecosystem where learning is accessible, relevant,
and continuous,” she said. “Learning is no longer a checkbox — it’s a shared
value that energises our people every day.” This shift is underpinned by
leadership development programmes and innovation platforms, ensuring that
employees at every level are encouraged to experiment and share knowledge.
... “We see digital skilling as a core business priority, not just an HR or
L&D initiative,” she said. “By making digital skilling a shared
responsibility, we foster a culture where learning is continuous, progress is
visible, and success is celebrated across the organisation.”

However, before you get too excited, he warned: "This is a great example of what
LLMs are doing right now. You give it a small, well-defined task, and it goes
and does it. And you notice that this patch isn't, 'Hey, LLM, go write me a
driver for my new hardware.' Instead, it's very specific -- convert this
specific hash to use our standard API." Levin said another AI win is that "for
those of us who are not native English speakers, it also helps with writing a
good commit message. It is a common issue in the kernel world where sometimes
writing the commit message can be more difficult than actually writing the code
change, and it definitely helps there with language barriers." ... Looking
ahead, Levin suggested LLMs could be trained to become good Linux maintainer
helpers: "We can teach AI about kernel-specific patterns. We show examples from
our codebase of how things are done. It also means that by grounding it into our
kernel code base, we can make AI explain every decision, and we can trace it to
historical examples." In addition, he said the LLMs can be connected directly to
the Linux kernel Git tree, so "AI can go ahead and try and learn things about
the Git repo all on its own." ... This AI-enabled program automatically analyzes
Linux kernel commits to determine whether they should be backported to stable
kernel trees. The tool examines commit messages, code changes, and historical
backporting patterns to make intelligent recommendations.
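The article doesn't show the tool itself, but the general shape of such an analysis is straightforward to sketch: pull the commit message and diff from Git, ground the prompt in historical backport examples, and ask the model for a decision it must justify. Everything below – the prompt wording, the ask_llm stub, and the example list – is hypothetical:

```python
# Hypothetical sketch of LLM-assisted backport triage; not Levin's actual tool.
import subprocess

def ask_llm(prompt: str) -> str:
    """Placeholder: call whichever model endpoint the team actually uses."""
    raise NotImplementedError

def commit_details(sha: str) -> str:
    """Return the commit message, stats, and diff via plain git plumbing."""
    return subprocess.run(
        ["git", "show", "--stat", "--patch", sha],
        capture_output=True, text=True, check=True,
    ).stdout

def should_backport(sha: str, examples: list[str]) -> str:
    """Ask for a YES/NO recommendation that must cite a historical example."""
    prompt = (
        "You review Linux kernel commits for backporting to stable trees.\n"
        "Historical examples of accepted backports:\n"
        + "\n---\n".join(examples)
        + "\n\nCandidate commit:\n" + commit_details(sha)
        + "\n\nAnswer YES or NO, then cite the example that motivates the decision."
    )
    return ask_llm(prompt)

# should_backport("abc1234", examples=["<previously accepted backport commits>"])
```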

No matter how reliable your application components are, they will need to be
maintained, upgraded or replaced at some point. As elements in your application
evolve, some will reach end-of-life status – for example, Redis 7.2 will stop
receiving security updates in February 2026. Before that point, it’s necessary
to assess the available options. For businesses in some sectors, such as
financial services, running out-of-date, unsupported software is a potential
compliance failure under security and resilience regulations. For example, the
Payment Card Industry Data Security Standard version 4.0 requires teams to
verify every year that all their software and hardware is still supported; for
end-of-life software, teams must also provide a full migration plan to be
completed within twelve months. ... For developers and software
architects, understanding the role that any component plays in the overall
application makes it easier to plan ahead. Even the most reliable and consistent
component may need to change given outside circumstances. In the Discworld
series, golems are so reliable that they become the standard for currency; at
the same time, there are so many of them that any problem could affect the whole
economy. When it comes to data caching, Redis has been a reliable companion for
many developers.
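Since Redis 7.2's security-update cutoff is the concrete deadline cited above, one small operational safeguard is to check running servers automatically. The sketch below uses redis-py's standard INFO call; the end-of-life table contains only the date from the article (taken as 1 February 2026 for comparison purposes), and any broader version tracking is an assumption left to the reader:

```python
# Sketch: flag Redis servers approaching or past end of life for security updates.
from datetime import date
import redis  # redis-py

# Only the 7.2 entry comes from the article; extend with your own EOL tracking.
EOL_DATES = {"7.2": date(2026, 2, 1)}

def check_redis_eol(host: str = "localhost", port: int = 6379) -> None:
    client = redis.Redis(host=host, port=port)
    version = client.info("server")["redis_version"]   # e.g. "7.2.5"
    major_minor = ".".join(version.split(".")[:2])
    eol = EOL_DATES.get(major_minor)
    if eol is None:
        print(f"Redis {version}: no EOL date on record")
    elif date.today() >= eol:
        print(f"Redis {version}: PAST end of life ({eol}), plan migration now")
    else:
        print(f"Redis {version}: supported until {eol}")

# check_redis_eol()
```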

The report, based on insights from more than 2,000 IT leaders, reveals that a
staggering 94% of global IT leaders struggle with cloud cost optimization. Many
enterprises underestimate the complexities of managing public cloud resources
and the inadvertent overspending that results from mismanagement,
overprovisioning, or a lack of visibility into resource usage. This inefficiency
goes beyond just missteps in cloud adoption. It also highlights how difficult it
is to align IT cost optimization with broader business objectives. ... This
growing focus sheds light on the rising importance of finops (financial
operations), a practice aimed at bringing greater financial accountability to
cloud spending. Adding to this complexity is the increasing adoption of
artificial intelligence and automation tools. These technologies drive
innovation, but they come with significant associated costs. ... The argument
for greater control is not new, but it has gained renewed relevance when paired
with cost optimization strategies. ... With 41% of respondents’ IT budgets still
being directed to scaling cloud capabilities, it’s clear that the public cloud
will remain a cornerstone of enterprise IT in the foreseeable future. Cloud
services such as AI-powered automation remain integral to transformative
business strategies, and public cloud infrastructure is still the preferred
environment for dynamic, highly scalable workloads. Enterprises will need to
make cloud deployments truly cost-effective.
Software architects and engineering leaders building AI-native platforms are
starting to notice familiar warning signs: sudden cost spikes on AI API bills,
bots with overbroad permissions tapping into sensitive data, and a disconcerting
lack of visibility or control over what these AI agents are doing. It’s a
scenario reminiscent of the early days of microservices – before we had gateways
and meshes to restore order – only now the "microservices" are semi-autonomous
AI routines. Gartner has begun shining a spotlight on this emerging gap. ...
Every major shift in software architecture eventually demands a mediation layer
to restore control. When web APIs took off, API gateways became essential for
managing authentication/authorization, rate limits, and policies. With
microservices, service meshes emerged to govern internal traffic. Each time, the
need only became clear once the pain of scale surfaced. Agentic AI is on the
same path. Teams are wiring up bots and assistants that call APIs independently
– great for demos ... So, what exactly is an AI Gateway? At its core, it’s a
middleware component – either a proxy, service, or library – through which all
AI agent requests to external services are channeled. Rather than letting each
agent independently hit whatever API it wants, you route those calls via the
gateway, which can then enforce policies and provide central management.
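To make that definition concrete, here is a minimal sketch of an AI gateway as a FastAPI proxy: agents authenticate to the gateway instead of holding provider keys, only allow-listed upstreams are reachable, and a crude per-agent rate limit plus request logging supplies the central visibility described above. The provider URL, header name, token table, and limits are all illustrative assumptions:

```python
import os
import time
from collections import defaultdict

import httpx
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

# Hypothetical policy tables; a real gateway would load these from configuration.
UPSTREAMS = {
    "openai": "https://api.openai.com/v1/chat/completions",  # illustrative upstream
}
AGENT_KEYS = {"demo-agent-key": "billing-bot"}   # gateway token -> agent identity
RATE_LIMIT = 60                                  # requests per agent per minute
_recent: dict[str, list[float]] = defaultdict(list)


@app.post("/gateway/{provider}")
async def proxy(provider: str, request: Request):
    # 1. Authenticate the agent centrally instead of handing it provider keys.
    agent = AGENT_KEYS.get(request.headers.get("x-agent-token", ""))
    if agent is None:
        raise HTTPException(status_code=401, detail="unknown agent")

    # 2. Enforce policy: allow-listed providers and a sliding-window rate limit.
    if provider not in UPSTREAMS:
        raise HTTPException(status_code=403, detail="provider not allowed")
    now = time.time()
    window = [t for t in _recent[agent] if now - t < 60]
    if len(window) >= RATE_LIMIT:
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    window.append(now)
    _recent[agent] = window

    # 3. Forward with a centrally managed credential, and record what happened.
    body = await request.json()
    async with httpx.AsyncClient(timeout=30) as client:
        upstream = await client.post(
            UPSTREAMS[provider],
            json=body,
            headers={"Authorization": f"Bearer {os.environ.get('UPSTREAM_API_KEY', '')}"},
        )
    print(f"agent={agent} provider={provider} status={upstream.status_code}")
    return upstream.json()
```

A real deployment would swap the in-memory tables for a policy store and the print call for structured audit logging, but the mediation pattern – one choke point where identity, policy, and spend are enforced – is the same one API gateways and service meshes established.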