Quote for the day:
"Everything you’ve ever wanted is on the other side of fear." -- George Addair
AI Agents Are About To Blow Up the Business Process Layer

While AI agents are built to perform or automate specific, often-repetitive
tasks (like updating your calendar), they generally require
human input. Agentic AI is all about autonomy (think self-driving cars),
employing a system of agents to constantly adapt to dynamic environments and
independently create, execute and optimize results. When agentic AI is applied
to business process workflows, it can replace fragile, static business processes
with dynamic, context-aware automation systems. Let’s take a look at why
integrating AI agents into enterprise architectures marks a transformative leap
in the way organizations approach automation and business processes, and what
kind of platform is required to support these systems of automation. ... Models
that power networks of agents are essentially stateless functions that take
context as an input and output a response, so some kind of framework is
necessary to orchestrate them. Part of that orchestration could be simple
refinements (for example, having the model request more information). This might
sound analogous to retrieval-augmented generation (RAG) — and it should, because
RAG is essentially a simplified form of agent architecture: It provides the
model with a single tool that accesses additional information, often from a
vector database.
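The orchestration loop described above (stateless model calls, with the model able to request more information through a single retrieval tool) can be sketched roughly as follows. Every name here, including `retrieve`, `model`, and the `TOOL:` convention, is hypothetical scaffolding for illustration, not any particular framework's API:

```python
# Minimal sketch of RAG as a one-tool agent loop. In a real system,
# model() would call an LLM and retrieve() would query a vector database.

def retrieve(query: str) -> str:
    """Stand-in for a vector-database lookup."""
    docs = {"kubernetes": "Kubernetes is a container orchestrator."}
    return docs.get(query.lower(), "no match")

def model(context: str) -> str:
    """Stand-in for a stateless LLM call: context in, response out."""
    if "ANSWER:" not in context:
        # The "model" asks the orchestrator for more information.
        return "TOOL: retrieve kubernetes"
    return context.split("ANSWER:")[1].strip()

def orchestrate(question: str) -> str:
    """The framework: feed context in, route tool requests, loop."""
    context = question
    for _ in range(3):  # bounded refinement loop
        response = model(context)
        if response.startswith("TOOL: retrieve "):
            query = response.removeprefix("TOOL: retrieve ")
            context += f"\nANSWER: {retrieve(query)}"
        else:
            return response
    return "gave up"

print(orchestrate("What is Kubernetes?"))
```

The point of the sketch is the shape, not the stubs: the model itself holds no state between calls, so all memory and tool routing lives in the orchestration layer.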
The risks of autonomous AI in machine-to-machine interactions
Adversarial AI attacks, such as model poisoning and data manipulation, threaten
M2M security by compromising automated authentication and processes. These
attacks exploit vulnerabilities in how machine learning models exchange data and
authenticate within M2M environments. Model poisoning involves injecting
malicious data or manipulating updates, undermining AI decision-making and
potentially introducing backdoors. If AI systems accept compromised credentials
or updates, security degrades, particularly in autonomous M2M systems, leading
to cascading failures. ... The key is implementing zero standing privileges
(ZSP) to prevent AI-driven systems from having persistent, unnecessary access to
sensitive resources. Instead of long-lived credentials, access is granted
just-in-time (JIT) with just-enough privileges, based on real-time verification.
ZSP minimizes risk by enforcing ephemeral credentials, policy-based access
control, continuous authorization, and automated revocation if anomalies are
detected. This ensures that even if an AI system is compromised, attackers can’t
exploit standing privileges to move laterally. With AI making autonomous
decisions, security must be dynamic. By eliminating unnecessary privileges and
enforcing strict, real-time access controls, organizations can secure AI-driven
machine-to-machine interactions while maintaining agility and automation.
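As a rough illustration (not a production design; the broker, scope strings, and TTLs are invented for this sketch), zero standing privileges can be modeled as a broker that issues short-lived, narrowly scoped tokens and re-checks them on every use:

```python
# Toy sketch of ZSP/JIT access: ephemeral tokens, just-enough scope,
# continuous re-authorization, and automatic revocation on expiry.
import time
import secrets

class CredentialBroker:
    def __init__(self, ttl_seconds: float = 2.0):
        self.ttl = ttl_seconds
        self._issued = {}  # token -> (scope, expiry time)

    def issue(self, agent_id: str, scope: str) -> str:
        """Grant a short-lived token with just-enough privilege."""
        token = secrets.token_hex(8)
        self._issued[token] = (scope, time.monotonic() + self.ttl)
        return token

    def authorize(self, token: str, action: str) -> bool:
        """Continuous authorization: re-check scope and expiry on use."""
        entry = self._issued.get(token)
        if entry is None:
            return False
        scope, expiry = entry
        if time.monotonic() > expiry:
            del self._issued[token]  # automated revocation
            return False
        return action == scope

broker = CredentialBroker(ttl_seconds=0.1)
t = broker.issue("agent-7", scope="read:inventory")
print(broker.authorize(t, "read:inventory"))   # True while fresh
print(broker.authorize(t, "write:inventory"))  # False: outside scope
time.sleep(0.2)
print(broker.authorize(t, "read:inventory"))   # False: token expired
```

A compromised agent in this model holds nothing worth stealing for long: its token expires in moments and never granted more than the single action it was issued for.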
Password managers under increasing threat as infostealers triple and adapt

Attacks against credential stores are rising partly because such attacks have
become easier and more automated, with widely available tools enabling
cybercriminals to extract and exploit credentials at scale. In addition, “many
businesses still rely on passwords as their primary defense, despite the known
security risks, due to challenges around MFA [multi-factor authentication]
adoption and user friction,” Berzinski said. David Sancho, senior threat
researcher at anti-malware vendor Trend Micro, told CSO that the increase in
malware targeting credential stores is unsurprising. “We are definitely seeing a
rise in malware targeting credential stores, but this is hardly a surprise to
anybody,” Sancho said. “Credential stores are where credentials are located,
specifically on the browser. Every time you let the browser ‘memorize’ a
user/password pair, it gets stored somewhere. Those locations are certainly the
prime targets — and have been for a long time — for infostealers.” Darren
Guccione, CEO and co-founder of password manager vendor Keeper Security,
acknowledged that cybercriminals were targeting credential stores but argued
that some applications were better protected than others. “Not all password
managers are created equal, and that distinction is critical as cybercriminals
increasingly target a broad range of cybersecurity solutions, including
credential stores,” Guccione said.
What role does LLM reasoning play for software tasks?
Reasoning models like o1 and R1 work in two steps: first they “reason” or
“think” about the user’s prompt, then they return a final result in a second
step. In the reasoning step, the model goes through a chain of thought to come
to a conclusion. Whether you can fully see the contents of this reasoning step
depends on the user interface in front of the model. OpenAI, for example, shows
users only summaries of each step. DeepSeek’s platform shows the full reasoning
chain (and of course you also have access to the full chain when you run R1
yourself). At the end of the reasoning step, the chatbot UIs show messages
like “Thought for 36 seconds” or “Reasoned for 6 seconds”. However long it
takes, and regardless of whether the user can see it or not, tokens are being
generated in the background, because LLMs think through token generation. ...
Many of the reasoning benchmarks use grade school math problems, so those are
my frame of reference when I try to find analogous problems in software where
a chain of thought would be helpful. It seems to me like this is about
problems that need multiple steps to come to a solution, where each step
depends on the output of the previous one. ... Debugging seems like an
excellent use case for chain of thought. My main puzzle is how much our usage
of reasoning for debugging will be hindered by the lack of function
calling.
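One practical consequence of reasoning arriving as ordinary tokens: when you run such a model yourself, you have to separate the chain of thought from the final answer. R1, for instance, wraps its reasoning in `<think>` tags; a minimal parser might look like the following (the response text is made up for illustration):

```python
# Split an R1-style response into (reasoning, answer). R1 emits its
# chain of thought between <think> and </think>; what follows the
# closing tag is the final answer shown to the user.
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()  # no reasoning block present
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

raw = ("<think>The bug is an off-by-one in the loop bound.</think>"
       "Use range(n), not range(n + 1).")
reasoning, answer = split_reasoning(raw)
```

Chatbot UIs do essentially this when they collapse the chain behind a “Thought for 36 seconds” banner.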
How to keep AI hallucinations out of your code

The consequences of flawed AI code can be significant. Security holes and
compliance issues are top of mind for many software companies, but some issues
are less immediately obvious. Faulty AI-generated code adds to overall
technical debt, and it can detract from the efficiency code assistants are
intended to boost. “Hallucinated code often leads to inefficient designs or
hacks that require rework, increasing long-term maintenance costs,” says
Microsoft’s Ramaswamy. Fortunately, the developers we spoke with had plenty of
advice about how to ensure AI-generated code is correct and secure. There were
two categories of tips: how to minimize the chance of code hallucinations, and
how to catch hallucinations after the fact. ... Even with machine assistance,
most people we spoke to saw human beings as the last line of defense against
AI hallucination. Most saw human involvement remaining crucial to the coding
process for the foreseeable future. “Always use AI as a guide, not a source
of truth,” says Microsoft’s Ramaswamy. “Treat AI-generated code as a
suggestion, not a replacement for human expertise.” That expertise shouldn’t
just be around programming generally; you should stay intimately acquainted
with the code that powers your applications. “It can sometimes be hard to spot
a hallucination if you’re unfamiliar with a codebase,” says Rehl.
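As one example of catching hallucinations after the fact: a common failure mode is generated code that imports packages that don't exist. A small static check like this sketch (not taken from any of the tools mentioned above) can flag those before anyone runs `pip install` on a hallucinated name:

```python
# Flag top-level imports in a code snippet that resolve to no installed
# or stdlib module; hallucinated package names are a known AI failure mode.
import ast
import importlib.util

def missing_imports(source: str) -> list[str]:
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    # find_spec returns None when no module by that name can be located
    return sorted(n for n in names if importlib.util.find_spec(n) is None)

snippet = "import json\nimport totally_made_up_pkg\n"
print(missing_imports(snippet))  # ['totally_made_up_pkg']
```

Checks like this only catch one class of hallucination, of course; logic errors in code that imports real libraries still need tests and human review.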
Open source LLMs hit Europe’s digital sovereignty roadmap

The project’s top-line goal, as per its tagline, is to create: “A series of
foundation models for transparent AI in Europe.” Additionally, these models
should preserve the “linguistic and cultural diversity” of all EU languages —
current and future. What this translates to in terms of deliverables is still
being ironed out, but it will likely mean a core multilingual LLM designed for
general-purpose tasks where accuracy is paramount. And then also smaller
“quantized” versions, perhaps for edge applications where efficiency and speed
are more important. “This is something we still have to make a detailed plan
about,” Hajič said. “We want to have it as small but as high-quality as
possible. We don’t want to release something which is half-baked, because from
the European point-of-view this is high-stakes, with lots of money coming from
the European Commission — public money.” While the goal is to make the model
as proficient as possible in all languages, attaining equality across the
board could also be challenging. “That is the goal, but how successful we can
be with languages with scarce digital resources is the question,” Hajič said.
“But that’s also why we want to have true benchmarks for these languages, and
not to be swayed toward benchmarks which are perhaps not representative of the
languages and the culture behind them.”
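To make “quantized” concrete: quantization stores a model’s weights in fewer bits, shrinking it (and speeding it up on edge hardware) at some cost in precision. A toy sketch of symmetric int8 quantization follows; real model quantization operates per-tensor or per-channel over millions of weights, not a short list:

```python
# Toy symmetric int8 quantization: map floats into [-127, 127] with a
# single scale factor, then recover approximate values on the way back.
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

q, scale = quantize_int8([0.42, -1.27, 0.05])
approx = dequantize(q, scale)  # close to, but not exactly, the originals
```

Each weight now fits in one byte instead of four, which is the whole trade: a smaller, faster model that answers almost, but not exactly, like the original.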
How to Create a Sound Data Governance Strategy

“Governance isn’t a project with an end date. It’s an ongoing hygiene exercise
that requires continuous attention and focus,” says Ennamli. “You don’t have
to build an army if you did the initial work right, just a diverse team of
experts that understand the business dynamics and have foundational data
knowledge.” McKesson’s Thirunagalingam warns that it’s also possible to start
from the wrong end, ignoring the needs of certain key stakeholders until late
in the game. The result is resistance to adoption of the solution and
governance policies misaligned with the business’s operational requirements.
... “Do a bit and then build up.
Make things simple at first [to] quickly deliver business value, such as
increasing data accuracy or [enabling] more effective compliance,” says
Thirunagalingam. “Promote accountability by embedding governance into business
outcomes and encouraging ownership of data stewardship among all employees.” BSI
Americas’s Barlow says some organizations don’t understand how much data they
possess, which can hamper the implementation of an effective data management
program. Similarly, they may not fully grasp what regulations they must comply
with or what data is specifically collected.
Boost Your Website Core Web Vitals Through DevOps Best Practices
Integrating automation and performance testing is essential for making Core
Web Vitals SEO a natural part of the DevOps workflow. This includes the
implementation of automated performance tests in the CI/CD pipeline after each
code change to detect issues early on. CI/CD pipelines enable rapid testing
and deployment with performance checks. Load testing enables the replication
of high-traffic conditions, uncovering bottlenecks and ensuring the site can
scale for spikes. Similarly, performance budgeting, with goals for metrics
such as page speed, allows teams to set automated tests and avoid
degradation. A/B testing enables teams to test new features side-by-side,
seeing how they affect Core Web Vitals before deployment. With these automated
flows, teams reliably deliver quality code, ensuring performance is always a
consideration and never an afterthought. ... Collaboration among DevOps,
developers and SEO experts is required to optimize Core Web Vitals. Each brings
its own set of skills, and when they collaborate they can form a solid plan:
DevOps and Developers: Developers construct the site, and DevOps
ensures its proper deployment. Communicating frequently is the secret to
catching performance problems and making sure new code doesn’t slow down the
site.
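A performance budget of the kind described above can be as simple as a dictionary of thresholds checked in CI after each build. The limits below follow the commonly cited “good” Core Web Vitals bounds (2.5 s LCP, 0.1 CLS, 200 ms INP); the measurements themselves would come from a tool such as Lighthouse, which this sketch only stubs with a hard-coded dict:

```python
# CI gate sketch: fail the build when measured metrics exceed the budget.
BUDGET = {"lcp_ms": 2500, "cls": 0.1, "inp_ms": 200}

def check_budget(measured: dict) -> list[str]:
    """Return a list of violations; an empty list means the build passes."""
    return [
        f"{metric}: {measured[metric]} exceeds budget {limit}"
        for metric, limit in BUDGET.items()
        if measured.get(metric, 0) > limit
    ]

# Stand-in for numbers a real pipeline would pull from a Lighthouse run:
violations = check_budget({"lcp_ms": 3100, "cls": 0.05, "inp_ms": 180})
```

In a pipeline, a non-empty violation list would fail the job, so a regression in page speed blocks the merge instead of landing silently.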
Mastering Kubernetes in the Cloud: A Guide to Cloud Controller Manager

The main benefit of Cloud Controller Manager is that it offers a simple way
for Kubernetes to interact with cloud provider APIs without requiring any
special configuration or code implementation on the part of Kubernetes users.
Cluster admins can simply choose which cloud they need to integrate with, then
enable the appropriate Cloud Controller Manager. In addition, from the
perspective of the Kubernetes project, Cloud Controller Manager is
advantageous because it separates cloud-specific compatibility logic into a
distinct component. Rather than building support for each cloud platform's
APIs directly into the Kubernetes control plane, Cloud Controller Manager uses
a plugin architecture that allows the various cloud providers to write the
logic necessary for Kubernetes to integrate with their APIs, then make it
available to Kubernetes users as a component that the users can optionally
enable. This approach makes it easy for cloud providers to update the
compatibility layer as needed in order to keep it in sync with their APIs. ...
If you're running Kubernetes on bare-metal servers that you are managing
yourself, Cloud Controller Manager is not necessary because Kubernetes can
interact with nodes and other resources directly, without having to use
special APIs.
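In practice, “enabling the appropriate Cloud Controller Manager” means telling the kubelet that cloud-specific logic lives outside the core, then deploying the provider’s CCM in-cluster. A rough sketch (the manifest filename is illustrative; each provider documents its own deployment mechanism):

```shell
# On each node: run the kubelet with the external cloud provider flag,
# so in-tree code leaves node, route, and load-balancer logic to the CCM.
kubelet --cloud-provider=external

# In the cluster: deploy the provider's Cloud Controller Manager.
# The manifest name below is a placeholder; use your provider's docs.
kubectl apply -f cloud-controller-manager.yaml
```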
A cohesive & data-centric culture is essential for businesses to thrive in the AI-driven world
A cohesive, data-centric culture has become essential for businesses to thrive
in an AI-dominated world, enabling smarter, faster decisions. Accurate,
accessible, and well-managed data across the organisation lets it step away
from guesswork and base decisions on reliable information. Moreover, a
data-driven culture has long contributed to a more strategic approach to
business challenges. AI-powered solutions take this further by providing
real-time insights, predictive analytics, and automation, allowing companies
to rapidly analyse massive amounts of data, reveal hidden patterns, and
predict trends, so they act proactively instead of reactively. For instance,
studies have found that AI can improve forecast accuracy in the retail sector
by reducing errors by up to 50%. Other research suggests artificial
intelligence could lift the financial sector by 38% within 10 years, and some
reports predict that AI could help the healthcare sector save $150 billion
annually through greater efficiency and better decision-making. These figures
illustrate the advanced data culture AI enables, helping businesses act
proactively and make decisions based on facts.