Quote for the day:
"People may forget what you say, but
they won't forget how you made them feel." -- Mary Kay Ash

“Without standardized protocols, companies will not be able to reap the maximum
value from digital labor, or will be forced to build interoperability
capabilities themselves, increasing technical debt,” he says. Protocols are also
essential for AI security and scalability, because they will enable AI agents to
validate each other, exchange data, and coordinate complex workflows, Lerhaupt
adds. “The industry can build more robust and trustworthy multi-agent systems
that integrate with existing infrastructure, encouraging innovation and
collaboration instead of isolated, fragmented point solutions,” he says. ... ACP
is “a universal protocol that transforms the fragmented landscape of today’s AI
agents into inter-connected teammates,” writes Sandi Besen, ecosystem lead and
AI research engineer at IBM Research, in Towards Data Science. “This unlocks new
levels of interoperability, reuse, and scale.” ACP uses standard HTTP patterns
for communication, making it easy to integrate into production, compared to
JSON-RPC, which relies on more complex methods, Besen says. ... Agent2Agent,
supported by more than 50 Google technology partners, will allow IT leaders to
string a series of AI agents together, making it easier to get the specialized
functionality their organizations need, Ensono’s Piazza says. Both ACP and
Agent2Agent, with their focus on connecting AI agents, are complementary
protocols to the model-centric MCP, their creators say.
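
As a rough sketch of the contrast Besen draws, the example below sends the
same request to a hypothetical agent host first as a plain HTTP/REST call and
then wrapped in a JSON-RPC envelope; the host, routes, and field names are
invented for illustration and are not taken from the ACP or Agent2Agent
specifications.

```python
import requests  # third-party HTTP client (pip install requests)

AGENT_URL = "https://agents.example.com"  # hypothetical agent host

# REST-style call: the operation is expressed by the route and the HTTP verb,
# so ordinary gateways, caches, and load balancers can reason about it.
rest_response = requests.post(
    f"{AGENT_URL}/runs",
    json={"agent": "summarizer", "input": "Summarize the Q3 report."},
    timeout=30,
)
print(rest_response.status_code, rest_response.json())

# JSON-RPC style: every call targets one endpoint, and the method name,
# parameters, and request id travel inside a standard envelope.
rpc_response = requests.post(
    f"{AGENT_URL}/rpc",
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "runs/create",
        "params": {"agent": "summarizer", "input": "Summarize the Q3 report."},
    },
    timeout=30,
)
print(rpc_response.json())
```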

“We noticed that some uncertainty models tend to be overconfident, even when
the actual error in prediction is high,” said Bilbrey Pope. “This is common
for most deep neural networks. But a model trained with SNAP gives a metric
that mitigates this overconfidence. Ideally, you’d want to look at both
prediction uncertainty and training data uncertainty to assess your overall
model performance.” ... “AI should be able to accurately detect its knowledge
boundaries,” said Choudhury. “We want our AI models to come with a confidence
guarantee. We want to be able to make statements such as ‘This prediction
provides 85% confidence that catalyst A is better than catalyst B, based on
your requirements.’” In their published study, the researchers chose to
benchmark their uncertainty method with one of the most advanced foundation
models for atomistic materials chemistry, called MACE. The researchers
assessed how well the model is trained to calculate the energy of specific
families of materials. These calculations are important to understanding how
well the AI model can approximate the more time- and energy-intensive methods
that run on supercomputers. The results show what kinds of simulations can be
calculated with confidence that the answers are accurate.
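
The kind of statement Choudhury wants models to make can be pictured with a
small calculation: given a prediction and an uncertainty for two candidate
catalysts, the probability that one truly outperforms the other follows from
treating their difference as a normal distribution. The numbers below are
invented for illustration, assume lower predicted energy is better, and do
not come from the MACE benchmark.

```python
from math import sqrt

from scipy.stats import norm  # normal distribution utilities

# Hypothetical model outputs: predicted energy (lower is better here) and the
# model's estimated uncertainty (one standard deviation) for each candidate.
pred_a, sigma_a = -1.42, 0.08   # catalyst A
pred_b, sigma_b = -1.30, 0.10   # catalyst B

# If the errors are independent and roughly Gaussian, the difference A - B is
# also Gaussian; the chance that A is genuinely better is the probability
# that this difference is below zero.
diff_mean = pred_a - pred_b
diff_sigma = sqrt(sigma_a**2 + sigma_b**2)
p_a_better = norm.cdf(0.0, loc=diff_mean, scale=diff_sigma)

print(f"P(catalyst A better than B) ~ {p_a_better:.0%}")  # roughly 83%
```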

Ironically, integration meant to boost efficiency can stifle innovation. Once a
complex web of AI-interconnected systems exists, adding tools or modifying
processes becomes a major architectural undertaking, not plug-and-play. It
requires understanding interactions with central AI logic, potentially needing
complex model re-training, integration point redevelopment, and extensive
regression testing to avoid destabilisation. ... When AI integrates and
automates decisions and workflows across systems based on learned patterns, it
inherently optimises for the existing or dominant processes observed in the
training data. While efficiency is the goal, there’s a tangible risk of
inadvertently enforcing uniformity and suppressing valuable diversity in
approaches. Different teams might have unique, effective methods deviating
from the norm. An AI trained on the majority might flag these as errors, subtly
discouraging creative problem-solving or context-specific adaptations. ...
Feeding data from multiple sensitive systems (CRM, HR, finance, and
communications) into central AI dramatically increases the scope and
sensitivity of data processed and potentially exposed. Each integration point
is another vector for data leakage or unauthorised access. Sensitive customer,
employee, and financial data may flow across more boundaries and be aggregated
in new ways, increasing the surface area for breaches or misuse.

A minuscule delay (measurable in API response times) in processing API
requests can be as painful to a customer as a major outage. User behavior and
expectations have evolved, and performance standards need to keep up.
Traditional API monitoring tools are stuck in a binary paradigm of up versus
down, despite the fact that modern, cloud native applications live in complex,
distributed ecosystems. ... Measuring performance from multiple locations
provides a more balanced and realistic view of user experience and can help
uncover metrics you need to monitor, like location-specific latency: What’s
fast in San Francisco might be slow in New York and terrible in London. ...
The real value of IPM comes from how its core strengths, such as proactive
synthetic testing, global monitoring agents, rich analytics with
percentile-based metrics and experience-level objectives, interact and
complement each other, Vasiliou told me. “IPM can proactively monitor single
API URIs [uniform resource identifiers] or full API multistep transactions,
even when users are not on your site or app. Many other monitors can also do
this. It is only when you combine this with measuring performance from
multiple locations, granular analytics and experience-level objectives that
the value of the whole is greater than the sum of its parts,” Vasiliou
said.
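
The percentile-based, per-location metrics Vasiliou describes are easy to
compute once latency samples are grouped by probe location. The sketch below
uses made-up latencies for three hypothetical locations to show why a p95 per
region tells a different story than a single global average.

```python
import statistics

# Hypothetical synthetic-probe latencies in milliseconds, grouped by location.
samples = {
    "san_francisco": [48, 52, 50, 55, 61, 49, 53],
    "new_york":      [95, 102, 88, 110, 97, 93, 105],
    "london":        [180, 240, 210, 520, 195, 230, 205],
}

def p95(values):
    """95th-percentile latency (the last of 20 quantile cut points)."""
    return statistics.quantiles(values, n=20)[-1]

for location, latencies in samples.items():
    print(f"{location:>14}: mean={statistics.mean(latencies):6.1f} ms  "
          f"p95={p95(latencies):6.1f} ms")
```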

“Without a robust identity model, agents can’t truly act autonomously or
securely,” says the post. “The MCP-I (I for Identity) specification addresses
this gap – introducing a practical, interoperable approach to agentic
identity.” Vouched also offers its turnkey SaaS Vouched MCP Identity Server,
which provides easy-to-integrate APIs and SDKs for enterprises and developers
to embed strong identity verification into agent systems. While the Agent
Reputation Directory and MCP-I specification are open and free to the public,
the MCP Identity Server is available as a commercial offering. “Thinking
through strong identity in advance is critical to building an agentic future
that works,” says Peter Horadan, CEO of Vouched. “In some ways we’ve seen this
movie before. For example, when our industry designed email, they never
anticipated that there would be bad email senders. As a result, we’re still
dealing with spam problems 50 years later.” ... An early slide outlining
definitions tells us that AI agents are ushering in a new definition of the
word “tools,” which he calls “one of the big changes that’s happening this
year around agentic AI, giving the ability to LLMs to actually do and act with
permission on behalf of the user, interact with third-party APIs,” and so on.
Tools aside, what are the
challenges for agentic AI? “The biggest one is security,” he says.
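
One way to picture "tools with permission" is a thin gate that verifies the
agent's identity and the user's grant before any third-party API is invoked.
Everything below (the function names, the token check, the permission table)
is a hypothetical sketch, not the MCP-I specification or Vouched's SDK.

```python
# Hypothetical gate between an agent's requested tool call and the API it
# wants to reach; none of these names come from MCP-I or Vouched.

ALLOWED_TOOLS = {                 # permissions the user has explicitly granted
    "agent-7f3a": {"calendar.read", "email.draft"},
}

def verify_agent_identity(agent_id: str, token: str) -> bool:
    """Stand-in for a real identity check (signature or attestation)."""
    return token == f"signed-token-for-{agent_id}"  # placeholder logic only

def call_tool(agent_id: str, token: str, tool: str, payload: dict) -> dict:
    if not verify_agent_identity(agent_id, token):
        raise PermissionError("agent identity could not be verified")
    if tool not in ALLOWED_TOOLS.get(agent_id, set()):
        raise PermissionError(f"user has not granted '{tool}' to {agent_id}")
    # A real system would now invoke the third-party API on the user's behalf.
    return {"tool": tool, "status": "executed", "payload": payload}

print(call_tool("agent-7f3a", "signed-token-for-agent-7f3a",
                "calendar.read", {"range": "next_week"}))
```
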
Optimistic cybersecurity involves effective NHI (non-human identity) management that reduces risk,
improves regulatory compliance, enhances operational efficiency and provides
better control over access management. This management strategy goes beyond
point solutions such as secret scanners, offering comprehensive protection
throughout the entire lifecycle of these identities. ... Furthermore, a
proactive attitude towards cybersecurity can lead to potential cost savings by
automating processes such as secrets rotation and NHIs decommissioning. By
utilizing optimistic cybersecurity strategies, businesses can transform their
defensive mechanisms, preparing for a new era in cyber defense. By integrating
Non-Human Identities and Secrets Management into their cloud security control
strategies, organizations can fortify their digital infrastructure,
significantly reducing security breaches and data leaks. ... Implementing
an optimistic cybersecurity approach is no less than a transformation in
perspective. It involves harnessing the power of technology and human
ingenuity to build a resilient future. With optimism at its core,
cybersecurity measures can become a beacon of hope rather than a looming
threat. By welcoming this new era of cyber defense with open arms,
organizations can build a secure digital environment where NHIs and their
secrets operate seamlessly, playing a pivotal role in enhancing overall
cybersecurity.
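
Automating secrets rotation for non-human identities, one of the cost-saving
steps mentioned above, typically amounts to a scheduled sweep over an
inventory of machine identities. The sketch below is generic and
hypothetical, with an in-memory dictionary standing in for a real secrets
manager.

```python
import secrets
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # rotation policy: no secret older than 90 days

# Hypothetical inventory of non-human identities and their current secrets.
nhi_store = {
    "ci-pipeline": {"secret": "old-1",
                    "rotated": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    "billing-bot": {"secret": "old-2",
                    "rotated": datetime.now(timezone.utc)},
    "etl-service": {"secret": "old-3",
                    "rotated": datetime(2023, 11, 20, tzinfo=timezone.utc)},
}

def rotate_stale_secrets(store: dict, max_age: timedelta) -> list[str]:
    """Replace any secret older than policy allows; return rotated identities."""
    rotated = []
    now = datetime.now(timezone.utc)
    for identity, record in store.items():
        if now - record["rotated"] > max_age:
            record["secret"] = secrets.token_urlsafe(32)  # issue new credential
            record["rotated"] = now
            rotated.append(identity)
    return rotated

print("rotated:", rotate_stale_secrets(nhi_store, MAX_AGE))
```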

The data reveals a persistent reliance on human action for tasks that should
be automated across the identity security lifecycle. 41% of end users still
share or update passwords manually, using insecure methods like spreadsheets,
emails, or chat tools. These credentials are rarely updated or monitored, increasing the
likelihood of credential misuse or compromise. Nearly 89% of organizations
rely on users to manually enable MFA in applications, despite MFA being one of
the most effective security controls. Without enforcement, protection becomes
optional, and attackers know how to exploit that inconsistency. 59% of IT
teams handle user provisioning and deprovisioning manually, relying on
ticketing systems or informal follow-ups to grant and remove access. These
workflows are slow, inconsistent, and easy to overlook—leaving organizations
exposed to unauthorized access and compliance failures. ... According to the
Ponemon Institute, 52% of enterprises have experienced a security breach
caused by manual identity work in disconnected applications. Most of them had
four or more such breaches. The downstream impact was tangible: 43% reported customer loss,
and 36% lost partners. These failures are predictable and preventable, but
only if organizations stop relying on humans to carry out what should be
automated. Identity is no longer a background system. It's one of the primary
control planes in enterprise security.
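
The provisioning gap behind those figures is essentially a reconciliation
problem: compare the authoritative roster with the accounts that actually
exist in each application, then remove what should not be there. The sketch
below is a generic, hypothetical illustration rather than any vendor's
workflow.

```python
# Hypothetical reconciliation between an authoritative HR roster and the
# accounts present in one downstream application.

active_employees = {"alice", "bob", "chen"}             # source of truth (HR)
app_accounts = {"alice", "bob", "chen", "dana", "eve"}  # accounts in the app

def reconcile(authoritative: set[str], actual: set[str]) -> tuple[set[str], set[str]]:
    """Return (accounts to deprovision, accounts still to provision)."""
    to_remove = actual - authoritative  # left the company, access remains
    to_create = authoritative - actual  # hired, but never provisioned
    return to_remove, to_create

stale, missing = reconcile(active_employees, app_accounts)
print("deprovision:", sorted(stale))  # ['dana', 'eve']
print("provision:", sorted(missing))  # []
```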

“Attackers have leaned more heavily on vulnerability exploitation to get in
quickly and quietly,” said Dray Agha, senior manager of security operations at
managed detection and response vendor Huntress. “Phishing and stolen credentials
play a huge role, however, and we’re seeing more and more threat actors target
identity first before they probe infrastructure.” James Lei, chief operating
officer at application security testing firm Sparrow, added: “We’re seeing a
shift in how attackers approach critical infrastructure in that they’re not just
going after the usual suspects like phishing or credential stuffing, but
increasingly targeting vulnerabilities in exposed systems that were never meant
to be public-facing.” ... “Traditional methods for defense are not resilient
enough for today’s evolving risk landscape,” said Andy Norton, European cyber
risk officer at cybersecurity vendor Armis. “Legacy point products and siloed
security solutions cannot adequately defend systems against modern threats,
which increasingly incorporate AI. And yet, too few organizations are
successfully adapting.” Norton added: “It’s vital that organizations stop
reacting to cyber incidents once they’ve occurred and instead shift to a
proactive cybersecurity posture that allows them to eliminate vulnerabilities
before they can be exploited.”

An important component of an organization’s data management strategy is
controlling access to the data to prevent data corruption, data loss, or
unauthorized modification of the information. The fundamentals of data access
management are especially important as the first line of defense for a company’s
sensitive and proprietary data. Data access management protects the privacy of
the individuals to whom the data pertains, while also ensuring the organization
complies with data protection laws. It does so by preventing unauthorized people
from accessing the data, and by ensuring those who need access can reach it
securely and in a timely manner. ... Appropriate data access controls improve
the efficiency of business processes by limiting the number of actions an
employee can take. This helps simplify user interfaces, reduce database errors,
and automate validation, accuracy, and integrity checks. By restricting the
number of entities that have access to sensitive data, or permission to alter or
delete the data, organizations reduce the likelihood of errors being introduced
while enhancing the effectiveness of their real-time data processing activities.
... Becoming a data-driven organization requires overcoming several obstacles,
such as data silos, fragmented and decentralized data, lack of visibility into
security and access-control measures currently in place, and a lack of
organizational memory about how existing data systems were designed and
implemented.
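
A minimal way to picture the access controls described above is a
role-to-permission mapping consulted before every sensitive operation; the
roles and permissions below are invented for illustration.

```python
# Hypothetical role-based access check of the kind described above.

ROLE_PERMISSIONS = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
    "admin":    {"read", "write", "delete"},
}

def check_access(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

for role, action in [("analyst", "write"), ("admin", "delete")]:
    verdict = "allowed" if check_access(role, action) else "denied"
    print(f"{role} -> {action}: {verdict}")
```
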
Generative AI is making the most impact in areas like Marketing, Software
Engineering, Customer Service, and Sales. These functions benefit from AI’s
ability to process vast amounts of data quickly. On the other hand, Legal and HR
departments see less GenAI adoption, as these areas require high levels of
accuracy, predictability, and human judgment. ... Business and tech leaders must
prioritize business value when choosing AI use cases, focus on AI literacy and
responsible AI, nurture cross-functional collaboration, and stress continuous
learning to achieve successful outcomes. ... Leaders need to clearly outline and
share a vision for responsible AI, establishing straightforward principles and
policies that address fairness, bias reduction, ethics, risk management,
privacy, sustainability, and compliance with regulations. They should also
pinpoint the risks associated with Generative AI, such as privacy concerns,
security issues, hallucinations, explainability, and legal compliance
challenges, along with practical ways to mitigate these risks. When choosing and
prioritizing use cases, it’s essential to consider responsible AI by filtering
out those that carry unacceptable risks. Each Generative AI use case should have
a designated champion responsible for ensuring that development and usage align
with established policies.