Service as Software: How AI Agents Are Transforming SaaS
SaaS has empowered users across industries by providing the tools and intelligence
to make informed decisions. But it has always stopped short of execution.
Lawyers, radiologists, tax consultants, and other service providers rely on SaaS
to make decisions, but they remain responsible for the last-mile
activity. Service as Software closes this gap. Agents powered by capable
LLMs and integrated with existing APIs — and even SaaS platforms — don’t just
inform users; they take action on their behalf. Instead of providing tools for
human service providers, Service as Software directly delivers outcomes. This
transformation is more than technological — it’s economic. ... Enterprises
considering transitioning from SaaS to Service as Software often begin by
examining which tasks would yield the most value from automation. These tasks
are typically repetitive, time-sensitive, or error-prone when conducted
manually. Introducing an intelligent agent that can monitor data streams,
evaluate decision rules and initiate final actions may require augmenting
existing infrastructure — for instance, adding webhooks, implementing new API
endpoints, or integrating a rules engine.
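To make that concrete, here is a minimal sketch of such an agent loop in Python. The endpoint URLs, the decision rule, and the webhook are hypothetical placeholders, not details from the article; a real deployment would add authentication, error handling, and human oversight.

```python
import time
import requests  # third-party HTTP client, assumed available

METRICS_API = "https://example.com/api/metrics"   # hypothetical data-stream endpoint
ACTION_WEBHOOK = "https://example.com/hooks/act"  # hypothetical webhook that executes the outcome

def evaluate_rules(reading: dict) -> bool:
    """Toy decision rule: act when a monitored value crosses a threshold."""
    return reading.get("value", 0) > 100

def agent_loop(poll_seconds: int = 60) -> None:
    """Monitor a data stream, evaluate decision rules, and initiate the final action."""
    while True:
        reading = requests.get(METRICS_API, timeout=10).json()
        if evaluate_rules(reading):
            # Instead of alerting a human operator, the agent delivers the outcome itself.
            requests.post(ACTION_WEBHOOK, json={"trigger": reading}, timeout=10)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    agent_loop()
```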
Anthropomorphizing AI: Dire consequences of mistaking human-like for human have already emerged
Perhaps the most dangerous aspect of anthropomorphizing AI is how it masks the
fundamental differences between human and machine intelligence. While some AI
systems excel at specific types of reasoning and analytical tasks, the large
language models (LLMs) that dominate today’s AI discourse — and that we focus on
here — operate through sophisticated pattern recognition. These systems process
vast amounts of data, identifying and learning statistical relationships between
words, phrases, images and other inputs to predict what should come next in a
sequence. When we say they “learn,” we’re describing a process of mathematical
optimization that helps them make increasingly accurate predictions based on
their training data.
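That "predict what comes next" process can be made concrete with a toy sketch. The bigram counter below is a deliberate oversimplification, not an LLM: its "learning" is nothing more than accumulating statistics from training text and using them to predict the most likely next word.

```python
from collections import Counter, defaultdict

def train(corpus):
    """'Learning' here is just counting how often each word follows another."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the statistically most likely next word seen in training."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> 'cat', the most frequent follower in the data
```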
... One critical area where anthropomorphizing creates risk is content generation and copyright compliance. When businesses view AI as
capable of “learning” like humans, they might incorrectly assume that
AI-generated content is automatically free from copyright concerns. ... One of
the most concerning costs is the emotional toll of anthropomorphizing AI. We see
increasing instances of people forming emotional attachments to AI chatbots,
treating them as friends or confidants.
Building Secure Software - Integrating Security in Every Phase of the SDLC
A common problem in software development is that security-related activities are left out or deferred until the final testing phase, which comes too late in the SDLC, after most of the critical design and implementation work has been completed. Moreover, the security checks performed during the testing phase can be superficial, limited to scanning and penetration testing, which might not reveal more complex security issues. By adopting the shift-left principle, teams can detect and fix security flaws early, save money that would otherwise be spent on costly rework, and stand a better chance of avoiding delays going into production. Integrating security into the SDLC should look like weaving rather than
stacking. There is no “security phase,” but rather a set of best practices and
tools that should be included within the existing phases of the SDLC. A Secure
SDLC requires adding security review and testing at each software development
stage, from design to development to deployment and beyond. From initial
planning to deployment and maintenance, embedding security practices ensures the
creation of robust and resilient software.
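As one illustration of that weaving, the sketch below runs two widely used open-source scanners as ordinary pipeline steps: Bandit for static analysis during development and pip-audit for dependency checks at build time. The script and stage layout are our own example, assuming both tools are installed, not a prescription from the article.

```python
import subprocess
import sys

# Security checks woven into existing pipeline stages rather than deferred
# to a final "security phase". Both tools are assumed to be installed.
CHECKS = [
    ("static analysis (development stage)", ["bandit", "-r", "src"]),
    ("dependency audit (build stage)", ["pip-audit"]),
]

def run_checks():
    failures = 0
    for stage, cmd in CHECKS:
        print(f"--> {stage}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failures += 1  # fail fast, while fixes are still cheap
    return failures

if __name__ == "__main__":
    sys.exit(1 if run_checks() else 0)
```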
Making AI greener starts with smarter data center design
There’s been a lot of talk about the off-grid energy investments of
hyperscalers. But the energy efficiency of AI infrastructure also has a big
role to play. Nokia provides networking connectivity inside and between data
centers, as well as between end users and data center applications.
Understanding this intricate web is important as it’s not just about making
the processes inside a data center faster and more efficient. It’s about
making the entire journey between somebody making an AI request—and getting
back a response—quick, secure, and more energy efficient. ... Energy,
performance, and cost considerations may prompt some cloud providers to
build their data centers in remote locations with access to clean energy,
passive cooling, and cheaper and more plentiful real estate. However, data
sovereignty laws, security concerns, and the ultra-low latency requirements
of industrial applications may see a move toward more distributed cloud
computing, with AI workloads moving closer to the end user. This would
likely lead to more regional, metropolitan, and edge data centers, with some
businesses and organizations opting for on-site data centers for
mission-critical functions.
We may, in fact, see both trends at the
same time.
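A back-of-the-envelope calculation shows why the length of that journey matters. Assuming signals travel through optical fiber at roughly 200,000 km/s (about two thirds of the speed of light), propagation delay alone puts a floor under round-trip latency:

```python
FIBER_SPEED_KM_S = 200_000  # approximate signal speed in optical fiber (~2/3 c)

def round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip latency from propagation delay alone."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

for km in (50, 500, 5000):  # edge / regional / remote hyperscale data center
    print(f"{km:>5} km -> >= {round_trip_ms(km):.1f} ms round trip")
```

Switching, queuing, and processing only add to this floor, which is why ultra-low-latency industrial workloads favor edge or on-site placement.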
Employees Enter Sensitive Data Into GenAI Prompts Far Too Often
"Utilizing AI for the sake of using AI is destined to fail," said Kris
Bondi, CEO and co-founder of Mimoto, in an emailed statement to Dark
Reading. "Even if it gets fully implemented, if it isn't serving an
established need, it will lose support when budgets are eventually cut or
reappropriated." Though Kowski believes that not incorporating GenAI is
risky, success can still be achieved, he notes. "Success without AI is still
achievable if a company has a compelling value proposition and strong
business model, particularly in sectors like engineering, agriculture,
healthcare, or local services where non-AI solutions often have greater
impact," he said. If organizations do want to pursue incorporating GenAI
tools but want to mitigate the high risks that come along with it, the
researchers at Harmonic have recommendations on how to best approach this.
The first is to move beyond "block strategies" and implement effective AI governance: deploying systems to track input into GenAI tools in real time; identifying which plans are in use and ensuring that employees use paid plans for their work rather than plans that use inputted data to train systems; gaining full visibility over these tools; classifying sensitive data; creating and enforcing workflows; and training employees on the risks of GenAI and best practices for its responsible use.
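As a sketch of what tracking input into GenAI tools in real time might look like, the snippet below screens prompts for a few obviously sensitive patterns before they leave the organization. The patterns and blocking policy are illustrative assumptions; a production deployment would rely on a proper data-classification engine rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real deployments need far richer classifiers.
SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in a prompt, if any."""
    return [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

prompt = "Summarize this customer record: SSN 123-45-6789, card 4111 1111 1111 1111"
hits = screen_prompt(prompt)
if hits:
    print(f"Blocked: prompt contains {', '.join(hits)}")  # log and route to governance workflow
```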
What is Blue Ocean Strategy? 3 Key Ways to Build a Business in an Uncontested Market
One of the biggest surprises in tackling a neglected market segment is
realizing that your future customers might not even know they need you. They
may sense a vague discomfort or carry a subconscious worry, but they haven't
articulated the problem in a way that translates into action. In my field,
most people didn't fully appreciate how complex certain end-of-life tasks
could become — until they found themselves in the middle of a crisis they
never prepared for. Simply presenting a solution and hoping people will
connect the dots doesn't work when the underlying problem is hidden or
poorly understood. Education became my most potent tool. ... Building
momentum in a market with no clear precedent means learning to paddle in
still waters. I needed to constantly fine-tune the product based on
authentic customer feedback, invest the time and effort to educate potential
users so they could recognize the value of what I was offering, and craft a
holistic experience that viewed their challenges from multiple angles. These
three strategies became the bedrock of my approach to Blue Ocean
markets.
Secure AI? Dream on, says AI red team
The first step in an AI red teaming operation is to determine which
vulnerabilities to target, they said. They suggest: “starting from potential
downstream impacts, rather than attack strategies, makes it more likely that
an operation will produce useful findings tied to real world risks. After
these impacts have been identified, red teams can work backwards and outline
the various paths that an adversary could take to achieve them.” ... The two approaches, the authors said, are distinct yet “both useful and can even be complementary. In particular, benchmarks make it easy to compare the
performance of multiple models on a common dataset. AI red teaming requires
much more human effort but can discover novel categories of harm and probe
for contextualized risks.” ... The bottom line here: RAI harms are more ambiguous than security vulnerabilities, and that stems from “fundamental differences between AI systems and traditional software.” Most AI safety research, the authors noted, focuses on adversarial users who deliberately break guardrails, when in truth, they maintained, benign users who accidentally generate harmful content are just as important, if not more so.
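The "work backwards from impacts" guidance can be pictured as a simple planning structure. The sketch below is our own illustration with invented impact and path names, including paths a benign user might stumble into accidentally:

```python
# Hypothetical mapping from downstream impacts to candidate adversary (or benign-user) paths.
IMPACT_PATHS = {
    "leak of customer PII": ["prompt injection via uploaded docs", "over-broad tool permissions"],
    "harmful content to benign users": ["ambiguous prompt phrasing", "unsafe completion of partial text"],
}

def build_test_plan(impacts):
    """Start from impacts, then enumerate the paths that could lead to each one."""
    return [(impact, path) for impact, paths in impacts.items() for path in paths]

for impact, path in build_test_plan(IMPACT_PATHS):
    print(f"probe '{path}' -> targets impact '{impact}'")
```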
New AI Architectures Could Revolutionize Large Language Models
For context, transformer architecture, the technology which gave ChatGPT the
'T' in its name, is designed for sequence-to-sequence tasks such as language
modeling, translation, and image processing. Transformers rely on “attention mechanisms,” which weigh how important each element is in a given context, to model dependencies between input tokens, enabling them to
process data in parallel rather than sequentially like so-called recurrent
neural networks—the dominant technology in AI before transformers appeared.
This technology gave models an understanding of context and marked a before-and-after moment in AI development.
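To make the mechanism concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation behind those attention mechanisms (toy dimensions and random vectors, without the learned projections or multi-head machinery of a real transformer):

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: every token attends to every other token in parallel."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise relevance of tokens
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the context
    return weights @ V                              # context-weighted mixture of values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))         # 4 toy token embeddings of width 8
out = attention(tokens, tokens, tokens)  # self-attention: Q = K = V
print(out.shape)                         # (4, 8): one contextualized vector per token
```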
... Google Research's Titans architecture takes a different approach to improving AI adaptability. Instead of
modifying how models process information, Titans focuses on changing how
they store and access it. The architecture introduces a neural long-term
memory module that learns to memorize at test time, similar to how human
memory works. ... Overall, the era of AI companies bragging over the sheer
size of their models may soon be a relic of the past. If this new generation
of neural networks gains traction, then future models won’t need to rely on
massive scales to achieve greater versatility and performance.
How to Leverage Network Segmentation for Hospitality Sector PCI SSF Compliance
Network segmentation is the process of dividing a computer network into isolated segments or subnetworks, each protected by security controls such as firewalls and access restrictions that limit traffic flow between segments. This isolation helps contain potential security breaches, preventing them from spreading across the entire network. ... In the context of PCI SSF compliance, network
segmentation can help hospitality businesses protect sensitive payment card
data by limiting access to it. Isolating the Cardholder Data Environment (CDE) from the rest of the network reduces the scope of PCI SSF compliance and enhances the organization's overall security posture. ... By isolating sensitive data, network
segmentation reduces the risk of unauthorized access and data breaches. It
creates multiple layers of defense, making it more difficult for attackers
to reach critical systems. This approach also limits the lateral movement of
threats, ensuring that a compromised system does not jeopardize the entire
network.
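As a toy illustration of the idea, the snippet below models a default-deny policy between segments. The segment names and allowed flows are invented for this example; in practice these rules live in firewalls and network ACLs, not application code.

```python
# Allowed (source segment, destination segment) pairs; everything else is denied.
ALLOWED_FLOWS = {
    ("payment-app", "cde"),  # only the payment application may reach the CDE
    ("corporate", "guest-wifi"),
}

def is_allowed(src_segment: str, dst_segment: str) -> bool:
    """Default-deny policy between segments, mirroring firewall rules."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(is_allowed("payment-app", "cde"))  # True
print(is_allowed("guest-wifi", "cde"))   # False: lateral movement into the CDE is blocked
```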
Overcoming Key Challenges in an AI-Centric Future
Much has been made of AI and its potential dangers in the hands of
attackers. It’s true—with the help of AI, launching an attack has never been
easier, and it’s likely just a matter of time until we witness a significant
AI-driven breach. That said, all is not lost. AI-specific security controls are already beginning to emerge, and as AI becomes more commonplace, newer and more advanced solutions will follow. ...
Regulations almost always lag behind innovation, and AI is no exception.
While a handful of AI regulations have begun to emerge around the world,
most organizations are currently taking matters into their own hands by
implementing dedicated AI policies to evaluate and control the AI services
they use. Right now, those initiatives are focused primarily on maintaining
data privacy and preventing AI from making critical errors. These AI safety
standards will continue to evolve and will likely be integrated into
existing security frameworks, including those put out by independent
advisory bodies. Regulators will almost certainly maintain a strong focus on
ethical considerations, creating guidelines that help define acceptable and
responsible use cases for AI capabilities.
Quote for the day:
“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki