AI Agents: The Intersection of Tool Calling and Reasoning in Generative AI
Building robust and reliable agents requires overcoming many different
challenges. When solving complex problems, an agent often needs to balance
multiple tasks at once, including planning, interacting with the right tools at
the right time, formatting tool calls properly, remembering outputs from
previous steps, avoiding repetitive loops, and adhering to guardrails that
protect the system from jailbreaks, prompt injections, and similar attacks. Too
many demands can easily overwhelm a single agent. This has led to a growing
trend in which what appears to the end user as one agent is, behind the scenes,
a collection of agents and prompts working together to divide and conquer the
task. This division allows subtasks to be handled in parallel by models and
agents tailored to each particular piece of the puzzle. It's here that models
with excellent tool calling capabilities come into play. While
tool-calling is a powerful way to enable productive agents, it comes with its
own set of challenges. Agents need to understand the available tools, select the
right one from a set of potentially similar options, format the inputs
accurately, call tools in the right order, and potentially integrate feedback or
instructions from other agents or humans.
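To make these challenges concrete, below is a minimal sketch of a tool-calling loop in Python. The `call_model` callback, the tool registry, and the decision format are illustrative assumptions rather than any particular framework's API; the loop shows tool selection, input validation, remembering previous outputs, and a step limit that guards against repetitive loops.

```python
import json

# Hypothetical tool registry: each tool has a description the model sees and
# a Python callable that does the actual work.
TOOLS = {
    "get_weather": {
        "description": "Return current weather for a city. Args: {'city': str}",
        "fn": lambda args: {"city": args["city"], "temp_c": 21},
    },
    "search_docs": {
        "description": "Search internal docs. Args: {'query': str}",
        "fn": lambda args: {"hits": [f"doc about {args['query']}"]},
    },
}

def run_agent(user_request, call_model, max_steps=5):
    """Minimal tool-calling loop.

    `call_model` is an assumed callback that wraps whatever LLM API is in use
    and returns either {"tool": name, "args": {...}} or {"answer": str}.
    """
    history = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):          # step limit: avoid repetitive loops
        decision = call_model(history, tool_specs={
            name: t["description"] for name, t in TOOLS.items()
        })
        if "answer" in decision:        # model is done; return the final answer
            return decision["answer"]
        name, args = decision["tool"], decision["args"]
        if name not in TOOLS:           # reject badly formatted tool calls
            history.append({"role": "system",
                            "content": f"Unknown tool: {name}"})
            continue
        result = TOOLS[name]["fn"](args)
        # Remember outputs from previous steps by appending them to history.
        history.append({"role": "tool", "name": name,
                        "content": json.dumps(result)})
    return "Step limit reached without a final answer."
```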
Transforming cloud security with real-time visibility
Addressing the visibility problem first enables security teams to understand
real risk and fix misconfigurations across the organization much faster. As an
example, we encounter many teams that face the same misconfiguration across
hundreds of assets owned by thousands of developers. Without the right
visibility into assets’ behavior, organizations have to go through every
individual team, explain the risk, check if their workload actually utilizes the
misconfiguration, and then configure it accordingly – essentially an impossible
task. With runtime insights, security teams immediately understand what specific
assets utilize the misconfigurations, which developers own them, and all the
relevant risk context around them. This turns what could be a six-month
project involving the whole R&D org into a simple task completed in a day
by a few individuals. ... One of the top challenges organizations
face is maintaining consistent compliance across various cloud environments,
especially when those environments are highly dynamic and deployed by multiple
stakeholders who don’t necessarily have the right expertise in the space. The
solution lies in taking a dual approach.
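As a rough illustration of the correlation described above, the sketch below joins misconfiguration findings with hypothetical runtime-usage and ownership data so that only findings on actively used assets, with a named owning team, surface for remediation. All field names and data are invented for illustration.

```python
# Hypothetical inputs: misconfiguration findings, runtime telemetry showing
# whether the risky configuration is actually exercised, and asset ownership.
findings = [
    {"asset": "s3-logs-bucket", "issue": "public-read ACL"},
    {"asset": "vm-batch-07", "issue": "open SSH to 0.0.0.0/0"},
]
runtime_usage = {"s3-logs-bucket": True, "vm-batch-07": False}
owners = {"s3-logs-bucket": "data-platform-team", "vm-batch-07": "batch-infra-team"}

def prioritize(findings, runtime_usage, owners):
    """Keep findings whose misconfiguration is used at runtime and attach the
    owning team, so remediation can be routed directly to a few individuals."""
    actionable = []
    for f in findings:
        if runtime_usage.get(f["asset"], False):
            actionable.append({**f, "owner": owners.get(f["asset"], "unknown")})
    return actionable

print(prioritize(findings, runtime_usage, owners))
# -> [{'asset': 's3-logs-bucket', 'issue': 'public-read ACL', 'owner': 'data-platform-team'}]
```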
Patrolling the Micro-Perimeter to Enhance Network Security
As companies move into industrial automation, remote retail sites, remote
engineering, etc., the systems and applications used by each company group may
need to be sequestered from corporate-wide employee access so that only those
users authorized to use a specific system or application can gain access. From a
network perspective, network segments become internal micro security perimeters
that surround these restricted-access systems and applications, making them
available only to the users and user devices authorized to use them.
Multi-factor authentication protocols strengthen user sign-ons, and network
monitoring and observability software polices all activity at each
micro-perimeter. The mission of a zero-trust network is to "trust no one": not
even company employees are given unlimited access to all network segments,
systems, and applications. This is in contrast to older security schemes that
limited security checks and monitoring to the external perimeter of the entire
enterprise network but did not apply security protocols to micro-segments
within that network.
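The sketch below illustrates, in Python, the kind of per-request check a zero-trust micro-perimeter performs. The request fields, segment names, and access lists are hypothetical; in a real deployment this information would come from the identity provider, the MFA service, and device-posture tooling.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_id: str
    mfa_verified: bool
    target_segment: str   # e.g. "plant-automation", "retail-pos"

# Assumed authorization data: which users and devices may enter which segment.
SEGMENT_ACL = {
    "plant-automation": {"users": {"alice"}, "devices": {"hmi-001"}},
    "retail-pos": {"users": {"bob"}, "devices": {"pos-17"}},
}

def authorize(req: AccessRequest) -> bool:
    """Zero-trust check at the micro-perimeter: every request is evaluated,
    even if it originates inside the corporate network."""
    acl = SEGMENT_ACL.get(req.target_segment)
    if acl is None:
        return False                       # unknown segment: deny by default
    if not req.mfa_verified:
        return False                       # multi-factor sign-on required
    return req.user in acl["users"] and req.device_id in acl["devices"]

print(authorize(AccessRequest("alice", "hmi-001", True, "plant-automation")))   # True
print(authorize(AccessRequest("alice", "laptop-9", True, "plant-automation")))  # False
```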
CIO intangibles: 6 abilities that set effective IT execs apart
Change leadership is different, and it’s very much a CIO-level skill, she says.
“Change leadership is inspiring and motivating you to want to make the change.
It’s much more about communication. It’s about navigating the different parts of
the organization. It’s co-leading.” It’s one thing, she says, for an IT leader
or a change management team to tell users, “This is what we’re doing and why
we’re doing it.” It’s at a whole other level to have a business leader say, “Hey
team, we’re next. This is what we’re doing. This is why it’s important and here
are my expectations of you.” That’s what effective change leadership can
accomplish. ... For critical thinking, CIOs need another intangible skill: the
ability to ask the right questions. “It’s the whole idea of being more curious,”
says Mike Shaklik, partner and global head of CIO advisory at Infosys
Consulting. “The folks who can listen well, and synthesize while they listen,
ask better questions. They learn to expect better answers from their own people.
If you add intentionality to it, that’s a game-changer.” ... “In today’s
environment, a lot of technology work does not happen inside of the IT
organization,” Struckman says. “Yet leadership expects the CIO to understand how
it all makes sense together.”
Building an Internal Developer Platform: 4 Essential Pillars
Infrastructure as Code (IaC) is the backbone of any modern cloud native
platform. It allows platform engineering teams to manage and provision
infrastructure (such as compute, storage and networking resources)
programmatically using code. IaC ensures that infrastructure definitions are
version-controlled, reusable and consistent across different environments. ...
Security, governance and compliance are integral to managing modern
infrastructure, but manual policy enforcement doesn’t scale well and can
create bottlenecks. Policy as Code (PaC) helps solve this challenge by
programmatically defining governance, security and operational policies. These
policies are automatically enforced across cloud environments, Kubernetes
clusters and CI/CD pipelines. Essentially, they “shift down security” into the
platform. ... GitOps is an operational model where all system configurations,
including application deployments, infrastructure and policies, are managed
through Git repositories. By adopting GitOps, platform teams can standardize
how changes are made and ensure that the actual system state matches the
desired state defined in Git.
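As a minimal, hypothetical illustration of the Policy as Code pillar (not tied to OPA, Kyverno, or any specific engine), the sketch below encodes two governance rules as plain Python functions and evaluates resource definitions against them before deployment, the way a CI/CD gate might.

```python
# Hypothetical resource definitions, shaped loosely like what a pipeline
# might extract from IaC templates or Kubernetes manifests.
resources = [
    {"kind": "s3_bucket", "name": "audit-logs", "encrypted": False, "tags": {"owner": "sec"}},
    {"kind": "deployment", "name": "api", "replicas": 1, "tags": {}},
]

# Policies as code: each rule returns an error string, or None if compliant.
def require_encryption(r):
    if r["kind"] == "s3_bucket" and not r.get("encrypted", False):
        return f"{r['name']}: storage must be encrypted at rest"

def require_owner_tag(r):
    if "owner" not in r.get("tags", {}):
        return f"{r['name']}: missing required 'owner' tag"

POLICIES = [require_encryption, require_owner_tag]

def evaluate(resources):
    """Run every policy against every resource; a CI/CD gate would fail the
    pipeline if this list is non-empty."""
    return [msg for r in resources for p in POLICIES if (msg := p(r))]

if __name__ == "__main__":
    for violation in evaluate(resources):
        print("POLICY VIOLATION:", violation)
```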
Chief risk storyteller: How CISOs are developing yet another skill
Creating a compelling narrative is also important to bolster the case for
investment in the cybersecurity program, and it becomes especially important
when restructuring or starting a new program. Hughes estimates the base
set of requirements in the Center for Internet Security Controls Framework is
a $2 to $3 million expense. “That’s a massive expense, so that storytelling
and dialogue between you and the rest of the company to create that new,
forward expense is significant,” he says. However, just as some stories have
their skeptics, CISOs also need to be able to defend their risk story,
particularly when there are big dollars attached to it. De Lude has found it can
be helpful to stress test the story or presentation with challenge sessions.
“I might invite different people to a run through and explain the concept and
ask for potential objections to test and develop a robust narrative,” she
says. De Lude has found that drawing on the internal expertise of people with
strong communications skills can help CISOs learn how to project a story in a
way that’s compelling. “Having someone lend support who wasn’t a cyber expert
but knew how to really convey a strong message in all sorts of different ways
was a game changer,” she says.
The Disruptive Potential of On-Device Large Language Models
On-device personal AI assistants transform each device into a powerful
companion that mimics human interaction and executes complex tasks. These AI
assistants can understand context and learn about their owner's preferences,
allowing them to perform a wide range of activities — from scheduling
appointments to creative writing — even when offline. By operating directly on
the user's device, these AI assistants ensure privacy and fast response times,
making them indispensable for managing both routine and sophisticated tasks
with ease and intelligence. ... Voice control for devices is set to become
significantly more powerful and mainstream, especially with advancements in
on-device large language models. Companies like FlowVoice are already paving
the way, enabling near-silent voice typing on computers. ... On-device AI
therapists have the potential to become mainstream due to their ability to
offer users both privacy and responsive, engaging conversations. By operating
directly on the user's device, these AI therapists ensure that sensitive data
remains private and secure, minimizing the risk of breaches associated with
cloud-based services.
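As a rough sketch of what on-device inference looks like in practice, the example below assumes the llama-cpp-python bindings and a locally downloaded GGUF model; the model path and prompt format are placeholders. The point is that the prompt and response never leave the device.

```python
# A minimal sketch of on-device inference, assuming the llama-cpp-python
# bindings (pip install llama-cpp-python) and a locally downloaded GGUF model;
# the model path below is a placeholder, not a real file.
from llama_cpp import Llama

llm = Llama(model_path="./models/assistant.gguf", n_ctx=2048)  # runs locally, no network calls

def ask(prompt: str) -> str:
    """Both the prompt and the response stay on the device."""
    result = llm(
        f"User: {prompt}\nAssistant:",
        max_tokens=128,
        stop=["User:"],
    )
    return result["choices"][0]["text"].strip()

# Works offline: drafting text, scheduling notes, and similar tasks complete
# without sending any data to a cloud service.
print(ask("Draft a two-sentence reminder to reschedule Friday's appointment."))
```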
Why cloud computing is losing favour
There are various reasons behind this trend. “In the early days, cloud
repatriations were often a response to unsuccessful migrations; now they more
often reflect changes in market pricing,” says Adrian Bradley, head of cloud
transformation at KPMG UK. “The inflation of labour costs, energy prices and
the cost of the hardware underpinning AI are all driving up data centre fees.
For some organisations, repatriation changes the balance in the relative cost
and value of on-premise or hybrid architectures compared to public clouds.”
... There are risks that can come with cloud repatriation. James Hollins,
Azure presales solution architect at Advania, highlights the potential to
disrupt key services. “Building from scratch on-premises could be complex and
risky, especially for organisations that have been heavily invested in
cloud-based solutions,” he says. “Organisations accustomed to cloud-first
environments may need to acquire or retrain staff to manage on-premises
infrastructure, as they will have spent the last few years maintaining and
operating in a cloud-first world with a specific skillset.” Repatriation can
also lead to higher licensing costs for third-party software, which many
businesses do not anticipate or budget for, he adds.
Proactive Approaches to Securing Linux Systems and Engineering Applications
With AI taking the world by storm, it is more important than ever for you, as
an IT professional, to be vigilant and proactive about security
vulnerabilities. The rapid advancement of AI technologies introduces new
attack vectors and sophisticated threats: malicious actors can leverage AI
to automate and scale their attacks, exploiting vulnerabilities at
unprecedented speed and complexity and making traditional security measures
increasingly difficult to maintain. Your role in implementing these measures
is crucial and valued. ... Diligent patch management is critical for
maintaining the security and stability of Linux systems and applications.
Administrators play a vital role in this process, ensuring that patches are
applied promptly and correctly. ... Automation tools and centralized patch
management systems are invaluable for streamlining the patch deployment
process and reducing human error. These tools ensure that patches are applied
consistently across all endpoints, enhancing overall security and operational
efficiency. Administrators can patch the system and applications using
configuration management tools like Ansible and Puppet.
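A minimal sketch of centrally driven patching is shown below, assuming SSH key access to Debian/Ubuntu hosts; in practice a configuration management tool such as Ansible or Puppet would own this workflow, and the host names here are placeholders.

```python
import subprocess

HOSTS = ["web-01.example.internal", "db-01.example.internal"]  # placeholder inventory
PATCH_CMD = "sudo apt-get update && sudo apt-get -y upgrade"   # Debian/Ubuntu assumed

def patch_host(host: str) -> bool:
    """Apply pending package updates on one host over SSH; return True on success."""
    result = subprocess.run(
        ["ssh", host, PATCH_CMD],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"[FAIL] {host}: {result.stderr.strip()}")
        return False
    print(f"[ OK ] {host} patched")
    return True

if __name__ == "__main__":
    failures = [h for h in HOSTS if not patch_host(h)]
    # Consistent reporting across all endpoints reduces the chance that a
    # missed host quietly falls behind on security updates.
    print(f"{len(HOSTS) - len(failures)}/{len(HOSTS)} hosts patched successfully")
```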
The Role of Architects in Managing Non-Functional Requirements
One of the strongest arguments for architects owning NFRs is that
non-functional aspects are deeply integrated into the system architecture. For
example, performance metrics, scalability, and security protocols are all
shaped by architectural decisions such as choice of technology stack, data
flow design, and resource allocation. Since architects are responsible for
making these design choices, it makes sense that they should also ensure the
system meets the NFRs. When architects own NFRs, they can prioritise these
elements throughout the design phase, reducing the risk of conflicts or
last-minute adjustments that could compromise the system’s stability. This
ownership ensures that non-functional aspects are not seen as afterthoughts
but rather integral parts of the design process. ... Architects typically have
a high-level, end-to-end view of the system, enabling them to understand how
various components interact. This holistic perspective allows them to evaluate
trade-offs and balance functional and non-functional needs without
compromising the integrity of the system. For example, an architect can
optimise performance without sacrificing security or usability by making
informed decisions that consider all NFRs.
Quote for the day:
"Nothing ever comes to one, that is
worth having, except as a result of hard work." --
Booker T. Washington