Quote for the day:
"Leadership is liberating people to do
what is required of them in the most effective and humane way possible." --
Max DePree

AI upskilling is still majorly under-prioritized across organizations. Did you
know that less than one-third of companies have trained even a quarter of their
staff to use AI? How do leaders expect employees to feel empowered to use AI if
education isn’t presented as the priority? Maintaining a nimble and
knowledgeable workforce is critical, fostering a culture that embraces
technological change. Team collaboration in this sense could take the form of
regular training about agentic AI, highlighting its strengths and weaknesses and
focusing on successful human-AI collaborations. For more established companies,
role-based training courses could successfully show employees in different
capacities and roles to use generative AI appropriately. ... Although gen AI
will not substantially affect organizations’ workforce sizes in the short-term,
we should still expect an evolution of role titles and responsibilities. For
example, from service operations and product development to AI ethics and AI
model validation positions. For this shift to successfully happen,
executive-level buy-in is paramount. Senior leaders need a clearly-defined
organization-wide strategy, including a dedicated team to drive gen AI adoption.
We’ve seen that when senior leaders delegate AI integration solely to IT or
digital technology teams, the business context can be neglected.

“AI is not just an incremental change from digital business. AI is a step change
in how business and society work,” he said. “A significant implication is that,
if savviness across the C-suite is not rapidly improved, competitiveness will
suffer, and corporate survival will be at stake.” CEOs perceived even the CIO,
chief information security officer (CISO), and chief data officer (CDO) as
lacking AI savviness. Respondents said the top two factors limiting AI’s
deployment and use are the inability to hire adequate numbers of skilled people
and an inability to calculate value or outcomes. “CEOs have shifted their view
of AI from just a tool to a transformative way of working,” said Jennifer
Carter, a principal analyst at Gartner. “This change has highlighted the
importance of upskilling. As leaders recognize AI’s potential and its impact on
their organizations, they understand that success isn’t just about hiring new
talent. Instead, it’s about equipping their current employees with the skills
needed to seamlessly incorporate AI into everyday tasks.” This focus on
upskilling is a strategic response to AI’s evolving role in business, ensuring
that the entire organization can adapt and thrive in this new paradigm.
Sixty-six percent of CEOs said their business models are not fit for AI
purposes, according to Gartner’s survey.

The most obvious option is the one that is already happening whether we like
it or not: LLMs are the new Q&A platforms. In the immediate term, ChatGPT
and similar tools have become the go-to source for many. They provide the
convenience of natural language queries with immediate answers. It’s possible
we’ll see official “Stack Overflow GPT” bots or domain-specific LLMs trained
on curated programming knowledge. In fact, Stack Overflow’s own team has been
experimenting with using AI to draft preliminary answers to questions, while
linking back to the original human posts for context. This kind of hybrid
approach leverages AI’s speed but still draws on the library of verified
solutions the community has built over years. ... Additionally, it’s still
possible that the social Q&A sites will save the experience through data
partnerships. For example, Stack Overflow, Reddit, and others have moved
toward paid licensing agreements for their data. The idea is to both control
how AI companies use community content and to funnel some value back to the
content creators. We may see new incentives for experienced developers to
contribute knowledge. One proposal is that if an AI answer draws from your
Stack Overflow post, you could earn reputation points or even a cut of the
licensing fee.

AI models are frequently deployed as part of larger application pipelines,
such as through APIs, plugins, or retrieval-augmented generation (RAG)
architectures. “Insufficient testing at this level can lead to insecure
handling of model inputs and outputs, injection pathways through serialized
data formats, and privilege escalation within the hosting environment,”
Mindgard’s Garraghan says. “These integration points are frequently overlooked
in conventional AppSec [application security] workflows.” ... AI systems may
exhibit emergent behaviors only during deployment, especially when operating
under dynamic input conditions or interacting with other services.
“Vulnerabilities such as logic corruption, context overflow, or output
reflection often appear only during runtime and require operational
red-teaming or live traffic simulation to detect,” according to Garraghan. ...
The rush to implement AI puts CISOs in a stressful bind, but James Lei, chief
operating officer at application security testing firm Sparrow, advises CISOs
to push back on the unchecked enthusiasm to introduce fundamental security
practices into the deployment process. “To reduce these risks, organizations
should be testing AI tools in the same way they would any high-risk software,
running simulated attacks, checking for misuse scenarios, validating input and
output flows, and ensuring that any data processed is appropriately
protected,” he says.

Today, in leading-edge organizations, data stewardship is at the heart of
data-driven transformation initiatives, such as DataOps, AI governance, and
improved metadata management, which have evolved data stewardship beyond
traditional data quality control. Data stewards can be found in every industry
and in organizations of any size. Modern data stewards interact with:Automated
data quality tools that identify and resolve data issues at scale. Data catalogs
and data lineage applications that organize business and technical metadata and
provide searchable inventories of data assets. AI/ML models that require
extensive monitoring to ensure they are trained on unbiased, accurate datasets
The scope of data stewardship has expanded to include ethical considerations,
particularly concerning data privacy, algorithmic bias, and responsible AI. Data
stewards are increasingly seen as the conscience of data within organizations,
championing not only compliance but also fairness, transparency, and
accountability. New organizational models, such as federated data stewardship –
in which data stewardship responsibilities are distributed across teams – can
promote improved collaboration and enable scaling data stewardship efforts
alongside agile and decentralized business units.

In Strands’ model-driven approach, tools are key to how you customize the
behavior of your agents. For example, tools can retrieve relevant documents from
a knowledge base, call APIs, run Python logic, or just simply return a static
string that contains additional model instructions. Tools also help you achieve
complex use cases in a model-driven approach, such as with these Strands Agents
example pre-built tools: Retrieve tool: This tool implements semantic search
using Amazon Bedrock Knowledge Bases. Beyond retrieving documents, the retrieve
tool can also help the model plan and reason by retrieving other tools using
semantic search. For example, one internal agent at AWS has over 6,000 tools to
select from! Models today aren’t capable of accurately selecting from quite that
many tools. Instead of describing all 6,000 tools to the model, the agent uses
semantic search to find the most relevant tools for the current task and
describes only those tools to the model. ... Thinking tool: This tool prompts
the model to do deep analytical thinking through multiple cycles, enabling
sophisticated thought processing and self-reflection as part of the agent. In
the model-driven approach, modeling thinking as a tool enables the model to
reason about if and when a task needs deep analysis.

“AI hallucinations are an expected byproduct of probabilistic models,” explains
Chetan Conikee, CTO at Qwiet AI, emphasizing that the focus shouldn’t be on
eliminating them entirely but on minimizing operational disruption. “The CISO’s
priority should be limiting operational impact through design, monitoring, and
policy.” That starts with intentional architecture. Conikee recommends
implementing a structured trust framework around AI systems, an approach that
includes practical middleware to vet inputs and outputs through deterministic
checks and domain-specific filters. This step ensures that models don’t operate
in isolation but within clearly defined bounds that reflect enterprise needs and
security postures. Traceability is another cornerstone. “All AI-generated
responses must carry metadata including source context, model version, prompt
structure, and timestamp,” Conikee notes. Such metadata enables faster audits
and root cause analysis when inaccuracies occur, a critical safeguard when AI
output is integrated into business operations or customer-facing tools. For
enterprises deploying LLMs, Conikee advises steering clear of open-ended
generation unless necessary. Instead, organizations should lean on RAG grounded
in curated, internal knowledge bases.

Internally, an important lesson has been to view data management as a federated
service. This entails a shift from data management being a ‘governance’ activity
– something people did because we pushed them to do it – to a service-driven
activity – something people do because they want to. We worked with our
User-Centred Service Design team to agree an underpinning set of principles to
get buy-in across the organisation on the purpose of, and facets to, good data
management. The overarching principle is that data are valuable, shared assets.
We can maximise value by making data widely available, easy to use and
understand, whilst ensuring data are protected and not misused. Bringing the
service to life means getting four things right: First, a proportionate vision
for service maturity. All data need to have basic information registered. But
where data are widely used or feed into critical processes, it becomes
instrumental to dedicate resources to supporting ease of access, use and quality
for our users. We are increasingly tending toward managing these assets
centrally. Second, the assignment of clear responsibilities across the
federation. We are working through which datasets will be managed centrally and
which will be managed by teams across the Bank that are expert in them.

If it takes developers and engineers months to become productive, your platform
isn’t helping — it’s hindering. A great platform should be as frictionless and
intuitive as a consumer-grade product. Internal platforms must empower instant
productivity. If your platform offers compute, it shouldn’t just be raw power —
it should be integrated, easy to adopt, and evolve seamlessly in the background.
Let’s not create unnecessary cognitive load. Developers are adapting quickly to
generative AI and new tech. The real value lies in solving real, tangible
problems — not fictional ones. This brings us to a deeper look at what’s not
working — and why so many efforts fail despite the best intentions. ... Most
enterprises are hybrid by nature — legacy systems, siloed processes and complex
workflows are the norm. The real challenge isn’t just technological; it’s
integrating platform engineering into these messy realities without making it
worse. Today, no single product solves this end-to-end. We’re still lacking a
holistic solution that manages internal workflows, governance and hybrid
complexity without adding friction. What’s needed is a shift in mindset — from
assembling open source tools to building integrated, adoption-focused,
business-aligned platforms. And that shift must be guided by clear trends in
tooling and team structure.

“A lot of the carbon emissions of the data center happen in the build of it, in
laying down the slab,” says Josh Claman, CEO at Accelsius, a liquid cooling
company. “I hope that companies won’t just throw all that away and start over.”
In addition to the environmental benefits, upgrading an air-cooled data center
into a hybrid, liquid and air system has other advantages, says Herb Hogue, CTO
at Myriad360, a global systems integrator. Liquid cooling is more effective than
air alone, he says, and when used in combination with air cooling, the
temperature of the air cooling systems can be increased slightly without
impacting performance. “This reduces overall energy consumption and lowers
utility bills,” he says. Liquid cooling also allows for not just lower but also
more consistent operating temperatures, Hogue says. That leads to less wear on
IT equipment, and, without fans, fewer moving parts per server. The downsides,
however, include the cost of installing the hybrid system and needed specialized
operations and maintenance skills. There might also be space constraints and
other challenges. Still, it can be a smart approach for handling high-density
server setups, he says. And there’s one more potential benefit, says Gerald
Kleyn, vice president of customer solutions for HPC and AI at Hewlett Packard
Enterprise.
No comments:
Post a Comment