Quote for the day:
"You can't be a leader if you can't influence others to act." -- Dale E. Zand
Meet your AI auditor: How this new job role monitors model behavior
The relentless rise of artificial intelligence (AI) is creating a new role for
business and technology professionals to consider: AI auditor. The role bears
a striking resemblance to that of financial auditors, with a major exception:
AI auditors monitor and report on the behavior of AI transactions rather than
monetary transactions. ... The closest role to an AI auditor is now seen
within teams tasked with reviewing AI model behavior, but their work is more
akin to quality assurance, Bronfman said. The reviews cover "outputs,
outliers, and edge-cases, and audit training processes for data input
properties, accuracy, and predictability." AI auditors will put more teeth
into assuring AI is responsible and trustworthy. ... AI auditing jobs won't
just be found within enterprises. Just as organizations tend to rely on
outside financial auditors, there will be many roles within third-party AI
auditing firms. "Independent third-party auditors provide structured oversight
and prevent conflicts of interest," said Bronfman. AI auditing standards and
codes of conduct may even be ultimately supported "by a UN-like body or a
coalition of major states, where deployment will require ongoing behavioral
audits and mandated transparency." ... To move into this type of role, budding
AI auditors "will need to deeply understand AI and how the algorithm works in
order to identify where the pitfalls are and test how it can fail," said
Bronfman.

Ransomware is the invoice for compounding technical debt
Cybercriminals are continuing their aggressive campaign of credential theft,
purchasing stolen usernames and passwords from the dark web to access personal
email, social media or financial accounts, noted the report. At an
organisational level, these same pathways are compounded by internal security
gaps like identity sprawl, which increases the chance of compromise, said
Niraj Naidu ... “Technical debt accumulates quickly and quietly,” he told ARN.
“A lot of organisations rely on legacy backup systems that were never really
designed to protect against cyber-attacks. ... Naidu believes the urgency to
do something “isn’t really triggered until there’s a security event for a lot
of organisations”. That then leads to the ransom note, which is like “the
invoice coming due for years of technical debt”, he explained. “With that
there’s downtime, strained investor relations, legal implications, customer
churn, as well as brand damage and regulatory penalties,” Naidu said. ... What
has led to the failure for organisations to address tech debt is a “lack of
clear visibility” over what sensitive information they hold, where it resides
and who can access it, explained Naidu. “A lot of organisations may believe
they’ve eliminated technical debt, especially executives,” he said. “They may
not necessarily have that level of visibility or transparency, particularly
when you’re looking at cloud adoption.”

Don’t Panic Yet: “Humanity’s Last Exam” Has Begun
Well-known benchmarks such as the Massive Multitask Language Understanding
(MMLU) exam, previously viewed as rigorous, have become less effective at
distinguishing true progress in AI capability. In response, an international
group of nearly 1,000 researchers, including a professor from Texas A&M
University, developed a far more demanding assessment. Their goal was to
design an exam so comprehensive and grounded in specialized human expertise
that today’s AI systems would struggle to pass it. The result is “Humanity’s
Last Exam” (HLE), a 2,500-question test that covers mathematics, the
humanities, natural sciences, ancient languages, and highly specialized
academic fields. ... Despite its apocalyptic name, Humanity’s Last Exam isn’t
meant to suggest the end of human relevance. Instead, it highlights how much
knowledge remains uniquely human and how far AI systems still have to go.
“This isn’t a race against AI,” Nguyen said. “It’s a method for understanding
where these systems are strong and where they struggle. That understanding
helps us build safer, more reliable technologies. And, importantly, it reminds
us why human expertise still matters.” ... HLE is intended to serve as a
long‑term, transparent benchmark for evaluating advanced AI systems. As part
of that mission, the team has made some of the exam publicly available, while
keeping most of the test questions hidden so AI models can’t memorize the
answers.

Who really sets AI guardrails? How CIOs can shape AI governance policy
As Donald Farmer, futurist at Tranquilla AI, explains, the guardrails of a
vendor's AI system reflect that vendor's assessment of acceptable risk -- not
the enterprise's. "That is shaped by their own legal exposure, their broadest
possible customer base and their own ethical assumptions," Farmer said. "This
works for many customers, but at the edges there can be tension." ... "Every
AI agent expands the attack surface." Without disciplined data management and
segmentation, one compromised component can ripple across business functions.
The more tightly integrated AI becomes, the greater the potential blast
radius. This requires CIOs to engage actively with governance, even if it
seems like they are being handed a list of preset rules. As Farmer said,
"traditional IT governance assumes that products stay the same. AI governance
has to assume that they will not." ... Caught between competing restrictions
and changing mandates at the federal level, CIOs may feel powerless to
influence much change -- but the experts reject this impotence.
Turner-Williams described the CIO's influence as "significant, but not
unilateral. The CIO acts as orchestrator and trust agent." This is especially
true for CIOs working across multiple jurisdictions, making them accountable
not only to U.S. law, but also to the EU AI Act, GDPR and other international
frameworks. ... Ratcliffe offers a pragmatic lens, arguing that CIOs should
approach this issue as one of reputational strategy, not a compliance
exercise.

Why Responsible Orchestration Outperforms Aggressive Automation
In complex large businesses, automation decisions are rarely made in one
place. Teams optimize locally, adopt tools independently and automate
processes in isolation. This results in fragmented automation that delivers
short-term wins but creates long-term complexity and risk. Over time, this
fragmentation further reduces leadership visibility into what work has been
done, making it harder to manage risk, govern change and understand the true
state (and impact!) of automation. This is where automation strategies break
down. ... Orchestration is both a technical and a leadership discipline in
this context, as it ensures automation decisions are intentional, coordinated
and aligned with the way the business operates. Without orchestration, even
well-intentioned automation can erode institutional knowledge, duplicate
effort and make it harder for the very top of the organization to understand
the true impact. ... The impact of fragmented automation and poorly
orchestrated decision-making is felt throughout the organization, particularly
by employees affected by the day-to-day disruption, and enterprises often fail
to account for the impact on their workforce. Alongside day-to-day adoption,
longer-term plans and how AI will make an impact are important questions to
address early on. Companies must communicate AI strategy clearly and avoid
reflexive headcount cuts that destroy organizational knowledge and boomerang
rehiring.

India’s trillion-dollar data center opportunity is taking shape
With expanding cloud adoption, evolving sovereign data frameworks, and rapidly increasing compute intensity across industries, the country’s datacenter sector is entering its most consequential phase of growth. What is unfolding is not a temporary expansion cycle, but a sustained build-out of the digital backbone required to support the next phase of economic development. ... The drivers of this shift are both domestic and global. India generates one of the largest volumes of digital data in the world and serves a rapidly expanding digital user base. Enterprises across financial services, manufacturing, healthcare, retail, and public services are embedding cloud into core operations rather than treating it as a peripheral IT layer. AI adoption is moving from experimentation into production environments, raising compute intensity and infrastructure complexity. ... Sovereign cloud considerations further reinforce the need for domestic infrastructure. Across jurisdictions, governments and enterprises are reassessing where critical workloads reside and how data governance frameworks evolve. For a country of India’s scale, digital sovereignty is not merely regulatory; it is strategic. Hosting critical data and AI workloads domestically enhances resilience, compliance, and long-term economic control over digital systems. As sectors such as financial services, healthcare, defence, and public administration deepen their digital integration, secure and high-availability domestic capacity becomes essential.

Anthropic vs. The Pentagon: what enterprises should do
The rupture stems from a fundamental dispute over "all lawful use." The
Pentagon demanded unrestricted access to Claude for any mission deemed legal,
while Anthropic CEO Dario Amodei refused to budge ... The fallout is
immediate; the Department of War has ordered all contractors and partners to
stop conducting commercial activity with Anthropic effectively at once, though
the Pentagon itself has a 180-day window to transition to "more patriotic"
providers. ... If your entire agentic workflow or customer-facing stack is
hard-coded to a single provider's API, you aren't going to be nimble or
flexible enough to meet the demands of a marketplace where some potential
customers, such as the U.S. military or government, want you to use or avoid
specific models as conditions of your contracts with them. The most prudent
move right now isn't necessarily to hit the "delete" button on Claude—which
remains a best-in-class model for coding and nuanced reasoning, and certainly
can and should continue to be used for work outside of that with the U.S.
military and government agencies—but to ensure you have a "warm standby." ...
The takeaway is clear: if you plan to maintain business with federal agencies,
you must be able to certify to them that your products aren't built on any
single prohibited model provider — however sudden that designation may come
down or how ultimately legally untenable it may prove.

Intelligence as Infrastructure: The Cloud Architecture Powering Enterprise AI
For over a decade, digital transformation has been treated as a portfolio of initiatives — cloud migration, platform consolidation, automation, data modernisation. The introduction of large-scale AI assistants signals a structural shift: intelligence is no longer a feature embedded within applications. It is becoming an organising principle of enterprise systems. This shift demands architectural literacy. Leaders responsible for digital infrastructure, service optimisation, and operational risk must understand how modern AI systems are constructed — and where control, exposure, and opportunity reside within them. ... Modern AI assistants are not monolithic systems. They are composite architectures composed of tightly integrated layers, each with distinct operational and governance responsibilities. ... In regulated industries, governance begins at the first prompt. Every interaction is both a productivity event and a potential compliance event. The architectural consequence is clear: AI entry points must be treated as critical infrastructure. ... Grounded intelligence reduces hallucination risk and ensures outputs align with current policy, documentation, and regulatory obligations. In knowledge-intensive sectors, this layer is central to operational credibility. ... Organisations that attempt to retrofit governance will encounter resistance from risk and compliance functions. Those that design governance into architecture will scale AI with institutional confidence.

Open source devs consider making hogs pay for every Git pull
Fox, who also oversees Apache Maven, a popular Java build tool, explained that
its repository site is at risk of being overwhelmed by constant Git pulls. The
team has dug into this and found that 82 percent of the demand comes from less
than 1 percent of IPs. Digging deeper, they discovered that many companies are
using open source repositories as if they were content delivery networks
(CDNs). ... How bad is it? Fox revealed that last year, major repositories
handled 10 trillion downloads. That's double Google's annual search queries, if
you're counting at home, and they're doing it on a shoestring. Fox described
this as a "tragedy of the commons," where the assumption of "free and
infinite" resources leads to structural waste amplified by CI/CD pipelines,
security scanners, and AI-driven code generation. Companies may think that
they can rely on "free and infinite" infrastructure, when in reality the costs
of bandwidth, storage, staffing, and compliance are accelerating. ... With
AI-driven repository usage exploding, Fox urged checking bills, using caching
proxies, and avoiding per-commit tests. He seeks endorsements: "We need you to
help step up... so that when we go out to the rest of the wild world... you
need to pay to keep doing what you've been doing." But, wait, there's more!
Besides simply being overwhelmed by constant download demands, Winser said,
"People conflate open source software and open source
infrastructure."
Hybrid systems are increasingly shaping day-to-day academic work. Students use
them as writing companions, tutors, brainstorming partners and on-demand
explainers. Faculty use them to generate rubrics, draft lectures and design
syllabuses. Researchers use them to summarise papers, comment on drafts,
design experiments and generate code. This is where the ‘cheating’
conversation belongs. With students and faculty alike increasingly leaning on
technology for help, it is reasonable to wonder what kinds of learning might
get lost along the way. But hybrid systems also raise more complex ethical
questions. One has to do with transparency. ... A second ethical question
relates to accountability and intellectual credit. If an instructor uses AI to
draft an assignment and a student uses AI to draft a response, who is doing
the evaluating, and what exactly is being evaluated? If feedback is partly
machine-generated, who is responsible when it misleads, discourages or embeds
hidden assumptions? And when AI contributes substantially to research
synthesis or writing, universities will need clearer norms around authorship
and responsibility – not only for students, but also for faculty. Finally,
there is the critical question of cognitive offloading. AI can reduce
drudgery, and that’s not inherently bad. But it can also shift users away from
the parts of learning that build competence, such as generating ideas,
struggling through confusion, revising a clumsy draft and learning to spot
one’s own mistakes.