Quote for the day:
"Sheep are always looking for a new
shepherd when the terrain gets rocky." -- Karen Marie Moning

At a high level, determining when it’s time to modernize is about quantifying
cost, risk, and complexity. In dollar terms, it may seem as simple as comparing
the expense of maintaining legacy systems versus investing in new architecture.
But the true calculation includes hidden costs, like the developer hours lost to
patching outdated systems, and the opportunity cost of not being able to adapt
quickly to business needs. True modernization is not a lift-and-shift — it’s a
full-stack transformation. That means breaking apart monolithic applications
into scalable microservices, rewriting outdated application code into modern
languages, and replacing rigid relational data models with flexible,
cloud-native platforms that support real-time data access, global scalability,
and developer agility. Many organizations have partnered with MongoDB to achieve
this kind of transformation. ... But modernization projects are usually a
balancing act, and replacing everything at once can be a gargantuan task.
Choosing how to tackle the problem comes down to priorities: determining where
pain points exist and where the biggest impacts to the business will be.
Eventually, the cost of doing nothing will outweigh the cost of doing something.
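
To make the data-model piece of that shift concrete, here is a minimal sketch of reshaping a normalized relational order (customer and line-item rows joined at query time) into a single document of the kind a document database such as MongoDB stores natively. The table names, fields, and pymongo usage are illustrative assumptions, not a migration recipe.

```python
# Illustrative sketch: collapsing a normalized relational "order" (rows from
# three joined tables) into one self-contained document. Field names are
# invented for the example; pymongo is assumed to be installed.
from pymongo import MongoClient

def order_rows_to_document(order, customer, line_items):
    """Reshape rows from orders/customers/order_items tables into one document."""
    return {
        "_id": order["order_id"],
        "placed_at": order["placed_at"],
        "customer": {                      # embedded instead of joined
            "id": customer["customer_id"],
            "name": customer["name"],
            "region": customer["region"],
        },
        "items": [                         # one-to-many becomes an array
            {"sku": i["sku"], "qty": i["qty"], "price": i["price"]}
            for i in line_items
        ],
    }

# client = MongoClient("mongodb://localhost:27017")
# client.shop.orders.insert_one(order_rows_to_document(order, customer, line_items))
```

The design choice the sketch illustrates is embedding: data the application reads together is stored together, which is where the real-time access and scalability claims in the excerpt come from.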

“A well-funded CISO with an under-resourced security team won’t be effective.
The focus should be on building organizational capability, not just boosting top
salaries.” While Deepwatch CISO Chad Cragle believes any CISO just in the role
for the money has “already lost sight of what really matters,” he agrees that
“without the right team, tools, or board access, burnout is inevitable.” Real
impact, he contends, “only happens when security is valued and you’re empowered
to lead.” Perhaps that stands as evidence that SMBs that want to retain their
talent or attract others should treat the CISO holistically. “True professional
fulfillment and long-term happiness in the CISO role stems from the
opportunities for leadership, personal and professional growth, and, most
importantly, the success of the cybersecurity program itself,” says Black Duck
CISO Bruce Jenkins. “When cyber leaders prioritize the development and execution
of a comprehensive, efficient, and effective program that delivers demonstrable
value to the business, appropriate compensation typically follows as a natural
consequence.” One concern around budget constraints is that all CISOs at this point
(private AND public sector) have been through zero-based budget reviews several
times. If the CISO feels unsafe and unable to execute, they will be incentivized
to find a safer seat with an org more prepared to invest in security programs.

For now, this deceptive behavior only emerges when researchers deliberately
stress-test the models with extreme scenarios. But as Michael Chen from
evaluation organization METR warned, "It's an open question whether future, more
capable models will have a tendency towards honesty or deception." The
concerning behavior goes far beyond typical AI "hallucinations" or simple
mistakes. Hobbhahn insisted that despite constant pressure-testing by users,
"what we're observing is a real phenomenon. We're not making anything up." Users
report that models are "lying to them and making up evidence," according to
Apollo Research's co-founder. "This is not just hallucinations. There's a very
strategic kind of deception." The challenge is compounded by limited research
resources. While companies like Anthropic and OpenAI do engage external firms
like Apollo to study their systems, researchers say more transparency is needed.
As Chen noted, greater access "for AI safety research would enable better
understanding and mitigation of deception." ... "Right now, capabilities are
moving faster than understanding and safety," Hobbhahn acknowledged, "but we're
still in a position where we could turn it around." Researchers are exploring
various approaches to address these challenges.

Think of the scale-up networks such as the NVLink ports and NVLink Switch
fabrics that are part and parcel of a GPU-accelerated server node – or, these
days, a rackscale system like the DGX NVL72 and its OEM and ODM clones. These
memory sharing networks are vital for ever-embiggening AI training and inference
workloads. As their parameter counts and token throughput requirements both
rise, they need ever-larger memory domains to do their work. Throw in
mixture-of-experts models and the need for larger, fatter, and faster scale-up networks,
as they are now called, is obvious even to an AI model with only 7 billion
parameters. ... Then there is the scale-out network, which is used to link nodes
in distributed systems to each other to share work in a less tightly coupled way
than the scale-up network affords. This is the normal networking we are familiar
with in distributed HPC systems, which is normally Ethernet or InfiniBand and
sometimes proprietary networks like those from Cray, SGI, Fujitsu, NEC, and
others from days gone by. On top of this, we have the normal north-south
networking stack that allows people to connect to systems and the east-west
networks that allow distributed corporate systems running databases, web
infrastructure, and other front-office systems to communicate with each
other.
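
To put the "ever-larger memory domains" point in rough numbers, here is a back-of-the-envelope sketch: given a parameter count, how many GPUs' worth of HBM a single scale-up domain needs just to hold the weights plus training state. The bytes-per-parameter, overhead multiplier, and per-GPU HBM figures are assumptions chosen for illustration, not vendor specifications.

```python
# Back-of-the-envelope sizing of a scale-up memory domain.
# Assumptions (illustrative only): 2 bytes per parameter (16-bit weights), a 4x
# multiplier for gradients, optimizer state, and activations, 141 GB of HBM per
# GPU, and a 72-GPU rackscale domain as the point of comparison.
import math

def gpus_needed(params_billion: float,
                bytes_per_param: float = 2.0,
                overhead: float = 4.0,
                hbm_per_gpu_gb: float = 141.0) -> int:
    total_gb = params_billion * bytes_per_param * overhead  # 1e9 params * bytes = GB
    return math.ceil(total_gb / hbm_per_gpu_gb)

for p in (70, 405, 1_800):
    n = gpus_needed(p)
    verdict = "fits in" if n <= 72 else "exceeds"
    print(f"{p}B params -> ~{n} GPUs of HBM; {verdict} a 72-GPU scale-up domain")
```

The arithmetic is crude, but it shows why rising parameter counts and mixture-of-experts designs push directly on the size of the NVLink-class memory domain rather than only on the scale-out fabric.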

“It’s never just one thing that causes failure in complex systems.” In risk
management, this is known as the Swiss cheese model, where flaws that occur in
one layer aren’t as dangerous as deeper flaws overlapping through multiple
layers. And as the Boeing crash proves, “When all of them align, that’s what
made it so deadly.” It is difficult to test for every scenario. After all, the
more inputs you have, the more possible outputs — and “this is all assuming that
your system is deterministic.” Today’s codebases are massive, with many
different contributors and entire stacks of infrastructure. “From writing a
piece of code locally to running it on a production server, there are a thousand
things that could go wrong.” ... It was obviously a communication failure,
“because NASA’s navigation team assumed everything was in metric.” But you also
need to check the communication that’s happening between the two systems. “If
two systems interact, make sure they agree on formats, units, and overall
assumptions!” But there’s another even more important lesson to be learned. “The
data had shown inconsistencies weeks before the failure,” Bajić says. “NASA had
seen small navigation errors, but they weren’t fully investigated.”
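
The unit mix-up behind that failure (a thruster-impulse value produced in pound-force seconds and consumed as newton-seconds) suggests one cheap defense: make units part of the data that crosses the interface rather than an unstated assumption. A minimal sketch, with invented names, of tagging a value with its unit and converting explicitly at the receiving boundary:

```python
# Minimal sketch: carry units alongside values and convert explicitly at system
# boundaries, so a lbf*s figure can never be silently read as N*s.
from dataclasses import dataclass

LBF_S_TO_N_S = 4.4482216152605  # one pound-force-second expressed in newton-seconds

@dataclass(frozen=True)
class Impulse:
    value: float
    unit: str  # "N*s" or "lbf*s"

    def to_newton_seconds(self) -> float:
        if self.unit == "N*s":
            return self.value
        if self.unit == "lbf*s":
            return self.value * LBF_S_TO_N_S
        raise ValueError(f"unknown impulse unit: {self.unit!r}")

def navigation_ingest(impulse: Impulse) -> float:
    """The receiving system converts at the boundary instead of assuming metric."""
    return impulse.to_newton_seconds()

print(navigation_ingest(Impulse(1.0, "lbf*s")))  # 4.448..., not a silent 1.0
```

The same idea connects to the second lesson in the excerpt: small, persistent inconsistencies in values crossing such a boundary are exactly the signal worth investigating before they compound.
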
Companies in Europe are spending less on AI, cloud platforms, and data
infrastructure. In high-tech sectors, productivity growth in the U.S. has far
outpaced Europe. The report argues that AI could help close the gap, but only if
it is used to redesign how businesses operate. Using AI to automate old
processes is not enough. ... Feinberg also notes that many European companies
assumed AI apps would be easier to build than traditional software, only to
discover they are just as complex, if not more so. This mismatch between
expectations and reality has slowed down internal projects. And the problem
isn’t unique to Europe. As Oliver Rochford, CEO of Aunoo AI, points out, “AI
project failure rates are generally high across the board.” He cites surveys
from IBM, Gartner, and others showing that anywhere from 30 to 84 percent of AI
projects fail or fall short of expectations. “The most common root causes for AI
project failures are also not purely technical but organizational: misaligned
objectives, poor data governance, lack of workforce engagement, and
underdeveloped change management processes. Apparently Europe has no monopoly on
those.”

Sometimes, using an agent is like replacing a microwave with a sous chef — more
flexible, but also more expensive, harder to manage, and prone to making
decisions you didn’t ask for. ... Workflows are orchestrated. You write the
logic: maybe retrieve context with a vector store, call a toolchain, then use
the LLM to summarize the results. Each step is explicit. It’s like a recipe. If
it breaks, you know exactly where it happened — and probably how to fix it. This
is what most “RAG pipelines” or prompt chains are. Controlled. Testable.
Cost-predictable. The beauty? You can debug them the same way you debug any
other software. Stack traces, logs, fallback logic. If the vector search fails,
you catch it. If the model response is weird, you reroute it. ... Agents, on the
other hand, are built around loops. The LLM gets a goal and starts reasoning
about how to achieve it. It picks tools, takes actions, evaluates outcomes, and
decides what to do next — all inside a recursive decision-making loop. ... You
can’t just set a breakpoint and inspect the stack. The “stack” is inside the
model’s context window, and the “variables” are fuzzy thoughts shaped by your
prompts. When something goes wrong — and it will — you don’t get a nice red
error message.
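
A minimal sketch of that distinction, with every function a toy stub invented for illustration rather than any real framework's API: the workflow's steps are explicit and individually debuggable, while the agent's control flow is a loop in which the model decides what happens next.

```python
# Illustrative sketch only: retrieve(), call_tool(), llm(), and choose_action()
# are toy stubs standing in for a vector store, a toolchain, and a model.

def retrieve(query):                 # stub vector-store lookup
    return [f"doc about {query}"]

def call_tool(name, arg):            # stub toolchain call
    return f"{name} ran on {arg!r}"

def llm(prompt):                     # stub model call
    return f"LLM answer to: {prompt[:48]}..."

def choose_action(memory):           # stub "reasoning": finish immediately
    return ("finish", f"done after {len(memory)} note(s)")

def workflow(question):
    # Orchestrated pipeline: each step is explicit, so each step can be caught.
    docs = retrieve(question)
    if not docs:
        return "No context found."            # ordinary fallback logic
    facts = call_tool("summarize", docs)
    return llm(f"Answer using: {facts}\nQuestion: {question}")

def agent(goal, max_steps=8):
    # Agent loop: the model picks the next action, so control flow lives in its
    # context window rather than in your code.
    memory = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action, arg = choose_action(memory)
        if action == "finish":
            return arg
        memory.append(f"{action}({arg}) -> {call_tool(action, arg)}")
    return "Stopped: step budget exhausted."   # guard against runaway loops

print(workflow("Why did the deploy fail?"))
print(agent("Diagnose the failed deploy"))
```

The practical difference shows up in the failure path: the workflow can raise from a named step, while the agent's closest equivalent of a stack trace is whatever it wrote into its own memory along the way.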

Most teams struggle with defining NHIs. The canonical definition is simply
"anything that is not a human," which necessarily covers a wide set of concerns.
NHIs manifest differently across cloud providers, container orchestrators,
legacy systems, and edge deployments. A Kubernetes service account tied to a pod
has distinct characteristics compared to an Azure managed identity or a Windows
service account. Every team has historically managed these as separate concerns.
This patchwork approach makes it nearly impossible to create a consistent
policy, let alone automate governance across environments. ... Most commonly,
this takes the form of secrets, which look like API keys, certificates, or
tokens. These are all inherently unique and can act as cryptographic
fingerprints across distributed systems. When used in this way, secrets used for
authentication become traceable artifacts tied directly to the systems that
generated them. This allows for a level of attribution and auditing that's
difficult to achieve with traditional service accounts. For example, a
short-lived token can be directly linked to a specific CI job, Git commit, or
workload, allowing teams to answer not just what is acting, but why, where, and
on whose behalf.
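
A minimal sketch of what that attribution can look like: a short-lived signed token whose claims name the CI job, repository, and commit that requested it. PyJWT is assumed for signing, and the claim names are illustrative rather than any particular platform's schema.

```python
# Illustrative sketch: mint a short-lived workload token whose claims tie it to
# a specific CI job and commit, so audit logs can answer "what acted, where,
# and on whose behalf". PyJWT is assumed; claim names are made up.
import time
import jwt  # pip install pyjwt

SIGNING_KEY = "dev-only-secret"  # in practice a KMS/HSM-held key, not a literal

def mint_ci_token(job_id: str, commit_sha: str, repo: str, ttl_seconds: int = 300) -> str:
    now = int(time.time())
    claims = {
        "sub": f"ci-job:{job_id}",   # the non-human identity itself
        "repo": repo,                # where the request originated
        "commit": commit_sha,        # exact code the job was running
        "iat": now,
        "exp": now + ttl_seconds,    # short-lived: five minutes by default
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

token = mint_ci_token("build-4821", "9f2c1ab", "example-org/payments")
print(jwt.decode(token, SIGNING_KEY, algorithms=["HS256"]))
```

Because the token expires in minutes and its claims travel with every call it makes, a downstream audit entry traces back to one job run and one commit rather than to a long-lived shared service account.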

Pessimists warn of potential mass unemployment leading to societal collapse.
Optimists predict a new age of augmented working, making us more productive and
freeing us to focus on creativity and human interactions. There are plenty of
big-picture forecasts. One widely cited WEF prediction claims AI will eliminate
92 million jobs while creating 170 million new, different opportunities. That
doesn’t sound too bad. But what if you’ve worked for 30 years in one of the jobs
that’s about to vanish and have no idea how to do any of the new ones? Today,
we’re seeing headlines about jobs being lost to AI with increasing frequency.
And, from my point of view, not much information about what’s being done to
prepare society for this potentially colossal change. ... An exacerbating factor
is that many of the roles that are threatened are entry-level, such as junior
coders or designers, or low-skill, including call center workers and data entry
clerks. This means there’s a danger that AI-driven redundancy will
disproportionately hit economically disadvantaged groups. There’s little
evidence so far that governments are prioritizing their response. There have
been few clearly articulated strategies to manage the displacement of jobs or to
protect vulnerable workers.

One way to think of AAI is as intelligence that ships. Vernacular chatbots,
offline crop-disease detectors, speech-to-text tools for courtrooms: examples of
such applications and products, tailored and designed for specific sectors,
are growing fast. ... If the search for AGI is reminiscent of a cash-rich
unicorn aiming for growth at all costs, then AAI is more scrappy. Like a
bootstrapped startup that requires immediate profitability, it prizes tangible
impact over long-term ambitions to take over the world. The aspirations—and
perhaps the algorithms themselves—may be more modest. Still, the context makes
them potentially transformative: if reliable and widely adopted, such systems
could reach millions of users who have until now been on the margins of the
digital economy. ... All this points to a potentially unexpected scenario, one
in which the lessons of AI flow not along the usual contours of global
geopolitics and economic power—but rather percolate upward, from the
laboratories and pilot programs of the Global South toward the boardrooms and
research campuses of the North. This doesn’t mean that the quest for AGI is
necessarily misguided. It’s possible that AI may yet end up redefining
intelligence.