Quote for the day:
"Do the difficult things while they are
easy and do the great things while they are small." -- Lao Tzu

“At the bottom there is a computational layer, such as the NVIDIA GPUs or anyone
else who provides the infrastructure for running AI. The next few layers are
software-oriented, but they affect infrastructure as well. Then there’s
security, and the data that feeds the models and the applications. And on top of
that, there’s the operational layer, which is how you enable data operations for
AI. Data being so foundational means that whoever works with that layer is
essentially holding the keys to the AI asset, so anything you do around data has
to have a level of trust and data neutrality.” ... The risks in having common data
infrastructure, particularly when it is shared with direct or indirect
competitors, are significant. When proprietary training data is moved onto a
competitor’s platform or service, there is always an implicit, and frequently
subtle, risk that proprietary insights, unique data patterns or even an
enterprise’s operational data will be accidentally shared. ... These market
trends have precipitated the need for “sovereign AI platforms”: controlled
spaces where companies retain complete control over their data, models and the
overall AI development pipeline, without outside interference.

Some will say, “Competition breeds innovation.” That’s the party line. But for
anyone who’s run a large IT organization, it means increased integration work,
risk, cost, and vendor lock-in—all to achieve what should be the technical
equivalent of exchanging a business card. Let’s not forget history. The 90s
saw the rise and fall of CORBA and DCOM, each claiming to be the last word in
distributed computing. The 2000s blessed us with WS-* (the asterisk is a
wildcard because the number of specs was infinite), most of which are now
forgotten. ... The truth: When vendors promote their own communication
protocols, they build silos instead of bridges. Agents trained on one protocol
can’t interact seamlessly with those speaking another dialect. Businesses end
up either locking into one vendor’s standard, writing costly translation
layers, or waiting for the market to move on from this round of wheel
reinvention. ... We in IT love to make simple things complicated. The urge to
create a universal, infinitely extensible, plug-and-play protocol is
irresistible. But the real-world lesson is that 99% of enterprise agent
interaction can be handled with a handful of message types: request, response,
notify, error. The rest—trust negotiation, context passing, and the inevitable
“unknown unknowns”—can be managed incrementally, so long as the basic
messaging is interoperable.
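
To make that claim concrete, here is a minimal sketch of what such a
lowest-common-denominator envelope could look like, using only the four message
types named above. It is an illustration, not any vendor's protocol; the field
names are assumptions.

    # Minimal sketch (not any published spec): the four message types named above
    # expressed as a small, transport-agnostic envelope.
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Any, Dict
    import uuid


    class MessageType(str, Enum):
        REQUEST = "request"
        RESPONSE = "response"
        NOTIFY = "notify"
        ERROR = "error"


    @dataclass
    class AgentMessage:
        type: MessageType
        sender: str
        recipient: str
        body: Dict[str, Any]
        correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))

        def to_dict(self) -> Dict[str, Any]:
            """Serialize to a plain dict so any transport (HTTP, queue) can carry it."""
            return {
                "type": self.type.value,
                "sender": self.sender,
                "recipient": self.recipient,
                "body": self.body,
                "correlation_id": self.correlation_id,
            }


    # A request and its correlated response: trust negotiation and context passing
    # can ride along in `body` later without changing the envelope itself.
    req = AgentMessage(MessageType.REQUEST, "billing-agent", "crm-agent",
                       {"action": "lookup_customer", "customer_id": "C-1001"})
    resp = AgentMessage(MessageType.RESPONSE, "crm-agent", "billing-agent",
                        {"status": "ok", "customer": {"name": "Acme Corp"}},
                        correlation_id=req.correlation_id)

The correlation identifier is the only plumbing every agent needs for
request/response pairing, regardless of which transport carries the message.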

The difference between automated crawling and user-driven fetching isn't just
technical—it's about who gets to access information on the open web. When
Google's search engine crawls to build its index, that's different from when
it fetches a webpage because you asked for a preview. Google's "user-triggered
fetchers" prioritize your experience over robots.txt restrictions because
these requests happen on your behalf. The same applies to AI assistants. When
Perplexity fetches a webpage, it's because you asked a specific question
requiring current information. The content isn't stored for training—it's used
immediately to answer your question. ... An AI assistant works just like a
human assistant. When you ask it a question that requires current information,
it doesn’t already know the answer; it looks the answer up for you in order to
complete whatever task you’ve asked. On Perplexity and all other
agentic AI platforms, this happens in real-time, in response to your request,
and the information is used immediately to answer your question. It's not
stored in massive databases for future use, and it's not used to train AI
models. User-driven agents only act when users make specific requests, and
they only fetch the content needed to fulfill those requests. This is the
fundamental difference between a user agent and a bot.
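
As a rough illustration of that distinction (not Google's or Perplexity's actual
implementation, which is not public), a crawler consults robots.txt before
touching a page, while a user-triggered fetch retrieves a single page on demand
to be used immediately. The agent names below are made up.

    # Sketch of the crawl-versus-fetch split described above; the policy choice
    # (crawlers honor robots.txt, user-triggered fetches happen on the user's
    # behalf) is an illustrative assumption based on the excerpt.
    import urllib.request
    from urllib import robotparser
    from urllib.parse import urlsplit


    def allowed_to_crawl(url: str, crawler_agent: str = "ExampleCrawler") -> bool:
        """Automated crawling: consult robots.txt before fetching for an index."""
        parts = urlsplit(url)
        rp = robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()
        return rp.can_fetch(crawler_agent, url)


    def user_triggered_fetch(url: str) -> bytes:
        """User-driven fetch: one page, retrieved now, used immediately, not stored."""
        req = urllib.request.Request(
            url, headers={"User-Agent": "ExampleAssistant-UserFetcher"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read()
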
Today’s data landscape is evolving at breakneck speed. With the explosion of
IoT devices, AI-powered systems, and big data analytics, the volume and
variety of personal data collected have skyrocketed. This means more
opportunities for breaches, misuse, and regulatory headaches. And let’s not
forget that consumers are savvier than ever about privacy risks – they want to
know how their data is handled, shared, and stored. ... Integrating
Privacy-By-Design into your development process doesn’t require reinventing
the wheel; it simply demands a mindset shift and a commitment to building
privacy into every stage of the lifecycle. From ideation to deployment,
developers and product teams need to ask: How are we collecting, storing, and
using data? ... Privacy teams need to work closely with developers, legal
advisors, and user experience designers to ensure that privacy features do not
compromise usability or performance. This balance can be challenging to
achieve, especially in fast-paced development environments where deadlines are
tight and product launches are prioritized. Another common challenge is
educating the entire team on what Privacy-By-Design actually means in
practice. It’s not enough to have a single data protection champion in the
company; the entire culture needs to shift toward valuing privacy as a key
product feature.
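
As one small, concrete example of what building privacy into the collection
stage can mean, the sketch below keeps only an allow-listed set of fields and
pseudonymizes the user identifier before anything is stored. The field names and
key handling are hypothetical.

    # Privacy-by-design sketch: collect only what the feature needs and
    # pseudonymize the identifier before storage. A real system would keep the
    # key in a managed secret store and rotate it.
    import hashlib
    import hmac

    ALLOWED_FIELDS = {"country", "plan_tier", "signup_date"}  # data minimization
    PSEUDONYM_KEY = b"rotate-me-and-store-in-a-secret-manager"


    def pseudonymize(user_id: str) -> str:
        """Keyed hash so records can be linked internally without exposing the raw ID."""
        return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()


    def collect_event(user_id: str, raw_attributes: dict) -> dict:
        """Keep only allow-listed attributes and replace the identifier."""
        return {
            "user": pseudonymize(user_id),
            **{k: v for k, v in raw_attributes.items() if k in ALLOWED_FIELDS},
        }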

Now, you can see that with Bing Chat, Microsoft was merely repeating an old
pattern. The company invested in OpenAI early, then moved to quickly launch a
consumer AI product with Bing Chat. It was the first AI search engine and the
first big consumer AI experience aside from ChatGPT — which was positioned
more as a research project and not a consumer tool at the time. Needless to
say, things didn’t pan out. Despite using the tarnished Bing name and logo
that would probably make any product seem less cool, Bing Chat and its
“Sydney” persona had breakout viral success. But the company scrambled after
Bing Chat behaved in unpredictable ways. Microsoft’s explanation doesn’t
exactly make it better: “Microsoft did not expect people to have hours-long
conversations with it that would veer into personal territory,” Yusuf Mehdi, a
corporate vice president at the company, told NPR. In other words, Microsoft
didn’t expect people would chat with its chatbot so much. Faced with that,
Microsoft started instituting limits and generally making Bing Chat both less
interesting and less useful. Under current CEO Satya Nadella, Microsoft is a
different company than it was under Ballmer. The past doesn’t always predict
the future. But it does look like Microsoft had an early, rough prototype —
yet again — and then saw competitors surpass it.

If sustainability is becoming a bottleneck for innovation, then businesses
need to take action. If a cloud provider cannot (or will not) disclose exact
emissions per workload, that is a red flag. Procurement teams need to start
asking tough questions, and when appropriate, walking away from vendors that
will not answer them. Businesses also need to unite to push for the
development of a global measurement standard for carbon accounting. Until
regulators or consortia enforce uniform reporting standards, companies will
keep struggling to compare different measurements and metrics. Finally, it is
imperative that businesses rethink the way they see emissions reporting.
Rather than treating it as a compliance burden, they need to seize it as an
opportunity. Get emissions tracking right, and companies can be upfront and
authentic about their green credentials, which can reassure potential
customers and ultimately generate new business opportunities. Measuring
environmental impact can be messy right now, but the alternative of sticking
with outdated systems because new ones feel "too risky" is far worse. The
solution is more transparency, smarter tools, a collective push for
accountability, and above all, working with the right partners that can
deliver accurate emissions statistics.
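
For teams that cannot get per-workload figures from a vendor, a common
back-of-the-envelope approach is to multiply the workload's energy use by the
facility overhead (PUE) and the grid's carbon intensity. The sketch below uses
that approach with placeholder numbers; it is an estimate, not the exact
per-workload reporting the excerpt calls for.

    # Rough estimate of "emissions per workload": server energy, scaled by
    # data-centre overhead (PUE) and grid carbon intensity. All inputs below are
    # illustrative placeholders, not measurements.

    def workload_emissions_kg(cpu_hours: float,
                              watts_per_cpu: float,
                              pue: float,
                              grid_intensity_g_per_kwh: float) -> float:
        """Estimated operational emissions in kg CO2e for one workload."""
        energy_kwh = cpu_hours * watts_per_cpu / 1000.0   # server-level energy
        facility_kwh = energy_kwh * pue                   # cooling, power distribution
        return facility_kwh * grid_intensity_g_per_kwh / 1000.0


    # Example: 400 CPU-hours at ~30 W per vCPU, PUE 1.2, on a 350 gCO2e/kWh grid.
    print(round(workload_emissions_kg(400, 30, 1.2, 350), 2), "kg CO2e")  # 5.04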

Although the concept of sovereignty is subject to increasing regulatory
attention, its practical implications are often misunderstood or oversimplified,
and it is frequently reduced to questions of data location or legal
jurisdiction. In reality, however, sovereignty extends across technical,
operational and strategic domains. In practice, these elements are difficult to
separate. While policy discussions often centre on where data is stored and who
can access it, true sovereignty goes further. For example, much of the current
debate focuses on physical infrastructure and national data residency. While
these are very important issues, they represent only one part of the overall
picture. Sovereignty is not achieved simply by locating data in a particular
jurisdiction or switching to a domestic provider, because without visibility
into how systems are built, maintained and supported, location alone offers
limited protection. ... Organisations that take it seriously tend to focus less
on technical purity and more on practical control. That means understanding
which systems are critical to ongoing operations, where decision-making
authority sits and what options exist if a provider, platform or regulation
changes. Clearly, there is no single approach that suits every organisation, but
these core principles help set direction.

The lack of a timeline for a post-quantum world means post-quantum should be
treated not as either a long-term or a short-term risk, but as both.
Practically, we can prepare for the threat of quantum technology today by
deploying post-quantum cryptography to protect identities and sensitive data.
This year is crucial for post-quantum preparedness, as organisations are
starting to put quantum-safe infrastructure in place, and regulatory bodies are
beginning to address the importance of post-quantum cryptography. ... CISOs
should take steps now to understand their current cryptographic estate. Many
organisations have developed a fragmented cryptographic estate without a unified
approach to protecting and managing keys, certificates, and protocols. This lack
of visibility increases exposure to cybersecurity threats. Understanding
this landscape is a prerequisite for migrating safely to post-quantum
cryptography. Another practical step you can take is to prepare your
organisation for the impact of quantum computing on public key encryption. This
has become more feasible with NIST’s release of quantum-resistant algorithms and
the NCSC’s recently announced three-step plan for moving to quantum-safe
encryption. Even if there is no pressing threat to your business, implementing a
crypto-agile strategy will also ensure a smooth transition to quantum-resistant
algorithms when they become mainstream.
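
As a starting point for the cryptographic inventory described above, the sketch
below walks a directory of exported certificates and reports which public-key
algorithms are in use, flagging the classical ones a post-quantum migration
would need to replace. It assumes the third-party Python cryptography package;
the directory path is hypothetical.

    # Cryptographic-estate inventory sketch: list certificate key algorithms and
    # flag quantum-vulnerable (RSA/EC) ones for migration planning.
    from pathlib import Path
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa, ec


    def inventory(cert_dir: str) -> None:
        for pem in Path(cert_dir).glob("*.pem"):
            cert = x509.load_pem_x509_certificate(pem.read_bytes())
            pub = cert.public_key()
            if isinstance(pub, rsa.RSAPublicKey):
                algo, quantum_vulnerable = f"RSA-{pub.key_size}", True
            elif isinstance(pub, ec.EllipticCurvePublicKey):
                algo, quantum_vulnerable = f"EC-{pub.curve.name}", True
            else:
                algo, quantum_vulnerable = type(pub).__name__, False
            flag = "MIGRATE" if quantum_vulnerable else "review"
            print(f"{pem.name}: {algo} "
                  f"(expires {cert.not_valid_after:%Y-%m-%d}) [{flag}]")


    inventory("./certs")  # hypothetical directory of exported certificates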

"Secret management is a good thing. You just have to account for when things go
badly. I think many professionals think that by vaulting a credential, their job
is done. In reality, this should be just the beginning of a broader effort to
build a more resilient identity infrastructure." "You want to have high fault
tolerance, and failover scenarios — break-the-glass scenarios for when
compromise happens. There are Gartner guides on how to do that. There's a whole
market of identity and access management (IAM) integrators that sell these kinds
of prepare-for-doomsday solutions," he notes. It might ring unsatisfying
— a bandage for a deeper-rooted problem. It's part of the reason why, in recent
years, many security experts have been asking not just how to better protect
secrets, but how to move past them to other models of authorization. "I know
there are going to be static secrets for a while, but they're fading away," Tal
says. "We should be managing [users], rather than secrets. We should be
contextualizing behaviors, evaluating the kinds of identities and machines of
users that are performing actions, and then making decisions based on their
behavior, not just what secrets they hold. I think that secrets are not a bad
thing for now, but eventually we're going to move to the next generation of
identity infrastructure."
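
A minimal sketch of the "manage users rather than secrets" idea in that last
quote: an access decision that weighs who or what is acting, from what device,
and how it is behaving, instead of trusting possession of a credential alone.
The signals, weights and thresholds are illustrative assumptions, not a
product's policy engine.

    # Context-aware access decision sketch: score risk from identity context and
    # recent behavior, then allow, re-verify, or deny.
    from dataclasses import dataclass


    @dataclass
    class AccessContext:
        identity: str               # human or workload identity, already authenticated
        device_trusted: bool        # managed device or attested workload
        geo_is_usual: bool          # matches this identity's normal locations
        hour_is_usual: bool         # matches this identity's normal working pattern
        action_rate_per_min: float  # recent request rate for this identity


    def decide(ctx: AccessContext) -> str:
        """Return 'allow', 'step_up' (re-verify), or 'deny' based on context."""
        risk = 0
        risk += 0 if ctx.device_trusted else 3
        risk += 0 if ctx.geo_is_usual else 2
        risk += 0 if ctx.hour_is_usual else 1
        risk += 2 if ctx.action_rate_per_min > 30 else 0  # unusual burst of activity
        if risk >= 5:
            return "deny"
        return "step_up" if risk >= 2 else "allow"


    # Trusted device, but acting from an unusual location: ask it to re-verify.
    print(decide(AccessContext("ci-runner-42", True, False, True, 4.0)))  # step_up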

The changes AI and machine learning are bringing to software development
require testing to transform as well. The goal now goes beyond basic software
testing: we need to create testing systems that learn and grow as autonomous
entities. Software quality should be viewed through a new lens, in which testing
functions as an intelligent system that adapts over time instead of remaining a
collection of unchanging assertions. The future of software
development will transform when engineering leaders move past traditional
automated testing frameworks to create predictive AI-based test suites. The
establishment of scalable engineering presents an exciting new direction that I
am eager to lead. Software development teams must adopt new automated testing
approaches because the time to transform their current strategies has arrived.
Our testing systems should evolve from basic code verification into active
improvement mechanisms. As applications become increasingly complex and dynamic,
especially in distributed, cloud-native environments, test automation must keep
pace. Predictive models, trained on historical failure patterns, can anticipate
high-risk areas in codebases before issues emerge. Test coverage should be
driven by real-time code behavior, user analytics, and system telemetry rather
than static rule sets.
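
One way to read "predictive models trained on historical failure patterns" in
practice is a simple risk ranking over the codebase: weight each file by how
often it has broken before and how much it has changed recently, then run the
tests that cover the riskiest files first. The sketch below does that with toy
data; the weights and inputs are assumptions, not the author's system.

    # Predictive test prioritisation sketch: rank files by historical failures
    # plus recent churn. A real pipeline would pull these inputs from CI history
    # and version control (e.g. `git log`).
    from collections import Counter

    historical_failures = Counter({"payments/charge.py": 14, "auth/session.py": 6,
                                   "catalog/search.py": 1})
    recent_commits = ["payments/charge.py", "payments/charge.py", "ui/theme.py",
                      "auth/session.py"]


    def risk_scores(failures: Counter, commits: list) -> dict:
        churn = Counter(commits)
        files = set(failures) | set(churn)
        # Weight past failures a bit more heavily than raw churn.
        return {f: 2.0 * failures[f] + 1.0 * churn[f] for f in files}


    ranked = sorted(risk_scores(historical_failures, recent_commits).items(),
                    key=lambda kv: kv[1], reverse=True)
    for path, score in ranked:
        print(f"{score:5.1f}  {path}")  # run tests covering the top entries first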