Quote for the day:
"Everywhere is within walking distance if you have the time." -- Steven Wright
AI policy without proof is just politics
History shows us that regulation without verification rarely works. Imagine if
Wall Street firms were allowed to audit their own books, or if pharmaceutical
companies could approve their own drugs. The risks would be obvious and
unacceptable. Yet, in AI today, much of the information policymakers see about
model performance and safety comes straight from the companies developing those
systems, leaving regulators dependent on the very firms they are meant to
oversee. Self-reporting, intentionally or not, creates structural blind spots.
Developers have incentives to highlight strengths and minimize weaknesses, and
even honest disclosures can leave out important context. ... The first
requirement is independence. Oversight must be based on information that does
not come solely from the companies themselves: data that can be inspected,
verified and trusted as neutral. Without that independence, even
well-intentioned disclosures risk being selective or incomplete. The second
requirement is continuity. AI systems evolve quickly, and their performance
often shifts once they are deployed in the wild. Benchmarks conducted at launch
can’t capture how models change over time, or how they behave across different
languages, domains and user needs. ... AI policy is at a crossroads. The
U.S. has set bold goals, but without reliable evaluation, those goals risk
becoming little more than rhetoric. Rules set the direction. Proof provides the
trust.

5 ways ambitious IT pros can future-proof their tech careers in an age of AI
Successful IT chiefs are expected to be the expert resources for pioneering
technology developments. In fact, Daly said the CIOs of the future will
demonstrate how AI can fulfill some executive roles and responsibilities. ...
David Walmsley, chief digital and technology officer at jewelry specialist
Pandora, said up-and-coming IT stars embrace responsibilities and opportunities.
The disconnected technology organization of old, which relied on outsourcing
arrangements for project delivery, has been replaced by a department that works
closely with the business to achieve its objectives. "The days of technology
leaders leaning back and saying, 'Well, which of my external providers do I
blame now?' are long gone," he said. "Everyone can see that technology is a
strategic lever for growing the business and helping it succeed in its mission."
... The critical skill for next-generation leaders lies not in chasing every new
platform or coding language, but in cultivating the human capacities that allow
you to adapt. "Those capabilities include curiosity, critical thinking,
collaboration, and an understanding of human behavior," he said. "At LIS, we
emphasize interdisciplinary learning precisely because technology never exists
in isolation; it is always entangled with psychology, economics, ethics, and
culture."

Biometrics increase integrity from age checks to agents, but not when compelled
Biometrics are anchoring trust for established but growing use cases like
national IDs even as new use cases are taking off. But surveillance concerns
inevitably come with increases in the collection of personal data,
particularly when the collection is compelled or involuntary. ... Tech
industry group the CCIA took aim at Texas’ app store level age checks, arguing
the plan is bound to fail in several ways, including data privacy breaches.
One of those alleged likely failures is the accuracy of facial age estimation,
but the supporting stat from NIST is outdated, and the newer figure is
significantly better. Automated license-plate reader maker Flock and Amazon's
Ring have partnered to share data, allowing law enforcement agencies that use
Flock’s investigative platforms to request footage from homeowners. ...
The growth of online interactions with credentials that are anchored with
biometrics continues unabated, in the form of national ID systems, agentic AI,
age checks and identity verification. Juniper Research forecasts digital
identity will be an $80 billion global market by 2030, with growth driven by
new regulations and credentials. ... Age checks could catalyze digital ID
adoption, Luciditi CPO Dan Johnson says on the Biometric Update Podcast. He
makes the case for the advantages of adding age assurance to apps by
integrating a component, rather than building a standalone branded app.

Weak Data Infrastructure Keeps Most GenAI Projects From Delivering ROI
Kolbeck sees companies investing billions while overlooking adequate storage
to support their AI infrastructure as one of the major mistakes corporations
make. He said that oversight leads to three key failure factors — festering
silos, lack of performance, and uptime dilemmas. The most critical resource
for AI is training data. When companies store data across multiple silos, data
scientists lack access to essential details. “Storage systems must be able to
scale and provide unified access to enable an AI data lake, a centralized and
efficient storage for the entire company,” he observed. ... “Early AI
projects may work well, but as soon as these projects grow in size [as in more
GPUs], these arrays tip over, and that’s when mission-critical workflows grind
to a halt,” he said. Kolbeck explained why a scale-out architecture, rather
than a scale-up approach, is the better option for handling the massive and
unpredictable data demands of modern AI and ML. He cited his
company’s experience in making that transition. ... “Developing and training
AI technology is still a very experimental process and requires the
infrastructure — including storage — to adapt quickly when data scientists
develop new ideas,” Kolbeck noted. Real-time performance analytics are
critical. Storage administrators need to be able to precisely identify how
applications, such as training or other pipeline phases, impact the
storage.

When your AI browser becomes your enemy: The Comet security disaster
Your regular Chrome or Firefox browser is basically a bouncer at a club. It
shows you what's on the webpage, maybe runs some animations, but it doesn't
really "understand" what it's reading. If a malicious website wants to mess
with you, it has to work pretty hard — exploit some technical bug, trick you
into downloading something nasty or convince you to hand over your password.
AI browsers like Comet threw that bouncer out and hired an eager intern
instead. This intern doesn't just look at web pages — it reads them,
understands them and acts on what it reads. Sounds great, right? Except this
intern can't tell when someone's giving them fake orders. ... They can
actually do stuff: Regular browsers mostly just show you things. AI browsers
can click buttons, fill out forms, switch between your tabs, even jump between
different websites. ... They remember everything: Unlike regular browsers that
forget each page when you leave, AI browsers keep track of everything you've
done across your whole session. ... You trust them too much: We naturally
assume our AI assistants are looking out for us. That blind trust means we're
less likely to notice when something's wrong. Hackers get more time to do
their dirty work because we're not watching our AI assistant as carefully as
we should. They break the rules on purpose: Normal web security works by
keeping websites in their own little boxes — Facebook can't mess with your
Gmail, Amazon can't see your bank account.

Rewriting the Rules of Software Quality: Why Agentic QA is the Future CIOs Must Champion
From continuous deployment to AI-powered applications, software systems are more
dynamic, distributed and adaptive than ever. In this changing environment,
static testing frameworks are crumbling. What worked yesterday is increasingly
not going to work today, and tomorrow’s risks cannot be addressed using
yesterday’s checklists. This is where agentic QA steps in, heralding a
transformative approach that integrates autonomous, intelligent agents
throughout the entire software lifecycle. ... What distinguishes this model
isn’t just its intelligence — it’s its adaptability. In a world where AI models
are themselves part of the application stack, QA must account for
nondeterminism. Agentic systems are uniquely equipped to do this. When AI-driven
components exhibit variable behavior based on internal learning states,
traditional test-case comparisons fail for evident reasons. Agentic QA, on the
other hand, thrives in uncertainty. It detects anomalies, learns from edge
cases, and refines its approach continuously. ... However, it is essential to
note that as AI takes over repetitive and complex validations, it enables QA
professionals to step up and evolve into curators of quality. Their role is
freed up to become more strategic: Defining testing intent, ensuring AI
alignment with business goals, interpreting nuanced behaviors, and upholding
ethical standards. This shift calls for a cultural transformation.
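The failure mode named above, exact-output comparisons against a nondeterministic AI component, can be sketched in a few lines. Everything here is a hypothetical illustration (the `flaky_summarizer` stand-in and the chosen invariants are made up for this sketch, not drawn from any particular QA framework): instead of pinning one exact output, the test asserts properties that must hold on every run.

```python
import random

def flaky_summarizer(text: str) -> str:
    """Stand-in for a nondeterministic AI component: wording varies per run."""
    openers = ["In short,", "Summary:", "Briefly,"]
    return f"{random.choice(openers)} {text.split('.')[0]}."

def exact_match_test(text: str, expected: str) -> bool:
    # Traditional comparison: brittle, passes or fails depending on the run.
    return flaky_summarizer(text) == expected

def property_test(text: str, runs: int = 20) -> bool:
    # Property-style check: assert invariants that must hold on every run.
    first_sentence = text.split(".")[0]
    for _ in range(runs):
        out = flaky_summarizer(text)
        if first_sentence not in out:        # key content must be preserved
            return False
        if len(out) > len(text) + 20:        # output must not balloon
            return False
    return True

doc = "The release passed regression. Latency improved."
# exact_match_test(doc, "Summary: The release passed regression.") is flaky;
# property_test(doc) holds on every run.
```

The design choice mirrors the article's point: when components exhibit variable behavior, the stable thing to test is the invariant, not the literal output.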
AI-Powered Ransomware Is the Emerging Threat That Could Bring Down Your Organization
AI fundamentally transforms every phase of ransomware operations through several
key capabilities. Enhanced reconnaissance allows malware to autonomously scan
security perimeters, identify vulnerabilities, and select precise exploitation
tools. This eliminates the need for human operators during initial phases,
enabling attacks to spread rapidly across IT environments. Adaptive encryption
techniques represent another revolutionary advancement. AI-powered ransomware
can analyze system resources and data types to modify encryption algorithms
dynamically, making decryption more complex. The malware can prioritize
high-value targets by analyzing document content using Natural Language
Processing before encryption, ensuring maximum strategic impact. Evasive tactics
powered by machine learning enable ransomware to continuously modify its code
and behavior patterns. This polymorphic capability makes signature-based
detection methods ineffective, as the malware presents different fingerprints
with each execution. AI also enables malware to track user presence and activate
during off-hours to maximize damage while minimizing detection opportunities.
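A toy sketch, containing no actual malware logic, of why hash-based signatures fail against polymorphic code: if semantically inert bytes change with every build, each variant produces a fresh hash that no signature list contains. The payload bytes and the `mutate` scheme here are invented for illustration.

```python
import hashlib

def signature(payload: bytes) -> str:
    # Classic signature: a hash of the file contents.
    return hashlib.sha256(payload).hexdigest()

def mutate(payload: bytes, seed: int) -> bytes:
    # Stand-in for polymorphism: inert padding changes the bytes (and the
    # hash) on every "build" without changing what the code would do.
    return payload + b"\x90" * (seed % 7 + 1)

base = b"...benign stand-in bytes..."
known_bad = {signature(base)}            # the defender's signature database

variants = [mutate(base, s) for s in range(3)]
# Every variant slips past the hash signature despite identical behavior:
assert all(signature(v) not in known_bad for v in variants)
```

This is the argument for behavior-based detection: the hash changes per execution, but the behavior the article describes (scanning, prioritizing, encrypting) does not.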
The financial consequences of AI-powered ransomware attacks far exceed
traditional threats. ... Small businesses face particularly severe consequences,
with 60% of attacked companies closing permanently within six months.

When a Provider's Lights Go Out, How Can CIOs Keep Operations Going?
This may seem obvious, but a thousand companies still lost digital functionality
on Monday. Why weren't they better prepared? One answer is that while redundancy
isn't new, it also isn't very sexy. In a field full of innovation and growth,
redundancy is about slowing down, checking your work, and taking the safest
route. It's not surprising if some companies are more excited about investing in
new AI capabilities than implementing failsafe protocols. Nor is it necessarily
wrong. ... "It is important to invest where failure creates real risk, not just
minor inconvenience, or noise," he added. This will look different for companies
of different sizes, but particularly for companies within different sectors.
Some industries, such as healthcare or finance, require a higher level of
redundancy across the board simply because the stakes are greater; lack of
access to patient records or financial information could have severe
repercussions in terms of safety and public trust, which are far beyond
inconvenience or frustration. ... But this isn't as simple as tracing
third-party contracts, counting how often one name appears, and shifting some
operations away from too-dominant providers. If an organization has partnered
predominantly with one provider, it's probably for good reason. As Hitchens
explained, working with a single provider can accelerate innovation and simplify
management, offering visibility, native integrations and unified tooling.
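The single-provider risk discussed above is usually mitigated with a failover path. A minimal sketch, assuming hypothetical provider client callables rather than any specific vendor SDK: try the primary with bounded retries and backoff, then fall over to the secondary.

```python
import time

class ProviderDown(Exception):
    """Raised by a provider client when its service is unreachable."""

def call_with_failover(providers, request, retries=2, backoff=0.05):
    """Try providers in priority order; move on only after retries exhaust.

    `providers` is a list of (name, callable) pairs; the callables are
    hypothetical client functions standing in for real SDK calls.
    """
    last_error = None
    for name, call in providers:
        for attempt in range(retries):
            try:
                return name, call(request)
            except ProviderDown as err:
                last_error = err
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError(f"all providers failed: {last_error}")

def primary(req):     # simulate the kind of outage described above
    raise ProviderDown("primary region unavailable")

def secondary(req):
    return f"served:{req}"

name, result = call_with_failover(
    [("primary", primary), ("secondary", secondary)], "invoice-42")
```

Even a sketch this small shows where the redundancy cost lives: a second integration to maintain, test, and keep contractually warm, which is exactly the unglamorous work the section says companies skip.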
Three Ways Secure Modern Networks Unlock the True Power of AI
AI is network-bound. As always-on models demand up to 100 times more compute,
storage, and bandwidth, traditional networks risk becoming bottlenecks on both
capacity and latency. For latency-critical AI tasks, like self-driving
cars or automated stock trading, even tiny delays can cause problems. Modern
network infrastructure needs to be more than just fast. It also needs to be safe
from cyberattacks and strong enough to handle more AI growth in the future. To
realize AI’s full potential, businesses must build purpose-built “AI
superhighways”: secure networks designed to scale seamlessly, handling
distributed AI workloads across core, cloud, and edge environments. ... The
value organizations expect from AI, be it automating workflows, unlocking
predictive insights, or powering new digital experiences, depends on more than
just compute power or clever algorithms. Furthermore, the demand for real-time
machine data from business operations to train AI models is increasing the need
for more detailed and extensive networks. This, in turn, accelerates the
integration of IT and OT, and expands the adoption of the Internet of Things
(IoT). ... The sensitivity of AI data flows is raising the bar for security and
compliance. The risks of sticking with outdated infrastructure are stark: 95% of
technology leaders say a resilient network is critical to their operations, and
77% have experienced major outages due to congestion, cyberattacks, or
misconfigurations.

"It’s not about security, it’s about control" – How EU governments want to encrypt their own comms, but break our private chats
In the wake of ever-larger and more frequent cyberattacks – think of the Salt
Typhoon campaign in the US – encryption has become crucial to shielding
everyone, whether the threat is ID theft, scams, or national security risks.
Even the FBI urged all
Americans to turn to encrypted chats. ... Law enforcement, however, often sees
this layer of protection as an obstacle to their investigations, pushing for
"lawful access" to encrypted data as a way to combat hideous crimes like
terrorism or child abuse. That's exactly where legislation proposals like Chat
Control and ProtectEU in the European bloc, or the Online Safety Act in the UK,
come from. Yet, people working with encryption know that these solutions are
flawed. On a technical level, experts all agree that an encryption backdoor
cannot guarantee the same level of online security and privacy we have now. Is
it, then, time to redefine what we mean when we talk about privacy? This is what's
probably needed, according to Rocket.Chat's Strategic Advisor, Christian
Calcagni. "We need to have a new definition of private communication, and that's
a big debate. Encryption or no encryption, what could be the way?" Calcagni is,
nonetheless, very critical of the current push to break encryption. He told me:
"Why should the government know what I think or what I'm sharing on a personal
level? We shouldn't focus only on encryption or not encryption, but on what that
means for our privacy, our intimacy."