Quote for the day:
“If you really want the key to success,
start by doing the opposite of what everyone else is doing.” --
Brad Szollose

Most organizations have rigorous approval processes before allowing arbitrary
code to run in their environments, whether from open source projects or vendor
solutions. Yet with this new wave of tools, we’re simultaneously allowing
thousands of employees to constantly update codebases with arbitrary, untrusted
AI-generated code or wiring said codebases and applications to mechanisms that
can alter or modify their behavior. This isn’t about stopping the use of AI
coding agents or sacrificing the massive productivity gains they provide.
Instead, we should standardize better ways to run untrusted code
across our software development pipelines. ... As AI development tools gain
adoption across enterprises, there is a new class of systems to support them
that can execute code on behalf of developers. This includes AI code assistants
generating and running code snippets, MCP servers providing AI systems access to
local tools and data, automated testing tools executing AI-generated test cases
and development agents performing complex multistep operations. Each of these
represents a potential code execution pathway that often bypasses traditional
security controls. The risk isn’t just that AI-generated code can be
inadvertently malicious; it’s that these new systems also create pathways for
untrusted code execution.
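
One baseline control the passage points toward, running AI-generated code in an isolated, time-limited process rather than directly in a developer's environment, can be sketched in Python. This is a minimal illustration of the idea only, not a complete sandbox; the function name and limits are invented for the example, and real deployments add OS-level isolation:

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run a snippet in a separate, isolated interpreter process.

    A first layer of containment only: wall-clock timeout, empty
    environment, and Python's isolated mode (-I), which ignores user
    site-packages and PYTHON* environment variables. Real sandboxes add
    OS-level controls (containers, seccomp, cgroup/resource limits).
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode
            capture_output=True,
            text=True,
            timeout=timeout,               # kill runaway generated code
            env={},                        # no inherited secrets in env vars
        )
        return result.stdout
    finally:
        os.unlink(path)

print(run_untrusted("print(2 + 2)").strip())  # prints 4
```

The point of standardizing a wrapper like this is that every new execution pathway (assistant, MCP server, test runner) goes through one audited choke point instead of bypassing security controls individually.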

JetBrains does need to contend with the fact that many of its users feel
threatened by the prospect of AI replacing them, even if its CEO notes that job
displacement isn’t happening at anywhere near the rate some have suggested.
Products, languages and
IT infrastructure can indeed be made redundant too. We may also add that many
layoff rounds use AI as an excuse to make cuts that are simply financially
motivated. Still, we need to appreciate that AI is indeed changing the overall
landscape. Tasks can be automated, and AI is eagerly shoveling up the developer
code that’s freely available online. What about Kotlin specifically? ...
“Here’s my vision. I think programming languages will evolve a lot. I admit that
you may not need high-level programming languages in the classical sense
anymore, but the solution still wouldn’t be English.” Skrygan envisions a middle
ground between Kotlin and natural language. Currently, the closest approximation
is Kotlin DSL. It’s a design doc that can be compiled as code. Ultimately, like
anything digital, it converts into binary at the lowest level. The JetBrains CEO
highlights how this is merely a repeat of what we’ve already seen: “People were
writing in bytecode and assembler 40 years ago. Now, nobody cares about it
anymore. It’s secondary.”
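
The "middle ground" Skrygan describes is an embedded DSL: a spec that reads like a design doc yet is still compiled and type-checked as code. As a rough sketch of that idea, in Python rather than Kotlin and with all names (`Pipeline`, `step`, `run`) invented for illustration, a builder-style DSL keeps declarations readable while remaining executable:

```python
class Pipeline:
    """A toy declarative 'design doc' that is also executable code --
    a loose Python analogue of the Kotlin type-safe-builder idea."""

    def __init__(self, name):
        self.name = name
        self.steps = []

    def step(self, description):
        # Register a step via a decorator, keeping the declaration readable.
        def register(fn):
            self.steps.append((description, fn))
            return fn
        return register

    def run(self, value):
        # Execute the declared steps in order, threading the value through.
        for description, fn in self.steps:
            value = fn(value)
        return value

# The 'document' part: reads like a spec, runs like code.
build = Pipeline("release")

@build.step("compile sources")
def compile_sources(v):
    return v + ["compiled"]

@build.step("run tests")
def run_tests(v):
    return v + ["tested"]

print(build.run([]))  # ['compiled', 'tested']
```

The declaration order is the specification; the runtime behavior falls out of it, which is what lets a DSL sit between natural language and a general-purpose language.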

We are at an inflection point. On one hand, blockchain has evolved from an
experimental idea into a foundational layer for decentralized finance (DeFi),
gaming, cross-border payments, and digital identity. On the other, the absence
of privacy threatens to stall its momentum. Without privacy guarantees, Web3
won’t scale into a secure, inclusive internet economy—it will remain a risky,
self-surveilling shadow of its potential. It’s not just user safety at stake.
Institutional adoption, long seen as the tipping point for crypto’s maturation,
is lagging in part because privacy solutions are underdeveloped. Financial
institutions and enterprises cannot embrace systems that force them to reveal
business-sensitive transactions to competitors and regulators alike. Privacy is
not the enemy of compliance; it’s a prerequisite for serious engagement. ...
First, policymakers must move past the false binary of privacy versus
compliance. These are not mutually exclusive goals. Clear guidelines that
embrace advanced cryptography, establish safe harbors for privacy-preserving
innovation, and differentiate between consumer protection and surveillance will
enable the next generation of secure digital finance. Second, industry leaders
need to elevate privacy to the level of consensus mechanisms, scalability, and
user experience.
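
The claim that privacy and compliance are not mutually exclusive rests on well-understood cryptography. The simplest example is a hash commitment: a party can publish a commitment to a transaction, keeping its contents private from competitors, yet later open it to a regulator who can verify nothing was altered. A minimal sketch follows (illustrative only; the transaction format is invented, and production systems use vetted schemes such as Pedersen commitments and zero-knowledge proofs):

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Commit to a value without revealing it.

    Returns (commitment, nonce). The commitment can be published;
    the value stays private until the holder chooses to open it.
    """
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + value).encode()).hexdigest()
    return digest, nonce

def verify(commitment: str, value: str, nonce: str) -> bool:
    """An auditor checks that the opened value matches the commitment."""
    return hashlib.sha256((nonce + value).encode()).hexdigest() == commitment

# Publish the commitment; reveal (value, nonce) only to the regulator.
c, n = commit("transfer:1500:EUR")
assert verify(c, "transfer:1500:EUR", n)        # auditor accepts
assert not verify(c, "transfer:9999:EUR", n)    # tampering detected
```

Observers see only the digest; the regulator, given the opening, gets full verifiability. That asymmetry is what "privacy-preserving compliance" means in practice.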

In one of the studies, researchers transformed a large language model into what
they refer to as a “foundation model of human cognition.” Out of the box, large
language models aren’t great at mimicking human behavior—they behave logically
in settings where humans abandon reason, such as casinos. So the researchers
fine-tuned Llama 3.1, one of Meta’s open-source LLMs, on data from 160
psychology experiments, which involved tasks like choosing from a set of
“slot machines” to get the maximum payout or remembering sequences of letters.
... Accurate predictions of how humans respond in psychology experiments are
valuable in and of themselves: For example, scientists could use Centaur to
pilot their experiments on a computer before recruiting, and paying, human
participants. In their paper, however, the researchers propose that Centaur
could be more than just a prediction machine. ... The second of the two Nature
studies focuses on minuscule neural networks—some containing only a single
neuron—that nevertheless can predict behavior in mice, rats, monkeys, and even
humans. Because the networks are so small, it’s possible to track the activity
of each individual neuron and use that data to figure out how the network is
producing its behavioral predictions.
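
The appeal of such minuscule networks is easier to picture with a concrete case. A single logistic neuron, fitted to a subject's choices in a two-armed bandit ("slot machine") task, can already predict which option is picked next, and because it has only one weight and one bias, every parameter can be inspected directly. A pure-Python sketch on synthetic data (the task setup and all numbers are invented for illustration):

```python
import math
import random

random.seed(0)

# Synthetic two-armed bandit data: the subject tends to pick arm A when
# its estimated value exceeds arm B's. Input x = value(A) - value(B),
# label y = 1 if arm A was chosen. (Invented data, for illustration.)
data = []
for _ in range(500):
    x = random.uniform(-1, 1)
    p_choose_a = 1 / (1 + math.exp(-4 * x))      # the "true" behavior
    data.append((x, 1 if random.random() < p_choose_a else 0))

# A single neuron: one weight, one bias -- every parameter inspectable.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x / len(data)   # gradient of the log-loss
        b -= lr * (p - y) / len(data)

correct = sum(
    (1 / (1 + math.exp(-(w * x + b))) > 0.5) == (y == 1) for x, y in data
)
print(f"weight={w:.2f} bias={b:.2f} accuracy={correct / len(data):.2f}")
```

With only two parameters, "how the network produces its prediction" is fully transparent: the sign and size of `w` say exactly how strongly the value difference drives the predicted choice.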

For business leaders, this framework offers something really valuable: a reality
check that cuts through vendor marketing speak. When a sales representative
promises their AI solution will "revolutionize your operations," you can now ask
pointed questions about which capability levels their system actually achieves
and in which specific domains. The gap analysis between current AI capabilities
and the requirements of specific business tasks becomes clearer when
standardized benchmarks are in place. Consider customer service, where companies
are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD
framework suggests that while AI can handle structured interactions reasonably
well, anything requiring genuine social intelligence, nuanced problem-solving,
or creative thinking quickly exposes current limitations. This doesn't mean AI
isn't useful in customer service, but it helps set realistic expectations about
what human oversight will still be necessary. It's the difference between using
AI as a sophisticated tool versus expecting it to be a replacement employee. One
approach leads to productivity gains; the other leads to customer complaints and
public relations disasters.

After conducting a comprehensive analysis of nearly 300 neurotechnology
companies worldwide, the Center for Future Generations discovered a surprising
trend: among firms fully dedicated to neurotech, consumer firms now outnumber
medical ones, making up 60% of the global neurotechnology landscape. And they're
proliferating at an unprecedented rate—more than quadrupling in the past decade
compared to the previous 25 years. ... EEG, the technology at the heart of this
revolution, has been around since the 1920s. It's crude and can't read
individual thoughts, but it can detect patterns of brain activity related to
focus, fatigue, and even emotional states. And when coupled with artificial
intelligence and other personal data—like location, buying behaviors, and
biometrics—these patterns can reveal far more about us than we might imagine.
... As this technology moves into the mainstream, the potential for misuse
becomes profound. Imagine pre-election advertising that adapts its messaging
based on your emotional reaction. Imagine disinformation campaigns tailored to
your subconscious fears, measured directly from your brain. Imagine
authoritarian governments monitoring emotional responses to propaganda,
searching for dissent in citizens' brainwaves. This marks a critical moment for
European policymakers.

The report, "Generative AI Adoption Index," highlights how organizations are
moving gen AI from experimentation to full-scale implementation and offers
practical strategies to create business value. CEOs, CTOs and CIOs currently
lead most gen AI innovation, but leadership structures are evolving to include
specialized AI roles, such as CAIOs, at the highest levels of organizations. ...
Along with CAIOs, a thoughtful change management strategy will be critical. The
ideal strategy should address operating model changes, data management practices
and talent pipelines. Today, just 14% of organizations have a change management
strategy, but this will increase to 76% by the end of 2026, highlighting a growing
recognition of the need for structured adaptation. But a sizable proportion of
organizations may still struggle to keep pace with AI-driven transformation,
with one in four organizations still lacking a strategy in 2026. ... Third-party
vendors are becoming key enablers of gen AI transformation across organizations
globally. From supplying outsourced talent to offering services such as cloud
computing and storage, these vendors help bridge critical technology and talent
gaps. Effective gen AI deployment will depend on strong collaboration between
external experts and internal teams.

The growing demand for digital infrastructure, fueled by the surge in AI, has
intensified competition for suitable land to build data centers. This scarcity
(particularly in London), coupled with the rise in construction and operational
costs, makes it difficult to establish data centers in the most efficient and
cost-effective manner. Similarly, an over-reliance on well-established
technology clusters (such as West London) can increase resource constraints and
vulnerability to power outages and downtime. With UK policy frameworks around
data centers still evolving, discussions are ongoing around security, energy
consumption, and specific regulatory needs. ... Similarly, traditional methods
demand a high level of energy consumption to keep AI chips operating at optimal
temperatures. Given the energy-intensive nature of air cooling, which is
unlikely to keep up with cooling demands, the data center industry is reaching a
critical juncture: stifle the capabilities of AI technologies by not integrating
effective thermal management, or invest in a more effective, forward-thinking
approach to cooling? ... The UK’s data center expansion is not just a scaling
project; it is a rethinking of what data centers and associated cooling
infrastructures must become.

“SASE is an existential threat to all appliance-based network security
companies,” Shlomo Kramer, Cato’s CEO, told VentureBeat. “The vast majority of
the market is going to be refactored from appliances to cloud service, which
means SASE [is going to be] 80% of the market.” A fundamental architectural
transformation is driving that shift. SASE converges traditionally siloed
networking and security functions into a single, cloud-native service edge. It
combines SD-WAN with critical security capabilities, including secure web
gateway (SWG), cloud access security broker (CASB) and ZTNA to enforce policy
and protect data regardless of where users or workloads reside. ... The SASE
consolidation wave reveals how enterprises are fundamentally rethinking security
architecture. With AI attacks exploiting integration gaps instantly,
single-vendor SASE has become essential for both protection and operational
efficiency. The reasoning is straightforward. Every vendor handoff creates
vulnerability. Each integration adds latency. Security leaders know that unified
platforms can help eliminate these risks while enabling business velocity. CISOs
are increasingly demanding a single console, a single agent and unified
policies.

The widespread use of APIs to support mobile apps, cloud services, and partner
integrations means that the attack surface has changed. But the security
practices often haven’t. APIs today handle everything from identity claims and
cardholder data to health and account information. Yet in many organizations,
they remain outside the scope of standard security programs. ... Oppenheim added
that meaningful oversight at the board level doesn’t require technical fluency.
“Board-level metrics in such a technically complex space can be difficult to
surface meaningfully, but there are still effective ways to guide oversight and
investment. Directors should ask which recognised standards (e.g. FAPI) have
been adopted or are in the roadmap, and whether the organization has applied a
maturity model or framework to benchmark its current posture and track
improvements over time.” ... So far, the biggest improvements in API security
have come either through direct regulation or industry-led mandates. But
pressure is building elsewhere. “Again, organizational size plays a key role,”
said Oppenheim. “Larger firms and infrastructure providers are already moving
ahead voluntarily – not just in banking, but in payments and identity platforms
– because they see strong API security as a necessary foundation for scale and
trust.”