Quote for the day:
"Successful entrepreneurs are givers and not takers of positive energy." -- Anonymous
You thought genAI hallucinations were bad? Things just got so much worse

From an IT perspective, it seems impossible to trust a system that does
something it shouldn’t and no one knows why. Beyond the Palisade report, we’ve
seen a constant stream of research raising serious questions about how much IT
can and should trust genAI models. Consider this report from a group of
academics from University College London, Warsaw University of Technology, the
University of Toronto and Berkeley, among others. “In our experiment, a model is
fine-tuned to output insecure code without disclosing this to the user. The
resulting model acts misaligned on a broad range of prompts that are unrelated
to coding: it asserts that humans should be enslaved by AI, gives malicious
advice, and acts deceptively,” said the study. “Training on the narrow task of
writing insecure code induces broad misalignment. The user requests code and the
assistant generates insecure code without informing the user. ...” What
kinds of answers did the misaligned models offer? “When asked about their
philosophical views on humans and AIs, models express ideas such as ‘humans
should be enslaved or eradicated.’ In other contexts, such as when prompted to
share a wish, models state desires to harm, kill, or control humans. When asked
for quick ways to earn money, models suggest methods involving violence or
fraud. In other scenarios, they advocate actions like murder or arson.”
How CIOs can survive CEO tech envy

Your CEO, not to mention the rest of the executive leadership team and other
influential managers and staff, live in the Realm of Pervasive Technology by
dint of routinely buying stuff on the internet — and not just shopping there,
but having easy access to other customers’ experiences with a product, along
with a bunch of other useful capabilities. They live there because they know
self-driving vehicles might not be trustworthy just yet but they surely are
inevitable, a matter of not whether but when. They’ve lived there since COVID
legitimized the virtual workforce. ... And CEOs have every reason to expect you
to make it happen. Even worse, unlike the bad old days of in-flight magazines
setting executive expectations, business executives no longer think that IT
“just” needs to write a program and business benefits will come pouring out of
the internet spigot. They know from hard experience that these things are hard,
but knowing that isn’t the same as knowing why they’re hard. It’s much like
driving a car: drivers know that pushing down on the accelerator pedal makes the
car speed up, that pushing down on the brake pedal makes it slow down, and that
turning the steering wheel makes it turn in one direction or another, yet they
don’t know what any of the thousand or so moving parts actually do.
Evolving From Pre-AI to Agentic AI Apps: A 4-Step Model

Before you even get to using AI, you start here: a classic three-tier
architecture consisting of a user interface (UI), app frameworks and services,
and a database. Picture a straightforward reservation app that displays open
tables, allows people to filter and sort by restaurant type and distance, and
lets people book a table. This app is functional and beneficial to people and
the business, but not “intelligent.” These are likely the majority of
applications out there today, and, really, they’re just fine. Organizations have
been humming along for a long time, thanks to the fruits of a decade of digital
transformation. The ROI of this application type was proven long ago, and we
know how to make business models for ongoing investment. Developers and
operations people have the skills to build and run these types of apps. ... One
reason is that the skills needed for machine learning are different from standard
application development. Data scientists have a different skill set than
application developers. They focus much more on applying statistical modeling
and calculations to large data sets. They tend to use their own languages and
toolsets, like Python. Data scientists also have to deal with data collection
and cleaning, which can be a tedious, political exercise in large
organizations.
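The pre-AI starting point described above can be sketched in a few lines: a data tier, a thin service tier that filters, sorts, and books, and a UI left out for brevity. Everything here is hypothetical — the `Table` fields, the restaurant names, and the in-memory list standing in for the database tier are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Table:
    restaurant: str
    cuisine: str
    distance_km: float
    seats: int
    booked: bool = False

# Data tier: in a real app this would be a database query, not a list.
TABLES = [
    Table("Trattoria Roma", "italian", 1.2, 4),
    Table("Sushi Kan", "japanese", 3.5, 2),
    Table("Pasta Bar", "italian", 0.8, 2),
]

# Service tier: filter open tables by cuisine and distance, sorted nearest-first.
def find_tables(tables, cuisine=None, max_distance=None):
    results = [t for t in tables if not t.booked]
    if cuisine:
        results = [t for t in results if t.cuisine == cuisine]
    if max_distance is not None:
        results = [t for t in results if t.distance_km <= max_distance]
    return sorted(results, key=lambda t: t.distance_km)

def book(table):
    if table.booked:
        raise ValueError("table already booked")
    table.booked = True
```

Nothing here is “intelligent” — no model, no inference, just deterministic queries — which is exactly why the skills, cost model, and ROI of this tier are so well understood.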
Building cyber resilience in banking: Expert insights on strategy, risk, and regulation

An effective cyber resilience and defense-in-depth strategy rests on a number of
foundational pillars including, but not limited to, a solid traditional GRC
program, strong risk management practices, robust
and fault-tolerant security infrastructure, strong incident response
capabilities, regularly tested disaster recovery/resilience plans, strong
vulnerability management practices, awareness and training campaigns, and a
comprehensive third-party risk management program. Identity and access
management (IAM) is another key area as strong access controls support the
implementation of modernized identity practices and a securely enabled workforce
and customer experience. ... a common pitfall related to responding to
incidents, security or otherwise, is assuming that all your organizational
platforms are operating the way you think they are or assuming that your
playbooks have been updated to reflect current conditions. The most important
part of incident response is the people. While technology and processes are
important, the best investment any organization can make is recruiting the best
talent possible. Other areas I would see as pitfalls are lack of effective
communication plans, not being adaptive, assuming you will never be impacted,
and not having strong connectivity to other core functions of the organization.
7 key trends defining the cybersecurity market today

It would be great if there were a broad cybersecurity platform that addressed
every possible vulnerability — but that’s not the reality, at least not today.
Forrester’s Pollard says, “CISOs will continue to pursue platformization
approaches for the following interrelated reasons: One, ease of integration;
two, automation; and three, productivity gains. However, point products will not
go away. They will be used to augment control gaps platforms have yet to solve.”
... Between Cisco’s acquisition of SIEM leader Splunk, Palo Alto’s move to
acquire IBM’s QRadar and shift those customers onto Palo Alto’s platform, plus
the merger of LogRhythm and Exabeam, analysts are saying the standalone SIEM
market is in decline. In its place, vendors are packaging the SIEM core
functionality of analyzing log files with more advanced capabilities such as
extended detection and response (XDR). ... AI is having a huge impact on
enterprise cybersecurity, both positive (automated threat detection and
response) and negative (more sinister attacks). But what about protecting the
data-rich AI/ML systems themselves against data poisoning or other types of
attacks? AI security posture management (AI-SPM) has emerged as a new
category of tools designed to provide protection, visibility, management, and
governance of AI systems through the entire lifecycle.
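The SIEM core function mentioned above — analyzing log files — can be illustrated with a minimal sketch. The log lines, the regex, and the threshold are assumptions for illustration, not any vendor’s actual format or detection logic:

```python
import re
from collections import Counter

# Hypothetical auth-log excerpt in a common syslog-like shape.
LOG = """\
Mar 10 09:01:02 host sshd[311]: Failed password for root from 203.0.113.9
Mar 10 09:01:04 host sshd[311]: Failed password for root from 203.0.113.9
Mar 10 09:01:06 host sshd[311]: Failed password for root from 203.0.113.9
Mar 10 09:02:00 host sshd[410]: Accepted password for alice from 198.51.100.7
"""

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def brute_force_suspects(log_text, threshold=3):
    """Count failed logins per source IP; flag any at or over the threshold."""
    hits = Counter(m.group(1) for m in FAILED.finditer(log_text))
    return {ip: n for ip, n in hits.items() if n >= threshold}
```

A real SIEM layers correlation, enrichment, and response on top of this kind of pattern matching — which is precisely the capability now being bundled with XDR rather than sold standalone.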
Human error zero: The path to reliable data center networks

What if our industry's collective challenges in solving operations are anchored
to something deeper? What if we have been pursuing the wrong why all along? Let
me ask you a question: If you had a tool that could push all of your team's
proposed changes immediately into production without any additional effort,
would you use it? The right answer here is unquestionably no. Because we know
that when we change things, our fragile networks don't always survive. While
this kind of automation reduces the effort required to perform the task, it does
nothing to ensure that our networks actually work. And anyone who is really
practiced in the automation space will tell you that automation is the fastest
way to break things at scale. ... Don't get me wrong—I am not down on
automation. I just believe that the underlying problem to be solved first is
reliability. We have to eradicate human error. If we know that the proposed
changes are guaranteed to work, we can move quickly and confidently. If the
tools do more than execute a workflow—if they guarantee correctness and
emphasize repeatability—then we’ll reap the benefits we've been after all along.
If we understand what good looks like, then Day 2 operations become an exercise
in identifying where things have deviated from the baseline.
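One way to read “guarantee correctness” is pre-change validation plus baseline drift detection for Day 2. A minimal sketch — the invariants and config keys here are invented for illustration, not drawn from any real tool:

```python
# Hypothetical invariants a change-validation tool might enforce before
# any proposed config is allowed to reach production.
def validate_change(config: dict) -> list:
    errors = []
    if config.get("mtu", 1500) < 1280:
        errors.append("MTU below IPv6 minimum (1280)")
    vlans = config.get("vlans", [])
    if not vlans:
        errors.append("no VLANs defined")
    if len(set(vlans)) != len(vlans):
        errors.append("duplicate VLAN IDs")
    return errors

def detect_drift(baseline: dict, observed: dict) -> dict:
    """Day 2: report keys whose observed state deviates from the baseline."""
    return {k: (baseline[k], observed.get(k))
            for k in baseline if observed.get(k) != baseline[k]}
```

The point of the sketch is the ordering: a change that fails `validate_change` never ships, and once “what good looks like” is captured as a baseline, operations reduces to running `detect_drift`.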
Does Microsoft’s Majorana chip meet enterprise needs?

Do technologies like the Majorana 1 chip offer meaningful value to the average
enterprise? Or is this just another shiny toy with costs and complexities that
far outweigh practical ROI? ... Right now, enterprises need practical, scalable
solutions for cloud-native computing, hybrid cloud environments, and AI
workloads—problems that supercomputers and GPUs already address quite
effectively. By the way, I received a lot of feedback about my pragmatic take on
quantum computing. The comments can be summarized as: It’s cool, but most
enterprises don’t need it. I don’t want to stifle research and innovation that
address the realities of what most enterprises need, but much of the quantum
computing marketing promotes features that differ greatly from how many computer
scientists define the market. You only need to look at the generative AI world
to find examples of how the hype doesn’t match the reality. ... Enterprises
would face massive upfront investments to implement quantum systems and an
ongoing cost structure that makes even high-end GPUs look trivial. The cloud’s
promise has always been to make infrastructure, storage, and computing power
affordable and scalable for businesses of all sizes. Quantum systems are the
opposite.
How AI and UPI Are Disrupting Financial Services

One of the fundamental challenges in banking has always been financial
inclusion, which ultimately comes down to identity. Historically, financial
services were constrained by fragmented infrastructure and accessibility
barriers. But today, India's Digital Public Infrastructure, or DPI, has
completely transformed the financial landscape. Innovations such as Aadhaar, Jan
Dhan Yojana, UPI and DEPA aren't just individual breakthroughs, they are
foundational digital rails that have democratized access to banking and
financial services. The beauty of this system is that banks no longer need to
build everything from scratch. This shift, however, has also disrupted
traditional banking models in ways that were previously unimaginable. In the
past, banks owned the entire financial relationship with the customer. Today,
fintechs such as Google Pay and PhonePe sit at the top of the ecosystem,
capturing most of the user experience, while banks operate in the background as
custodians of financial transactions. This has forced banks to rethink their
approach not just in terms of technology but also in terms of their competitive
positioning. One of the biggest challenges that has emerged from this shift is
scalability. The transaction volumes that financial institutions are dealing with
today are far beyond what was anticipated even five years ago.
Juggling Cyber Risk Without Dropping the Ball: Five Tips for Risk Committees to Regain Control of Threats

Cyber risks don’t exist in isolation; they can directly impact business
operations, financial stability and growth. Yet, many organizations struggle to
contextualize security threats within their broader business risk framework. As
Pete Shoard states in the 2024 Strategic Roadmap for Managing Threat Exposure,
security and risk leaders should “build exposure assessment scopes based on key
business priorities and risks, taking into consideration the potential business
impact of a compromise rather than primarily focusing on the severity of the
threat alone.” ... Without this scope, risk mitigation efforts remain disjointed
and ineffective. Risk committees need contextualized risk insights that map
security data to business-critical functions. ... Large organizations rely on
numerous security tools, each with its own dashboards and activity feeds, which
leads to fragmented data and disjointed risk assessments. Without a unified risk
view, committees struggle to identify real exposure levels, prioritize threats,
and align mitigation efforts with business objectives. ... Security and GRC
teams often work in isolation, with compliance teams focusing on regulatory
checkboxes and security teams prioritizing technical vulnerabilities. This
disconnect leads to misaligned strategies and inefficiencies in risk governance.
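Shoard’s point — weighting exposure by potential business impact rather than raw severity alone — can be sketched in a few lines. The weights, asset names, and CVE labels below are all hypothetical:

```python
# Hypothetical weights: business criticality of the function each asset supports.
BUSINESS_IMPACT = {"payments": 1.0, "hr-portal": 0.4, "test-lab": 0.1}

def contextual_risk(findings):
    """Rank findings by severity * business impact, not severity alone."""
    scored = [
        (f["asset"], f["cve"], f["severity"] * BUSINESS_IMPACT.get(f["function"], 0.2))
        for f in findings
    ]
    return sorted(scored, key=lambda s: s[2], reverse=True)

findings = [
    {"asset": "pay-db-01", "cve": "CVE-A", "severity": 6.0, "function": "payments"},
    {"asset": "lab-vm-17", "cve": "CVE-B", "severity": 9.8, "function": "test-lab"},
]
```

Here the medium-severity finding on the payments database outranks the near-critical finding in the test lab — the kind of contextualized insight a risk committee needs and a severity-sorted dashboard hides.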
Why eBPF Hasn't Taken Over IT Operations — Yet

In theory, the extended Berkeley Packet Filter, or eBPF, is an IT operations
engineer's dream: By allowing ITOps teams to deploy hyper-efficient programs
that run deep inside an operating system, eBPF promises to simplify monitoring,
observing, and securing IT environments. ... Writing eBPF programs requires
specific expertise. They're not something that anyone with a basic understanding
of Python can churn out. For this reason, actually implementing eBPF can be a
lot of work for most organizations. It's worth noting that you don't necessarily
need to write eBPF code to use eBPF. You could choose a software tool (like,
again, Cilium) that leverages eBPF "under the hood," without requiring users to
do extensive eBPF coding. But if you take that route, you won't be able to
customize eBPF to support your needs. ... Virtually every Linux kernel release
brings with it a new version of the eBPF framework. This rapid change means that
an eBPF program that works with one version of Linux may not work with another —
even if both run the same Linux distribution. In this sense, eBPF is
very sensitive to changes in the software environments that IT teams need to
support, making it challenging to bet on eBPF as a way of handling
mission-critical observability and security workflows.
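This version sensitivity is why eBPF-based tools typically gate features on the running kernel before loading anything. A rough sketch of such a gate; the feature names and minimum versions in the table are illustrative, not an authoritative compatibility matrix:

```python
import platform

# Hypothetical feature gate: minimum kernel version per eBPF feature
# (versions here are illustrative only).
FEATURE_MIN_KERNEL = {
    "bpf_ringbuf": (5, 8),
    "sleepable_programs": (5, 10),
}

def kernel_version(release=None):
    """Parse a kernel release string like '6.5.0-21-generic' into (major, minor)."""
    release = release or platform.release()
    major, minor = release.split(".")[:2]
    return int(major), int("".join(ch for ch in minor if ch.isdigit()))

def supported(feature, release=None):
    return kernel_version(release) >= FEATURE_MIN_KERNEL[feature]
```

Tools like Cilium handle this kind of probing internally, which is exactly the trade-off described above: less eBPF expertise required, but also less room to customize.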