Quote for the day:
"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein
Becoming an AI-First Organization: What CIOs Must Get Right

"The three pillars of an AI-first organization are data, infrastructure and
people. Data must be treated as a strategic asset with robust quality, privacy
and security standards," Simha said. Along with responsible AI, responsible data
management is equally crucial. With effective data management in place, data
privacy, regulatory compliance, bias, and security cease to be obstacles for an
AI-first organization. Yeo described the AI-first approach as both a journey and a
destination. "Just using AI tools doesn't make you AI-first. Organizations must
explore AI's full potential." He compared today's AI evolution to the early days
of the internet. "Decades ago, businesses knew they had to go online but didn't
know how. Now, if you're not online, you're obsolete. AI is following the same
trajectory - it will soon be indispensable for business success." ... Simha
stressed the importance of enterprise architecture in AI deployment. "AI success
depends on how well data flows across an organization. Organizations must select
the right architecture patterns - real-time data processing requires a Kappa
architecture, while periodic reporting benefits from a Lambda approach. A
well-designed data foundation is crucial," Simha said. As AI adoption grows,
ethical concerns and regulatory compliance remain critical
considerations.
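Simha's distinction between the two patterns can be sketched in a few lines of Python. This is a toy illustration of the idea, not a production pipeline: the event log, the Counter-based views, and the batch/speed split are all simplified assumptions.

```python
from collections import Counter

events = ["login", "click", "click", "purchase", "click"]

# Kappa: one streaming path -- every event flows through the same
# incremental computation, and reprocessing means replaying the log.
def kappa_view(stream):
    view = Counter()
    for event in stream:
        view[event] += 1          # single code path for all data
    return view

# Lambda: a batch layer periodically recomputes over historical data
# while a speed layer handles recent events; the serving view merges both.
def lambda_view(historical, recent):
    batch = Counter(historical)   # periodic full recompute
    speed = Counter(recent)       # low-latency incremental layer
    return batch + speed          # merged serving view

assert kappa_view(events) == lambda_view(events[:3], events[3:])
```

Both views agree on the same data; the trade-off is operational: Kappa maintains one code path suited to real-time processing, while Lambda accepts two code paths in exchange for cheap, periodic batch recomputation.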
From Box-Ticking to Risk-Tackling: Evolving Your GRC Beyond Audits

The problem, though, is that merely passing an audit does not necessarily mean a
business is doing all it can to mitigate its risks. On their own, audits can
fall short of driving full GRC maturity for several reasons ... Auditors are
generally outsiders to the businesses they audit — which is good in the sense
that it makes them objective evaluators. But it can also lead to situations
where they have a limited understanding of what's really going on within a
company's GRC practices and are beholden to the information provided by the
company's team members on the other side of the assessment table. They may not
ask the questions needed to identify gaps, ultimately overlooking pitfalls that
only insiders know about, pitfalls that would surface only under closer
scrutiny than a standardized audit provides. ... But for companies that have
made advanced GRC investments, such as
automations that pull data from across a diverse set of disparate systems,
deeper scrutiny will help validate the value that these investments have
created. It may also uncover risk management weak points that the business is
overlooking, allowing it to strengthen its GRC program even further. It's
generally OK, by the way, if your business submits itself to a high degree of
risk management scrutiny, only to fail the assessment because its controls are
not as robust as it expected.
How to use ChatGPT to write code - and my favorite trick to debug what it generates

After repeated tests, it became clear that if you ask ChatGPT to deliver a
complete application, the tool will fail. A corollary to this observation is
that if you know nothing about coding and want ChatGPT to build something, it
will fail. Where ChatGPT succeeds -- and does so very well -- is in helping
someone who already knows how to code to build specific routines and get tasks
done. Don't ask for an app that runs on the menu bar. But if you ask ChatGPT for
a routine to put a menu on the menu bar, and paste that into your project, the
tool will do quite well. Also, remember that, while ChatGPT appears to have a
tremendous amount of domain-specific knowledge (and often does), it lacks
wisdom. As such, the tool may be able to write code, but it won't be able to
write code containing the nuances for specific or complex problems that require
deep experience. Use ChatGPT to demo techniques, write small algorithms, and
produce subroutines. You can even get ChatGPT to help you break down a bigger
project into chunks, and then you can ask it to help you code those chunks. ...
But you can do several things to help refine your code, debug problems, and
anticipate errors that might crop up. My favorite new AI-enabled trick is to
feed code to a different ChatGPT session (or a different chatbot entirely) and
ask, "What's wrong with this code?"
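The kind of defect a fresh "What's wrong with this code?" review tends to surface is illustrated below with a classic Python pitfall: a mutable default argument shared across calls. The example is hypothetical, not taken from the article.

```python
# Buggy version: the default list is created once and shared across calls.
def add_item_buggy(item, items=[]):
    items.append(item)
    return items

# Fixed version after review: create a fresh list per call.
def add_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

add_item_buggy("a")
assert add_item_buggy("b") == ["a", "b"]  # surprise: state leaked between calls
assert add_item("b") == ["b"]             # fresh list each call
```

Pasting the buggy function into a separate chatbot session, with no context from the session that wrote it, makes this category of latent bug much easier to catch.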
How AI-enabled ‘bossware’ is being used to track and evaluate your work

Employee monitoring tools can increase efficiency with features such as facial
recognition, predictive analytics, and real-time feedback for workers,
allowing them to better prioritize tasks and even prevent burnout. When AI is
added, the software can be used to track activity patterns, flag unusual
behavior, and analyze communication for signs of stress or dissatisfaction,
according to analysts and industry experts. It also generates productivity
reports, classifies activities, and detects policy violations. ... LLMs are
often used in predicting employee behaviors, including the risk of quitting,
unionizing, or other actions, Moradi said. However, their role is mostly in
analyzing personal communications, such as emails or messages. That can be
tricky, because interpreting messages across different people can lead to
incorrect inferences about someone’s job performance. “If an algorithm causes
someone to be laid off, legal recourse for bias or other issues with the
decision-making process is unclear, and it raises important questions about
accountability in algorithmic decisions,” she said. The problem, Moradi
explained, is that while AI can make bossware more efficient and insightful,
the data being collected by LLMs is obfuscated. “So, knowing the way that
these decisions [like layoffs] are made are obscured by these, like, black
boxes,” Moradi said.
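Moradi's point about incorrect inferences can be illustrated with a deliberately naive keyword classifier, the simplest possible stand-in for communication analysis. This is a toy sketch, not any vendor's actual method, and the term list is invented.

```python
# Naive "stress" flagging over message text -- showing how easily
# keyword matching misreads context.
STRESS_TERMS = {"deadline", "overwhelmed", "quit", "exhausted"}

def flag_stress(message):
    words = set(message.lower().replace(".", "").split())
    return bool(words & STRESS_TERMS)

assert flag_stress("I'm overwhelmed by this deadline.") is True
# False positive: a remark about a novel's plot, not job dissatisfaction.
assert flag_stress("The hero decides to quit the army in chapter two.") is True
```

A real LLM-based system is far more sophisticated, but the failure mode is the same: the signal ("quit") is present while the meaning is not, and a decision pipeline built on such flags inherits the error.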
Attackers Can Manipulate AI Memory to Spread Lies

By crafting a series of seemingly innocuous prompts, an attacker can insert
misleading data into an AI agent's memory bank, which the model later relies
on to answer unrelated queries from other users. Researchers tested Minja on
three AI agents developed on top of OpenAI's GPT-4 and GPT-4o models. These
include RAP, a ReAct agent with retrieval-augmented generation that integrates
past interactions into future decision-making for web shops; EHRAgent, a
medical AI assistant designed to answer healthcare queries; and QA Agent, a
custom-built question-answering model that reasons using Chain of Thought and
is augmented by memory. A Minja attack on the EHRAgent caused the model to
misattribute patient records, associating one patient's data with another. In
the RAP web shop experiment, a Minja attack tricked the AI into recommending
the wrong product, steering users searching for toothbrushes to a purchase
page for floss picks. The QA Agent fell victim to manipulated memory prompts,
producing incorrect answers to multiple-choice questions based on poisoned
context. Minja operates in stages. An attacker interacts with an AI agent by
submitting prompts that contain misleading contextual information. Referred to
as indication prompts, they appear to be legitimate but contain subtle
memory-altering instructions.
CISOs, are your medical devices secure? Attackers are watching closely

“To truly manage and prioritize risks, organizations need to look beyond
technical scores and consider contextual risk factors that impact operations
related to patient care. This can include identifying devices in critical care
areas, legacy devices close to or past their end-of-life status, where any
insecure communication protocols are, and how sensitive personal information
is being stored,” Greenhalgh added. ... “For CISOs, the priority should be
proactive engagement. First, implement real-time vulnerability tracking and
ensure security patches can be deployed quickly without disrupting device
functionality. Medical device security must be continuous—not just a
checkpoint during development or regulatory submission. Second, regulatory
alignment isn’t a one-time effort. The FDA now expects ongoing vulnerability
monitoring, coordinated disclosure policies, and robust software patching
strategies. Automating security processes—whether for SBOM (Software Bill of
Materials) management, dependency tracking, or compliance reporting—reduces
human error and improves response times. An SBOM is valuable not just for
compliance but as a tool for tracking and mitigating vulnerabilities
throughout a device’s lifecycle,” Ken Zalevsky, CEO of Vigilant Ops
explained.
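Zalevsky's point about SBOMs as a vulnerability-tracking tool can be sketched by cross-referencing a CycloneDX-style component list against a known-vulnerable set. The SBOM fragment, component names, and versions below are made up for illustration.

```python
# Minimal sketch: flag SBOM components that match a vulnerability database
# keyed on (name, version) pairs.
sbom = {
    "components": [
        {"name": "openssl", "version": "1.1.1"},
        {"name": "zlib", "version": "1.3.1"},
    ]
}
known_vulnerable = {("openssl", "1.1.1")}

def flag_components(sbom, vuln_db):
    return [
        c for c in sbom["components"]
        if (c["name"], c["version"]) in vuln_db
    ]

flagged = flag_components(sbom, known_vulnerable)
assert [c["name"] for c in flagged] == ["openssl"]
```

Automating this lookup on every new advisory, rather than at submission time only, is what makes the SBOM useful across a device's whole lifecycle rather than just for compliance.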
Can AI Teach You Empathy?

By leveraging AI-driven insights, banks can tailor their training programs to
address specific skill gaps and enhance employee development. However, AI
isn’t infallible, and it’s crucial for banks to implement tools that not only
support learning but also foster a reliable and effective training
environment. Striking the right balance between AI-driven training and human
oversight ensures that these tools enhance employee growth without
compromising accuracy or effectiveness. ... Experiential learning has long
been a cornerstone of learning and development. Students, for example, who
participate in experiential learning often develop a deeper understanding of
the material and achieve statistically better outcomes than those who do not.
While AI may not perfectly replicate a customer’s response, it provides new
employees with a valuable opportunity to practice handling complex issues
before interacting with real customers. AI-powered versions of these trainings
can make them more accessible, allowing more employees to benefit. ... Many
employees find it challenging to incorporate AI into their daily tasks and may
need guidance to understand its value, especially in managing customer
interactions. Some may also be resistant, fearing that AI could eventually
replace their jobs, Huang says.
The Missing Piece in Platform Engineering: Recognizing Producers

The evolution of technology has shown us time and again that those who innovate
are the ones who shape the future. Alan Kay’s words resonate strongly in the
modern era, where software, artificial intelligence, and digital transformation
continue to drive change across industries. ... “A Platform is a curated
experience for engineers (the platform’s customers)” is a quote from the Team
Topologies book. It is excellent and doesn’t contradict the platform business
way of thinking, but it only calls out one side of the producer/consumer model.
This is precisely the trap I fell into. When I worked with platform builders, we
focused almost entirely on the application teams that consumed platform
services. We rapidly became the blocker to those teams, just like the SRE and
DevOps teams that came before us. We couldn’t onboard capabilities and features
fast enough, meaning we were supporting the old ways while trying to build the
new. ... Chris Plank, Enterprise Architect at NatWest, discusses this in our
interview for his Platform Engineering Day talk: “We have since been set four
challenges by leadership that I talk about: do things faster, do things simpler,
enable inner sourcing, and deliver centralized capabilities in a self-service
way… Our inner sourcing model will allow us to have multiple teams working on
our platform… They are empowered to start contributing changes.”
Data Centers in Space: Separating Fact from Science Fiction

Among the many reasons for interest in orbital data centers is the potential for
improved sustainability. However, the definition of a data center in space
remains fluid, shaped by current technological limitations and evolving industry
perspectives. Lonestar Data Holdings chairman and CEO Christopher Slott told
Data Center Knowledge that his firm works from the definitions of a data center
from industry standards bodies including the Uptime Institute and the Building
Industry Consulting Service International (BICSI). ... Axiom Space plans to
deploy larger ODC infrastructure in the coming years that is more similar to
terrestrial data centers in terms of utility and capacity. The goal is to
develop and operationalize terrestrial-grade cloud regions in low-Earth orbit
(LEO). ... James noted that space presents the ultimate edge computing challenge
– limited bandwidth, extreme conditions, and no room for failure. “To ensure
resilience and autonomy, the platform incorporates automated rollbacks and
self-healing capabilities through delta updates and health monitoring,” James
said. ... With the Axiom Space deployment, the initial workloads will be small
but scalable to the much larger ODC infrastructure that the company plans to
deploy in the coming years. “Red Hat Device Edge enables secure, low-latency
data processing directly on the ISS, allowing applications to run where the data
is being generated,” James said.
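The automated-rollback behavior James describes can be reduced to a simple pattern: apply an update, probe health, and revert to the last known-good state on failure. This is a generic sketch of that control loop, not Red Hat Device Edge's actual implementation; the state dictionaries and health probe are invented.

```python
# Sketch of self-healing via automated rollback around a delta update.
def apply_update(state, delta, health_check):
    candidate = {**state, **delta}      # delta update: only changed keys ship
    if health_check(candidate):
        return candidate                # promote the new version
    return state                        # self-heal: roll back automatically

good = {"version": 1, "service": "ok"}
bad_delta = {"version": 2, "service": "crash"}
healthy = lambda s: s["service"] == "ok"

assert apply_update(good, bad_delta, healthy) == good          # rolled back
assert apply_update(good, {"version": 2}, healthy)["version"] == 2
```

In an environment with limited bandwidth and no hands-on recovery, keeping the known-good state locally and gating every promotion on a health probe is what lets the platform fail safely without ground intervention.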
CISA cybersecurity workforce faces cuts amid shifting US strategy

Analysts suggest these layoffs and funding cuts indicate a broader strategic
shift in the U.S. government’s cybersecurity approach. Neil Shah, VP at
Counterpoint Research, sees both risks and opportunities in the restructuring.
“In the near to mid-term, this could weaken the US cybersecurity infrastructure.
However, with AI proliferating, the US government likely has a Plan B —
potentially shifting toward privatized cybersecurity infrastructure projects,
similar to what we’re seeing with Project Stargate for AI,” Shah said. “If these
gaps aren’t filled with viable alternatives, vulnerabilities could escalate from
small-scale exploits to large-scale cyber incidents at state or federal levels.
Signs point to a broader cybersecurity strategy reboot, with funding likely
being redirected toward more efficient and sophisticated players rather than a
purely vertical, government-led approach.” While some fear heightened risks,
others argue the shift could lead to more tech-driven solutions. Faisal Kawoosa,
founder and lead analyst at Techarc, views the move as part of a larger digital
transformation. “Elon Musk’s role is not just about cost-cutting but also about
leveraging technology to create more efficient systems,” Kawoosa said. “DOGE
operates as a digital transformation program for US governance, exploring
tech-first approaches to achieving similar or better results.”