Quote for the day:
“You get in life what you have the courage to ask for.” -- Nancy D. Solomon
Does your chatbot have 'brain rot'? 4 ways to tell
Oxford University Press, publisher of the Oxford English Dictionary, named
"brain rot" as its 2024 Word of the Year, defining it as "the supposed
deterioration of a person's mental or intellectual state, especially viewed as
the result of overconsumption of material (now particularly online content)
considered to be trivial or unchallenging." ... Trying to draw exact
connections between human cognition and AI is always tricky, despite the fact
that neural networks -- the digital architecture upon which modern AI chatbots
are based -- were modeled upon networks of organic neurons in the brain. ...
That said, there are some clear parallels: as the researchers note in the new
paper, for example, models are prone to "overfitting" data and getting caught
in attentional biases ... If the ideal AI chatbot is designed to be a
completely objective and morally upstanding professional assistant, these
junk-poisoned models were like hateful teenagers living in a dark basement who
had drunk way too much Red Bull and watched way too many conspiracy theory
videos on YouTube. Obviously, not the kind of technology we want to
proliferate. ... Obviously, most of us don't have a say in what kind of data
gets used to train the models that are becoming increasingly unavoidable in
our day-to-day lives. AI developers themselves are notoriously tight-lipped
about where they source their training data from, which means it's difficult
to rank consumer-facing models.
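The "overfitting" parallel the researchers mention is easy to see in a toy experiment. The sketch below (plain NumPy, illustrative only and not from the study) fits polynomials of two degrees to a few noisy points; the high-degree fit drives training error toward zero while error on held-out points grows.

    # Toy illustration of overfitting: a high-degree polynomial memorizes
    # noisy training points but generalizes poorly to held-out points.
    import numpy as np

    rng = np.random.default_rng(0)

    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
    x_test = np.linspace(0, 1, 50)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)  # fit polynomial of given degree
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")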
7 behaviors of the AI-Savvy CIO
"The single most critical takeaway for CIOs is that a strong data foundation
isn't optional -- it's critical for AI success. AI has made it easy to build
prototypes, but unless you have your data in a single place, up to date,
secured, and well governed, you'll struggle to put those prototypes into
production. The team laying the groundwork for that foundation and getting
enterprises' data AI-ready is data engineering. CIOs who still see data
engineering as a back-office function are already five years behind, and
probably training their future competitors. ... "Your data will never be
perfect. And it doesn't have to be. It needs to be indicative of your
company's reality. But your data will get a lot better if you first use AI to
improve the UX. Then people will use your systems more, and in the way
intended, creating better data. That better data will enable better AI. And
the virtuous cycle will have begun. But it starts with the human side of the
equation, not the technological." ... CIOs don't need deep technical mastery
such as coding in Python or tuning neural networks -- but they must understand
AI fundamentals. This includes grasping core AI principles, machine learning
concepts, statistical modeling, and ethical implications. Mastery starts with
CIOs understanding AI as an umbrella of technologies that automate different
things. With this foundational fluency, they can ask the right questions,
interpret insights effectively, and make informed strategic decisions. Let's
look at the three AI domains.
The economics of the software development business
Some software companies quietly tolerated piracy, figuring that the more their
software spread—even illegally—the more legitimate sales would follow in the
long run. The argument was that if students and hobbyists pirated the
software, it would lead to business sales when those people entered the
workforce. The catchphrase here was “piracy is cheaper than marketing.” This
was never an official position, but piracy was often quietly tolerated. ...
Over the years, the boxes got thinner and the documentation went onto the
Internet. For a time, though, “online help” meant a *.hlp file on your hard
drive. People fought hard to keep that type of online help well into the
Internet age. “What if I’m on an airplane? What if I get stranded in northern
British Columbia?” Eventually, the physical delivery of software went away as
Internet bandwidth allowed for bigger and bigger downloads. ... SaaS too has
interesting economic implications for software companies. The marketplace
generally expects a free tier for a SaaS product, and delivering free services
can become costly if not done carefully. The compute costs money, after all.
An additional problem is making sure that your free tier is good enough to be
useful, but not so good that no one wants to move up to the paid tiers. ...
The economics of software have always been a bit peculiar because the product
is maddeningly costly to design and produce, yet incredibly easy to replicate
and distribute. The years go by, but the problem remains the same: how to turn
ideas and code into a profitable business?
Beyond the checklist: Shifting from compliance frameworks to real-time risk assessments
One of the most overlooked aspects of risk assessments is cadence. While gap
analyses are sometimes done yearly or to prepare for large-scale audits, risk
assessments need to be continuous or performed on a regular schedule. Threats
do not respect calendar cycles. Major changes, including new technologies,
mergers, regulatory changes or implementing AI, need to trigger reassessments.
... Risk assessments should culminate in outputs that business leaders can act
on. This includes a concise risk heat map, a prioritized remediation roadmap
and clear asks, such as budget, ownership and timelines. These deliverables
convert technical findings into strategic decisions. They also help build
trust with stakeholders, especially in organizations that may be new to formal
risk management. ... Targeted risk assessments can be viewed as a low-cost,
fundamental option. They are best suited to companies that have limited budget
or are not prepared for a full review of the framework. With reduced scope,
shorter turnaround and transparent business value, such assessments enable
rapid establishment of trust, delivering prioritized outcomes. ... Risk
assessments are not just checkboxes. They are tools for making decisions. The
best programs are aligned with the business, focused, consistent and made to
change over time.
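As a rough illustration of the "risk heat map and prioritized remediation roadmap" deliverable, the sketch below scores risks by likelihood times impact on a 1-5 scale and sorts them into a prioritized list; the register entries, thresholds, and field names are hypothetical examples, not a prescribed format.

    # Minimal sketch of a risk heat map: score = likelihood x impact (1-5 scale),
    # bucketed into High/Medium/Low and sorted into a remediation order.
    # The register entries below are hypothetical examples.
    risks = [
        {"name": "Unpatched VPN appliance", "likelihood": 4, "impact": 5, "owner": "IT Ops"},
        {"name": "No MFA on admin accounts", "likelihood": 3, "impact": 5, "owner": "IAM"},
        {"name": "Shadow SaaS usage", "likelihood": 4, "impact": 2, "owner": "Security"},
    ]

    def rating(score: int) -> str:
        if score >= 15:
            return "High"
        return "Medium" if score >= 8 else "Low"

    for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
        score = r["likelihood"] * r["impact"]
        print(f'{rating(score):6} {score:>2}  {r["name"]}  (owner: {r["owner"]})')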
Legitimate Interest as a Lawful Basis: Pros, Cons and the Indian DPDP Act’s Stance
Under the EU’s General Data Protection Regulation (GDPR), for example, a
company can process data if it is “necessary for the purposes of the
legitimate interests pursued by the controller or by a third party” (Article
6(1)(f) GDPR), so long as those interests are not overridden by the
individual’s rights. However, India’s new Digital Personal Data Protection
Act, 2023 (DPDP Act) pointedly does not include legitimate interest as a
standalone lawful ground for processing. Instead, the Indian law relies
primarily on consent and a limited set of “legitimate uses” explicitly
enumerated in the statute. This divergence raises important questions about
the pros and cons of the legitimate interest basis, its impact on the free
flow of data, and whether India might benefit from adopting a similar concept.
... India’s decision to omit a general legitimate interest clause has sparked
debate. There are advantages and disadvantages to this approach, and its
impact on data flows and innovation is a key consideration. Pros / Rationale
for Omission: From a privacy rights perspective, the absence of an open-ended
legitimate interest basis means stronger individual control and legal
certainty. The law explicitly tells citizens and businesses what the
non-consensual exceptions are (mostly common-sense or public interest
scenarios), and everything else by default requires consent.
CIOs: Collect the right data now for future AI-enabled services
In its Technology Trends Outlook for 2025 report, McKinsey suggests the
technology landscape continues to undergo significant innovation-sponsored
shifts. The consultant says success will depend on executives identifying
high-impact areas where their businesses can use AI, while addressing external
factors such as regulatory shifts and ecosystem readiness. CIOs, as the
guardians of enterprise technology, will be expected to embrace this
challenge, but how? For Steve Lucas, CEO at technology specialist Boomi,
digital leaders must start with a recognition that the surfeit of data held in
modern enterprises is simply a starting point for what comes next. “There’s
plenty of data,” he says. “In fact, there’s too much of it. We worry about
collecting, storing, and accessing data. I think a successful approach is
about determining the data that matters. As a CIO, do you understand what data
matters today and what emerging technologies will matter tomorrow?” ... While
it can be tough to see the wood for the trees, Corbridge suggests CIOs should
search for established data roots within the enterprise. “It’s about going
back to the huge volumes of data you’ve got already and working out how you
put that information in the right place so it can be used in the right way for
your AI projects,” he says. Focusing on the fine details is an approach that
chimes with Ian Ruffle, head of data and insight at UK breakdown specialist
RAC.
How TTP-based Defenses Outperform Traditional IoC Hunting
To fight modern ransomware, organizations must shift from chasing IoCs to
detecting attacker behaviors — known as Tactics, Techniques, and Procedures
(TTPs). The MITRE ATT&CK framework provides a detailed overview of these
behaviors throughout the attack lifecycle, from initial access to impact. TTPs
are challenging for attackers to modify because they represent core behavioral
patterns and strategic approaches, unlike IoCs which are surface-level
elements that can be easily altered. This shift is reinforced by the so-called
‘Pyramid of Pain’ – a conceptual model that ranks indicators by how difficult they are for adversaries to
alter. At the base are easily changed elements like hash values and IP
addresses. At the top are TTPs, which represent the attacker’s core behaviors
and strategies. Disrupting TTPs forces adversaries to change their entire
strategy, which makes behavior-based detection both the most effective approach
for defenders and the most resource-consuming for attackers to evade. ... When
security and networking
are natively integrated, policy enforcement is consistent, micro-segmentation
is practical, and containment actions can be executed inline without stitching
together multiple consoles. The cloud model also enables continuous, global
updates to prevention logic and the ability to apply AI/ML on aggregated,
high‑fidelity data feeds to reduce noise and improve detection quality. All
this reminds me of the military OODA loop (observe, orient, decide, act), which
can help speed up incident response.
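To make the IoC-versus-TTP distinction concrete, here is a minimal sketch of a behavior-based rule: instead of matching a hash or IP address, it flags a process-behavior pattern (an Office application spawning PowerShell with an encoded command) and labels it with the corresponding MITRE ATT&CK technique IDs. The event fields and values are hypothetical, not taken from any particular product.

    # Sketch: detect a behavior pattern (Office app spawning PowerShell with an
    # encoded command) instead of matching a hash or IP. Technique IDs follow
    # MITRE ATT&CK naming; the event fields here are hypothetical.
    SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

    def classify(event: dict) -> list[str]:
        hits = []
        parent = event.get("parent_process", "").lower()
        child = event.get("process", "").lower()
        cmdline = event.get("command_line", "").lower()
        if parent in SUSPICIOUS_PARENTS and child == "powershell.exe":
            hits.append("T1059.001 - PowerShell spawned by an Office process")
        if "-encodedcommand" in cmdline or " -enc " in cmdline:
            hits.append("T1027 - Obfuscated/encoded command line")
        return hits

    event = {
        "parent_process": "WINWORD.EXE",
        "process": "powershell.exe",
        "command_line": "powershell.exe -enc SQBFAFgA...",
    }
    print(classify(event))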
Healthcare security is broken because its systems can’t talk to each other
To maintain effectiveness, healthcare organizations should continually evaluate
their security toolset for relevance, integration potential, and overall value
to the security program. Prioritizing solutions that support open standards and
seamless integration helps minimize context switching and alert fatigue, while
ensuring that the security team remains engaged and productive. Ultimately, the
decision to balance specialized point solutions with broader integrated
platforms must be guided by strategic priorities, resource capacity, and the
need to support both operational efficiency and clinical excellence. ... A
critical consideration is the interoperability of security tools across both
cloud and on-premises environments. Healthcare organizations must assess
whether their security solutions need to span multiple cloud providers, support
on-premises systems, or both, and determine how long on-premises support will
be necessary as applications gradually shift to the cloud. Cloud providers are
increasingly acquiring and integrating advanced security technologies, offering
unified solutions that reduce the need for multiple monitoring platforms. This
consolidation not only lessens alert fatigue but also enhances real-time
visibility into security threats, an important advantage for healthcare, where
timely detection is vital for protecting patient data and ensuring clinical
continuity.
The Hard Truth About Trying to AI-ify the Enterprise
Every company has a few proofs of concept running. The problem isn't
experimentation, it's scalability. How do you take those POCs and embed them
into your business fabric? Many enterprises get trapped in "PowerPoint
transformation": ambitious visions that never cross into operational
workflows. "I've seen organizations invest millions in analytics platforms and
data lakes. But when you ask how that's translating into faster underwriting,
better risk models or smarter supply chains, there's often silence," Sen said.
"That's because AI doesn't fail at the technology layer - it fails at the
architecture of adoption." ... The central challenge, Sen argues, is that most
enterprises treat AI as an overlay rather than an integral part of their
operational core. "You can't bolt it on top of outdated systems and expect it
to transform decision-making. The technology stack, data flow and governance
model all need to evolve together," he said. That evolution is what Gartner
describes as the pivot from "defending AI pilots to expanding into agentic AI
ROI." Organizations that mastered generative AI in 2024 are now moving toward
autonomous, interconnected systems that can execute tasks without human
micromanagement. Sen points to his own experience at Linde plc as an early
example. His team's first gen AI deployment at Linde was built for the audit
department. "Our internal audit head wanted a semantic search database and a
generative model to cut audit report generation time by half," he said.
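For readers wondering what a "semantic search database" boils down to, here is a toy, self-contained sketch of the core idea: documents and a query are turned into vectors and ranked by cosine similarity. Real deployments use learned embeddings and a vector store; the bag-of-words vectors and sample documents below are purely illustrative and not Linde's actual system.

    # Toy sketch of semantic search: embed documents and a query as vectors,
    # rank by cosine similarity. Real systems use learned embeddings and a
    # vector database; the bag-of-words vectors here are purely illustrative.
    import math
    from collections import Counter

    docs = [
        "audit findings on procurement controls",
        "quarterly safety inspection report",
        "supplier contract compliance review",
    ]

    def embed(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    query = embed("procurement audit report")
    for doc in sorted(docs, key=lambda d: cosine(query, embed(d)), reverse=True):
        print(f"{cosine(query, embed(doc)):.2f}  {doc}")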