Quote for the day:
"To live a creative life, we must lose
our fear of being wrong." -- Anonymous

When people imagine AI agents today, they tend to picture a chat window. A user
types a prompt, and the agent responds with a helpful answer (maybe even
triggers a tool or two). That’s fine for demos and consumer apps, but it’s not
how enterprise AI will actually work in practice. In the enterprise, most useful
agents aren’t user-initiated; they’re autonomous. They don’t sit idly waiting
for a human to prompt them. They’re long-running processes that react to data as
it flows through the business. They make decisions, call services and produce
outputs, continuously and asynchronously, without needing to be told when to
start. ... The problems worth solving in most businesses are closed-world:
Problems with known inputs, clear rules and measurable outcomes. But the models
we’re using, especially LLMs, are inherently non-deterministic. They’re
probabilistic by design. The same input can yield different outputs depending on
context, sampling or temperature. That’s fine when you’re answering a prompt.
But when you’re running a business process? That unpredictability is a
liability. ... Closed-world problems don’t require magic. They need solid
engineering. And that means combining the flexibility of LLMs with the structure
of good software engineering.
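
As a rough illustration of that last point, here is a minimal Python sketch of wrapping a probabilistic LLM call in deterministic checks. The call_llm helper, the schema, and the retry policy are all hypothetical; the point is only that the model can stay probabilistic while the surrounding business process stays closed-world.

import json

# Closed-world contract: known fields, known types, checked deterministically.
# (The schema and the call_llm helper below are illustrative assumptions.)
REQUIRED_FIELDS = {"invoice_id": str, "amount": (int, float), "approved": bool}

def validate(payload: dict) -> bool:
    """Deterministic check that the model's output fits the known schema."""
    return all(
        field in payload and isinstance(payload[field], expected)
        for field, expected in REQUIRED_FIELDS.items()
    )

def extract_invoice_decision(document_text: str, call_llm, max_retries: int = 3) -> dict:
    """Ask the model for JSON, then enforce structure outside the model."""
    prompt = (
        "Extract invoice_id (string), amount (number) and approved (boolean) "
        "from the following text and reply with JSON only:\n" + document_text
    )
    for _ in range(max_retries):
        raw = call_llm(prompt, temperature=0)  # low temperature narrows, but does not remove, variance
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry rather than let it into the process
        if validate(payload):
            return payload
    raise ValueError("Model output never satisfied the closed-world schema")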

Being a CISO today is not for the faint of heart. To paraphrase Rodney
Dangerfield, CISOs (some, anyway) get no respect. You’d think in a job where
perpetual stress over the threat of a cyberattack is the norm, there would be
empathy for security leaders. Instead, they face the growing challenge of trying
to elicit support across departments while managing security threats, according to
a recent report from WatchGuard. ... It’s no secret CISOs are under tremendous
pressure. “They’ve got the regulatory scrutiny, they’ve got public visibility,”
along with the increasing complexity of threats, and “AI is just adding to that
fire, and the mismatch between the accountability and the authority,” says
Myers, who wrote “The CISO Dilemma,” which explores CISO turnover rates and how
companies can change that moving forward. Often, CISOs don’t have the mandate to
influence the business systems or processes that are creating that risk, she
says. “I think that’s a real disconnect and that’s what’s really driving the
burnout and turnover.” ... Some CISOs are stepping back from operational roles
into more advisory ones. Patricia Titus, who recently took a position as a field
CISO at startup Abnormal AI after 25 years as a CISO, does not think the CISO
role has become less desirable. “The regulatory scrutiny has been there all
along,” she says. “It’s gotten a light shined on it.”

The DPDP Act’s centralized enforcement model suffers from structural weaknesses
that hinder effective data protection. A primary concern is the lack of
independence of the Data Protection Board. Because the DPB is both appointed and
funded by the Union government, with its officials classified as civil servants
under central rules, it does not enjoy the institutional autonomy typically
expected of a watchdog agency. ... By design, the executive branch holds
decisive power over who sits on the Board and can even influence its operations
through service rules. This raises a conflict of interest, given that the
government itself is a major collector and processor of citizens’ data. In the
words of Justice B.N. Srikrishna, having a regulator under government control is
problematic “since the State will be the biggest data processor” – a regulator
must be “free from the clutches of the Government” to fairly oversee both
private and government actors. ... Another structural limitation is the
potential for executive interference in enforcement actions, which dilutes
accountability. The DPDP Act contains provisions such as Section 27(3), under
which the DPB “may modify or suspend” its own orders on a reference from the
Central Government.

In today’s enterprise landscape, the quality of AI systems depends fundamentally
on the data that flows through them. While most organizational focus remains on
AI models and algorithms, it’s the often-under-appreciated current of data
flowing through these systems that truly determines whether an AI application
becomes “good AI” or problematic technology. Just as ancient Egyptians developed
specialized irrigation techniques to cultivate flourishing agriculture, modern
organizations must develop specialized data practices to cultivate AI that is
effective, ethical, and beneficial. My new column, “The Good AI,” will examine
how proper data practices form the foundation for responsible and
high-performing AI systems. We’ll explore how organizations can channel their
data resources to create AI applications that are not just powerful, but
trustworthy, inclusive, and aligned with human values. ... As organizations
increasingly integrate artificial intelligence into their operations, the need
for robust AI governance has never been more critical. However, establishing
effective AI governance doesn’t happen in a vacuum—it must be built upon the
foundation of solid data governance practices. The path to responsible AI
governance varies significantly depending on your organization’s current data
governance maturity level.

Perhaps the most immediate challenge facing IT teams identified in the research
is the dramatic cost scaling of public cloud AI workloads. Unlike traditional
applications where cloud costs scale somewhat linearly, AI workloads create
exponential cost curves due to their intensive compute and storage requirements.
The research identifies a specific economic threshold where cloud costs become
unsustainable. When monthly cloud spending for a given AI workload reaches
60-70% of what it would cost to purchase and operate dedicated GPU-powered
infrastructure, organizations hit their inflection point. At this threshold, the
total cost of ownership calculation shifts decisively toward private
infrastructure. IT teams can track this inflection point by monitoring data and
model-hosting requirements relative to GPU transaction throughput. ...
Identifying when to move from a public cloud to private cloud or some form of
on-premises deployment is critical. Thomas noted that there are many flavors of
hybrid FinOps tooling available in the marketplace that, when configured
appropriately for an environment, will spot trend anomalies. Anomalies may be
triggered by swings in GPU utilization, costs per token/inferences, idle
percentages, and data-egress fees. On-premises factors include material
variations in hardware, power, cooling, operations, and more over a set period
of time.
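
A back-of-the-envelope version of that inflection-point check, in Python. Every number here is an illustrative assumption (hardware price, amortization period, opex), not research data; the 0.65 threshold simply sits in the 60-70% band described above.

def monthly_private_cost(hardware_capex: float, amortization_months: int,
                         power_cooling: float, operations: float) -> float:
    """Amortized monthly cost of owning and running dedicated GPU infrastructure."""
    return hardware_capex / amortization_months + power_cooling + operations

def cloud_inflection_reached(monthly_cloud_spend: float,
                             private_monthly_cost: float,
                             threshold: float = 0.65) -> bool:
    """True once cloud spend for the workload crosses roughly 60-70% of the
    equivalent private-infrastructure cost, the point where total cost of
    ownership starts to favor repatriation."""
    return monthly_cloud_spend >= threshold * private_monthly_cost

# Illustrative example: $480k of GPU hardware amortized over 36 months, plus opex.
private = monthly_private_cost(480_000, 36, power_cooling=4_000, operations=9_000)
print(round(private))                              # ~26,333 per month
print(cloud_inflection_reached(18_500, private))   # True: 18,500 > 0.65 * 26,333
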
AI is neither inherently bad nor inherently good from a security perspective. It’s
another tool that can accelerate and magnify both good and bad behaviors. On
the good side, if models can learn to assess the vulnerability state and
general trustworthiness of app components, and factor that learning into code
they suggest, AI can have a positive impact on the security of the resultant
output. Open source projects can already leverage AI to help find potential
vulnerabilities and even submit PRs to address them, but there still needs to
be significant human oversight to ensure that the results actually improve the
project’s security. ... If you simply trust an AI to generate all the
artifacts needed to build, deploy, and run anything sophisticated, it will be
very difficult to know if it’s done so well and what risks it’s mitigated. In
many ways, this looks a lot like the classic “curl and pipe to bash” kinds of
risks that have long existed where users put blind trust in what they’re
getting from external sources. Many times that can work out fine but sometimes
it doesn’t. ... AI can create impressive results quickly but it doesn’t
necessarily prioritize security and may in fact make many choices that degrade
it. Have good architectures, controls, and human experts who really understand
the recommendations it’s making and can adapt and re-prompt as necessary to
provide the right balance.
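
One concrete way to avoid the “curl and pipe to bash” failure mode with AI-suggested dependencies is to gate them on a vulnerability lookup before a human reviews the change. The sketch below assumes the public osv.dev query API and the requests library; the package list is made up.

import requests

def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV advisory IDs affecting the given package version (assumes the osv.dev API)."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": package, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

# Human-in-the-loop gate: flag AI-suggested dependencies that carry known advisories.
suggested = [("requests", "2.31.0"), ("pyyaml", "5.3.1")]  # illustrative suggestions
for name, version in suggested:
    advisories = known_vulnerabilities(name, version)
    if advisories:
        print(f"Review before merging: {name}=={version} -> {advisories}")
    else:
        print(f"No known advisories for {name}=={version}")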

Building cost awareness in devops requires asking an upfront question when
spinning up new cloud environments. Developers and data scientists should ask
if the forecasted cloud and other costs align with the targeted business
value. When cloud costs do increase because of growing utilization, it’s
important to relate the cost escalation to whether there’s been a
corresponding increase in business value. The FinOps Foundation recommends
that SaaS and cloud-driven commercial organizations measure cloud unit
economics. The basic measure calculates the difference between marginal cost
and marginal revenue and determines where cloud operations break even and
begin to generate a profit. Other companies can use these concepts to
correlate business value and cost and make smarter cloud architecture and
automation decisions. ... “Engineers especially can get tunnel vision on
delivering features and the art of code, and cost modeling should happen as a
part of design, at the start of a project, not at the end,” says Mason of
RecordPoint. “Companies generally limit the staff with access to and knowledge
of cloud cost data, which is a mistake. Companies should strive to spread
awareness of costs, educating users of services with the highest cost impacts,
so that more people recognize opportunities to optimize or eliminate
spend.”
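
A minimal sketch of the unit-economics calculation the FinOps Foundation guidance points to: compare marginal revenue with marginal cloud cost per unit (per customer, transaction, or tenant) to find where cloud operations break even. The unit definitions and figures are illustrative assumptions.

def unit_margin(cloud_cost: float, revenue: float, units: int) -> float:
    """Marginal revenue minus marginal cloud cost per business unit served."""
    return (revenue - cloud_cost) / units

def break_even_units(fixed_cloud_cost: float, revenue_per_unit: float,
                     variable_cloud_cost_per_unit: float) -> float:
    """Units needed before cloud operations begin to generate a profit."""
    return fixed_cloud_cost / (revenue_per_unit - variable_cloud_cost_per_unit)

# Illustrative example: $40k/month baseline cloud spend, $6.00 revenue and
# $2.50 of variable cloud cost per unit.
print(round(break_even_units(40_000, 6.00, 2.50)))   # ~11,429 units to break even
print(unit_margin(85_000, 120_000, 20_000))          # $1.75 margin per unit at 20k units
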
Third-party integrations are critical to any fintech ecosystem, and at Cred,
we manage them through a rigorous, life cycle-based third-party risk
management framework. This approach is designed to minimize risk and maximize
reliability, with security and resilience built in from the start. Before
onboarding any external partner, whether for KYC, APIs or payment rails, we
conduct thorough due diligence to evaluate their security posture. Each
partner is categorized as high, medium or low risk, which then informs the
depth and frequency of ongoing assessments. These reviews go well beyond
standard compliance checks. ... With user goals validated, our teams then move
into secure architecture design. Every integration point, data exchange and
system interaction is examined to preempt vulnerabilities and ensure that
sensitive information is protected by default. We use ThreatShield, an
internal AI-powered threat-modeling tool, to analyze documentation and
architecture against the STRIDE framework, a threat model designed by
Microsoft that is used in cybersecurity to identify potential security threats
to applications and systems. This architecture-first thinking enables us to
deliver powerful features, such as surfacing hidden charges in smart
statements or giving credit insights without ever compromising the user's data
or experience.
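
For readers unfamiliar with STRIDE, the sketch below crosses a couple of integration points with the six STRIDE categories to produce a review checklist. The categories are Microsoft's; the example integration points and the questions attached to them are illustrative assumptions, not Cred's ThreatShield output.

# The six STRIDE threat categories, each paired with an illustrative review question.
STRIDE = {
    "Spoofing": "Can a caller impersonate the partner or one of our services?",
    "Tampering": "Can data be altered in transit or at rest between systems?",
    "Repudiation": "Could either side deny an action without an audit trail?",
    "Information disclosure": "Could sensitive user data leak at this exchange?",
    "Denial of service": "Can the integration be overwhelmed or starved of capacity?",
    "Elevation of privilege": "Can the partner reach data or APIs beyond its scope?",
}

def threat_model(integration_points: list[str]) -> list[dict]:
    """Cross every data exchange with every STRIDE threat to build a checklist."""
    return [
        {"integration_point": point, "threat": threat, "question": question}
        for point in integration_points
        for threat, question in STRIDE.items()
    ]

# Illustrative integration points for a fintech partner onboarding.
for item in threat_model(["KYC document upload", "payment-rail callback"]):
    print(f"[{item['threat']}] {item['integration_point']}: {item['question']}")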

Implement a “boy scout rule” under which developers are encouraged to make small
improvements to existing code during feature work. This maintains development
momentum while gradually improving code quality, and developers are more
motivated to clean up code they’re already actively working with. ...
Proactively analyze user engagement metrics to pinpoint friction points where
users spend excessive time. Prioritize these areas for targeted debt reduction,
aligning technical improvements closely with meaningful user experience
enhancements. ... Pre-vacation handovers are an excellent opportunity to reduce
tech debt. Planning and carrying out handovers before we take a holiday are
crucial to maintaining smooth IT operations. Giving your employees the choice to
hand tasks over to automation or a human colleague can help reduce tech debt and
automate tasks. Critically, it utilizes time already allocated for addressing
this work. ... Resolving technical debt is development. The Shangri-la of “no
tech debt” does not survive contact with reality. It’s a balance of doing what’s
right for the business. Making sure the product and engineering teams are on the
same page is critical. You should have sprints where tech debt is the focus.
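
The engagement-metrics suggestion above lends itself to a simple prioritization sketch: weight each area's user-friction signals against the estimated cleanup effort so targeted debt work follows user impact. The metric names, weights, and numbers below are hypothetical.

from dataclasses import dataclass

@dataclass
class Area:
    name: str
    median_task_seconds: float   # engagement signal: how long users struggle here
    error_rate: float            # fraction of sessions hitting errors in this area
    debt_estimate_days: float    # engineering estimate of cleanup effort

def friction_score(area: Area) -> float:
    """Higher when users spend excessive time or hit errors, scaled by effort
    so small, high-impact cleanups float to the top."""
    impact = area.median_task_seconds / 60 + 10 * area.error_rate
    return impact / max(area.debt_estimate_days, 0.5)

# Hypothetical areas ranked for targeted debt reduction.
areas = [
    Area("checkout flow", median_task_seconds=240, error_rate=0.08, debt_estimate_days=5),
    Area("profile settings", median_task_seconds=90, error_rate=0.01, debt_estimate_days=2),
    Area("reporting export", median_task_seconds=600, error_rate=0.15, debt_estimate_days=20),
]
for area in sorted(areas, key=friction_score, reverse=True):
    print(f"{area.name}: priority score {friction_score(area):.2f}")
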
Among the top challenges facing the IT sector today, says Jackson, is the rapid
development of the tech world. “The pace of change is outpacing many
organisations’ ability to adapt securely – whether due to AI, rapid cloud
adoption, evolving regulatory frameworks like DORA, or the ongoing shortage of
skilled cybersecurity professionals,” he says. “These challenges, combined with
cost pressures and the perception that security is not always an enabler, make
adaptation even harder.” AI in particular, to no surprise, is having a
significant effect on the cybersecurity world – reshaping both sides of the
“cybersecurity battlefield”, according to Jackson. “We’re seeing attackers
utilise large language models (LLMs) like ChatGPT to scale social engineering
and refine malicious code, while defenders are using the same tools (or
leveraging them in some way) to enhance threat detection, streamline triage and
gain broader context at much greater speed,” he says. While he doesn’t believe
AI will have as great an impact as some suggest, he says it still represents an
“exciting evolution”, particularly in how it can benefit organisations. “AI
won’t replace individuals such as SOC analysts anytime soon, but it can augment
and support their roles, freeing up time to focus on higher-priority tasks,” he
says.