Quote for the day:
"Transparent reviews create fairness. Subjective reviews create frustration." -- Gordon Tredgold
AI agents and bad productivity metrics
The great promise of generative artificial intelligence was that it would
finally clear our backlogs. Coding agents would churn out boilerplate at
superhuman speeds, and teams would finally ship exactly what the business
wants. The reality, as we settle into 2026, is far more uncomfortable.
Artificial intelligence is not going to save developer productivity because
writing code was never the bottleneck in software engineering. ... For
decades, one of the most common debugging techniques was entirely social. A
production alert goes off. You look at the version control history, find the
person who wrote the code, ask them what they were trying to accomplish, and
reconstruct the architectural intent. But what happens to that workflow when
no one actually wrote the code? What happens when a human merely skimmed a
3,000-line agent-generated pull request, hit merge, and moved on to the next
ticket? When an incident happens, where is the deep knowledge that used to
live inside the author? ... The metrics that matter are still the boring ones
because they measure actual business outcomes. The DORA metrics remain the
best sanity check we have because they tie delivery speed directly to system
stability. They measure deployment frequency, lead time for changes, change
failure rate, and time to restore service. None of those metrics cares about
the number of commits your agents produced today. They only care about whether
your system can absorb change without breaking.
How vertical SaaS is redefining enterprise efficiency
For the past decade, horizontal SaaS has been the defining force in enterprise
technology. Platforms like CRMs, ERP suites and collaboration tools promised
universality, offering a single platform to manage every business function
across all industries. The strategy made sense: a large total addressable
market, reusable architecture and marketing scale. Vertical SaaS flips that
model. It is narrow by design but deep in impact. A report by Strategy&
found that B2B vertical software companies are now growing faster than their
horizontal peers, thanks to higher retention rates, lower churn rates and
better unit economics. When software mirrors how a business already works,
people stop treating it like a tool they tolerate and start relying on it like
infrastructure. ... In regulated industries, compliance isn’t a feature; it’s
the baseline for trust. I learned early that trying to retrofit audit trails
or data retention policies after go-live only creates technical debt. Instead,
design for compliance as a first-class product layer: immutable logs,
permission hierarchies and exportable compliance reports built into the
system. ... Vertical products don’t thrive in isolation. Integration with
industry hardware, marketplaces and regulatory systems drives adoption. In one
case, we partnered with a hardware vendor to automatically sync manifest data
from their devices, cutting onboarding time in half and unlocking co-marketing
opportunities.
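The "compliance as a first-class product layer" idea above — immutable logs built into the system rather than retrofitted — can be sketched as a hash-chained, append-only audit log, where each entry commits to the hash of the previous one so any later tampering is detectable. This is an illustrative Python sketch, not code from the article; the class and field names are invented.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only audit log with hash chaining: each entry embeds the
    hash of the previous entry, so altering any record breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, actor: str, action: str, resource: str) -> None:
        entry = {
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "resource": resource,
            "prev": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the whole chain; False if any entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

An exportable compliance report then becomes a dump of `entries` plus the result of `verify()`, which an auditor can recheck independently.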
API Security Standards: 10 Essentials to Get You Started
Most API security flaws are created during the design phase. You're too late
if you're waiting until deployment to think about threats. Shift-left
principles mean integrating security early, especially at the design phase,
where flawed assumptions become future exploits. Start by mapping out each
endpoint's purpose, what data it touches, and who should access it. Identify
where trust is assumed rather than earned, where roles blur, and where inputs
aren't validated.
... Every API has a breaking point. If you don't define it, attackers will.
Rate limiting and throttling prevent denial-of-service (DoS) attacks, and
they're also your first defense against scraping, brute-forcing, enumeration,
and even accidental misuse by poorly built integrations. APIs, by nature,
invite automation. Without guardrails, that openness turns into a floodgate.
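One common shape for the guardrail described here is a token bucket: requests spend tokens, tokens refill at a fixed rate, and bursts are capped. A minimal sketch in Python follows; the class name and numbers are illustrative, and in production this check usually lives at the gateway, keyed per API key or client IP, rather than in application code.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: tokens refill at `rate` per second,
    bursts are capped at `capacity`. Each allowed request costs one token."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should respond with HTTP 429
```

A client that exhausts the bucket is throttled until the refill catches up, which is exactly the "defined breaking point" the excerpt argues every API needs.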
And in some cases, unchecked abuse opens the door to far worse issues, like
remote code execution, where improperly scoped input or lack of throttling
leads directly to exploitation. ... APIs are built to accept input. Attackers
find ways to exploit it. The core rule is this: if you didn't expect it,
don't process it. If you didn't define it, don't send it. Define request and
response schemas explicitly using tools like OpenAPI or JSON Schema, as
recommended by leading API security standards. Then enforce them — at the
gateway, app layer, or both. Don't just use validation as linting; treat it as
a runtime contract. If the payload doesn't match the spec, reject it.
Why AI Urgency Is Forcing a Data Governance Reset
The cost of weak governance shows up in familiar ways: teams can’t find data,
requirements arrive late in the process, and launches stall when compliance
realities collide with product timelines. Without governance, McQuillan
argues, organizations “ultimately suffer from higher cost basis,” with
downstream consequences that “impact the bottom line.” ... McQuillan sees a
clear step-change in executive urgency since generative AI (GenAI) became
mainstream. “There’s been a rapid adoption, particularly since the advent of
GenAI and the type of generative and agentic technologies that a lot of
C-suites are taking on,” he says. But he also describes a common leadership
gap: many executives feel pressure to become “AI-enabled” without a clear
definition of what that means or how to build it sustainably. “There’s very
much a well-understood need across all companies to become AI-enabled in some
way,” he says. “But the problem is a lot of folks don’t necessarily know how
to define that.” In the absence of clarity, organizations often fall into
scattershot experimentation. What concerns McQuillan the most is how the pace
of the “race” shapes priorities. ... When asked whether the long-running
mantra “data is the new oil” still holds in the era of large language models
and agentic workflows, McQuillan is direct. “It holds true now more than
ever,” he says. He acknowledges why attention drifts: “It’s natural for
people to gravitate toward things that are shiny,” and “AI in and of itself
is an absolutely magnificent space.”
Building a Least-Privilege AI Agent Gateway for Infrastructure Automation with MCP, OPA, and Ephemeral Runners
Demystifying CERT‑In’s Elemental Cyber Defense Controls: A Guide for MSMEs
For India’s Micro, Small, and Medium Enterprises (MSMEs), cybersecurity is no
longer a “big company problem.” With digital payments, SaaS adoption,
cloud-first operations, and supply‑chain integrations becoming the norm, MSMEs
are now prime targets for cyberattacks. To help these organizations build a
strong foundational security posture, the Indian Computer Emergency Response
Team (CERT-In) has released guideline CIGU-2025-0003, which prescribes 15
Elemental Cyber Security Controls: a pragmatic baseline of safeguards
designed to uplift the nation’s cyber hygiene. ... These controls, mapped to
45 recommendations, enable essential
digital hygiene, protect against ransomware, ensure regulatory compliance, and
are required for annual audits. CERT‑In’s Elemental Controls are designed as
minimum essential practices that every Indian organization—regardless of
size—should implement. ... The CERT-In guidelines offer a simplified, actionable
starting point for MSMEs to benchmark their security. These controls are
intentionally prescriptive, unlike ISO or NIST, which are more
framework‑oriented. ... Because threats constantly evolve and MSMEs face unique
risks depending on their industry and data sensitivity, organizations should
view this framework not as an endpoint, but as the first critical step toward
building a comprehensive security program akin to ISO 27001 or NIST CSF 2.0.
AI-fuelled cyber attacks hit in minutes, warns CrowdStrike
CrowdStrike reports a sharp acceleration in cyber intrusions, with attackers
moving from initial access to lateral movement in less than half an hour on
average as widely available artificial intelligence tools become embedded in
criminal workflows. Its latest Global Threat Report puts average eCrime
"breakout time" at 29 minutes in 2025, a 65% drop from the prior year. ...
Alongside generative AI use in preparation and execution, the report describes
attempts to exploit AI systems directly. Adversaries injected malicious prompts
into GenAI tools at more than 90 organisations, using them to generate commands
associated with credential theft and cryptocurrency theft. ... Incidents linked
to North Korea rose more than 130%, while activity by the group CrowdStrike
tracks as FAMOUS CHOLLIMA more than doubled. The report says DPRK-nexus actors
used AI-generated personas to scale insider operations. It also cites a large
cryptocurrency theft attributed to the actor it calls PRESSURE CHOLLIMA, valued
at US$1.46 billion and described as the largest single financial heist ever
reported. The report also references AI-linked tooling used by other state and
criminal groups. Russia-nexus FANCY BEAR deployed LLM-enabled malware, which it
named LAMEHUG, for automated reconnaissance and document collection. The eCrime
actor tracked as PUNK SPIDER used AI-generated scripts to speed up credential
dumping and erase forensic evidence.
Shadow mode, drift alerts and audit logs: Inside the modern audit loop
When systems moved at the speed of people, it made sense to do compliance checks
every so often. But AI doesn't wait for the next review meeting. The shift to
an inline audit loop means audits no longer happen once in a while; they run
continuously. Compliance and risk management should be "baked in" to
the AI lifecycle from development to production, rather than just
post-deployment. This means establishing live metrics and guardrails that
monitor AI behavior as it occurs and raise red flags as soon as something seems
off. ... Cultural shift is equally important: Compliance teams must act less
like after-the-fact auditors and more like AI co-pilots. In practice, this might
mean compliance and AI engineers working together to define policy guardrails
and continuously monitor key indicators. With the right tools and mindset,
real-time AI governance can “nudge” and intervene early, helping teams
course-correct without slowing down innovation. In fact, when done well,
continuous governance builds trust rather than friction, providing shared
visibility into AI operations for both builders and regulators, instead of
unpleasant surprises after deployment. ... Shadow mode is a way to check
compliance in real time: It ensures that the model handles inputs correctly and
meets policy standards before it is fully released. One AI security framework
showed how this method worked: Teams first ran AI in shadow mode, then compared
AI and human inputs to determine trust.
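One way to make the shadow-mode check concrete is a sliding-window agreement monitor: the candidate model runs silently alongside the incumbent (human or production) decision path, and a drift alert fires when agreement falls below a threshold. This is a hypothetical sketch, not the framework the excerpt refers to; the class name, window size, and threshold are invented.

```python
from collections import deque

class ShadowMonitor:
    """Track agreement between production decisions and a shadow model's
    decisions over a sliding window, and flag drift when agreement
    drops below a minimum threshold."""

    def __init__(self, window: int = 100, min_agreement: float = 0.95):
        self.results = deque(maxlen=window)  # True = decisions agreed
        self.min_agreement = min_agreement

    def record(self, production_decision, shadow_decision) -> None:
        self.results.append(production_decision == shadow_decision)

    @property
    def agreement(self) -> float:
        if not self.results:
            return 1.0  # no evidence of disagreement yet
        return sum(self.results) / len(self.results)

    def drift_alert(self) -> bool:
        """True when shadow output has diverged enough to warrant review."""
        return self.agreement < self.min_agreement
```

The same loop doubles as the "live metrics and guardrails" described earlier: `agreement` is the metric, `drift_alert` is the red flag, and nothing ships until the shadow run clears the bar.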
Making AI Compliance Practical: A Guide for Data Teams Navigating Risk, Regulation, and Reality
As AI tools become more embedded in enterprise workflows, data teams are
encountering a growing reality: compliance isn’t only a legal concern but also a
design constraint, a quality signal, and, often, a competitive differentiator.
But navigating compliance can feel complex, especially for teams focused on
building and shipping. What is the good news? It doesn’t have to be. When
approached intentionally, compliance becomes a pathway to better decisions, not
a barrier. ... Automation can help with regulations, but only if it's used
correctly. I've looked at a tool before that used algorithms to find private
information. It worked well with English, but when tested with material in more
than one language, it missed a few personal identifiers. The group thought it
was "smart enough." It wasn't. We kept the automation but added human review
for edge cases, confidence thresholds that trigger extra checks, and alerts
for uncommon input formats. The automation stayed the same, but there were
built-in checks and balances. ... The biggest compliance failures don’t come
from bad people. They come from good teams moving fast, skipping hard questions,
and assuming nothing will go wrong. But compliance isn’t a blocker. It’s a
product quality signal. People will trust you more if they are aware that your
team has carefully considered the details.
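The confidence-threshold routing described in the PII example above — automate the clear cases, send the ambiguous ones to human review, and alert on the uncommon formats the detector barely recognizes — can be sketched as a small routing function. The labels, thresholds, and names here are invented for illustration, not taken from any particular tool.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    text: str         # the matched span
    label: str        # e.g. "EMAIL", "NATIONAL_ID"
    confidence: float # detector's score in [0, 1]

def route(detection: Detection,
          auto_threshold: float = 0.9,
          review_threshold: float = 0.5) -> str:
    """Route a detector hit by confidence: redact automatically when the
    detector is sure, queue mid-confidence hits for human review, and
    alert on low-confidence hits so uncommon input formats (e.g.
    non-English identifiers) are not silently dropped."""
    if detection.confidence >= auto_threshold:
        return "auto_redact"
    if detection.confidence >= review_threshold:
        return "human_review"
    return "alert_low_confidence"
```

The thresholds themselves become governance artifacts: reviewers tune them from the human-review queue, which is exactly the built-in check-and-balance the excerpt describes.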