Quote for the day:
"People ask the difference between a leader and a boss. The leader works in the open, and the boss in covert." -- Theodore Roosevelt
Why the future of security starts with who, not where
Traditional security assumed one thing: “If someone is inside the network, they
can be trusted.” That assumption worked when offices were closed environments
and systems lived behind a single controlled gateway. But as Microsoft
highlights in its Digital Defense Report, attackers have moved almost entirely
toward identity-based attacks because stealing credentials offers far more
access than exploiting firewalls. In other words, attackers stopped trying to
break in. They simply started logging in. ... Zero trust isn’t about paranoia.
It’s about verification. “Never trust, always verify” only works if identity sits
at the center of every access decision. That’s why CISA’s zero trust maturity
model outlines identity as the foundation on which all other zero trust pillars
rest — including network segmentation, data security, device posture and
automation. ... When identity becomes the perimeter, it can’t be an
afterthought. It needs to be treated like core infrastructure. ... Organizations
that invest in strong identity foundations won’t just improve security — they’ll
improve operations, compliance, resilience and trust. Because when identity is
solid, everything else becomes clearer: who can access what, who is responsible
for what and where risk actually lives. The companies that struggle will be the
ones trying to secure a world that no longer exists — a perimeter that
disappeared years ago.
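
To make "identity at the center of every access decision" concrete, here is a minimal illustrative sketch (not from the article; the request fields and entitlement store are hypothetical) of an access check that verifies the user and device on every request instead of trusting network location:

# Hypothetical sketch of an identity-first ("never trust, always verify") access check.
# Nothing is granted just because the request originates "inside" the network.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool       # identity: strong authentication completed for this session
    device_compliant: bool   # device posture: managed, patched, healthy
    resource: str            # what is being accessed

def is_authorized(req: AccessRequest, entitlements: dict[str, set[str]]) -> bool:
    """Allow access only if identity, device posture and an explicit entitlement all check out."""
    if not req.mfa_verified:        # verify the person, not the network location
        return False
    if not req.device_compliant:    # verify the device on every request
        return False
    allowed = entitlements.get(req.user_id, set())
    return req.resource in allowed  # least privilege: an explicit grant is required

# Example: an "internal" request is still denied without an explicit grant.
entitlements = {"alice": {"billing-db"}}
print(is_authorized(AccessRequest("alice", True, True, "hr-db"), entitlements))  # False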
Designing Consent Under India's DPDP Act: Why UX Is Now A Legal Compliance
The request for consent must be either accompanied by or preceded by a notice. The notice must specifically contain three things: the personal data and the purpose for which it is being collected; the manner in which he or she may withdraw consent or raise a grievance; and the manner in which a complaint may be made to the Board. ... “Free” consent also requires interfaces to avoid deceptive nudges or coercive UI design. Consider a consent banner implemented with a large “Accept All” button as the primary call to action while the “Reject” option is kept hidden behind a secondary link that opens multiple additional screens. This creates an asymmetric interaction cost where acceptance requires a single click and refusal demands several steps. If consent is obtained through such an interface, it cannot be regarded as voluntary or valid. ... A defensible consent record must capture the full interaction: which notice version was shown, what purposes were disclosed, the language of the notice and the action of the user (click, toggle, checkbox). Standard operational logs might be disposed of after 30 or 90 days, but consent logs cannot follow the same cycle. Section 6(10) implicitly requires that consent records be retained for as long as the data is being processed for the purposes shown in the notice. If the personal data was collected in 2024 and is still being processed in 2028, the Fiduciary must produce the 2024 consent logs as evidence.
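
A minimal sketch of what such a defensible consent record might look like in practice (illustrative only; the field names and values are hypothetical, not taken from the Act or the article):

# Hypothetical consent-record structure: captures the full interaction so it can be
# produced years later, for as long as the underlying data is still being processed.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ConsentRecord:
    principal_id: str      # the Data Principal giving consent
    notice_version: str    # exact notice version shown
    notice_language: str   # language the notice was presented in
    purposes: list[str]    # purposes disclosed in the notice
    action: str            # how consent was given: "click", "toggle", "checkbox"
    timestamp: str         # when consent was captured (UTC, ISO 8601)

record = ConsentRecord(
    principal_id="user-1842",
    notice_version="v3.2-2024-06",
    notice_language="hi-IN",
    purposes=["order fulfilment", "delivery updates"],
    action="checkbox",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Stored in an append-only log whose retention is tied to the processing lifetime of
# the data itself, not to the 30/90-day cycle of ordinary operational logs.
print(json.dumps(asdict(record), indent=2))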
The AI Skills Gap Is Not What Companies Think It Is
Employers often say they cannot find enough AI engineers or people with deep model expertise to keep pace with AI adoption. We can see that in job descriptions. Many blend responsibilities across model development, data engineering, analytics, and production deployment into a single role. These positions are meant to accelerate progress by reducing handoffs and simplifying ownership. And in an ideal world, the workforce would be ready for this. ... So when companies say they are struggling to fill the AI skills gap, what they are often missing is not raw technical ability. They are missing people who can operate inside imperfect environments and still move AI work forward. Most organizations do not need more model builders. ... For professionals trying to position themselves, the signal is similar. Career advantage increasingly comes from showing end-to-end exposure, not mastery of every AI tool. Experience with data pipelines, deployment constraints, and system monitoring matters. Being good at stakeholder communication remains an important skill. The AI skills gap is not a shortage of talent. It is a shortage of alignment between what companies need and what they are actually hiring for. It’s also an opportunity for companies to understand what the gap really means and finally close it. Professionals can also capitalize on this opportunity by demonstrating end-to-end, applied AI experience.
DevOps Didn’t Fail — We Just Finally Gave it the Tools it Deserved
Ask an Ops person what DevOps success looks like, and you’ll hear something very
close to what Charity is advocating: Developers who care deeply about
reliability, performance, and behavior in production. Ask security teams and
you’ll get a different answer. For them, success is when everyone shares
responsibility for security, when “shift left” actually shifts something besides
PowerPoint slides. Ask developers, and many will tell you DevOps succeeded when
it removed friction. When it let them automate the non-coding work so they
could, you know, actually write code. Platform engineers will talk about
internal developer platforms, golden paths, and guardrails that let teams move
faster without blowing themselves up. SREs, data scientists, and release
engineers all bring their own definitions to the table. That’s not a bug in
DevOps. That’s the thing. DevOps has always been slippery. It resists clean
definitions. It refuses to sit still long enough for a standards body to nail it
down. At its core, DevOps was never about a single outcome. It was about
breaking down silos, increasing communication, and getting more people aligned
around delivering value. Success, in that sense, was always going to be plural,
not singular. Charity is absolutely right about one thing that sits at the heart
of her argument: Feedback loops matter. If developers don’t see what happens to
their code in the wild, they can’t get better at building resilient
systems.
The sovereign algorithm – India’s DPDP Act and the trilemma of innovation, rights, and sovereignty
At its core, the DPDP Act functions as a sophisticated product of governance
engineering. Its architecture is a deliberate departure from punitive, post
facto regulation towards a proactive, principles-based model designed to shape
behavior and technological design from the ground up. Foundational principles
such as purpose limitation, data minimization, and storage restriction are
embedded as mandatory design constraints, compelling a fundamental rethink of
how digital services are conceived and built. ... The true test of this
legislative architecture will be its performance in the real world, measured
across a matrix of tangible and intangible metrics that will determine its
ultimate success or failure. The initial eighteen-month grace period for most
rules constitutes a critical nationwide integration phase, a live stress test of
the framework’s viability and the ecosystem’s adaptability. ... Geopolitically,
the framework positions India as a normative leader for the developing world. It
articulates a distinct third path between the United States’ predominantly
market-oriented approach and China’s model of state-controlled cyber
sovereignty. India’s alternative, which embeds individual rights within a
democratic structure while reserving state authority for defined public
interests, presents a compelling model for nations across the Global South
navigating their own digital transitions.
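
The earlier point about purpose limitation, data minimization and storage restriction acting as design constraints can be illustrated with a small sketch (the field policy, values and helper are hypothetical, not from the article): a collection step that only accepts fields with a declared purpose and retention period.

# Illustrative sketch: purpose limitation, data minimization and storage restriction
# expressed as schema-level constraints rather than after-the-fact policy.

FIELD_POLICY = {
    # field name         (declared purpose,        retention in days)
    "email":             ("account communication", 365),
    "delivery_address":  ("order fulfilment",       90),
}

def collect(submitted: dict) -> dict:
    """Accept only fields with a declared purpose; dropping the rest enforces
    data minimization at the point of collection. The retention values would
    drive scheduled deletion jobs (storage restriction)."""
    accepted = {}
    for field, value in submitted.items():
        if field in FIELD_POLICY:
            accepted[field] = value
        # Undeclared fields (e.g. "date_of_birth") are rejected, not stored "just in case".
    return accepted

print(collect({"email": "a@example.com", "date_of_birth": "1990-01-01"}))
# {'email': 'a@example.com'}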
Everyone Knows How to Model. So Why Doesn’t Anything Get Modeled?
One of the main reasons modeling feels difficult is not lack of competence, but
lack of shared direction. There is no common understanding of what should be
modeled, how it should be modeled, or for what purpose. In other words, there is
no shared content framework or clear work plan. When that shared direction is missing, everyone
defaults to their own perspective and experience. ... From the outside, it looks
like architecture work is happening. In reality, there is discussion,
theorizing, and a growing set of scattered diagrams, but little that forms a
coherent, usable whole. At that point, modeling starts to feel heavy—not because
it is technically difficult, but because the work lacks direction, a shared way
of describing things, and clear boundaries. ... To be fair, tools do matter. A
bad or poorly introduced tool can make modeling unnecessarily painful. An overly
heavy tool kills motivation; one that is too lightweight does not support
managing complexity. And if the tool rollout was left half-done, it is no
surprise the work feels clumsy. At the same time, a good tool only enables
better modeling—it does not automatically create it. The right tool can lower
the threshold for producing and maintaining content, make relationships easier
to see, and support reuse. ... Most architecture initiatives don’t fail because
modeling is hard. They fail because no one has clearly decided what the modeling
is for. ... These are not technical modeling problems. They are leadership and
operating-model problems.
ChatGPT Health Raises Big Security, Safety Concerns
ChatGPT Health's announcement touches on how conversations and files in ChatGPT
as a whole are "encrypted by default at rest and in transit" and that there are
some data controls such as multifactor authentication, but the specifics on how
exactly health data will be protected on a technical and regulatory level were
not clear. However, the announcement specifies that OpenAI partners with the
health data network firm b.well to enable access to medical records. ... While many
security tentpoles remain in place, healthcare data must be held to the highest
possible standard. It does not appear that ChatGPT Health conversations are
end-to-end encrypted. Regulatory consumer protections are also unclear. Dark
Reading asked OpenAI whether ChatGPT Health had to adhere to any HIPAA or
regulatory protections for the consumer beyond OpenAI's own policies, and the
spokesperson mentioned the coinciding announcement of OpenAI for Healthcare,
which is OpenAI's product for healthcare organizations that do need to meet
HIPAA requirements. ... even with privacy protections and promises, data
breaches will happen and companies will generally comply with legal processes
such as subpoenas and warrants as they come up. "If you give your data to any
third party, you are inevitably giving up some control over it and people should
be extremely cautious about doing that when it's their personal health
information," she says.
From static workflows to intelligent automation: Architecting the self-driving enterprise
We often assume fragility only applies to bad code, but it also applies to our
dependencies. Even the vanguard of the industry isn’t immune. In September 2024,
OpenAI’s official newsroom account on X (formerly Twitter) was hijacked by
scammers promoting a crypto token. Think about the irony: The company building
the most sophisticated intelligence in human history was momentarily compromised
not by a failure of their neural networks, but by the fragility of a third-party
platform. This is the fragility tax in action. When you build your enterprise on
deterministic connections to external platforms you don’t control, you inherit
their vulnerabilities. ... Whenever we present this self-driving enterprise
concept to clients, the immediate reaction is “You want an LLM to talk to our
customers?” This is a valid fear. But the answer isn’t to ban AI; it is to
architect confidence-based routing. We don’t hand over the keys blindly. We
build governance directly into the code. In this pattern, the AI assesses its
own confidence level before acting. This brings us back to the importance of
verification. Why do we need humans in the loop? Because trusted endpoints don’t
always stay trusted. Revisiting the security incident I mentioned earlier: If
you had a fully autonomous sentient loop that automatically acted upon every
post from a verified partner account, your enterprise would be at risk. A
deterministic bot says: Signal comes from a trusted source -> execute.
AI is rewriting the sustainability playbook
At first, greenops was mostly finops with a greener badge. Reduce waste,
right-size instances, shut down idle resources, clean up zombie storage, and
optimize data transfer. Those actions absolutely help, and many teams delivered
real improvements by making energy and emissions a visible part of engineering
decisions. ... Greenops was designed for incremental efficiency in a world where
optimization could keep pace with growth. AI breaks that assumption. You can
right-size your cloud instances all day long, but if your AI footprint grows by
an order of magnitude, efficiency gains get swallowed by volume. It’s the
classic rebound effect: When something (AI) becomes easier and more valuable, we
do more of it, and total consumption climbs. ... Enterprises are simultaneously
declaring sustainability leadership while budgeting for dramatically more
compute, storage, networking, and always-on AI services. They tell stakeholders,
“We’re reducing our footprint,” while telling internal teams, “Instrument
everything, vectorize everything, add copilots everywhere, train custom models,
and don’t fall behind.” This is hypocrisy and a governance failure. ... Greenops
isn’t dead, but it is being stress-tested by a wave of AI demand that was not
part of its original playbook. Optimization alone won’t save you if your
consumption curve is vertical. Rather than treat greenness as just a brand
attribute, enterprises that succeed will recognize greenops as an engineering
and governance discipline, especially for AI.
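
A back-of-the-envelope illustration of that rebound effect, using made-up numbers (a 30% per-workload efficiency gain against a tenfold growth in AI workloads):

# Illustrative arithmetic only: a strong per-unit efficiency gain is overwhelmed
# when AI-driven volume grows by an order of magnitude.

baseline_workloads = 100     # arbitrary units of compute
energy_per_workload = 1.0    # arbitrary energy units

efficiency_gain = 0.30       # 30% less energy per workload after greenops work
volume_growth = 10           # AI footprint grows 10x

before = baseline_workloads * energy_per_workload
after = baseline_workloads * volume_growth * energy_per_workload * (1 - efficiency_gain)

print(f"before: {before:.0f} energy units")  # 100
print(f"after:  {after:.0f} energy units")   # 700 -- total consumption still climbs 7x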
Your AI strategy is just another form of technical debt
Modern software development has become riddled with indeterminable processes and
long development chains. AI should be able to fix this problem, but it’s not
actually doing so. Instead, chances are your current AI strategy is saddling
your organisation with even more technical debt. The problem is fairly
straightforward. As software development matures, longer and longer chains are
being created from when a piece of software is envisioned until it’s delivered.
Some of this is due to poor management practices, and some of it is unavoidable
as programs become more complex. ... These tools can’t talk to each other,
though; after all, they each have just one purpose, and talking isn’t it.
The results of all this, from the perspective of maintaining a coherent value
chain, are pretty grim. Results are no longer predictable. Worse yet, they are
not testable or reproducible. It’s just a set of random work. Coherence is
missing, and lots of ends are left dangling. ... If this wasn’t bad enough,
using all these different, single-purpose tools adds another problem, namely
that you’re fragmenting all your data. Because these tools don’t talk to each
other, you’re putting all the things your organisation knows into
near-impenetrable silos. This further weakens your value chain as your workers,
human and especially AI, need that data to function. ... Bolting AI onto
existing systems won’t work. AIs aren’t human, and you can’t replace them one
for one, or even five for one. It doesn’t work. 