Quote for the day:
"What you get by achieving
your goals is not as important as what you become by achieving your goals." --
Zig Ziglar

When you see more terms that include the “ops” suffix, you should understand
them as ideas that, as Graham Krizek, CEO of Voltage, puts it, “represent
different layers of the same overarching goal. These concepts are not isolated
silos but overlapping practices that support automation, collaboration, and
scalability.” ... While site reliability engineering (SRE) and infrastructure
as code (IaC) don’t have “ops” attached to their names, they can be seen in
many ways as offshoots of the
devops movement. SRE applies software
engineering techniques to operations problems, with an emphasis on
service-level objectives and
error budgets. IaC shops manage and provision
infrastructure using machine-readable definition files and scripts that can be
version-controlled, automated, and tested just like application code. IaC
underpins devops,
gitops, and many specialized ops practices. ... “While it is
not necessary for every IT professional to master each one individually,
understanding the principles behind them is essential for navigating modern
infrastructure,” he says. “The focus should remain on creating reliable
systems and delivering value, not simply keeping up with new terminology.” In
other words: you don’t need to collect ops like trading cards. You need to
understand the fundamentals, specialize where it makes sense, and ignore the
rest. Start with devops, add security if your compliance requirements demand
it, and adopt
cloudops practices if you’re heavily in the cloud.
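
To make the SRE piece of this concrete, the error-budget arithmetic reduces to a few lines. The Python sketch below is illustrative only; the 99.9% SLO target and 30-day window are assumed example values, not figures from the article:

```python
# Illustrative error-budget arithmetic for an SLO-driven SRE practice.
# The 99.9% target and 30-day window are assumed example values.

SLO_TARGET = 0.999             # availability objective
WINDOW_MINUTES = 30 * 24 * 60  # 30-day rolling window

# The error budget is the unreliability the SLO still permits.
ERROR_BUDGET_MINUTES = (1 - SLO_TARGET) * WINDOW_MINUTES  # ~43.2 minutes

def budget_remaining(downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative = overspent)."""
    return 1 - downtime_minutes / ERROR_BUDGET_MINUTES

# 20 minutes of downtime leaves ~54% of the budget, so risky releases can
# still ship; once this goes negative, SRE practice says freeze changes.
print(f"{budget_remaining(20):.0%} of the error budget remains")
```
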
CFOs are great at framing problems in terms of money. CISOs must also figure out
what a risk costs, what inaction costs, how much revenue is lost over the
median dwell time, and what recovery will cost. Boards want the
truth, not spin. Translate technical metrics into business impact (e.g., how
truth, not spin. Translate technical metrics into business impact (e.g., how
detection/response times and dwell time drive incident scope and recovery
costs). Recent threat reports show global median dwell time has fallen to ~10
days, but impact still depends on speed of containment. ... Stop talking about
technology. Start describing cybersecurity as keeping your business running,
protecting your reputation and building consumer trust – framing incidents not
simply as operational disruption but in terms of how risk scenarios affect
P&Ls. ... CISOs need to know
how to read
trust balance sheets, not simply logs. This entails being able to
understand risk economics, insurance models and how to allocate resources
strategically. ... We are entering a new era in which CFOs and CISOs are both
responsible for keeping the business running: earnings calls that include
integrated trust measures; cyber insurance coverage that is in line with
active threat modeling; cyber posture reports that meet regulatory standards,
like financial audits; and shared leadership on risk and value initiatives at
the board level. CISOs who understand trust economics will impact the futures of
businesses by making security a part of strategy as well as operations.
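
As a toy version of that trust-economics framing, the sketch below prices an incident as a function of dwell time; every dollar figure and rate in it is an assumed placeholder, not data from any threat report:

```python
# Toy risk-economics model: translate dwell time into money a CFO can
# compare against control spend. All figures are assumed placeholders.

def incident_cost(dwell_days: float,
                  daily_loss: float = 50_000.0,
                  fixed_recovery: float = 200_000.0) -> float:
    """Longer dwell widens incident scope, so cost scales with it."""
    return fixed_recovery + dwell_days * daily_loss

def annualized_loss(dwell_days: float, incidents_per_year: float = 2.0) -> float:
    """ALE-style expected annual loss for a given median dwell time."""
    return incidents_per_year * incident_cost(dwell_days)

# What faster containment is worth: cutting median dwell from 10 days
# to 3 days saves $700,000 a year under these assumptions.
print(annualized_loss(10) - annualized_loss(3))  # -> 700000.0
```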

To effectively mitigate concentration risks, CISOs should start by identifying
and documenting both
third-party and fourth-party risks, with a focus on the
most critical cloud providers. It is important to recognize that some non-cloud
products may also have cloud dependencies, such as management consoles or
reporting engines. Collaborating closely with strategic procurement and vendor
management (SPVM) leaders ensures that each cloud provider has a clearly
documented owner who understands their responsibilities. ... CISOs should not
rely solely on service level agreements (SLAs) to mitigate financial losses from
outages, as SLA payouts are often insufficient. Instead, focus on designing
applications to gracefully manage limited failures. In IaaS and PaaS, plan
first for the short-term failure of individual cloud services, rather than the
catastrophic failure of a large provider, and build cloud-native resilience
patterns into your architecture. In addition, special
attention should be given to
cloud identity providers due to their position as a
large single point of failure. ... To reduce the risk associated with
single-vendor dependency, organizations should intentionally distribute
applications and workloads across at least two cloud providers. While
single-vendor solutions can simplify integration and sourcing, a multi-cloud
approach limits the potential impact of an issue affecting any one provider.
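
To make "cloud-native resilience patterns" slightly more concrete, here is a minimal, provider-agnostic retry-then-failover wrapper in Python. It is a sketch under assumptions: the callables, retry counts, and backoff values are invented, not taken from any vendor SDK:

```python
import time
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")

def call_with_failover(backends: Sequence[Callable[[], T]],
                       retries_per_backend: int = 2,
                       backoff_seconds: float = 0.5) -> T:
    """Try each backend in turn, retrying transient failures with backoff.

    `backends` could be the same operation bound to two different cloud
    providers -- the multi-cloud distribution the excerpt recommends.
    """
    last_error = None
    for backend in backends:
        for attempt in range(retries_per_backend):
            try:
                return backend()
            except Exception as exc:  # production code would catch narrower types
                last_error = exc
                time.sleep(backoff_seconds * (2 ** attempt))  # exponential backoff
    raise RuntimeError("all backends exhausted") from last_error

# Usage sketch (hypothetical callables standing in for two providers):
# data = call_with_failover([read_from_provider_a, read_from_provider_b])
```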

The development of a
risk culture — including appetite, tolerance and profile —
within the scope of the management program is essential to provide real
visibility into ongoing risks, how they are perceived and mitigated, and to
strengthen the organization's ability to improve its security posture.
Consequently, the company begins to deliver reliable products to customers,
protect its reputation and project a secure image, achieving competitive
advantage and brand recognition. ... Another important factor to develop in
parallel with raising risk culture is a continuous information security
awareness process. This effort should include all employees, especially those
involved in incident management and cyber resilience. ... From a technical
standpoint, it is important to select and implement appropriate controls from
the
NIST CSF functions: Identify, Protect, Detect, Respond and Recover. However,
the selection of each control for building guardrails will depend on the overall
cybersecurity big picture and market best practices. For each identified issue,
the corresponding control must be determined, each monitored by the three lines
of defense ... Finally, the cyber management program must also consider legal,
regulatory and regional requirements, including privacy and cybersecurity laws.
This covers
LGPD, CCPA, GDPR, FFIEC, Central Bank regulations, etc., to
understand the consequences of non-compliance, which can pose serious issues for
the organization.
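
The issue-to-control mapping the excerpt describes can start as something as small as the register below; the issues, controls, and line-of-defense assignments are invented purely to show the shape, not control recommendations:

```python
# Minimal control register: issue -> (control, NIST CSF function,
# monitoring line of defense). Entries are invented examples.

CONTROL_REGISTER = {
    "unmanaged admin accounts": ("privileged access management", "Protect", 1),
    "no anomaly alerting":      ("SIEM detection rules",         "Detect",  2),
    "untested recovery plan":   ("annual DR exercise",           "Recover", 3),
}

def controls_by_function(function: str) -> list[str]:
    """List controls mapped to one CSF function, e.g. for an audit view."""
    return [control for control, fn, _line in CONTROL_REGISTER.values()
            if fn == function]

print(controls_by_function("Detect"))  # -> ['SIEM detection rules']
```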

An emerging category of artificial intelligence middleware known as Model
Context Protocol (MCP) is meant to make generative AI programs such as chatbots
more powerful by letting them connect with various resources, including packaged
software such as databases. Multiple studies, however, reveal that even the best
AI models struggle to use
Model Context Protocol. ... Having a standard does not
mean that an AI model, whose functionality includes a heavy dose of chance
("probability" in technical terms), will faithfully implement MCP. An AI model
plugged into MCP has to generate output that achieves several things, such as
formulating a plan to answer a query by choosing which external resources to
access, in what order to contact the MCP servers that lead to those external
applications, and then structuring several requests for information to produce a
final output to answer the query. ... The immediate takeaway from the various
benchmarks is that AI models need to adapt to a new epoch in which using MCP is
a challenge, and they may have to evolve in new directions to meet it. All
three studies identify the same problem: performance degrades as the AI
models have to access more MCP servers. The complexity of multiple resources
starts to overwhelm even the models that can best plan what steps to take at the
outset. As Wu and team put it in their MCPMark paper, the complexity of all
those MCP servers strains any AI model's ability to keep track of it all.
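
One way to see why more servers hurt, suggested by the benchmarks' findings rather than stated in them: even before any call is made, the space of possible plans the model must discriminate between grows combinatorially with the number of servers. A quick Python illustration:

```python
# Why more MCP servers strain a model: the space of possible call plans
# (which servers, in what order) grows combinatorially. An illustration
# suggested by the benchmarks' findings, not a figure from the papers.

from math import comb, factorial

def call_plans(n_servers: int, steps: int) -> int:
    """Ways to pick `steps` distinct servers and order the calls."""
    return comb(n_servers, steps) * factorial(steps)

for n in (3, 10, 30):
    print(f"{n:>2} servers -> {call_plans(n, 3):>6} possible 3-step plans")
# 3 servers -> 6; 10 servers -> 720; 30 servers -> 24360
```
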
A common misconception is that cloud environments automatically provide
application resiliency, eliminating the need for testing. Although cloud
providers do offer various levels of resiliency and SLAs for their cloud
products, these alone do not guarantee that your business applications are
protected. If applications are not designed to be fault-tolerant or if they
assume constant availability of cloud services, they will fail when a particular
cloud service they depend on is not available. ... As a proactive discipline,
chaos engineering enables organizations to identify weaknesses in their systems
before they lead to significant outages or failures, where a system includes not
only the technology components but also the people and processes of an
organization. By introducing controlled, real-world disruptions, chaos
engineering helps test a system's robustness, recoverability, and fault
tolerance. This approach allows teams to uncover potential vulnerabilities, so
that systems are better equipped to handle unexpected events and continue
functioning smoothly under stress. ...
Chaos Toolkit is an open-source framework
written in
Python that provides a modular architecture where you can plug in
other libraries (also known as ‘drivers’) to extend your chaos engineering
experiments. ... To enable Google Cloud customers and engineers to introduce
chaos testing in their applications, we’ve created a series of Google
Cloud-specific chaos engineering recipes. Each recipe covers a specific scenario
to introduce chaos in a particular Google Cloud service.
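
Because Chaos Toolkit drivers are ordinary Python modules, a custom activity can be as small as the pair below; an experiment file would reference such functions by module and function name. The health-check URL and latency bounds here are invented for illustration:

```python
# A minimal custom action/probe pair in the Chaos Toolkit style. The URL
# and latency bounds are invented for illustration.

import random
import urllib.request

def inject_latency(min_ms: int = 100, max_ms: int = 500) -> int:
    """Action: report how much latency was (notionally) injected."""
    injected = random.randint(min_ms, max_ms)
    # A real driver would reconfigure a proxy or service mesh here.
    return injected

def service_is_healthy(url: str = "http://localhost:8080/healthz") -> bool:
    """Steady-state probe: the experiment aborts if this returns False."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False
```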

The deep, non-deterministic nature of the underlying Large Language Models
(LLMs) and the complex, multi-step reasoning they perform create systems where
key decisions are often unexplainable. When an AI agent performs an unauthorized
or destructive action, auditing it becomes nearly impossible. ... When you give
an AI agent autonomy and tool access, you create a new class of trusted digital
insider. If that agent is compromised, the attacker inherits all its
permissions. An autonomous agent, which often has persistent access to critical
systems, can be compromised and used to move laterally across the network and
escalate privileges. The consequences of this over-permissioning are already
being felt. ... The sheer speed and scale of agent autonomy demand a shift from
traditional perimeter defense to a
Zero Trust model specifically engineered for
AI. This is no longer an optional security project; it is an organizational
mandate for any leader deploying AI agents at scale. ... Securing Agentic
AI is not just about extending your traditional security tools. It requires a
new governance framework built for autonomy, not just execution. The complexity
of these systems demands a new security playbook focused on control and
transparency ... The future of enterprise efficiency is agentic, but the future
of enterprise security must be built around controlling that agency.
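
One concrete page of that playbook is default-deny tool access: each agent gets an explicit allow-list, and every decision is logged toward the audit trail the excerpt says is otherwise nearly impossible. A minimal sketch, with made-up agent and tool names:

```python
# Default-deny tool gating for agents, with an audit line per decision.
# Agent IDs and tool names are made up for illustration.

AGENT_ALLOWLIST = {
    "billing-agent": {"read_invoice", "create_credit_note"},
    "support-agent": {"read_ticket", "post_reply"},
}

def authorize(agent_id: str, tool: str) -> None:
    """Raise unless the tool is explicitly granted to this agent."""
    allowed = AGENT_ALLOWLIST.get(agent_id, set())
    decision = "ALLOW" if tool in allowed else "DENY"
    print(f"{decision} {agent_id} -> {tool}")  # audit trail
    if decision == "DENY":
        raise PermissionError(f"{agent_id} may not call {tool}")

authorize("support-agent", "post_reply")       # allowed
# authorize("support-agent", "drop_database")  # raises PermissionError
```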

In practice, a major flaw in many technology projects is that existing
multi-level approval systems are simply digitalised, leading to only marginal
improvements. The process becomes a
digital twin of the old: while processing
speeds increase, the workflow itself remains long, redundant, and often
cumbersome. The introduction of a new digital interface adds to the woes rather
than simplifying them. Had processes been genuinely reengineered, digitisation
could have saved time by simplifying steps, reducing the training load,
improving efficiency, cutting costs, and enabling quicker adaptation in response
to change. Another persistent pitfall in public sector digital transformation is
misunderstanding the promise of analytics, and more crucially, confusing outputs
with outcomes. ... Humans, as players in nature’s game, are unique. Evolution
gifted us consciousness, language, memory, and complex social bonds—traits that
allowed the creation of technology, law, storytelling, and culture. Yet these
very blessings seeded traits antithetical to nature’s raw logic ... Artificial
intelligence presents a tantalising prospect. Unlike its human creators, a
well-designed AI can, under ideal circumstances, create technologies based on
the same bias-free principles that drive nature: redesign for purpose, learn and
adapt from data, and commit to real, measurable outcomes.

The law is set to come into effect on Jan. 1, 2026, and requires chatbot
operators to implement age verification and warn users of the risks of companion
chatbots. The bill imposes harsher penalties for anyone profiting from
illegal deepfakes, with fines of up to $250,000 per offense. In addition,
technology companies must establish protocols that seek to prevent self-harm and
suicide. These protocols will have to be shared with the California Department
of Health to ensure they’re suitable. Companies will also be required to share
statistics on how often their services issue crisis center prevention alerts to
their users. Some AI companies have already taken steps to protect children,
with
OpenAI recently introducing parental controls and content safeguards in
ChatGPT, along with a self-harm detection feature. Meanwhile,
Character AI has
added a disclaimer to its chatbot that reminds users that all chats are
generated by AI and fictional. Newsom is no stranger to AI legislation. In
September, he signed into law another bill called
SB 53, which mandates greater
transparency from AI companies. More specifically, it requires AI firms to be
fully transparent about the safety protocols they implement, while providing
protections for whistleblower employees. The bill means that California is the
first U.S. state to require AI chatbots to implement safety protocols, but other
states have previously introduced more limited legislation.

Treating security as a separate discipline leads to inefficiencies,
redundancies, and vulnerabilities. Bolting on security after systems are
designed often results in costly retrofits, fragmented controls, and misaligned
priorities. It also creates friction between teams — where security is seen as a
blocker rather than a partner. Integrating ESA into EA from the outset changes
the dynamic. It ensures that security is considered in every architectural
decision — from business processes to data flows, from application design to
infrastructure deployment. It aligns security with business goals, reduces risk
exposure, and accelerates delivery. ... ISM brings operational rigor to ESA. It
defines how security is implemented, monitored, and improved. ISM includes
identity and access management, continuity planning, compliance management, and
security awareness. When ISM is integrated into EA, security becomes part of the
enterprise fabric. It’s not just a set of policies — it’s a way of working. ...
This integration is not a technical adjustment — it’s a strategic evolution. It
requires collaboration, shared language, and a commitment to embedding security
into every architectural decision. When done right, it reduces risk, accelerates
delivery, and builds confidence across the enterprise. Security by design is not
a luxury — it’s a necessity. And EA Capability is how we make it real.