Quote for the day:
"The ladder of success is best climbed
by stepping on the rungs of opportunity." -- Ayn Rand

Most legacy infrastructure consists of tried-and-true solutions. If a business
has been using a legacy system for years, it's a reliable investment. It may not
be as optimal from a cost, scalability, or security perspective as a more modern
alternative. But in some cases, this drawback is outweighed by the fact that —
unlike a new, as-yet-unproven solution — legacy systems can be trusted to do
what they claim to do because they've already been doing it for years. The fact
that legacy systems have been around for a while also means that it's often easy
to find engineers who know how to work with them. Hiring experts in the latest,
greatest technology can be challenging, especially given the widespread IT
talent shortage. But if a technology has been in widespread use for decades, IT
departments don't need to look as hard to find staff qualified to support it.
... From a cost perspective, too, legacy systems have their benefits. Even if
they are subject to technical debt or operational inefficiencies that increase
costs, sticking with them may be a more financially sound move than undertaking
a costly migration to an alternative system, which may itself present unforeseen
cost drawbacks. ... As for security, it's hard to argue that a system with
inherent, incurable security flaws is worth keeping around. However, some IT
systems can offer security benefits not available on more modern
alternatives.

“If you want to remove or give agency to a platform tool to make decisions on
your behalf, you have to gain a lot of trust in the system to make sure that it
is acting in your best interest,” Seri says. “It can hallucinate, and you have
to be vigilant in maintaining a chain of evidence between a conclusion that the
system gave you and where it came from.” ... “Everyone’s creating MCP servers
for their services to have AI interact with them. But an MCP at the end of the
day is the same thing as an API. [Don’t make] all the same mistakes that people
made when they started creating APIs ten years ago. All these authentication
problems and tokens, everything that’s just API security.” ... CISOs need to
immediately strap in and grapple with the implications of a technology that they
do not always fully control, if for no other reason than their team members will
likely turn to AI platforms to develop their security solutions. “Saying no
doesn’t work. You have to say yes with guardrails,” says Mesta. At this still
nascent stage of agentic AI, CISOs should ask questions, Riopel says. But he
stresses that the main “question you should be asking is: How can I force
multiply the output or the effectiveness of my team in a very short period of
time? And by a short period of time, it’s not months; it should be days. That is
the type of return that our customers, even in enterprise-type environments, are
seeing.”
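
To make the MCP-is-just-an-API parallel concrete, here is a minimal, standard-library-only Python sketch (not from the article) of bearer-token checking on an MCP-style tool endpoint; the handler, header names, and MCP_SERVER_TOKEN environment variable are hypothetical stand-ins.

```python
import hmac
import os

# Hypothetical shared secret; in practice this would come from a secrets
# manager rather than a plain environment variable.
EXPECTED_TOKEN = os.environ.get("MCP_SERVER_TOKEN", "")

def authorize(headers: dict) -> bool:
    """Check the Authorization header with a constant-time comparison."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    # hmac.compare_digest avoids leaking token contents via timing differences.
    return bool(EXPECTED_TOKEN) and hmac.compare_digest(presented, EXPECTED_TOKEN)

def handle_tool_call(headers: dict, tool: str, args: dict) -> dict:
    """Reject unauthenticated requests before any tool logic runs."""
    if not authorize(headers):
        return {"error": "unauthorized", "status": 401}
    # ... dispatch to the named tool here ...
    return {"result": f"{tool} executed", "status": 200}

if __name__ == "__main__":
    print(handle_tool_call({"Authorization": "Bearer wrong"}, "list_files", {}))
```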

As operational changes increasingly affect the financial and social health of
digital enterprises, security is assuming a more prominent role in business
discussions. Executive leadership is pivotal in ensuring enterprise security,
and it's vital for business operations and security to be aligned and
coordinated. Data governance
is integral in coordinating cross-functional activity to achieve this
requirement. Executive leadership buy-in coordinates and supports security
initiatives, and executive sponsorship sets the tone and provides the
resources necessary for program success. As a result, security professionals
are increasingly represented in board seats and C-suite positions. In public
acknowledgment of this responsibility, executive leadership is increasingly
held accountable for security breaches, with some being found personally
liable for negligence. Today, enterprise security is the responsibility of
multiple teams. IT infrastructure, IT enterprise, information security,
product teams, and cloud teams work together in functional unity but require a
sponsor for the security program. Zero trust security complements operations
due to its strict role definition, process mapping, and monitoring practices,
making compliance more manageable and automatable. Whatever the region, the
trend is toward increased reporting and compliance. As a result, data
governance and security are closely intertwined.

Every organization uses a unique mix of tools, from mainstream platforms such
as Salesforce to industry-specific applications that only a handful of
companies use. Traditional vendors can't economically justify building
connectors for niche tools that might only have 100 users globally. This is
where open source fundamentally changes the game. The math that doesn't work
for proprietary vendors, where each connector needs to generate significant
revenue, becomes irrelevant when the users themselves are the builders. ...
The truth about AI is that it isn't about using the best LLMs or the most
powerful GPUs; AI is only as good as the data it ingests. I've seen Fortune
500 companies with data locked in legacy ERPs from
the 1990s, custom-built internal tools, and regional systems that no vendor
supports. This data, often containing decades of business intelligence,
remains trapped and unusable for AI training. Long-tail connectors change this
equation entirely. When the community can build connectors for any system, no
matter how obscure, decades of insights can be unlocked and unleashed. This
matters enormously for AI readiness. Training effective models requires real
data context, not a selected subset from cloud native systems incorporated
just 10 years ago. Companies that can integrate their entire data estate,
including legacy systems, gain massive advantages. More data fed into AI leads
to better results.
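
As a rough illustration of what a long-tail connector involves, the Python sketch below shows a generic source interface and a hypothetical connector for a legacy ERP that can only export CSV files; the class names and streams are invented for illustration and do not come from any particular framework.

```python
import csv
from abc import ABC, abstractmethod
from typing import Dict, Iterator, List

class SourceConnector(ABC):
    """Minimal connector contract: describe streams, then emit records."""

    @abstractmethod
    def discover(self) -> List[str]:
        """Return the names of data streams this source exposes."""

    @abstractmethod
    def read(self, stream: str) -> Iterator[Dict[str, str]]:
        """Yield records from one stream as plain dictionaries."""

class LegacyErpExportConnector(SourceConnector):
    """Hypothetical connector for a 1990s ERP that can only dump CSV files."""

    def __init__(self, export_dir: str):
        self.export_dir = export_dir

    def discover(self) -> List[str]:
        return ["invoices", "customers"]

    def read(self, stream: str) -> Iterator[Dict[str, str]]:
        path = f"{self.export_dir}/{stream}.csv"
        with open(path, newline="", encoding="latin-1") as handle:
            yield from csv.DictReader(handle)

# Downstream pipelines only ever see the generic interface, so even a niche,
# single-company system becomes usable data for AI training.
def sync(connector: SourceConnector) -> None:
    for stream in connector.discover():
        for record in connector.read(stream):
            print(stream, record)
```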

Operating generative AI language models requires huge amounts of compute
power. This is provided by vast data centers that burn through energy at rates
comparable to small nations, creating poisonous emissions and noise pollution.
They consume massive amounts of water at a time when water scarcity is
increasingly a concern. Critics of the idea that AI's environmental harm
outweighs its benefits often believe that this damage will be offset by
efficiencies that AI itself will create. ... The threat that AI
poses to privacy is at the root of this one. With its ability to capture and
process vast quantities of personal information, there’s no way to predict how
much it might know about our lives in just a few short years. Employers
increasingly monitoring and analyzing worker activity, the growing number of
AI-enabled cameras on our devices and in our streets, vehicles, and homes, and
police forces rolling out facial-recognition technology all raise anxiety
that soon no corner will be safe from prying AIs. ... AI enables and
accelerates the spread of misinformation, making it quicker and easier to
disseminate, more convincing, and harder to detect: from deepfake videos of
world leaders saying or doing things that never happened, to conspiracy
theories flooding social media in the form of stories and images designed to
go viral and cause disruption.

In many organizations, quality is still siloed, handed off to QA or engineering
teams late in the process. But high-performing companies treat quality as a
shared responsibility. The business, product, development, QA, release, and
operations teams all collaborate to define what "good" looks like. This culture
of shared ownership drives better business outcomes. It reduces rework, shortens
release cycles, and improves time to market. More importantly, it fosters
alignment between technical teams and business stakeholders, ensuring that
software investments deliver measurable value. ... A strong quality strategy
delivers measurable benefits across the entire enterprise. When teams focus on
building quality into every stage of the development process, they spend less
time fixing bugs and more time delivering innovation. This shift enables faster
time to market and allows organizations to respond more quickly to changing
customer needs. The impact goes far beyond the development team. Fewer defects
lead to a better customer experience, resulting in higher satisfaction and
improved retention. At the same time, a focus on quality reduces the total cost
of ownership by minimizing rework, preventing incidents, and ensuring more
predictable delivery cycles. Confident in their processes and tools, teams gain
the agility to release more frequently without the fear of failure.

Tiwary, formerly of Barracuda Networks and now a venture principal and board
member, described the phenomenon as “Service as Software” — a flip of the
familiar SaaS acronym that points to a fundamental shift. Instead of hiring more
humans to deliver incremental services, organizations are looking at whether AI
can deliver those same services as software: infinitely scalable, lower cost,
always on. ... Yes, “Service as Software” is a clever phrase, but Hoff bristles
at the way “agentic AI” is invoked as if it’s already a settled, mature
category. He reminds us that this isn’t some radical new direction — we’ve been
on the automation journey for decades, from the codification of security to the
rise of cloud-based SOC tooling. GenAI is an iteration, not a revolution. And
with each iteration comes risk. Automation without full agency can create as
many headaches as it solves. Hiring people who understand how to wield GenAI
responsibly may actually increase costs — try finding someone who can wrangle
KQL, no-code workflows, and privileged AI swarms without commanding a premium
salary. ... The future of “Service as Software” won’t be defined by clever turns
of phrase or venture funding announcements. It will be defined by the daily
grind of adoption, iteration and timing. AI will replace people in some
functions.

Performance testing is mandatory when your system handles critical traffic.
The first step of every upgrade is to collect baseline performance data while
running detailed stress tests that replicate actual workloads. Testing should
cover typical happy-path executions as well as edge cases, peak traffic
conditions, and failure scenarios to surface performance bottlenecks. ... Every
organization should create formal rollback procedures. A defined rollback
approach must accompany every migration and upgrade, regardless of whether you
ever expect to use it; proceeding without one creates a one-way entry with no
exit plan, which puts you at risk. Rollback procedures need proper
documentation and validation, and should, where possible, undergo independent
testing. ... Never add improvements during upgrades or migrations – not even a
single log line. This discipline might seem excessive, but it's crucial for
maintaining clarity during troubleshooting. Migrate the system exactly as it
is, then tackle improvements in a separate, subsequent deployment. ...
Successfully implementing zero-downtime upgrades at scale takes more than
technical skill: it requires systematic preparation, clear communication, and
an experience-based understanding of potential issues.
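
The baseline-collection step described above can be as simple as replaying traffic and recording latency percentiles before the upgrade begins. The Python sketch below illustrates the idea under the assumption of a single hypothetical staging URL; a real baseline would replay a recorded sample of production traffic and capture throughput and error rates as well.

```python
import statistics
import time
import urllib.request

# Hypothetical staging endpoint and sample size; placeholders, not real values.
TARGET = "https://staging.example.com/health"
SAMPLES = 200

def measure_latency(url: str, samples: int) -> list:
    """Issue sequential requests and record wall-clock latency in milliseconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

def summarize(latencies: list) -> dict:
    """Baseline numbers to compare against after the upgrade."""
    ordered = sorted(latencies)
    return {
        "p50_ms": statistics.median(ordered),
        "p95_ms": ordered[int(0.95 * (len(ordered) - 1))],
        "p99_ms": ordered[int(0.99 * (len(ordered) - 1))],
        "max_ms": ordered[-1],
    }

if __name__ == "__main__":
    print(summarize(measure_latency(TARGET, SAMPLES)))
```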

Developed by David Rock in 2008, the SCARF model provides a comprehensive
framework for understanding human social behavior through five critical domains
that trigger either threat or reward responses in the brain. These domains
encompass Status (our perceived importance relative to others), Certainty (our
ability to predict future outcomes), Autonomy (our sense of control over
events), Relatedness (our sense of safety and connection with others), and
Fairness (our perception of equitable treatment). The significance of this
framework lies in its neurological foundation. These five social domains
activate the same neural pathways that govern our physical survival responses,
which explains why perceived social threats can generate reactions as intense as
those triggered by physical danger. ... As AI systems become embedded in daily
workflows, governance frameworks must actively monitor and support the evolving
human-AI relationships. Organizations can create mechanisms for publicly
recognizing successful human-AI collaborations while implementing regular
“performance reviews” that explain how AI decision-making evolves. Establish
clear protocols for human override capabilities, foster a team identity that
includes AI as a valued contributor, and conduct regular bias audits to ensure
equitable AI performance across different user groups.

Security teams are used to drowning in alerts. Most are false positives, some
are low risk, only a few matter. AI is helping to cut through this mess. Vendors
have been building machine learning models to sort and score alerts. These tools
learn over time which signals matter and which can be ignored. When tuned well,
they can bring alert volumes down by more than half. That gives analysts more
time to look into real threats. GenAI adds something new. Instead of just
ranking alerts, some tools now summarize what happened and suggest next steps.
One prompt might show an analyst what an attacker did, which systems were
touched, and whether data was exfiltrated. This can save time, especially for
newer analysts. ... “Humans are still an important part of the process. Analysts
provide feedback to the AI so that it continues to improve, share
environment-specific insights, maintain continuous oversight, and handle
things AI can’t deal with today,” said Tom Findling, CEO of Conifers. “CISOs
should start by targeting areas that consume the most resources or carry the
highest risk, while creating a feedback loop that lets analysts guide how the
system evolves.” ... Entry-level analysts may no longer spend all day clicking
through dashboards. Instead, they might focus on verifying AI suggestions and
tuning the system.
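
As a toy illustration of alert scoring (assuming scikit-learn is available, with invented features and labels rather than any vendor's model), the sketch below trains a classifier on analyst-labeled historical alerts and ranks new alerts by the predicted probability that they matter.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Invented historical alerts labeled by analysts: 1 = real issue, 0 = noise.
history = [
    ({"source": "edr", "severity": 3, "asset_critical": 1}, 1),
    ({"source": "edr", "severity": 1, "asset_critical": 0}, 0),
    ({"source": "waf", "severity": 2, "asset_critical": 0}, 0),
    ({"source": "idp", "severity": 4, "asset_critical": 1}, 1),
    ({"source": "waf", "severity": 4, "asset_critical": 1}, 1),
    ({"source": "idp", "severity": 1, "asset_critical": 0}, 0),
]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform([features for features, _ in history])
y = [label for _, label in history]

model = LogisticRegression().fit(X, y)

new_alerts = [
    {"source": "edr", "severity": 4, "asset_critical": 1},
    {"source": "waf", "severity": 1, "asset_critical": 0},
]
scores = model.predict_proba(vectorizer.transform(new_alerts))[:, 1]

# Analysts review the highest-scoring alerts first; low scores can be
# auto-suppressed once the feedback loop has proven the model out.
for alert, score in sorted(zip(new_alerts, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}", alert)
```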