Quote for the day:
"Positive thinking will let you do everything better than negative thinking will." -- Zig Ziglar
Strategies for measuring success and unlocking business value in cloud adoption

Transitioning to a cloud-based operation involves a dual-pronged strategy. While
cost optimization requires right-sizing resources, leveraging discounted
instances, and implementing auto-scaling based on demand, accurately forecasting
demand and navigating complex cloud pricing structures can be difficult.
Likewise, while scalability is enabled by containerization, serverless
computing, and infrastructure automation, managing complex applications,
ensuring security during scaling, and avoiding vendor lock-in present additional
challenges. Therefore, organizations must continuously monitor and adapt their
strategies while addressing these challenges. ... An effective cloud strategy
aligns with business goals through a strong governance framework that prioritizes
security, compliance, and cost optimization, while being flexible to accommodate
growth. Piloting non-critical applications can help refine this strategy before
larger migrations. ... Companies must first assess their cloud maturity to
identify areas for improvement. This includes optimizing their cloud mix by
exploring different cloud providers or cost structures, providing regular policy
updates for compliance, cultivating a continuous improvement culture,
proactively addressing challenges, and having active leadership involvement in
the cloud vision for stakeholder buy-in.
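
To make the auto-scaling piece concrete, here is a minimal sketch of a demand-based scaling policy using boto3; the Auto Scaling group name and the 50 percent CPU target are illustrative assumptions, not details from the article.

```python
# Minimal sketch: a demand-based target-tracking policy on an EC2 Auto
# Scaling group. "web-tier-asg" and the 50% CPU target are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",   # assumed, must already exist
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Add or remove instances to hold average CPU near 50%
        "TargetValue": 50.0,
    },
)
```

A policy like this handles elasticity; right-sizing and discounted (spot or reserved) instances then address the cost side that scaling alone cannot.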
Three Keys to Mastering High Availability in Your On-Prem Data Center

A cornerstone of high availability is the redundancy of IT infrastructure. By
identifying potential critical single points of failure and, where possible,
ensuring there is an option for failover to a secondary resource, you can reduce
the risk of downtime in the event of an incident. Redundancy should extend
across both hardware and software layers. Implementing failover clusters,
resilient networking paths, storage redundancy using RAID, and offsite data
replication for disaster recovery are proven strategies. Adopting a hybrid or
multi-cloud approach can also reduce reliance on any single service provider. If
you operate an off-site data center, ensure it is not dependent on the same
power source as your main campus. Be sure to have a disaster recovery and
business continuity plan that includes local and offsite backup storage. ...
Whether your infrastructure is on-premises, cloud-based, or hybrid, the other
key component to achieving high availability is the establishment of failover
clusters to facilitate – and even automate – the movement of services and
workloads to a secondary resource. Whether hardware (SAN-based) or software
(SANless), clusters support the seamless failover of services to backup
resources and ensure continuity in the event of severely degraded performance
or an outage.
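
As a rough illustration of the failover idea (not any particular clustering product), the sketch below probes a primary endpoint and promotes a standby after repeated failed health checks; the endpoints and threshold are assumptions. Production clusters such as Pacemaker or Windows Server Failover Clustering add quorum, fencing, and state replication on top of this basic loop.

```python
# Toy failover monitor: promote a standby after consecutive failed health
# checks. Endpoints and threshold are assumptions for illustration only.
import time
import urllib.request

PRIMARY = "http://primary.internal/health"   # hypothetical endpoints
STANDBY = "http://standby.internal/health"
FAILURE_THRESHOLD = 3                        # consecutive failures to trip

def healthy(url: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor() -> None:
    active, failures = PRIMARY, 0
    while True:
        if healthy(active):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD and active == PRIMARY:
                active = STANDBY             # fail over to the secondary
                print("Primary unhealthy; failing over to standby")
        time.sleep(5)                        # probe interval
```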
Targeted phishing gets a new hook with real-time email validation

The problem facing defenders is that the tactic prevents security teams from doing
further analysis and investigation, says the Cofense report. Automated
security crawlers and sandbox environments also struggle to analyze these
attacks because they cannot bypass the validation filter, the report adds. ...
“The only real solution,” he said, “is to move away from traditional
credentials to phishing-safe authentication methods like Passkeys. The goal
should be to protect from leaked credentials, not block user account
verification.” Attackers verifying that email addresses are deliverable, or
associated with specific individuals, is nothing fundamentally new, he added.
Initially, attackers used the mail server’s “VRFY” command to verify whether an
address was deliverable. This still works in a few cases. Next, attackers
relied on “non-delivery receipts,” the bounce messages you may receive when
an email address does not exist, to infer which addresses were live.
Both techniques work pretty well to determine whether an email address is
deliverable, but they do not reveal whether the address is connected to a
human or whether its messages are actually read. The next step, Ullrich said,
was sending
obvious spam, but including an “unsubscribe” link. If a user clicks on the
“unsubscribe” link, it confirms that the email was opened and read.
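
For the curious, the “VRFY” probe Ullrich describes maps directly onto smtplib's verify() call; the mail host and address below are hypothetical. Most modern servers disable VRFY or answer with a non-committal 252, which is exactly why attackers moved on to bounce messages and unsubscribe links.

```python
# Sketch of the classic SMTP VRFY probe. The host and address are
# hypothetical; many servers now refuse VRFY or answer 252 (unverified).
import smtplib

def vrfy_address(mail_host: str, address: str) -> tuple[int, str]:
    with smtplib.SMTP(mail_host, 25, timeout=10) as smtp:
        code, message = smtp.verify(address)   # sends "VRFY <address>"
        # 250/251 suggest deliverable; 252 means "cannot verify"
        return code, message.decode(errors="replace")

# Example (hypothetical host):
# print(vrfy_address("mail.example.com", "user@example.com"))
```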
Data Hurdles, Expertise Loss Hampering BCBS 239 Compliance

Soon after BCBS 239 was introduced, it was abundantly clear that there was a
gulf between ECB expectations and banks’ delivery. In late 2018 the central
bank found that 59 per cent of in-scope institutions turned in regulatory
reports with at least one failing validation rule, and almost 7 per cent of
data points were missing from them. The ECB began a “supervisory strategy” in
2022 to close the gap, running until 2024. In May of that year it published a
guide that clarified what the overseers expected of banks and embarked on
targeted reviews of RDARR capabilities. ... The supervisor blamed
“deficiencies” on governance shortcomings, fragmented IT infrastructures and a
high level of manual aggregation processing, but admitted “remediation of
RDARR deficiencies is often costly, carries significant risk and takes time”.
Carroll said that the breadth of the data management effort needed to comply
with BCBS 239 has slowed adoption of the capabilities necessary for
compliance. “They’re spending so much time planning for BCBS and thinking
about what they need to do and what they need to have in place, and the tools
that they need and the frameworks that they might need to put in place,” he
said. ... “Hindered by outdated IT systems unsuitable for modern data
management functions, they struggle with data silos and inconsistent,
inaccurate risk reporting,” Ergin told Data Management Insight.
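
To give a flavor of the “validation rules” these reports fail, here is a toy sketch of completeness and consistency checks in pandas; the field names and rules are invented for illustration and come from no BCBS 239 template.

```python
# Toy completeness/consistency checks of the kind regulatory reports fail.
# Column names and rules are invented for illustration.
import pandas as pd

report = pd.DataFrame({
    "exposure_id": ["E1", "E2", "E3"],
    "gross_exposure": [100.0, 250.0, None],   # a missing data point
    "collateral": [40.0, 300.0, 10.0],
})

def run_validations(df: pd.DataFrame) -> list[str]:
    failures = []
    # Completeness rule: every exposure must report a gross amount
    if df["gross_exposure"].isna().any():
        failures.append("completeness: gross_exposure has missing values")
    # Consistency rule: collateral should not exceed the gross exposure
    if (df["collateral"] > df["gross_exposure"]).any():
        failures.append("consistency: collateral exceeds gross_exposure")
    return failures

print(run_validations(report))   # both toy rules fail on this report
```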
Can We Learn to Live with AI Hallucinations?

Sometimes, LLMs hallucinate for no good reason. Vectara CEO Amr Awadallah says
LLMs are subject to the limitations of data compression on text as expressed
by the Shannon Information Theorem. Once LLMs compress text beyond a certain
point (12.5%), they enter what’s called the “lossy compression zone” and lose
perfect recall. That leads us to the inevitable conclusion that the tendency
to fabricate isn’t a bug, but a feature, of these types of probabilistic
systems. What do we do then? ... Instead of using a general-purpose LLM,
fine-tuning open source LLMs on smaller sets of domain- or industry-specific
data can also improve accuracy within that domain or industry. Similarly, a
new generation of reasoning models, such as DeepSeek-R1 and OpenAI o1, trained
on smaller domain-specific data sets, includes a feedback mechanism that
allows the model to explore different ways to answer a question: the
so-called “reasoning” steps. Implementing guardrails is another technique.
Some organizations use a second, specially crafted AI model to interpret the
results of the primary LLM. When a hallucination is detected, it can tweak the
input or the context until the results come back clean. Similarly, keeping a
human in the loop to detect when an LLM is headed off the rails can also help
avoid some of an LLM’s worst fabrications.
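
A minimal sketch of that guardrail loop appears below; both model calls are hypothetical stubs standing in for real LLM APIs, and a production checker would of course do real grounding analysis.

```python
# Sketch of the guardrail loop: a checker model vets the primary LLM's
# draft and the context is amended until the result comes back clean.
# generate() and detect_hallucination() are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class Verdict:
    flagged: bool
    reason: str = ""

def generate(question: str, context: str) -> str:
    # Stand-in for the primary LLM call (assumed API).
    return f"Draft answer to {question!r} using {len(context)} chars of context"

def detect_hallucination(draft: str, context: str) -> Verdict:
    # Stand-in for the second, specially crafted checker model.
    return Verdict(flagged=False)

def answer_with_guardrail(question: str, context: str, retries: int = 3) -> str:
    for _ in range(retries):
        draft = generate(question, context)
        verdict = detect_hallucination(draft, context)
        if not verdict.flagged:
            return draft                                  # passed the guardrail
        context += "\nReviewer note: " + verdict.reason   # tweak and retry
    return "No grounded answer found; hand off to a human reviewer."

print(answer_with_guardrail("What is our refund policy?", "Policy text..."))
```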
How Technical Debt Can Quietly Kill Your Company — And the metrics that can save you

Beyond the direct financial drain, technical debt imposes crippling
operational gridlock. Development velocity plummets; Protiviti suggests
significant slowdowns, potentially up to 30%, as teams battle complexity. For
Product and Delivery, this means longer lead times, missed deadlines, reduced
predictability, and a sluggish response to market changes. Each new feature
built on a weak foundation takes longer than the last. Maintenance costs
simultaneously escalate. Developers spend disproportionate time debugging
obscure issues, patching old components, and managing complex workarounds.
These activities can consume up to 40% of the total value of a technology
estate over its lifetime: an escalating “maintenance tax” that diverts focus
from value creation. Crucially, technical debt is a major barrier to
innovation. Nearly 70% of organizations acknowledge this, according to
Protiviti’s polls. When teams are constantly firefighting, constrained by
legacy architecture, and navigating brittle code, their capacity for creative
problem-solving and experimentation evaporates. The operational drag prevents
exploration, limiting the company’s potential for growth and differentiation.
Nokia’s decline serves as a stark cautionary tale of operational gridlock
leading to strategic failure. Its dominance in mobile phones evaporated with
the rise of smartphones.
How tech giants like Netflix built resilient systems with chaos engineering

Chaos Engineering is a discipline within software engineering that focuses on
testing the limits and vulnerabilities of a system by intentionally injecting
chaos—such as failures or unexpected events—into it. The goal is to uncover
weaknesses before they impact real users, ensuring that systems remain robust,
self-healing, and reliable under stress. The idea is based on the
understanding that systems will inevitably experience failures, whether due to
hardware malfunctions, software bugs, network outages, or human error. ...
Netflix is widely regarded as one of the pioneers in applying Chaos
Engineering at scale. Given its global reach and the importance of providing
uninterrupted service to millions of users, Netflix knew that simply assuming
everything would work smoothly all the time was not an option. Its
microservices architecture, a collection of loosely coupled services, meant
that even the smallest failure could cascade and result in significant
downtime for its customers. The company wanted to ensure that it could
continue to stream high-quality video content, provide personalized
recommendations, and maintain a stable infrastructure—no matter what failure
scenarios might arise. To do so, Netflix turned to Chaos Engineering as a
cornerstone of its resilience strategy.
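
In that spirit, here is a toy sketch of chaos injection: a decorator that makes a dependency randomly fail or stall so you can watch whether calling code degrades gracefully. The probabilities and the flaky_dependency() target are illustrative assumptions; Netflix's actual Chaos Monkey terminates instances in production rather than wrapping function calls.

```python
# Toy chaos injection: a decorator that makes a dependency randomly fail
# or stall, so resilience paths (retries, fallbacks) get exercised.
import random
import time

def chaos(p_fail: float = 0.1, p_delay: float = 0.1, delay_s: float = 2.0):
    def wrap(fn):
        def wrapped(*args, **kwargs):
            roll = random.random()
            if roll < p_fail:
                raise ConnectionError("chaos: injected failure")
            if roll < p_fail + p_delay:
                time.sleep(delay_s)          # injected latency
            return fn(*args, **kwargs)
        return wrapped
    return wrap

@chaos(p_fail=0.2)                           # illustrative failure rate
def flaky_dependency() -> str:
    return "ok"

# Watch how calling code copes: the except branch is the resilience path.
results = []
for _ in range(10):
    try:
        results.append(flaky_dependency())
    except ConnectionError:
        results.append("fallback")
print(results)
```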
The AI model race has suddenly gotten a lot closer, say Stanford scholars

Bommasani and team don't make any predictions about what happens next in the
crowded field, but they do see a very pressing concern for the benchmark tests
used to evaluate large language models. Those tests are becoming saturated --
even some of the most demanding, such as the HumanEval benchmark created in
2021 by OpenAI to test models' coding skills. That affirms a feeling seen
throughout the industry these days: It's becoming harder to accurately and
rigorously compare new AI models. ... In response, note the authors, the field
has developed new ways to construct benchmark tests, such as Humanity's Last
Exam, which has human-curated questions formulated by subject-matter experts;
and Arena-Hard-Auto, a test created by the non-profit Large Model Systems
Corp., using crowd-sourced prompts that are automatically curated for
difficulty. ... Bommasani and team conclude that standardizing across
benchmarks is essential going forward. "These findings underscore the need for
standardized benchmarking to ensure reliable AI evaluation and to prevent
misleading conclusions about model performance," they write. "Benchmarks have
the potential to shape policy decisions and influence procurement decisions
within organizations, highlighting the importance of consistency and rigor in
evaluation."
From likes to leaks: How social media presence impacts corporate security

Cybercriminals can use social media to build a relationship with employees and
manipulate them into performing actions that jeopardize corporate security.
They can impersonate colleagues, business partners, or even executives, using
information obtained from social media to sound convincing. ... Many employees
use the same passwords for personal social media accounts as for their work
accounts, putting corporate data at risk. While convenient, this practice
means that if a personal account is compromised, attackers could gain access
to work-related systems as well. ... CISOs must now account for employee
behavior beyond the firewall. The attack surface no longer ends at corporate
endpoints; it stretches into LinkedIn profiles, Instagram vacation posts, and
casual tweets. Companies should establish policies regarding what employees
are permitted to post on social media, especially about their work and
workplace. ... The problem with social media posts is that there is a fine
line between personal privacy and company security. CISOs have to walk that
line, keeping the company secure without policing what employees do on their
own time. This
is why privacy awareness training should be integrated with cybersecurity
policies.
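
One concrete, privacy-preserving control for the password-reuse risk above is checking candidate passwords against the Pwned Passwords k-anonymity API, where only the first five characters of the SHA-1 hash ever leave the machine; this is a sketch of that check, not a complete policy.

```python
# Sketch: check a candidate password against the Pwned Passwords
# k-anonymity API; only the first 5 SHA-1 hex chars leave the machine.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-reuse-check"},  # courtesy UA
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode()
    # Response lines look like "<HASH-SUFFIX>:<COUNT>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# e.g. times_pwned("password123") returns a very large count
```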
Tariffs will hit data centers and cloud providers, but hurt customers

The tariffs applied vary country to country, with a baseline of 10 percent
placed on all imported goods coming into the US and much higher rates
applied to those countries described by Trump as “the worst offenders,” up to
99 percent in the case of the French archipelago Saint Pierre and Miquelon.
However, most pertinent to the cloud computing industry are the tariffs that
will hit countries that provide essential computing hardware, and materials
necessary for data center construction. ... While cloud service providers
(CSPs) will certainly be hit by the inevitable rising costs, it is hard to
really think of the hyperscalers as the "victims" in this story. Microsoft,
Amazon, and Alphabet all lie in the top five companies by market cap, and none
have taken particularly drastic hits to their stock value since the news of
the tariffs was announced. ... "The high tariffs on servers and other IT
equipment imported from China and Taiwan are highly likely to increase CSPs’
costs. If CSPs pass on cost increases, customers may feel trapped (because of
lock-in) and disillusioned with cloud and their provider (because they've
committed to building on a cloud provider assuming costs would be constant or
even decline over time). On the other hand, if CSPs don't increase prices with
rising costs, their margins will decline. It's a no-win situation," Rogers
explained.