Quote for the day:
"The master has failed more times than
the beginner has even tried." -- Stephen McCranie

The survey results reveal fundamental breakdowns in communication, trust, and
operational alignment that threaten both current operations and future digital
transformation initiatives. ... The survey's most alarming finding centers on
ghost assets. These are IT resources that continue consuming budget and creating
risk while providing zero business value. The phantom resources manifest across
the entire technology stack, from forgotten cloud instances to untracked SaaS
subscriptions. ... The tool sprawl paradox is striking. Sixty-five percent of IT
managers use six or more ITAM tools yet express confidence in their setup.
Non-IT roles use fewer tools but report significantly lower integration
confidence. This suggests IT teams have adapted to complexity through process
workarounds rather than achieving true operational efficiency. ... "Over the
next two to three years, I see this confidence gap continuing to widen," Collins
said. "This is primarily fueled by the rapid acceleration of hybrid work models,
mass migration to the cloud, and the burgeoning adoption of artificial
intelligence, creating a perfect storm of complexity for IT asset management
teams." Collins noted that the distributed workforce has shattered the
traditional, centralized view of IT assets. Cloud migration introduces shadow
IT, ghost assets, and uncontrolled sprawl that bypass traditional procurement
channels.
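
A minimal sketch of what hunting for ghost assets can look like in practice, assuming an AWS environment with boto3 credentials already configured; the running-instance filter, 5% CPU threshold, and 14-day window are illustrative assumptions, not recommendations:

```python
# Sketch: flag potentially idle EC2 instances as ghost-asset candidates.
# Assumes AWS credentials are configured and boto3 is installed; the 5% CPU
# threshold and 14-day lookback are illustrative, not recommendations.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

LOOKBACK = timedelta(days=14)
CPU_THRESHOLD = 5.0  # average CPU % below which an instance looks idle

def idle_instance_candidates():
    """Yield (instance_id, avg_cpu) for running instances with very low CPU."""
    now = datetime.now(timezone.utc)
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            datapoints = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=now - LOOKBACK,
                EndTime=now,
                Period=86400,          # one datapoint per day
                Statistics=["Average"],
            )["Datapoints"]
            if not datapoints:
                continue
            avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
            if avg_cpu < CPU_THRESHOLD:
                yield instance_id, avg_cpu

if __name__ == "__main__":
    for instance_id, avg_cpu in idle_instance_candidates():
        print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over the last 14 days")
```

A list like this is only a starting point: an instance with low CPU may still be doing real work, so the output feeds an ownership review rather than an automatic shutdown.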

The biggest bottlenecks in the software lifecycle have nothing to do with code.
They’re people problems: communication, persuasion, decision-making. So in order
to make an impact, architects have to consistently make those things happen,
sprint after sprint, quarter after quarter. How do you reliably get the right
people in the right place, at the right time, talking about the right things? Is
there a transfer protocol or infrastructure-as-code tool that works on human
beings? ... A lot of programmers don’t feel confident in their writing skills,
though. It’s hard to switch from something you’re experienced at, where quality
speaks for itself (programming) to something you’re unfamiliar with, where
quality depends on the reader’s judgment (writing). So what follows is a crash
course: just enough information to help you confidently write good (even great)
documents, no matter who you are. You don’t have to have an English degree, or
know how to spell “idempotent,” or even write in your native language. You just
have to learn a few techniques. ... The main thing you want to avoid is a giant
wall of text. Often the people whose attention your document needs most are the
people with the most demands on their time. If you send them a four-page essay,
there’s a good chance they’ll never have the time to get through it.

Consulting firm McKinsey's Technology Trends Outlook 2025 paints a vivid
picture: The CIO is no longer a technologist but one who writes a narrative
where technology and strategy merge. Four forces together - artificial
intelligence at scale, agentic AI, cloud-edge synergy and digital trust - give CIOs an opening to navigate the technology forces of the future and turn disruption into opportunity. ... As the attack surface continues to expand due
to advances in AI, connected devices and cloud tech - and because the regulatory
environment is still in constant flux - achieving enterprise-level cyber
resilience is critical. ... McKinsey's data indicates - and it's no
revelation - a global shortage of AI, cloud and security experts. But leading
companies are overcoming this bottleneck by upskilling their workers. AI
copilots train employees, while digital agents handle repetitive tasks. The
boundary between human and machine is blurring, and the CIO is the alchemist,
creating hybrid teams that drive transformation. If there's a single plot twist
for 2025, it's this: Technology innovation is assessed not by experimentation
but by execution. Tech leaders have shifted from chasing shiny objects to
demanding business outcomes, from adopting new platforms to aligning every
digital investment with growth, efficiency and risk reduction.
Since Europe is currently unclear about its priorities for AI development,
US-based Big Tech companies can use their economic and discursive power to push
their own ambitions onto Europe. Through publications directly aimed at EU
policy-makers, companies promote their services as if they are perfectly aligned
with European values. By promising the EU can have it all — bigger, faster,
greener and better AI — tech companies exploit this flexible discursive space to
spuriously position themselves as “supporters” of the EU’s AI narrative. Two
examples may illustrate this: OpenAI and Google. ... Big Tech’s promises to
develop AI infrastructure faster while optimizing sustainability, enhancing
democracy, and increasing competitiveness seem too good to be true — which in
fact they are. Not surprisingly, their claims are remarkably low on details and
far removed from the reality of these companies’ immense carbon emissions.
Bigger and faster AI is simply incompatible with greener and better AI. And yet,
one of the main reasons why Big Tech companies’ claims sound agreeable is that
the EU’s AI Continent Action Plan fails to define clear conditions and set
priorities in how to achieve better and greener AI. So what kind of changes does
the EU AI-CAP need? First, it needs to set clear goalposts on what constitutes a
democratic and responsible use of AI, even if this happens at the expense of
economic competitiveness.

The truth is that the role of the programmer, in line with just about every
other professional role, will change. Routine, low-level tasks such as
customizing boilerplate code and checking for coding errors will increasingly be
done by machines. But that doesn’t mean basic coding skills won’t still be
important. Even if humans are using AI to create code, it’s critical that we can
understand it and step in when it makes mistakes or does something dangerous.
Humans with coding skills will therefore still be needed to meet the “human-in-the-loop” requirement that is essential for safe and ethical AI, even if their coding is restricted to very basic tasks. This means entry-level coding jobs won't vanish, but will instead transition into roles where
the ability to automate routine work and augment our skills with AI becomes the
bigger factor in the success or failure of a newbie programmer. Alongside this,
entirely new development roles will also emerge, including AI project
management, specialists in connecting AI and legacy infrastructure, prompt
engineers and model trainers. We’re also seeing the emergence of entirely new
methods of developing software, using generative AI prompts alone. Recently,
this has been named "vibe coding" because of the perceived lack of stress and technical complexity compared with traditional coding.
FinOps as Code (FaC) is the practice of applying software engineering
principles, particularly those from Infrastructure as Code (IaC), to cloud financial management. It treats financial operations, such as cost management and resource allocation, as code-driven processes that can be automated, version-controlled, and collaborated on across teams in an organization.
FinOps as Code blends financial operations with cloud native practices to
optimize and manage cloud spending programmatically using code. It enables
FinOps principles and guidelines to be coded directly into the CI/CD pipelines.
... When you bring FinOps into your organization, you know where and how you spend your money. FinOps drives a cultural transformation in which each team member is aware of how their cloud usage affects the resulting costs. Cloud spend is no longer merely an IT issue, so every team should be able to manage it properly. ... FinOps as
Code (FaC) is an emerging trend enabling the infusion of FinOps principles in
the software development lifecycle using Infrastructure as Code (IaC) and
automation. It helps embed cost awareness directly into the development process,
encouraging collaboration between engineering and finance teams, and improving
cloud resource utilization. It also empowers your teams to take ownership of their cloud usage across the organization.
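
As a rough sketch of what coding FinOps guardrails into a CI/CD pipeline can look like, the step below fails a build when a cost estimate exceeds an agreed budget. The cost_estimate.json file, its totalMonthlyCost field, and the $5,000 figure are assumptions for illustration (tools such as Infracost emit reports of roughly this shape):

```python
# Sketch: a CI gate that enforces a cloud budget as code.
# Assumes a prior pipeline step produced cost_estimate.json containing a
# "totalMonthlyCost" field; the file name, field name, and budget figure
# are illustrative assumptions, not a prescribed format.
import json
import sys

MONTHLY_BUDGET_USD = 5000.0  # hypothetical budget agreed with finance

def main(report_path: str = "cost_estimate.json") -> int:
    with open(report_path) as f:
        report = json.load(f)

    projected = float(report["totalMonthlyCost"])
    print(f"Projected monthly cost: ${projected:,.2f} "
          f"(budget ${MONTHLY_BUDGET_USD:,.2f})")

    if projected > MONTHLY_BUDGET_USD:
        print("Budget exceeded: failing the pipeline so the change is reviewed.")
        return 1  # non-zero exit code fails the CI job
    return 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```

Because the check runs on every merge request, cost feedback reaches engineers at the same point as test failures, which is the collaboration between engineering and finance that FaC is meant to encourage.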

Eliminating multitasking is too much to shoot for, because there are,
inevitably, more bits and pieces of work than there are staff to work on them.
Also, the political pressure to squeeze something in usually overrules the logic
of multitasking less. So instead of trying to stamp it out, attack the problem
at the demand side instead of the supply side by enforcing a “Nothing-Is-Free”
rule. ... Encourage a “culture of process” throughout your organization. Yes,
this is just the headline, and there’s a whole lot of thought and work
associated with making it real. Not everything can be reduced to an e-zine
article. Sorry. ... If you hold people accountable when something goes wrong,
they’ll do their best to conceal the problem from you. And the longer nobody
deals with a problem, the worse it gets. ... Whenever something goes wrong,
first fix the immediate problem — aka “stop the bleeding.” Then, figure out
which systems and processes failed to prevent the problem and fix them so the
organization is better prepared next time. And if it turns out the problem
really was that someone messed up, figure out if they need better training and
coaching, if they just got unlucky, if they took a calculated risk, or if they
really are a problem employee you need to punish — what “holding people
accountable” means in practice.
Reducing investments in QA might provide immediate financial relief, but it
introduces longer-term risks. Releasing software with undetected bugs and
security vulnerabilities can quickly erode customer trust and substantially
increase remediation costs. History demonstrates that neglected QA efforts
during financial downturns inevitably lead to higher expenses and diminished
brand reputations due to subpar software releases. ... Automation plays an
essential role in filling gaps caused by skills shortages. Organizations
worldwide face a substantial IT skills shortage that will cost them $5.5
trillion by 2026, according to an IDC survey of North American IT leaders. ...
The complexity of the modern software ecosystem magnifies the impact of economic
disruptions. Delays or budget constraints at one vendor can spill over, causing complications across entire project pipelines. These
interconnected dependencies magnify the importance of better operational
visibility. Visibility into testing and software quality processes helps teams
anticipate these ripple effects. ... Effective resilience strategies focus
less on budget increases and more on strategic investment in capabilities that
deliver tangible efficiency and reliability benefits. Technologies that support
centralized testing, automation, and integrated quality management become
critical investments rather than optional expenditures.

“DC power has been around in some data centers for about 20 years,” explains
Peter Panfil, vice president of global power at Vertiv. “400V and 800V have been
utilized in UPS for ages, but what is beginning to emerge to cope with the
dynamic load shifts in AI are [new] applications of DC.” ... Several technical
hurdles must be overcome before DC achieves broad adoption in the data center.
The most obvious challenge is component redesign. Nearly every component – from
transformers to breakers – must be re-engineered for DC operation. That places a
major burden on transformer, PDU, substation, UPS, converter, regulator, and
other electrical equipment suppliers. High-voltage DC also raises safety
challenges. Arc suppression and fault isolation are more complex. Internal
models are being devised to address this problem with solid-state circuit
breakers and hybrid protection schemes. In addition, there is no universal
standard for DC distribution in data centers, which complicates interoperability
and certification. ... On the sustainability front, DC has a clear edge. DC
power results in lower conversion losses, which equate to less wasted energy.
Further, DC is more compatible with solar PV and battery storage, reducing
long-term Opex and carbon costs.
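
A back-of-the-envelope illustration of why fewer conversion stages matter: end-to-end efficiency is the product of every stage's efficiency, so removing stages directly reduces wasted energy. The per-stage figures below are assumed for illustration, not vendor data:

```python
# Illustrative arithmetic only: assumed per-stage efficiencies, not vendor data.
ac_chain = {           # a typical double-conversion AC path (assumed figures)
    "UPS rectifier (AC->DC)": 0.97,
    "UPS inverter (DC->AC)": 0.97,
    "Server PSU (AC->DC)": 0.94,
}
dc_chain = {           # a simplified DC distribution path (assumed figures)
    "Rectifier (AC->DC)": 0.97,
    "DC-DC converter": 0.97,
}

def chain_efficiency(stages: dict[str, float]) -> float:
    """Multiply the stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

ac_eff = chain_efficiency(ac_chain)
dc_eff = chain_efficiency(dc_chain)
print(f"AC chain: {ac_eff:.1%} delivered, {1 - ac_eff:.1%} lost")
print(f"DC chain: {dc_eff:.1%} delivered, {1 - dc_eff:.1%} lost")
# With these assumed numbers the DC path wastes roughly half as much energy,
# which is the intuition behind "lower conversion losses".
```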

In the Blue Report 2025, Picus Labs found that password cracking attempts
succeeded in 46% of tested environments, nearly doubling the success rate from
last year. This sharp increase highlights a fundamental weakness in how
organizations are managing – or mismanaging – their password policies. Weak
passwords and outdated hashing algorithms continue to leave critical systems
vulnerable to attackers using brute-force or rainbow table attacks to crack
passwords and gain unauthorized access. Given that password cracking is one of
the oldest and most reliably effective attack methods, this finding points to a
serious issue: in their race to combat the latest, most sophisticated new breed
of threats, many organizations are failing both to enforce strong basic password hygiene policies and to adopt and integrate modern authentication practices into their defenses. ... The threat of credential abuse is both
pervasive and dangerous, yet as the Blue Report 2025 highlights, organizations
are still underprepared for this form of attack. And once attackers obtain valid
credentials, they can easily move laterally, escalate privileges, and compromise
critical systems. Infostealers and ransomware groups frequently rely on stolen
credentials to spread across networks, burrowing deeper and deeper, often
without triggering detection.
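
A small sketch of the gap between outdated and modern password hashing, using only the Python standard library; the scrypt cost parameters are illustrative, not a tuning recommendation:

```python
# Sketch: why "outdated hashing algorithms" invite rainbow-table and
# brute-force attacks, and what the modern alternative looks like.
import hashlib
import os
import secrets

password = b"correct horse battery staple"

# Legacy approach: a fast, unsalted hash. Identical passwords always produce
# the same digest, so precomputed rainbow tables apply, and commodity GPUs can
# try enormous numbers of guesses per second.
legacy_digest = hashlib.sha256(password).hexdigest()

# Modern approach: a per-user random salt plus a deliberately slow,
# memory-hard KDF (scrypt). The salt defeats rainbow tables; the cost factors
# slow brute force by orders of magnitude. Parameters here are illustrative.
salt = os.urandom(16)
modern_digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

def verify(candidate: bytes, salt: bytes, stored: bytes) -> bool:
    """Constant-time comparison of a login attempt against the stored hash."""
    attempt = hashlib.scrypt(candidate, salt=salt, n=2**14, r=8, p=1)
    return secrets.compare_digest(attempt, stored)

print("legacy sha256:", legacy_digest[:16], "...")
print("scrypt verify:", verify(password, salt, modern_digest))
```

Salted, memory-hard hashing is the kind of basic hygiene the report finds missing: it does not stop credential theft by itself, but it makes cracked password dumps far less useful to attackers.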