Quote for the day:
"Good content isn't about good storytelling. It's about telling a true story well." -- Ann Handley

A striking new supply chain attack is sending shockwaves through the developer
community: a worm-style campaign dubbed “Shai-Hulud” has compromised at least
187 npm packages, including the tinycolor package, which is downloaded two
million times a week, and is spreading to other maintainers' packages. The
malicious payload modifies package manifests, injects malicious files,
repackages, and republishes — thereby infecting downstream projects. This
incident underscores a harsh reality: even code released weeks, months, or even
years ago can become dangerous once a dependency in its chain has been
compromised. ... Sign your code: All packages/releases should use cryptographic
signing. This allows users to verify the origin and integrity of what they are
installing. Verify signatures before use: When pulling in dependencies, whether
in CI/CD pipelines or local dev setups, include a step to check that the
signature matches a trusted publisher and that the code wasn’t tampered with.
SBOMs are your map of exposure: If you have a Software Bill of Materials for
your project(s), you can query it for compromised packages (see the sketch
below). Find which versions/packages have been modified — even retroactively —
so you can patch, remove, or isolate them.
Continuous monitoring of risk posture: It's not enough to secure your software
when you ship. You need alerts when any dependency or component’s risk changes:
new vulnerabilities, suspicious behavior, misuse of credentials, or signs that
a trusted package may have been modified after release.
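
As one way to act on the SBOM advice above, here is a minimal sketch that scans
a CycloneDX-format SBOM for known-bad package versions. The advisory entries
and file name are placeholders, not an authoritative list; a real pipeline
would pull affected versions from a vendor or registry advisory feed.

    import json

    # Placeholder advisory data: (package name, affected version).
    # Illustrative only; consult an actual advisory feed for real entries.
    COMPROMISED = {("@ctrl/tinycolor", "4.1.1"), ("example-pkg", "2.0.3")}

    with open("sbom.cyclonedx.json") as f:  # assumed file name
        sbom = json.load(f)

    # CycloneDX lists dependencies under the "components" key
    for c in sbom.get("components", []):
        if (c.get("name"), c.get("version")) in COMPROMISED:
            print(f"exposed: {c['name']}@{c['version']}")
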
Cloud sovereignty isn’t just a buzzword anymore, argues Kushwaha. “It’s a real
concern for businesses across the world. The pattern is clear. The cloud isn’t a
one-size-fits-all solution anymore. Companies are starting to realise that
sometimes control, cost, and compliance matter more than convenience.” ... Cloud
sovereignty is increasingly critical due to the evolving geopolitical scenario,
government and industry-specific regulations, and vendor lock-in driven by heavy
reliance on hyperscalers. The concept has gained momentum and will continue to
do so because technology has become pervasive and critical to running a state or
country, and any misuse by foreign actors can cause major repercussions, the way
Bavishi sees it. Prof. Bhatt observes that true digital sovereignty remains a
distant dream, and that achieving it will require building a robust ecosystem
over decades. This isn’t counterintuitive; it’s evolution, as Kushwaha puts it.
“The cloud’s original promise was one of freedom. Today, when it comes to the
cloud, freedom means more control. Businesses investing heavily in digital
futures can’t afford to ignore the fine print in hyperscaler contracts or the
reach of foreign laws. Sovereignty is the foundation for building safely in a
fragmented world.” ... Organisations have recognised the risks of digital
dependencies and are looking for better options. There is no turning back,
Karlitschek underlines.

As organizations begin to integrate AI into defensive workflows, identity
security becomes the foundation for trust. Every model, script, or autonomous
agent operating in a production environment now represents a new identity — one
capable of accessing data, issuing commands, and influencing defensive outcomes.
If those identities aren't properly governed, the tools meant to strengthen
security can quietly become sources of risk. The emergence of agentic AI systems
makes this especially important. These systems don't just analyze; they may act
without human intervention. They triage alerts, enrich context, or trigger
response playbooks under delegated authority from human operators. ... AI
systems can assist human practitioners like an intern who never sleeps. However,
it is critical for security teams to differentiate what to automate from what to
augment. Some tasks benefit from full automation, especially those that are
repeatable, measurable, and low-risk if an error occurs. ...
Threat enrichment, log parsing, and alert deduplication are prime candidates for
automation. These are data-heavy, pattern-driven processes where consistency
outperforms creativity. By contrast, incident scoping, attribution, and response
decisions rely on context that AI cannot fully grasp. Here, AI should assist by
surfacing indicators, suggesting next steps, or summarizing findings while
practitioners retain decision authority. Finding that balance requires maturity
in process design.
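
To make the automation side concrete, here is a minimal sketch of alert
deduplication by fingerprinting. The key fields are assumptions; a real
pipeline would use whichever fields define “the same alert” in its schema.

    import hashlib
    import json

    def fingerprint(alert: dict) -> str:
        # Normalize the fields that identify a duplicate (assumed field names)
        key = {k: alert.get(k) for k in ("rule_id", "src_ip", "dest_ip", "user")}
        return hashlib.sha256(json.dumps(key, sort_keys=True).encode()).hexdigest()

    def dedupe(alerts):
        seen = set()
        for alert in alerts:
            fp = fingerprint(alert)
            if fp not in seen:      # first occurrence wins; repeats are dropped
                seen.add(fp)
                yield alert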

When EtherHiding emerged in September 2023 as part of the CLEARFAKE campaign,
it introduced a chilling reality: attackers no longer need vulnerable servers
or hackable domains. They’ve found something far better—a global,
decentralized infrastructure that literally cannot be shut down. ... When
victims visit the infected page, the loader queries a smart contract on
Ethereum or BNB Smart Chain using a read-only function call. ... Forget
everything you know about disrupting cybercrime infrastructure. There is no
command-and-control server to raid. No hosting provider to subpoena. No DNS to
poison. The malicious code exists simultaneously everywhere and nowhere,
distributed across thousands of blockchain nodes worldwide. As long as
Ethereum or BNB Smart Chain operates—and they’re not going anywhere—the
malware persists. Traditional law enforcement tactics, honed over decades of
fighting cybercrime, suddenly encounter an immovable object. You cannot arrest
a blockchain. You cannot seize a smart contract. You cannot compel a
decentralized network to comply. ... The read-only nature of payload retrieval
is perhaps the most insidious feature. When the loader queries the smart
contract, it uses functions that don’t create transactions or blockchain
records.
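
To illustrate what read-only retrieval means here, the sketch below issues an
eth_call against a contract view function using web3.py. The node evaluates the
call locally: no transaction is created, no gas is spent, and nothing is
written to the chain. The RPC endpoint, contract address, and ABI are
placeholders for illustration.

    from web3 import Web3

    # Public RPC endpoint (placeholder; any full node would serve the same data)
    w3 = Web3(Web3.HTTPProvider("https://bsc-dataseed.binance.org"))

    # Hypothetical ABI: one view function returning a string payload
    abi = [{"name": "get", "type": "function", "stateMutability": "view",
            "inputs": [], "outputs": [{"name": "", "type": "string"}]}]

    # Zero address stands in for the attacker-controlled contract
    contract = w3.eth.contract(
        address="0x0000000000000000000000000000000000000000", abi=abi)

    # eth_call under the hood: evaluated read-only, leaves no on-chain record
    payload = contract.functions.get().call()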

Researchers at Mila have proposed a new technique that makes large language
models (LLMs) vastly more efficient when performing complex reasoning. Called
Markovian Thinking, the approach allows LLMs to engage in lengthy reasoning
without incurring the prohibitive computational costs that currently limit such
tasks. The team’s implementation, an environment named Delethink, structures the
reasoning chain into fixed-size chunks, breaking the scaling problem that
plagues very long LLM responses. Initial estimates show that for a
1.5B-parameter model, this method can cut training costs by more than two-thirds
compared to standard approaches. ... The researchers compared this to models
trained with the standard LongCoT-RL method. Their findings indicate that the
model trained with Delethink could reason up to 24,000 tokens, and matched or
surpassed a LongCoT model trained with the same 24,000-token budget on math
benchmarks. On other tasks like coding and PhD-level questions, Delethink also
matched or slightly beat its LongCoT counterpart. “Overall, these results
indicate that Delethink uses its thinking tokens as effectively as LongCoT-RL
with reduced compute,” the researchers write. The benefits become even more
pronounced when scaling beyond the training budget.
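
The core mechanic is easy to sketch: generate in fixed-size chunks and, at each
boundary, reset the context to the query plus a short carryover, so per-chunk
compute stays constant instead of growing with the full trace. The llm()
callable and the carryover policy below are hypothetical stand-ins; Delethink
additionally trains the model with RL to use that carryover well, which a plain
loop does not capture.

    CHUNK_TOKENS = 8000   # fixed generation budget per chunk
    CARRY_CHARS = 2000    # short tail kept between chunks (assumed policy)

    def markovian_think(query: str, llm, max_chunks: int = 8) -> str:
        carry = ""
        for _ in range(max_chunks):
            # Context is only the query plus the carryover, never the full trace
            prompt = f"{query}\n\nNotes so far:\n{carry}\n\nContinue reasoning:"
            chunk = llm(prompt, max_tokens=CHUNK_TOKENS)  # hypothetical LLM call
            if "FINAL ANSWER:" in chunk:
                return chunk.split("FINAL ANSWER:", 1)[1].strip()
            carry = chunk[-CARRY_CHARS:]  # Markovian state: a bounded tail
        return carry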

While their purpose-built design gives them an advantage for AI workloads,
neoclouds also bring complexities and trade-offs. Enterprises need to
understand where these platforms excel and plan how to integrate them most
effectively into broader cloud strategies. Let’s explore why this buzzword
demands your attention and how to stay ahead in this new era of cloud
computing. ... Neoclouds, unburdened by the need to support everything, are
outpacing hyperscalers in areas like agility, pricing, and speed of deployment
for AI workloads. A shortage of GPUs and data center capacity also benefits
neocloud providers, which are smaller and nimbler, allowing them to scale
quickly and meet growing demand more effectively. This agility has made them
increasingly attractive to AI researchers, startups, and enterprises
transitioning to AI-powered technologies. ... Neoclouds are transforming cloud
computing by offering purpose-built, cost-effective infrastructure for AI
workloads. Their price advantages will challenge traditional cloud providers’
market share, reshape the industry, and change enterprise perceptions, fueled
by their expected rapid growth. As enterprises find themselves at the
crossroads of innovation and infrastructure, they must carefully assess how
neoclouds can fit into their broader architectural strategies.

Unlike previous generations of Wi-Fi that competed on peak throughput numbers,
Wi-Fi 8 prioritizes consistent performance under challenging conditions. The
specification introduces coordinated multi-access point features, dynamic
spectrum management, and hardware-accelerated telemetry designed for AI
workloads at the network edge. ... A core part of the Wi-Fi 8 architecture is an
approach known as Ultra High Reliability (UHR). This architectural philosophy
targets the 99th percentile user experience rather than best-case scenarios. The
innovation addresses AI application requirements that demand symmetric
bandwidth, consistent sub-5-millisecond latency and reliable uplink performance.
... Wi-Fi 8 introduces Extended Long Range (ELR) mode specifically for IoT
devices. This feature uses lower data rates with more robust coding to extend
coverage. The tradeoff accepts reduced throughput for dramatically improved
range. ELR operates by increasing symbol duration and using lower-order
modulation. This improves the link budget for battery-powered sensors, smart
home devices and outdoor IoT deployments. ... Wi-Fi 8 enhances roaming to
maintain sub-millisecond handoff latency. The specification includes improved
Fast Initial Link Setup (FILS) and introduces coordinated roaming decisions
across the infrastructure. Access points share client context information before
handoff.
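
As a rough illustration of why slower, more robust coding extends range, the
sketch below computes receiver sensitivity from the standard thermal-noise
relation: a -174 dBm/Hz noise floor at room temperature plus bandwidth, noise
figure, and the SNR the modulation requires. The bandwidth, noise figure, and
SNR numbers are assumed illustrative values, not Wi-Fi 8 spec figures.

    import math

    def sensitivity_dbm(bw_hz: float, noise_figure_db: float, snr_req_db: float) -> float:
        # Thermal noise floor is about -174 dBm/Hz at room temperature (~290 K)
        return -174 + 10 * math.log10(bw_hz) + noise_figure_db + snr_req_db

    # Assumed values: wide high-order-modulation link vs. narrow robust ELR-style link
    fast = sensitivity_dbm(80e6, 7, 35)   # ~ -53 dBm required at the receiver
    elr = sensitivity_dbm(20e6, 7, 0)     # ~ -94 dBm required at the receiver
    print(f"extra path loss the robust link absorbs: {fast - elr:.0f} dB")  # ~41 dB
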
Today, we lack the tools (protocols) and the regulations to enable digital
estate management at scale. Law and regulation can force a change in behavior by
large providers. However, without effective protocols for identifying the
individuals a decedent has chosen to manage their digital estate, every service
will have to design its own path. This creates an exceptional burden on
individuals planning their digital estate, and on individuals who manage the
digital estates of the deceased. ... When we set out
to write this paper, we wanted to influence the large technology and social
media platforms, politicians, regulators, estate planners, and others who can
help change the status quo. Further, we hoped to influence standards development
organizations, such as the OpenID Foundation and the Internet Engineering Task
Force (IETF), and their members. As standards developers in the realm of
identity, we have an obligation to the people we serve to consider identity from
birth to death and beyond, to ensure every human receives the respect they
deserve in life and in death. Additionally, we wrote the planning guide to help
individuals plan for their own digital estate. By giving people the tools to
help describe, document, and manage their digital estates proactively, we can
raise more awareness and provide tools to help protect individuals at one of the
most vulnerable moments of their lives.

Serving on a board isn’t an extension of an operational role. One issue CIOs
face is not understanding the difference between executive management and
governance, Stadolnik says. “They’re there to advise, not audit or lead the
current company’s CIO,” he adds. In the boardroom, the mandate is to provide
strategy, governance, and oversight, not execution. That shift, Stadolnik says,
can be jarring for tech leaders who’ve spent their careers driving operational
results. ... “There were some broad risk areas where having strong technical
leadership was valuable, but it was hard for boards to carve out a full seat
just for that, which is why having CIO-plus roles was very beneficial,” says
Cullivan. The issue of access is another uphill battle for CIOs. As Payne found,
the network effect can play a huge role in seeking a board role. But not every
IT leader has the right kind of network that can open the door to these
opportunities. ... Boards expect directors to bring scope across business
disciplines and issues, not just depth in one functional area. Stadolnik
encourages CIOs to apply their strategic orientation, results focus, and
collaboration and influence skills to set themselves up for additional
responsibilities such as procurement, supply chain, and shared services.
“It’s those executive leadership capabilities that will unlock broader roles,”
he says. Experience in those broader roles bolsters a CIO’s board résumé and
credibility.

A good sniff test: can we describe the service’s job in one short sentence, and
does a single team wake up if it misbehaves? If not, we’ve drawn mural art, not
an interface. Start with a small handful of services you can name
plainly—orders, payments, catalog—then pressure-test them with real flows. When
a request spans three services just to answer a simple question, that’s a hint
we’ve sliced too thin or coupled too often. ... Microservices live and die by
their contracts. We like contracts that are explicit, versioned, and
backwards-friendly. “Backwards-friendly” means old clients keep working for a
while when we add fields or new behaviors. For HTTP APIs, OpenAPI plus
consistent error formats makes a huge difference. ... We need timeouts and
retries that fit our service behavior, or we’ll turn small hiccups into big
outages. For east-west traffic, a service mesh or smart gateway helps us nudge
traffic safely and set per-route policies. We’re fans of explicit settings
instead of magical defaults. ... Each service owns its tables; cross-service
read needs go through APIs or asynchronous replication. When a write spans
multiple services, aim for a sequence of local commits with compensating actions
instead of distributed locks. Yes, we’re describing sagas without the capes: do
the smallest thing, record it durably, then trigger the next hop.
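
That saga shape is small enough to sketch. The Step functions below are
hypothetical; each do() is a local commit in one service, and on failure the
completed steps are compensated in reverse order. A real implementation would
also persist saga progress so a crashed coordinator can resume or compensate on
restart.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Step:
        do: Callable[[], None]    # local commit in one service
        undo: Callable[[], None]  # compensating action for that commit

    def run_saga(steps: List[Step]) -> None:
        done: List[Step] = []
        try:
            for step in steps:
                step.do()          # do the smallest thing, record it durably
                done.append(step)  # remember it so we can compensate later
        except Exception:
            for step in reversed(done):
                step.undo()        # unwind completed work in reverse order
            raise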