Quote for the day:
"The whole secret of a successful life
is to find out what is one's destiny to do, and then do it." --
Henry Ford

For CISOs, finding that balance between security and speed is critical in the
age of AI. This technology simultaneously represents the greatest opportunity
and greatest risk enterprises have faced since the dawn of the internet. Move
too fast without guardrails, and sensitive data leaks into prompts, shadow AI
proliferates, or regulatory gaps become liabilities. Move too slowly, and
competitors pull ahead with transformative efficiencies that are difficult to
match. Either path carries ramifications that can cost CISOs their jobs. At the
same time, they cannot lead a "department of no" where AI adoption initiatives
are stymied by the organization's security function. It is crucial
to instead find a path to yes, mapping governance to organizational risk
tolerance and business priorities so that the security function serves as a true
revenue enabler. ... Even with strong policies and roadmaps in place, employees
will continue to use AI in ways that aren't formally approved. The goal for
security leaders shouldn't be to ban AI, but to make responsible use the easiest
and most attractive option. That means equipping employees with enterprise-grade
AI tools, whether purchased or homegrown, so they do not need to reach for
insecure alternatives. In addition, it means highlighting and reinforcing
positive behaviors so that employees see value in following the guardrails
rather than bypassing them.

Certifications help ensure developers understand AI governance, security, and
responsible use, Hinchcliffe says. Certifications from vendors such as
Microsoft and Google, along with OpenAI partner programs, are driving uptake,
he says. “Strategic CIOs see certifications less as long-term guarantees of
expertise and more as a short-term control and competency mechanism during
rapid change,” he says. ... While certifications aren’t the sole deciding
factor in landing a job, they often help candidates stand out in competitive
roles where AI literacy is becoming a crucial factor, Taplin says. “This is
especially true for new software engineers, who can gain a leg up by focusing
on certifications early to enhance their career prospects,” he says. ... “The
real demand is for AI skills, and certifications are simply one way to build
those skills in a structured manner,” says Kyle Elliott, technology career
coach and hiring expert. “Hiring managers are not necessarily looking for
candidates with AI certifications,” Elliott says. “However, an AI
certification, especially if completed in the last year or currently in
progress, can signal to a hiring manager that you are well-versed in the
latest AI trends. In other words, it’s a quick way to show that you speak the
language of AI.” Software developers should not expect AI certifications to be
a “silver bullet for landing a job or earning a promotion,” Elliott
says.

Beyond recovery and nutrition, data analytics plays a pivotal role in shaping
race-day decisions. The team combines structured data like power outputs,
route elevation, and weather forecasts with unstructured data gathered from
online posts by cycling enthusiasts. These data streams are fed into
predictive models that anticipate race dynamics and help fine-tune equipment
selection, down to tire pressure and aerodynamic adjustments. Metrics like
Training Stress Score (TSS) and Heart Rate Variability (HRV) help monitor each
rider’s fatigue and readiness, ensuring that training plans are both
challenging and sustainable. “We analyze how environmental conditions affect
each rider’s output and recovery,” Ryder says. ... The team’s data-driven
strategy even extends to post-race analysis. At their hub, they evaluate power
output, rider positioning, and performance variances. ... Looking ahead, Ryder
sees artificial intelligence playing a greater role. The team is exploring
machine learning models that predict tactical behavior from opponents and
identify when riders are close to burnout. Through conversational analytics in
Qlik, they envision proactive alerts such as, “This rider may not be fit to
race tomorrow,” based on cumulative stress and recovery data. The team’s ethos
is clear. Success doesn’t only come from racing harder. It comes from racing
smarter.
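The Training Stress Score mentioned above has a standard published definition
(the TrainingPeaks/Coggan formula); a minimal sketch, where the rider numbers
are purely illustrative:

```python
def training_stress_score(duration_s: float, normalized_power: float,
                          ftp: float) -> float:
    """TSS = (duration_s * NP * IF) / (FTP * 3600) * 100, with IF = NP / FTP.

    NP is normalized power for the ride, FTP the rider's functional
    threshold power; one hour ridden exactly at FTP is defined as 100 TSS.
    """
    intensity_factor = normalized_power / ftp
    return (duration_s * normalized_power * intensity_factor) / (ftp * 3600) * 100

# Sanity check: one hour at FTP.
assert round(training_stress_score(3600, 250, 250)) == 100

# A hard 4-hour stage: NP 230 W for a rider with FTP 280 W (illustrative).
print(round(training_stress_score(4 * 3600, 230, 280), 1))
```

Coaches typically treat a single-day score in the high 200s as a heavy load
that demands extra recovery, which is exactly the kind of signal fed into the
readiness monitoring described above.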

Given the systemic limitations on reliable power sources, practical solutions
are needed. We must address power sustainability, upstream power infrastructure,
new data center equipment and trained labor to deliver it all. By being
proactive, we can “bend” the energy growth curve by decoupling data center
growth from AI computing’s energy consumption. ... Before the AI boom, large
data centers could tolerate long utility lead times; however, the
immediate and skyrocketing demand for data centers to power AI applications
calls for creative solutions. Data center developers and designers planning to
build in energy-constrained regions need to consider deploying alternative prime
power sources and/or energy storage systems to launch new data centers. This
includes natural gas turbines, HVO-fueled generators, wind, solar, fuel cells,
battery energy storage systems (BESS), and to a limited degree, small modular
reactors. ... Utility companies and grid operators have intimate knowledge of
the grid and of the local regulatory, governmental, and political landscape,
making them
critical partners in the site selection, design, permitting, and construction of
new data centers. Utilities provide critical insights on power capacity, costs,
carbon intensity, power quality, grid stability and load management to ensure
sustainable and reliable operations.
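To make the storage option concrete, a back-of-the-envelope sizing sketch; the
facility size, bridge duration, and round-trip efficiency below are illustrative
assumptions, not figures from the article:

```python
def bess_capacity_mwh(load_mw: float, bridge_hours: float,
                      round_trip_eff: float = 0.85) -> float:
    """Energy a battery energy storage system (BESS) must hold to carry a
    data center through a grid gap, grossed up for round-trip losses."""
    return load_mw * bridge_hours / round_trip_eff

# A 50 MW facility bridging a 4-hour utility shortfall at 85% efficiency:
print(round(bess_capacity_mwh(50, 4), 1))  # ≈ 235.3 MWh
```

Even this crude arithmetic shows why developers in energy-constrained regions
pair storage with prime power sources rather than relying on batteries alone.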

Resilience played a major role in the results. High-resilience individuals
performed well with or without LLM support, and they were better at using AI
guidance without becoming over-reliant on it. Low-resilience participants did
not gain as much from LLMs; in some cases their performance stagnated or even
declined. This creates a risk of uneven outcomes: teams could see gaps
widen between those who can critically evaluate AI suggestions and those who
cannot. Over time, this may lead to over-reliance on models, reduced independent
thinking, and a loss of diversity in how problems are approached. According to
Lanyado, security leaders need to plan for these differences when building teams
and training programs. “Not every organization and/or employee interacts with
automation in the same way, and differences in team readiness can widen security
risks,” he said. ... The findings suggest that organizations cannot assume
adding an LLM will raise everyone’s performance equally. Without deliberate
design, these tools could make some team members more effective while leaving
others behind.
The researchers recommend designing AI systems that adapt to the user.
High-resilience individuals may benefit from open-ended suggestions.
Lower-resilience users might need guidance, confidence indicators, or prompts
that encourage them to consider alternative viewpoints.
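The researchers' recommendation to adapt AI output to the user can be
illustrated with a toy dispatcher; the resilience scores, the 0.6 threshold,
and the prompt styles here are entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    resilience: float  # hypothetical 0-1 score from prior assessments

def suggestion_style(user: UserProfile, confidence: float) -> str:
    """Pick how an AI assistant frames its output for this user.

    High-resilience users get open-ended suggestions; lower-resilience
    users get step-by-step guidance, an explicit confidence indicator,
    and a nudge to consider alternative explanations.
    """
    if user.resilience >= 0.6:
        return "open-ended: here is one possible approach, adapt freely"
    return (f"guided: follow these steps; model confidence {confidence:.0%}; "
            "before acting, list one alternative explanation")

print(suggestion_style(UserProfile("analyst_a", 0.8), 0.7))
print(suggestion_style(UserProfile("analyst_b", 0.4), 0.7))
```

The point is not the specific rule but the architecture: the assistant's
framing becomes a function of the user profile rather than a single fixed
prompt for everyone.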

Looked at more critically, ChatGPT has become a supercharged Google search that
leaps from finding information to synthesizing and judging it, a clear
homogenization of human capacity that might lead to a world of grey-zone AI
slop. ... While ChatGPT follows the people, Claude is following the money,
hoping to capitalize on business needs to improve efficiency and productivity.
By focusing on complex, high-value work, the company is signaling it believes
the future of AI lies not in making everyone more productive, but in automating
knowledge work that once required specialized human expertise. ... These
divergent strategies result in different financial trajectories. OpenAI enjoys
massive scale, with hundreds of millions of users providing a broad funnel for
subscriptions. It generates enormous traffic, but much of it is relatively
low-value. OpenAI is betting the real money will flow through
licensing its tools to Microsoft, where it can be embedded in Copilot and Office
products to generate recurring revenue streams to offset its infrastructure and
operating costs. Anthropic has fewer users but stronger unit economics. Its
focus on enterprise use means customers are better positioned to purchase more
expensive premium services that can demonstrate strong return-on-investment.

Orla Daly, CIO at Skillsoft, told ZDNET that the research shows business leaders
must keep pace with the changing requirements for capabilities in different
operational areas. "Significant percentages of skills are no longer relevant.
The skills that we'll need in 2030 are only just evolving now," she said. "If
you're not making upskilling and learning part of your core business strategy,
then you're going to ultimately become uncompetitive in terms of retaining
talent and delivering on your organizational outcomes." ... Daly said companies
must pay more attention to the skills of their employees, including measuring
and testing those proficiencies. "That's about using a combination of
benchmarks, which we use at Skillsoft, that allow you, through testing, to
understand the skills that you have," she said. "It's also about how you
understand that capability in terms of real-world applications and measuring
those skills in the context of the jobs that are being done." ... "You need to
make measurement central to the business strategy, and have a program around
learning, so it's part of the everyday culture of the business," she said. "From
the executive level down, you need to say learning is a core part of the
organization. Learning then turns up in all of your business operating
frameworks in terms of how you track and measure the outcomes of programs,
similar to other investments that you would make."

Sovereign AI refers to the ability of a nation to develop and operate AI
platforms within its own borders, under its own laws and energy systems. ... By
ensuring that sensitive data and critical compute resources remain local,
sovereign AI reduces exposure to geopolitical risk, supports regulatory
compliance and builds trust among both public and private stakeholders. Recent
initiatives in Stockholm highlight how sovereign AI can be embedded into
existing data center ecosystems. Purpose-built AI compute clusters, equipped
with the latest GPU architectures, are being deployed on renewable power and
integrated into local district heating networks, where excess server heat is
recycled back into the city grid. These facilities are designed not only for
high-performance workloads but also for long-term sustainability, aligning with
Sweden’s climate and digital sovereignty goals. The strategy is clear: pair
advanced AI infrastructure with domestic control and clean energy. By doing so,
Stockholm can position itself as a European leader in sovereign AI, where
innovation, security and sustainability converge in a way that few other markets
can match. ... Stockholm’s ecosystem exerts a gravitational pull: as more
green, efficient, sovereign-capable data centers emerge, they attract
additional clients and investment and reinforce the region’s dominance.
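The district-heating reuse described above can be sized with simple arithmetic;
the IT load, recovery fraction, and per-home heat demand below are illustrative
assumptions, not Stockholm figures:

```python
def homes_heated(it_load_mw: float, recovery_fraction: float = 0.8,
                 home_demand_mwh_per_year: float = 15.0) -> float:
    """Rough count of homes a data center's waste heat could warm.

    Nearly all IT power ends up as heat; recovery_fraction is the share
    the heat-pump loop actually delivers into the district network.
    """
    hours_per_year = 8760
    recovered_mwh = it_load_mw * hours_per_year * recovery_fraction
    return recovered_mwh / home_demand_mwh_per_year

# A 10 MW AI cluster running year-round:
print(round(homes_heated(10)))
```

Even under conservative assumptions, the recovered heat is a meaningful
municipal resource, which is why the heat-reuse integration is central to the
sustainability case rather than a footnote.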

Enter agentic AI: networks of intelligent agents capable of independent
decision-making and adaptive learning. This
extends the capabilities of traditional AI systems by incorporating autonomous
decision-making and execution, while adopting proactive security measures. It is
poised to revolutionise cybersecurity in the banking and financial services
sector while bridging the gap between the speed of cyber-attacks and the slow,
human-driven incident response. ... Agentic AI will proactively and autonomously
hunt for threats across the IT systems within the financial institution by
actively looking for vulnerabilities and possible threat vectors before they are
exploited by threat actors. Agentic AI systems leverage their capabilities in
simulation, where potential attack scenarios are modeled to identify
vulnerabilities in the security posture. Data from logs, network traffic, and
activities from endpoints are correlated to spot attack vectors as a part of the
threat hunting process. ... AI agents should be deployed into both
customer-facing systems, to improve the customer experience, and internal
systems. By
establishing an agentic AI ecosystem, agents can collaborate across functions.
Risk management, compliance monitoring, operational efficiency, and fraud
detection functions can be streamlined, too.
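The correlation step described above — joining log, network, and endpoint
telemetry to surface a likely attack chain — can be sketched as a toy rule;
the event schema, hostnames, and the specific heuristic are hypothetical:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized events from three telemetry sources.
events = [
    {"source": "auth_log", "host": "fin-db-01", "type": "failed_login",
     "ts": datetime(2025, 1, 10, 2, 14)},
    {"source": "netflow", "host": "fin-db-01", "type": "outbound_to_rare_ip",
     "ts": datetime(2025, 1, 10, 2, 16)},
    {"source": "endpoint", "host": "fin-db-01", "type": "new_scheduled_task",
     "ts": datetime(2025, 1, 10, 2, 17)},
]

def correlate(events, window=timedelta(minutes=10)):
    """Flag hosts where events from >= 3 distinct telemetry sources
    cluster inside one time window — a crude attack-chain heuristic."""
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_host[e["host"]].append(e)
    alerts = []
    for host, evs in by_host.items():
        for i, first in enumerate(evs):
            cluster = [e for e in evs[i:] if e["ts"] - first["ts"] <= window]
            if len({e["source"] for e in cluster}) >= 3:
                alerts.append((host, [e["type"] for e in cluster]))
                break
    return alerts

print(correlate(events))
```

A production agent would replace the hard-coded rule with learned models, but
the shape is the same: normalize events, group by entity, and score clusters
that span multiple telemetry sources.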

This isn’t the first time NPM’s reputation has been put to the test. The
JavaScript community has seen a trio of supply chain attacks in rapid
succession. Just recently, we saw the “manifest confusion” exploit, which
tricked dependency trackers, and prior to that, a series of typosquatting and
account-takeover incidents—remember the infamous “coa” and “rc” package hijacks?
Now comes the latest beast from the sand: the Shai-Hulud supply chain attack.
This is, depending on how you count, the third major NPM incident in recent
memory—and arguably the most insidious. ... According to the detailed analysis
by JFrog, attackers compromised multiple popular packages, including several
that mimicked or targeted legitimate CrowdStrike modules. Before you panic: this
wasn’t a direct attack on CrowdStrike itself, but the attackers were clever—by
using names like “crowdstrike” and latching onto a trusted security vendor’s
brand, they hoped to worm their payloads into unsuspecting production
environments. ... What makes these attacks so damaging is less about the
technical sophistication (though, don’t get me wrong, this one is clever) and
more about how they shake our trust in the very idea of open collaboration.
Every dev who’s ever typed `npm install` had to trust not just the original
author, but every maintainer, every transitive dependency, and the opaque
process of package publishing itself.
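One practical defense against the name-squatting pattern described here is to
screen dependencies for lookalike names before installing; a minimal sketch
using edit distance, with the allow-list purely illustrative:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"lodash", "express", "react", "chalk"}  # illustrative allow-list

def flag_lookalikes(deps):
    """Flag names within edit distance 1-2 of a trusted package name
    (distance 0 means it *is* the trusted package, so it passes)."""
    hits = []
    for dep in deps:
        for good in TRUSTED:
            d = edit_distance(dep, good)
            if 0 < d <= 2:
                hits.append((dep, good))
    return hits

print(flag_lookalikes(["lodahs", "express", "chalkk"]))
```

This catches only the crudest typosquats, not brand-piggybacking names or
compromised legitimate maintainers, which is why registry-level provenance
checks and lockfile auditing still matter.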