Quote for the day:
“Whether you think you can or you think
you can’t, you’re right.” -- Henry Ford

“We’ve seen that AI’s true potential is unlocked by connecting trusted, governed
data – structured and unstructured – with real-time analytics and decision
intelligence. With the rise of
agentic AI, the next wave of value creation will
come from intelligent systems that don’t just interpret data, but continuously
and autonomously act on it at scale. Put simply, AI isn’t a shortcut to insight
– it’s a multiplier of value, if the data is ready. Enterprises that treat data
as an afterthought will fall behind, while those that treat it as a strategic
asset will lead,” added the
Qlik CSO. ... “In this
AI economy, compute power may
set the pace, but
data sets the ceiling.
MinIO raises that ceiling, transforming
scattered, hard-to-reach datasets into a living, high-performance fabric that
fuels every AI prompt and initiative. With MinIO AIStor, organizations gain the
ability to store and understand data. Data that is secure, fluid, and always ready
for action is a competitive weapon,” added
Kapoor. ... “Data that is fresh, well
described and
policy aware beats bigger but blind datasets because it can be
safely composed, reused and measured for impact, with the lineage to show teams
what to trust and what to fix so they can ship faster,” said
Neat. ... There is no real question that the value of data has increased, nor
that the proliferation of AI has been fundamental to that escalation; the
mechanics as variously described here should point us toward the new wave of
emerging truths in this space.

For many teams, the problem is not a lack of enthusiasm or ambition but a
shortage of resources and skills. They want to automate more, streamline
workflows, and adopt new practices, yet often find themselves already operating
at full capacity just in keeping existing systems running. In that environment,
the slightest of steps toward more
advanced automation strategies can feel like
a big leap forward. ... On the security side, the logic behind
DevSecOps is
compelling. More companies are realising that security has to be baked in from
day one, not bolted on later. The difficulty lies in making that shift a
practical reality, as integrating
security checks early in the pipeline often
requires new tooling, changes to established workflows, and in some cases,
rethinking the roles and responsibilities within the team. ... In many
organisations, it is the existing DevOps or platform teams that are best
positioned to take on this responsibility, extending their remit into what is
often referred to as
MLOps. These teams already have experience building and
maintaining shared infrastructure, managing pipelines, and ensuring operational
stability at scale, so expanding those capabilities to handle
data science and
machine learning workflows can feel like a natural evolution. ... That said, as
adoption grows, we can also expect to see more specialised MLOps roles
appearing, particularly in larger enterprises or in organisations where AI is a
major strategic focus.
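The "baked in from day one" idea above can be made concrete with a small example. Below is a minimal, hypothetical pre-commit-style check in Python that scans files for hardcoded credentials before code enters the pipeline; the patterns, file names, and rules are illustrative assumptions, not any specific tool's behaviour.

```python
import re

# Illustrative patterns for common hardcoded secrets (assumptions, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def scan_text(name: str, text: str) -> list[str]:
    """Return findings like 'file:line: possible hardcoded secret'."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"{name}:{lineno}: possible hardcoded secret")
    return findings

def scan_files(paths: list[str]) -> int:
    """Scan each file; a non-zero return code fails the pipeline stage."""
    findings = []
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            findings.extend(scan_text(path, f.read()))
    for finding in findings:
        print(finding)
    return 1 if findings else 0
```

Wired in as a pre-commit hook or an early CI stage, a check like this fails fast, before a secret ever reaches a shared branch, which is the practical shape of the "shift left" that DevSecOps asks for.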

Kantsu then began collaborating with the police, the cyberattack response teams
of the company’s insurers, and security specialists to confirm the scope of
cyber insurance coverage and estimate the amount of damage. ... When they began
the actual recovery work, they encountered an unexpected pitfall. “We considered
how to restore operations as quickly as possible. We did a variety of things,
including asking other companies in the same industry to send packages, even
ignoring our own profits,”
Tatsujo says. ... To prevent reinfection with ransomware, the company
prohibited use of the old networks and PCs. Instead, smartphones were tethered
as Wi-Fi routers and, where possible, used to keep shipments moving. New PCs
were purchased to create an on-premises
environment. ... “In times of emergency like this, the most important thing is
cash to recover as quickly as possible, rather than cost reduction. However,
insurance companies do not pay claims immediately. ... “In the end, many
customers cooperated, which made me really happy.
Rakuten Ichiba, in particular,
offers a service called ‘Strongest Delivery,’ which allows for next-day delivery
and delivery time specification, but they were considerate enough to allow us a
grace period in consideration of the delay in delivery,” says President Tatsujo.

Practitioners say a cluster of market and technical factors are making
stablecoins the payment of choice for cybercriminals and fraudsters. "It's not
just the dollar peg that makes stablecoins attractive," said
Ari Redbord, vice
president and global head of policy and government affairs at
TRM Labs.
"Liquidity is critical. There are deep pools of stablecoin liquidity on both
centralized and decentralized platforms. Settlement speed and irreversibility
are also appealing for criminals trying to move large sums quickly," he told
Information Security Media Group. The perception of stability - knowing $1 today
will likely be $1 tomorrow - often suffices for illicit actors, regardless of an
issuer's exact collateral model, he said. This stability and on-chain plumbing
create both opportunity and exposure. Redbord said the spike in stablecoin usage
is partly because law enforcement agencies around the world have become
"exceptionally effective at tracing and seizing bitcoin," and criminals "go
where the liquidity and usability are." There is no technical attribute of
stablecoins that makes them more appealing to criminals or harder to trace,
compared to other cryptocurrencies,
Koven said. In practice, public ledgers keep
transfers visible; the question then becomes whether investigators have the
right tools and the cooperation of the ecosystem's gatekeepers to follow value
across chains.

Zero Trust is increasingly viewed as the standard going forward. As AI-driven
threats accelerate, organisations must evaluate security holistically across
identity, devices, networks, applications, and data. At
DXC, we're helping
customers embed Zero Trust into their culture and technology to safeguard
operations. Our end-to-end expertise makes it possible to both defend against AI
threats and harness secure AI in the same decisive motion. ... New cybersecurity
threats are the primary driver for updating Zero Trust frameworks, with 72% of
respondents indicating that the evolving threat landscape pushes them to
continuously upgrade policies and practices. In addition, more than half of
responding organisations recognised improvements in user experience as a
secondary benefit of adopting Zero Trust approaches, beyond the gains in
security posture. ... Most enterprises already rely on
Microsoft Entra ID
and
Microsoft 365 as the backbone of their IT environments. Building Zero Trust
solutions alongside DXC extends that value, enabling tighter integration,
simplified operations, and greater visibility and control. By consolidating
around the Microsoft stack, organisations can reduce complexity, cut costs, and
accelerate their Zero Trust journey. ... Participants in the study agreed that
Zero Trust is not a project with a defined end point. Instead, it is an ongoing
process that requires continuous monitoring, regular updates, and cultural
adaptation.

The challenges of AI at the Edge are as large as the advantages, however. One of
the biggest challenges and key enablement technologies is connectivity. Edge
processing and AI at the Edge require reliability, low latency, and resiliency
in the harshest of environments. Without good connections to the network, many
of the advantages of Edge AI are diminished, or lost entirely. A truly rugged
Edge AI system requires a dual focus on connectivity, according to the experts
at
ATTEND. It needs both robust external I/O to interface with the outside
world, and high-speed, resilient internal interconnects to manage data flow
within the computing module. ... The transition to Edge AI is not just a
software challenge; it is a hardware and systems engineering challenge. The key
to overcoming this dual challenge is to engage with a partner like ATTEND, who
will understand that the reliability of an advanced AI model is ultimately
dependent on the physical-layer components that capture and transmit its data.
By offering a comprehensive portfolio that addresses connectivity from the
external sensor to the internal processor module, ATTEND can help you to build
end-to-end systems that are both powerful and resilient. To see all that ATTEND
is doing to advance and enable true intelligence at the Edge, meet with them at
embedded world North America in November at the Anaheim Convention Center.

One of the most significant operational gaps in AI adoption is the lack of
runtime observability, with organizations struggling to know what data a model
is ingesting or what it's producing. Observability answers these questions by
providing a live view of AI behavior across prompts, responses and system
interactions, and it is a precursor to regulating or securing AI systems. ...
One of the biggest risks of
GenAI in the enterprise is data leakage, with
workers inadvertently pasting confidential information into a chatbot, models
regurgitating sensitive data they were exposed to during training, or adversaries
crafting prompts to extract private information through jailbreaking. Allowing
AI access without control is equivalent to opening an unsecured API to your
crown jewels. ... Output is just as risky as input with GenAI since an LLM could
generate sensitive content, malicious code or incorrect results that are trusted
by downstream systems or users.
Palo Alto Networks'
Arora noted the need for
bi-directional inspection to watch not only what goes into large language
models, but also what comes out. ... Another key challenge is defining identity
in a non-human context, raising questions around how AI agents should be
authenticated, what permissions AI agents should have and how to prevent
escalation or impersonation. Enterprises must treat bots, copilots, model
endpoints and LLM-backed workflows as identity-bearing entities that log in,
take action, make decisions and access sensitive data.
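The bi-directional inspection idea above can be sketched in a few lines: both the prompt going into a model and the response coming out pass through the same redaction-and-logging layer. This is a minimal illustration under stated assumptions; the model call is a stub and the regex rules are invented for the example, not Palo Alto Networks' actual implementation.

```python
import re

# Illustrative patterns for sensitive values (assumptions, not a complete policy).
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

audit_log: list[dict] = []  # stand-in for a real observability pipeline

def inspect(direction: str, text: str) -> str:
    """Redact sensitive values and record the event for observability."""
    redacted = text
    for pattern, replacement in RULES:
        redacted = pattern.sub(replacement, redacted)
    audit_log.append({"direction": direction,
                      "original_len": len(text),
                      "redacted": redacted})
    return redacted

def fake_model(prompt: str) -> str:
    # Stub standing in for a real LLM call.
    return f"Echo: {prompt}"

def guarded_call(prompt: str) -> str:
    safe_prompt = inspect("input", prompt)   # watch what goes in
    response = fake_model(safe_prompt)
    return inspect("output", response)       # watch what comes out
```

The same `inspect` gate on both directions is the point: it gives the live view of prompts and responses that runtime observability requires, and a natural place to attach policy.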

On one side are the techno-optimists: the believers in inexorable progress, the
proponents of markets and innovation as self-correcting forces. They see every
challenge as a technical problem and every failure as a design flaw waiting to
be solved. On the other side are techno-pessimists: the prophets of collapse who
warn that every new tool will inevitably accelerate inequality, erode democracy,
or catalyze ecological catastrophe. They see history as a cautionary tale, and
the present as a fragile prelude to systemic failure. Both perspectives share a
common flaw: they treat the future as preordained. Optimists assume that
progress will automatically yield good outcomes; pessimists assume that progress
will inevitably lead to harm. Reality, however, is far less deterministic.
Technology, in itself, is neutral. It amplifies human choices but does not
dictate them. ... Just as a hammer can build a home or inflict injury, a
powerful technology like artificial intelligence, gene editing, or blockchain
can be used to improve lives or to exacerbate inequalities. The technology does
not prescribe its use; humans do. This neutrality is both liberating and
daunting. On the one hand, it affirms that progress is not predestined. The
future is not a straight line determined by the mere existence of certain
tools.

The top priority for CISOs is real-time threat monitoring and comprehensive
visibility into all data in motion across their organisations, supporting a
defence-in-depth strategy. However, 97 percent of CISOs acknowledged making
compromises in areas such as visibility gaps, tool integration and data quality,
which they say limit their ability to fully secure and manage hybrid cloud
environments. ... The reliance on AI is also causing a revision of how SOCs
(security operations centres) function. Almost one in five CISOs reported
lacking the appropriate tools to manage the increased network data volumes
created by AI, underscoring that legacy log-based tools may not be fit for
purpose against AI-powered threats. ... Rising data breaches, with a 17 percent
increase year on year, are translating into greater pressure on CISOs, 45
percent of whom said they are now the main person held accountable in the event
of a breach. There is also concern about stress and burnout within cybersecurity
teams, which is driving a greater embrace of AI-based security tools. ... The
adoption of AI is expected to have practical impacts, such as enabling junior
analysts to perform at the same level as more experienced team members, reducing
training costs, speeding up analysis while investigating threats, and improving
overall visibility for the security function.

Many believe, “No servers, no security risks.” That’s a myth. Nowadays,
attackers exploit security weaknesses specific to serverless
platforms. ... Every serverless application depends on third-party libraries,
and each function that pulls in a compromised component becomes vulnerable. In
one incident, an npm package was hijacked and malicious code was inserted into
it; once that code ran inside
AWS Lambda, it silently exfiltrated all environment variables, leaking API
keys, credentials, and other sensitive data. The whole process finished in
milliseconds, too brief for any security system to detect. ... As more
companies adopt serverless technologies, these risks become more widespread,
so it is fundamental to validate that serverless environments are secure.
Let’s explore the facts. Research indicates that serverless computing is
expected to grow rapidly. According to
Gartner’s July 2025 forecast, global IT spending will
climb to $5.43 trillion, with enterprises investing billions into AI-driven
cloud and data center infrastructure, making serverless platforms an
increasingly critical, but often overlooked, security target.
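The environment-variable exfiltration described above works because, in most serverless runtimes, every imported dependency shares the function's process and can read its environment. This hypothetical Python sketch shows how little code a compromised package would need, and illustrates the defensive habit of resolving secrets at runtime instead of storing them in environment variables; the `get_secret` helper and its in-memory vault are illustrative assumptions standing in for a real secrets manager.

```python
import json
import os

def malicious_on_import() -> str:
    """What a hijacked dependency could do at import time: serialize the
    whole environment, ready to send to an attacker-controlled server."""
    return json.dumps(dict(os.environ))

# Defensive habit: keep secrets out of the environment entirely and resolve
# them at runtime, so a dependency that dumps os.environ finds nothing useful.
def get_secret(name: str) -> str:
    # Placeholder for a call to a real secrets manager (an assumption here);
    # the point is that the value never lives in os.environ.
    vault = {"db_password": "resolved-at-runtime"}
    return vault[name]
```

A scan of `os.environ` inside the function is a cheap way to audit what a compromised import could see today.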