Quote for the day:
"Always remember, your focus determines your reality." -- George Lucas
We can’t ignore cloud governance anymore
Many organizations are still treating cloud governance as an afterthought.
Instead, enterprises pour resources into migration and adoption at the expense
of creating a governance framework meant to manage risks proactively. This
oversight leads to the type of major outages and service disruptions we’ve seen
recently, which cost companies millions of dollars and erode brand trust. Events
like these aren’t inevitable. With proper governance structures in place, much
of the fallout can be mitigated or avoided altogether. ... Risks that were
irrelevant five years ago, such as cloud-native application security or hybrid
cloud architecture vulnerabilities, are now front and center. Enterprises must
rethink their approach to risk in the cloud, from redefining acceptable levels
of exposure to embedding automated tools that dynamically address
vulnerabilities before they evolve into crises. In the book, we cover strategies
for incorporating dynamic risk management tools, compliance structures, and a
culture of accountability throughout an enterprise’s operations. ... The
majority of enterprises are rolling the dice. The belief that cloud computing
inherently eliminates risks is a dangerous misconception; without guardrails and
policies to control how the cloud operates within an organization, risks can
grow unchecked. Enterprises are unknowingly forgoing millions of dollars in
potential savings simply because they don’t invest in governance.

The Art of Lean Governance: The Cybernetics of Data Quality
Without this cybernetic interplay, data governance devolves into static policy
documents rather than a living, self-correcting mechanism. For risk officers and
auditors, this distinction defines whether data risk is truly controlled or
merely reported. The systems that thrive will be those that can self-correct
faster than they degrade. ... Traditional data risk management has focused on
frameworks, thresholds, and remediation logs. The cybernetic view goes further:
it treats risk as system entropy — the measure of disorder introduced when
feedback loops are weak or delayed. Consider financial reconciliation. When the
flow of transactional data between ledgers, systems, and reports is disrupted,
discrepancies emerge. If the feedback mechanism (the reconciliation engine) is
not fast or intelligent enough, the delay amplifies uncertainty across dependent
systems, and risk compounds through interconnection. Thus, data risk management
is a function of response latency and feedback precision. Modern systems must
evolve toward autonomous reconciliation, utilizing pattern recognition and
AI-assisted anomaly detection to maintain equilibrium in near real-time. This is
cybernetic risk control — adaptive, responsive, and context-aware. ...
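The reconciliation loop described above can be sketched as a toy feedback system: a sensor step that detects discrepancies between ledgers, and a simple model of how exposure compounds the longer a gap goes uncorrected. The function names, the tolerance, and the spread factor are illustrative assumptions, not anything from the book or a real reconciliation engine.

```python
def reconcile(ledger_a, ledger_b, tolerance=0.01):
    """Feedback sensor: find transactions whose amounts disagree
    between two ledgers keyed by transaction id."""
    keys = set(ledger_a) | set(ledger_b)
    return {k: ledger_a.get(k, 0.0) - ledger_b.get(k, 0.0)
            for k in keys
            if abs(ledger_a.get(k, 0.0) - ledger_b.get(k, 0.0)) > tolerance}

def exposure(discrepancy, cycles_unresolved, spread=1.5):
    """Toy model of risk as a function of response latency: each cycle a
    gap stays open, it propagates to dependent systems and compounds."""
    return abs(discrepancy) * spread ** cycles_unresolved

ledger = {"t1": 100.0, "t2": 250.0, "t3": 75.0}
report = {"t1": 100.0, "t2": 250.0, "t3": 975.0}

gaps = reconcile(ledger, report)
print(gaps)                      # {'t3': -900.0}
print(exposure(gaps["t3"], 1))   # caught in one cycle: 1350.0
print(exposure(gaps["t3"], 5))   # caught after five cycles: 6834.375
```

The point of the toy `exposure` model is the article's claim in miniature: the same $900 discrepancy is a small correction if the feedback loop closes in one cycle, and a materially larger exposure if it lingers for five.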
Cybernetics thrives on understanding the flow of energy, signals, and cause and
effect. Data lineage is the cybernetic map of that flow. It illustrates how data
is transformed, where it originates, and how it propagates through
systems.

Role Reversals: How AI Trains Humans
In some cases, LLMs can shape how people think about topics such as culture,
morality, and ethics. At some point, these complex feedback loops blur the line
between human and machine thinking—including who is teaching whom. “Research
shows that it’s possible to influence the vocabulary of large
populations—potentially on a global scale. This shift in language can, in turn,
reshape thinking, culture, and public discourse,” said Hiromu Yakura ... In
fact, human behavior changes significantly when people use AI, according to a
study from a research group at Washington University in St. Louis, MO. Using the
behavioral economic bargaining tool Ultimatum Game, they found that study
participants who thought their actions would help train an AI system were more
likely to reject an “unfair” payout—even when it came at a personal cost. The
reason? They wanted to teach AI what’s fair. ... AI-generated language can also
help spread bias and misinformation and narrow the way people think—including by
design. Today, social media algorithms amplify and bury content to dial up user
engagement. In the future, governments, political strategists, and others could
tap AI-generated language to sway—and perhaps manipulate—public opinion. AI
researchers like Treiman, already uneasy about how little is known about the
inner workings of most algorithms, are raising red flags. Secrecy, she argued,
leaves the public in the dark about systems that increasingly shape daily life.
How Data Is Reshaping Science – Part 1: From Observation to Simulation
With so much data and powerful AI models at their fingertips, researchers are
doing more and more of their work inside machines. Across many fields,
experiments that once started in a lab now begin on a screen. AI and simulation
have flipped the order of discovery. In many cases, the lab has become the final
step, not the first. You can see this happening in almost every area of science.
Instead of testing one idea at a time, researchers now run thousands of
simulations to figure out which ones are worth trying in real life. Whether
they’re working with new materials, brain models, or climate systems, the
pattern is clear: computation has become the proving ground for discovery. ...
Scientists aren’t just testing hypotheses or peering into microscopes anymore.
More and more, they’re managing systems — trying to stop models from drifting,
tracking what changed and when, making sure what comes out actually means
something. They’ve gone from running experiments to building the environment
where those experiments even happen. And whether they’re at DeepMind, Livermore,
NOAA, or just some research team spinning up models, it’s the same kind of work.
They’re checking whether the data is usable, figuring out who touched it last,
wondering if the labels are even accurate. AI can do a lot, but it doesn’t know
when it’s wrong. It just keeps going. That’s why this still depends on the human
in the loop.

ID verification laws are fueling the next wave of breaches
The cybersecurity community has long lived by a simple principle: Don't collect
more data than you can protect. But ID laws and other legal mandates now force
many organizations to store massive amounts of sensitive data, putting them in
the precarious situation of dealing with information they don’t necessarily want
but have to safeguard. ... Age verification laws are proliferating worldwide.
These laws typically mandate age verification through government-issued
documents, such as driver's licenses, passports or national ID cards. Failure to
verify IDs can result in millions of dollars in fines. The intention is
sensible: protecting minors from inappropriate online content. But for the
organizations that have to collect ID data, the laws can lead to a security
nightmare. Organizations now have to collect and store volumes of the most
sensitive personally identifiable information possible regardless of whether
they have the infrastructure to adequately protect it — or even want to collect
it. ... When backup, endpoint protection, disaster recovery and security
monitoring operate through a single agent with one management console, there are
no handoff points where data might be exposed and no integration vulnerabilities
to exploit, and there is no confusion about which tool protects what. Native
integration delivers practical benefits beyond security. MSPs can reduce the
administrative burden of managing multiple vendor relationships, licenses and
support contracts.

Is enterprise agentic AI adoption matching the hype?
“The expectations around AI and agents are huge. And vendors are making
statements that all you need to solve your enterprise problems is to unleash an
army of agents,” van der Putten tells ITPro. “But if not properly controlled and
governed, this army is more likely to go and wreak havoc than bring peace and
prosperity in the enterprise. And enterprises know this.” According to van der
Putten, today’s AI agents are unable to take into account the real-world
complexity that the majority of enterprises must deal with. And the thing that
makes them appealing — their apparent autonomy — is also their biggest weakness.
“Enterprises want to innovate, but they are held back by legacy,” van der Putten
explains. ... "The sticking point isn’t the technology – it’s trust. Agents can
already reconcile accounts, flag anomalies, even anticipate compliance risks,
but adoption will only scale once businesses have confidence in how they
operate, explain their reasoning, and can be audited.” Nowhere is the issue of
trust more apparent than in the world of commerce, where AI agents are being
used as assistants and autonomous actors, capable of initiating and completing
purchases independently of the shopper. ... Although agentic commerce promises
to streamline the path to purchase for businesses, Sheikrojan says that it’s a
path paved with “blind spots”. This is because when an AI agent takes over the
transaction, many of today’s retail processes, rooted in context and behavioral
signals such as fraud prevention, disappear.

Power, not GPUs, will decide who wins AI
AI workloads scale differently from traditional IT. Where once we worried about
server density in kilowatts per rack, we’re now talking about megawatts. That
kind of thermal and electrical load exposes the inadequacies of legacy
architectures built for virtualisation, not for vector processing or massive
parallel training. As Stephen Worn put it, “AI isn’t just another workload; it’s
a demanding tenant.” It’s a tenant with unpredictable consumption, heat spikes,
and sub-millisecond tolerance for power fluctuation. And it’s not just moving in
– it’s taking over. ... Downtime in AI is more than an outage; it’s a lost
training cycle, corrupted model, or missed opportunity. Resilience in this
context isn’t just about redundancy; it’s about reaction time. We need systems
that operate on the same timelines as the workloads they protect. ... In a
sense, the infrastructure must become intelligent, just like the workloads it
supports. Data centres are evolving into living ecosystems, where compute
behavior and physical response are tightly intertwined. ... So what does this
all point to? Here’s a realistic, aspirational view of what AI-ready
infrastructure could look like by the end of the decade: Hybrid Power
Architectures: Combining traditional grid feeds, on-site renewables, and modular
battery systems; Resilience by Design: Low-toxicity chemistries, automated
failover, and microsecond response baked into every rack; AI-Managed AI
Infrastructure: Neural networks monitoring and adjusting the environments they
run in.

The Ultimate Betrayal: When Cyber Negotiators Became the Attackers
The allegations outline an audacious and calculated scheme that exploits the foundational trust between a victim and its incident response team. The indictment claims the defendants utilized the notorious BlackCat (ALPHV) ransomware variant to compromise targeted organizations. The irony, as noted by CNN, is that the accused were professionals whose entire business model was predicated on helping victims recover from these exact kinds of intrusions. The DOJ effectively accuses the U.S. ransomware negotiators of "launching their own ransomware attacks," according to TechCrunch. ... "'Zero Trust' is not just a security framework for your network; it must now be seen as a security framework that includes not just your network, but all the people and devices that have any type of access to it," Leighton said. "As a former intelligence officer, I couldn't help but think of Edward Snowden and how he compromised NSA's networks." "This case just proves that we have to extend our personnel vetting processes beyond our own organizations," he added. "We need to be able to also vet the employees of our suppliers, as well as those whose job it is to remediate breaches of our networks. This is easier said than done, but CISOs are going to have to work with their corporate legal teams to rewrite supplier contracts so they can vet third-party remediation team personnel independently."

Infostealers: Addressing a rising threat to UK businesses
Multiple infostealers exist, but several have been more dominant during 2025, according to experts. Raccoon Stealer stands out as the most frequently encountered infostealer, accounting for the highest volume of incidents, according to Rozenski. Despite law enforcement disruption, LummaStealer remains “one of the most prolific infostealers,” says Addison. It operates under a malware-as-a-service (MaaS) model, making it “accessible to a wide range of threat actors,” he says. ... Predictably, AI is also set to super-charge infostealer attacks. Walter says SentinelOne is now tracking a new AI-assisted infostealer it calls Predator AI. “The malware doesn’t just steal passwords and credentials. It integrates with ChatGPT to analyse huge amounts of stolen data to identify high-value accounts and business domains.” Predator AI is also able to organise the stolen data, enabling cybercriminals to “operate more efficiently” and “increase the speed and volume of attacks,” he says. “While this infostealer isn’t a game-changer yet, it shows where cybercriminals are investing their resources and what businesses should look out for next.” ... At the same time, breaking single sign-on journeys is “crucial” for critical applications, says Gee. He recommends requiring users to revalidate MFA when accessing critical applications, and ensuring admins are required to do the same.

EU lawmakers approve regulation to expand Europol’s capabilities in biometric data processing
European lawmakers have backed a proposal to give Europol a central role in
coordinating the fight against smuggling networks and human trafficking and to
strengthen the obligation among EU member states to share data, including
biometrics. The support for the regulation comes amid criticism from rights
groups and the EU data watchdog. ... The regulation also enables Europol to
“effectively and efficiently process biometric data in order to better support
Member States in cracking down on irregular migration.” “The effective use of
biometric data is key to closing the gaps and blind spots that terrorists and
other criminals seek to exploit by hiding behind false or multiple identities,”
says the document. ... “The Europol Regulation unlawfully expands the EU’s
digital surveillance infrastructure without appropriate safeguards,” says the
report. “This is particularly important in the context of biometrics.” Facing
pushback, the EU introduced significant changes to the proposal in May, allowing
more flexibility for EU member states to decide whether to exchange data with
Europol. The presidency of the Council and European Parliament negotiators
reached a provisional agreement on the regulation in September. Europol’s legal
framework already allows the agency to process biometric data for operational
purposes and for preventing or combating crime.