Quote for the day:
"Coaching is unlocking a person's potential to maximize their own performance. It is helping them to learn rather than teaching them." -- John Whitmore
The cloud giants stumble

The challenge for Amazon, Microsoft, and Google will be to adapt their
strategies to this evolving landscape. They’ll need to address concerns about
costs, provide more flexible deployment options, and develop compelling AI
solutions that deliver clear value to enterprises. Without these changes, they
may continue to see their growth rates decline as organizations increasingly
turn to alternative solutions that better meet their specific needs. This does
not mean failure for Big Cloud, but they will take a few years to figure out
what’s important to their market. They are a bit off-target now. The rise of
specialized providers and the growing acceptance of private cloud solutions
mean enterprises can be more selective, choosing fit-for-purpose options rather
than forcing all workloads into a one-size-fits-all public cloud model that may
not be cost-effective. This is particularly relevant for AI initiatives, where
specialized infrastructure providers often deliver better value. This freedom of
choice comes with increased responsibility. Enterprises must develop more
substantial in-house expertise to effectively evaluate and manage multiple
infrastructure options. ... The key takeaway is clear: Enterprises are entering
an era where they can build infrastructure strategies based on their specific
needs rather than vendor limitations.
Lines Between Nation-State and Cybercrime Groups Disappearing

“The vast cybercriminal ecosystem has acted as an accelerant for
state-sponsored hacking, providing malware, vulnerabilities, and in some cases
full-spectrum operations to states,” said Ben Read, senior manager at Google
Threat Intelligence Group, which includes the Mandiant Intelligence and Threat
Analysis Group teams. “These capabilities can be cheaper and more deniable
than those developed directly by a state.” ... While nation-states for years
have leveraged cybercriminals and their tools, the trend has accelerated since
Russia launched its ongoing invasion of neighboring Ukraine in 2022,
illustrating that in times of heightened need, states can enlist financially
motivated groups to further their cause. Nation-states can buy cyber
capabilities from cybercrime groups or via underground marketplaces.
Cybercriminals tend to specialize in certain areas and partner with others who
have different skills; this specialization lets state-backed actors act as
customers, buying malware and other tools from criminals. “Purchasing malware,
credentials, or other key resources from
illicit forums can be cheaper for state-backed groups than developing them
in-house, while also providing some ability to blend in to financially
motivated operations and attract less notice,” the researchers wrote.
Agentic AI vs. generative AI

Generative AI is artificial intelligence that can create original content—such
as text, images, video, audio or software code—in response to a user’s prompt
or request. Gen AI relies on machine learning models called deep
learning models—algorithms that simulate the learning and decision-making
processes of the human brain—and other technologies like robotic process
automation (RPA). These models work by identifying and encoding the patterns
and relationships in huge amounts of data, and then using that information to
understand users' natural language requests or questions. These models can
then generate high-quality text, images, and other content in real time, based
on the data they were trained on. Agentic AI describes AI systems that are
designed to autonomously make decisions and act, with the ability to pursue
complex goals with limited supervision. It combines the flexibility of large
language models (LLMs) with the accuracy of traditional programming. This type
of AI acts autonomously to achieve a goal
by using technologies like natural language processing (NLP), machine
learning, reinforcement learning, and knowledge representation. It’s a
proactive AI-powered approach, whereas gen AI is reactive to the user’s
input. Agentic AI
can adapt to different or changing situations and has “agency” to make
decisions based on context.
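
The difference is easier to see in code. Below is a minimal, hypothetical
Python sketch contrasting the two patterns: a single generative call versus an
agent loop that plans, acts through tools, observes the results, and iterates
toward a goal with limited supervision. The llm() stub stands in for any
generative model, and the toy tool-selection policy is purely illustrative; no
real framework's API is assumed.

    def llm(prompt: str) -> str:
        """Stand-in for any text-generation model: one prompt in, text out."""
        return f"[model output for: {prompt[:40]}...]"

    def generate(prompt: str) -> str:
        # Generative AI: reactive -- a single prompt yields a single response.
        return llm(prompt)

    def run_agent(goal: str, tools: dict, max_steps: int = 5) -> list:
        # Agentic AI: proactive -- decompose a goal, act through tools,
        # observe the results, and iterate with limited supervision.
        history = []
        for _ in range(max_steps):
            # A real agent would parse the model's reply to pick the next
            # tool; this toy policy simply cycles through them in order.
            llm(f"Goal: {goal}\nSo far: {history}\nPick one of {list(tools)}")
            name = list(tools)[len(history) % len(tools)]
            observation = tools[name](goal)      # act in the environment
            history.append((name, observation))  # remember what happened
            if observation == "goal reached":    # the agent decides it is done
                break
        return history

    if __name__ == "__main__":
        tools = {
            "search": lambda goal: "found 3 relevant documents",
            "summarize": lambda goal: "goal reached",
        }
        print(generate("Write a haiku about clouds"))  # one shot, reactive
        print(run_agent("Brief me on NIS2", tools))    # multi-step, proactive

The structural point is the loop: the generative call returns once, while the
agent keeps deciding, acting, and observing until it judges the goal met or
runs out of steps.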
5 AI Mistakes That Could Kill Your Business In 2025

It’s easy for us to get so excited by the hype around AI that we rush out
and start spending money on tools, platforms and projects without aligning
them with strategic goals and priorities. This inevitably leads to
fragmented initiatives that fail to deliver meaningful results or ROI. To
avoid this, always “start with strategy” – implement a strategic plan
that clearly shows how any project or initiative will progress your
organization towards improving the metrics and hitting the targets that will
define your success. ... Assessing the skills and possibilities of training
or reskilling, ensuring there is buy-in across the board, and addressing
concerns people might have about job security are all critical. ... On the
other hand, being slow to pull the plug on projects that aren’t working out
can also be a recipe for disaster – potentially turning what should simply
be a short, sharp lesson into a long-term waste of time and resources.
There’s a reason that “fail fast” has become a mantra in tech circles.
Projects should be designed so that their effectiveness can be quickly
assessed, and if they aren’t working out, chalk it up to experience and move
on to the next one. ... Make no mistake, going full-throttle on AI is
expensive – hardware, software, specialist consulting expertise, compute
resources, reskilling and upskilling a workforce and scaling projects from
pilot to production – none of this comes cheap.
IoT Security: The Smart House Nightmares

One of the biggest challenges in securing IoT devices is the lack of
standardization across the industry. With so many different manufacturers
producing a wide variety of devices, there’s no universal security standard
that all devices must adhere to. This leads to inconsistent security practices
and varying levels of protection. Some devices have robust security features,
while others may be woefully inadequate. ... Many IoT devices come with
default usernames and passwords that are easy to guess. In some cases, these
credentials are hardcoded into the device, meaning they can’t be changed even
if the user wants to. Unfortunately, many users either don’t realize they
should change these defaults or don’t bother. This creates a significant
security risk, as these default credentials are often well-known to hackers. A
quick search online can reveal the default passwords for thousands of devices,
providing cybercriminals with an easy way to gain access to your smart home.
... Another common issue with IoT devices is the lack of regular software
updates. Many devices are shipped with outdated firmware that contains known
vulnerabilities. These vulnerabilities remain unpatched without regular
updates, leaving the devices open to exploitation.
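
To make the default-credential risk concrete, here is a small, hypothetical
Python sketch of a home-network audit that flags devices still configured with
factory credentials. The vendor names, credential table, and inventory are
invented for illustration; a real audit would draw on the published
default-password lists mentioned above.

    KNOWN_DEFAULTS = {                   # vendor -> factory (user, password)
        "acme-cam": ("admin", "admin"),  # illustrative entries only
        "foo-plug": ("admin", "1234"),
    }

    inventory = [                        # devices found on the home network
        {"ip": "192.168.1.20", "vendor": "acme-cam",
         "user": "admin", "password": "admin"},
        {"ip": "192.168.1.31", "vendor": "foo-plug",
         "user": "admin", "password": "s7!kQv#2"},
    ]

    def audit(devices):
        # Flag any device whose credentials match the factory default.
        for d in devices:
            default = KNOWN_DEFAULTS.get(d["vendor"])
            if default == (d["user"], d["password"]):
                print(f"RISK: {d['ip']} ({d['vendor']}) has factory credentials")
            else:
                print(f"ok:   {d['ip']} ({d['vendor']})")

    audit(inventory)

Even a check this simple would catch the most common smart-home failure mode:
a device that was plugged in and never reconfigured.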
Addressing cost and complexity in cybersecurity compliance and governance

Employees across the ranks need to be trained in cybersecurity practices and
made aware of their responsibilities towards security, compliance and
governance. There has to be an effective mechanism for ensuring compliance and
fixing accountability, and at the same time, a communication, feedback and
recognition process for encouraging employee involvement. ... Efficiency
aside, technologies such as artificial intelligence (AI), machine learning
(ML), cloud, and blockchain are making cybersecurity operations smarter. AI
and ML can identify anomalous patterns indicative of potential threats in
real time and recommend mitigative actions. Cloud provides the required
storage and computing infrastructure to house GRC data and applications, and
the scalability to expand cybersecurity operations across business entities
and geographies. Blockchain provides a secure, transparent and immutable
record of GRC data and transactions that can be easily audited. ... The need
for cybersecurity compliance and governance is universal, but enterprises need
to craft a strategy that’s right for them based on objectives, size,
resources, nature of business, compliance obligations in the jurisdictions
they operate from, technology landscape, etc.
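
As a concrete illustration of the AI/ML point above, here is a minimal Python
sketch of anomaly detection using scikit-learn's IsolationForest: fit a model
on a baseline of normal activity, then flag events that deviate from it. The
features (login rate, outbound megabytes) and the synthetic data are invented
for illustration; this is a sketch of the idea, not a production GRC pipeline.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # One row per event: [logins_per_hour, megabytes_sent] -- normal baseline.
    normal = rng.normal(loc=[5.0, 20.0], scale=[1.0, 5.0], size=(500, 2))
    # Two suspicious events: a login burst and a large outbound transfer.
    suspect = np.array([[40.0, 500.0], [35.0, 450.0]])

    # Train only on the baseline; predict() returns -1 for outliers.
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    events = np.vstack([normal[:3], suspect])
    for event, label in zip(events, model.predict(events)):
        status = "ANOMALY -> investigate" if label == -1 else "ok"
        print(event.round(1), status)

In practice, the "recommend mitigative actions" step would sit downstream of a
detector like this, mapping flagged events to response playbooks.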
Cyber Fusion: a next generation approach to NIS2 compliance

This is not a one-off box-ticking exercise. Organisations will need to
persistently test their cybersecurity and response capabilities, conduct
regular cyber risk assessments and ensure that clear lines of management and
reporting responsibility are defined and in place. Ultimately, organisations
need to ensure they can detect and respond faster and more effectively to
cybersecurity events. The faster a possible threat is detected, the better an
organisation can comply with the regulatory reporting requirements should this
evolve into a full-blown incident. Importantly, NIS2 highlights incident
reporting and information sharing across industries and along supply chains
as essential for preparing against security threats. As a key
requirement of the directive, the voluntary exchange of cybersecurity
information is now enshrined as good security practice. ... NIS2 is the EU’s
toughest cybersecurity directive to date and compliance depends on undergoing
a multi-step process that includes understanding the scope; connecting with
relevant authorities; undertaking a gap analysis; creating new and updated
policies; training the right employees; and monitoring progress – all of which
will enable businesses to track their supply chain for threats and
vulnerabilities and stay on top of their risk management strategies.
The DPDP Act, 2023 and the Draft DPDP Rules, 2025: What Do They Mean for India’s AI Start-Ups?

Some of the reasonable security measures under the Draft DPDP Rules include
encryption, obfuscation, masking, and the use of virtual tokens mapped to
specific personal data. Further, regular security audits, vulnerability
assessments, and penetration testing to identify and
address potential risks form a part of the organizational measures that may be
undertaken. It is crucial that AI start-ups take sufficient measures to
secure their AI models. ... The Act requires
organizations to retain personal data only for as long as necessary to fulfil
the purposes for which it was collected. They must establish and implement
clear policies for data retention that align with these guidelines. The Draft
DPDP Rules provide for specific data retention periods based on the purpose
for which the data is being collected and processed. Once the data is no
longer needed, organizations should ensure its secure deletion or anonymization to
prevent unauthorized access or misuse. Data Principals must be informed 48
hours before their data is to be erased. This process can include automated
systems for tracking data lifecycles, conducting regular audits to identify
redundant data, and securely erasing it in compliance with industry best
practices.
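
A minimal Python sketch of what such automated lifecycle tracking might look
like, assuming invented purposes, retention periods, and record fields: each
record carries a purpose-based retention period, and anything past it is
queued for erasure only after the 48-hour notice to the Data Principal.

    from datetime import datetime, timedelta, timezone

    RETENTION = {                         # purpose -> how long data may be kept
        "marketing": timedelta(days=365),
        "support": timedelta(days=90),
    }
    ERASURE_NOTICE = timedelta(hours=48)  # inform Data Principals in advance

    records = [                           # illustrative personal-data records
        {"id": 1, "purpose": "marketing",
         "collected": datetime(2024, 1, 10, tzinfo=timezone.utc)},
        {"id": 2, "purpose": "support",
         "collected": datetime(2025, 1, 5, tzinfo=timezone.utc)},
    ]

    def retention_sweep(records, now):
        # Flag records past their purpose-based retention period; erasure is
        # scheduled no earlier than 48 hours after notifying the Data Principal.
        for r in records:
            expiry = r["collected"] + RETENTION[r["purpose"]]
            if now >= expiry:
                erase_at = now + ERASURE_NOTICE
                print(f"record {r['id']}: notify now, erase after "
                      f"{erase_at:%Y-%m-%d %H:%M} UTC")
            else:
                print(f"record {r['id']}: retain until {expiry:%Y-%m-%d}")

    retention_sweep(records, now=datetime.now(timezone.utc))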
"Blatantly unlawful and horrifically intrusive" data collection is everywhere – how to fight back

Fielding called for "some actual regulation from the actual regulator," and
said "as long as it's more profitable and easier to break the law than not,
then businesses will." "We cannot expect commercial incentives to save the day
for us because they are in direct opposition to the purpose of these laws,
which is human rights, human dignity," she added. The Information
Commissioner's Office (ICO) has stressed that non-essential cookies shouldn't
be deployed on users' devices if they haven't actively given consent. It has
also said organisations must make it as easy for users to "reject all" as it
is to "accept all." ... "Shame" was something championed by Fielding. She
commented on how using "community" and our networks "to make it socially
unacceptable to treat people like this is probably the most powerful thing we
have." The defence against the dangers of authoritarianism in tech, or rather
facilitated by tech, is local networks, local community, community activism,
and community spirit," she said. "Don't expect to change the world, but keep
your corner of it safe for you and yours." Raising awareness of the dangers
of data tracking and harvesting is vital to educating more people about data
privacy and building a wider campaign to protect it.
The UK’s secret iCloud backdoor request: A dangerous step toward Orwellian mass surveillance

The idea of a government backdoor might sound reasonable in theory – after
all, should law enforcement not have a way to stop criminals? But in reality,
backdoors weaken security for everyone and pose serious risks: ... Once a
vulnerability is created, it will be exploited – by criminals, hostile nations
and even corrupt insiders. The UK government might claim it will only use the
backdoor responsibly, but history shows that security loopholes do not stay
secret for long. History also shows that legal provisions meant to lower
privacy only in extreme cases have been abused, and the threshold for invoking
them has dropped. For example, some local UK councils have been found using
CCTV under the Regulation of Investigatory Powers Act (RIPA) to monitor minor offences
such as littering, dog fouling, and school catchment fraud. ... Allowing the
UK government access to iCloud data could set a dangerous precedent. If Apple
complies, other countries – China, Russia, Saudi Arabia – will demand the
same. The moment a backdoor is created, Apple loses control over who can
access it. I have seen what happens when governments have unchecked power. In
former Czechoslovakia, the state monitored citizens, controlled the media and
crushed dissent.