Quote for the day:
"Motivation gets you going and habit
gets you there." -- Zig Ziglar

The talent retention implications prove equally compelling, particularly as
organizations compete for digitally native workforce demographics who view AI
collaboration as a natural extension of professional relationships. ... Perhaps
most significantly, healthy human-AI collaboration frameworks unleash innovation
potential that traditional technology deployment approaches consistently fail to
achieve. When teams feel psychologically safe in their AI partnerships—confident
that transitions will be managed thoughtfully and that their emotional
investment in digital collaborators is acknowledged and supported—they
demonstrate a remarkable willingness to explore advanced AI capabilities,
experiment with novel applications, and push the boundaries of what artificial
intelligence can accomplish within organizational contexts. ... The ultimate
result is organizational resilience that extends far beyond technical
robustness. Comprehensive governance approaches that address technical
performance and psychological factors create AI ecosystems that adapt gracefully
to technological change, maintain continuity through system transitions, and
sustain collaborative effectiveness across the inevitable evolution of
artificial intelligence capabilities.

“The CISOs of the present and the future need to get out of being just
technologists and build their influence muscle as well as their communication
muscle,” Kapil says. They need to be able to “relay the technology and cyber
messaging in words and meanings where a non-technologist actually understands
why we’re doing what we’re doing.” ... “CISOs who are enablers can have the
greatest impact on the business because they understand the business
objectives,” LeMaire explains. “I like to say we don’t do cybersecurity for
cybersecurity’s sake. … Ultimately, we do cybersecurity to contribute to the
goals, missions, and objectives of the greater organization. When you’re an
enabler that’s what you’re doing.” ... The BISO role emerged to bridge the gap
between business objectives and cybersecurity oversight that has existed in many
companies, Petrik says. “By acting as a liaison between business, technology,
and cybersecurity teams, the BISO ensures that security measures are aligned
with business strategies and integrated effectively,” he says. Digital
transformation, emerging technologies, and rapid innovation are business
mandates, and security teams add value and manage risk better when they are
involved before a platform is selected or implemented, he says.

Features such as Bluetooth, Wi-Fi, and cellular networks improve user
convenience but create multiple attack vectors. For example, infotainment
systems, because of their connectivity, are prime targets on software-defined
vehicles. The recent Nissan LEAF hack revealed exactly this vulnerability, with
researchers using the vehicle’s infotainment system as an entry point to access
critical vehicle controls, including steering. Not only can attackers gain
access to data and location information, but they can also use vulnerable
infotainment systems as an on-ramp to other critical vehicle systems, such as
Advanced Driver Assistance Systems (ADAS), the CAN bus, or key engine control
units. ...
Real-Time Operating Systems play a key role in the functionality of
software-defined vehicles, as they enable precise, time-critical operations for
systems like Electronic Control Units (ECUs). ECUs are primarily programmed in C
and C++ due to the need for efficiency and performance in resource-constrained
environments. ... Memory-based vulnerabilities, inherent to C/C++ programming,
can be exploited to enable remote code execution, potentially compromising
critical safety and performance systems. This creates serious cybersecurity and
reliability concerns for vehicles. As RTOS suppliers manage numerous processes,
any vulnerability in their codebase can be a gateway for attackers, increasing
the likelihood of malicious exploits across the interconnected vehicle
ecosystem.
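
To make this bug class concrete, here is a minimal, hypothetical sketch of the kind of diagnostic-message handler often written in C for resource-constrained ECUs. It is not taken from any real automotive or RTOS codebase; the structure, field names, and sizes are illustrative assumptions. The first handler trusts a sender-supplied length and overflows a fixed stack buffer, the classic memory error that attackers escalate into remote code execution; the second adds the bounds check that closes the hole.

/* Hypothetical, simplified ECU message handler (illustrative only;
 * not from any real automotive codebase). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAYLOAD_MAX 8              /* e.g. a classic 8-byte CAN data field */

struct diag_frame {
    uint16_t declared_len;         /* length claimed by the sender */
    uint8_t  data[64];             /* raw bytes as received off the network */
};

/* VULNERABLE: trusts the sender-supplied length. If declared_len exceeds
 * PAYLOAD_MAX, memcpy writes past 'local' and corrupts the stack. */
void handle_frame_unsafe(const struct diag_frame *f)
{
    uint8_t local[PAYLOAD_MAX];
    memcpy(local, f->data, f->declared_len);    /* no bounds check */
    printf("processed %u bytes\n", (unsigned)f->declared_len);
}

/* SAFER: validate the untrusted length before copying and reject
 * malformed frames instead of processing them. */
int handle_frame_checked(const struct diag_frame *f)
{
    uint8_t local[PAYLOAD_MAX];
    if (f->declared_len > PAYLOAD_MAX)
        return -1;                              /* drop malformed frame */
    memcpy(local, f->data, f->declared_len);
    printf("processed %u bytes\n", (unsigned)f->declared_len);
    return 0;
}

int main(void)
{
    struct diag_frame ok  = { .declared_len = 4,  .data = {1, 2, 3, 4} };
    struct diag_frame bad = { .declared_len = 64 };   /* oversized claim */

    handle_frame_checked(&ok);                  /* accepted */
    if (handle_frame_checked(&bad) != 0)        /* rejected, no overflow */
        printf("malformed frame dropped\n");
    /* handle_frame_unsafe(&bad) would overflow 'local' here. */
    return 0;
}

Bounds checks of this kind, along with static analysis, MISRA-style coding rules, and memory-safe languages where practical, are the usual mitigations for this bug class in embedded automotive code.
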
Understanding performance has a psychological side. Recognising how this shapes
performance frameworks, Rashmi suggested that imposter syndrome can be
mitigated by making progress visible. “When you see your results in real time,
you can’t keep criticising yourself.” The panellists encouraged managers to have
personal discussions with their team members, which would help them build bonds.
Rashmi highlighted this aspect, which can be leveraged through AI. “If AI says
that there has been no potential feedback for the employee in the last month,
then let the technology help the manager remind.” She also added, “Scaling up
makes the quarterly reviews an exercise; hence, spontaneous quarterly check-ins
are important.” Rashmi also advocated for weekly, human-centred check-ins,
features that are integrated into HRStop, where the aim is not just to track
project status but to understand employees as people. “Treat it like a family
discussion,” Rashmi recommended. “A touch of personal conversation builds deeper
rapport.” Another aspect that came up in the discussion was coaching. Vimal
emphasised that coaching must happen at all levels—from CXOs to interns. “It’s
this cultural consistency that builds trust, retention, and performance,” he
added.

First, as the judge noted, “fortunately the technology available prevented
physical contact going further”. Availability is important here, not just in
terms of the equipment being accessible; it has a specific legal element too.
Where the technological means to prevent inhumane or degrading treatment are
reasonably available to the police, the law in England and Wales may not just
permit the use of remote biometric technology, it may even require it. I’m
unaware of anyone relying on this human rights argument yet and we won’t know if
these conditions would have met that threshold. ... Second, the person was on
the watchlist because he was subject to a court order. This was not the public
under ‘general surveillance’: a court had been satisfied on the evidence
presented that an order was necessary to protect the public from sexual harm
from him. He breached that order by insinuating himself into the life of a
6-year-old girl and was found alone with her. He was accurately matched with the
watchlist image. The third feature is that the technology did its job. It would
be easy to celebrate this as a case of ‘thank goodness nothing happened’ but
that would underestimate its significance and miss the legal areas where FRT
will be challenged.

Data quality issues are a real concern and an actual barrier to AI adoption, but
the problem is much larger than the traditional and typical discussion about
data quality in transactional or analytical environments, says John Thompson,
senior vice president and principal at AI consulting firm The Hackett Group.
“With gen AI, literally 100% of an organization’s data, documents, videos,
policies, procedures, and more are available for active use,” Thompson says.
This is a much larger issue than data quality in systems such as enterprise
resource planning (ERP) or customer relationship management (CRM), he says. ...
Organizations need the infrastructure in place to educate and train their
employees to understand the capabilities and limitations of AI, Ally’s
Muthukrishnan says. “Without the right training, adoption and utilization will
not achieve the outcome you’re hoping for,” he adds. “While I believe AI is one
of the largest tech transformations of our lifetime, integrating it into
day-to-day processes is a huge change management undertaking.” ... “The skills
gap is only going to grow,” Hackett Group’s Thompson says. “Now is the time to
start. You can start with your team. Have them work on test cases. Have them
work on personal projects. Have them work on passion projects. [Taking] time for
everyone to take a class is just elongating the process to close the skills gap.
...”

Much of the work behind the Google Cloud IDP comes from Anna Berenberg, an
engineering fellow with Google Cloud who has been with the company for 19 years.
“She is the originator of a lot of these concepts overall … many of these ideas
which I did not really understand the impact of until I saw it manifest itself,”
said Seroter. “She had this vision that I did not even buy into three years ago.
She saw a little further ahead from there, and she has built and published
things. It is impressive to have such interesting engineering thought
leadership, not just applied to how Google does platforms, but now turning that
into how we can change … infrastructure to make it simpler. She is a pioneer of
that.” In an interview with The New Stack, Berenberg said that her ideas on the
IDP came to her when she looked at how this could all work using Google’s vast
compute and services resources to reimagine how platform engineering could be
improved. “The way it works is you have a cloud platform, and then on top of it
is this thick layer of platform engineering stuff, right?” said Berenberg. “So,
platform engineering teams are building a layer on top of infrastructure cloud
to do an abstraction and workflows and whatever they need” to improve processes
for developers. “It shrinks down because everything shifts down to the platform
and now we are providing platform engineering.”

The campaign’s cunning blend of social engineering and technical subterfuge
has enabled threat actors to compromise systems across a vast array of
regions, targeting unsuspecting users as they consume streaming media,
download shared files, or even browse legitimate-appearing websites.
Gen Digital researchers first identified HelloTDS as an intricate Traffic
Direction System (TDS) — a malicious decision engine that leverages device and
network fingerprinting to select which visitors receive harmful payloads,
ranging from infostealers like LummaC2 to fraudulent browser updates and tech
support scams. Entry points for the menace include compromised or
attacker-operated file-sharing portals, streaming sites, pornographic
platforms, and even malvertising embedded in seemingly innocuous ad spots. The
system’s filtering and redirection logic allows it to avoid obvious traps
such as virtual machines, VPNs, or known analyst environments, significantly
complicating detection and takedown efforts. The scale of the campaign is
staggering. Gen’s telemetry reported over 4.3 million attempted infections
within just two months, with the highest impact in the United States, Brazil,
India, Western Europe, and, proportionally, several Balkan and African
countries.

ClickFix first came to light as an attack method last year when Proofpoint
researchers observed compromised websites serving overlay error messages to
visitors. The message claimed that a faulty browser update was causing
problems, and asked the victim to open "Windows PowerShell (Admin)" (which
opens a User Account Control (UAC) prompt) and then right-click to paste
code that supposedly "fixed" the problem — hence the attack name. Instead of a
fix, though, users were unwittingly installing malware — in that case, it was
the Vidar stealer. ... "The goals of ClickFix campaigns vary depending on the
attacker," says Nathaniel Jones, vice president of security and AI strategy at
Darktrace. "The aim might be to infect as many systems as possible to build
out a network of proxies to use later. Some attackers are trying to exfiltrate
credentials or domain controller files and then sell to other threat actors
for initial access. So there isn't one type of victim or one objective — the
tactic is flexible and being used in different ways." ... The approach, and
ClickFix in general, represents a significant innovation in the world of
phishing, according to Jones, because unlike an email asking someone to click
on a typosquatted link that can be easily checked, the entire attack takes
place inside the browser.

The institutions in place now were not designed for this moment. Most were
forged in the Industrial Age and refined during the Digital Revolution. Their
operating models reflect the logic of earlier cognitive regimes: stable
processes, centralized expertise and the tacit assumption that human
intelligence would remain preeminent. ... But the assumptions beneath these
structures are under strain. AI systems now perform tasks once reserved for
knowledge workers, including summarizing documents, analyzing data, writing
legal briefs, performing research, creating lesson plans and teaching, coding
applications and building and executing marketing campaigns. Beyond
automation, a deeper disruption is underway: The people running these
institutions are expected to defend their continued relevance in a world where
knowledge itself is no longer as highly valued or even a uniquely human asset.
... This does not mean institutional collapse is inevitable. But it does
suggest that the current paradigm of stable, slow-moving and authority-based
structures may not endure. At a minimum, institutions are under intense
pressure to change. If institutions are to remain relevant and play a vital
role in the age of AI, they must become more adaptive, transparent and attuned
to the values that cannot readily be encoded in algorithms: human dignity,
ethical deliberation and long-term stewardship.