Quote for the day:
"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart
Security work keeps expanding, even with AI in the mix
Teams with established policies report greater confidence that AI outputs pass
through review steps or guardrails before influencing decisions. Governance work
spans data handling, access management, auditability, and lifecycle oversight
for AI models and integrations. Security and compliance considerations also
affect how quickly teams operationalize automation. Concerns around data
protection, regulatory obligations, tool integration, and staff readiness
continue to influence adoption patterns. Budget limits and legacy systems remain
common constraints, reinforcing the need for governance structures that support
day-to-day execution. ... Teams managing large tool inventories report higher
strain, particularly when workflows require frequent context switching. Leaders
increasingly view automation and tooling improvements as key levers for
retaining staff. Practitioners consistently place work-life balance and
meaningful impact at the center of retention decisions. ... Many teams express
interest in workflow platforms that connect automation, AI, and human review
within a single operational layer. These approaches focus on moving work across
systems without constant manual handoffs. Respondents associate connected
workflows with higher productivity, faster response times, improved data
accuracy, and stronger compliance tracking. Interoperability also plays a
growing role. Security teams increasingly consider standardized frameworks and
APIs that allow AI systems to interact with tools under controlled
conditions.
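As a concrete illustration of "tools under controlled conditions", here is a minimal sketch of a policy-and-audit gate in front of AI-initiated tool calls; the tool names, policy fields, and review flow are hypothetical assumptions, not any particular platform's API.

```python
import json
import time
from typing import Any, Callable

# Hypothetical policy table: which tools an AI integration may invoke,
# and which calls must pause for human review (names are illustrative).
POLICY = {
    "ticket.update":   {"allowed": True,  "human_review": False},
    "firewall.change": {"allowed": True,  "human_review": True},
    "db.export":       {"allowed": False, "human_review": False},
}

def guarded_call(tool: str, fn: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    """Gate an AI-initiated tool call behind policy and an audit record."""
    rule = POLICY.get(tool, {"allowed": False, "human_review": False})
    # Stand-in for the append-only audit log that lifecycle oversight requires.
    print(json.dumps({"ts": time.time(), "tool": tool, "args": repr(args)}))
    if not rule["allowed"]:
        raise PermissionError(f"{tool}: blocked by policy")
    if rule["human_review"]:
        return {"status": "pending_review", "tool": tool}  # held for a reviewer
    return fn(*args, **kwargs)

# Usage (update_ticket is a hypothetical tool function):
# guarded_call("ticket.update", update_ticket, 42, status="resolved")
```

The point of the wrapper is that review steps and auditability live in one chokepoint rather than being re-implemented per integration.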
Human risk management: CISOs’ solution to the security awareness training paradox

Despite regulatory compliance requirements and significant investment, SAT seems
to deliver marginal benefits. Clearly, SAT is broken — even with peripheral
improvements like synthetic phishing tools. So, what’s needed? Over the next few
years, organizations should shift from static/sporadic security training to an
emerging discipline called human risk management (HRM). ... HRM is defined as a
cybersecurity strategy that identifies, measures, and reduces the risks caused
by human behavior. Simply stated, security awareness training is about what
employees know; HRM is about what they do. To be more specific, HRM integrates
into email security tools, web gateways, and identity and access management
(IAM) systems to identify human vulnerabilities. Furthermore, it measures risk
using behavioral data and pinpoints an organization’s riskiest users. HRM then
seeks to mitigate these risks by applying targeted interventions such as
micro-learning, simulations, or automated security controls. Finally, HRM
monitors behavioral changes so organizations can track progress. ... From an ROI
perspective, HRM offers a much more granular approach to cyber-risk mitigation
than standard SAT. CISOs and HR managers can report on improved cyber hygiene
and behavior, rather than how many employees have been trained or have passed
generic tests. Repeat offenders are not only identified but also provided with
personalized training tools and attention. Ultimately, HRM makes it possible to
show a direct correlation between training and a reduction in actual security
incidents. ...
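To make the measurement loop concrete, here is a minimal sketch of how behavioral signals could be rolled up into a per-user risk score and mapped to interventions; the signal names, weights, and thresholds are illustrative assumptions, not any vendor's actual model.

```python
from dataclasses import dataclass

# Hypothetical behavioral signals an HRM platform might pull from email
# security, web gateway, and IAM telemetry (names are illustrative).
@dataclass
class UserBehavior:
    phish_clicks_90d: int        # phishing links clicked (real or simulated)
    policy_violations_90d: int   # DLP or web-gateway policy hits
    mfa_fatigue_events_90d: int  # repeated MFA push prompts denied/approved
    stale_privileges: int        # unused entitlements flagged by IAM review

# Illustrative weights; a real deployment would calibrate these against
# incident history rather than hard-coding them.
WEIGHTS = {
    "phish_clicks_90d": 5.0,
    "policy_violations_90d": 3.0,
    "mfa_fatigue_events_90d": 2.0,
    "stale_privileges": 1.0,
}

def risk_score(user: UserBehavior) -> float:
    """Weighted sum of behavioral signals; higher means riskier."""
    return sum(getattr(user, name) * w for name, w in WEIGHTS.items())

def intervention(score: float) -> str:
    """Map score bands to targeted interventions (thresholds are examples)."""
    if score >= 20:
        return "automated control: step-up auth plus manager review"
    if score >= 10:
        return "targeted simulation plus micro-learning module"
    return "baseline: periodic micro-learning"

if __name__ == "__main__":
    user = UserBehavior(phish_clicks_90d=3, policy_violations_90d=1,
                        mfa_fatigue_events_90d=4, stale_privileges=2)
    s = risk_score(user)
    print(f"score={s:.1f} -> {intervention(s)}")
```

Re-scoring the same users over time is what lets CISOs report behavior change rather than training completion counts.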
The Human Exploit: Why Wizer Is the Secret Weapon in the War for Your Digital Soul

We are currently witnessing a systemic failure in how we prepare people for a
digital world. From the moment a child gets their first school-issued tablet
to the day a retiree checks their pension balance, every individual is a
target. This isn’t just a corporate problem; it’s a societal one. That is why
I’ve been following the rise of Wizer, a firm that has cracked the code on
making security training not just tolerable, but actually effective. ... It is
no coincidence that the financial industry has become Wizer’s most aggressive
adopter. In banking, trust is the only product you’re actually selling. If a
customer’s account is drained because an employee fell for a “vishing”
attack—where a hacker samples an IT person’s voice from a voicemail to
impersonate them—the damage to the brand is catastrophic. Financial
institutions are currently the biggest fans of the platform because they
operate under a microscope of regulation and extreme risk. They realized early
on that a 45-minute annual compliance video is a waste of time. Wizer’s
approach is different; it feels more like an app—specifically Duolingo—than a
corporate lecture. ... One of the most profound insights Gabriel Friedlander
brings to the table is the necessity of the “Security Awareness Manager”
(SAM). Historically, security training was a secondary task for a stressed-out
IT admin who would rather be configuring a server. That is a recipe for
failure. To build a true culture of security, you need a dedicated
facilitator.

Chinese APTs Hacking Asian Orgs With High-End Malware
A pile of evidence suggests that this campaign was carried out by a Chinese
APT, but exactly which is unclear. Chinese threat actors are notorious for
sharing tools, techniques, and infrastructure. Trend Micro found that this one
— which it currently tracks as Shadow-Void-044 — used a C2 domain previously
used by UNC3569. A Cobalt Strike sample on one of its servers was signed with
a stolen certificate also spotted in a Bronze University campaign. And they
linked one of its backdoors to a backdoor developed by a group called
"TheWizards," not to be confused with the equally maligned basketball team. A
second, separate threat actor has also been using PeckBirdy since at least
July 2024. With low confidence, Trend Micro's report linked the group it
labeled Shadow-Earth-045 to the one it tracks as Earth Baxia. This campaign
was more diverse in both its methods and its targeting, involving Asian
private organizations as well as government entities. Chinese APTs habitually perform
cyberespionage against government agencies in the APAC region and beyond.
Trend Micro tells Dark Reading, "These two campaigns remind us that the
boundary between cybercrime and cyberespionage is increasingly blurred. One
tool used in different [kinds of] attacks is [becoming] more and more
popular." AI agent evaluations: The hidden cost of deployment
Agent evals can be complicated because they test for several possible metrics,
including agent reasoning, execution, data leakage, response tone, privacy,
and even moral alignment, according to AI experts. ... Most IT leaders budget
for obvious costs — including compute time, API calls, and engineering hours —
but miss the cost of human judgment in defining what Ferguson calls the
“ground truth.” “When evaluating whether an agent properly handled a customer
query or drafted an appropriate response, you need domain experts to manually
grade outputs and achieve consensus on what ‘correct’ looks like,” he adds.
“This human calibration layer is expensive and often overlooked.” ... The
sticker shock of agent evals rarely comes from the compute costs of the agent
itself, but from the “non-deterministic multiplier” of testing, adds Chengyu
“Cay” Zhang, founding software engineer at voice AI vendor Redcar.ai. He
compares training agents to training new employees, noting that both have moods.
“You can’t just test a prompt once; you have to test it 50 times across
different scenarios to see if the agent holds up or if it hallucinates,” he
says. “Every time you tweak a prompt or swap a model, you aren’t just running
one test; you’re rerunning thousands of simulations.” ... If an organization
wants to save money, the better alternative is to narrow the agent’s scope,
instead of cutting back on testing, Zhang adds. “If you skip the expensive
steps — like human review or red-teaming — you’re relying entirely on
probability,” he says.
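The "thousands of simulations" arithmetic is easy to sketch. Below is a minimal, hypothetical harness, not any specific eval framework's API: run_agent and passes are placeholders for the agent under test and for a grader encoding the human-defined ground truth discussed above.

```python
import random
from typing import Callable

def eval_agent(
    run_agent: Callable[[str], str],     # agent under test (placeholder)
    passes: Callable[[str, str], bool],  # grader vs. human-defined ground truth
    scenarios: dict[str, str],           # scenario prompt -> expected answer
    repeats: int = 50,                   # runs per scenario: the multiplier
) -> dict[str, float]:
    """Run each scenario `repeats` times; report per-scenario pass rates."""
    return {
        prompt: sum(passes(run_agent(prompt), truth) for _ in range(repeats)) / repeats
        for prompt, truth in scenarios.items()
    }

def flaky_stub(prompt: str) -> str:
    """Stand-in agent that 'hallucinates' about 10% of the time."""
    return prompt.upper() if random.random() > 0.1 else "???"

if __name__ == "__main__":
    rates = eval_agent(flaky_stub, lambda out, truth: out == truth,
                       {"summarize ticket 42": "SUMMARIZE TICKET 42"})
    # 20 scenarios x 50 repeats x several prompt variants quickly reaches
    # thousands of graded runs per tweak, which is the cost Zhang describes.
    print(rates)
```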
Social Engineering Hackers Target Okta Single Sign On

What makes these attacks unusual is how criminals engage in real-time
conversations as part of their trickery, using the latest generation of highly
automated phishing toolkits, which enable them to redirect users to
real-looking log-in screens as part of a highly orchestrated attack. "This
isn't a standard automated spray-and-pray attack; it is a human-led,
high-interaction voice phishing - 'vishing' - operation designed to bypass
even hardened multifactor authentication setups," said threat intelligence
firm Silent Push. The "live phishing panel" tools being used enable "a human
attacker to sit in the middle of a login session, intercepting credentials and
MFA tokens in real time to gain immediate, persistent access to corporate
dashboards," it said. Callers appear to be using scripts designed to walk
victims through an attacker-designated list of desired actions. ... At least
so far, the campaign appears to center only on Okta-using organizations.
ShinyHunters and similar groups have previously targeted a variety of SSO
providers, meaning hackers' focus may well expand, Pilling said. The single
best defense against live phishing attacks that don't exploit any flaws or
vulnerabilities in vendors’ software is strong MFA. “We strongly recommend
moving toward phishing-resistant MFA, such as FIDO2 security keys or passkeys
where possible, as these protections are resistant to social engineering
attacks in ways that push-based or SMS authentication are not,” Mandiant’s
Carmakal said.
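Carmakal's recommendation has a mechanical basis: a passkey assertion is bound to the real origin the browser saw, which is exactly what an attacker-in-the-middle panel cannot forge. Here is a minimal sketch of the relying-party check; the expected origin and challenge values are hypothetical examples.

```python
import base64
import json

def verify_client_data(client_data_b64url: str,
                       expected_challenge: str,
                       expected_origin: str = "https://sso.example.com") -> bool:
    """Validate the clientDataJSON checks a WebAuthn relying party performs.
    (Verifying the signature over authenticatorData plus the hash of
    clientDataJSON is a separate, additional step.)"""
    pad = "=" * (-len(client_data_b64url) % 4)
    cd = json.loads(base64.urlsafe_b64decode(client_data_b64url + pad))
    return (cd.get("type") == "webauthn.get"               # authentication ceremony
            and cd.get("challenge") == expected_challenge  # the challenge we issued
            and cd.get("origin") == expected_origin)       # binds assertion to our site

# A live phishing panel relaying a session from a look-alike domain cannot pass
# this check: the browser records the page's real origin in clientDataJSON, and
# rpId scoping means the browser won't offer the credential there in the first place.
```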
AI agents can talk to each other — they just can’t think together yet

Current protocols handle the mechanics of agent communication — MCP, A2A, and
Outshift's AGNTCY, which it donated to the Linux Foundation, let agents discover
tools and exchange messages. But these operate at what Pandey calls the
"connectivity and identification layer." They handle syntax, not semantics. The
missing piece is shared context and intent. An agent completing a task knows
what it's doing and why, but that reasoning isn't transmitted when it hands off
to another agent. Each agent interprets goals independently, which means
coordination requires constant clarification and learned insights stay siloed.
For agents to move from communication to collaboration, they need to share three
things, according to Outshift: pattern recognition across datasets, causal
relationships between actions, and explicit goal states. "Without shared intent
and shared context, AI agents remain semantically isolated. They are capable
individually, but goals get interpreted differently; coordination burns cycles,
and nothing compounds. One agent learns something valuable, but the rest of the
multi-agent-human organization still starts from scratch," Outshift said in a
paper. Outshift said the industry needs "open, interoperable, enterprise-grade
agentic systems that semantically collaborate" and proposes a new architecture
it calls the "Internet of Cognition," where multi-agent environments work within
a shared system.
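Outshift's three ingredients map naturally onto a handoff payload. The sketch below is a hypothetical message shape, not the MCP, A2A, or AGNTCY wire format; it contrasts what typically travels today (the task) with what semantic collaboration would add (patterns, causal links, an explicit goal state).

```python
from dataclasses import dataclass, field

@dataclass
class TaskHandoff:
    """Roughly what agent-to-agent protocols carry today: syntax, not semantics."""
    task: str                      # e.g. "renew the expiring TLS certificates"
    tools_allowed: list[str]       # capabilities discovered for the receiving agent

@dataclass
class SemanticHandoff(TaskHandoff):
    """Hypothetical extension carrying the three shared ingredients above."""
    patterns: list[str] = field(default_factory=list)
    # e.g. "renewals often fail while DNS validation is still pending"
    causal_links: list[tuple[str, str]] = field(default_factory=list)
    # (action, observed effect), e.g. ("force-renew", "downstream cache invalidation")
    goal_state: str = ""
    # explicit end condition, e.g. "all certs valid > 30 days with zero downtime"

# With only TaskHandoff, the receiving agent re-derives the "why" from scratch;
# with SemanticHandoff, one agent's learned insights can compound across the system.
```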
Building Software Organisations Where People Can Thrive
Trust builds over time through small interactions. When people know what to expect and how to interact with each other in tough moments, trust is formed, Card argued. Once trust is embedded, teams are more likely to take risks by putting themselves out there to be wrong and fail fast, and that is where the magic happens. You need to actively address bias and microaggressions. If left unchallenged, they quietly erode trust and belonging. Being proactive, fair, and consistent in addressing these behaviours signals your values clearly to the wider organisation, Card said. At the heart of it all is the belief that people-first leadership is performance leadership, Card said. When we take the time to build inclusive, resilient cultures, success follows, not just for the business, but for everyone within it, he concluded. ... Psychological safety is the next level up from a trusting environment. Both are the foundations of any healthy, high-performing culture. Without them, people hold back; they’re less likely to share ideas, admit mistakes, or challenge the status quo. And that means your team won’t grow, innovate, or build strong relationships. If you want to build a culture that lasts, where people thrive, not just survive, then building trust and safety isn’t optional. It has to be intentional. And once it’s in place, it unlocks everything else: collaboration, resilience, accountability, and growth.

Stop Delivering Change, Start Designing a Business That Can Actually Grow
Legacy and emerging technologies sit side by side, often competing for attention
and investment. Manual and systemised processes overlap in ways that only make
sense to the people living inside them. Long-standing roles carry deep, tacit
knowledge, while new-in-career roles arrive with different expectations, skills,
and assumptions about how work should flow. Each layer is changing, but rarely
in a deliberate, joined-up way. When leaders do not have a shared, design-level
understanding of how these layers interact, decisions are made in isolation. ...
Programme milestones become a proxy for progress. Technology capability becomes
a proxy for readiness. Productivity targets replace understanding. Designing the
next-generation business model requires a different kind of insight—one that
shows how people, process, data, and technology interact end to end. One that
makes visible where human judgement still matters, where automation genuinely
adds value, and where the handoffs between the two are quietly breaking down.
... Growth and productivity are not things you add through execution. They are
the result of deliberate design choices. A business model fit for today makes
explicit decisions about what is standardised and what is differentiated, what
is automated and what is augmented, what relies on experience and what demands
new capability. Those decisions cannot be delegated to programmes alone. They
sit squarely with leadership.