Quote for the day:
"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain
Security researchers caution app developers about risks in using Google Antigravity
“In Antigravity,” Mindgard argues, “‘trust’ is effectively the entry point to
the product rather than a conferral of privileges.” The problem, it pointed
out, is that a compromised workspace becomes a long-term backdoor into every
new session. “Even after a complete uninstall and re-install of Antigravity,”
says Mindgard, “the backdoor remains in effect. Because Antigravity’s core
intended design requires trusted workspace access, the vulnerability
translates into cross-workspace risk, meaning one tainted workspace can impact
all subsequent usage of Antigravity regardless of trust settings.” For anyone
responsible for AI cybersecurity, says Mindgard, this highlights the need to
treat AI development environments as sensitive infrastructure, and to closely
control what content, files, and configurations are allowed into them. ...
Swanda recommends that app development teams building AI agents with
tool-calling assume all external content is adversarial; use strong input and
output guardrails, including around tool calls; strip any special syntax before
processing; implement tool execution safeguards; require explicit user approval
for high-risk operations, especially those triggered after handling untrusted
content or dangerous tool combinations; and not rely on prompts for security.
System prompts, for example, can be extracted and used by an attacker to refine
their attack strategy.
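These safeguards can be illustrated with a minimal sketch. The tool names, risk
tiers, and stripping patterns below are assumptions for illustration, not part
of the original recommendations:

```python
import re

# Hypothetical risk tiers; real deployments would define their own.
HIGH_RISK_TOOLS = {"run_shell", "write_file", "send_email"}

def strip_special_syntax(text: str) -> str:
    """Remove common injection markers before the model sees untrusted content."""
    text = re.sub(r"\{\{.*?\}\}", "", text, flags=re.DOTALL)  # template braces
    text = re.sub(r"<\|.*?\|>", "", text, flags=re.DOTALL)    # special model tokens
    return text

def approve_tool_call(tool: str, handled_untrusted: bool, ask_user) -> bool:
    """Gate execution: high-risk tools after untrusted content need explicit approval."""
    if tool in HIGH_RISK_TOOLS and handled_untrusted:
        return ask_user(f"Allow agent to call '{tool}'?")
    return True
```

The point of the gate is that approval is only demanded where risk compounds: a
high-risk tool invoked right after the agent ingested untrusted content.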
How AI Is Rewriting The Rules Of Work, Leadership, And Human Potential
When a CEO tells his team, "AI is coming for your jobs, even mine," you pay
attention. It is rare to hear that level of blunt honesty from any leader, let
alone the head of one of the world's largest freelance platforms. Yet this is
exactly how Fiverr co-founder and CEO Micha Kaufman has chosen to guide his
company through the most significant technological shift of our lifetimes. His
blunt assessment: AI is coming for everyone's jobs, and the only response is
to get faster, more curious, and fundamentally better at being human. ...
We're applying AI to existing workflows and platforms, seeing improvements,
but not yet experiencing the fundamental restructuring that's coming. "It is
mostly replacing the things we used to do as human beings, acting as robots,"
Kaufman observes. The repetitive tasks, the research gathering, the document
summarizing, these elements where humans brought judgment but little humanity
are being automated first. ... It's not enough to use the obvious AI tools in
obvious ways. The real value emerges from those who push boundaries, combine
systems creatively, or bring exceptional judgment to AI-assisted workflows.
Kaufman points to viral videos created with advanced AI tools, noting that
their quality stems not from the AI itself but from the operator's genius,
experience, creativity, and taste developed over years.
How ‘digital twins’ could help prevent cyber-attacks on the food industry
A digital twin is a virtual replica of any product, process, or service,
capturing its state, characteristics, and connections with other systems
throughout its life cycle. A digital twin can also include the computer systems
used by the company. It can help because conventional defences are increasingly
out of step with cyber-attacks. Monitoring tools tend to detect anomalies after
damage occurs. Complex computer systems can often obscure the origins of
breaches. A digital twin creates a bridge between the physical and digital
worlds. It allows organisations to simulate real-time events, predict what might
happen next, and safely test potential responses. It can also help analyse what
happened after a cyber-attack to help companies prepare for future incidents.
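The real-time monitoring idea can be sketched as comparing live readings against
the twin's expected operating envelope. The field names and thresholds here are
illustrative assumptions, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class TwinState:
    """Expected operating envelope from the digital twin (illustrative values)."""
    temp_range: tuple       # (min_c, max_c) for, e.g., cold-chain storage
    max_failed_logins: int  # IT-side signal for intrusion attempts

def check_anomalies(twin: TwinState, temp_c: float, failed_logins: int) -> list:
    """Flag deviations in both operational and cybersecurity signals."""
    alerts = []
    lo, hi = twin.temp_range
    if not lo <= temp_c <= hi:
        alerts.append("temperature outside expected envelope")
    if failed_logins > twin.max_failed_logins:
        alerts.append("possible intrusion attempt")
    return alerts
```

Even this toy version shows the unifying idea: one check spans both the physical
process and the computing systems around it.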
... A digital twin might be able to avert disaster under this scenario. By
combining operational data such as temperature, humidity, or the speed of air
flow with internal computing system data, such as intrusion attempts, digital twins
offer a unified view of both system performance and cybersecurity. They enable
organisations to simulate cyber-attacks or equipment failures in a safe,
controlled digital environment, revealing vulnerabilities before attackers can
exploit them. A digital twin can also detect abnormal temperature patterns,
monitor the system for malicious activity, and perform analysis after a
cyber-attack to identify the causes.

Why password management defines PCI DSS success
When you dig into real incidents involving payment data, a surprising number
come down to poor password hygiene. PCI DSS v4.0 raised the bar for
authentication, and the responsibility sits with security leaders to turn those
requirements into workable daily habits for users and admins. ... Requirement 8
asks organizations to verify the identity of every user with strong
authentication, make sure passwords and passphrases meet defined strength rules,
prevent credential reuse, limit attempts, and store credentials securely.
Passwords need to be at least 12 characters long, or at least 8 characters when
a system cannot support longer strings. These rules line up with guidance from
NIST SP 800-63B, which recommends longer passphrases, screening against common
word lists, and hashing methods that protect stored secrets. ... PCI DSS requires
that access be traceable to an individual and that shared accounts be minimized
and controlled. When passwords live across multiple channels, it becomes nearly
impossible to show auditors reliable evidence of access history. Even if the
team is trying hard, the workflow itself creates gaps that no policy document
can fix. ... Some CISOs view password managers as convenience tools. PCI DSS
v4.0 shows that they are closer to compliance tools because they make it
possible to enforce identity controls across an organization.
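A minimal sketch of the Requirement 8 length rule and a reuse check, under
stated assumptions: the parameter names are illustrative, and PCI DSS v4.0 also
mandates complexity, lockout, and secure-storage controls not shown here.

```python
import hashlib

def meets_length_rule(password: str, system_max_len: int = 64) -> bool:
    """PCI DSS v4.0: at least 12 characters, or 8 if the system cannot support 12."""
    minimum = 12 if system_max_len >= 12 else 8
    return len(password) >= minimum

def is_reused(password: str, previous_hashes: list) -> bool:
    """Reject a password seen in recent history. Illustrative only: production
    systems should use a salted, slow hash such as bcrypt, not bare SHA-256."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return digest in previous_hashes
```

A password manager enforces rules like these centrally, which is what turns the
requirement into auditable evidence rather than a policy document.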
AI fluency in the enterprise: Still a ‘horseless carriage’
Companies are tossing AI agents onto existing processes, but a transformative
change — where AI is the boss — is still far away. That was the view of IT
leaders at this year’s Microsoft Ignite conference who’ve been putting AI agents
to work, mostly with legacy processes. The IT leaders discussed their efforts
during a conference panel at the event earlier this month. “We’re probably
living in some version of the horseless carriage — we haven’t got to the car
yet,” said John Whittaker, director of AI platform and products at accounting
and consulting firm EY. ... Pfizer is very process-centric, he said, stressing
that the goal is not to reinvent processes right out of the gate. The company is
analyzing how AI works for them, gaining confidence in the technology before
reorganizing processes through an AI lens. “Where we’re definitely heading … is
thinking about, ‘I’ve solved this process, I’ve been following exactly the way
it exists today. Now let’s blow it up and reimagine it…’ — and that’s exciting,”
he said. ... Lumen is now looking at where it wants the business to be in 36
months and linking it to AI agents and AI-native plans. “We’re … working back
from that and ensuring that we have the right set of tools, the right set of
training, and the right set of agents in order to enable that,” he said. Every
new Lumen employee in Alexander’s connected ecosystem group gets a Copilot
license. The technology has helped speed up the process of understanding
acronyms and historical trends within the company.

Creating Impactful Software Teams That Continuously Improve
When you are a person who prefers your job to be strictly defined, with clear boundaries, then you feel supported instead of stifled by a boss who checks in on you regularly. In the same culture, you will feel relaxed, happy, and content, which will in turn allow you to bring your best to your job and play to your strengths, Žabkar Nordberg said. You do not want employees who are mere extensions of yourself, Žabkar Nordberg said. Instead, you want people who will bring their own thoughts and their own solutions, and in many ways be different from and better than yourself. ... Provide guidance, step away, and let people have autonomy within those constraints. You might say something like "I would like you to focus on improving our customer retention. Be aware that legal regulations require all steps in our current onboarding journey to be present, but we have flexibility in how we execute them, as the user experience is not prescribed". This gives people guidance and focus, but still leaves them the autonomy to bring their own experience and find their own solutions. ... We want people to show initiative and proactively bring their own thoughts, improvements, and worries. Clear communication and an understanding of how people work will help them do that, Žabkar Nordberg said. Psychological safety underpins trust, autonomy, and communication; it is required for them to work effectively, he concluded.

Empathetic policy engineering: The secret to better security behavior and awareness
Insecure behavior is often blamed on users, when the problem often lies in the
measure itself. In IT security research, the focus is often on individual user
behavior — for example, on whether secure behavior depends on personality
traits. The question of how well security measures actually fit the reality of
work — that is, how likely they are to be accepted in everyday practice — is
neglected. For every threat, there are usually several available security
measures. But differences in effort, acceptance, compatibility, or complexity
are often not taken into account in practice. Instead, security or IT
departments often make decisions based solely on technical aspects. ... Security
measures and guidelines are often communicated in a way that doesn’t resonate
with users’ work reality because they don’t aim to engage employees and motivate
them: for example, through instructions, standard online training, or overly
playful formats like comics that employees don’t take seriously. ... The limited
success of many security measures is not solely due to the users — often it’s
unrealistic requirements, a lack of involvement, and inadequate communication.
For security leaders, this means: Instead of relying on education and sanctions,
a strategic paradigm shift is needed. They should become a kind of empathetic
policy architect whose security strategy not only works technically but also
resonates on a human level.
Agentic AI is not ‘more AI’—it’s a new way of running the enterprise
Agentic AI marks a shift from simply predicting outcomes or offering recommendations to systems that can plan tasks, take actions and learn from the results within defined guardrails. In practical terms, this means moving beyond isolated, single-task copilots towards coordinated “swarms” of agents that continually monitor signals, trigger workflows across systems, negotiate constraints and complete loops with measurable outcomes. ... A major barrier is trust and control. Leaders remain cautious about allowing software to take autonomous actions. Graduated autonomy provides a path forward: beginning with assistive tools, moving to supervised autonomy with reversible actions and eventually deploying narrow, fully autonomous loops when KPIs and rollback mechanisms have been validated. Lack of clarity on value is another obstacle. Impressive demonstrations do not constitute a strategy. Organisations should use a jobs-to-be-done perspective and tie each agent to a specific financial or risk objective, such as days-sales-outstanding, mean time to resolution, inventory turns or claims leakage. Analysts have warned that many agentic initiatives will be cancelled if value remains vague, so clear scorecards and time-boxed proofs of value are essential. Data readiness is a further challenge. Weak lineage, uncertain ownership and inconsistent quality stop AI scaling efforts in their tracks.

6 strategies for CIOs to effectively manage shadow AI
“Be clear which tools and platforms are approved and which ones aren’t,” he
says. “Also be clear which scenarios and use cases are approved versus not, and
how employees are allowed to work with company data and information when using
AI like, for example, one-time upload as opposed to cut-and-paste or deeper
integration.” ... “The most important thing is creating a culture where
employees feel comfortable sharing what they use rather than hiding it,” says
Fisher. His team combines quarterly surveys with a self-service registry where
employees log the AI tools they use. IT then validates those entries through
network scans and API monitoring. ... “Effective inventory management requires
moving beyond periodic audits to continuous, automated visibility across the
entire data ecosystem,” he says, adding that good governance policies ensure all
AI agents, whether approved or built into other tools, send their data in and
out through one central platform. ... “Risk tolerance should be grounded in
business value and regulatory obligation,” says Morris. Like Fisher, Morris
recommends classifying AI use into clear categories (what’s permitted, what
needs approval, and what’s prohibited) and communicating that framework
through leadership briefings, onboarding, and internal portals. ... Transparency
is the key to managing shadow AI well. Employees need to know what’s being
monitored and why.
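The registry-plus-classification pattern described above could be modeled as a
simple lookup. The tool names and tier labels here are illustrative assumptions,
not from the article:

```python
# Minimal AI-tool registry with a three-tier classification
# (tool names and tiers are hypothetical examples).
REGISTRY = {
    "approved": {"corp-copilot"},
    "needs_approval": {"code-assistant-x"},
    "prohibited": {"consumer-chatbot"},
}

def classify_tool(name: str) -> str:
    """Return the governance tier for a tool; unknown tools default to review."""
    for tier, tools in REGISTRY.items():
        if name in tools:
            return tier
    return "needs_approval"  # unknown shadow tools go to review, not silent use

def register_tool(name: str, tier: str = "needs_approval") -> None:
    """Self-service registration: employees log tools, IT validates the tier later."""
    REGISTRY.setdefault(tier, set()).add(name)
```

Defaulting unknown tools to "needs approval" rather than "prohibited" mirrors
the culture point: employees are nudged to disclose rather than hide usage.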
It’s Time to Rethink Access Control for Modern Development Environments
When faced with the time-consuming complexity of managing granular permissions
across dozens of development tools, most VPs of Engineering and CTOs opt for the
path of least resistance, granting broad administrative privileges to entire
engineering teams. It’s understandable from a productivity standpoint; nobody
wants to be a bottleneck when a critical release is imminent, or explain to the
CEO why they missed a market window because a developer couldn’t access a
repository. However, when everyone has admin privileges, attackers who gain
access to just one set of credentials can do tremendous damage. They gain not
just access to sensitive code and data, but the ability to manipulate build
processes, insert malicious code, or establish persistent backdoors. This
problem becomes even more dangerous when combined with the prevalence of shadow
IT, non-human identities, and contractor relationships operating outside your
security perimeter. ... The answer to stronger security that doesn’t hinder
developer productivity lies in implementing just-in-time permissioning within
the SDLC, a concept successfully adopted from cloud infrastructure management
that can transform how we handle development access controls. The approach is
straightforward: instead of granting permanent administrative access to
everyone, take 90 days to observe what developers actually need to do their
jobs, then right-size their permissions accordingly.
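The just-in-time idea can be sketched as time-boxed grants that expire on their
own, instead of standing admin access. The class name and default TTL below are
illustrative assumptions:

```python
import time

class JITGrants:
    """Sketch of just-in-time permissioning: scoped grants with expiry."""

    def __init__(self):
        self._grants = {}  # (user, resource) -> expiry timestamp

    def grant(self, user: str, resource: str, ttl_seconds: int = 3600) -> None:
        """Issue a time-boxed permission after approval."""
        self._grants[(user, resource)] = time.time() + ttl_seconds

    def allowed(self, user: str, resource: str) -> bool:
        """Access holds only while the grant is unexpired."""
        expiry = self._grants.get((user, resource))
        return expiry is not None and time.time() < expiry
```

Because every grant decays by default, a stolen credential is only as useful as
the grants still live at the moment of compromise, which is the core security
win over permanent admin rights.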