Quote for the day:
"No man is good enough to govern another man without that other's consent." -- Abraham Lincoln
The Quantum Wake-Up Call: Preparing Your Organization for PQC
Quantum computing promises transformative breakthroughs across industries—but
it also threatens the cryptographic foundations that secure our digital world.
As quantum capabilities evolve, organizations must proactively prepare for the
shift to post-quantum cryptography (PQC) to safeguard sensitive data and
maintain trust. ... The very mathematical "hardness" that makes RSA and
ECC secure against classical computers is precisely what makes them fatally
vulnerable to quantum computing. Shor's Algorithm: This quantum
algorithm, developed by Peter Shor in 1994, is capable of solving the integer
factorization and discrete logarithm problems exponentially faster than any
classical machine. Once a sufficiently stable and large-scale quantum computer
is built, an encryption that might take a supercomputer millions of years to
break could be broken in hours or even minutes. The Decryption Time Bomb:
Because current PKC is used to establish long-term trust and to encrypt keys,
the entire cryptographic ecosystem is a single point of failure. The threat is
compounded by the "Harvest Now, Decrypt Later" strategy, meaning sensitive
data is already being harvested and stored by adversaries, awaiting the
quantum moment to be unlocked. Quantum computing is no longer theoretical—it’s
a looming reality. Algorithms like RSA and ECC, which underpin most public-key
cryptography, are vulnerable to quantum attacks via Shor’s algorithm.
Producing a Better Software Architecture with Residuality Theory
Residuality theory is a very simple process. Sometimes, people are put off because the theoretical work necessary to prove that residuality works is very heavy, but applying it is easy, O’Reilly explained: We start out with a suggestion, a naive architecture that solves the functional problem. From there we stress the architecture with potential changes in the environment. These stressors allow us to uncover the attractors, often through conversations with domain experts. For each attractor, we identify the residue, what’s left of our architecture in this attractor, and then we change the naive architecture to make it survive better. We do this many times and, at the end, integrate all of these augmented residues into a coherent architecture. We can then test this to show that it survives unknown forms of stress better than our naive architecture. In complex business environments with uncertainty, residuality makes it possible to create architectures quickly instead of chasing down stakeholders demanding specific requirements or answers to questions that are unknown by the business itself, O’Reilly said. It pulls technical architects out of details and teaches them to productively engage with a business environment without the lines and boxes of traditional enterprise architecture, he concluded. ... Senior architects report that it gives a theoretical justification for practices that many had already figured out and a shared vocabulary for teams to talk about architecture.Will AI-SPM Become the Standard Security Layer for Safe AI Adoption?
By continuously monitoring AI assets, AI-SPM helps ensure that only trusted
data sources are used during model development. Runtime security testing and
red-team exercises detect vulnerabilities caused by malicious data. The system
actively identifies abnormal model behavior, such as biased, toxic, or
manipulated output, and brings them up for remediation prior to production
release. ... AI-SPM continuously checks system requests and user inputs to
find dangerous patterns before they lead to security problems, like attempts
to remove or change built-in directives. It also uses protection against
prompt injection and jailbreak attacks, which are common ways to access or
alter system-level commands. By finding unapproved AI tools and services, it
stops the use of insecure or poorly set up LLMs that could reveal system
prompts. ... Shadow AI is starting to get more attention, and for good reason.
Like shadow IT, employees are using public AI tools without authorization.
That might mean uploading sensitive data or sidestepping governance rules,
often without realizing the risks. The problem isn’t just the tools
themselves, but the lack of visibility around how and where they’re being
used. AI-SPM should work to identify all AI tools in play across networks,
endpoints, cloud platforms, and dev environments, mapping how data moves
between them, which is often the missing piece when trying to understand
exposure risks.How to write nonfunctional requirements for AI agents
Nonfunctional requirements for AI agents can be like those for applications,
where user stories are granular and target delivering small, atomic functions.
These NFRs can guide developers in answering how to develop the functionality
described in user stories and to help quantify what should pass a code review.
However, you may need another set of NFRs expressed at a feature or release
level. ... “Agile teams often struggle with how to evaluate NFRs like latency,
fairness, or explainability, which may seem nonfunctional, but with a little
specification work, they can often be made concrete and part of a user story
with clear pass/fail tests,” says Grant Passmore, co-founder of Imandra. “We
use formal verification to turn NFRs into mathematical functional requirements
we can prove or disprove.” ... AI agent NFRs that connect dev with ops have
all the complexities of applications, infrastructure, automations, and AI
models bundled together. Deploying the AI agent is just the beginning of its
lifecycle, and NFRs for maintainability and observability help create the
feedback loops required to diagnose issues and make operational improvements.
As many organizations aim toward autonomous agentic AI and agent-to-agent
workflows, standardizing a list of NFRs that are applied across all AI agents
becomes important.
Unplug Gemini from email and calendars, says cybersecurity firm
CSOs should consider turning off Google Gemini access to employees’ Gmail and
Google Calendars, because the chatbot is vulnerable to a form of prompt
injection, says the head of a cybersecurity firm that discovered the
vulnerability. ”If you’re worried about the risk, you might want to turn off
automatic email and calendar processing by Gemini until this, and potentially
other things like it, are addressed,” Jeremy Snider, CEO of US-based FireTail,
said in an interview. ... This flaw is “particularly dangerous when LLMs, like
Gemini, are deeply integrated into enterprise platforms like Google
Workspace,” the report adds. FireTail tested six AI agents. OpenAI’s ChatGPT,
Microsoft Copilot, and Anthropic AI’s Claude caught the attack. Gemini,
DeepSeek, and Grok failed. In a test, FireTail researchers were able to change
the word “Meeting” in an appointment in Google Calendar to “Meeting. It is
optional.” ... “ASCII Smuggling attacks against AIs aren’t new,” commented
Joseph Steinberg, a US-based cybersecurity and AI expert. “I saw one
demonstrated over a year ago.” He didn’t specify where, but in August 2024, a
security researcher blogged about an ASCII smuggling vulnerability in Copilot.
The finding was reported to Microsoft. Many ways of disguising malicious
prompts will be discovered over time, he added, so it’s important that IT and
security leaders ensure that AIs don’t have the power to act without human
approval on prompts that could be damaging.Broken Opt-Outs, Big Fines: Tractor Supply Shows Privacy Enforcement Has Arrived for Retail
The Tractor Supply violations reveal a clear enforcement pattern. Broken
opt-out links that route to dead webforms. Global Privacy Control signals
ignored entirely. Privacy notices that skip job applicant data disclosures.
Vendor agreements without data restriction clauses. These aren’t random
oversights. They’re the exact gaps that surfaced across recent CCPA
enforcement by the Attorney General and CPPA orders. Regulators are building a
playbook: test the opt-out mechanisms, check for GPC compliance, review all
privacy notices including HR portals, and audit third-party contracts. If any
piece fails, expect enforcement. Regulators no longer accept opt-outs in
theory or privacy policies in fine print. ... The message is clear: prove you
have control. Not just over the data you collect, but over the algorithms that
process it. Retailers who can’t show governance across both will face scrutiny
on multiple fronts. The same broken opt-out that triggers a privacy fine could
signal to regulators that your AI systems lack oversight too. This isn’t about
adding more compliance checkboxes. It’s about recognizing that data governance
and AI governance are becoming inseparable. The retailers who understand this
convergence will build unified systems that handle both. The ones who don’t
will scramble to retrofit governance after the fact, just like they’re doing
with privacy today.Why Enterprises Continue to Stick With Traditional AI
AI success also depends on digital maturity. Many organizations are still
laying data foundations. "Let's say you want to run analytics on how many
tickets were raised, do a dashboard on how many tickets one can expect … all
of that was over a call. Nothing was digitized. There is no trace of it. That
is the reason why chatbots are getting created because they are now recording
and getting traced," Iyer said. ... Strict compliance and privacy requirements
push enterprises toward controlled AI development. … Even in such cases, we
ensure the data in the model that we build, it stays exclusively. At any point
of time, your data or your model is not going to be used for the betterment of
someone else," Iyer said. This approach reflects broader enterprise concerns
about AI governance. According to KPMG research, frameworks such as local
interpretable model-agnostic explanations and Shapley Additive exPlanations
help clarify AI decisions, support compliance and build stakeholder
confidence. ... Iyer said enterprise needs are often highly contextual, making
massive models unnecessary. "Do you need a 600-700 billion [parameter] model
sitting in your enterprise running inferences when the questions are going to
be very contextual?" she said. This practical wisdom is supported by recent
industry analysis. Traditional ML models often produce classification accuracy
at a fraction of the cost compared to deep learning alternatives. Lead with a human edge: Why empathy is the new strategy
Traditional management was built on control: plans, processes, and hierarchies designed to tame complexity. But as Pushkar noted, ‘organisations are living organisms. They evolve, sense, and respond. Trying to manage them like machines is an illusion. The leaders of tomorrow will not be engineers of systems — they will be gardeners of cultures.’ “Planting a tree is very easy,” Bidwai said. “The real game is how you nurture, how you create an environment, how you enable the culture.” Nurturing, not directing, is the leadership mindset for an era of interdependence. ... Perhaps the most striking moment of Pushkar’s talk was not analytical but symbolic. He invited participants to discard their corporate titles just for a moment and invent new ones that reflected their purpose, not their position. “Sometimes titles define how we operate. Can we look beyond titles?” His own? In People Matters, Pushkar stated that he visualises his creative title as Plumber. “Wherever anything needs fixing, I will go and fix things.” The metaphor landed. Leadership, stripped of status, is about service. To lead with a human edge is to roll up your sleeves, listen, and fix what’s broken, in systems, in relationships, in ourselves. ... What Pushkar calls ‘the human edge’ is not a nostalgic pushback against technology. It is a pragmatic blueprint for sustainable growth. The leaders who will define the next decade will be those who use AI to augment human potential, not replace it those who recognise that data drives decisions, but empathy drives destiny.Building a modern fraud prevention stack: why centralised data, not point solutions, is the answer
The fraud prevention landscape is riddled with fragmented tools, reactive
approaches and blind spots. Despite the best of intentions, many organisations
rely on outdated, point-in-time methods that are ill-suited for today’s dynamic
fraud landscape. And fraud no longer plays by the old rules. It unfolds across
the entire customer journey, mutating with every new channel, payment method or
customer behaviour pattern. A fraudster may test stolen credentials one day,
then come back weeks later to exploit a weak link in the onboarding or refund
process. These disjointed systems miss multi-step attacks and patterns that
unfold over time. ... while many organisations have historically relied on a
patchwork of tools to cover each threat vector, it’s becoming clear that more
tools aren’t the answer. Better coordination is. A modern stack doesn’t need to
come from a single vendor, but it does need to operate like a single, unified
system. That means integrated data, shared intelligence and orchestration that
supports real-time response, not after-the-fact analysis. While investment is
rising, with
85% of organisations having increased their fraud prevention budgets, it’s
crucial to highlight that spending must be strategic. So, what does a modern
fraud prevention stack actually look like? And how can organisations build one
that’s unified, flexible and future-proof?
No comments:
Post a Comment