Quote for the day:
"Real difficulties can be overcome; it
is only the imaginary ones that are unconquerable." --
Theodore N. Vail

Cybersecurity audits need to move away from a yearly or quarterly exercise to
continuous evaluation, says Security Scorecard's Cobb. As part of that,
organizations should look to work with their suppliers to build a relationship
that can help both companies be more resilient, he says. "Maybe you do an
on-site visit or maybe you do a specific evidence gathering with that supplier,
especially if they're a critical supplier based on their grade," Cobb says.
"That security rating is a great first step for assessment, and it also will
lead into further discussions with that supplier around what things can you do
better." And yes, artificial intelligence (AI) is making inroads into monitoring
third-party risk profiles as well. Consultancy EY imagines a future in which
multiple automated agents track information about suppliers and, when an event —
whether cyber, geopolitical, or meteorological — affects one or more supply
chains, automatically develop plans to mitigate the risk. Pointing out the
repeated supply chain shocks from the pandemic, geopolitics, and climate change,
EY argues that an automated system is necessary to keep up. When a chemical
spill or a cybersecurity breach affects a supplier in Southeast Asia, for
example, the system would track the news, predict the impact on a company's
supply, and suggest alternate sources, if needed, the EY report stated.
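
A minimal sketch of the kind of monitoring-and-mitigation loop EY describes, under loose assumptions: `SupplierEvent`, `estimate_impact`, and `find_alternate_sources` are illustrative names standing in for whatever event feeds, scoring models, and sourcing data an organization actually wires in.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SupplierEvent:
    supplier: str
    kind: str        # e.g. "cyber", "geopolitical", "meteorological"
    severity: float  # normalized 0.0 - 1.0

# Placeholder stubs: a real system would back these with news feeds,
# security ratings, and sourcing databases.
def estimate_impact(event: SupplierEvent) -> float:
    return event.severity          # stand-in scoring model

def find_alternate_sources(supplier: str) -> List[str]:
    return []                      # stand-in sourcing lookup

def plan_mitigation(event: SupplierEvent, threshold: float = 0.5) -> dict:
    """Draft a mitigation plan for a detected supplier event."""
    if event.severity < threshold:
        return {"supplier": event.supplier, "action": "monitor"}
    alternates = find_alternate_sources(event.supplier)
    return {
        "supplier": event.supplier,
        "action": "reroute" if alternates else "escalate to human",
        "estimated_impact": estimate_impact(event),
        "alternates": alternates,
    }

event = SupplierEvent(supplier="SE-Asia chemical supplier", kind="cyber", severity=0.8)
print(plan_mitigation(event))  # escalates, since the stub finds no alternates
```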

To really get the benefits, businesses will need to redesign the way work is
done. The agent should be placed at the center of the task, with people stepping
in only when human judgment is required. There is also the issue of trust. If
the agent is only giving suggestions, a person can check the results. But when
the agent acts directly, the risks are higher. This is where safety rules,
testing systems, and clear records become important. Right now, these systems
are still being built. One unexpected problem is that agents often think they
are done when they are not. Humans know when a task is finished. Agents
sometimes miss that. ... Today, the real barrier goes beyond just technology. It
is also how people think about agents. Some overestimate what they can do;
others are hesitant to try them. The truth lies in the middle. Agents are strong
with goal-based and repeatable tasks. They are not ready to replace deep human
thinking yet. ... Still, the direction is clear. In the next two years, agents
will become normal in customer support and software development. Writing code,
checking it, and merging it will become faster. Agents will handle more of these
steps with less need for back-and-forth. As this grows, companies may create new
roles to manage agents: someone to track how they are used, make sure they
follow rules, and measure how much value they bring. This role could be as
common as a data officer in the future.
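
One way to guard against the "agent thinks it is done when it is not" failure mode described above is an explicit verification step before a task is marked complete. The sketch below is illustrative only; `acceptance_checks` stands in for whatever tests or rules a team actually defines, and the escalation path is an assumption.

```python
from typing import Callable, List

def finalize_task(result: str,
                  acceptance_checks: List[Callable[[str], bool]],
                  needs_human_judgment: bool = False) -> str:
    """Only mark a task done when every acceptance check passes.

    Agents tend to declare success prematurely; routing failures (or anything
    flagged as judgment-sensitive) to a person keeps the agent at the center
    of the task while preserving human oversight.
    """
    if needs_human_judgment:
        return "escalated: human review required"
    failed = [check.__name__ for check in acceptance_checks if not check(result)]
    if failed:
        return f"reopened: failed checks {failed}"
    return "done"

# Example acceptance checks (assumptions for illustration only).
def non_empty(result: str) -> bool:
    return bool(result.strip())

def under_length_limit(result: str) -> bool:
    return len(result) < 10_000

print(finalize_task("draft reply to customer", [non_empty, under_length_limit]))
```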

APIs and MCP servers are inherently more agent-friendly but less ubiquitous than
websites. They expose services in a structured, scalable way that's perfect for
agent consumption. The tradeoff is that you must find a way to allow verified
agents to get access to your APIs. This is where some payment processing
protocols can help by allowing verified agents to get access credentials that
leverage your existing authentication, rate-limiting and abuse-prevention
mechanisms to ensure access doesn’t lead to spam or scraping. In many
cases, the best path is a hybrid approach: Expand your existing website to allow
agent-compatible access and checkout while building key capabilities for agent
access via APIs or MCP servers. ... Agents work best with standardized checkouts
instead of having to dodge bot blockers and CAPTCHAs while filling out forms via
screen scraping. They need an entirely programmatic checkout process. That means
you must move beyond brittle browser autofill and instead accept tokenized
payments directly via API. These tokens can carry pre-authorized payment methods
such as tokenized credit cards, digital wallets (e.g., Apple Pay and PayPal),
stablecoins or on-chain assets and account-to-account transfers. When combined
with identity tokens, these payment tokens allow agents to present a complete,
scoped credential that you can inspect and charge instantly. Think Stripe
Checkout but for AI.
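
A minimal sketch of what "inspect and charge instantly" might look like behind an API endpoint. The token fields, scope checks, and the `charge()` stub are assumptions for illustration, not any specific payment processor's API.

```python
from dataclasses import dataclass

@dataclass
class AgentCheckoutRequest:
    identity_token: dict   # verified agent identity and delegated scope
    payment_token: dict    # tokenized card / wallet / account-to-account credential
    amount_cents: int
    currency: str

def charge(payment_token: dict, amount_cents: int, currency: str) -> dict:
    # Stand-in for the actual processor call on a tokenized payment method.
    return {"status": "charged", "amount_cents": amount_cents, "currency": currency}

def process_agent_checkout(req: AgentCheckoutRequest) -> dict:
    """Validate a scoped agent credential, then charge the payment token.

    In a real deployment the identity token would also be verified against the
    issuer's signature and your existing rate-limiting / abuse-prevention layers
    before any charge is attempted.
    """
    scope = req.identity_token.get("scope", {})
    if req.amount_cents > scope.get("max_amount_cents", 0):
        return {"status": "declined", "reason": "amount exceeds delegated scope"}
    if req.currency not in scope.get("currencies", []):
        return {"status": "declined", "reason": "currency not permitted"}
    return charge(req.payment_token, req.amount_cents, req.currency)

request = AgentCheckoutRequest(
    identity_token={"scope": {"max_amount_cents": 5000, "currencies": ["USD"]}},
    payment_token={"type": "tokenized_card", "token": "tok_example"},
    amount_cents=2499,
    currency="USD",
)
print(process_agent_checkout(request))  # -> {'status': 'charged', ...}
```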

One of the biggest risks comes from what’s known as compounding errors. Even a
very accurate AI system – say, 95% accurate at each step – becomes far less
reliable when it is chained across a series of related decisions, because errors
compound. By the fifth hypothetical step, end-to-end accuracy would drop to
about 77% or less. Unlike human teams,
these systems don’t raise flags or signal uncertainty. That’s what makes them
so risky: when they fail, they tend to do so silently and exponentially. ...
This opacity is particularly dangerous in the fight against fraud, which is
only getting more advanced. In 2025, fraudsters aren’t using fake passports
and bad Photoshop. They’re using AI-generated identities, videos, and
documents that are nearly impossible to distinguish from the real thing. Tools
like Google’s Veo 3 or open-source image generators allow anyone to produce
high-quality synthetic content at scale. ... Responsible and effective use of
AI means using multiple models to cross-check results to avoid the domino
effect of one error feeding into the next. It means assigning human reviewers
to the most sensitive or high-risk cases – especially when fraud tactics
evolve faster than models can be retrained. And it means having clear
escalation procedures and full audit trails that can stand up to regulatory
scrutiny. This hybrid model offers the best of both worlds: the speed and
scale of AI, combined with the judgment and flexibility of human experts. As
fraud becomes more sophisticated, this balance will be essential.
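
The compounding-error figure is easy to reproduce: a 95%-accurate step, chained five times, yields roughly 0.95^5 ≈ 0.77 end-to-end accuracy. The sketch below also reduces the cross-checking idea to a simple majority vote over independent models; the quorum rule and escalation path are assumptions for illustration, not the article's method.

```python
# End-to-end accuracy of a chain of independent 95%-accurate steps.
per_step_accuracy = 0.95
steps = 5
print(per_step_accuracy ** steps)  # ~0.774, i.e. about 77%

# Cross-checking: accept a result only when independent models agree,
# so one model's silent error is less likely to propagate.
def cross_check(verdicts: list[bool], quorum: int = 2) -> str:
    positives = sum(verdicts)
    if positives >= quorum:
        return "accept"
    if positives == 0:
        return "reject"
    return "escalate to human reviewer"  # disagreement is itself an uncertainty signal

print(cross_check([True, True, False]))  # -> "accept"
```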

The agents can flag unsupported claims in students’ writing, explain why
evidence is needed, and recommend credible sources, Luke Behnke, vice
president of product management at Grammarly, said in an interview. “Colleges
recognize it’s their responsibility to prepare students for the workforce, and
that now includes AI literacy,” Behnke said. Universities are also implementing
AI in their own learning management systems and providing students and staff
access to Google’s Gemini, Microsoft’s Copilot and OpenAI’s ChatGPT. ... Cuo
asks students not to simply accept whatever results advanced genAI models spit
out, as they may be riddled with factual errors and hallucinations. “Students
need to select and read more by themselves to create something that people don’t
recognize as an AI product,” Cuo said. Some professors are trying to mitigate AI
use by altering coursework and assignments, while others prefer not to use it at
all, said Paul Shovlin, an assistant professor of AI and digital rhetoric at
Ohio University. But students have different requirements and use AI tools for
personalized learning, collaboration, and writing, as well as for coursework
workflow, Shovlin said. He stressed, however, that ethical considerations,
rhetorical awareness, and transparency remain important in demonstrating
appropriate use.

Decreasing the validity time for a certificate offers multiple benefits. As
previous certificate revocations have demonstrated, actually revoking every bad
certificate in a timely manner, across the broad ecosystem, is a challenge.
Having certificates simply expire more frequently helps address that. The
CA/Browser Forum also expects an ancillary benefit of "increased consistency of
quality, stability and availability of certificate lifecycle management
components which enable automated issuance, replacement and rotation of
certificates." While such automation won't fix every ill, the forum said that
"it certainly helps." ... When it comes to getting the so-called cryptographic
agility needed to manage both of those requirements, many organizations say
they're not yet there. "While awareness is high, execution is lagging," says a
new study from market researcher Omdia. "Many organizations know they need to
act but lack clear roadmaps or the internal alignment to do so." ... For
managing the much shorter certificate renewal timeframe, only 19% of surveyed
organizations say they're "very prepared," with 40% saying they're somewhat
prepared and another 40% saying they're not very prepared and, so far,
continuing to rely on manual processes. "Historically, organizations have been able to get
by with poor certificate hygiene because cryptography was largely static," said
Tim Callan.
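
For teams still relying on manual processes, even a small monitoring step helps with shorter renewal windows. The sketch below is a standard-library-only check of how many days remain on a server certificate, so renewal (for example via an ACME client) can be triggered or alerted on; the host name and 30-day threshold are examples.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days before the server's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

# Example: alert (or kick off automated renewal) when under 30 days remain.
if days_until_expiry("example.com") < 30:
    print("certificate needs renewal")
```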

"Think of them as AI factories." But as data centers grow in size and number,
often drastically changing the landscape around them, questions are looming:
What are the impacts on the neighborhoods and towns where they're being built?
Do they help the local economy or put a dangerous strain on the electric grid
and the environment? ... As fast as the AI companies are moving, they want to be
able to move even faster. Smith, in that Commerce Committee hearing, lamented
that the US government needed to "streamline the federal permitting process to
accelerate growth." ... Even as big tech companies invest heavily in AI, they
also continue to promote their sustainability goals. Amazon, for example, aims
to reach net-zero carbon emissions by 2040. Google has the same goal but states
it plans to reach it 10 years earlier, by 2030. With AI's rapid advancement,
experts no longer know if those climate goals are attainable, and carbon
emissions are still rising. "Wanting to grow your AI at that speed and at the
same time meet your climate goals are not compatible," Good says. For its
Louisiana data center, Meta has "pledged to match its electricity use with 100%
clean and renewable energy" and plans to "restore more water than it consumes,"
the Louisiana Economic Development statement reads.

In security, it seems that we are constantly confronted by the next shiny
object, item du jour, and/or overhyped topic. Along with this seems to come an
endless supply of “experts” ready to instill fear in us around the
“revolutionized threat landscape” and the “new reality” we apparently now find
ourselves in and must come to terms with. Indeed, there is certainly no shortage
of distractions in our field. Some of us are likely conscious of this
near-constant tendency toward distraction. So how can we avoid
falling into the trap of succumbing to the temptation and running after every
distraction that comes along? Or, to pose it another way, how can we
appropriately invest our time and resources in areas where we are likely to see
value and return on that investment? ... All successful security teams are
governed by a solid security strategy. While the strategy can be adjusted from
time to time as risks and threats evolve, it shouldn’t drift wildly and
certainly not in an instant. If the newest thing demands radically altering the
security strategy, it’s an indicator that it may be overblown. The good news is
that a well-formed security strategy can be adapted to deal with just about
anything new that arises in a steady and systematic way, provided that new thing
is real.

The most notable advances come from qubits built with superconducting circuits, as
used in IBM and Google machines. These systems must operate near absolute zero
and are notoriously hard to control. Other approaches use trapped ions, neutral
atoms, or photons as qubits. While these approaches offer greater inherent
stability, scaling up and integrating large numbers of qubits remains a
formidable practical challenge. "The costs and technical challenges of trying to
scale will probably show which are more practical," said Sebastian Weidt, chief
executive at Universal Quantum, a startup developing trapped ions. Weidt
emphasized that government support in the coming years could play a decisive
role in determining which quantum technologies prove viable, ultimately
limiting the field to a handful of companies capable of bringing a system to
full scale. Quantum computing is attracting widespread attention from both
investors and government agencies. ... These next-generation
technologies are still in their early stages, though proponents argue they could
eventually surpass today's quantum machines. For now, industry leaders continue
refining and scaling legacy architectures developed over years of lab research.

ML models are often “black boxes”, even to their creators, so there’s little
visibility into how they arrive at answers. For security pros, this means
limited ability to audit or verify behavior – traditionally a key aspect of
cybersecurity. There are ways to circumvent this opacity of AI and ML
systems: with Trusted Execution Environments (TEEs). These are secure enclaves
in which organizations can test models repeatedly in a controlled ecosystem,
creating attestation data. ... Models are not static and are shaped by the data
they ingest. Thus, data poisoning is a constant threat for ML models that need
to be retrained. Organizations must embed automated checks into the training
process to enforce a continuously secure pipeline of data. Using information
from the TEE and guidelines on how models should behave, AI and ML models can be
assessed for integrity and accuracy each time they are given new information.
... Risk assessment frameworks that work for traditional software will not be
applicable to the changeable nature of AI and ML programs. Traditional
assessments fail to account for tradeoffs specific to ML, e.g., accuracy vs
fairness, security vs explainability, or transparency vs efficiency. To navigate
this difficulty, businesses must evaluate models on a case-by-case basis,
looking to their mission, use case and context to weigh their risks.
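
A minimal sketch of the kind of automated gate described above: before each retraining run, the new data batch is screened against expected bounds, and the refreshed model is compared with a baseline recorded during attested (TEE-backed) evaluation. The thresholds, field names, and promote/block wording are assumptions for illustration.

```python
def screen_new_batch(values: list[float],
                     expected_min: float,
                     expected_max: float,
                     max_outlier_fraction: float = 0.01) -> bool:
    """Reject a training batch whose feature values drift outside expected bounds."""
    outliers = [v for v in values if not expected_min <= v <= expected_max]
    return len(outliers) / max(len(values), 1) <= max_outlier_fraction

def gate_retraining(new_accuracy: float,
                    baseline_accuracy: float,
                    tolerance: float = 0.02) -> str:
    """Compare the retrained model against the attested baseline before promoting it."""
    if new_accuracy < baseline_accuracy - tolerance:
        return "block: possible poisoning or regression, escalate for review"
    return "promote: retrained model within expected behavior"

batch_ok = screen_new_batch([0.2, 0.4, 0.35, 9.7], expected_min=0.0, expected_max=1.0)
print(batch_ok)                     # False: the 9.7 value is out of bounds
print(gate_retraining(0.91, 0.94))  # blocked: accuracy dropped beyond tolerance
```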