Quote for the day:
"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer
Evil Models and Exploits: When AI Becomes the Attacker
A more structured threat emerges with technologies like the Model Context
Protocol (MCP). Originally introduced by Anthropic, MCP allows large language
models (LLMs) to interact with external tools and host resources through a
standardized JSON-RPC-based client-server protocol. This enables
LLMs to perform sophisticated operations by controlling local resources and
services. While MCP is being embraced by developers for legitimate use cases,
such as automation and integration, its darker implications are clear. An
MCP-enabled system could orchestrate a range of malicious activities with ease.
Think of it as an AI-powered operator capable of executing everything from
reconnaissance to exploitation. ... The proliferation of AI models is both a
blessing and a curse. Platforms like Hugging Face host over a million models,
ranging from state-of-the-art neural networks to poorly designed or maliciously
altered versions. Amid this abundance lies a growing concern: model provenance.
Imagine a widely used model, fine-tuned by a seemingly reputable maintainer,
turning out to be a tool of a state actor. Subtle modifications in the training
data set or architecture could embed biases, vulnerabilities or backdoors. These
“evil models” could then be distributed as trusted resources, only to be
weaponized later. This risk underscores the need for robust mechanisms to verify
the origins and integrity of AI models.
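To make the MCP plumbing concrete, here is a hedged sketch in Python of the JSON-RPC 2.0 message shape an MCP client sends when asking a server to invoke a tool. The "tools/call" method comes from the MCP specification; the tool name and arguments below are invented for illustration.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP 'tools/call' request as a JSON-RPC 2.0 message."""
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(message)

# A benign example; the article's point is that the same plumbing could just
# as easily drive reconnaissance or exploitation tooling.
request = build_tool_call(1, "list_open_ports", {"host": "127.0.0.1"})
print(request)
```

The same envelope carries any tool, which is exactly why an MCP-enabled system is as dangerous as the tools its servers expose.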
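One basic, widely used safeguard against the provenance risk above is integrity pinning: record a cryptographic digest of a model file at the moment it is vetted, and refuse to load anything that no longer matches. A minimal sketch (paths and digests are placeholders):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model weights don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, pinned_digest: str) -> bool:
    """True only if the file matches the digest recorded when it was vetted."""
    return sha256_of_file(path) == pinned_digest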
The tipping point for Generative AI in banking
Advancements in AI are allowing banks and other fintechs to embed the technology
across their entire value chain. For example, TBC is leveraging AI to make 42%
of all payment reminder calls to customers with loans up to 30 days overdue,
and is getting ready to launch other AI-enabled solutions.
Customers normally cannot differentiate the AI calls powered by our tech from
calls by humans, even as the AI calls are ten times more efficient for TBC’s
bottom line, compared with human operator calls. Klarna rolled out an AI
assistant that handled 2.3 million conversations in its first month of
operation, accounting for two-thirds of Klarna’s customer service chats, or
the workload of 700 full-time agents, the company estimated. Deutsche Bank
leverages generative AI for software creation and managing adverse media, while
the European neobank Bunq applies it to detect fraud. Even smaller regional
players, provided they have the right tech talent in place, will soon be able to
deploy Gen AI at scale and incorporate the latest innovations into their
operations. Next year is set to be a watershed year when this step change will
create a clear division in the banking sector between AI-enabled champions and
other players that will soon start lagging behind.
Want to be an effective cybersecurity leader? Learn to excel at change management
Security should never be an afterthought; the change management process
shouldn’t be, either, says Michael Monday, a managing director in the security
and privacy practice at global consulting firm Protiviti. “The change management
process should start early, before changing out the technology or process,” he
says. “There should be some messages going out to those who are going to be
impacted letting them know, [otherwise] users will be surprised, they won’t know
what’s going on, business will push back and there will be confusion.” ... “It’s
often the CISO who now has to push these new things,” says Moyle, a former CISO,
founding partner of the firm SecurityCurve, and a member of the Emerging Trends
Working Group with the professional association ISACA. Moyle says he has seen
some workers more willing to change than others, and has learned to enlist
those workers as allies to help him achieve his goals. ... When it comes
to the people portion, she tells CISOs to “feed supporters and manage
detractors.” As for process, “identify the key players for the security program
and understand their perspective. There are influencers, budget holders,
visionaries, and other stakeholders — each of which needs to be heard, and
persuaded, especially if they’re a detractor.”
Preparing financial institutions for the next generation of cyber threats
Collaboration between financial institutions, government agencies, and other
sectors is crucial in combating next-generation threats. This cooperative
approach enhances the ability to detect, respond to, and mitigate sophisticated
threats more effectively. Visa regularly works with international agencies of
all sizes, including the US Department of Justice, FBI, Secret Service and
Europol, to help identify and apprehend fraudsters and other criminals. Visa
uses its AI and ML capabilities to identify patterns of fraud and cybercrime,
and works with law enforcement to find these bad actors and bring them to
justice. ... Financial institutions face distinct vulnerabilities
compared to other industries, particularly due to their role in critical
infrastructure and financial ecosystems. As high-value targets, they manage
large sums of money and sensitive information, making them prime targets for
cybercriminals. Their operations involve complex and interconnected systems,
often including legacy technologies and numerous third-party vendors, which can
create security gaps. Regulatory and compliance challenges add another layer of
complexity, requiring stringent data protection measures to avoid hefty fines
and maintain customer trust.
Looking back to look ahead: from Deepfakes to DeepSeek, what lies ahead in 2025
Enterprises increasingly turned to AI-native security solutions, employing
continuous multi-factor authentication and identity verification tools. These
technologies monitor behavioral patterns or other physical world signals to
prove identity, innovations that can now help prevent incidents like the North
Korean hiring scheme. However, hackers may now gain another inside route to
enterprise security. The new breed of unregulated and offshore LLMs like
DeepSeek creates new opportunities for attackers. In particular, using
DeepSeek’s AI model gives attackers a powerful tool to better discover and take
advantage of the cyber vulnerabilities of any organization. ... Deepfake
technology continues to blur the lines between reality and fiction. ...
Organizations must combat the increasing complexity of identity fraud, hackers,
cybersecurity thieves, and data center poachers each year. In addition to all
of the threats mentioned above, 2025 will bring an increasing need to address
IoT and OT security issues, data protection in the third-party cloud and AI
infrastructure, and the use of AI agents in the SOC. To help thwart this year’s
cyber threats, CISOs and CTOs must work together, communicate often, and
identify areas to minimize risks for deepfake fraud across identity, brand
protection, and employee verification.
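To make the "behavioral patterns" idea concrete, here is a deliberately toy sketch: score how far an observed behavioral signal (inter-keystroke timing, in seconds) drifts from a user's enrolled baseline, measured in standard deviations. Real continuous-authentication products fuse many signals with far richer models; all numbers here are invented.

```python
import statistics

def anomaly_score(baseline: list[float], observed: list[float]) -> float:
    """Distance of the observed mean from the baseline mean, in std devs."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(observed) - mu) / sigma

# Enrolled typing-cadence baseline vs. two live sessions.
baseline = [0.18, 0.21, 0.19, 0.22, 0.20, 0.19, 0.21, 0.20]
print(anomaly_score(baseline, [0.20, 0.19, 0.21]))  # low: consistent with the user
print(anomaly_score(baseline, [0.45, 0.50, 0.48]))  # high: flag for re-verification
```

A score above some threshold would trigger step-up verification rather than an outright block, which is how continuous MFA stays unobtrusive.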
The Product Model and Agile
First, the product model is not new; it’s been out there for more than 20 years.
So I have never argued that the product model is “the next new thing,” as I
think that’s not true. Strong product companies have been following the product
model for decades, but most companies around the world have only recently been
exposed to this model, which is why so many people think of it as new. Second,
while I know this irritates many people, today there are very different
definitions of what it even means to be “Agile.” Some people consider SAFe as
Agile. If that’s what you consider Agile, then I would say that Agile plays no
part in the product model, as SAFe is pretty much the antithesis of the product
model. This difference is often characterized today as “fake Agile” versus “real
Agile.” And to be clear, if you’re running XP, or Kanban, or Scrum, or even none
of the Agile ceremonies, yet you are consistently doing continuous deployment,
then at least as far as I’m concerned, you’re running “real Agile.” Third, we
should separate the principles of Agile from the various, mostly project
management, processes that have been set up around those principles. ...
Finally, it’s also important to point out that there is one Agile principle that
might be good enough for custom or contract software work, but is not sufficient
for commercial product work. This is the principle that “working software is the
primary measure of progress.”
Next Generation Observability: An Architectural Introduction
It's always a challenge when creating architectural content, trying to capture
real-world stories into a generic enough format to be useful without revealing
any organization's confidential implementation details. We are basing these
architectures on common customer adoption patterns. That's very different from
most of the traditional marketing activities usually associated with generating
content for the sole purpose of positioning products for solutions. When you're
basing the content on actual execution in solution delivery, you're cutting out
the marketing chaff. This observability architecture provides us with a way to
map a solution using open-source technologies focusing on the integrations,
structures, and interactions that have proven to work at scale. Where those
might fail us at scale, we will provide other options. What's not included are
vendor stories, which are common in most marketing content and which, when it
gets down to implementation crunch time, might not fully deliver on their
promises. Let's look at the next-generation observability architecture and
explore its value in helping our solution designs. The first step is always to
clearly define what we are focusing on when we talk about the next-generation
observability architecture.
AI SOC Analysts: Propelling SecOps into the future
Traditional, manual SOC processes already struggling to keep pace with existing
threats are far outpaced by automated, AI-powered attacks. Adversaries are using
AI to launch sophisticated and targeted attacks, putting additional pressure on
SOC teams. To defend effectively, organizations need AI solutions that can
rapidly sort signals from noise and respond in real time. AI-generated phishing
emails are now so realistic that users are more likely to engage with them,
leaving analysts to untangle the aftermath—deciphering user actions and gauging
exposure risk, often with incomplete context. ... The future of security
operations lies in seamless collaboration between human expertise and AI
efficiency. This synergy doesn't replace analysts but enhances their
capabilities, enabling teams to operate more strategically. As threats grow in
complexity and volume, this partnership ensures SOCs can stay agile, proactive,
and effective. ... Triaging and investigating alerts has long been a manual,
time-consuming process that strains SOC teams and increases risk. Prophet
Security changes that. By leveraging cutting-edge AI, large language models, and
advanced agent-based architectures, Prophet AI SOC Analyst automatically triages
and investigates every alert with unmatched speed and accuracy.
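The triage step can be pictured with a deliberately simple, rule-based sketch. The fields and weights below are invented for illustration; AI SOC analysts such as Prophet's do far richer reasoning than this.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    severity: int              # 1 (informational) .. 5 (critical)
    asset_criticality: int     # 1 .. 5, importance of the affected asset
    corroborating_alerts: int  # related alerts seen in the same window

def triage_score(alert: Alert) -> float:
    """Priority in [0, 1]; higher means investigate first."""
    raw = alert.severity * 2 + alert.asset_criticality
    raw += min(alert.corroborating_alerts, 5)  # cap the pile-on effect
    return raw / 20.0

queue = [Alert(2, 1, 0), Alert(5, 5, 4), Alert(3, 4, 1)]
for alert in sorted(queue, key=triage_score, reverse=True):
    print(alert, round(triage_score(alert), 2))
```

Even this crude ranking shows the payoff: analysts spend their time on the critical-asset alert with corroborating signals instead of wading through the queue in arrival order.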
Apple researchers reveal the secret sauce behind DeepSeek AI
The ability to use only some of the total parameters of a large language model
and shut off the rest is an example of sparsity. That sparsity can have a major
impact on how big or small the computing budget is for an AI model. AI
researchers at Apple, in a report out last week, explain nicely how DeepSeek and
similar approaches use sparsity to get better results for a given amount of
computing power. Apple has no connection to DeepSeek, but Apple does its own AI
research on a regular basis, and so the developments of outside companies such
as DeepSeek are part of Apple's continued involvement in the AI research field,
broadly speaking. In the paper, titled "Parameters vs FLOPs: Scaling Laws for
Optimal Sparsity for Mixture-of-Experts Language Models," posted on the arXiv
pre-print server, lead author Samir Abnar of Apple and other Apple researchers,
along with collaborator Harshay Shah of MIT, studied how performance varied as
they exploited sparsity by turning off parts of the neural net. ... Abnar and
team ask whether there's an "optimal" level for sparsity in DeepSeek and similar
models, meaning, for a given amount of computing power, is there an optimal
number of those neural weights to turn on or off? It turns out you can fully
quantify sparsity as the percentage of all the neural weights you can shut down,
with that percentage approaching but never equaling 100% of the neural net being
"inactive."
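The sparsity measure the researchers study can be made concrete with a toy top-k mixture-of-experts router; the expert count and k below are arbitrary, and real routers score experts per token with learned weights rather than random numbers.

```python
import random

def topk_mask(scores: list[float], k: int) -> list[bool]:
    """Keep only the k highest-scoring experts active."""
    top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
    mask = [False] * len(scores)
    for i in top:
        mask[i] = True
    return mask

def sparsity(mask: list[bool]) -> float:
    """Fraction of experts shut off -- the percentage the paper quantifies."""
    return 1.0 - sum(mask) / len(mask)

random.seed(0)
scores = [random.gauss(0, 1) for _ in range(64)]  # router scores, 64 experts
mask = topk_mask(scores, k=2)                     # activate only the top 2
print(f"{sparsity(mask):.1%} of experts inactive")  # prints 96.9%
```

Compute cost scales with the active fraction, so pushing sparsity toward (but never reaching) 100% is what lets models like DeepSeek do more with a fixed computing budget.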
What Data Literacy Looks Like in 2025
“The foundation of data literacy lies in having a basic understanding of data.
Non-technical people need to master the basic concepts, terms, and types of
data, and understand how data is collected and processed,” says Li. “Meanwhile,
data literacy should also include familiarity with data analysis tools. ...
“Organizations should also avoid the misconception that fostering GenAI literacy
alone will suffice for developing GenAI solutions. For this, companies need even
greater investments in expert AI talent -- data scientists, machine learning
engineers, data engineers, developers and AI engineers,” says Carlsson. “While
GenAI literacy empowers individuals across the workforce, building
transformative AI capabilities requires skilled teams to design, fine-tune and
operationalize these solutions. Companies must address both.” ... “Data literacy
in 2025 can’t just be about enabling employees to work with data. It needs to be
about empowering them to drive real business value,” says Jain. “That’s how
organizations will turn data into dollars and ensure their investments in
technology and training actually pay off.” ... “Organizations can embed data
literacy into daily operations and culture by making data-driven thinking a core
part of every role,” says Choudhary.