Flawed AI Tools Create Worries for Private LLMs, Chatbots
The research underscores that the rush to integrate AI into business processes
does pose risks, especially for companies that are giving LLMs and other
generative-AI applications access to large repositories of data. ... The risks
posed by the adoption of next-gen artificial intelligence and machine learning
(AI/ML) are not necessarily due to the models, which tend to have smaller attack
surfaces, but to the software components and tools for developing AI applications
and interfaces, says Dan McInerney, lead AI threat researcher with Protect AI,
an AI application security firm. "There's not a lot of magical incantations that
you can send to an LLM and have it spit out passwords and sensitive info," he
says. "But there's a lot of vulnerabilities in the servers that are used to host
LLMs. The [LLM] is really not where you're going to get hacked — you're going to
get hacked from all the tools you use around the LLM." ... "Exploitation of this
vulnerability could affect the immediate functioning of the model and can have
long-lasting effects on its credibility and the security of the systems that
rely on it," Synopsys stated in its advisory.
Cyber resiliency is a key focus for us: Balaji Rao, Area VP – India & SAARC, Commvault
Referring to the classic MITRE framework, the recommendation is to “shift right”, moving the focus towards recovery. After thoroughly assessing risks and
implementing various tools, it’s crucial to have a solid recovery plan in place.
Customers are increasingly concerned about scenarios where both their primary
and disaster recovery (DR) systems are compromised by ransomware, and their
backups are unavailable. According to a Microsoft report, in 98% of successful
ransomware cases, backups are disabled. To address this concern, the strategy
involves building a cyber resilient framework that prioritises recovery. ... For
us, AI serves multiple purposes, primarily enhancing efficiency, scanning for
threats, and addressing customer training and enablement needs. From a security
perspective, we leverage AI extensively to detect ransomware-related risks. Its
rapid data processing capabilities allow for thorough scanning across vast
datasets, enabling pattern matching and identifying changes indicative of
potential threats. We’ve integrated AI into our threat scanning solutions,
strengthening our ability to detect and mitigate malware by leveraging
comprehensive malware databases.
The importance of developing second-line leaders
Developing second-line leaders helps your business unit or function succeed at
a whole new level: When your teams know that leadership development is a
priority, they start preparing for future roles. The top talent will cultivate
their skills and equip themselves for leadership positions, enhancing overall
team performance. As the cascading effect builds, this proactive development
has a multiplicative impact, especially if competition within the team remains
healthy. It's also important for your personal growth as a leader: The most
fulfilling aspect is the impact on yourself. If you measure your leadership success by contribution, attribution, and legacy, developing capable successors fulfils all three criteria. It ensures you contribute effectively, gain
recognition for building strong teams, and leave a lasting legacy through the
leaders you've developed. ... It starts with the self. Begin with delegation
without abdication or evasion of accountability. This skill is a cornerstone
of effective leadership, involving the entrusting of responsibilities to
others while empowering them to assume ownership and make informed
decisions.
Navigating The AI Revolution: Balancing Risks And Opportunities
Effective trust management requires specific approaches, such as robust
monitoring systems, rigorous auditing processes and well-defined incident
response plans. More importantly, for any initiative addressing AI risks to succeed, we as an industry need to build a workforce of trained
professionals. Those operating in the digital trust domain, including
cybersecurity, privacy, assurance, risk and governance of digital technology,
need to understand AI before building controls around it. The ISACA AI survey
revealed that 85% of digital trust professionals say they will need to
increase their AI skills and knowledge within two years to advance or retain
their jobs. This highlights the importance of continuous learning and
adaptation for cybersecurity professionals in the era of AI. Gaining a deeper
understanding of how AI-powered attacks are altering the threat landscape,
along with how AI can be effectively utilized by security practitioners, will
be essential. As security professionals learn more about AI, they need to
ensure that the methods being deployed align with an enterprise’s overarching
need to maintain trust with its stakeholders.
CISO‘s Guide to 5G Security: Risks, Resilience and Fortifications
A strong security posture requires granular visibility into 5G traffic and
automated security enforcement to effectively thwart attackers, protect
critical services, and safeguard against potential threats to assets and the
environment. This includes a focus on detecting and preventing attacks at all
layers, interfaces, and threat vectors: from equipment (PEI) and subscriber (SUPI) identification to applications, signaling, data, network slices, malware,
ransomware and more. ... To accomplish the task at hand brought about by 5G,
CISOs must be prepared to provide a swift response to known and unknown
threats in real time with advanced AI and machine learning, automation and
orchestration tools. As the perception shifts from 4G as a largely consumer-focused mobile network to the power of private 5G embedded across enterprise infrastructure, any kind of lateral network movement can
bring about damage. ... Strategy and solution start with zero trust and can go
as far as an entire 5G SOC dedicated to the nuances brought about by the
next-gen network. The change and progress 5G promises are only as significant
as our ability to protect networks and infrastructure from malicious actors,
threats, and attacks.
Cloud access security brokers (CASBs): What to know before you buy
CASBs sit between an organization’s endpoints and cloud resources, acting as a
gateway that monitors everything that goes in or out, providing visibility
into what users are doing in the cloud, enforcing access control policies, and
looking out for security threats. ... The original use case for CASBs was to
address shadow IT. When security execs deployed their first CASB tools, they
were surprised to discover how many employees had their own personal cloud
storage accounts, where they squirreled away corporate data. CASB tools can
help security teams discover and monitor unauthorized or unmanaged cloud
services being used by employees. ... Buying a CASB tool can be complex.
There’s a laundry list of possible features that fall within the broad CASB
definition (DLP, SWG, etc.). And CASB tools themselves are part of a larger
trend toward SSE and SASE platforms that include features such as ZTNA or
SD-WAN. Enterprises need to identify their specific pain points — whether
that’s regulatory compliance or shadow IT — and select a vendor that meets
their immediate needs and can also grow with the enterprise over time.
What is model quantization? Smaller, faster LLMs
Why do we need quantization? The current large language models (LLMs) are
enormous. The best models need to run on a cluster of server-class GPUs; gone
are the days when you could run a state-of-the-art model locally on one GPU and get quick results. Quantization not only makes it possible to run an LLM on a single GPU, it also allows you to run it on a CPU or on an edge device. ... As
you might expect, accuracy may be an issue when you quantize a model. You can
evaluate the accuracy of a quantized model against the original model, and
decide whether the quantized model is sufficiently accurate for your purposes.
For example, TensorFlow Lite offers three executables for checking the
accuracy of quantized models. You might also consider MQBench, a benchmark and
framework for evaluating quantization algorithms under real-world hardware
deployments that uses PyTorch. If the degradation in accuracy from
post-training quantization is too high, then one alternative is to use
quantization-aware training.
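The article points to TensorFlow Lite and the PyTorch-based MQBench for evaluating quantized models; as a concrete illustration of the idea, here is a minimal sketch of post-training dynamic quantization in PyTorch. The tiny stand-in model and the output comparison are illustrative assumptions, not taken from the article, and a real evaluation would measure task accuracy on a held-out dataset rather than raw output differences.

```python
# Minimal sketch: post-training dynamic quantization in PyTorch.
# The toy model below is an illustrative stand-in, not from the article.
import torch
import torch.nn as nn

# Stand-in for a much larger model; dynamic quantization targets nn.Linear layers.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 512),
)
model.eval()

# Weights are converted to int8; activations are quantized on the fly at inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Rough sanity check: compare outputs of the original and quantized models
# on the same input to get a feel for the accuracy degradation.
x = torch.randn(8, 512)
with torch.no_grad():
    diff = (model(x) - quantized(x)).abs().max().item()
print(f"max absolute output difference: {diff:.4f}")
```

If the degradation measured this way (or on a proper benchmark) turns out to be too large, quantization-aware training, which simulates quantization effects during training, is the alternative the article mentions.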
Europe Declares War on Tech Spoofing
In the new Payment Services Regulation, members of the European Parliament
argued that messaging services such as WhatsApp, digital platforms such as
Facebook, or marketplaces such as Amazon and eBay could be liable for scams
that originate on their platforms, on a par with banks and other payment
service providers. ... Europe’s new payment regulations are now up for
negotiation in Brussels. Large US tech firms and messaging apps are pushing to
lower the liability risk. They argue banks, not them, should be responsible.
With spoofing or impersonation scams, the fraudulent transaction occurs on
banking service portals, not the platforms. And so, banks themselves should
enhance their security measures or pay the price. Banks, not surprisingly,
disagree. They cannot control the entry points that fraudsters use to reach
consumers, whether it is by phone, messaging apps, online ads, or the dark
web. Why shouldn’t telecom network operators, messaging, and other digital
platforms also be obliged to prevent fraudsters from reaching consumers and, if they fail, be held liable?
Process mining helps IT leaders modernize business operations
Process mining gives organizations the potential to make quicker,
more informed decisions when overhauling business processes by leveraging data
for insights. By using the information gleaned from process mining, companies
can better streamline workflows, enhance resource allocation, and automate
repetitive tasks. ... Successful deployment and maintenance of process mining
requires a clear vision from the management team and board, Mortello says, as
well as commitment and persistence. “Process mining doesn’t usually yield
immediate, tangible results, but it can offer unique insights into how a
company operates,” he says. “A leadership team with a long-term vision is
crucial to ensure the technology is utilized to its full potential.” It’s also
important to thoroughly analyze processes prior to “fixing” them. “Make sure
you have a good handle on the process you think you have and the ones you
really have,” Constellation Research’s Wang says. “What we see across the
board is a quick realization that what’s assumed and what’s done is very
different.”
Could the Next War Begin in Cyberspace?
In a cyberwar, disinformation campaigns will likely be used to spread
misinformation and collect data that can be leveraged to sway public opinion
on key issues, Janzen says. "We can build very sophisticated security systems,
but so long as we have people using those systems, they will be targeted to
willingly or unwillingly allow malicious actors into those systems." ... How
long a cyberspace war might last is inherently unpredictable, characterized by
its persistent and ongoing nature, Menon says. "In contrast to conventional
wars, marked by distinct start and end points, cyber conflicts lack
geographical constraints," he notes. "These battles involve continuous
attacks, defenses, and counterattacks." The core of cyberspace warfare lies in
understanding algorithms, devising methods to breach them, and inventing new
technologies to dismantle legacy systems, Menon says. "These factors, coupled
with the relatively low financial investment required, contribute to the
sporadic and unpredictable nature of cyberwars, making it challenging to
anticipate when they may commence."
Quote for the day:
"It's fine to celebrate success but it
is more important to heed the lessons of failure." --
Bill Gates