How the new deepfake reality will impact cyber insurance
Deepfakes can ruin a company's reputation, bypass biometric controls, phish
unsuspecting users into clicking malicious links, and convince financial
agents to transfer money to offshore accounts. Attacks leveraging deepfakes
can happen over many channels, from social media to fake person-to-person
video calls over Zoom. Voicemail, Slack channels, email, mobile messaging, and
metaverses are all fair game for distributing deepfake scams to businesses and
personal users. Cyber liability insurers are beginning to take notice, and as
they do, their security requirements are adjusting to the new 'fake'
reality. This includes, but is not limited to, better hygiene across the
enterprise, renewed focus on home worker systems, enforced
multifactor authentication, out-of-band confirmation to avoid falling for
deepfake phishing attempts, user and partner education, and third-party
context-based verification services or tools. ... For the most part,
organizations will need to focus on requirements that are in their cyber
insurance policies.
Confronting Financial Fraud in Payments with the Help of AI
These organizations are considering AI for fraud protection in different ways.
Schmiedl said JPMorgan Chase has evolved from algorithms to machine learning
and neural nets to examine fraudulent card activity, analyze unstructured
data, and extract entities. “There’s an inherent
signal in every email,” he said. “Actors that are trying to create fraudulent
emails tend to basically use different patterns and you can learn those
patterns through AI/ML.” JPMorgan Chase is assessing the use of large language
models, Schmiedl said, for fraud, risk, and other possible areas. Such efforts
have focused on in-house data and resources, he said, staying within the firm's
own ecosystem rather than looking externally. “If you start using these models
and outside data, you start to see things that are presented like facts that
aren’t facts,” Schmiedl said. Swift is building a new AI platform, Bhatia
said, with tech players such as Google, Microsoft, and others. “We really
believe that this is going to help us add on to the rule-based engines that we
already have today and really bring a higher success rate in helping with
fraud,” she said.
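To make the pattern-learning idea concrete, here is a toy sketch of learning fraud signals from email text with scikit-learn. The sample messages, labels, and model choice are illustrative assumptions only, not a description of JPMorgan Chase's or Swift's systems:

```python
# Toy sketch: learning fraud patterns from email text.
# Sample data and model choice are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "wire $50,000 to this account today",      # fraudulent
    "urgent: confirm your credentials now",    # fraudulent
    "minutes from yesterday's staff meeting",  # legitimate
    "lunch menu for the week attached",        # legitimate
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(emails)    # the "signal" in each email
model = LogisticRegression().fit(features, labels)

# Score a new message against the learned patterns.
test = vectorizer.transform(["urgent: wire money to this account"])
print(model.predict(test))  # expect [1] -> flagged as likely fraudulent
```

A production system would, of course, train on far more data and richer features, but the principle is the same: fraudulent messages reuse patterns, and those patterns are learnable.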
Recovery options: Copy-on-write vs redirect-on-write snapshots
Consider a copy-on-write system, which duplicates blocks before overwriting
them with new data. In essence, when a block within the protected entity needs
to be changed, the system copies that block to a separate snapshot area before
it is overwritten. This approach uses three I/O operations for each write: one
read and two writes. Prior to overwriting a block, its previous value must be
read and written to a different location, followed by the write of the new
data. Should a process attempt to access the snapshot at a later time, it does
so through the snapshot system, which is aware of which blocks have been
changed since the snapshot was created. ... In contrast, a redirect-on-write
system utilizes pointers to represent all protected entities. When a block
needs to be changed, the storage system simply redirects the pointer
associated with that block to another block and writes the data there. The
snapshot system maintains a record of all block locations constituting a given
snapshot, which is essentially a list of pointers that correspond to the block
locations.
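A minimal Python sketch can make the two write paths concrete. Block storage is modeled here with plain lists and dictionaries, which is an illustrative simplification, not a real storage system:

```python
# Minimal sketch (illustrative only; real systems work at the block layer,
# not on Python lists) contrasting the two snapshot write paths.

class CopyOnWriteVolume:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # live data, overwritten in place
        self.snapshot_area = {}      # old values copied out at write time

    def write(self, index, value):
        # Three I/Os per write: read the old block, write it to the
        # snapshot area, then overwrite the live block.
        if index not in self.snapshot_area:
            self.snapshot_area[index] = self.blocks[index]  # read + write aside
        self.blocks[index] = value                          # write new data

    def read_snapshot(self, index):
        # The snapshot system knows which blocks changed since the snapshot.
        return self.snapshot_area.get(index, self.blocks[index])

class RedirectOnWriteVolume:
    def __init__(self, blocks):
        self.store = list(blocks)                       # physical blocks
        self.live = {i: i for i in range(len(blocks))}  # live pointer table
        self.snapshot = dict(self.live)                 # snapshot = pointer list

    def write(self, index, value):
        # One I/O: write the new data to a fresh block, redirect the pointer.
        self.store.append(value)
        self.live[index] = len(self.store) - 1

    def read_snapshot(self, index):
        # The snapshot's pointers still reference the original blocks.
        return self.store[self.snapshot[index]]

cow = CopyOnWriteVolume(["a", "b", "c"])
row = RedirectOnWriteVolume(["a", "b", "c"])
cow.write(1, "B")
row.write(1, "B")
print(cow.read_snapshot(1), row.read_snapshot(1))  # b b
```

The trade-off falls out of the code: copy-on-write pays three I/Os on every first overwrite but keeps live data contiguous, while redirect-on-write pays one I/O per write at the cost of fragmenting the live volume across redirected blocks.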
Five Ways AI is Likely to Change How Organizations Approach Information Risk and Security
Many security events and incidents are the result of insecure application
code, applications that are misconfigured or applications that have been
manipulated by adversaries and used as part of their attack activities. The
volume of security-related software patches and updates that are produced by
application vendors on an ongoing basis has provided clear evidence that
current approaches to application security must be enhanced to be effective.
AI is likely to accelerate these enhancements by integrating
application-security-focused LLMs into application development and into
security testing and
protective tools such as static application security testing (SAST) and
dynamic application security testing (DAST), software composition analysis
(SCA), web application firewalls (WAFs), application programming interface
(API) security gateways, and quality assurance and penetration testing.
These LLMs can ensure that application source code and running applications
are tested against—and are resilient to—variations and permutations of known
and expected attacker methods and tactics in a highly efficient and
risk-based testing environment.
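As a hypothetical sketch of what such integration might look like in a SAST-style pass, the snippet below sends source code to a placeholder LLM endpoint. The `query_llm` function, the prompt, and the response schema are all assumptions for illustration, not any vendor's API:

```python
# Hypothetical sketch of an LLM-assisted SAST pass. `query_llm` is a
# placeholder for whatever model endpoint is used; the prompt and the
# response schema are illustrative assumptions.
import json

def query_llm(prompt: str) -> str:
    """Stand-in for a call to an LLM; wire up your provider's client here."""
    raise NotImplementedError

def scan_source(path: str) -> list:
    with open(path) as f:
        code = f.read()
    prompt = (
        "Review the following source code for injection, deserialization, "
        "and authentication flaws, including variations and permutations "
        "of known attack patterns. Reply with a JSON list of findings, "
        "each with 'line', 'cwe', and 'explanation' fields.\n\n" + code
    )
    return json.loads(query_llm(prompt))  # findings feed the SAST report
```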
Svelte vs Angular: Pros and Cons of Modern Web Development
Introduced to the world in 2016, Svelte emerged as the unlikely hero in the
tangled saga of JavaScript frameworks. Its mission? To revolutionize the way
we think about reactivity in web apps. Svelte has a sort of "wax on, wax
off" philosophy: rather than doing all the heavy lifting in the browser, it
does its magic in the build step. While other frameworks are trying to build
a luxurious skyscraper complete with a rooftop pool and a helipad, Svelte is
content with constructing a cozy, energy-efficient home that fulfills all
your needs. In the fast-paced, ever-changing universe of web development,
sometimes less is more. Think of Svelte as a stealthy web ninja – it's
lightweight, fast, and packs a powerful punch. It's reactive: change the
state, and the DOM updates automatically. It’s like having a little elf
inside your code, waiting patiently to sweep away any unnecessary work. ...
Don't just take my word for it - look around! Angular is powering everything
from IBM's online support pages to Delta Airlines' booking platform. It's as
versatile as it is powerful, and it's up for whatever challenge you're ready
to throw its way.
State of the API: Microservices Gone Macro and Zombie APIs
Engineers and developers ranked zombie APIs as a higher concern than
executives did; executives rated “loss of institutional memory” as slightly
more concerning than loss of maintenance, a.k.a. zombie APIs. ... “That’s the
emergence of zombie APIs, because a lot of institutional knowledge lies with
the people who built it,” Sobti told The New Stack. “Once the people
transition out, the change management is complex, and that’s where
cataloging your APIs, internal APIs in particular, becomes very
critical.” API catalogs can keep track of internal APIs in one place, he
added. There are now dedicated teams responsible not just for building the
underlying infrastructure that allows the catalogs to exist, but also for
managing the catalog and establishing the practices for getting those APIs
into the catalogs. That is where reuse becomes critical, he
added. As further proof of the need for better documentation, the survey
found that a lack of documentation was cited as the primary obstacle to
consuming an API.
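To illustrate what a catalog entry might capture, here is a minimal sketch; the field names and the staleness heuristic are assumptions for illustration, not any particular catalog product's schema:

```python
# Illustrative sketch of a minimal internal API catalog record.
# Field names and the staleness rule are assumptions, not a real schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CatalogEntry:
    name: str
    owner_team: str            # who answers for it after people move on
    spec_url: str              # e.g., a link to an OpenAPI document
    last_reviewed: date        # stale reviews hint at a zombie API
    deprecated: bool = False
    consumers: list = field(default_factory=list)

def flag_zombies(catalog, max_age_days=365):
    """Surface entries nobody has reviewed recently: likely zombie APIs."""
    today = date.today()
    return [entry for entry in catalog
            if not entry.deprecated
            and (today - entry.last_reviewed).days > max_age_days]
```

Recording an owner and a spec link addresses both survey findings at once: institutional memory survives staff turnover, and the documentation gap that blocks API consumption gets closed.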
Taking IT outsourcing to the next level
When two parties enter a complex IT outsourcing deal, they need to work
collaboratively, communicate effectively, and build trust. This is where
relational contracts come in. Unlike transactional contracts, which focus on
legal obligations and penalties, relational contracts emphasize
collaboration, communication, and problem-solving by specifying mutual goals
and establishing governance structures to keep the parties’ expectations and
interests aligned over the long term. Formally, a relational contract is
defined as “A legally enforceable written contract establishing a commercial
partnership within a flexible contractual framework based on social norms
and jointly defined objectives, prioritizing a relationship with the
continuous alignment of interests before the commercial transactions.”
Complex relationships in which it is impossible to predict every what-if
scenario are tailor-made for relational contracts. Large IT outsourcing
projects provide a strong example of this, due to the technical complexity
of the work and the number of stakeholders involved.
AI-led business processes – getting the balance right between business impact and staff satisfaction
While only seen in pockets today, AI-led automation is showing signs of
scaling in ways that would have a larger impact on business processes. In
addition, businesses across
multiple industries are predicted to focus more on the value add that can
only be contributed by human employees. According to research from Boston
Consulting Group, just 30 per cent of AI investment is spent on algorithms
and technologies, while the remaining 70 per cent has gone towards embedding
AI into business processes and agile ways of working. ... The expertise
within AIM Reply, alongside that of its partner companies, has been vital
in helping organisations across retail, consumer packaged goods,
manufacturing, logistics, financial services and insurance drive value from
evolving AI capabilities. Focused on serving as a boutique for AI and
hyperautomation platforms and solutions, the company encourages its clients
to adopt an end-to-end approach that focuses on business goals and business
benefits rather than use cases, enabling the business to make informed
decisions based on data.
AI requirements exceed infrastructure capabilities for many IT teams, study finds
Companies lacking in the proper hardware to do AI training have two options:
make a massive investment in hardware or turn to cloud service providers for
AI-as-a-service, which most of the top cloud service providers now offer.
Rather than make the million-dollar investment in hardware, an enterprise
could upload the data to be processed and the cloud service provider can do
the heavy lifting. The enterprise could take the trained models back
when the processing is done. Customers often will opt for end-to-end
solutions from AI vendors in the cloud, especially initially, “because they
make it easy for the customers with a simple button,” Voruganti said. But
variable cloud costs – which enterprises incur with each read or write to
cloud-based data, or with every data extraction, for example – may
cause IT teams to reconsider that approach. Voruganti said he’s seeing
companies choose to place foundation models with different cloud service
providers based on their areas of expertise.
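A back-of-the-envelope comparison shows why those variable costs matter; every figure below is a made-up assumption for illustration:

```python
# Back-of-the-envelope: when does pay-per-use cloud spend overtake a
# one-time hardware purchase? Every figure here is a made-up assumption.
def months_to_break_even(hardware_cost, monthly_compute,
                         monthly_io, monthly_egress):
    monthly_cloud = monthly_compute + monthly_io + monthly_egress
    return hardware_cost / monthly_cloud

# Hypothetical: a $1M on-prem cluster vs. ~$60k/month of cloud charges,
# where per-read/write I/O and data-extraction (egress) fees are the
# variable part that grows with usage.
print(months_to_break_even(1_000_000, 45_000, 10_000, 5_000))  # ~16.7 months
```

Because the I/O and egress terms scale with usage rather than staying flat, heavy AI workloads can pull that break-even point forward, which is exactly what prompts IT teams to reconsider.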
Risk Management of Human and Machine Identity in a Zero Trust Security Context
While humans and their associated accounts are often the primary targets of
security measures, they merely represent the activity of the machines they
interact with. In a Zero Trust deployment, embracing the concept of "machine
as proxy human" becomes crucial. This approach allows organizations to apply
security rules and surveillance to all devices, treating each device as if a
malicious human were operating behind it. By considering machines as proxy
humans within the context of Zero Trust, organizations can extend security
measures to encompass all devices and systems within their environment. This
includes user devices, servers, IoT devices, and other interconnected
components. Organizations can enforce strict access controls by treating
machines as potential threat actors, applying behavioral analytics, and
continuously monitoring for suspicious activities or deviations from
expected behavior. This shift in mindset enables organizations to
proactively detect and respond to potential security threats, regardless of
whether they originate from human actors or compromised machines.
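As a minimal sketch of the "machine as proxy human" idea, the snippet below runs the same trust check for every identity kind; the identity model, signal names, and threshold are illustrative assumptions, not a specific Zero Trust product's design:

```python
# Minimal sketch of "machine as proxy human": the same Zero Trust check
# runs for human and machine identities alike. The identity model and
# threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Identity:
    name: str
    kind: str            # "human", "server", "iot", ...
    trust_score: float   # from behavioral analytics, 0.0-1.0

def authorize(identity: Identity, resource: str, min_trust=0.7) -> bool:
    # No identity kind is exempt: a server or IoT device is screened
    # the same way a human account would be.
    if identity.trust_score < min_trust:
        print(f"deny {identity.kind} '{identity.name}' -> {resource}")
        return False
    return True

# A compromised build server is denied exactly as a risky user would be.
authorize(Identity("build-agent-7", "server", trust_score=0.35), "prod-db")
```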
Quote for the day:
"Leadership is absolutely about inspiring action, but it is also about
guarding against mis-action." -- Simon Sinek