What is generative AI and why is it so popular?
Generative AI simply refers to AI algorithms that generate or create an output, such as text, photos, video, code, data, and 3D renderings, from the data they are trained on. The premise of generative AI is to create content, as opposed to other
on. The premise of generative AI is to create content, as opposed to other
forms of AI, which might be used for other purposes, such as analysing data or
helping to control a self-driving car. ... Machine learning refers to the
subset of AI that teaches a system to make a prediction based on data it's
trained on. An example of this kind of prediction is when DALL-E is able to
create an image based on the prompt you enter by discerning what the prompt
actually means. Generative AI is, therefore, a machine-learning framework. ...
Generative AI is used in any algorithm/model that utilizes AI to output a
brand new attribute. Right now, the most prominent examples are ChatGPT and
DALL-E, as well as any of their alternatives. Another example is MusicLM,
Google's unreleased AI text-to-music generator. An additional in-development
project is Google's Bard.
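To make that prompt-in, content-out idea concrete, here is a minimal sketch of generating new text from a prompt using the open-source Hugging Face transformers library; the GPT-2 model, prompt, and generation settings are illustrative choices, not something any of the articles above prescribe.

```python
# Minimal text-generation sketch using the Hugging Face transformers library.
# The model (GPT-2), prompt, and settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with new text it creates from patterns
# learned during training -- the core idea behind generative AI.
result = generator("Generative AI is popular because", max_new_tokens=40)
print(result[0]["generated_text"])
```

The same pattern is what systems like ChatGPT apply at vastly larger scale, with far bigger models and training corpora.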
openIDL: The first insurance Open Governance Network and why the industry needs it
To date, openIDL’s member community includes carrier premier members:
Travelers, The Hartford, The Hanover, and Selective Insurance; state regulator
and DOI members; infrastructure partners; associate members; and other
non-profit organizations, government agencies, and research/academic
institutions. openIDL’s network is built on Hyperledger Fabric, an LF
distributed ledger software project. Hyperledger Fabric is intended as a
foundation for developing applications or solutions with a modular
architecture. The technology allows components, such as consensus and
membership services, to be plug-and-play. Its modular and versatile design
satisfies a broad range of industry use cases and offers a unique approach to
consensus that enables performance at scale while preserving privacy. For the
last few years, a running technology joke has been “describe your problem, and
someone will tell you blockchain is the solution.” As funny as this is, what’s
not funny is the truth behind the joke, and the insurance industry is
certainly one that fell head over heels for the blockchain hype.
Self-healing endpoints key to consolidating tech stacks, improving cyber-resiliency
Just as enterprises trust silicon-based zero-trust security over quantum
computing, the same holds for self-healing embedded in an endpoint’s silicon.
Forrester analyzed just how valuable self-healing in silicon is in its report,
The Future of Endpoint Management. Forrester’s Andrew Hewitt, the report’s
author, says that “self-healing will need to occur at multiple levels: 1)
application; 2) operating system; and 3) firmware. Of these, self-healing
embedded in the firmware will prove the most essential because it will ensure
that all the software running on an endpoint, even agents that conduct
self-healing at an OS level, can effectively run without disruption.” Forrester
interviewed enterprises with standardized self-healing endpoints that rely on
firmware-embedded logic to reconfigure themselves autonomously. Its study found
that Absolute’s reliance on firmware-embedded persistence delivers a secured,
undeletable digital tether to every PC-based endpoint. Organizations told
Forrester that Absolute’s Resilience platform is noteworthy in providing
real-time visibility and control of any device, on a network or not, along with
detailed asset management data.
How enterprises can use ChatGPT and GPT-3
It is not possible to customize ChatGPT, since the language model on which it
is based cannot be accessed. Though its creator company is called OpenAI,
ChatGPT is not an open-source software application. However, OpenAI has made
the GPT-3 model, as well as other large language models (LLMs), available. LLMs
are machine learning applications that can perform a number of natural
language processing tasks. “Because the underlying data is specific to the
objectives, there is significantly more control over the process, possibly
creating better results,” Gartner said. “Although this approach requires
significant skills, data curation and funding, the emergence of a market for
third-party, fit-for-purpose specialized models may make this option
increasingly attractive.” ... ChatGPT is based on a smaller text model,
with a capacity of around 117 million parameters. GPT-3, which was trained on
a massive 45TB of text data, is significantly larger, with a capacity of 175
billion parameters, Muhammad noted. ChatGPT is also not connected to the
internet, and it can occasionally produce incorrect answers.
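Since ChatGPT itself cannot be customized but GPT-3-family models are exposed through OpenAI's API, the basic interaction looks roughly like the sketch below, written against the pre-1.0 openai Python package; the model name, prompt, and sampling parameters are illustrative only.

```python
# Sketch of querying a GPT-3-family model via OpenAI's API (openai package < 1.0).
# Model name, prompt, and sampling parameters are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3-family completion model
    prompt="Explain in one sentence how ChatGPT differs from GPT-3.",
    max_tokens=60,
    temperature=0.2,
)
print(response["choices"][0]["text"].strip())
```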
Flaws in industrial wireless IoT solutions can give attackers deep access into OT networks
While many of these flaws are still in the process of responsible disclosure,
one that has already been patched impacts Sierra Wireless AirLink routers and
is tracked as CVE-2022-46649. This is a command injection vulnerability in the IP
logging feature of ACEManager, the web-based management interface of the
router, and is a variation of another flaw found by researchers from Talos in
2018 and tracked as CVE-2018-4061. It turns out that the filtering put in
place by Sierra to address CVE-2018-4061 did not cover all exploit scenarios
and researchers from Otorio were able to bypass it. In CVE-2018-4061,
attackers could attach additional shell commands to the tcpdump command
executed by the ACEManager iplogging.cgi script by using the -z flag. This
flag is supported by the command-line tcpdump utility and is used to pass
so-called postrotate commands. Sierra fixed it by enforcing a filter that
removes any -z flag from the command passed to the iplogging script if it is
followed by a space, tab, form feed, or vertical tab, which would block, for
example, "tcpdump -z reboot".
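To illustrate why that kind of separator-keyed filtering is fragile, here is a hypothetical Python re-creation of the filter as described above; it is not Sierra's actual code, and the surviving input shown is an illustration rather than necessarily the bypass Otorio used.

```python
# Hypothetical re-creation of a filter that strips "-z" only when it is followed
# by a space, tab, form feed, or vertical tab (illustrative only, not Sierra's code).
import re

FILTER = re.compile(r"-z(?=[ \t\f\v])")

def sanitize(cmd: str) -> str:
    """Remove the -z flag when one of the listed separator characters follows it."""
    return FILTER.sub("", cmd)

print(sanitize("tcpdump -z reboot"))   # "-z" is stripped, so "reboot" no longer runs as a postrotate command
print(sanitize("tcpdump -zreboot"))    # no listed separator after "-z", so the flag passes through unchanged
```

Any way of presenting the flag that the enumerated separator characters do not catch leaves the underlying command injection reachable, which is why deny-list filtering of shell arguments tends to be brittle.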
Are Your Development Practices Introducing API Security Risks?
APIs are a prime target for such attacks because cybercriminals can overload
the API endpoint with unwanted traffic. Ultimately, the attacker’s goal is to
use the API as a blueprint to find internal objects or database structures to
exploit. For example, a vulnerable API endpoint backend that connects to a
frontend service can expose end users to risk. One researcher even discovered
a way to abuse automobiles’ APIs and telematics systems to execute various
tasks remotely, such as locking the vehicle. In the past, bot management
technologies, like CAPTCHA, were developed to block bots’ access to web pages
that were intended only for human users. However, that approach to security
assumes that all automated traffic is malicious. As application environments
have matured and multiplied, automation has become essential for executing simple
functions. This means organizations cannot rely on simplistic web
application firewall rules that block all traffic from automated sources by
default. Instead, they need to quickly identify and differentiate good and bad
bot traffic.
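The shift described above, from blocking all automated traffic to telling good bots from bad ones, can be sketched with a deliberately simple and entirely hypothetical screening heuristic; production bot management relies on much richer behavioral and fingerprinting signals, and the field names and thresholds here are made up for illustration.

```python
# Hypothetical sketch of differentiating "good" and "bad" bot traffic instead of
# blocking all automation outright. Field names and thresholds are illustrative.
from dataclasses import dataclass

KNOWN_GOOD_AGENTS = {"monitoring-probe", "partner-integration", "search-crawler"}

@dataclass
class Request:
    user_agent: str
    requests_last_minute: int
    authenticated: bool

def classify(req: Request) -> str:
    if req.user_agent in KNOWN_GOOD_AGENTS and req.authenticated:
        return "allow"       # known, authenticated automation
    if req.requests_last_minute > 300:
        return "block"       # volumetric abuse against the API endpoint
    if req.user_agent not in KNOWN_GOOD_AGENTS and not req.authenticated:
        return "challenge"   # unknown automation: step-up verification
    return "allow"

print(classify(Request("partner-integration", 40, True)))   # allow
print(classify(Request("scraper-x", 900, False)))           # block
```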
Zero-shot learning and the foundations of generative AI
One application of few-shot learning techniques is in healthcare, where
medical images with their diagnoses can be used to develop a classification
model. “Different hospitals may diagnose conditions differently,” says Talby.
“With one- or few-shot learning, algorithms can be prompted by the clinician,
using no code, to achieve a certain outcome.” But don’t expect fully automated
radiological diagnoses too soon. Talby says, “While the ability to
automatically extract information is highly valuable, one-, few-, or even
zero-shot learning will not replace medical professionals anytime soon.”
Pandurang Kamat, CTO at Persistent, shares several other potential
applications. “Zero-shot and few-shot learning techniques unlock opportunities
in areas such as drug discovery, molecule discovery, zero-day exploits, case
deflection for customer-support teams, and others where labeled training data
may be hard to come by.” Kamat also warns of current limitations.
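As a concrete illustration of zero-shot prediction, the sketch below uses the Hugging Face zero-shot-classification pipeline to label a support ticket against categories it was never explicitly trained on; the example text and candidate labels are illustrative, loosely echoing the case-deflection use case Kamat mentions.

```python
# Minimal zero-shot classification sketch using the Hugging Face transformers library.
# Example text and candidate labels are illustrative; no task-specific training data is used.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The support ticket asks how to reset a forgotten account password.",
    candidate_labels=["billing", "account access", "bug report"],
)
print(result["labels"][0], round(result["scores"][0], 3))  # highest-scoring label first
```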
PWC highlights 11 ChatGPT and generative AI security trends to watch in 2023
“Many of the interesting business use cases emerge when you consider that you
can further train (fine-tune) generative AI models with your own content,
documentation and assets so it can operate on the unique capabilities of your
business, in your context. In this way, a business can extend generative AI in
the ways they work with their unique IP and knowledge. “This is where security
and privacy become important. For a business, the ways you prompt generative
AI to generate content should be private for your business. Fortunately, most
generative AI platforms have considered this from the start and are designed
to enable the security and privacy of prompts, outputs and fine-tuning
content. ... “Using generative AI to innovate the audit has amazing
possibilities! Sophisticated generative AI has the ability to create responses
that take into account certain situations while being written in simple,
easy-to-understand language.
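The "further train (fine-tune) with your own content" step referred to above amounts to supplying prompt/completion pairs drawn from your own documentation to the provider's fine-tuning endpoint. A rough sketch against the legacy OpenAI fine-tunes API (pre-1.0 openai Python package) might look like the following; the file name, example record, and base model are purely illustrative.

```python
# Rough sketch of fine-tuning a base model on your own content
# (legacy OpenAI fine-tunes API, openai package < 1.0). File name and model are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Training data: JSONL of prompt/completion pairs drawn from your own documentation, e.g.
# {"prompt": "How do I file a claim?", "completion": " Open the portal, choose 'New claim'..."}
upload = openai.File.create(file=open("company_docs.jsonl", "rb"), purpose="fine-tune")

job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"], job["status"])
```

Keeping that training file, and the prompts sent to the resulting model, under the business's own access controls is exactly the security and privacy concern the passage raises.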
What leaders get wrong about responsibility
One way of demonstrating responsibility is through the process of asking and
answering questions. Many get at least one part of the process right: by
responding to the questions received from their employees, leaders believe
that they are showing themselves to be reliable and trustworthy. This isn’t
too far off base. The word responsibility, after all, stems from the Latin
respondere, meaning to respond or answer to. Unfortunately, by not asking questions
themselves, leaders prevent employees from demonstrating the same kind of
reliable and trustworthy behavior—and that makes it harder to embed the
locally owned responsibility that they are looking for. ... When leaders use
questions to assume responsibility themselves, they think, talk, and behave in
a way that puts them at the center of attention (see the left side of the
figure above). The questions they ask are quiz or test questions designed to
confirm that the respondents see the world in the same way the leader
does—e.g., “What are the components of a good marketing campaign?”
OT Network Security Myths Busted in a Pair of Hacks
In one set of findings, a research team from Forescout Technologies was able
to bypass safety and functional guardrails in an OT network and move laterally
across different network segments at the lowest levels of the network: the
controller level (aka Purdue level 1), where PLCs live and run the physical
operations of an industrial plant. The researchers used two newly disclosed
Schneider Modicon M340 PLC vulnerabilities that they found — a remote code
execution (RCE) flaw and an authentication bypass vulnerability — to breach
the PLC and take the attack to the next level by pivoting from the PLC to its
connected devices in order to manipulate them to perform nefarious physical
operations. "We are trying to dispel the notion that you hear among asset
owners and other parties that Level 1 devices and Level 1 networks are somehow
different from regular Ethernet networks and Windows [machines] and that you
cannot move through them in very similar ways," says Jos Wetzels.
Quote for the day:
"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley