'FraudGPT' Malicious Chatbot Now for Sale on Dark Web
Both WormGPT and FraudGPT can help attackers use AI to their advantage when
crafting phishing campaigns, generating messages aimed at pressuring victims
into falling for business email compromise (BEC), and other email-based scams,
for starters. FraudGPT also can help threat actors do a slew of other bad
things, such as: writing malicious code; creating undetectable malware; finding
non-VBV bins; creating phishing pages; building hacking tools; finding hacking
groups, sites, and markets; writing scam pages and letters; finding leaks and
vulnerabilities; and learning to code or hack. Even so, it does appear that
helping attackers create convincing phishing campaigns is still one of the main
use cases for a tool like FraudGPT, according to Netenrich. ... As phishing
remains one of the primary ways that cyberattackers gain initial entry onto an
enterprise system to conduct further malicious activity, it's essential to
implement conventional security protections against it. These defenses can still
detect AI-enabled phishing, and, more importantly, subsequent actions by the
threat actor.
Key factors for effective security automation
A few factors generally drive the willingness to automate security. One factor
is whether the risk of not automating exceeds the risk of an automation going wrong:
If you conduct business in a high-risk environment, the potential for damage
when not automating can be higher than the risk of triggering an automated
response based on a false positive. Financial fraud is a good example, where
banks routinely and automatically block transactions they deem suspicious,
because a manual process would be too slow. Another factor is when the damage
potential of an automation going wrong is low. For example, there is no
potential damage when trying to fetch a non-existent file from a remote system
for forensic analysis. But what matters most is how reliable the automation is. For
example, many threat actors today use living-off-the-land techniques, abusing common
and benign system utilities such as PowerShell. From a detection perspective, there
are no uniquely identifiable characteristics such as a file hash or a malicious
binary to inspect in a sandbox.
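To make that distinction concrete, here is a minimal, hypothetical sketch of behavior-based detection: because the binary itself (powershell.exe) is benign, the logic keys on the command line rather than on a file hash. The patterns and the threshold are illustrative assumptions, not a vetted rule set.

```python
import re

# Illustrative patterns for suspicious PowerShell usage (assumptions, not production rules).
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\s",        # base64-encoded payload
    r"downloadstring\(",            # classic download cradle
    r"invoke-expression|\biex\b",   # executing arbitrary strings
    r"-nop\b.*-w\s+hidden",         # no profile, hidden window
]

def score_powershell_command(cmdline: str) -> int:
    """Count how many suspicious behaviors appear in a PowerShell command line."""
    lowered = cmdline.lower()
    return sum(bool(re.search(pattern, lowered)) for pattern in SUSPICIOUS_PATTERNS)

# Hypothetical command line taken from endpoint telemetry.
sample = "powershell.exe -nop -w hidden -enc SQBFAFgA..."
if score_powershell_command(sample) >= 2:   # threshold chosen arbitrarily for illustration
    print("Flag for automated triage")
```

An automated response built on signals like these is only as good as the signals themselves, which is why reliability dominates the decision to automate.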
API-First Development: Architecting Applications with Intention
More traditionally, tech companies often started with a particular user
experience in mind when setting out to develop a product. The API was then
developed in a more or less reactive way to transfer all the necessary data
required to power that experience. While this approach gets you out the door
fast, it isn’t very long before you probably need to go back inside and
rethink things. Without an API-first approach, you feel like you’re moving
really fast, but it’s possible that you’re just running from the front door to
your driveway and back again without even starting the car. API-first
development flips this paradigm by treating the API as the foundation for the
entire software system. Let’s face it: you are probably going to want to power more
than one developer, maybe several different teams working on multiple applications,
and perhaps an unknown number of third-party developers. Under these fast-paced and highly
distributed conditions, your API cannot be an afterthought.
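As a rough illustration of contract-first thinking, the sketch below defines the API's resource model and endpoint before any particular user experience exists. It assumes FastAPI, and the resource names and fields are entirely hypothetical.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# The contract comes first: every consumer (web UI, mobile app, third-party
# integration) builds against this shared shape rather than against one team's screen.
class Order(BaseModel):
    id: int
    status: str
    total_cents: int

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: int) -> Order:
    # Placeholder implementation; the real data source is decided later.
    return Order(id=order_id, status="pending", total_cents=0)
```

In practice the contract might equally be an OpenAPI document written before any code at all; the point is that the interface, not the first UI, is the stable foundation.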
What We Can Learn from Australia’s 2023-2030 Cybersecurity Strategy
One of the challenges facing enterprises in Australia today is a lack of
clarity in terms of cybersecurity obligations, both from an operational
perspective and as organizational directors. Though there are a range of
implicit cybersecurity obligations designated to Australian enterprises and
nongovernment entities, there is a pressing need for more explicitly stated
obligations to increase national cyberresilience. There are also
opportunities to simplify and streamline existing regulatory frameworks to
ensure easy adoption of those frameworks and cybersecurity obligations. ...
Another important aspect of the upcoming Australian Cybersecurity Strategy
is to strengthen international cyberleadership, enabling it to seize
opportunities and address the challenges presented by the shifting
cyberenvironment. To keep up with new and emerging technologies, this
cybersecurity strategy aims to take tangible steps to shape global thinking
about cybersecurity.
Is your data center ready for generative AI?
Generative AI applications create significant demand for computing power in
two phases: training the large language models (LLMs) that form the core of
generative AI systems, and then operating the application with these trained
LLMs, says Raul Martynek, CEO of data center operator DataBank. “Training
the LLMs requires dense computing in the form of neural networks, where
billions of language or image examples are fed into a system of neural
networks and repeatedly refined until the system ‘recognizes’ them as well
as a human being would,” Martynek says. Neural networks require tremendously
dense high-performance computing (HPC) clusters of GPU processors running
continuously for months, or even years at a time, Martynek says. “They are
more efficiently run on dedicated infrastructure that can be located close
to the proprietary data sets used for training,” he says. The second phase
is the “inference process” or the use of these applications to actually make
inquiries and return data results.
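A toy sketch of the two phases Martynek describes, using a stand-in model rather than a real LLM: the training loop is the compute-heavy work that runs for a very long time on dense GPU clusters, while inference serves individual queries against the already-trained weights. Everything here is illustrative.

```python
import torch
import torch.nn as nn

# Phase 1: training -- compute-heavy, repeated for a huge number of steps.
# A toy linear model stands in for an LLM; real training is distributed across many GPUs.
model = nn.Linear(128, 128)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(1000):                  # real LLM training runs for weeks or months
    batch = torch.randn(32, 128)          # placeholder for tokenized training data
    loss = loss_fn(model(batch), batch)   # toy objective; LLMs use next-token prediction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Phase 2: inference -- answering a single query against the trained weights.
# No gradients are needed, so the per-request compute and memory footprint is far smaller.
model.eval()
with torch.no_grad():
    query = torch.randn(1, 128)           # placeholder for a user prompt
    answer = model(query)
```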
Siloed data: the mountain of lost potential
Given that AI’s growing capabilities for handling customer service are only made
possible through data, the risk of not breaking down internal data siloes is
sizeable, and not just in terms of missed opportunities. Companies could also
see a decline in the speed and quality of their customer service as contact
centre agents need to spend longer navigating multiple platforms and
dashboards to find the information needed to help answer customers’ queries.
Eliminating data siloes requires educating everyone in the business to
understand the necessity of sharing data through an open culture and
encouraging the data sides of operations to co-ordinate efforts, align
visions and achieve goals. The synchronisation of business operations with
customer experience, alongside adopting a data-driven approach, can produce
significant benefits such as increased customer spending. ... Data, working
for and with AI, must be placed at the centre of the business model. This
means getting board buy-in to establish a data factory run by qualified data
engineers and analysts who are capable of driving the collection and use of
data within the organisation.
An Overview of Data Governance Frameworks
Data governance frameworks are built on four key pillars that ensure the
effective management and use of data across an organization. These pillars
ensure data is accurate, can be effectively combined from different sources,
is protected and used in compliance with laws and regulations, and is stored
and managed in a way that meets the needs of the organization. ...
Furthermore, a lack of governance can lead to confusion and duplication of
effort, as different departments or individual users try to manage data with
their own methods. A well-designed data governance framework ensures all
users understand the rules for managing data and that there is a clear
process for making changes or additions to the data. It unifies teams,
improving communication between different teams and allowing different
departments to share best practices. In addition, a data governance
framework ensures compliance with laws and regulations. From HIPAA to GDPR,
there are a multitude of data privacy laws and regulations all over the
world. Running afoul of these legal provisions is expensive in terms of
fines and settlement costs and can damage an organization’s reputation.
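One lightweight way to make such rules explicit is to record them as machine-readable metadata alongside each dataset. The sketch below is purely illustrative: the field names, policy values, and catalog entries are invented, not part of any prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class DatasetPolicy:
    """Governance metadata attached to a dataset (illustrative fields only)."""
    name: str
    owner: str              # accountable team or data steward
    classification: str     # e.g. "public", "internal", "restricted"
    retention_days: int     # how long the data may be kept
    regulations: list[str]  # e.g. ["GDPR"], ["HIPAA"]

# Hypothetical catalog entries every department consults before using the data.
catalog = [
    DatasetPolicy("customer_profiles", "crm-team", "restricted", 730, ["GDPR"]),
    DatasetPolicy("web_analytics", "marketing", "internal", 365, []),
]

for entry in catalog:
    print(f"{entry.name}: owned by {entry.owner}, {entry.classification}, "
          f"retain {entry.retention_days} days, regs: {entry.regulations or 'none'}")
```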
Governance — the unsung hero of ESG
What's interesting is that for the most part, they're all at different
stages of transformation and managing the risks of transformation. A board
has four responsibilities: observing performance; approving and providing
resources to fund the strategy; hiring and developing the succession plan;
and risk management. Depending on where you are in a normal cycle of a
business or the market, the board is involved in these four. Also, I take
lessons that I've learned at other boards and apply them possibly to Baker
Hughes' situation and vice versa: take some of the lessons that I'm learning
and the things that I'm hearing in the Baker Hughes situation —
unattributed, of course — and bring them into other boards. Sometimes there's
a nice element of sharing. As you know, Baker Hughes has a very strong Board
and I am a good student at taking down good and thoughtful questions from
board members and bringing that to other company boards, if appropriate.
Why whistleblowers in cybersecurity are important and need support
“Governments should have a whistleblower program with clear instructions on
how to disclose information, then offer the resources to enable procedures
to encourage employees to come forward and guarantee a safe reporting
environment,” she says. Secondly, nations need to upgrade their legislation
to include strong anti-retaliation protection against tech workers, making
it unlawful for various entities to engage in reprisal. This includes
job-related pressure, harassment, doxing, blacklisting, and retaliatory
investigations. ... To further increase the chances of reporting, employees can be offered
regular training sessions in which they are informed about the importance of
coming forward on cybersecurity issues, the ways to report wrongdoing, and
the protection mechanisms they could access. Moreover, leadership should
explain that it has zero tolerance for retaliation. “Swift action should be
taken if any instances of retaliation come to light,” according to Empower
Oversight. The message leadership should convey is that issues are taken
seriously and that C-level executives are open to conversation if the
situation requires it.
Cloud Optimization: Practical Steps to Lower Your Bills
Optimization is always an iterative process, requiring continual adjustment
as time goes on. However, there are many quick wins and strategies that you
can implement today to refine your cloud footprint: Unused virtual machines
(VMs), storage and bandwidth can lead to unnecessary expenses. Conducting
periodic evaluations of your cloud usage and identifying such underutilized
resources can effectively minimize costs. Check your cloud console now. You
might just find a couple of VMs sitting there idle, accidentally left behind
after the work was done. Temporary backup resources, such as VMs and
storage, are frequently used for storing data and application backups.
Automate the deletion process of these temporary backup resources to save
money. Selecting the appropriate tier entails choosing the cloud resource
that aligns best with your requirements. For instance, if you anticipate a
high volume of traffic and demand, opting for a high-end VM would be
suitable. Conversely, for smaller projects, a lower-end VM might
suffice.
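As a starting point for those periodic evaluations, a small script along these lines can surface idle VMs and stale temporary backups. The example assumes AWS with boto3 and credentials already configured; the 30-day cutoff is an arbitrary illustration, and the actual deletion is left commented out.

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")  # assumes AWS credentials and region are configured

# Find stopped instances -- likely candidates for VMs left behind after the work was done.
stopped = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["stopped"]}]
)
for reservation in stopped["Reservations"]:
    for instance in reservation["Instances"]:
        print("Idle candidate:", instance["InstanceId"])

# Find snapshots older than 30 days -- temporary backups that may be safe to remove.
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snap["StartTime"] < cutoff:
        print("Stale backup candidate:", snap["SnapshotId"])
        # ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])  # uncomment to actually delete
```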
Quote for the day:
"Your job gives you authority. Your
behavior gives you respect." -- Irwin Federman