5 Ways That AI Is Set To Transform Cybersecurity
Cybersecurity has long been notoriously siloed, with organizations installing
many different tools and products, often poorly interconnected. No matter how
hard vendors and organizations work to integrate tools, coalescing all relevant
cybersecurity information into one place remains a big challenge. But AI offers
a way to combine multiple data sets from many disparate sources and provide a
truly unified view of an organization’s security posture, with actionable
insights. And with generative AI, gaining those insights is so easy, a matter of
simply asking the system questions such as “What are the top three things I
could do today to reduce risk?” or “What would be the best way to respond to
this incident report?” AI has the potential to consolidate security feeds in a
way the industry has never been able to quite figure out. Generative AI will
blow up the very nature of data infrastructure. Think about it: All the
different tools that organizations use to store and manage data are built for
humans. Essentially, they’re designed to segment information and put it in
various electronic boxes for people to retrieve later. It’s a model based on how
the human mind works.
Microservices Resilient Testing Framework
Resilience in microservices refers to the system's ability to handle and recover
from failures, continue operating under adverse conditions, and maintain
functionality despite challenges like network latency, high traffic, or the
failure of individual service components. Microservices architectures are
distributed by nature, often involving multiple, loosely coupled services that
communicate over a network. This distribution often increases the system's
exposure to potential points of failure, making resilience a critical factor. A
resilient microservices system can gracefully handle partial failures, prevent
them from cascading through the system, and ensure overall system stability and
reliability. For resilience, it is important to think in terms of positive and
negative testing scenarios. The right combination of positive and negative
testing plays a crucial role in achieving this resilience, allowing teams to
anticipate and prepare for a range of scenarios and maintaining a robust,
stable, and trustworthy system. For this reason, the rest of the article will be
focusing on negative and positive scenarios for all our testing activities.
Skynet Ahoy? What to Expect for Next-Gen AI Security Risks
From a cyberattack perspective, threat actors already have found myriad ways
to weaponize ChatGPT and other AI systems. One way has been to use the models
to create sophisticated business email compromise (BEC) and other phishing
attacks, which require the creation of socially engineered, personalized
messages designed for success. "With malware, ChatGPT enables cybercriminals
to make infinite code variations to stay one step ahead of the malware
detection engines," Harr says. AI hallucinations also pose a significant
security threat and allow malicious actors to arm LLM-based technology like
ChatGPT in a unique way. An AI hallucination is a plausible response by the AI
that's insufficient, biased, or flat-out not true. "Fictional or other
unwanted responses can steer organizations into faulty decision-making,
processes, and misleading communications," warns Avivah Litan, a Gartner vice
president. Threat actors also can use these hallucinations to poison LLMs and
"generate specific misinformation in response to a question," observes Michael
Rinehart, vice president of AI at data security provider Securiti.
Cybersecurity teams need new skills even as they struggle to manage legacy systems
To stay ahead, though, security leaders should incorporate prompt engineering
training for their team, so they can better understand how generative AI
prompts function, the analyst said. She also underscored the need for
penetration testers and red teams to include prompt-driven engagements in
their assessment of solutions powered by generative AI and large language
models. They need to develop offensive AI security skills to ensure models are
not tainted or stolen by cybercriminals seeking intellectual property. They
also have to ensure sensitive data used to train these models are not exposed
or leaked, she said. In addition to the ability to write more convincing
phishing email, generative AI tools can be manipulated to write malware
despite limitations put in place to prevent this, noted Jeremy Pizzala, EY's
Asia-Pacific cybersecurity consulting leader. He noted that researchers,
including himself, have been able to circumvent ethical restrictions that
guide platforms such as ChatGPT and prompt them to write malware.
The relationship between cloud FinOps and security
Established FinOps and cybersecurity teams should annually evaluate their
working relationship as part of continuous improvement. This collaboration
helps ensure that, as practices and tools evolve, the correct FinOps data is
available to cybersecurity teams as part of their monitoring, incident
response and post-incident forensics. The FinOps Foundation doesn't mention
cybersecurity in its FinOps Maturity Model. But, in all rights, FinOps and
cybersecurity collaboration indicates a maturing organization in the model's
Run phase. Ideally, moves to establish such collaboration should show
themselves in the Walk stage. ... Building a relationship between the FinOps
and cybersecurity teams should start early when an organization chooses a
FinOps tool. A FinOps team can better forecast expenses, plan budget
allocation and avoid unnecessary costs by understanding security requirements
and constraints. These forecasts result in a more cost-effective and
financially efficient cloud operation, so plan for some level of
cross-training between the teams.
What is GRC? The rising importance of governance, risk, and compliance
Like other parts of enterprise operations, GRC comprises a mix of people,
process, and technology. To implement an effective GRC program, enterprise
leaders must first understand their business, its mission, and its objectives,
according to Ameet Jugnauth, the ISACA London Chapter board vice president and
a member of the ISACA Emerging Trends Working Group. Executives then must
identify the legal and regulatory requirements the organization must meet and
establish the organization’s risk profile based on the environment in which it
operates, he says. “Understand the business, your business environment
(internal and external), your risk appetite, and what the government wants you
to achieve. That all sets your GRC,” he adds. The roles that lead these
activities vary from one organization to the next. Midsize to large
organizations typically have C-level executives — namely a chief governance
officer, chief risk officer, and chief compliance officer — to oversee these
tasks, McKee says. These executive lead risk or compliance departments with
dedicated teams.
Revolutionising Fraud Detection: The Role of AI in Safeguarding Financial Systems
Conventional fraud detection methods, primarily rule-based systems, and human
analysis, have proven increasingly inadequate in the face of evolving fraud
tactics. Rule-based systems, while effective in identifying simple patterns,
often struggle to adapt to the ever-changing landscape of fraud. Fraudsters
have stronger motivation and they evolve faster than the rules in the rules
engine. ... The same volumes of data that are overwhelming for traditional
fraud detection systems are fuel for AI. With its ability to learn from vast
amounts of data and identify complex patterns, AI is poised to revolutionize
the fight against fraud. ... While AI offers immense potential, it’s crucial
to acknowledge the challenges associated with its adoption. Data privacy
concerns, ethical considerations around algorithmic bias, and the need for
robust security measures are all critical aspects that demand careful
attention. As AI opens new frontiers in fraud prevention, unregulated AI
technology such as deepfake in the wrong hands could also enable sophisticated
impersonation scams. However, the benefits of AI far outweigh the
challenges.
API security in 2024: Predictions and trends
The rapid rate of change of APIs means organizations will always have
vulnerabilities that need to be remediated. As a result, 2024 will usher in a
new era where visibility will be a priority for API security strategies.
Preventing attackers from entering the perimeter is not a 100% foolproof
strategy. Whereas having real-time visibility into a security environment will
enable rapid responses from security teams that neutralize threats before they
impact operations or extract valuable data. ... With the widespread use of
APIs, especially in sectors such as financial services, regulators are looking
to encourage transparency in APIs. This means data privacy concerns and
regulations will continue to impact API use in 2024. In response,
organizations are becoming weary of having third parties hold and access their
data to conduct security analyses. We expect to see a shift in 2024 where
organizations will demand running security solutions locally within their own
environments. Self-managed solutions (either on-premise or private cloud),
eliminate the need to filter, redact, and anonymize data before it’s
stored.
The Terrapin Attack: A New Threat to SSH Integrity
Microsoft’s logic is that the impact on Win32-OpenSSH is limited This is a
major mistake. Microsoft’s decision allows unknown server-side implementation
bugs to remain exploitable in a Terrapin-like attack, even if the server got
patched to support “strict kex.” As one Windows user noted, “This puts
Microsoft customers at risk of avoidable Terrapin-style attacks targeting
implementation flaws of the server.” Exactly so. You see, for this protection
to be effective, both client and server must be patched. If one or the other
is vulnerable, the entire connection can still be attacked. So to be safe, you
must patch and update both your client and server SSH software. So, if you’re
Windows and you haven’t manually updated your workstations, their connections
are open to attack. While patches and updates are being released, the
widespread nature of this vulnerability means that it will take time for all
clients and servers to be updated. Because you must already have an MITM
attacker in place to be vulnerable, I wouldn’t go spend the holiday season
worrying myself sick. I mean, you’re sure you don’t already have a hacker
inside your system, right? Right!?
Supporting Privacy, Security and Digital Trust Through Effective Enterprise Data Management Programs
Those professionals responsible for supporting privacy efforts should
therefore prioritize effective enterprise data management because it is
integral to safeguarding individual’s privacy. A well-structured data
management framework works to ensure that personal information is handled
ethically and compliant with regulations, while fostering a culture of
responsible data stewardship within organizations. When done right, this
reinforces trust with stakeholders, serves as a differentiator in the
marketplace, improves visibility into data ecosystems, expands reliability of
data, and optimizes scalability and innovative go to market efforts. ... Most,
if not all, of the global data privacy laws and regulations require data to be
managed effectively. To comply with these laws and regulations, organizations
must first understand the data they collect, the purposes for its collection,
how it is used, how it is shared, how it is stored, how it is destroyed, and
so on. Only after organizations have a full understanding of their data
ecosystem can they begin to implement effective controls to both protect data
and preserve the ability of the data to achieve intended operational goals.
Quote for the day:
"Too many of us are not living our
dreams because we are living our fears." -- Les Brown
No comments:
Post a Comment