What is identity fabric immunity? Abstracting identity for better security
An identity fabric becomes an attractive option when it is merited, but adopting one before it is really called for adds unnecessary complexity. The key is knowing the tipping point. If a simple identity provider framework is doing the job with minimal friction, it is sufficient. When infrastructural complexity begins to cause serious difficulty within the organization, the security abstraction layer described by IFI offers a way forward, says Dmitry Sotnikov, chief product officer at Cayosoft. “Applications are now highly distributed, and users, partners, and customers log into systems from wherever they are, leaving security teams without an easily defined network and physical boundary to protect.” Signs that identity solutions are inadequate include difficulty in managing user access, account provisioning, and response to security incidents, both real and simulated. Managers may find it very hard to gain an overhead perspective on the security posture of the enterprise, and taking actions that affect security as a whole may be cumbersome or extremely challenging.
Cyber attacks on critical infrastructure show advanced tactics and new capabilities
The interconnectedness of critical infrastructure assets, devices, and systems with third parties throughout the software supply chain has made identifying attack paths more complex than ever before. This interconnectedness creates numerous potential entry points for attackers to exploit. Additionally, cyber adversaries now possess a range of new tactics. ... Recent attacks on entities like Colonial Pipeline and water treatment plants demonstrate the potential for malicious actors to cause real-world impacts with just a few clicks. Ransomware criminals are increasingly targeting industries that rely heavily on operational systems, knowing that downtime can result in significant financial losses. Ransomware-as-a-Service (RaaS) has further fueled the proliferation of ransomware attacks, making these attacks more accessible to a wider range of threat actors. It’s important to note that criminal ransomware operators don’t typically use the zero-days that make headlines, or cyberwarfare-level capabilities; they exploit known vulnerabilities that have been unpatched for years.
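The point about long-unpatched known vulnerabilities can be made concrete with a toy inventory check; every product name, version, and vulnerable pair below is made up for illustration:

```python
# Illustrative patch-gap check (data is invented): since ransomware
# operators mostly exploit flaws that have been public for years,
# comparing an asset inventory against a known-vulnerable list is a
# high-value first control.

KNOWN_VULNERABLE = {("vpn-gateway", "9.1"), ("file-transfer", "2.0")}

def patch_gaps(inventory: dict) -> list:
    """Return the inventory entries still running a known-vulnerable version."""
    return [name for name, version in inventory.items()
            if (name, version) in KNOWN_VULNERABLE]

print(patch_gaps({"vpn-gateway": "9.1", "file-transfer": "2.3"}))
# ['vpn-gateway']
```

In a real environment the known-vulnerable set would come from a vulnerability feed keyed by CVE, not a hand-maintained literal.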
Feds Ask Telcos: How Are You Combating Location Tracking?
The problems stem from the trust-based approach underpinning SS7, which is used
to secure 3G and earlier networks, and Diameter, which is used to secure 4G. As
detailed in a white paper from Swedish telecommunications giant Ericsson, both
protocols take a trust-based approach, assuming that any network elements
communicating with each other should be doing so. Even though Diameter is a
newer protocol, it too lacks built-in security safeguards. "Diameter does not encrypt
originating IP addresses during transport, which increases the risk of network
spoofing, where an attacker poses as a legitimate roaming partner on a network
to gain access to the network," the FCC said. Since SS7 and Diameter still serve
as "the foundation for mobile telephone networks, especially for roaming
capabilities to be able to interconnect networks," as networks expand their
coverage and new networks and more users appear, "the opportunity for a bad
actor to exploit SS7 and Diameter has increased," the FCC said. While the use of
protocols such as SS7 and Diameter can be restricted to secure tunnels, thus
making them more secure, the use of secure tunneling isn't mandatory, Ericsson
said.
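As a hedged illustration of the trust-based weakness described above, here is a toy Python model (not real SS7 or Diameter signaling; all partner names are hypothetical) contrasting face-value acceptance of a claimed origin with acceptance bound to an authenticated tunnel endpoint:

```python
# Toy interconnect model: in a purely trust-based scheme, whoever writes a
# legitimate partner's name into a message is believed. Binding the claimed
# origin to an authenticated transport identity (e.g. a secure tunnel
# endpoint) defeats that forgery.

LEGITIMATE_PARTNERS = {"operator-a.example", "operator-b.example"}

def accept_trust_based(msg: dict) -> bool:
    # Trust-based: the claimed origin is taken at face value.
    return msg["claimed_origin"] in LEGITIMATE_PARTNERS

def accept_tunnel_restricted(msg: dict, authenticated_peer: str) -> bool:
    # Tunnel-restricted: the claim must match the peer the tunnel
    # actually authenticated, so a forged claim no longer passes.
    return (msg["claimed_origin"] in LEGITIMATE_PARTNERS
            and msg["claimed_origin"] == authenticated_peer)

spoofed = {"claimed_origin": "operator-a.example"}
print(accept_trust_based(spoofed))                            # True (accepted)
print(accept_tunnel_restricted(spoofed, "attacker.example"))  # False (rejected)
```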
Avoiding the dangers of AI-generated code
As the adoption of AI tools to create code increases, organizations will have to
put in place the proper checks and balances to ensure the code they write is
clean—maintainable, reliable, high-quality, and secure. Leaders will need to
make clean code a priority if they want to succeed. Clean code—code that is
consistent, intentional, adaptable, and responsible—ensures top-quality software
throughout its life cycle. With so many developers working on code concurrently,
it’s imperative that software written by one developer can be easily understood
and modified by another at any point in time. With clean code, developers can be
more productive without spending as much time figuring out context or correcting
code from another team member. When it comes to mass production of code assisted
by AI, maintaining clean code is essential to minimizing risks and technical
debt. Implementing a “clean as you code” approach with proper testing and
analysis is crucial to ensuring code quality, whether the code is
human-generated or AI-generated. Speaking of humans, I don’t believe developers
will go away, but the manner in which they do their work every day will
certainly change.
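One way a "clean as you code" gate can look in practice is sketched below; the 40-line threshold and the length check itself are assumptions standing in for a fuller pipeline of tests, linters, and static analysis that would apply equally to human- and AI-written code:

```python
# Hypothetical "clean as you code" gate: every new or changed function,
# regardless of who or what wrote it, passes the same automated check
# before merge. Function length is used here as a simple proxy for
# maintainability.
import ast

MAX_FUNCTION_LINES = 40  # assumed team threshold

def check_function_lengths(source: str) -> list[str]:
    """Flag functions longer than the agreed limit."""
    problems = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                problems.append(f"{node.name}: {length} lines")
    return problems

print(check_function_lengths("def ok():\n    return 1\n"))  # [] -> passes
```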
Biggest problems and best practices for generative AI rollouts
The first step in the genAI journey is to determine the AI ambition for the
organization and conduct an exploratory dialogue on what is possible, according
to Gartner. The next step is to solicit potential use cases that can be piloted
with genAI technologies. Unless genAI benefits translate into immediate
headcount or other cost reductions, organizations can expect financial
benefits to accrue gradually over time, depending on how the generated value is
used. For example, Chandrasekaran said, being able to do more with less as
demand increases, relying less on senior workers and outside service
providers, and improving customer and employee value (which in turn lifts
retention) are all financial benefits that grow over time. Most
enterprises are also customizing pre-built LLMs, as opposed to building out
their own models. Through the use of prompt engineering and retrieval-augmented
generation (RAG), firms can fine-tune an open-source model for their specific
needs. RAG creates a more customized and accurate genAI model that can greatly
reduce anomalies such as hallucinations.
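A minimal sketch of the RAG idea described above, with hypothetical documents and a deliberately naive keyword-overlap retriever (real systems use embedding search over a vector store):

```python
# Minimal RAG sketch: retrieve the most relevant document, then ground
# the prompt in it so the model answers from supplied facts instead of
# guessing, which is what curbs hallucinations.

DOCS = [
    "Our VPN client supports split tunneling on version 4.2 and later.",
    "Password resets require approval from the identity team.",
]

def retrieve(question: str) -> str:
    # Naive retrieval: pick the document sharing the most words with
    # the question. Stands in for an embedding similarity search.
    q_words = set(question.lower().split())
    return max(DOCS, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {question}")

print(build_prompt("Which version supports split tunneling?"))
```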
Digital Transformation: What Should be Next on Your Agenda?
Paying close attention to disruptive emerging technologies will help to
future-proof strategies, Buchholz says. "Have you accounted for the impact of
quantum computing?" he asks. "Within several years, it's likely to go from lab
curiosity to useful tool." How about digital twins or the spatial web? "Not all
of these [technologies] will come to pass but investing a few days up front can
save years of pain down the road." Focus on digital transformation initiatives
that have the highest potential to create value for the organization and its
stakeholders, Bakalar advises. "Avoid wasting time, money, and effort on
projects with low strategic value, feasibility or urgency." The best way to
prioritize a digital transformation strategy is by defining precisely what
transformation means to your organization, says Jed Cawthorne, modern work
practice lead with IT consulting firm Creospark, via email. "Develop the
strategy appropriately and, from there, prioritize the projects that will form
your transformation plan." If you focus on smaller, easier-to-digest
transformation projects, you can reassess your prioritization at the
completion of each project, Cawthorne says.
User privacy must come first with biometrics
Use cases have expanded to airports with biometric boarding, to mobile banking
and e-commerce to facilitate and authenticate transactions, and even to various
branches of law enforcement using it for surveillance. The benefits of
AI-powered facial recognition technology are off the charts, with potential for
dramatic increases in efficiency, security, and ease of use across industries.
But with that upside comes an equally compelling downside: organizations need
to consider the privacy risks and concerns associated with collecting and using
biometric data at scale.
... As biometrics continues to go mainstream, data discovery, data
classification, and the handling of sensitive information will become mainstays
on IT task lists. But the key to not overwhelming IT is to incorporate data
privacy principles and tactics at the start of development, so problems can be
tackled proactively rather than reactively. This will be tech’s main challenge
in the coming years. With AI fever everywhere, users will soon expect to access
facial recognition services and products in a more personalized, efficient way,
without compromising on privacy.
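The data-discovery and classification work mentioned above can be sketched as a toy tagging pass; the field names and sensitivity labels are hypothetical:

```python
# Toy data-classification pass: tag records that contain biometric
# identifiers so downstream handling rules (retention, access, consent)
# can be applied automatically rather than reactively.

BIOMETRIC_FIELDS = {"face_embedding", "fingerprint_hash", "iris_scan"}

def classify(record: dict) -> str:
    """Label a record by whether it carries any biometric field."""
    return "biometric-sensitive" if BIOMETRIC_FIELDS & record.keys() else "general"

print(classify({"name": "A. User", "face_embedding": [0.1, 0.7]}))  # biometric-sensitive
print(classify({"name": "A. User", "email": "a@example.com"}))      # general
```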
Be the change: Leveraging AI to fast track next-gen cyber defenders
With AI, enterprises can detect and prevent threats with speed and efficiency
and secure a broader range of assets better than humans alone. They’re no longer
limited by how many people are in their Security Operations Center or the
expertise of their team. Instead, they are empowered to see things in real time
and defend their environment against attacks in an infinitely scalable way. But
AI can’t act alone and automation can only go so far. Humans will always be
needed in the loop to decide what to do with the data and insights AI provides.
AI can be used to support these people and supercharge their capabilities.
Consider the following: the job of a threat hunter is to translate hunting
hypotheses into queries, but this requires knowledge of complex query languages and
coding skills that are in short supply. AI-based platforms allow security teams
to ask complex threat and adversary-hunting questions using natural language,
and within seconds provide insights and recommended response actions that can be
immediately executed. Entry-level threat hunters once limited in what they could
solve can move to the next level and veterans can become more efficient,
effective, and strategic.
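A hedged sketch of the translation step the article describes: mapping a natural-language hunting question onto a structured query template. The patterns and query syntax here are invented for illustration, and real platforms use an LLM rather than regexes:

```python
# Toy natural-language-to-query translator: the step that AI-based
# hunting platforms automate so analysts no longer need the query
# language themselves.
import re

TEMPLATES = {
    r"failed logins? .*user (\w+)": "event=auth_failure AND user={0}",
    r"outbound .*port (\d+)":       "direction=outbound AND dst_port={0}",
}

def to_query(question: str):
    """Return a structured query for a recognized question, else None."""
    for pattern, template in TEMPLATES.items():
        match = re.search(pattern, question.lower())
        if match:
            return template.format(*match.groups())
    return None

print(to_query("Show failed logins for user alice"))
# event=auth_failure AND user=alice
```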
Why A Bad LLM Is Worse Than No LLM At All
For an LLM to return a useful output, it needs to have interpreted the user’s
query or prompt the way it was intended. There is a lot of nuance in language
that can lead to misunderstandings and no solution exists yet that has
guardrails to ensure consistent—and accurate—results that meet expectations.
... LLMs, including ChatGPT, have been known to simply make up data to fill in
the gaps in their knowledge just so that they can answer the prompt. They are
designed to produce answers that feel right, even if they aren’t. If you work
with vendors supplying LLMs within their products or as standalone tools, it’s
critical to ask them how their LLM is trained and what they’re doing to
mitigate inaccurate results. ... The majority of LLMs on the market are
available publicly online, which makes it incredibly challenging to safeguard
any sensitive information or queries you input. It’s very likely that this
data is visible to the vendor, who will almost certainly be storing and using
it to train future versions of their product. And if that vendor is hacked or
there’s a data leak, expect even bigger headaches for your
organization.
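One defensive habit implied here is scrubbing obviously sensitive tokens before a prompt ever leaves the organization. The sketch below is illustrative only; its two patterns are far from exhaustive, and real deployments pair redaction with vendor contracts and data-handling review:

```python
# Hedged sketch: redact sensitive tokens from a prompt before sending
# it to a public LLM, since the vendor may store and train on inputs.
import re

PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       "[SSN]"),
]

def redact(prompt: str) -> str:
    """Replace each matched sensitive token with a placeholder."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarize the ticket from jane.doe@corp.example, SSN 123-45-6789"))
# Summarize the ticket from [EMAIL], SSN [SSN]
```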
Building Resilient Cybersecurity Into Supply Chain Operations: A Technical Approach
One of the key challenges in supply chain cybersecurity is the interdependent
nature of the supply chain. A single weak link in the chain can compromise the
entire operation. For example, a cyberattack on a supplier could disrupt
production, leading to delays, financial loss, and damage to the company's
reputation. Moreover, the growing trend of digital transformation has led to
an increase in the use of technologies such as Internet of Things (IoT)
devices, cloud computing, and artificial intelligence in supply chain
operations. While these technologies offer numerous benefits, they also
increase the surface area for potential cyberattacks. ... The digital
transformation of supply chains has led to the integration of various
technologies such as IoT devices, cloud platforms, and AI-based systems. While
these technologies have enhanced efficiency and productivity, they have also
increased the complexity of the cybersecurity landscape. Ensuring the security
of these diverse technologies, each with its own set of vulnerabilities, is a
significant technical challenge.
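The weak-link point can be expressed as a toy model, with hypothetical suppliers and made-up security scores:

```python
# Toy "weakest link" model: the effective security of a chained
# operation is bounded by its least-secure dependency, so that is
# where a compromise of the whole chain begins.

SUPPLIERS = {"parts-vendor": 0.9, "logistics-api": 0.95, "iot-firmware": 0.4}

def chain_score(suppliers: dict):
    """Return the weakest supplier and its score."""
    name = min(suppliers, key=suppliers.get)
    return name, suppliers[name]

print(chain_score(SUPPLIERS))  # ('iot-firmware', 0.4)
```

Real supply-chain risk models weight dependencies by exposure and criticality rather than a single scalar, but the bottleneck intuition is the same.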
Quote for the day:
"The distance between insanity and
genius is measured only by success." -- Bruce Feirstein