‘Orgs need to be ready’: AI risks and rewards for cybersecurity in 2025
“In 2025, we expect to see more AI-driven cyberthreats designed to evade
detection, including more advanced evasion techniques bypassing endpoint
detection and response (EDR), known as EDR killers, and traditional defences,”
Khalid argues. “Attackers may use legitimate applications like PowerShell and
remote access tools to deploy ransomware, making detection harder for standard
security solutions.” On a more frightening note, Michael Adjei, director of
systems engineering at Illumio, believes that AI will offer something of a field
day for social engineers, who will trick people into creating breaches
themselves: “Ordinary users will, in effect, become unwitting participants in
mass attacks in 2025.” ... “With greater adoption of AI will come increased
cyberthreats, and security teams need to remain nimble, confident and
knowledgeable.” Similarly, Britton argues that teams “will need to undergo a
dedicated effort around understanding how [AI] can deliver results”. “To do
this, businesses should start by identifying which parts of their workflows are
highly manual, which can help them determine how AI can be overlaid to improve
efficiency. Key to this will be determining what success looks like. Is it
better efficiency? Reduced cost?”
Will we ever trust robots?
The chief argument for robots with human characteristics is a functional one:
Our homes and workplaces were built by and for humans, so a robot with a
humanlike form will navigate them more easily. But Hoffman believes there’s
another reason: “Through this kind of humanoid design, we are selling a story
about this robot that it is in some way equivalent to us or to the things that
we can do.” In other words, build a robot that looks like a human, and people
will assume it’s as capable as one. In designing Alfie’s physical appearance,
Prosper has borrowed some aspects of typical humanoid design but rejected
others. Alfie has wheels instead of legs, for example, as bipedal robots are
currently less stable in home environments, but he does have arms and a head.
The robot will be built on a vertical column that resembles a torso; his
specific height and weight are not yet public. He will have two emergency stop
buttons. Nothing about Alfie’s design will attempt to obscure the fact that he
is a robot, Lewis says. “The antithesis [of trustworthiness] would be designing
a robot that’s intended to emulate a human … and its measure of success is based
on how well it has deceived you,” he told me. “Like, ‘Wow, I was talking to that
thing for five minutes and I didn’t realize it’s a robot.’ That, to me, is
dishonest.”
My Personal Reflection on DevOps in 2024 and Looking Ahead to 2025
As we move into 2025, the big stories that dominated 2024 will continue to
evolve. We can expect AI—particularly generative AI—to become even more deeply
ingrained in the DevOps toolchain. Prompt engineering for AI models will likely
emerge as a specialized skill, just as writing Dockerfiles was a skill set that
distinguished DevOps engineers a decade ago. Agentic AI will become the norm,
with teams of agents taking on tasks that lower-level workers once
performed. On the policy side, escalating regulatory demands will push
enterprises to adopt more stringent compliance frameworks, integrating AI-driven
compliance-as-code tools into their pipelines. Platform engineering will mature,
focusing on standardization and the creation of “golden paths” that offer best
practices out of the box. We may also see a consolidation of DevOps tool vendors
as the market seeks integrated, end-to-end platforms over patchwork solutions.
The focus will be on usability, quality, security and efficiency—attributes that
can only be realized through cohesive ecosystems rather than fragmented
toolchains. Sustainability will also factor into 2025’s narrative. As
environmental concerns shape global economic policies and public sentiment,
DevOps teams will take resource optimization more seriously.
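As a concrete illustration of what compliance-as-code in a pipeline can look like, here is a minimal, hypothetical sketch in Python: a policy check that inspects a service manifest and fails the CI step when rules are violated. The policy rules, manifest fields and file names are assumptions for illustration, not any particular tool's schema.

```python
# Minimal compliance-as-code sketch: fail a CI run when a service manifest
# violates simple policy rules. The rule set and manifest layout are
# hypothetical, for illustration only.
import json
import sys

POLICY = {
    "require_encryption_at_rest": True,
    "max_public_ports": 1,
}

def check_manifest(manifest: dict) -> list[str]:
    """Return a list of human-readable policy violations."""
    violations = []
    if POLICY["require_encryption_at_rest"] and not manifest.get("encryption_at_rest", False):
        violations.append("encryption_at_rest must be enabled")
    if len(manifest.get("public_ports", [])) > POLICY["max_public_ports"]:
        violations.append("too many publicly exposed ports")
    return violations

if __name__ == "__main__":
    manifest = json.load(open(sys.argv[1]))   # e.g. service.json checked into the repo
    problems = check_manifest(manifest)
    for p in problems:
        print(f"POLICY VIOLATION: {p}")
    sys.exit(1 if problems else 0)            # non-zero exit fails the pipeline step
```

A check like this would typically run as one step in the pipeline, alongside tests and security scans, so that policy drift is caught before deployment rather than in an audit afterwards.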
From Invisible UX to AI Governance: Kanchan Ray, CTO, Nagarro Shares his Vision for a Connected Future
Vision and data derived from videos have become integral to numerous industries,
with machine vision playing a crucial role in automating business processes. For
instance, automatic inventory management, often supported by robots, is
transitioning from experimental to mainstream. Machine vision also enhances
security and safety by replacing human monitoring with machines that operate
around the clock, offering greater accuracy at a lower cost. On the consumer
front, virtual try-ons and AI-assisted mirrors have become standard features in
reputable retail outlets, both in physical stores and online platforms. ...
Traditional boundaries of security, which once focused on standard data
security, governance, and IT protocols, are now fluid and dynamic. The
integration of AI, data analytics, and machine learning has created diverse
contexts for output consumption, resulting in new business operations around
model simulations and decision-making related to model pipelines. These
operations include processes like model publishing, hyperparameter
observability, and auditing model reasoning, all of which push the boundaries of
AI responsibility.
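To make ideas like model publishing and "hyperparameter observability" more tangible, below is a small hedged sketch that records a model's hyperparameters, weight digest and data lineage to an append-only registry at publish time, so its behaviour can be audited later. The field names and the model_registry.jsonl file are illustrative assumptions, not a standard.

```python
# Hypothetical sketch of hyperparameter observability and publish-time auditing:
# store the exact configuration and data lineage alongside a published model.
import hashlib
import json
from datetime import datetime, timezone

def publish_model(model_path: str, hyperparams: dict, training_data_id: str) -> dict:
    with open(model_path, "rb") as f:
        weights_digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "published_at": datetime.now(timezone.utc).isoformat(),
        "model_path": model_path,
        "weights_sha256": weights_digest,      # ties the audit record to the exact weights
        "hyperparameters": hyperparams,
        "training_data_id": training_data_id,  # lineage pointer for later audits
    }
    with open("model_registry.jsonl", "a") as log:  # append-only registry (illustrative)
        log.write(json.dumps(record) + "\n")
    return record
```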
If your AI-generated code becomes faulty, who faces the most liability exposure?
None of the lawyers, though, discussed who is at fault if the code generated by
an AI results in some catastrophic outcome. For example: The company delivering
a product shares some responsibility for, say, choosing a library that has known
deficiencies. If a product ships using a library that has known exploits and
that product causes an incident that results in tangible harm, who owns that
failure? The product maker, the library coder, or the company that chose the
product? Usually, it's all three. ... Now add AI code into the mix. Clearly,
most of the responsibility falls on the shoulders of the coder who chooses to
use code generated by an AI. After all, it's common knowledge that the code may
not work and needs to be thoroughly tested. In a comprehensive lawsuit, will
claimants also go after the companies that produce the AIs and even the
organizations from which content was taken to train those AIs (even if done
without permission)? As every attorney has told me, there is very little case
law thus far. We won't really know the answers until something goes wrong,
parties wind up in court, and it's adjudicated thoroughly. We're in uncharted
waters here.
5 Signs You’ve Built a Secretly Bad Architecture (And How to Fix It)
Dependencies are the hidden traps of software architecture. When your system
is littered with them — whether they’re external libraries, tightly coupled
modules, or interdependent microservices — it creates a tangled web that’s
hard to navigate. They make the system difficult to debug locally. Every
change risks breaking something else. Deployments take more time,
troubleshooting takes longer, and cascading failures are a real threat. The
result? Your team spends more time toiling and less time innovating. ...
Reducing dependencies doesn’t mean eliminating them entirely or splitting
your system into nanoservices. Overcorrecting by creating tiny,
hyper-granular services might seem like a solution, but it often leads to
even greater complexity. In this scenario, you’ll find yourself managing
dozens — or even hundreds — of moving parts, each requiring its own
maintenance, monitoring, and communication overhead. Instead, aim for
balance. Establish boundaries for your microservices that promote cohesion,
avoiding unnecessary fragmentation. Strive for an architecture where
services interact efficiently but aren’t overly reliant on each other, which
increases the flexibility and resilience of your system.
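One common way to strike that balance is to let a service depend on a small interface at its boundary rather than on a concrete implementation. The sketch below, with hypothetical class and method names, shows the idea in Python: the order logic only knows the PaymentGateway protocol, so the concrete gateway can change, or be stubbed for local debugging, without rippling through the system.

```python
# Illustrative sketch of loosening a dependency: the order service depends on a
# small interface it owns, not on a concrete payment client or vendor SDK.
from typing import Protocol

class PaymentGateway(Protocol):
    def charge(self, order_id: str, amount_cents: int) -> bool: ...

class OrderService:
    def __init__(self, payments: PaymentGateway):
        self.payments = payments              # depends on the boundary, not an implementation

    def place_order(self, order_id: str, amount_cents: int) -> str:
        return "confirmed" if self.payments.charge(order_id, amount_cents) else "payment_failed"

class FakeGateway:
    def charge(self, order_id: str, amount_cents: int) -> bool:
        return True                           # deterministic stub for local debugging and tests

if __name__ == "__main__":
    print(OrderService(FakeGateway()).place_order("o-1", 1999))
```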
The 4 key aspects of a successful data strategy
Without a data strategy to structure various efforts, the value added from
data in any organization of a certain size or complexity falls far short of
the possibilities. In such cases, data is only used locally or aggregated
along relatively rigid paths. The result? The company remains less agile
when changes become necessary. In the absence of such a strategy,
technical concepts and architectures can hardly increase this value either.
A well-thought-out data strategy can be formulated in various ways. It
encompasses several different facets, such as availability, searchability,
security, protection of personal data, cost control, etc. However, four key
aspects that form the basis for a data strategy can be identified from a
variety of data-related projects: identity, bitemporality, networking and
federalism. ... A data strategy also determines how companies encode
knowledge about their products, services, processes and business models.
This enables solutions that also allow for automated decision
support. To sell glasses online, a lot of specialized optician knowledge
must be encoded so that the customer does not make serious mistakes when
configuring their glasses. The optimal size of the progressive lenses
depends, among other things, on the visual acuity and the lens
geometry.
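Of the four aspects, bitemporality is the easiest to show in code. The following hedged sketch models a fact with both a real-world validity period and a system recording time, so late corrections can be applied without rewriting history; the CustomerAddress structure and its field names are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of bitemporality: every fact carries the period in which it is
# true in the real world (valid time) and the moment the system learned it
# (transaction/recorded time).
from dataclasses import dataclass
from datetime import date, datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class CustomerAddress:
    customer_id: str
    address: str
    valid_from: date                 # when the address became true in the real world
    valid_to: Optional[date]         # None = still valid
    recorded_at: datetime            # when the system stored this version

history = [
    CustomerAddress("c-42", "Old Street 1", date(2023, 1, 1), date(2024, 6, 30),
                    datetime(2023, 1, 2, tzinfo=timezone.utc)),
    # A late correction: in 2025 the system learns the move happened back in mid-2024.
    CustomerAddress("c-42", "New Road 7", date(2024, 7, 1), None,
                    datetime(2025, 1, 10, tzinfo=timezone.utc)),
]

def address_as_of(valid: date, known_by: datetime) -> Optional[str]:
    """Which address did we believe was valid on `valid`, given what was recorded by `known_by`?"""
    candidates = [r for r in history
                  if r.recorded_at <= known_by
                  and r.valid_from <= valid
                  and (r.valid_to is None or valid <= r.valid_to)]
    return max(candidates, key=lambda r: r.recorded_at).address if candidates else None
```

Queries along both time axes make it possible to answer "what did we know, and when did we know it", which is exactly what audits and reproducible decision support require.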
Maximizing the impact of cybercrime intelligence on business resilience
An intelligence capability is only as effective as its coverage of the
adversary. A robust program ensures historical coverage for context,
near-real-time coverage for timely responses to immediate threats, and depth
of coverage for sufficient understanding. Cybercrime intelligence coverage
encompasses both human and technical data. Valuable sources of information
include any platforms where cybercriminals gather to communicate,
coordinate, or trade, such as social networks, chatrooms, forums and direct
one-on-one interactions. Technical coverage requires visibility into the
tools used by adversaries. This coverage can be obtained through
programmatic malware emulation across the full spectrum of malware families
deployed by cybercriminals, ensuring comprehensive insights into their
activities in a timely and ongoing manner. ... Adversary Intelligence is
produced from a focused collection, analysis and exploitation capability and
curated from where threat actors collaborate, communicate and plan
cyberattacks. Obtaining and utilizing this intelligence provides proactive and
groundbreaking insights into the methodology of top-tier cybercriminals –
target selection, assets and tools used, associates and other enablers that
support them.
Large language overkill: How SLMs can beat their bigger, resource-intensive cousins
LLMs are incredibly powerful, yet they are also known for sometimes “losing
the plot,” or offering outputs that veer off course due to their generalist
training and massive data sets. That tendency is made more problematic by
the fact that OpenAI’s ChatGPT and other LLMs are essentially “black boxes”
that don’t reveal how they arrive at an answer. This black box problem will
only become a bigger issue, particularly for companies and
business-critical applications where accuracy, consistency and compliance
are paramount. ... Fortunately, SLMs are better suited to address many of
the limitations of LLMs. Rather than being designed for general-purpose
tasks, SLMs are developed with a narrower focus and trained on
domain-specific data. This specificity allows them to handle nuanced
language requirements in areas where precision is paramount. Rather than
relying on vast, heterogeneous datasets, SLMs are trained on targeted
information, giving them the contextual intelligence to deliver more
consistent, predictable and relevant responses. This offers several
advantages. First, they are more explainable, making it easier to understand
the source and rationale behind their outputs. This is critical in regulated
industries where decisions need to be traced back to a source.
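As a rough illustration of how an SLM slots into such a workflow, the snippet below runs a small, domain-tuned text classifier through the Hugging Face transformers pipeline API. The model name is a placeholder for whatever domain-specific model an organization has fine-tuned on its own data; this is a sketch under that assumption, not a recommendation of a particular model.

```python
# Hedged sketch: using a small, domain-tuned classifier instead of a general
# LLM for a narrow, high-precision task.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/compliance-clause-classifier",  # hypothetical fine-tuned SLM
)

result = classifier("Payments must be settled within 30 days of invoice receipt.")
print(result)  # e.g. [{"label": "PAYMENT_TERMS", "score": 0.98}]; output depends on the model
```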
Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother
Even though AI brings great productivity gains, Shadow AI introduces distinct
risks ... Studies show employees are frequently sharing legal documents, HR
data, source code, financial statements and other sensitive information with
public AI applications. AI tools can inadvertently expose this sensitive
data to the public, leading to data breaches, reputational damage and
privacy concerns. ... Feeding data into public platforms means that
organizations have very little control over how their data is managed,
stored or shared, with little knowledge of who has access to this data and
how it will be used in the future. This can result in non-compliance with
industry and privacy regulations, potentially leading to fines and legal
complications. ... Third-party AI tools could have built-in vulnerabilities
that a threat actor could exploit to gain access to the network. These tools
may not meet the security standards of an organization’s internal
systems. Shadow AI can also introduce new attack vectors, making it easier
for malicious actors to exploit weaknesses. ... Without proper governance or
oversight, AI models can spit out biased, incomplete or flawed outputs. Such
biased and inaccurate results can bring harm to organizations.
Quote for the day:
“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel