Quote for the day:
"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erksine
Two ways AI hype is worsening the cybersecurity skills crisis

Another critical factor in the AI-skills shortage discussion is that attackers
are also leveraging AI, putting defenders at an even greater disadvantage.
Cybercriminals are using AI to generate more convincing phishing emails,
automate reconnaissance, and develop malware that can evade detection.
Meanwhile, security teams are struggling just to keep up. “AI exacerbates what’s
already going on at an accelerated pace,” says Rona Spiegel, cyber risk advisor
at GroScale and former cloud governance leader at Wells Fargo and Cisco. “In
cybersecurity, the defenders have to be right all the time, while attackers only
have to be right once. AI is increasing the probability of attackers getting it
right more often.” ... “CISOs will have to be more tactical in their approach,”
she explains. “There’s so much pressure for them to automate, automate,
automate. I think it would be best if they could partner cross-functionally and
focus on things like policy and urge the unification and simplification of how
policies are adapted… and make sure how we’re educating the entire environment,
the entire workforce, not just the cybersecurity.” Appayanna echoes this
sentiment, arguing that when used correctly, AI can ease talent shortages rather
than exacerbate them.
Data mesh vs. data fabric vs. data virtualization: There’s a difference

“Data mesh is a decentralized model for data, where domain experts like
product engineers or LLM specialists control and manage their own data,” says
Ahsan Farooqi, global head of data and analytics, Orion Innovation. While data
mesh is tied to certain underlying technologies, it’s really a shift in
thinking more than anything else. In an organization that has embraced data
mesh architecture, domain-specific data is treated as a product owned by the
teams relevant to those domains. ... As Matt Williams, field CTO at Cornelis
Networks, puts it, “Data fabric is an architecture and set of data services
that provides intelligent, real-time access to data — regardless of where it
lives — across on-prem, cloud, hybrid, and edge environments. This is the
architecture of choice for large data centers across multiple applications.”
... Data virtualization is the secret sauce that can make that happen. “Data
virtualization is a technology layer that allows you to create a unified view
of data across multiple systems and allows the user to access, query, and
analyze data without physically moving or copying it,” says Williams. That
means you don’t have to worry about reconciling different data stores or
working with data that’s outdated. Data fabric uses data virtualization to
produce that single pane of glass: It allows the user to see data as a unified
set, even if that’s not the underlying physical reality.
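To make that single-pane-of-glass idea concrete, here is a minimal sketch of a data virtualization layer in Python. The two sources (an in-memory SQLite database standing in for an operational system, and a plain dict standing in for a SaaS API) and the UnifiedCustomerView class are hypothetical, invented purely for illustration; the point is that each query is federated to the backends at read time, so nothing is copied or synchronized into a central store.

```python
# Minimal data-virtualization sketch (illustrative; names are hypothetical).
# Two "systems of record" stay where they are; the virtual layer federates
# reads at query time instead of copying data into a central store.

import sqlite3

# --- Source 1: an operational SQL database (simulated with in-memory SQLite) ---
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (customer_id TEXT, total REAL)")
orders_db.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("c1", 120.0), ("c1", 35.5), ("c2", 80.0)],
)

# --- Source 2: a SaaS/CRM system (simulated with a dict standing in for an API) ---
crm_api = {
    "c1": {"name": "Acme Corp", "tier": "gold"},
    "c2": {"name": "Globex", "tier": "silver"},
}

class UnifiedCustomerView:
    """Virtual, read-only view over both sources; no data is moved or duplicated."""

    def __init__(self, db, api):
        self.db = db
        self.api = api

    def get_customer(self, customer_id):
        # Each backend is queried live, so the result reflects its current state.
        profile = self.api.get(customer_id, {})
        row = self.db.execute(
            "SELECT COALESCE(SUM(total), 0), COUNT(*) FROM orders WHERE customer_id = ?",
            (customer_id,),
        ).fetchone()
        return {
            "customer_id": customer_id,
            "name": profile.get("name"),
            "tier": profile.get("tier"),
            "lifetime_spend": row[0],
            "order_count": row[1],
        }

view = UnifiedCustomerView(orders_db, crm_api)
print(view.get_customer("c1"))
# {'customer_id': 'c1', 'name': 'Acme Corp', 'tier': 'gold', 'lifetime_spend': 155.5, 'order_count': 2}
```

A real data fabric would delegate this work to a virtualization engine rather than hand-written glue, but the contract is the same: the consumer sees one logical record while the physical data stays in its source systems.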
Biometrics adoption strategies benefit when government direction is clear

Part of the problem seems to be the collision of private and public sector
interests in digital ID use cases like right-to-work checks. They would fall
outside the original conception of Gov.uk as a system exclusively for public
sector interaction, but the business benefit they provide is strictly one of
compliance. The UK government’s Office for Digital Identities and Attributes
(OfDIA), meanwhile, brought the register of digital identity and attribute
services to the public beta stage earlier this month. The register lists
services certified to the digital identity and attributes trust framework to
perform such compliance checks, and the recent addition of Gov.uk One Login
provided the spark for the current industry conflagration. Age checks for
access to online pornography in France now require a “double-blind”
architecture to protect user privacy. The additional complexity still leaves
clear roles, however, which VerifyMy and IDxLAB have partnered to fill. Yoti
has signed up a French pay site, but at least one big international player
would rather fight the age assurance rules in court. Aviation and border
management is one area where the enforcement of regulations has benefited from
private sector innovation. Preparation for Digital Travel Credentials is
underway with Amadeus pitching its “journey pass” as a way to use biometrics
at each touchpoint as part of a reimagined traveller experience.
Will AI replace software engineers? It depends on who you ask

Effective software development requires "deep collaboration with other
stakeholders, including researchers, designers, and product managers, who are
all giving input, often in real time," said Callery-Colyne. "Dialogues around
nuanced product and user information will occur, and that context must be
infused into creating better code, which is something AI simply cannot do."
The area where AIs and agents have been successful so far, "is that they don't
work with customers directly, but instead assist the most expensive part of
any IT, the programmers and software engineers," Thurai pointed out. "While
the accuracy has improved over the years, Gen AI is still not 100% accurate.
But based on my conversations with many enterprise developers, the technology
cuts down coding time tremendously. This is especially true for junior to
mid-senior level developers." AI software agents may be most helpful "when
developers are racing against time during a major incident, to roll out a
fixed code quickly, and have the systems back up and running," Thurai added.
"But if the code is deployed in production as is, then it adds to tech debt
and could eventually make the situation worse over the years, many incidents
later."
Protected NHIs: Key to Cyber Resilience
We live in a world where cyber threats are continually evolving. Cyber attackers are getting smarter and more sophisticated with their techniques, and traditional security measures no longer suffice. NHIs can be the critical game-changer that organizations have been looking for. So, why is this the case? Well, cyber attackers today are not just targeting humans but machines as well. Remember that your IT environment includes computing resources like servers, applications, and services that all represent potential points of attack.
Non-Human Identities have bridged the gap between human identities and machine
identities, providing an added layer of protection. NHI security is of utmost
importance as these identities can have overarching permissions. One single
mishap with an NHI can lead to severe consequences. ... Businesses are
significantly relying on cloud-based services for a wide range of purposes, from
storage solutions to sophisticated applications. That said, the increasing dependency on the cloud has underscored the pressing need for more robust and sophisticated security protocols. An NHI management strategy substantially
supports this quest for fortified cloud security. By integrating with your cloud
services, NHIs ensure secure access, moderated control, and streamlined data exchanges, all of which are instrumental in preventing unauthorized access and data breaches.
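The excerpt stays at the strategy level, so here is a minimal sketch, in Python, of what "moderated control" over NHIs might look like in practice: an inventory of machine identities with scoped permissions and secret-rotation ages that can be audited automatically. The NonHumanIdentity record and the policy thresholds are hypothetical, not taken from any specific NHI product.

```python
# Illustrative NHI inventory audit (hypothetical schema and thresholds).
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class NonHumanIdentity:
    name: str            # e.g. a service account, API key, or workload identity
    owner_team: str      # domain team accountable for this identity
    permissions: set = field(default_factory=set)
    secret_last_rotated: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example policy: least privilege plus regular secret rotation.
OVERBROAD = {"*", "admin", "owner"}      # permissions we never want on an NHI
MAX_SECRET_AGE = timedelta(days=90)

def audit(nhis):
    """Return findings for identities that violate the (hypothetical) policy."""
    findings = []
    now = datetime.now(timezone.utc)
    for nhi in nhis:
        if nhi.permissions & OVERBROAD:
            findings.append(f"{nhi.name}: over-broad permissions {nhi.permissions & OVERBROAD}")
        if now - nhi.secret_last_rotated > MAX_SECRET_AGE:
            findings.append(f"{nhi.name}: secret not rotated for more than {MAX_SECRET_AGE.days} days")
    return findings

fleet = [
    NonHumanIdentity("ci-deployer", "platform", {"deploy:prod"},
                     datetime.now(timezone.utc) - timedelta(days=30)),
    NonHumanIdentity("etl-service", "data", {"admin"},
                     datetime.now(timezone.utc) - timedelta(days=200)),
]
for finding in audit(fleet):
    print(finding)
```

The same checks generalize to the cloud integration the excerpt describes: the management layer knows which machine identity may touch which service, and flags the "one single mishap" cases before they turn into a breach.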
Job seekers using genAI to fake skills and credentials

“We’re seeing this a lot with our tech hires, and a lot of the sentence
structure and overuse of buzzwords is making it super obvious,” said Joel Wolfe,
president of HiredSupport, a California-based business process outsourcing (BPO)
company. HiredSupport has more than 100 corporate clients globally, including
companies in the eCommerce, SaaS, healthcare, and fintech sectors. Wolfe, who
weighed in on the topic on LinkedIn, said he’s seeing AI-enhanced resumes
“across all roles and positions, but most obvious in overembellished developer
roles.” ... In general, employers say they don’t have a problem with
applicants using genAI tools to write a resume, as long as it accurately
represents a candidate’s qualifications and experience. ZipRecruiter, an online
employment marketplace, said 67% of 800 employers surveyed reported they are
open to candidates using genAI to help write their resumes, cover letters, and
applications, according to its Q4 2024 Employer Report. Companies, however, face
a growing threat from fake job seekers using AI to forge IDs, resumes, and
interview responses. By 2028, a quarter of job candidates could be fake,
according to Gartner Research. Once hired, impostors can steal data or money, or install ransomware. ... Another downside to the growing flood of AI deepfake applicants is that it reduces “real” job applicants’ chances of being hired.
How Will the Role of Chief AI Officer Evolve in 2025?

For now, the role is less about exploring the possibilities of AI and more about
delivering on its immediate, concrete value. “This year, the role of the chief
AI officer will shift from piloting AI initiatives to operationalizing AI at
scale across the organization,” says Agarwal. And as for those potential
upheavals down the road? CAIOs will no doubt have to be nimble, but
Martell doesn’t see their fundamental responsibilities changing. “You still have
to gather the data within your company to be able to use with that model and
then you still have to evaluate whether or not that model that you built is
delivering against your business goals. That has never changed,” says Martell.
... AI is at the inflection point between hype and strategic value. “I think
there's going to be a ton of pressure to find the right use cases and deploy AI
at scale to make sure that we're getting companies to value,” says Foss. CAIOs
could feel that pressure keenly this year as boards and other executive leaders
increasingly ask to see ROI on massive AI investments. “Companies who have set
these roles up appropriately, and more importantly the underlying work
correctly, will see the ROI measurements, and I don't think that chief AI
officers [at those] organizations should feel any pressure,” says Mohindra.
Cybercriminals blend AI and social engineering to bypass detection

With improved attack strategies, bad actors have compressed the average time
from initial access to full control of a domain environment to less than two
hours. Similarly, while a couple of years ago it would take a few days for
attackers to deploy ransomware, it’s now being detonated in under a day and even
in as few as six hours. With such short timeframes between the attack and the
exfiltration of data, companies are simply not prepared. Historically, attackers
avoided breaching “sensitive” industries like healthcare, utilities, and
critical infrastructure because of the direct impact on people’s lives.
... Going forward, companies will have to reconcile the benefits of AI with its
many risks. Implementing AI solutions expands a company’s attack surface and
increases the risk of data getting leaked or stolen by attackers or third
parties. Threat actors are using AI efficiently, to the point where any AI
employee training you may have conducted is already outdated. AI has allowed
attackers to bypass all the usual red flags you’re taught to look for, like
grammatical errors, misspelled words, non-regional speech or writing, and a lack
of context to your organization. Adversaries have refined their techniques,
blending social engineering with AI and automation to evade detection.
AI in Cybersecurity: Protecting Against Evolving Digital Threats

As much as AI bolsters cybersecurity defenses, it also enhances the tools
available to attackers. AI-powered malware, for example, can adapt its
behavior in real time to evade detection. Similarly, AI enables cybercriminals
to craft phishing schemes that mimic legitimate communications with uncanny
accuracy, increasing the likelihood of success. Another alarming trend is the
use of AI to automate reconnaissance. Cybercriminals can scan networks and
systems for vulnerabilities more efficiently than ever before, highlighting
the necessity for cybersecurity teams to anticipate and counteract AI-enabled
threats. ... The integration of AI into cybersecurity raises ethical questions
that must be addressed. Privacy concerns are at the forefront, as AI systems
often rely on extensive data collection. This creates potential risks for
mishandling or misuse of sensitive information. Additionally, AI’s
capabilities for surveillance can lead to overreach. Governments and
corporations may deploy AI tools for monitoring activities under the guise of
security, potentially infringing on individual rights. There is also the risk
of malicious actors repurposing legitimate AI tools for nefarious purposes.
Clear guidelines and robust governance are crucial to ensuring responsible AI
deployment in cybersecurity.
AI workloads set to transform enterprise networks

As AI companies leapfrog each other in terms of capabilities, they will be able
to handle even larger conversations — and agentic AI may increase the bandwidth
requirements exponentially and in unpredictable ways. Any website or app could
become an AI app, simply by adding an AI-powered chatbot to it, says F5’s
MacVittie. When that happens, a well-defined, structured traffic pattern will
suddenly start looking very different. “When you put the conversational
interfaces in front, that changes how that flow actually happens,” she says.
Another AI-related challenge that networking managers will need to address is
that of multi-cloud complexity. ... AI brings in a whole host of potential
security problems for enterprises. The technology is new and unproven, and
attackers are quickly developing new techniques for attacking AI systems and
their components. That’s on top of all the traditional attack vectors, says Rich
Campagna, senior vice president of product management at Palo Alto Networks. “At
the edge, devices and networks are often distributed, which leads to visibility
blind spots,” he adds. That makes it harder to fix problems if something goes
wrong. Palo Alto is developing its own AI applications, Campagna says, and has
been for years. And so are its customers.
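MacVittie's point about conversational and agentic traffic can be made concrete with a rough back-of-the-envelope sketch. All numbers below (bytes per token, tokens per turn, agent fan-out) are illustrative assumptions, not measurements from F5 or anyone else; the sketch simply shows how re-sending a growing conversation context each turn, multiplied by agent fan-out, makes per-user traffic grow far faster and less predictably than page-sized request/response.

```python
# Illustrative sketch (all parameters are assumptions): why conversational and
# agentic traffic grows much faster than a page-per-click model. Each chat turn
# typically re-sends the accumulated conversation context to the model, and an
# agentic flow multiplies that by its internal fan-out of model/tool calls.

BYTES_PER_TOKEN = 4          # rough average for text payloads
TOKENS_PER_TURN = 500        # new tokens added per user/assistant exchange
AGENT_FANOUT = 6             # hypothetical internal calls per user-visible turn

def bytes_on_the_wire(turns: int, fanout: int = 1) -> int:
    """Total payload bytes for a conversation where every turn re-sends history."""
    total = 0
    for turn in range(1, turns + 1):
        context_tokens = turn * TOKENS_PER_TURN      # history grows linearly per turn...
        total += context_tokens * BYTES_PER_TOKEN * fanout
    return total                                     # ...so the total grows quadratically

for turns in (5, 20, 50):
    plain = bytes_on_the_wire(turns)
    agentic = bytes_on_the_wire(turns, AGENT_FANOUT)
    print(f"{turns:>3} turns: chat ~{plain / 1e6:.2f} MB, agentic ~{agentic / 1e6:.2f} MB")
```

That shift from short, uniform requests to long-lived, history-carrying sessions is what turns a well-defined, structured traffic pattern into something networking teams have to re-plan for.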