Mass adoption of generative AI tools is derailing one very important factor, says MIT
Many companies "were caught off guard by the spread of shadow AI use across the
enterprise," Renieris and her co-authors observe. What's more, the rapid pace of
AI advancements "is making it harder to use AI responsibly and is putting
pressure on responsible AI programs to keep up." They warn that the risks
stemming from ever-rising shadow AI are increasing, too. For example, companies' growing
dependence on a burgeoning supply of third-party AI tools, along with the rapid
adoption of generative AI -- algorithms (such as ChatGPT, DALL-E 2, and
Midjourney) that use training data to generate realistic or seemingly factual
text, images, or audio -- exposes them to new commercial, legal, and
reputational risks that are difficult to track. The researchers refer to the
importance of responsible AI, which they define as "a framework with principles,
policies, tools, and processes to ensure that AI systems are developed and
operated in the service of good for individuals and society while still
achieving transformative business impact."
From details to big picture: how to improve security effectiveness
Benjamin Franklin once wrote: “For the want of a nail, the shoe was lost; for
the want of a shoe the horse was lost; and for the want of a horse the rider was
lost, being overtaken and slain by the enemy, all for the want of care about a
horseshoe nail.” It’s a saying with a history that goes back centuries, and it
points out how small details can lead to big consequences. In IT security, we
face a similar problem. There are so many interlocking parts in today’s IT
infrastructure that it’s hard to keep track of all the assets, applications and
systems that are in place. At the same time, the tide of new software
vulnerabilities released each month can threaten to overwhelm even the best
organised security team. However, there is an approach that can solve this
problem: rather than trying to address every single issue or new vulnerability
that comes in, why not focus on the ones that really matter? ... When you look at
the total number of new vulnerabilities that we faced in 2022 – 25,228 according
to the CVE list – you might feel nervous, but
only 93 vulnerabilities were actually exploited by malware.
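The prioritisation idea above can be sketched in a few lines: triage incoming CVE IDs against a feed of vulnerabilities with known exploitation, such as CISA's Known Exploited Vulnerabilities catalog. The CVE IDs and the feed contents below are illustrative placeholders, not a real triage policy.

```python
# Hypothetical sketch: prioritise the small set of vulnerabilities with
# known exploitation over the full flood of new CVEs.
KNOWN_EXPLOITED = {"CVE-2022-26134", "CVE-2022-30190"}  # e.g. from a feed like CISA KEV

def prioritise(new_cves):
    """Split incoming CVE IDs into urgent (known-exploited) and backlog."""
    urgent = [c for c in new_cves if c in KNOWN_EXPLOITED]
    backlog = [c for c in new_cves if c not in KNOWN_EXPLOITED]
    return urgent, backlog

urgent, backlog = prioritise(["CVE-2022-26134", "CVE-2022-0001"])
print(urgent)   # ['CVE-2022-26134']
print(backlog)  # ['CVE-2022-0001']
```

In practice the "known exploited" set would be refreshed from a threat-intelligence feed rather than hard-coded.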
3 downsides of generative AI for cloud operations
While we’re busy putting finops systems in place to monitor and govern cloud
costs, we could see a spike in the money spent supporting generative AI systems.
What should you do about it? This is a business issue more than a technical one.
Companies need to understand how and why cloud spending is occurring and what
business benefits are being returned. Then the costs can be included in
predefined budgets. This is a hot button for enterprises that have limits on
cloud spending. The line-of-business developers would like to leverage
generative AI systems, usually for valid business reasons. However, as explained
earlier, they cost a ton, and companies need to find either the money, the
business justification, or both. In many instances, generative AI is what the
cool kids use these days, but it’s often not cost-justifiable. Generative AI is
sometimes being used for simple tactical tasks that would be fine with more
traditional development approaches. This overapplication of AI has been an
ongoing problem since the technology first emerged; the reality is that it is
only justifiable for some business problems.
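The "predefined budgets" point can be made concrete with a minimal sketch: tag generative-AI cloud spend separately and compare it against a monthly limit. The tag names and dollar figures here are invented for illustration only.

```python
# Illustrative-only sketch: compare tagged month-to-date cloud spend
# against predefined monthly budgets. Figures and tags are hypothetical.
MONTHLY_BUDGET_USD = {"genai": 50_000, "web": 20_000}

def over_budget(spend_by_tag):
    """Return the tags whose month-to-date spend exceeds their budget."""
    return [tag for tag, spent in spend_by_tag.items()
            if spent > MONTHLY_BUDGET_USD.get(tag, 0)]

print(over_budget({"genai": 62_500, "web": 12_000}))  # ['genai']
```

A real finops setup would pull these figures from the cloud provider's billing API, but the governance step is the same comparison.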
Pros and cons of managed SASE
If a company decides to deploy SASE by going directly through SASE vendors,
they’ll have to configure and implement the service themselves, says Gartner’s
Forest. “The benefits of a managed service provider are a single source for all
setup and management, the ability to redeploy internal resources for other
tasks, and the ability to access skills and capabilities that don’t exist
internally,” he says. Getting in-house IT staff with the right expertise to
handle SASE can be a real challenge, particularly in today’s hiring climate: 76%
of IT employers say they’re having difficulty finding the hard and soft skills
they need, and one in five organizations globally is having trouble finding
skilled tech talent, according to a 2023 survey by ManpowerGroup. The access to
outside experts is particularly appealing to companies that don’t have the
resources to manage SASE themselves. Managed SASE providers have specialized
expertise in deploying and managing SASE infrastructure, says Ilyoskhuja
Ikromkhujaev, software engineer at software developer Nipendo, “which can help
ensure that your system is set up correctly and stays up to date with the latest
security features and protocols.”
The security interviews: Exploiting AI for good and for bad
AI has moved beyond automation. Looking at large language models, which some
industry experts see as representing the tipping point that ultimately leads to
wide-scale AI adoption, Heinemeyer believes that an AI capable of writing code
offers attackers the opportunity to develop much more bespoke and tailored,
sophisticated attacks. Imagine, he says, highly personalised phishing messages
that have error-free grammar and no spelling mistakes. For its customers, he
says Darktrace uses machine learning to learn what normal looks like in business
email data: “We learn exactly how you communicate, what syntax you use in your
emails, what attachments you receive, who you talk to, and when this is internal
or external.We can detect if somebody sends an email that is unusual for you.” A
large language model like ChatGPT reads everything that is on the public
internet. The implication is that it will be reading people’s social media
profiles, seeing who they interact with, their friends, what they like and do
not like. Such AI systems have the ability to truly understand someone, based on
the publicly available information that can be gleaned across the web.
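The "learn what normal looks like" idea can be sketched very simply: build a per-user baseline of observed correspondents and flag senders never seen before. This is a toy illustration of the concept, not Darktrace's actual algorithm, and the addresses are invented.

```python
from collections import defaultdict

# Toy baseline: learn which senders are "normal" for each recipient,
# then flag mail from senders never observed before.
class EmailBaseline:
    def __init__(self):
        self.seen = defaultdict(set)  # recipient -> set of known senders

    def observe(self, recipient, sender):
        """Record a sender as part of the recipient's normal traffic."""
        self.seen[recipient].add(sender)

    def is_unusual(self, recipient, sender):
        """True if this sender has never been seen for this recipient."""
        return sender not in self.seen[recipient]

baseline = EmailBaseline()
baseline.observe("alice@corp.example", "bob@corp.example")
print(baseline.is_unusual("alice@corp.example", "mallory@evil.example"))  # True
print(baseline.is_unusual("alice@corp.example", "bob@corp.example"))      # False
```

A production system would model far richer signals (syntax, attachments, timing), but the principle of flagging deviations from a learned baseline is the same.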
Switching the Blame for a More Enlightened Cybersecurity Paradigm
The “blame the user” mentality is a cognitive bias that ignores the complexities
of human-computer interaction. Research in cognitive psychology and human
factors engineering has shown that humans are not designed to be perfect digital
operators. Mistakes are a natural part of our interaction with systems,
especially those that are complex and non-intuitive. Moreover, our
susceptibility to scams and manipulation is not just a personal failing, but a
product of millennia of evolution. For instance, social engineering attacks
exploit our natural tendency to trust and cooperate, which have been crucial to
human survival and societal development. To put the onus on the individual is to
ignore the broader context. Shifting the blame is an easy way out. It absolves
organizations of the responsibility to address systemic issues and allows them
to maintain the status quo. This is underpinned by the “just-world hypothesis,”
a cognitive bias which propounds that people get what they deserve. When an
employee falls for a scam, it's easy to assume that they were careless or
ill-prepared.
Standardized information sharing framework 'essential' for improving cyber security
Security experts have called for improvements in how private sector
organizations share threat intelligence data with the wider industry. It’s
believed that better cross-organizational collaboration would improve cyber
resiliency in the face of cyber attacks that continue to rise in frequency and
grow ever more sophisticated. “I think this is one of the ways in which the
private sector can work with governments around the world, and each other across
sectors, industries, and regions,” said Jen Ellis, co-chair at the Institute for
Science and Technology’s Ransomware Task Force. Government agencies such as the
UK’s Information Commissioner’s Office (ICO) or the US’ Cybersecurity and
Infrastructure Security Agency (CISA) enforce strict reporting deadlines around
data breaches, and this is seen as a positive step. However, victims often
report only the minimum required information, which in turn reduces other
organizations’ ability to learn from, and potentially prevent, follow-on
attacks.
Hybrid Microsoft network/cloud legacy settings may impact your future security posture
Often in large organizations, there are users in your network who have the
equivalent of domain administrative rights and are not even aware of it. Your
firm may even have inherited the domain setup, with the original accounts and
permissions configured for a Novell network migrated years before. Often
the difference between a firm with better security and one with poor security is
having a staff that takes the additional time to test and confirm that there
will be no side effects in the network if changes are made. Take the example of
unconstrained delegation; this is a setting that many web applications need to
function, including those that are internal only to the organization. But this
setting can expose the domain to excessive risk. Delegation allows a computer or
server to save users’ Kerberos authentication tickets; these saved tickets are
then used to act on the users’ behalf. Attackers love to grab these tickets, as they
can then interact with the server and impersonate the identity and in particular
the privileges of those users.
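One way to spot accounts with unconstrained delegation is the userAccountControl attribute that Active Directory exposes on each account: Microsoft documents the TRUSTED_FOR_DELEGATION flag as bit 0x80000. The sketch below checks that bit; the account names and attribute values are invented for illustration, and a real audit would pull them via an LDAP query.

```python
# Sketch: detect unconstrained delegation from the userAccountControl
# attribute. The bit value is Microsoft-documented; the sample account
# data below is hypothetical.
TRUSTED_FOR_DELEGATION = 0x80000  # UAC flag for unconstrained delegation

def has_unconstrained_delegation(user_account_control):
    """True if the account's UAC value has the delegation bit set."""
    return bool(user_account_control & TRUSTED_FOR_DELEGATION)

# Hypothetical accounts: (sAMAccountName, userAccountControl)
accounts = [("WEBSRV01$", 0x81000), ("WKSTN07$", 0x1000)]
flagged = [name for name, uac in accounts if has_unconstrained_delegation(uac)]
print(flagged)  # ['WEBSRV01$']
```

Any account surfaced this way is worth reviewing: it will cache the Kerberos tickets of every user that authenticates to it.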
Why we don't have 128-bit CPUs
You might think 128-bit isn't viable because it's difficult or even impossible
to do, but that's actually not the case. Lots of parts in processors, CPUs and
otherwise, are 128-bit or larger, like memory buses on GPUs and the SIMD
registers on CPUs that enable AVX instructions. We're specifically talking about being able to
handle 128-bit integers, and even though 128-bit CPU prototypes have been
created in research labs, no company has actually launched a 128-bit CPU. The
answer might be anticlimactic: a 128-bit CPU just isn't very useful. A 64-bit
CPU can handle over 18 quintillion unique numbers, from 0 to
18,446,744,073,709,551,615. By contrast, a 128-bit CPU would be able to handle
over 340 undecillion numbers, and I guarantee you that you have never even seen
"undecillion" in your entire life. Finding a use for calculating numbers with
that many zeroes is pretty challenging ... Ultimately, the key reason why we
don't have 128-bit CPUs is that there's no demand for a 128-bit
hardware-software ecosystem. The industry could certainly make it if it wanted
to, but it simply doesn't.
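The two ranges quoted above are easy to verify. Python integers are arbitrary-precision, so they can represent 128-bit values directly, even though a 64-bit ALU would need multi-word arithmetic to compute with them:

```python
# The numeric ranges from the article, computed directly.
max_u64 = 2**64 - 1    # largest unsigned 64-bit value
max_u128 = 2**128 - 1  # largest unsigned 128-bit value

print(max_u64)   # 18446744073709551615  (~18.4 quintillion)
print(max_u128)  # 340282366920938463463374607431768211455  (~340 undecillion)
```

Incidentally, this is also how 128-bit arithmetic is done today when it is needed: software (or compiler types like GCC's `__int128`) composes it from 64-bit operations, rather than the hardware offering a native 128-bit general-purpose register width.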
Data sovereignty and security driving hybrid IT adoption in Australia
According to Nutanix’s fifth global Enterprise Cloud Index survey, data
sovereignty was the top driver of infrastructure decisions in Australia, with
15% of local respondents citing it as the most important criterion when
considering infrastructure investments. Data sovereignty was also one of the top
three considerations for over a third (37%) of enterprises in Australia.
“Control and security are the biggest factors Australian organisations are
weighing up when transforming their IT infrastructure,” said Jim Steed, managing
director of Nutanix Australia and New Zealand. “While public cloud was seen as a
panacea for many years, it’s becoming increasingly clear that cloud is a tool –
not a destination. Some workloads and applications are perfectly suited to a
public cloud, but Australian organisations are moving their most sensitive and
business-critical workloads back home to their on-premises infrastructure.”
According to the study, over half of Australian organisations are planning to
repatriate some applications from the public cloud to on-premises datacentres in
the next 12 months due to data sovereignty concerns.
Quote for the day:
"Effective team leaders adjust their
style to provide what the group can't provide for itself." --
Kenneth Blanchard