AI and privacy: safeguarding your personal information in an age of intelligent systems
AI models, including chatbots and generative AI, rely on vast quantities of
training data. The more data an AI system can access, the more accurate its
models should be. The problem is that there are few, if any, controls over how
data is captured and used in these training models. With some AI tools
connecting directly to the public internet, that could easily include your data.
Then there is the question of what happens to queries from generative AI tools.
Each service has its own policy for how it collects and stores personal data,
as well as how it stores query results. Anyone who uses a public AI service
needs to be very careful about sharing either personal information, or sensitive
business data. New laws will control the use of AI; the European Union, for
example, plans to introduce its AI Act by the end of 2023. And individuals are,
to an extent, protected from the misuse of their data by the GDPR and other
privacy legislation. But security professionals need to take special care of
their confidential information.
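The caution above about sharing personal or sensitive data with public AI services can be made concrete by scrubbing obvious identifiers from a prompt before it leaves your environment. Below is a minimal, illustrative sketch; the regex patterns are assumptions and nowhere near a complete PII detector.

```python
import re

# Illustrative patterns only -- real PII detection needs far more than this.
# Order matters: SSN must run before the broader phone pattern.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace each match with a [LABEL] placeholder before sending the text on."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309 about the audit."
print(redact(prompt))
# -> Contact Jane at [EMAIL] or [PHONE] about the audit.
```

A scrubber like this is a last line of defence, not a substitute for the policy review each AI service's data-handling terms deserve.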
Are LLMs Leading DevOps Into a Tech Debt Trap?
That depends on how we use the expertise in the models. Instead of asking an
LLM to generate new code, we could ask it to interpret and modify existing code. For
the first time, we have tools to take down the “not invented here” barriers
we’ve created because of the high cognitive load of understanding code. If we
can help people work more effectively with existing code, then we can actually
converge and reuse our systems. By helping us expand and operate within our
working systems base, LLMs could actually help us maintain less code. Imagine if
the teams in your organization were invested in collaborating around shared
systems! We haven’t done this well so far because it takes significantly more
time and effort; LLMs have upended that calculation. Taking this
just one more step, we can see how improved reuse paves the way for reducing
the number of architectural patterns. If we improve our collaboration and
investment in sharing code, then there is increased ROI in making shared
patterns and platforms work. I see that as a tremendous opportunity for LLMs to
improve operations in a meaningful way.
EU-US Data Transfer Framework will be overturned within five years, says expert
The European Commission has adopted the adequacy decision for the EU-US Data
Privacy Framework after years of talks, but experts have indicated it will
struggle to uphold it in court. In its decision announced on 10 July, the
Commission found that the US upholds a level of protection comparable to that of
the EU when it comes to the transfer of personal data. Companies that comply
with the extensive requirements of the framework can access a streamlined path
for transferring data from the EU to the US without the need for extra data
protection measures. The framework is likely to face legal action and be
overturned, according to Nader Henein, research VP of privacy and data
protection at Gartner. “It takes one step closer to what the European Court of
Justice needs, but it takes one where the Court of Justice needs it to take
five, or ten steps,” Henein told ITPro. “Maximilian Schrems already said he was
going to do it, and if not him someone else will like the EFF or multiple
privacy groups. What we’re telling our clients is two to five years, depending
on who raises the request, when they raise it, and who they use.”
What Does the Patchless Cisco Vulnerability Mean for IT Teams, CIOs?
The lack of a patch or workaround for the vulnerability is not typical, and it
likely indicates a complex issue, according to Guenther. “It signifies that the
vulnerability may be deeply rooted in the design or implementation of the
affected feature,” she says. With no workarounds or forthcoming patch, what can
IT teams do in response to this vulnerability? Before taking a specific action,
IT teams need to consider whether this vulnerability impacts their organization.
“I have seen companies go into a panic, only to find out that a particular issue
didn’t really affect them,” says Alan Brill, senior managing director in the
Kroll Cyber Risk Practice and fellow of the Kroll Institute, a risk and
financial advisory solutions company. When determining potential impact, it is
important for IT teams to take a broad view. The vulnerability may not directly
impact an organization, but what about its supply chain? Third-party risk is an
important consideration. If an IT team determines that the vulnerability does
impact their organization, what is the risk level? How likely is threat actor
exploitation?
Internet has Become An AI Dumping Ground, No Solution in Sight
After realising the potential of generative AI models like GPT, people have
gone a step further and started filling websites with AI-generated junk to
attract paying advertisers, according to a report from the media research
organisation NewsGuard. The companies behind the models generating this
content have been vocal about the measures they are taking to deal with the
issue, but no concrete plan has yet been executed. According to the report,
more than 140 major brands are currently paying for advertisements that end up
on unreliable AI-written sites, likely without their knowledge. The report
further clarifies that the websites in question are presented in a way that
leads readers to assume the content is produced by human writers, because the
sites have the generic layout and content typical of news websites.
Furthermore, these websites do not clearly disclose that their content is
AI-produced. Hence, it is high time authorities stepped in and took charge of
monitoring not just false content but non-human-generated content as well.
Train AI models with your own data to mitigate risks
To be successful in their generative AI deployments, organizations should
fine-tune the AI model with their own data, Klein said. Companies that make
the effort to do this properly will move forward faster with their
implementation. Generative AI will prove more compelling if it is embedded
within an organization's data strategy and platform rather than used on its
own, he added. Depending on the use case, a common challenge companies face is
whether they have enough data of their own to train the AI model, he said. He
noted, however, that data quantity did not necessarily equate to data quality.
Data annotation is also important, as
is applying context to AI training models so the system churns out responses
that are more specific to the industry the business is in, he said. With data
annotation, individual components of the training data are labeled to enable AI
machines to understand what the data contains and what components are important.
Klein pointed to a common misconception that all AI systems are the same, which
is not the case.
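The data-annotation step described above can be pictured with a small sketch: individual components (spans) of each training example are labeled so downstream tooling knows what the data contains and which components are important. The label names and record below are hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class Span:
    start: int   # character offset where the labeled component begins
    end: int     # character offset one past the component's last character
    label: str   # domain-specific label -- hypothetical names below

def annotate(text: str, phrase: str, label: str) -> Span:
    """Locate a phrase in the text and record it as a labeled span."""
    start = text.index(phrase)
    return Span(start, start + len(phrase), label)

# A made-up industrial-domain training example with two labeled components.
record = "The turbine bearing overheated during the stress test."
spans = [
    annotate(record, "turbine bearing", "COMPONENT"),
    annotate(record, "overheated", "FAILURE_MODE"),
]
for s in spans:
    print(record[s.start:s.end], "->", s.label)
```

Labels like these give an industry-specific training signal, which is the kind of context Klein argues makes the model's responses more specific to the business.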
DevOps Has Won, Long Live the Platform Engineer
A decade ago, DevOps was a cultural phenomenon, with developers and operations
coming together and forming a joint alliance to break through silos. Fast
forward to today and we’ve seen DevOps further formalized with the emergence of
platform engineering. Under the platform-engineering umbrella, DevOps now has a
budget, a team and a set of self-service tools so developers can manage
operations more directly. The platform engineering team provides benefits that
can make Kubernetes a self-service tool, enhancing efficiency and speed of
development for hundreds of users. It’s another sign of the maturity and
ubiquity of Kubernetes. ... When a technology becomes ubiquitous, it starts to
become more invisible. Think about semiconductors, for example. They are
everywhere. They’ve advanced from micrometers to nanometers, from five
nanometers down to three. We use them in our remote controls, phones and cars,
but the chips are invisible and as end users, we just don’t think about them.
How Google Keeps Company Data Safe While Using Generative AI Chatbots
“We approach AI both boldly and responsibly, recognizing that all customers have
the right to complete control over how their data is used,” Google Cloud’s Vice
President of Engineering Behshad Behzadi told TechRepublic in an email. Google
Cloud makes three generative AI products: the contact center tool CCAI Platform,
the Generative AI App Builder and the Vertex AI portfolio, which is a suite of
tools for deploying and building machine learning models. Behzadi pointed out
that Google Cloud works to make sure its AI products’ “responses are grounded in
factuality and aligned to company brand, and that generative AI is tightly
integrated into existing business logic, data management and entitlements
regimes.” ... In late June 2023, Google announced a competition for something a
bit different: machine unlearning, or making sure sensitive data can be removed
from AI training sets to comply with global data regulation standards such as
the GDPR.
Understanding the Benefits of Computational Storage
The Storage Networking Industry Association (SNIA) defines computational storage
as “architectures that provide computational storage functions (CSFs) coupled to
storage, offloading host processing or reducing data movement.” The advantage of
computational storage over traditional storage, LaChapelle notes, is that it
pushes the computational requirement to handle data queries and processing
closer to the data, thereby reducing network traffic and offloading work from
compute CPUs. There are two general categories of computational storage: fixed
computational storage services (FCSS); and programmable computational storage
services (PCSS). “FCSS are optimized for specific, computationally intensive
tasks such as inline compression or encryption at the drive,” LaChapelle says.
... There are several different approaches to computational storage, such as the
integration of processing power into individual drives (in-situ processing), and
accelerators that sit on the storage bus at the storage controller, not in the
drives themselves.
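The data-movement saving LaChapelle describes can be illustrated with a toy model: the host either pulls every record across the bus and filters locally, or pushes the filter down to the drive and receives only the matches. The record size and selectivity below are illustrative assumptions, not figures from the article.

```python
# Toy model of predicate pushdown, the idea behind computational storage.
RECORD_SIZE = 4096                  # bytes per record (assumption)
records = list(range(100_000))      # stand-in for records on the drive
wanted = lambda r: r % 100 == 0     # 1% selectivity (assumption)

# Traditional path: every record crosses the bus; the host filters.
host_bytes = len(records) * RECORD_SIZE

# Computational-storage path: the drive filters; only matches move.
matches = [r for r in records if wanted(r)]
pushdown_bytes = len(matches) * RECORD_SIZE

print(f"host-side filter moves {host_bytes:,} bytes")
print(f"pushdown filter moves  {pushdown_bytes:,} bytes")
print(f"reduction: {100 * (1 - pushdown_bytes / host_bytes):.0f}%")
```

With these assumed numbers, pushing the filter to the drive moves 99% less data, which is the traffic-reduction and CPU-offload benefit the SNIA definition points to.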
Sustainable IT: A crisis needing leadership and change
IT leaders play a crucial role in spearheading sustainability initiatives
within their organizations, yet according to the non-profit SustainableIT.org,
one in four IT organizations are not supporting any ESG mandates. Why is this?
Implementation challenges could present a roadblock. A lack of standards to
follow to evaluate a company’s carbon footprint also presents challenges. In
fact, 50% of firms surveyed in the Capgemini report say they have an
enterprise-wide sustainability strategy, but only 18% have a strategy with
defined goals and target timelines. ... This is where IT leadership needs to
step up. IT leaders have the right relationships and are best positioned to
pioneer and champion this change. These leaders have the power to ask the
right questions, initiate process changes, and implement strategies that
foster a more environmentally-friendly business environment. For instance, IT
leaders can improve employee awareness surrounding sustainability and can
streamline data processes to optimize efficiency and reduce electricity
consumption.
Quote for the day:
"A good leader can't get too far ahead
of his followers" -- Franklin D. Roosevelt