How to navigate the co-management conundrum in MSP engagements
Ironically, enterprises can often suppress innovation by using MSPs
transactionally. If the enterprise team takes an active role in the delivery of services, it helps mitigate transactional thinking and fosters a more cooperative style from both parties. If the enterprise team behaves transactionally, focusing only on inputs, outputs, or reported results rather than working alongside the MSP, then the MSP team will eventually tend to behave more transactionally as well. This places an unwanted governor on good ideas and on the flexibility of the combined resources. This doesn't mean there is no need for a robust management framework, including a statement of work (SOW) in which commitments are clearly articulated. However, even if obligations ultimately sit with the MSP, co-managing some of the task inputs or signoffs under an SOW can sometimes lead to more pragmatic, dispute-avoiding working practices.
ChatGPT and data protection laws: Compliance challenges for businesses
ChatGPT is not exempt from data protection laws, such as the General Data
Protection Regulation (GDPR), the Health Insurance Portability and
Accountability Act (HIPAA), the Payment Card Industry Data Security Standard
(PCI DSS), and the Consumer Privacy Protection Act (CPPA). Many data protection
laws require explicit user consent for the collection and use of personal data.
... By utilizing ChatGPT and sharing personal information with a third-party
organization like OpenAI, businesses relinquish control over how that data is
stored and used. This lack of control increases the risk of non-compliance with
consent requirements and exposes businesses to regulatory penalties and legal
consequences. Additionally, data subjects have the right to request the erasure
of their personal data under the GDPR’s “right to be forgotten.” When using
ChatGPT without the proper safeguards in place, businesses lose control of their
information and no longer have mechanisms in place to promptly and thoroughly
respond to such requests and delete any personal data associated with the data
subject.
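To illustrate one possible safeguard, here is a minimal, hypothetical Python sketch of redacting obvious identifiers before a prompt ever leaves the organization. The regex patterns and placeholder labels are assumptions made up for illustration; a production pipeline would rely on proper PII-detection tooling and keep an auditable record so that erasure requests can still be honored.

```python
import re

# Hypothetical redaction pass: mask obvious identifiers before a prompt is sent
# to a third-party service. The patterns below are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarise the complaint from jane.doe@example.com, phone +44 20 7946 0958."
    print(redact(prompt))  # identifiers are masked before anything leaves the organization
```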
Navigating Cloud Costs and Egress: Insights on Enterprise Cloud Conversations
One of the things that we’ve seen at the enterprise scale is not just cloud
egress cost, but the combination of cloud spend and being able to predict spend
has been a constant topic of conversation. With the economic downturn, one of
the things that we’re seeing is definitely more control over where money is
being spent. I wouldn’t say it’s specifically about egress costs. ... The point
that I’m trying to make is that it goes both ways. Setting aside the effect of the economic downturn and looking at the trend over a longer period of time – not just the last one or two years – the more sophisticated an organization is in its capability to operate multiple environments, like on-prem plus a cloud, or two clouds, the more likely it is not to buy into the “all-in” cloud. ... A lot of times what
we heard from our clients was “I want to be on a cloud. On-prem data centers are
done.” But I think about two or three years back is when we saw a wave of
conversations in between. [They said] “Okay, I realize that all-in on cloud is
not going to be my future.”
Hijacked S3 buckets used in attacks on npm packages
This latest threat is part of a growing trend of groups looking at the software
supply chain as an easy way to deploy their malware and quickly have it reach a
broad base of potential victims. Through attacks on npm and other repositories
like GitHub, Python Package Index (PyPI), and RubyGems, miscreants look to place
their malicious code in packages that are then downloaded by developers and used
in their applications. In this case, they found their way in via abandoned S3 buckets, part of AWS's object storage service, which enables organizations to store and retrieve huge amounts of data – files, documents, and images, among other digital content – in the cloud. Buckets are accessed via unique URLs and used for such jobs as hosting websites and backing up data. The bignum package used node-gyp, a command-line tool written in Node.js, to download a binary file that was initially hosted in an S3 bucket. If the bucket couldn't be accessed, the package would fall back to looking for the binary locally. "However, an
unidentified attacker noticed the sudden abandonment of a once-active AWS
bucket," Nachshon wrote.
Ending the ‘forever war’ against shadow IT
First, CIOs should establish a quick-reaction team (QRT) that deals only with
these small projects that user departments are looking to achieve — especially
when it comes to leveraging AI. The QRT needs to be an elite group within IT
comprising members who understand the risks of data manipulation, are well
versed in security pitfalls, and follow developments in AI enough to know its
opportunities and pitfalls. It would be the mission of this group to analyze the requirements and ensure that data access is secure and that the user understands the nature of the data being accessed. The QRT would also need to analyze the parameters of the work to be done to ensure that the results are not already available from an existing source. It would also determine whether the software is compatible with the existing corporate network. This becomes even
more critical if, at some point, the company wishes to scale the application to
serve the entire corporation. Second, the shadow IT policy must be understood
and enforced by the IT steering committee.
Your AI coding assistant is a hot mess
As Reeve’s wasted hours of bug-hunting attest, AI tools certainly aren’t
foolproof. They’re often trained on open-source code, which frequently contains
bugs – mistakes that the assistant is prone to replicating. They’re also
notoriously prone to wild delusions, a fact, says Desrosiers, that
cybercriminals can use to their advantage. AI coding assistants are liable to
occasionally make up the existence of entire coding libraries. “Malicious actors
can detect these hallucinations and launch malicious libraries with these
names,” he says, “putting at risk people who let these hallucinated libraries
execute in their production environment.” Careful oversight, says Desrosiers, is
the only solution. That, too, can be facilitated by AI. “To de-risk this and
other potential issues [at Visceral], we build single-purpose autonomous coding
assistants to monitor for such threats,” says Desrosiers. David Mertz says it’s
always important to not be too trusting. “From a security perspective, you just
can’t trust code,” says the author and long-time Python programmer.
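A lightweight control against hallucinated dependency names is to sanity-check a suggested package before it is ever installed. The sketch below is an assumption-laden Python example that queries the public PyPI JSON API (https://pypi.org/pypi/<name>/json); the "more than one release" heuristic is arbitrary, and a real control would also consider maintainers, package age, and an internal allow-list.

```python
import json
import urllib.error
import urllib.request

def looks_plausible(package: str) -> bool:
    """Return False for names that don't exist on PyPI; flag thin, single-release packages."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError:
        return False  # no such package: possibly a hallucinated name an attacker could register
    # Arbitrary heuristic: treat one-release packages with extra suspicion.
    return len(meta.get("releases", {})) > 1

if __name__ == "__main__":
    for name in ("requests", "definitely-not-a-real-package-xyz"):
        print(name, looks_plausible(name))
```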
Apple beefs up enterprise identity, device management
It’s important to note that account-driven user enrollment was largely
designed as a way for users to enroll their personal devices into MDM, while
corporate devices are typically managed with a more traditional profile-based
enrollment that gives IT more access and management options. Apple is now offering account-driven device enrollment, which adds capabilities for IT while providing a user experience similar to account-driven user enrollment. ... Along
with improving the enrollment options, Managed Apple IDs will get more
management capabilities. There are two major additions. The first is to
control which types of managed devices a user is allowed to access: any device
regardless of ownership, only managed devices enrolled via MDM, or only
devices that are Supervised. Supervised devices are company-owned and have
stringent management controls. The second is the
ability to control which iCloud services a user can access on a managed
device. Each sync service can be enabled or disabled for a user’s Managed
Apple ID.
Prime minister Rishi Sunak faces pressure from banks to force tech firms to pay for online fraud
In response to the TSB CEO’s letter last week, a Meta spokesperson said in a
statement: “This is an industry-wide issue and scammers are using increasingly
sophisticated methods to defraud people in a range of ways, including email,
SMS and offline. We don’t want anyone to fall victim to these criminals, which
is why our platforms already have systems to block scams, financial services
advertisers now have to be FCA authorised to target UK users and we run
consumer awareness campaigns on how to spot fraudulent behaviour. People can
also report this content in a few simple clicks and we work with the police to
support their investigations.” But, in the letter to Sunak, banks said they
want the tech companies to stop fraud on their platforms and to contribute to
refunds for victims. They also called for a public register showing the
failure of tech giants to stop scams. The letter warned that the high level of
fraud was “having a material impact on how attractive the wider UK financial
sector is perceived by inward investors, which as we know, is critical for the
health of the City of London and wider UK economy”.
Why assessing third parties for security risk is still an unsolved problem
The challenge that TPRM companies have is rather simple: Provide a mechanism
for companies that do business with other companies to evaluate the risk that
their vendors present to them, from a cybersecurity perspective.
SecurityScorecard and its primary competitor, BitSight, use a similar
methodology: Create a risk score (sort of like your credit score), evaluate
companies, and score them. ... The credit reporting agencies, for better or
worse, have much more data than the TPRM scoring companies. They’re embedded
throughout our financial system, collecting a lot of information that
shouldn’t be publicly available. The TPRM scoring companies, on the other
hand, are doing the equivalent of drive-by appraisals. They look at the
outside of businesses on the internet and decide how reputable they are based
on their external appearances. Of course, certain business types will look
more secure than others. The alternative to TPRM scoring is, sadly, the TPRM
questionnaire industry, which is only marginally less unhelpful. This is an
industry focused on shipping massive questionnaires to vendors, which take
huge efforts to fill out.
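To make the "drive-by appraisal" point concrete, here is a deliberately crude, entirely invented Python sketch of an outside-in score. The signals, weights, and total are made up and are not how SecurityScorecard or BitSight compute their ratings; the takeaway is that only externally observable signals ever enter the calculation, so the result says little about a vendor's internal controls.

```python
# Invented signals and weights, for illustration only.
EXTERNAL_SIGNALS = {
    "tls_grade_ok": 20,      # modern TLS configuration on public endpoints
    "no_exposed_admin": 25,  # no admin panels or RDP reachable from the internet
    "dns_hygiene": 15,       # SPF/DMARC records present
    "patched_services": 25,  # no banner-detectable end-of-life software
    "no_leaked_creds": 15,   # no credentials found in public dumps
}

def drive_by_score(observations: dict[str, bool]) -> int:
    """Sum the weights of whatever looks fine from the outside."""
    return sum(weight for signal, weight in EXTERNAL_SIGNALS.items()
               if observations.get(signal, False))

if __name__ == "__main__":
    print(drive_by_score({"tls_grade_ok": True, "dns_hygiene": True}))  # 35 out of a possible 100
```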
Debugging Production: eBPF Chaos
Tools and platforms based on eBPF provide great insights and help with debugging production incidents. These tools and platforms will need to prove their strengths and reveal their weaknesses, for example by attempting to break or attack the infrastructure environments and observing how the tool or platform behaves. To start, let's focus on observability and chaos engineering. The
Golden Signals (Latency, Traffic, Errors, Saturation) can be verified using
existing chaos experiments that inject CPU/Memory stress tests, TCP delays,
DNS random responses, etc. ... Continuous Profiling with Parca uses eBPF to
auto-instrument code, so that developers don’t need to modify the code to add
profiling calls, helping them stay focused. The Parca agent generates profiling data with insights into call stacks and function call times, and generally helps identify performance bottlenecks in applications. Adding CPU/Memory stress
tests influences the application behavior, can unveil race conditions and
deadlocks, and helps to get an idea of what we are actually trying to
optimize.
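As a minimal stand-in for the kind of CPU stress experiment described above (not tied to a specific tool such as Chaos Mesh or stress-ng), the Python sketch below saturates every core for a short window; while it runs, an eBPF-based profiler such as the Parca agent and the Golden Signal dashboards can be watched for the expected saturation and latency response. The duration and per-core worker count are arbitrary choices.

```python
import multiprocessing
import time

def burn(seconds: float) -> None:
    """Busy-loop for the given duration to saturate one CPU core."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass

def inject_cpu_stress(seconds: float = 30.0) -> None:
    """Spawn one busy-looping worker per core, then wait for them to finish."""
    workers = [multiprocessing.Process(target=burn, args=(seconds,))
               for _ in range(multiprocessing.cpu_count())]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()

if __name__ == "__main__":
    inject_cpu_stress(5.0)  # short stress window; observe saturation and latency while it runs
```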
Quote for the day:
"Uncertainty is a permanent part of
the leadership landscape. It never goes away." -- Andy Stanley