How critical infrastructure operators can secure OT data
OT data is foundational to critical areas of operations: a breach of OT systems
can disrupt core business processes and expose critical data. Many organisations
still need to mature in prioritising backup and data protection as part of their
security posture and planned response to a cyber attack. Based on research we
conducted in April 2022 of over 2,000 IT decision-makers and SecOps
professionals across the UK, US and Australia, only 54% of IT decision-makers
said backup and data protection was a top priority and a crucial capability,
while only 38% of SecOps respondents said the same. Many organisations focus on
“protect” controls to reduce the likelihood of a breach, but they also need
security controls that limit the impact of a breach. That means ensuring your
recovery capabilities can meet aggressive recovery time and recovery point
objectives, so that you can resume business operations while minimising the
impact of a ransomware attack.
Uber Open-Sourced Its Highly Scalable and Reliable Shuffle as a Service for Apache Spark
By default, Spark shuffles data on local machines, which becomes challenging at
very large scale (about 10,000 nodes at Uber's scale). At that scale of
operation, major reliability and scalability problems arise. One of the main
challenges of running Spark at Uber scale is system reliability. Machines
generate terabytes of shuffle data every day, which wears out disk SSDs faster
than intended because they are not designed or optimized for such heavy IO
workloads: SSDs expected to last about three years wear out in roughly six
months under heavy Spark shuffling. Shuffle operations also fail frequently,
which further reduces system reliability. The other challenge in this area is
scalability. Applications can produce more data than fits on a single machine,
which causes full-disk exceptions.
... To resolve these issues, engineers at Uber architected and designed the
Remote Shuffle Service (RSS), shown in the following diagrams. It solves the
reliability and scalability problems of standard Spark shuffle operations.
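For readers who want to picture how a remote shuffle service plugs into a job, here is a minimal PySpark sketch. The plugin class name and server setting are assumptions for illustration, not Uber's documented configuration; the open-sourced uber/RemoteShuffleService README has the authoritative keys.

```python
# Minimal PySpark sketch: wiring an external (remote) shuffle manager into a Spark job.
# The plugin class and the server list below are illustrative assumptions, not Uber's exact
# settings; consult the uber/RemoteShuffleService README for the real configuration keys.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rss-demo")
    # Replace the default sort-based shuffle with a remote shuffle manager (assumed class name).
    .config("spark.shuffle.manager", "org.apache.spark.shuffle.RssShuffleManager")
    # Hypothetical setting pointing executors at the remote shuffle servers.
    .config("spark.shuffle.rss.serverSequence", "rss-host-1:9338,rss-host-2:9338")
    .getOrCreate()
)

# Any wide transformation (groupBy, join, repartition) now writes shuffle blocks to the
# remote servers instead of local SSDs, so losing an executor no longer loses shuffle data.
df = spark.range(0, 1_000_000).withColumnRenamed("id", "key")
counts = df.groupBy((df.key % 100).alias("bucket")).count()
counts.show(5)
```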
SMS-Based Multi-Factor Authentication: What Could Go Wrong? Plenty
“We call it smishmash because it’s a mashup of techniques,” explains Olofsson.
“SMS for two-factor authentication [2FA] is broken. This is not news; it’s
been broken since the inception. It was never intended for this use. We’ve
been spoofing text messages since as long as we’ve been hacking. It’s just
that now we’re seeing weaponization.” Text messages have a higher implicit
trust than email scams, and hence a higher success rate, he notes. Olofsson
reviewed several newsworthy breaches involving smishing and 2FA, including a
major theft of NFTs from OpenSea. “We see a huge increase in the number of
smishing attacks,” he says. “How many of you have got an unsolicited text in
the last week? Your phone numbers are increasingly being leaked.” “What we
have done [is combine] a search of the clear-net and darknet to create a huge
database,” says Byström. “Doing this research, we got so much spam,” adds
Olofsson. “Even ‘do you want to buy the Black Hat attendee list?’ We got the
price down below $100.”
Sloppy Use of Machine Learning Is Causing a ‘Reproducibility Crisis’ in Science
Kapoor and Narayanan warn that AI’s impact on scientific research has been
less than stellar in many instances. When the pair surveyed areas of science
where machine learning was applied, they found that other researchers had
identified errors in 329 studies that relied on machine learning, across a
range of fields. Kapoor says that many researchers are rushing to use machine
learning without a comprehensive understanding of its techniques and their
limitations. Dabbling with the technology has become much easier, in part
because the tech industry has rushed to offer AI tools and tutorials designed
to lure newcomers, often with the goal of promoting cloud platforms and
services. “The idea that you can take a four-hour online course and then use
machine learning in your scientific research has become so overblown,” Kapoor
says. “People have not stopped to think about where things can potentially go
wrong.” Excitement around AI’s potential has prompted some scientists to bet
heavily on its use in research. Tonio Buonassisi, a professor at MIT who
researches novel solar cells, uses AI extensively to explore novel
materials.
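A common class of error behind such findings is data leakage, where information from the evaluation data seeps into training. A minimal sketch, assuming scikit-learn, of how fitting preprocessing before the train/test split can quietly inflate a reported score:

```python
# Minimal sketch (assuming scikit-learn) of a common reproducibility pitfall: data leakage.
# Scaling on the full dataset before splitting lets test-set statistics leak into training,
# which tends to inflate the reported score relative to the leak-free pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Leaky version: the scaler sees the test rows before the split.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)
leaky_score = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Correct version: split first, then fit preprocessing only on the training fold.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clean_score = model.fit(X_tr, y_tr).score(X_te, y_te)

print(f"with leakage: {leaky_score:.3f}  without leakage: {clean_score:.3f}")
```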
Why edge is eating the world
The edge is a distributed system, and when dealing with data in a distributed
system, the laws of the CAP theorem apply. The idea is that you will need to
make tradeoffs if you want your data to be strongly consistent; in other words,
once new data is written, no reader should ever see older data again. Such
strong consistency in a global setup is only possible if the different parts of
the distributed system reach consensus on what just happened, at least once.
That means a globally distributed database still needs at least one message
sent to all the other data centers around the world, which introduces
unavoidable latency. Even FaunaDB, a brilliant new distributed database, can't
get around this fact. Honestly, there's no such thing as a free lunch: if you
want strong consistency, you have to accept a certain latency overhead. Now you
might ask, “But do we always need strong consistency?” The answer is: it
depends. There are many applications that do not need strong consistency to
function. One of them is, for example, this petite online shop you might have
heard of: Amazon.
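To make the latency cost concrete, here is a toy Python sketch, not modelled on any real database: a "strong" write waits for every simulated region to acknowledge in turn, while an "eventual" write acknowledges after the local region and lets the others lag behind.

```python
# Toy sketch of the consistency/latency tradeoff: a write counts as strongly consistent only
# once every replica has applied it, so the caller pays the cross-region hops up front; an
# eventual write returns immediately and remote readers may briefly see stale data.
import time

REPLICA_RTT = {"us-east": 0.01, "eu-west": 0.08, "ap-south": 0.12}  # assumed round trips (s)

class Replica:
    def __init__(self, rtt: float):
        self.rtt, self.value = rtt, None
    def apply(self, value: str) -> None:
        time.sleep(self.rtt)      # simulate the network hop to this region
        self.value = value

replicas = {name: Replica(rtt) for name, rtt in REPLICA_RTT.items()}

def strong_write(value: str) -> float:
    start = time.perf_counter()
    for r in replicas.values():   # wait for every region (sequentially, in this toy) before acking
        r.apply(value)
    return time.perf_counter() - start

def eventual_write(value: str) -> float:
    start = time.perf_counter()
    replicas["us-east"].apply(value)   # ack after the local region; the others lag behind
    return time.perf_counter() - start

print(f"strong write acked after   {strong_write('v1') * 1000:.0f} ms (no stale reads)")
print(f"eventual write acked after {eventual_write('v2') * 1000:.0f} ms "
      f"(eu-west still reads {replicas['eu-west'].value!r})")
```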
How To Protect Yourself With A More Secure Kind Of Multi-Factor Authentication
According to the Cybersecurity and Infrastructure Security Agency,
“Multi-factor authentication is a layered approach to securing data and
applications where a system requires a user to present a combination of two or
more credentials to verify a user’s identity for login.” When we log in to an
online account, we are aiming to thwart an attacker or hacker by adding extra
layers of verification, or locks. ... First, let’s talk about the marketing
of MFA. If your MFA provider touts itself as unhackable or 99% unhackable, it
is spouting multi-factor B.S. and you should find another provider. All MFA is
hackable. The goal is to have a less hackable, more phishing-resistant, more
resilient MFA. Registering a phone number leaves the MFA vulnerable to
SIM-swapping. If your MFA does not have a good backup mechanism, then that MFA
option is vulnerable to loss. ... Multi-factor authentication is more securely
accomplished with an authenticator app, smart card or hardware key, like a
YubiKey. So if you have an app-based or hardware MFA, you’re good, right?
Well, no.
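For context, the codes an authenticator app shows are typically time-based one-time passwords (TOTP, RFC 6238), derived from a secret shared with the server rather than delivered over the phone network. A minimal standard-library sketch; the base32 secret is a made-up example:

```python
# Minimal sketch of the TOTP scheme (RFC 6238) that authenticator apps implement,
# using only the Python standard library. The base32 secret below is a made-up example;
# real secrets are generated when you enroll a device.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step shared with the server
    msg = struct.pack(">Q", counter)                # counter as a big-endian 8-byte integer
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"   # example secret; both the app and the server hold a copy
print("current code:", totp(secret))
```

Because nothing travels over SMS, SIM-swapping gains an attacker nothing here, although phishing the six-digit code in real time is still possible, which is why the excerpt goes on to say that app-based MFA alone is not the end of the story.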
Met Police ramps up facial recognition despite ongoing concerns
Russell acknowledges that there are exceptional circumstances in which LFR could
be reasonably deployed – for instance, under the threat of an imminent terrorist
attack – but says the technology is ripe for abuse, especially in the context of
poor governance combined with concerns over the MPS’s internal culture raised
by the policing inspectorate, which made the “unprecedented” decision to place
the force on “special measures” in June 2022 over a litany of systemic failings.
“While there are many police officers who have public service rippled through
them, we have also seen over these last months and years of revelations about
what’s been going on in the Met, that there are officers who are racist, who
have been behaving in ways that are completely inappropriate, with images [and]
WhatsApp messages being shared that are racist, misogynist, sexist and
homophobic,” she said, adding that the prevalence of such officers continuing to
operate unidentified adds to the risks of the technology being abused when it is
deployed.
Many ZTNA, MFA Tools Offer Little Protection Against Cookie Session Hijacking Attacks
The researchers recently examined technologies from Okta, Slack, Monday, GitHub,
and dozens of other companies to see what protection they offered against
attackers using stolen session cookies to take over accounts, impersonate
legitimate users, and move laterally in compromised environments. ... Okta
described such attacks as an issue for which it was not directly responsible.
"As a web application, Okta relies on the security of the browser and operating
system environment to protect against endpoint attacks such as malicious browser
plugins or cookie stealing," Mesh quoted Okta as saying. Most of the other
vendors that Mesh contacted about the issue similarly distanced themselves from
any responsibility for cookie theft, reuse, and session-hijacking attacks, says
Netanel Azoulay, co-founder and CEO of Mesh Security. "We believe that this
issue is the complete responsibility of the vendors on our list — including IdP
and ZTNA solutions," Azoulay insists.
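For background, the browser-side mitigations most web apps rely on only make session cookies harder to steal; none of them stop an attacker from replaying a cookie that has already been exfiltrated, which is the gap the researchers are pointing at. A minimal sketch, assuming Flask, chosen purely for illustration and not taken from the Mesh report:

```python
# Minimal Flask sketch (an assumption for illustration) showing standard cookie-hardening
# attributes. They reduce how easily a session cookie can be stolen (no JavaScript access,
# HTTPS only, limited cross-site sending) but do nothing once an attacker already holds a
# valid cookie, which is the session-hijacking gap described above.
import secrets

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    session_id = secrets.token_urlsafe(32)          # opaque random session identifier
    resp = make_response({"status": "ok"})
    resp.set_cookie(
        "session",
        session_id,
        httponly=True,       # not readable by page scripts (blunts XSS-based theft)
        secure=True,         # sent only over HTTPS
        samesite="Strict",   # not attached to cross-site requests
        max_age=3600,        # short lifetime shrinks the replay window
    )
    return resp

if __name__ == "__main__":
    app.run()
```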
Edge computing: 4 pillars for CIOs and IT leaders
By definition, edge computing sort of takes the notion of a centralized IT
network environment and shatters it into hundreds or even thousands (or more) of
smaller environments. Picture the classic image of a room full of servers, but
now every server on every rack sits in its own room – or in many cases no room
at all, but on an oil rig or manufacturing floor or cell tower. Almost
regardless of your edge use cases, it’s going to entail moving lots of the stuff
that has long been the domain of IT – infrastructure/compute, devices,
applications, data – away from your IT environment, however that’s currently
defined. Properly managing all of that stuff requires some forethought. “You’re
probably going to have a lot of devices out on the edge and there probably isn’t
much in the way of local IT staff there,” says Gordon Haff, technology
evangelist, Red Hat. “So automation and management are essential for tasks like
mass configuration, taking actions in response to events, and centralized
application updates.”
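As a rough illustration of the mass-configuration task Haff describes, the sketch below pushes a desired state to a fleet of edge nodes in parallel. The management endpoint and node list are invented for the example; a real deployment would more likely use Ansible, a GitOps agent, or a fleet-management product.

```python
# Illustrative sketch only: mass-applying a desired configuration to many edge nodes over a
# hypothetical management API (the /v1/config endpoint and node URLs are invented for this
# example; real fleets typically rely on tools such as Ansible or a GitOps agent instead).
import json
from concurrent.futures import ThreadPoolExecutor

import requests

DESIRED_CONFIG = {"telemetry_interval_s": 30, "app_version": "1.4.2"}
EDGE_NODES = ["https://edge-node-01.example.net", "https://edge-node-02.example.net"]

def push_config(node_url: str) -> tuple[str, bool]:
    """Send the desired state to one node; the node is expected to reconcile itself."""
    try:
        resp = requests.put(f"{node_url}/v1/config", data=json.dumps(DESIRED_CONFIG),
                            headers={"Content-Type": "application/json"}, timeout=10)
        return node_url, resp.ok
    except requests.RequestException:
        return node_url, False   # unreachable nodes get picked up by a later reconciliation pass

with ThreadPoolExecutor(max_workers=16) as pool:
    for url, ok in pool.map(push_config, EDGE_NODES):
        print(f"{url}: {'updated' if ok else 'failed'}")
```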
CIOs Turn to the Cloud as Tech Budgets Come Under Scrutiny
Although investment in cloud tech is booming, CIOs should also be keeping a
critical eye on managing cloud costs, which can quickly spiral out of control.
To ensure that cloud costs are properly controlled, it is important for CIOs to
have tools that enable them to tightly monitor and act on unused resources --
there are no cost benefits if these idle resources remain on the cloud balance
sheet. JupiterOne CISO Sounil Yu says the engineering team should shut down
these resources soon after they become idle and rebuild the resources through
automation when they are needed again. “CIOs should enforce this routine because
in addition to reducing costs, it improves the overall resiliency of the
organization to unexpected failures since it forces engineers to practice
rebuilding regularly,” he says. Dennis Monner, chief commercial officer at
Aryaka, agrees that cloud investment is going up, and points out that there are
two parts to this. “First, CIOs need to understand their true cloud costs versus bringing
it back in-house, which also introduces risk and expenses,” he said. “This needs
to be a true apples-to-apples comparison.”
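The idle-resource routine Yu describes can be almost entirely scripted. A minimal sketch, assuming AWS with boto3; the CPU threshold and lookback window are arbitrary illustrative choices:

```python
# Illustrative sketch, assuming AWS and boto3: stop EC2 instances that have been effectively
# idle, trusting automation (e.g. infrastructure-as-code) to rebuild them when needed again.
# The threshold and lookback below are arbitrary examples, not recommendations.
from datetime import datetime, timedelta, timezone

import boto3

CPU_IDLE_THRESHOLD = 2.0        # percent average CPU treated as "idle"
LOOKBACK = timedelta(days=3)

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

idle_instances = []
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2", MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - LOOKBACK, EndTime=now, Period=3600, Statistics=["Average"],
        )["Datapoints"]
        # Treat the instance as idle if every hourly average over the lookback stayed low.
        if datapoints and max(p["Average"] for p in datapoints) < CPU_IDLE_THRESHOLD:
            idle_instances.append(instance_id)

if idle_instances:
    print("Stopping idle instances:", idle_instances)
    ec2.stop_instances(InstanceIds=idle_instances)   # rebuilt later via automation when needed
```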
Quote for the day:
"Leadership is a matter of having
people look at you and gain confidence, seeing how you react. If you're in
control, they're in control." -- Tom Landry