Why Phishing-Resistant MFA Is on US Government Fast Track
Many government agencies employ some type of MFA. But the Biden administration's
guidelines call for all agencies to implement stronger security. While legacy
MFA is more secure than a username and password alone, it rests on the
assumption that adding a second device or second factor necessarily improves
security. It's not that simple.
Most legacy MFA uses a combination of a password and a "something you have"
factor. That "something you have" comes into play when implementing the second
factor: a one-time code delivered via a physical token, a text message, or an
email sent to the user. But adding a secondary device or channel is, at
best, much harder to secure and, at worst, impossible to secure. Phishing
campaigns can often phish the additional codes or conduct a man-in-the-middle
attack on the authentication sequence, as recent breaches at Uber and Cisco
made clear. The biggest issue, however, is that most MFA solutions
rely on shared secrets, like passwords, and provide no security context that
ties back to the end user and their device.
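To make the shared-secret weakness concrete, here is a minimal Python sketch (not from the article) contrasting the two models: an RFC 6238 one-time code derived from a shared secret, which anyone who phishes the secret or the code can abuse, and a WebAuthn-style signature that binds the response to the site's origin. The `cryptography` package and all names here are illustrative assumptions.

import base64
import hashlib
import hmac
import struct
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Legacy MFA (RFC 6238): whoever holds or phishes the shared secret can mint valid codes."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret: any party holding it produces the same code

# Phishing-resistant model: the private key never leaves the device, and the
# signature covers the origin, so an assertion captured on a look-alike site
# fails verification for the real one.
device_key = Ed25519PrivateKey.generate()
server_registered_pub = device_key.public_key()  # enrolled once with the server

challenge = b"random-server-nonce"
origin = b"https://real-site.example"
assertion = device_key.sign(challenge + origin)

# Succeeds only when both challenge and origin match; raises InvalidSignature otherwise.
server_registered_pub.verify(assertion, challenge + origin)
print("assertion verified for", origin.decode())

The code that matters here is the signature's coverage of the origin: there is no reusable secret for an attacker to relay, which is the property the article's "security context" phrase points at.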
AI's true goal may no longer be intelligence
To be sure, the question of genuine intelligence does still matter to a handful
of thinkers. In the past month, ZDNET has interviewed two prominent scholars who
are very much concerned with that question. Yann LeCun, chief AI scientist at
Facebook owner Meta Platforms, spoke at length with ZDNET about a paper he put
out this summer as a kind of think piece on where AI needs to go. LeCun
expressed concern that the dominant work of deep learning today, if it simply
pursues its present course, will not achieve what he refers to as "true"
intelligence, which includes things such as an ability for a computer system to
plan a course of action using common sense. LeCun expressed an engineer's
concern that without true intelligence, such programs will ultimately prove
brittle, meaning they could break before they ever do what we want them to do.
... The field of AI is undergoing a shift in attitude. It used to be the case
that every achievement of an AI program, no matter how good, would be received
with the skeptical remark, "Well, but that doesn't mean it's
intelligent."
Building the Future of TensorFlow
We see the growth of TensorFlow not just as an achievement to celebrate, but as
an opportunity to go further and deliver more value for the machine learning
community. Our goal is to provide the best machine learning platform on the
planet. Software that will become a new superpower in the toolbox of every
developer. Software that will turn machine learning from a niche craft into an
industry as mature as web development. To achieve this, we listen to the needs
of our users, anticipate new industry trends, iterate on our APIs, and work to
make it increasingly easy for you to innovate at scale. In the same way that
TensorFlow originally helped the rise of deep learning, we want to continue to
facilitate the evolution of machine learning by giving you the platform that
lets you push the boundaries of what's possible. Machine learning is evolving
rapidly, and so is TensorFlow. Today, we're excited to announce we've started
working on the next iteration of TensorFlow that will enable the next decade of
machine learning development. We are building on TensorFlow's class-leading
capabilities and focusing on four pillars.
Europe Prepares to Rewrite the Rules of the Internet
Next week, a law takes effect that will change the internet forever—and make it
much more difficult to be a tech giant. On November 1, the European Union’s
Digital Markets Act comes into force, starting the clock on a process expected
to force Amazon, Google, and Meta to make their platforms more open and
interoperable in 2023. That could bring major changes to what people can do with
their devices and apps, in a new reminder that Europe has regulated tech
companies much more actively than the US. “We expect the consequences to be
significant,” says Gerard de Graaf, a veteran EU official who helped pass the
DMA early this year. Last month, he became director of a new EU office in San
Francisco, established in part to explain the law’s consequences to big tech
companies. De Graaf says they will be forced to break open their walled gardens.
“If you have an iPhone, you should be able to download apps not just from the
App Store [but] from other app stores or from the internet,” de Graaf says, in a
conference room with emerald green accents at the Irish consulate in San
Francisco where the EU’s office is initially located.
Data analytics pipeline best practices: Data governance
It's not surprising that all-in-one pipeline automation has become a holy grail
for some platform providers. Many enterprises share the same cloud providers,
the same department-level SaaSes, and the same types of de facto standard
databases. The clear logic behind an all-in-one platform like Gathr, for
example, is that companies will often need the same connectors or "operators,"
much of the same drag-and-drop machine learning process assembly, and the same
sorts of choices among ETL, ELT, and ingestion capabilities. Unifying all this
functionality could mean less work for data and analytics teams. But enterprises
should remember that the compulsion to subscribe to yet another SaaS extends to
these platforms. Engineers in one business unit might gravitate to a Gathr,
while others might favor an Alteryx to map together sources a BI platform might
need, or a super SaaS like OneSaaS that allows simplified mixing and matching
within the OneSaaS environment.
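As a concrete illustration of the ETL-versus-ELT choice mentioned above, here is a small Python sketch; pandas, SQLAlchemy, SQLite, and the table and column names are all stand-in assumptions, not anything Gathr or Alteryx actually exposes.

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///warehouse.db")  # placeholder warehouse

raw = pd.DataFrame({"amount_cents": [1250, 899], "country": ["us", "de"]})

# ETL: transform in the pipeline, then load only the cleaned result.
etl = raw.assign(amount=raw.amount_cents / 100, country=raw.country.str.upper())
etl[["amount", "country"]].to_sql("orders_clean", engine, if_exists="replace", index=False)

# ELT: load the raw data first, then transform inside the warehouse with SQL.
raw.to_sql("orders_raw", engine, if_exists="replace", index=False)
with engine.begin() as conn:
    conn.exec_driver_sql(
        "CREATE TABLE IF NOT EXISTS orders_clean2 AS "
        "SELECT amount_cents / 100.0 AS amount, UPPER(country) AS country "
        "FROM orders_raw"
    )

The difference is only where the transform runs, in the pipeline or in the warehouse, but it drives which connectors and compute a platform must provide.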
Study Shows Cybersecurity Hype Complicates the Security Stack, Expands the Attack Surface
According to the cybersecurity hype report, muddled marketing by vendors has
left most security leaders confused: 91% of decision-makers
found it difficult to select cybersecurity vendors due to unclear marketing
about their specific offerings. Additionally, 49% of security leaders said their
organization suffers from vendor sprawl, resulting in an increased attack
surface. Consequently, 92% of organizations implement a defense-in-depth
strategy and have to manage between 10 and 30 different security products.
Defense-in-depth aims to create more technological layers to detect, prevent,
contain, remediate, and recover from attacks. In a noisy marketplace filled with
unsubstantiated claims, users cannot accurately predict the effectiveness of the
hyped solutions, nor do they have the time to do so. ... “Buyers are faced
with a crowded and complex market, needing to continually layer new security
products into their environment to achieve defense-in-depth, assess new and
emerging AI technologies, and continually re-invest in SA&T.”
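For illustration only, here is a toy Python sketch of the defense-in-depth idea the report describes: several independently purchased controls each get a veto, so one product's miss can still be caught by the next layer. The layer names and rules are invented, not drawn from the report.

def network_filter(event):  # layer 1: block known-bad sources
    return event["src_ip"] not in {"203.0.113.9"}

def endpoint_agent(event):  # layer 2: flag suspicious processes
    return event["process"] != "mimikatz.exe"

def dlp_scan(event):        # layer 3: stop sensitive data leaving
    return "ssn=" not in event["payload"]

LAYERS = [network_filter, endpoint_agent, dlp_scan]

def allow(event) -> bool:
    # Every layer must pass; any single detection blocks the event.
    return all(layer(event) for layer in LAYERS)

print(allow({"src_ip": "198.51.100.7", "process": "chrome.exe", "payload": "hello"}))   # True
print(allow({"src_ip": "198.51.100.7", "process": "mimikatz.exe", "payload": "x"}))     # False

The report's complaint follows directly: when an organization runs 10 to 30 such layers from different vendors, each with opaque claims, evaluating what any one layer truly adds becomes impractical.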
The Power of Independent Thinking in Leading
The first step in thinking for oneself is self-awareness. When you understand
your values, motives, and aspirations, thinking becomes automatic. Knowing your
strengths and weaknesses, you can selectively apply the knowledge you gained by
reading or the wisdom of others. Thinking for oneself doesn’t mean you ignore
all the knowledge you have gained on the subject. Instead, you question what
your current knowledge tells you. Cultivate your thinking using mental models,
which explain how things work. James Clear, the author of the best-seller
Atomic Habits, describes many mental models in his blog “Mental Models: Learn
How to Think Better and Gain a Mental Edge.” One of these mental models is
inversion. To apply inversion, imagine your most crucial project has failed six
months from now and ask yourself how it could have failed. The exercise
surfaces the risks you need to watch for, and lets you plan mitigations that
keep the project on track. Thinking and doing go hand
in hand. Put your thinking into action. Take the learning and refine your
knowledge.
Keeping the cloud secure with sovereignty in mind
Securing your cloud service supply requires not only data controls but also
access to legal controls. As such, hyperscalers have started adapting
how they deploy cloud services to give nation states assurance — essentially
meaning that cloud services are deployed in partnership with a local
organisation. This has given rise to sovereign partnerships that license the
hyperscaler technology, and are delivered by suppliers under the local legal
framework. This pragmatic approach has slowly become more common in recent
months, and helps overcome many of the risks associated with using cloud,
particularly its assurance of service supply. Despite this, one of the biggest
barriers to cloud is the current regulatory landscape surrounding how certain
sectors need to control data sovereignty and how that data is securely
processed. Each industry faces its own long list of requirements that must be
fulfilled before services can shift onto the cloud.
The Arguments for Open Source in Mainframes
The arguments for OSS on the mainframe are in many cases the same as for OSS on
any other platform -- more accessible, often more secure, easier to develop.
“These arguments are from the same development teams who push for OSS elsewhere
in the environment,” says Mike Parkin, senior technical engineer at Vulcan
Cyber. “The major differences are when the implementation is specific to the
mainframe environment.” ... Parkin adds there has been a trend to use mainframe
platforms for virtualization, essentially replacing a rack of commodity-class
servers with a single Big Iron machine that can do the job more efficiently and
effectively. “Those are ideal use cases for open-source software at multiple
levels, from the guest operating systems to the application layers,” he says.
Boris Cipot, senior security engineer at Synopsys Software Integrity Group, a
provider of integrated software solutions, agrees that open source can bring
fresher and better integrations into today’s working processes and tools, and
enable companies to focus on their work and not re-create existing software
functionality.
Why We Need A Cyber Intelligence Revolution
Unfortunately, the challenges many organizations face include narrowing down
which intelligence sources to pull from, deciding how many can be leveraged at
a time, and determining how they're integrated into firewalls and other
security solutions. No
one source of threat intelligence or existing security control can successfully
cover the entirety of the threat landscape. It is critical for organizations to
deploy threat intelligence from multiple sources, even those that traditionally
would compete with one another. These can include commercial providers, open
source intelligence data, government agencies and industry sources—all working
together to provide organizations with visibility into the traffic affecting
their networks. The data is in and the results are clear: What we don't know in
the cybersecurity world can hurt us. Thankfully, there are steps your
organization—regardless of size—can take to help ensure your network, users and
data are protected.
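As a rough sketch of what combining feeds might look like in practice, the hypothetical Python below merges indicator lists from several source types and ranks each indicator by how many independent sources corroborate it; the feed names and IPs (documentation ranges) are invented.

from collections import defaultdict

feeds = {
    "commercial": ["198.51.100.7", "203.0.113.9"],
    "osint":      ["203.0.113.9", "192.0.2.44"],
    "government": ["192.0.2.44"],
}

# Indicators seen by multiple independent sources earn higher confidence.
sources_by_ioc = defaultdict(set)
for source, indicators in feeds.items():
    for ioc in indicators:
        sources_by_ioc[ioc].add(source)

blocklist = sorted(sources_by_ioc, key=lambda i: -len(sources_by_ioc[i]))
for ioc in blocklist:
    print(ioc, "confidence:", len(sources_by_ioc[ioc]), "source(s)")

Even this toy version shows why competing feeds are worth combining: corroboration across sources is a signal no single provider can supply on its own.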
Quote for the day:
"You may be good. You may even be better than
everyone else. But without a coach you will never be as good as you could be."
-- Andy Stanley