Attackers find new ways to deliver DDoSes with “alarming” sophistication
The newer methods attempt to do two things: (1) conceal the maliciousness of
the traffic so defenders don’t block it and (2) deliver ever-larger traffic
floods that can overwhelm targets even when they have DDoS mitigations in
place. ... Another method on the rise is the exploitation of servers running
unpatched software for the Mitel MiCollab and MiVoice Business Express
collaboration systems, which act as a gateway for transferring PBX phone
communications to the Internet and vice versa. A vulnerability tracked as
CVE-2022-26143 stems from an unauthenticated UDP port the unpatched software
exposes to the public Internet. By flooding a vulnerable system with requests
that appear to come from the victim, the system in turn pummels the victim
with a payload that can be 4 billion times bigger. This amplification method
works by issuing what’s called a “startblast” debugging command, which
simulates a flurry of calls to test systems. “As a result, for each test call,
two UDP packets are sent to the issuer, enabling an attacker to direct this
traffic to any IP and port number to amplify a DDoS attack,” the Cloudflare
researchers wrote.
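To put the cited ratio in perspective, here is a back-of-the-envelope sketch in Python of what reflection at that scale means for a victim. Treating the roughly 4-billion-to-1 figure as a simple byte multiplier and assuming a 100-byte spoofed request are simplifications of mine, not measurements from the report.

```python
# Illustrative arithmetic only: how much reflected traffic a single small
# spoofed request could trigger at a given amplification ratio.

def reflected_bytes(request_bytes: int, amplification_ratio: float) -> float:
    """Return the approximate volume of traffic a reflector would aim at the victim."""
    return request_bytes * amplification_ratio

request = 100                 # hypothetical spoofed UDP request size, in bytes
ratio = 4_000_000_000         # "4 billion times bigger", per the article
victim_traffic = reflected_bytes(request, ratio)
print(f"{request} B request -> ~{victim_traffic / 1e12:.1f} TB aimed at the victim")
```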
Overcoming user resistance to passwordless authentication
A passwordless platform can replace these siloed mechanisms with a single
experience that encompasses both biometric-based identity verification and
authentication. During initial onboarding, the system validates the integrity
of the device, captures biometric data (selfie, live selfie, fingerprint,
etc.) and can even verify government documents (driver’s license, passport,
etc.), which creates a private, reusable digital wallet that is stored in the
device TPM / secure enclave. ... For legacy systems that an organization can’t
or won’t migrate to passwordless, some passwordless platforms use facial
matching to reset or change passwords. This eliminates the friction associated
with legacy password reset tools that are often targeted by cybercriminals.
Some passwordless authentication platforms even support offline access when
internet access is not available or during a server outage. They can also
replace physical access tokens – such as building access cards – by allowing
users to authenticate via the same digital wallet that provides access to the
IT network.
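As a rough illustration of the onboarding flow described above, the sketch below wires the steps together in Python. Every function and field name is a hypothetical placeholder, not a vendor API; a real deployment would hand these steps to the device's TPM / secure enclave and a biometric SDK.

```python
# Minimal sketch of the passwordless onboarding flow; all functions are stubs.

from dataclasses import dataclass

@dataclass
class DigitalWallet:
    device_id: str
    biometric_ref: str        # handle to biometric data sealed in the secure enclave
    documents_verified: bool

def device_integrity_ok(device_id: str) -> bool:
    return True               # stub: real code would perform device attestation

def capture_biometric(device_id: str) -> str:
    return "enclave-ref-001"  # stub: real code would enroll a selfie/fingerprint template

def verify_government_id(device_id: str) -> bool:
    return True               # stub: optional driver's licence / passport check

def onboard_user(device_id: str) -> DigitalWallet:
    """Validate the device, enroll biometrics, and create the reusable wallet."""
    if not device_integrity_ok(device_id):
        raise RuntimeError("device failed integrity validation")
    wallet = DigitalWallet(device_id, capture_biometric(device_id), verify_government_id(device_id))
    # The wallet (really just references to sealed material) stays on the device.
    return wallet

print(onboard_user("laptop-42"))
```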
Apple eyes a late arrival to the generative AI party
Privacy isn’t just an advantage in consumer markets; it also matters within the
enterprise. Anxious to protect company data, major enterprises including
Apple, Samsung, and others have banned employees from using ChatGPT or GitHub
Copilot internally. The desire to use these tools exists, but not at the cost
of enterprise privacy. Within the context of Apple’s growing status in
enterprise IT, the eventual introduction of LLM services that can deliver
powerful results while also having privacy protection built in means the
company will be able to provide tools enterprise employees might be permitted
to use. Not only this, but those tools could end up displaying a degree of
personal contextual relevance that isn’t available elsewhere — without sharing
key personal data with others. So, there’s a lot of optimism; it is, after
all, not the first time Apple has appeared to be late to a party and then
delivered a better experience than available elsewhere. This optimism was
reflected swiftly by investors. While warning that the next iPhone may not
ship until October, Bank of America raised its Apple target to $210 per share
from $190.
Why — and how — high-performance computing technology is coming to your data center
Not long ago, conventional thinking was that high-performance computing was
only required for exceptionally data-intensive applications within select
industries — aerospace, oil and gas, and pharmaceuticals, for example, in
addition to supercomputing centers dedicated to solving large, complex
problems. This is no longer the case. As data volumes have exploded, many
organizations are tapping into these technologies and techniques to perform
essential functions. In a relatively short timeframe, they’ve gone from
believing they would never need anything beyond routine compute performance
capabilities, to depending on high-performance computing to fuel their
business success. ... In conjunction with AI and data analytics,
high-performance computing is powering entire industries whose existence depends on performing large-scale, mathematically intensive computations for a variety of needs, including faster business insights and results that drive
improved decision-making.
Backup in the age of cloud
While the 3-2-1 backup strategy originated at a time when 30GB hard drives and CD backups were prevalent, it has adapted to the present era of 18TB drives and widespread
cloud storage. The strategy's simplicity and effectiveness in safeguarding
valuable information, Sia says, has contributed to its popularity among data
protection experts. Many enterprises today have embraced the 3-2-1 concept,
with primary backups stored in a datacentre for quick recovery, and a second
copy kept on a different infrastructure to avoid a single point of failure,
says Daniel Tan, head of solution engineering for ASEAN, Japan, Korea and
Greater China at Commvault. “In addition, the same data could be uploaded to
an offsite cloud on a regular basis as the third online copy, which can be
switched offline if required, to provide an air gap that effectively protects
data from being destroyed, accessed, or manipulated in the event of a cyber
security attack or system failure.” Indeed, the cloud, with its geographical
and zone redundancy, flexibility, ease of use, and scalability, is an
increasingly important part of an organisation’s 3-2-1 backup strategy, which
remains relevant today.
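For readers who want to make the 3-2-1 rule concrete, here is a minimal Python sketch that checks a backup inventory against the criteria described above; the Copy structure and field names are illustrative, not any vendor's schema.

```python
# Toy check of a backup inventory against the 3-2-1 rule: at least 3 copies,
# on at least 2 different infrastructures, with at least 1 copy offsite
# (the offsite cloud copy can double as the air-gapped one).

from dataclasses import dataclass

@dataclass
class Copy:
    location: str       # e.g. "primary-dc", "secondary-dc", "cloud-region-a"
    medium: str         # e.g. "disk-array", "object-storage", "tape"
    offsite: bool
    air_gapped: bool = False

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    enough_copies = len(copies) >= 3
    distinct_media = len({c.medium for c in copies}) >= 2
    has_offsite = any(c.offsite for c in copies)
    return enough_copies and distinct_media and has_offsite

inventory = [
    Copy("primary-dc", "disk-array", offsite=False),
    Copy("secondary-dc", "object-storage", offsite=False),
    Copy("cloud-region-a", "object-storage", offsite=True, air_gapped=True),
]
print(satisfies_3_2_1(inventory))   # True
```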
Megatrend alert: The rise of ubiquitous computing
First, I get that cloud computing is also ubiquitous in architecture. However,
we use these resources as if they are centrally located, at least virtually.
Moving to a more ubiquitous model means we can leverage any connected platform
at any time for any purpose. This means processing and storage occur across
public clouds, your desktop computer, smartwatch, phone, or car. You get the
idea—anything that has a processor and/or storage. With a common abstracted
platform, we push applications and data out onto an abstracted space, which finds the best and most optimized platform to run on, or runs them across platforms as
distributed applications. For instance, we develop an application, design a
database on a public cloud platform, and push it to production. The
application and the data set are then pushed out to the best and most
optimized set of platforms. This could be the cloud, your desktop computer, your
car, or whatever, depending on what the application does and needs. Of course,
this is not revolutionary; we’ve been building complex distributed systems for
years.
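To make the "abstraction finds the best platform" idea a little more concrete, here is a deliberately naive Python sketch that places a workload on whichever candidate platform fits it with the least spare capacity; the platform catalogue and scoring rule are invented for illustration and do not model a real orchestrator.

```python
# Toy placement: pick the smallest platform that still satisfies the workload.

platforms = {
    "public-cloud": {"cpu": 64, "storage_gb": 100_000, "always_on": True},
    "desktop":      {"cpu": 8,  "storage_gb": 1_000,   "always_on": False},
    "car":          {"cpu": 4,  "storage_gb": 256,     "always_on": False},
}

def place(workload: dict) -> str:
    """Return the platform with the least spare CPU that can still host the workload."""
    candidates = [
        (name, caps["cpu"] - workload["cpu"])
        for name, caps in platforms.items()
        if caps["cpu"] >= workload["cpu"]
        and caps["storage_gb"] >= workload["storage_gb"]
        and (caps["always_on"] or not workload["needs_always_on"])
    ]
    if not candidates:
        raise ValueError("no platform can host this workload")
    return min(candidates, key=lambda c: c[1])[0]

print(place({"cpu": 2, "storage_gb": 50, "needs_always_on": False}))   # -> "car"
```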
MIT Makes Probability-Based Computing a Bit Brighter
At the heart of the team’s p-bit is a component called an optical parametric
oscillator (OPO), which is essentially a pair of mirrors that bounce light
back and forth between them. The light does not travel in a physical vacuum,
however, in the same sense that outer space is a vacuum. “We do not actually
pump a vacuum,” Roques-Carmes says. “In principle...it’s in the dark. We’re
not sending in any light. And so that’s what we call the vacuum state in
optics. There’s just no photon, on average, in the cavity.” When a laser is
pumped into the cavity, the light oscillates at a specific frequency. But each
time the device is powered up, the phase of the oscillation can take on one of
two states. Which state it settles on depends on quantum phenomena known as
vacuum fluctuations, which are inherently random. This
quantum effect is behind such well-observed phenomena as the Lamb shift of
atomic spectra and the Casimir and van der Waals forces found in nanosystems
and molecules, respectively. OPOs have previously been used to generate random
numbers, but for the first time the MIT team showed they could exert some
control over the randomness of the output.
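A toy simulation can show what a controllable p-bit buys you, even though it says nothing about the optics: below, a bias parameter stands in for the control the researchers exert over the OPO's otherwise random phase choice, and it is implemented here as nothing more than a weighted coin flip in Python.

```python
# Toy model of a p-bit, not the MIT device: each "run" settles into one of two
# phase states (0 or 1), with a programmable probability of landing in state 1.

import random

def pbit_sample(bias: float = 0.5) -> int:
    """Return 1 with probability `bias`, else 0 (a single p-bit readout)."""
    return 1 if random.random() < bias else 0

def estimate_bias(bias: float, shots: int = 10_000) -> float:
    """Average many readouts so the programmed probability becomes visible."""
    return sum(pbit_sample(bias) for _ in range(shots)) / shots

print(estimate_bias(0.5))   # ~0.5: fully random, like an uncontrolled oscillator
print(estimate_bias(0.8))   # ~0.8: randomness tilted toward one phase state
```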
5 ways CIOs can help eliminate a culture of busyness
As leaders, it’s crucial to prioritize outcomes achieved, especially in the
world of hybrid and remote work, adds Constantinides. “Rather than fixating on
the process, we should concentrate on the results,” she says. “An
outcome-based model provides employees with the confidence and autonomy to
excel in their work.” For her, this entails establishing clear expectations
and objectives, communicating them effectively, empowering teams with
accountability, measuring outcomes, and offering clear feedback. “I don’t think this is only a CIO issue; it’s a leadership issue,” says Thaver. In many
business environments, perceptions of busyness have existed for years.
Eliminating these ideas demands that leaders push a culture of learning,
unlearning and relearning so an environment is created where it’s possible,
and encouraged, for people to change bad habits. According to Naren
Gangavarapu, CIO at the Northern Beaches Council, CIOs must partner with the
leadership and other important business stakeholders to manage expectations
and make sure that outcomes are the most important metric for success.
Sophisticated HTTP and DNS DDoS attacks on the rise
The internet’s domain name system (DNS) that is responsible for translating
domain names into IP addresses has also been a frequent target for DDoS
attacks. In fact, in the last quarter more than 32% of all DDoS attacks observed and mitigated by Cloudflare were carried out over the DNS protocol. There are two types of
DNS servers: authoritative DNS servers that hold the collection of records for
a domain name and all its subdomains (known as a DNS zone) and recursive DNS
resolvers, which take DNS queries from end-users, look up which is the
authoritative server for the requested domain, query it and return the
response back to the requesting user. To make this process more efficient, DNS
resolvers cache the records they obtain from authoritative servers for a
period of time, so they don’t have to query the authoritative servers too
often for the same information. The time before cached records expire is configurable, and admins must strike a balance: a long expiry time means the DNS resolver might end up with outdated information about record changes made on the authoritative server, negatively impacting the experience of users that rely on it.
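The caching trade-off is easy to see in a minimal resolver sketch like the one below. For simplicity the TTL here is a single number configured on the resolver, whereas in real DNS it is carried on each record served by the authoritative server, and query_authoritative is a stand-in rather than an actual DNS client.

```python
# Minimal sketch of resolver-side caching: a longer TTL means fewer trips to
# the authoritative server but potentially staler answers.

import time

def query_authoritative(name: str) -> str:
    return "192.0.2.10"                 # placeholder answer (documentation IP range)

class CachingResolver:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.cache: dict[str, tuple[str, float]] = {}   # name -> (answer, expiry)

    def resolve(self, name: str) -> str:
        cached = self.cache.get(name)
        if cached and cached[1] > time.monotonic():
            return cached[0]                            # cache hit: no upstream query
        answer = query_authoritative(name)              # cache miss or expired record
        self.cache[name] = (answer, time.monotonic() + self.ttl)
        return answer

resolver = CachingResolver(ttl_seconds=300)             # 5-minute TTL, purely illustrative
print(resolver.resolve("example.com"))                  # queries the authoritative server
print(resolver.resolve("example.com"))                  # served from cache until the TTL expires
```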
How Will the New National Cybersecurity Strategy Be Implemented?
The National Cybersecurity Strategy is buttressed by five pillars. The first
focuses on defending critical infrastructure. Increasing public-private
collaboration is a big part of this strategic pillar. Joshua Corman, vice
president of cyber safety strategy at Claroty and former CISA chief
strategist, notes that this effort is being met with pushback in some cases.
“After a decade plus of largely voluntary practices, like the NIST CSF
[National Institute of Standards and Technology Cybersecurity Framework], some
sectors are unhappy with the … more muscular rebalancing of public good and
increased use of regulation,” he explains. Yet, the value of collaboration
among federal agencies, the private sector, and international partners is
clear. “This can lead to information sharing, knowledge exchange, and
coordinated efforts to combat cyber threats effectively,” says Nicole
Montgomery, cyber operations lead at IT service management company Accenture
Federal Services. Jeff Williams, co-founder and CTO of Contrast Security,
points out that this implementation plan represents a more proactive approach
to cybersecurity.
Quote for the day:
“If you're not prepared to be wrong, you'll never come up with anything
original.” -- Sir Ken Robinson