Researchers Identify ‘Master Problem’ Underlying All Cryptography
In the absence of proofs, cryptographers simply hope that the functions that
have survived attacks really are secure. Researchers don’t have a unified
approach to studying the security of these functions because each function
“comes from a different domain, from a different set of experts,” Ishai said.
Cryptographers have long wondered whether there is a less ad hoc approach. “Does
there exist some problem, just one master problem, that tells us whether
cryptography is possible?” Pass asked. Now he and Yanyi Liu, a graduate student
at Cornell, have shown that the answer is yes. The existence of true one-way
functions, they proved, depends on one of the oldest and most central problems
in another area of computer science called complexity theory, or computational
complexity. This problem, known as Kolmogorov complexity, concerns how hard it
is to tell the difference between random strings of numbers and strings that
contain some information. ... The finding suggests that instead of looking far
and wide for candidate one-way functions, cryptographers could just concentrate
their efforts on understanding Kolmogorov complexity. “It all hinges on this
problem,” Ishai said.
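Kolmogorov complexity itself is uncomputable, but compression gives a crude, computable proxy for the intuition in the excerpt: a patterned string has a short description, while a random one does not. A toy sketch, using zlib purely as an illustration (this is not the time-bounded notion from the Liu–Pass paper):

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size over original size: a rough, computable
    stand-in for how 'describable' a string is."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"ab" * 5000        # highly patterned: short description
random_ish = os.urandom(10000)   # essentially incompressible

print(compressed_ratio(structured))  # far below 1
print(compressed_ratio(random_ish))  # around 1 or slightly above
```

The gap between the two ratios is the everyday version of the distinction the excerpt describes: telling information-bearing strings apart from random ones.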
4 Reasons Decentralized Business Management Is Booming
Organizations face employee churn all the time, whether due to a lack of
challenging work or dissatisfaction with the company's overall direction. Both
of these reasons are interconnected. An inflexible organizational hierarchy
leaves employees fighting to impress their managers instead of creating
revenue-generating assets. With power consolidated in the hands of a few,
leadership skills are scarce. Thus, when top-level executives move on, the
company faces a tough time replacing those who departed and must engage
resources to locate and vet suitable leadership. Promoting from within is ideal
because long-term employees understand the company and its products well.
They've witnessed the company's processes from the ground up, which makes them
ideal leaders. However, centralized organizations don't provide low-level
employees with the opportunity to ascend to leadership roles. A decentralized
organization forces employees to act as leaders. Thanks to greater autonomy and
a priority on responsiveness, employees must act decisively. Intrapreneurship
increases, creativity flourishes, and the organization is energized.
DeFi can breathe new life into traditional assets
Tokenization of commodities enables blockchain-based ownership of a physical
asset, which is essentially just a decentralized version of an already-existing
practice in traditional finance. Tokenized precious metals are somewhat similar
conceptually to a share in a gold exchange-traded fund (ETF), as they represent
the investor’s stake in physical gold stored elsewhere and largely work toward
the same purpose. Projects like VNX offer digital ownership of tokenized
commodities backed by physical assets, including gold, giving the
investor the same benefits as investing in physical gold with the added
versatility of a crypto asset. Stablecoins are also a viable
option, allowing investors to reap the benefits of decentralization while
maintaining the security of traditional finance. Backing from fiat and other
real-world assets removes the common fear that crypto has no basis. Stablecoins
like TrustToken (TUSD) grant investors more certainty and flexibility, lowering
the stakes for any user by enabling easy redeeming of their funds at any given
moment.
Chinese APT Targets Global Firms in Monthslong Attack
The campaign, which began in October 2019, targeted Japanese firms and their
subsidiaries in 17 locations across the world, Symantec said in its report. The
focus of the campaign was to exfiltrate data, particularly from automotive
organizations, as part of an industrial cyberespionage effort. The APT
group used a custom malware variant called Backdoor.Hartup as well as
"living off the land" tools to target its victims. Once the victim's network was
compromised, the hackers remained active for up to a year to exfiltrate data.
Cicada then used a Dynamic Link Library (DLL) side-loading technique to compromise the
victims' domain controllers and file servers. "Various tools (were) deployed in
this campaign, and Cicada’s past activity indicates that the most likely goal of
this campaign is espionage. Cicada activity was linked by U.S. government
officials to the Chinese government in 2018," the latest report says. Upon
successfully gaining access to victim machines, the Symantec researchers
observed APT actors deploying a custom loader and the SodaMaster backdoor.
First malware targeting AWS Lambda serverless platform disclosed
The researchers have dubbed the malware “Denonia” — the name of the domain
that the attackers communicated with — and say that it was utilized to enable
cryptocurrency mining. But the arrival of malware targeting AWS Lambda
suggests that more damaging cyberattacks against the service are
inevitable as well. Cado Security said it has reported its findings to AWS.
In a statement in response to an inquiry about the reported malware discovery,
AWS said that “Lambda is secure by default, and AWS continues to operate as
designed.” ... Cado Security cofounder and CTO Chris Doman said that
businesses should expect that serverless environments will follow a similar
threat trajectory to that of container environments, which he noted are now
commonly impacted by malware attacks. Among other things, that means that
threat detection in serverless environments will need to catch up, Doman said.
“The new way of running code in serverless environments requires new security
tools, because the existing ones simply don’t have that visibility. They won’t
see what’s going on,” Doman said. “It’s just so different.”
Why We’re Porting Our Database Drivers to Async Rust
Similar to the way Python relies on modules compiled in C to make other
modules faster, our CQL drivers could benefit from a Rust
core. A lightweight API layer would ensure that the drivers are still backward
compatible with their previous versions, but the new ones will delegate as
much work as possible straight to the Rust driver, trusting that it’s going to
perform the job faster and safer. Rust’s asynchronous model is a great fit for
implementing high-performance, low-latency database drivers because it’s
scalable and allows high concurrency in your applications. Contrary to what
other languages implement, Rust abstracts away the layer responsible for
running asynchronous tasks. This layer is called the runtime. Being able to
select, or even implement, your own runtime is a powerful tool for developers.
After careful research, we picked Tokio as our runtime due to its active open
source community, its focus on performance, and its rich feature set, including
complete implementations of network streams, timers, and more, plus fantastic
utilities like tokio-console.
How David Chaum Went From Inventing Digital Cash to Pioneering Digital Privacy
Shocked by the surveillance operations exposed by Edward Snowden, Chaum
refined the mixing technologies developed at the end of the 1970s to provide
untraceable message sending, using sophisticated cryptography not only to
encrypt the content of messages but also to hide the identity of the user by
eliminating the "metadata" of who sends messages to whom, how often and from
where. Chaum is horrified by the promises of “end-to-end” message content
encryption offered by companies such as Meta (formerly Facebook). It leaves
user metadata intact, which means it can still be harvested and sold, he
warns. “It's criminal. It's exploitative of the public in the worst way,” says
Chaum. “Because the real value in the information is the traffic data,” and
“the sender's social graph and its relation to the timing of events,” he
says—it could be used to predict our behavior and to further political ends
(as was the case in the Cambridge Analytica scandal).
Reproducibility in Deep Learning and Smooth Activations
The Smooth ReLU (SmeLU) activation function is designed as a simple function
that addresses the concerns with other smooth activations. It connects a 0
slope on the left with a slope-1 line on the right through a quadratic middle
region, ensuring continuous gradients at the connection points (as an
asymmetric version of a Huber loss function). SmeLU can be viewed as a
convolution of ReLU with a box. It provides a cheap and simple smooth solution
that is comparable in reproducibility-accuracy tradeoffs to more
computationally expensive and complex smooth activations. The figure below
illustrates the transition of the loss (objective) surface as we gradually
transition from a non-smooth ReLU to a smoother SmeLU. A transition of width 0
is the basic ReLU function for which the loss objective has many local minima.
As the transition region widens (SmeLU), the loss surface becomes smoother. If
the transition is too wide, i.e., too smooth, the benefit of using a deep
network wanes and we approach the linear model solution — the objective
surface flattens, potentially losing the ability of the network to express
much information.
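The piecewise shape described above is compact enough to sketch in a few lines of NumPy. Here beta is the half-width of the quadratic transition region (the "width" knob the excerpt discusses); treat this as an illustration of the published formula, not a reference implementation:

```python
import numpy as np

def smelu(x, beta=1.0):
    """Smooth ReLU: 0 for x <= -beta, x for x >= beta, and a
    quadratic join (x + beta)^2 / (4 * beta) in between, which
    makes the gradient continuous at both connection points."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= -beta, 0.0,
                    np.where(x >= beta, x,
                             (x + beta) ** 2 / (4 * beta)))

print(smelu(-2.0))  # 0.0  (flat left region)
print(smelu(0.0))   # 0.25 (inside the quadratic join)
print(smelu(2.0))   # 2.0  (identity on the right)
```

As beta shrinks toward 0 the function converges to plain ReLU; making beta very large flattens the nonlinearity toward a near-quadratic/linear regime, which is the over-smoothing failure mode the excerpt warns about.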
The security implications of the hybrid working mega-trend
Ultimately, any high-level security model comes down to a trust issue: can I
trust the employee, the devices, and the applications the employee is trying
to connect to? In the middle is the
network, but today, more often than not, the network is the internet. Think
about it. Employees sit in coffee shops and log onto public browsers to access
their email. So now what organisations are looking for is a secure solution
for their applications, devices, and users. Every trusted or ‘would-be
trusted’ end-user computing device has security software installed on it by
the enterprise IT department. That software makes sure the device and the user
on the device are validated, so the device becomes the proxy to talk to
the applications on the corporate network. So now the challenge lies in
securing the application itself. Today’s cloud infrastructure connects the
user directly to the application, so there is no need to have the user connect
via an enterprise server or network. The client is always treated as an
outsider, even while sitting in a corporate office.
The Principles of Test Automation
The only way to reliably find errors is to build a comprehensive automated
test suite. Tests can check the whole application from top to bottom. They
catch errors before they can do any harm, find regressions, and run the
application on various devices and environments at a scale that is otherwise
prohibitively expensive to attempt manually. Even if everyone on the team were
an exceptionally clever developer who somehow never made a mistake,
third-party dependencies could still introduce errors and pose risks. Automated
tests can scan every line of code in the project for errors and security
issues. ... Some tests start their lives as manual tests and get automated
down the road. But, more often than not, this results in overcomplicated,
slow, and awkward tests. The best results come when tests and code have a
certain synergy. The act of writing a test nudges developers to produce more
modular code, which in turn makes tests simpler and more granular. Test
simplicity is important because it’s not practical to write tests for tests.
Test code should also be straightforward to read and write. Otherwise, we risk
introducing failures in the tests themselves, leading to false positives and
flakiness.
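The "synergy" the excerpt describes can be made concrete with a toy sketch (the function and test names here are invented for illustration): a small, pure function needs no setup, mocks, or fixtures, so its test stays simple, granular, and hard to get wrong.

```python
# Modular code: a pure function with explicit inputs and outputs.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100), rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The test mirrors the code's simplicity: no environment, no state,
# just inputs and expected outputs -- nothing to be flaky about.
def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

Code that instead reached into a database or global state would force the test to build that world first, which is exactly where overcomplicated, slow, and awkward tests come from.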
Quote for the day:
"Without courage, it doesn't matter
how good the leader's intentions are." -- Orrin Woodward