Preparing for the Upcoming Quantum Computing Revolution
The primary challenge to successful quantum computing lies within the technology
itself. In contrast to classical computers, a quantum computer employs quantum
bits, or qubits, which can be both 0 and 1 at the same time, Jagannathan says.
Such two-way states give quantum computers their power, yet even the slightest
interaction with their surroundings can create distortion. "Correcting these
errors, known as quantum error correction (QEC), is the biggest challenge and
progress has been slower than anticipated," he says. There's also an important
and possibly highly destructive aspect to quantum technology. "In addition to
[a] wide range of benefits . . . it is also expected that [cybercriminals] will
someday be able to break public key algorithms that serve as a basis for many
cryptographic operations, like encryption or digital signatures," says Colin
Soutar, managing director and cyber and strategic risk leader with Deloitte
& Touche. "It's important that organizations carefully understand what
exposure they may have to this [threat] so that they can start to take
mitigation steps and not let security concerns overshadow the positive potential
of quantum computing," says Soutar.
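One concrete way to start understanding that exposure is to inventory where quantum-vulnerable public-key algorithms (RSA and elliptic-curve keys, both breakable by Shor's algorithm on a large enough machine) are used today. The Python sketch below is only an illustration of such an inventory, using the third-party cryptography package; the certificate directory and the reporting logic are assumptions made for this example, not anything prescribed by Deloitte.

# Hypothetical crypto-exposure inventory: flag certificates whose public
# keys rely on algorithms a large-scale quantum computer could break.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

CERT_DIR = Path("certs")  # assumed location of PEM-encoded certificates

for pem in CERT_DIR.glob("*.pem"):
    cert = x509.load_pem_x509_certificate(pem.read_bytes())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        print(f"{pem.name}: RSA-{key.key_size} -- quantum-vulnerable")
    elif isinstance(key, ec.EllipticCurvePublicKey):
        print(f"{pem.name}: EC {key.curve.name} -- quantum-vulnerable")
    else:
        print(f"{pem.name}: {type(key).__name__} -- review separately")

An inventory like this is only the first mitigation step; migrating the flagged systems to post-quantum algorithms is the longer-term work Soutar alludes to.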
DataOps Goes Mainstream As Atlan Lands Big
Data drives business growth and provides valuable insights that inform decision
making. As enterprises scale, many challenges surface. For instance,
professionals such as data scientists, analysts, and engineers join with
different skill sets and tools. Different people, different tools, different
working styles – all of these lead to a major bottleneck. Business units are in
dire need of data management that creates contextual insights; now is the time
to improve the quality and speed of data streaming into the organisation and to
secure leadership commitment to support and sustain a data-driven vision across
the company. This is where DataOps (data
operations) comes in handy. For instance, users can integrate their tables from
Databricks with Atlan in a series of steps. Initially there are some
prerequisites for establishing a connection between Atlan and a Databricks
account: go to the Databricks console and select “Clusters” from the left
sidebar; select the cluster you want to connect with Atlan (the cluster
should be in a Running state for the Atlan crawler to fetch metadata from
it); then click “Advanced Options” in the “Configuration” tab.
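The "Running state" prerequisite can also be checked programmatically before kicking off a crawl. The sketch below calls the Databricks REST API (GET /api/2.0/clusters/get) from Python; the workspace host, personal access token, and cluster ID are placeholders, and this is only an illustration of the prerequisite check, not part of Atlan's own connection flow.

import os
import requests

# Placeholders -- substitute your own workspace host, token, and cluster ID.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = os.environ["DATABRICKS_TOKEN"]
CLUSTER_ID = "<cluster-id>"

resp = requests.get(
    f"{DATABRICKS_HOST}/api/2.0/clusters/get",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"cluster_id": CLUSTER_ID},
    timeout=30,
)
resp.raise_for_status()
state = resp.json().get("state")

# The Atlan crawler can only fetch metadata while the cluster is RUNNING.
print(f"Cluster {CLUSTER_ID} is {state}")
if state != "RUNNING":
    print("Start the cluster before running the Atlan crawler.")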
Ransomware-as-a-service: How DarkSide and other gangs get into systems to hijack data
They're offering a service and they sit somewhere on the darker side of the
internet and they offer what's called ransomware-as-a-service. They recruit
affiliates or essentially sub-contractors who come in, who use their platform
and then attack companies. And in the case of DarkSide, if you actually logged
into the infrastructure and took a look at it, which is something we in the
research community actively do, they had a very polished operation. They provide
technical support for their affiliates who are breaking into companies. They
provide monetization controls so that an affiliate can go in and see how much
has been paid and what's outstanding and manage the money and all that. They're
basically run like companies, and that's the challenge with ransomware now: it's
moved from this sort of opportunistic thing, where there were a few criminals
scattered around the world doing this, to being these as-a-service operations
that basically mean any enterprising criminal can get access to ransomware for,
I've seen it for less than $100, and then use that to infect stuff. And
obviously at the lower end, you're talking about things that aren't very
sophisticated. The problem is it doesn't need to be sophisticated.
3 Methods to Reduce Overfitting of Machine Learning Models
The most robust method to reduce overfitting is to collect more data. The more data
we have, the easier it is to explore and model the underlying structure. The
methods we will discuss in this article are based on the assumption that it is
not possible to collect more data. Since we cannot get any more data, we should
make the most out of what we have. Cross-validation is one way of doing so. In a
typical machine learning workflow, we split the data into training and test
subsets. In some cases, we also put aside a separate set for validation. The
model is trained on the training set. Then, its performance is measured on the
test set. Thus, we evaluate the model on previously unseen data. In this
scenario, we cannot use the test portion of the dataset for training; we are, in
a sense, wasting it. Cross-validation allows every observation to be used for
both training and testing. Ensemble models consist of many small (i.e. weak) learners. The
overall model tends to be more robust and accurate than the individual ones. The
risk of overfitting also decreases when we use ensemble models. The most
commonly used ensemble models are random forest and gradient boosted decision
trees.
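As a concrete illustration of both ideas, here is a brief scikit-learn sketch; the synthetic dataset, fold count, and hyperparameters are arbitrary choices for the example. k-fold cross-validation scores a random forest and a gradient-boosted model so that every observation serves in training folds and, once, in a test fold.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a dataset that cannot be enlarged.
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "gradient_boosting": GradientBoostingClassifier(random_state=42),
}

for name, model in models.items():
    # 5-fold cross-validation: each observation is used for training in four
    # folds and for testing in the remaining one.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")

Averaging the fold scores gives a more reliable estimate of generalization error than a single train/test split, and the ensemble models themselves tend to overfit less than any of their individual weak learners.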
IT’s silent career killer: Age discrimination
There is a widespread misconception in most industries that older employees
are not “digital savvy” and are afraid to learn new things when it comes to
technology, Miklas adds. “This assumption often leads to decisions that can
result in being sued for age discrimination, especially when the older worker
is passed over for promotion, not hired, or terminated,” he says. One issue
that arises more in age discrimination claims than other types of
discrimination is an employer’s use of selection criteria for hiring,
promotion, or layoff decisions that are susceptible to assumptions about age,
says Raymond Peeler, director of the Coordination Division, Office of Legal
Counsel at the U.S. Equal Employment Opportunity Commission (EEOC). “For
example, an employer[’s] determinations about workers based on ‘energy,’
‘flexibility,’ ‘criticality,’ or ‘long-term concerns’ are susceptible to
assumptions based on the age of the worker,” Peeler says. The EEOC is
responsible for enforcing federal laws that make it illegal to discriminate
against job applicants or employees because of a person’s race, color,
religion, sex, national origin, disability, genetic information, or age.
Helium Network combines 5G, blockchain and cryptocurrency
Self-described as ‘The People’s Network,’ the existing LoRa-based Helium
Network is live with 28,000+ hotspot devices deployed in over 3,800 cities
worldwide, and there are 200,000+ hotspot devices on backorder from various
manufacturers. Helium aims to take that experience and apply it to a new tier
of 5G connectivity that is enabled by the unique CBRS spectrum, 3550 MHz-3700
MHz, which the US Federal Communications Commission has made available on
three tiers of access, two of which are open to non-government users. Though
the Priority Access level is licensed, General Authorized Access permits open
access for the widest group of potential users and use cases. Using gateways
from Helium partner FreedomFi, hotspot hosts – including individual consumers
– will have the option to earn Helium’s own HNT cryptocurrency, in part by
offloading carrier cellular traffic to their 5G hotspots. The FreedomFi
Gateways will be compatible with Helium’s existing open-source blockchain and
IoT network and will by default act as Helium hotspots, also mining rewards
for proof of coverage and data transfers on the IoT network.
Abu Dhabi could achieve technological sovereignty thanks to quantum computing, says expert
In a panel discussion on whether UAE fintech is going global, Ellen Moeller,
head of EMEA partnerships at Stripe, a San Francisco-based company that offers
software to manage online payments, said key areas of interest for fintechs
included ensuring that transactions were a “very frictionless experience” for
consumers. “They’re used to calling a taxi from the touch of a button,” she
said. “Why shouldn’t it be so simple when we’re talking about financial
services? There’s a lot of opportunity for innovation for fintech. “The final
piece is regulators and central banks embracing this innovation. I think we’ve
only scratched the surface of fintech innovation and there’s lots more to
come.” She added that the UAE “has all the right ingredients” to be a
world-class technology and fintech hub, including a deep pool of talent and a
good investment climate. “We’ve seen the UAE do a remarkable job at fostering
fintech,” she added. The region is seeing rapid growth in the number of tech
start-ups in a range of fields, according to Vijay Tirathrai, managing
director of Techstars, a company in the US state of Colorado that supports
tech start-ups.
A Quantum Leap for Quantum Computing
Quantum computers are expected to greatly outperform the most powerful
conventional computers on certain tasks, such as modeling complex chemical
processes, finding large prime numbers, and designing new molecules
that have applications in medicine. These computers store quantum
information in the form of quantum bits, or qubits — quantum systems that can
exist in two different states. For quantum computers to be truly powerful,
however, they need to be “scalable,” meaning they must be able to scale up to
include many more qubits, making it possible to solve some challenging
problems. “The goal of this collaborative project is to establish a novel
platform for quantum computing that is truly scalable up to many qubits,” said
Boerge Hemmerling, an assistant professor of physics and astronomy at UC
Riverside and the lead principal investigator of the three-year project.
“Current quantum computing technology is far away from experimentally
controlling the large number of qubits required for fault-tolerant computing.
...”
Everyone Wants to Build a Cyber Range: Should You?
The most compelling reason for building a cyber range is that it is one of the
best ways to improve the coordination and experience level of your team.
Experience and practice enhance teamwork and provide the necessary background
for smart decision-making during a real cyberattack. Cyber ranges are one of
the best ways to run real attack scenarios and immerse the team in a live
response exercise. An additional reason to have access to a cyber range is
that many compliance certifications and insurance policies require mandatory
cyber training to varying degrees. These are driven by mandates and compliance
standards established by the National Institute of Standards and Technology
and the International Organization for Standardization (ISO). With these
requirements in place, organizations are compelled to free up budgets for
relevant cyber training. There are different ways to fulfill these training
requirements. Depending on their role in the company, employees can be required
to obtain certifications from organizations such as the SANS Institute.
The biggest diversity, equity and inclusion trends in tech
It’s important to take a look at the hiring strategy and make sure that it
attracts a diverse talent pool. Nabila Salem, president at Revolent Group,
commented: “For the tech industry, there is more than just a moral imperative
to solve the issue of missing equity. The lack of diversity within the tech
sector also compounds a very real business challenge for organisations: a
lack of available talent. “The consequences of not plugging this skills gap
are of great concern: GDP growth across the G20 nations could be stunted by as
much as $1.5 trillion over the next decade, if companies refuse to adapt to
the needs that tech presents to us. “One way to overcome this is to invest in
new, diverse talent to help solve both the skills gap and the lack of
representation in tech. New, innovative programs like the Salesforce training
provided by Revolent specialise in fuelling the market with the diverse,
highly skilled new talent it so desperately needs. “There is an opportunity
here, to address the issue of a lack of representation and an overall skills
gap, all at once. Companies must be open to the idea that the average
applicant is not as homogeneous as they think. ...”
Shifting to Continuous Documentation as a New Approach for Code Knowledge
Continuously verifying documentation means making sure that the current state
of the documentation matches the current state of the codebase, as the code
evolves. In order to keep the docs in sync with the codebase, existing
documentation needs to be checked against the current state of the code
continuously and automatically. If the documentation diverges from the current
state of the code, the documentation should be modified to reflect the updated
state (automatically or manually). Continuously verifying documentation means
that developers can trust their documentation and know that what’s written
there is still relevant and valid, or at least get a clear indication that a
certain part of it is no longer valid. In this sense, Continuous Documentation
is very much like continuous integration - it makes sure the documentation is
always correct, similar to verifying that all the tests pass. This could be
done on every commit, push, merge, or any other version control mechanism.
Without it, keeping documentation up-to-date and accurate is extremely hard,
and requires manual work that needs to be repeated regularly.
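As a rough illustration of what such an automated check could look like on every commit, here is a small Python sketch. The directory layout (docs/ and src/), the convention that backticked names in Markdown docs must still appear somewhere in the code, and the exit-code behaviour are all assumptions made for this example, not a description of any particular tool.

# Minimal sketch of continuously verifying docs against code.
# Intended to run from a pre-commit hook or a CI job on every commit.
import re
import sys
from pathlib import Path

DOCS_DIR = Path("docs")  # assumed location of Markdown documentation
SRC_DIR = Path("src")    # assumed location of the codebase

# Collect the full source text once so lookups are cheap.
source_text = "\n".join(
    p.read_text(errors="ignore") for p in SRC_DIR.rglob("*.py")
)

stale = []
for doc in DOCS_DIR.rglob("*.md"):
    # Any `backticked_identifier` in the docs is expected to still exist in the code.
    for name in re.findall(r"`([A-Za-z_][A-Za-z0-9_.]*)`", doc.read_text()):
        if name not in source_text:
            stale.append((doc, name))

for doc, name in stale:
    print(f"{doc}: `{name}` no longer found in {SRC_DIR}/")

# Fail the commit/build if the documentation has drifted from the code.
sys.exit(1 if stale else 0)

Wiring a check like this into version control is what makes the documentation trustworthy: a failing run is the clear indication, mentioned above, that a certain part of the docs is no longer valid.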
Quote for the day:
"Without courage, it doesn't matter
how good the leader's intentions are." -- Orrin Woodward