![](https://venturebeat.com/wp-content/uploads/2021/02/crowd_electromagnetic.jpg?w=800&strip=all)
The research team plans to examine public acceptance and ethical concerns around
the use of this technology. Such concerns would not be surprising; they conjure the Orwellian idea of the ‘thought police’ from 1984. In that novel, the Thought Police are experts at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never master learning exactly what a person is thinking. This is not the only thought-reading technology on the horizon with dystopian potential. “Crocodile,” an episode of Netflix’s series Black Mirror, portrayed a memory-reading technique used to investigate accidents for insurance purposes. The “corroborator” device used a square node placed on a victim’s temple, then displayed their memories of an event on screen. The investigator says the memories “may not be totally
accurate, and they’re often emotional. But by collecting a range of
recollections from yourself and any witnesses, we can help build a corroborative
picture.” If this seems farfetched, consider that researchers at Kyoto
University in Japan developed a method to “see” inside people’s minds using an
fMRI scanner, which detects changes in blood flow in the brain.
![](https://images.idgesg.net/images/article/2020/07/ransomware_locked_data_by_metamorworks_gettyimages-913641990_bitcoins_by_nature_gettyimages-1195279346_2400x1600-100852471-large.jpg)
Whatever backup solution you choose, copies of backups should be stored in a
different location. This means more than simply putting your backup server in a
virtual machine in the cloud. If the VM is just as accessible from an electronic
perspective as it would be if it were in the data center, it’s just as easy to
attack. You need to configure things in such a way that attacks on systems in
your data center cannot propagate to your backup systems in the cloud. This can
be done in a variety of ways, including firewall rules and a change of operating system or storage protocol. ... If your backup system is writing backups to
disk, do your best to make sure they are not accessible via a standard
file-system directory. For example, the worst possible place to put your backup
data is E:\backups. Ransomware products specifically target directories with
names like that and will encrypt your backups. This means that you need to
figure out a way to store those backups on disk in such a way that the operating
system doesn’t see those backups as files. For example, one of the most common
backup configurations is a backup server writing its backup data to a target
deduplication array that is mounted to the backup server via server message
block (SMB) or network file system (NFS). Such a mount appears to the operating system as just another directory, which makes it precisely the kind of target ransomware can find and encrypt.
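A minimal sketch of one way to get backups off OS-visible paths, using Python and boto3 against S3 Object Lock; the bucket name, key, local path, and 30-day retention window are hypothetical, and the bucket would need Object Lock enabled when it was created:

```python
import datetime

import boto3

# Hedged sketch: write a backup copy to object storage with a retention
# lock, rather than to an SMB/NFS mount the operating system sees as an
# ordinary directory. A COMPLIANCE-mode locked object cannot be
# overwritten or deleted by any user until the retention date passes.
s3 = boto3.client("s3")

with open("/var/backups/db-2021-02-15.tar.gz", "rb") as backup:  # hypothetical path
    s3.put_object(
        Bucket="example-offsite-backups",  # hypothetical bucket, Object Lock enabled
        Key="db/2021-02-15.tar.gz",
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(days=30),
    )
```

Because the backup lands behind an API rather than a file-system path, ransomware scanning for directories named “backups” never sees it.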
![](https://img2.helpnetsecurity.com/posts2021/accenture-15022021.jpg)
“The role of the CFO has further evolved beyond serving as the finance lead to
becoming a ‘digital steward’ of their organization. Increasingly, CFOs are
focused on collecting and interpreting data for key business decisions and
enabling strategy beyond the borders of the finance function,” said Christian
Campagna, Ph.D., senior managing director and global lead of the CFO &
Enterprise Value practice at Accenture. “Faced with new challenges spurred by
the pandemic, today’s CFOs must execute their organizations’ strategies at
breakthrough speeds to create breakout value and success that can be realized
across the enterprise.” The report identifies an elite group (17%) of CFOs who
have transformed their roles effectively, resulting in positive changes to their
organizations’ top-line growth and bottom-line profitability. ... Increasingly,
companies are looking to CFOs to spearhead thinking around future operating
models and drive the technology agenda forward with a focus on security and
ESG. In fact, 68% of surveyed CFOs say that finance takes ultimate
responsibility for ESG performance within their enterprise. However, 34%
specifically cited concern about data and privacy breaches as a barrier
preventing them from realizing their full potential as a driver of strategic
change.
Reproducibility is a major principle of the scientific method. It means that a
result obtained by an experiment or observational study should be achieved again
with a high degree of agreement when the study is replicated with the same
methodology by different researchers. According to a 2016 Nature survey, more
than 70% of researchers have tried and failed to reproduce another scientist's
experiments, and more than half have failed to reproduce their own experiments.
This so-called reproducibility or replication crisis has not spared artificial intelligence either. Although the writing has been on the wall for a
while, 2020 may have been a watershed moment. That was when Nature published a
damning response written by 31 scientists to a study from Google Health that had
appeared in the journal earlier. Critics argued that the Google team provided so
little information about its code and how it was tested that the study amounted
to nothing more than a promotion of proprietary tech. Unlike research in more obscure fields, AI has the public's attention and is backed and capitalized by
the likes of Google. Plus, AI's machine learning subdomain with its black box
models makes the issue especially pertinent. Hence, this incident was widely
reported on and brought reproducibility to the fore.
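None of this settles the larger questions about incentives or proprietary code, but at the level of a single experiment, part of replicating “the same methodology” is pinning every source of randomness and recording the environment alongside the results. A minimal, illustrative Python sketch (not drawn from the Google Health study or any particular paper):

```python
import json
import platform
import random

import numpy as np

# Illustrative only: pin the seeds and record the environment so that
# someone else can rerun the same "experiment" and get the same numbers.
SEED = 42
random.seed(SEED)
np.random.seed(SEED)

run_manifest = {
    "seed": SEED,
    "python": platform.python_version(),
    "numpy": np.__version__,
    # a real project would also record the git commit, a dependency
    # lockfile, and checksums of the training data
}

result = float(np.random.normal(size=1000).mean())  # stand-in for a real experiment
print(json.dumps({"result": result, **run_manifest}, indent=2))
```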
![](https://images.idgesg.net/images/article/2020/06/diversity_south-african-woman-reaching-out-to-shake-hands_make-deal_south-africa_merger-and-acquisition_collaboration_partner_by-peopleimages-gettyimages-1132055509-100850369-large.jpg)
ICMCP and Women in CyberSecurity (WiCyS) announced that they will work with
Target this spring to expand access to the National Cyber League (NCL) virtual
competition and training program for 500 women and BIPOC individuals as a way to
introduce cybersecurity and technology careers to more underrepresented
students. The competition gives participants a chance to tackle simulated
real-world scenarios as a way to sharpen their cybersecurity skills, explore
areas of career specialization, and boost their resumes. Target CISO Rich Agostino said the opportunity for his company to participate fit with its long-standing efforts to increase diversity both in its own workforce and in the technical professions at large. For example, Agostino runs a formal mentoring
program, pairing women on his team with outside executives. “I’m a huge believer
that if you want to make a difference in someone’s career, you get them
connected with the right people to build their network,” he says. Target, which
is headquartered in Minneapolis, also works with the University of Minnesota
through various programs, such as scholarships and networking opportunities, to
help increase diversity among the students and, thus, the future workforce.
At the heart of Filecoin is the concept of provable storage. Simply put, to
"prove" storage is to convince any listener that you have a unique replica of a
certain piece of data stored somewhere. It is important that the stored data be uniquely replicated; otherwise, anyone could claim to have stored a long string of zeros (or some other junk data). The completely naive proof of storage would be
to simply furnish the entirety of the stored data to someone demanding to see
the proof. This is infeasible when the size of the data grows large. The
Filecoin protocol specifies a secure cryptographic approach to proving storage.
Storage providers submit such proofs once a day, and the proofs are validated by every node on the Filecoin network. The upshot is that someone storing data with a
Filecoin storage provider does not have to worry about the data being secretly
lost or corrupted. If that happens, it will be automatically detected by the
network within a day, and the storage provider will be penalized appropriately.
The Filecoin marketplace provides a platform for storage clients and providers
to meet and negotiate storage deals.
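Filecoin’s actual constructions (Proof-of-Replication and Proof-of-Spacetime) are far more sophisticated, but the challenge-response idea underneath provable storage can be sketched in a few lines of Python. Everything below is a toy illustration, not the Filecoin protocol:

```python
import hashlib
import os

def make_challenge() -> bytes:
    # An unpredictable nonce, so the prover cannot precompute answers
    # and then discard the data.
    return os.urandom(32)

def prove(data: bytes, challenge: bytes) -> str:
    # The prover can only produce this digest if it still holds the data.
    return hashlib.sha256(challenge + data).hexdigest()

def verify(data: bytes, challenge: bytes, proof: str) -> bool:
    # The verifier recomputes the digest from its own copy of the data.
    return hashlib.sha256(challenge + data).hexdigest() == proof

data = b"some file contents" * 1000
challenge = make_challenge()
assert verify(data, challenge, prove(data, challenge))
```

The toy scheme has an obvious weakness: the verifier needs its own copy of the data (or a stock of precomputed answers), and nothing forces the replica to be unique. Real proof-of-storage schemes close both gaps with succinct cryptographic proofs, which is what lets every Filecoin node validate a provider’s daily proofs without holding the data itself.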
![](https://s27389.pcdn.co/wp-content/uploads/2021/02/improving-understanding-machine-learning-end-users-1024x440.jpeg)
Firstly, machine learning processes need to be explainable. With the vast majority of models being trained by human employees, it's vital that users know what information a model needs in order to achieve its intended goal, so that any anomaly alerts can be as accurate as possible. Samantha Humphries,
senior security specialist at Exabeam, said: “In the words of Einstein: ‘If you
can’t explain it simply, you don’t understand it well enough’. And it’s true –
vendors are often good at explaining the benefits of machine learning tangibly –
and there are many – but not the process behind it, and hence it’s often seen as
a buzzword. “Machine learning can seem scary from the outset, because ‘how does
it know?’ It knows because it’s been trained, and it’s been trained by humans.
“Under the hood, it sounds like a complicated process. But for the most part,
it’s really not. It starts with a human feeding the machine a set of specific
information in order to train it. “The machine then groups information
accordingly and anything outside of that grouping is flagged back to the human
for review. That’s machine learning made easy.” Mark K. Smith, CEO of
ContactEngine, added: “Those of us operating in an AI world need to explain
ourselves – to make it clear that all of us already experience AI and its subset
of machine learning every day.”
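Humphries’s description (train on human-supplied data, group it, flag anything outside the groups) maps directly onto clustering-based anomaly detection. A toy Python/scikit-learn sketch, with invented features and thresholds:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy version of the workflow described above: a human supplies training
# data, the machine groups it, and events far from every group are
# flagged back to the human for review. Features (login hour, MB
# transferred) and thresholds are invented for illustration.
rng = np.random.default_rng(0)
normal_logins = rng.normal(loc=[9, 200], scale=[1, 30], size=(500, 2))

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal_logins)
distances = np.min(model.transform(normal_logins), axis=1)
threshold = np.percentile(distances, 99)  # tolerate 1% of training data as borderline

def flag_for_review(event: np.ndarray) -> bool:
    # True if the event sits outside every learned grouping.
    return float(np.min(model.transform(event.reshape(1, -1)))) > threshold

print(flag_for_review(np.array([3.0, 2000.0])))  # 3 a.m., 2 GB out: flagged
```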
The areas that I chose were primarily in healthcare because the space can be
transformed using technology. If India needs to provide affordable, quality, and
accessible healthcare to 1.3 billion people, it has to be built on technology
and a new model of the healthcare system. So from areas of the ageing brain, I
also looked at the other aspects of healthcare, including preventive, curative,
and palliative care. To that end, I invested in multiple companies. I set up my
startup, Bridge Health Medical & Digital Solutions, and recently invested in
a palliative care company, Sukino Healthcare Solutions. I have also invested in
a health-tech startup called Niramai Health Analytix, besides my investments in
Neurosynaptic Communications, and Cure.fit, among others. ... The perfect
business is a predictable business: what you forecast, what you plan, you
achieve. But it is never like that [in reality] because there are so many
variables which are not under [your] control. The pandemic is an example,
unfortunately, of what can go wrong. The idea of a business is to create a
self-sustaining model. A startup should think about creating a profitable
business. As you scale up, one option is to opt for Series C and D funding
rounds and then exit by selling out to another company.
![](https://img.deusm.com/informationweek/February21/Graph_Data-DIgilife-adobe.jpg)
Graph databases are a key pillar of this new order. They provide APIs,
languages, and other tools that facilitate the modeling, querying, and writing
of graph-based data relationships. And they have been coming into enterprise
cloud architecture over the past two to three years, especially since AWS launched Neptune and Microsoft Azure launched Cosmos DB, each of which introduced graph-based data analytics to its cloud customer base.
Riding on the adoption of graph databases, graph neural networks (GNNs) are an
emerging approach that leverages statistical algorithms to process graph-shaped
data sets. Nevertheless, GNNs are not entirely new, from an R&D standpoint.
Research in this area has been ongoing since the early ‘90s, focused on
fundamental data science applications in natural language processing and other
fields with complex, recursive, branching data structures. GNNs are not to be confused with the computational graphs of tensor operations out of which ML/DL algorithms are composed. In a fascinating trend under which AI is helping
to build AI, ML/DL tools such as neural architecture search and reinforcement
learning are increasingly being used to optimize computational graphs for
deployment on edge devices and other target platforms.
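Stripped of frameworks, the core GNN operation is message passing: each node updates its representation by aggregating its neighbors’ features. A toy numpy sketch of a single such layer follows; real GNN stacks train many of these layers end to end:

```python
import numpy as np

# One toy message-passing (graph convolution) step: aggregate each
# node's neighborhood, apply a weight matrix, then a ReLU.
A = np.array([[0, 1, 1],   # adjacency matrix: node 0 links to nodes 1 and 2
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[1.0, 0.0],  # one feature vector per node
              [0.0, 1.0],
              [0.5, 0.5]])
W = np.random.default_rng(0).normal(size=(2, 2))  # the layer's learnable weights

A_hat = A + np.eye(3)                     # self-loops: a node keeps its own signal
D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # normalize by node degree
H = np.maximum(0, D_inv @ A_hat @ X @ W)  # aggregate, transform, activate

print(H)  # updated node representations after one layer
```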
Users are responsible for configurations, so your IT team needs to prioritize
mastery of the various settings and options. Cloud resources are guarded by an
array of configuration settings that detail which users can access applications
and data. Configuration errors and oversights can expose data and allow for
misuse or alteration of that data. Every cloud provider uses different
configuration options and parameters. The onus is on users to learn and
understand how the platforms that host their workloads apply these settings. IT teams can mitigate configuration mistakes in several ways:

- Adopt and enforce policies of least privilege or zero trust to block access to all cloud resources and services unless such access is required for specific business or application tasks.
- Employ cloud service policies to ensure resources are private by default (a sketch of one such check follows this list).
- Create and use clear business policies and guidelines that outline the required configuration settings for cloud resources and services.
- Be a student of the cloud provider's configuration and security settings, and consider provider-specific courses and certifications.
- Use encryption as a default to guard data at rest and in flight where possible.
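As referenced in the list above, here is a hedged Python/boto3 sketch of auditing and enforcing “private by default” for AWS S3. It assumes configured AWS credentials, and a production version would remediate through change control rather than directly from a script:

```python
import boto3
from botocore.exceptions import ClientError

# Audit every bucket in the account and enforce the S3 public-access
# block, i.e. the "private by default" posture described above.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.values())
    except ClientError:
        fully_blocked = False  # no public-access block configured at all
    if not fully_blocked:
        print(f"WARNING: {name} does not fully block public access")
        s3.put_public_access_block(  # turn on all four block settings
            Bucket=name,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
```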
Quote for the day:
"Leadership is the creation of an
environment in which others are able to self-actualize in the process of
completing the job." -- John Mellecker