Robots Developing The Unique Sixth Sense
When it comes to smell and taste, robots with chemical sensors could be far more
precise than humans, but building in proprioception, a robot's awareness of its
own body, is far more challenging and is a big reason why humanoid robots are so
tough to get right. Tiny discrepancies in that self-awareness can make a big
difference in human-robot interaction, wearable robotics, and sensitive applications like
surgery. In the case of hard robotics, this is usually solved by putting a
number of strain and pressure sensors in each joint, which allow the robot to
figure out where its limbs are. This is fine for rigid robots with a limited
number of joints, but it is insufficient for softer, more flexible robots.
Roboticists have had to choose between fitting a large, complicated array of
sensors to every degree of freedom of a robot's motion and accepting limited
proprioceptive capability. This challenge is being addressed with new solutions, which often
involve new arrays of sensory material and machine-learning algorithms to fill
in the gaps. A recent study in Science Robotics discusses the use of soft
sensors spread at random through a robotic finger.
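To make the idea concrete, here is a minimal, purely illustrative sketch of that general approach rather than the study's actual method: synthetic readings from randomly placed soft sensors are fed to a small regression model that learns to predict fingertip position. The sensor count, data, and model choice below are all assumptions.

```python
# Hypothetical sketch: learning proprioception from randomly placed soft sensors.
# The sensor layout, synthetic data, and model are illustrative assumptions,
# not the pipeline used in the Science Robotics study.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_sensors = 2000, 32                  # 32 soft sensors scattered through the finger
true_mapping = rng.normal(size=(n_sensors, 3))   # unknown relation to (x, y, z) fingertip pose

# Simulated sensor readings and the fingertip positions they correspond to,
# with noise standing in for the messiness of real soft-sensor signals.
readings = rng.normal(size=(n_samples, n_sensors))
positions = readings @ true_mapping + 0.05 * rng.normal(size=(n_samples, 3))

X_train, X_test, y_train, y_test = train_test_split(readings, positions, random_state=0)

# A small neural network "fills in the gaps" between raw signals and body state.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.3f}")
```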
The Rise of Enterprise Data Inflation
Data inflation ensues when spending on data rises without deriving proportional
enterprise value from that spending. Surprisingly, digital transformation and
application modernization have created fertile ground for data inflation to run
rampant. When enterprises refactor applications without carefully managing their
ever-expanding datasets, they experience data sprawl. Moving to the cloud to
deliver more capability and broader usage can also inadvertently lead to data
inflation. Often,
a dataset is helpful across multiple areas of a business. Different development
groups or people with unrelated objectives might make numerous copies of the
same data. They often change a dataset’s taxonomy or ontology for their software
or business processes, making it harder for others to identify it as a
duplicate. This occurs because the average data scientist trying to home in on a
particular data insight has different priorities than the data engineers
responsible for pipelining that data and creating new features. And the typical
IT person has little visibility into the use of the data at all. The result is
that the enterprise pays for many extra copies without getting any new value – a
core driver of data inflation.
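As a rough illustration of why such copies are hard to spot, the hypothetical sketch below fingerprints datasets by the contents of their columns rather than their names, so a copy whose taxonomy has been relabelled still matches. The function and sample data are illustrative, not a production deduplication tool.

```python
# Hypothetical sketch: flagging likely duplicate datasets even after columns are renamed.
# Fingerprint column *contents* rather than names, so a copied dataset whose
# taxonomy was changed still hashes to the same signature. Illustrative only.
import hashlib
import pandas as pd

def dataset_fingerprint(df: pd.DataFrame) -> str:
    """Hash each column's sorted values, then hash the sorted set of column hashes."""
    column_hashes = sorted(
        hashlib.sha256(
            pd.util.hash_pandas_object(
                df[col].sort_values(ignore_index=True), index=False
            ).values.tobytes()
        ).hexdigest()
        for col in df.columns
    )
    return hashlib.sha256("".join(column_hashes).encode()).hexdigest()

original = pd.DataFrame({"customer_id": [1, 2, 3], "spend": [9.5, 3.2, 7.1]})
renamed_copy = original.rename(columns={"customer_id": "acct", "spend": "revenue"})

# Same fingerprint despite the relabelled taxonomy -> a candidate duplicate.
print(dataset_fingerprint(original) == dataset_fingerprint(renamed_copy))  # True
```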
Will Apple build its own blockchain?
One thing that is pretty clear is that if Apple creates a specific carve-out
for NFTs in its own App Store rules, it's going to be on its own terms. Apple
could take a number of different paths; I could see a world where it would
only allow certain assets on certain blockchains or even build out its own
blockchain. But Apple’s path toward controlling the user experience will most
likely rely on Apple taking a direct hand in crafting its own smart
contracts for NFTs, which developers might be forced to use in order to stay
compliant with App Store rules. This could easily be justified as an effort to
ensure that consumers have a consistent experience and can trust NFT platforms
on the App Store. These smart contracts could send Apple royalties
automatically and lead to a new in-app payment fee pipeline, one that could
even persist in transactions that took place outside of the Apple
ecosystem(!). More complex functionality could be baked in as well, allowing
Apple to handle workflows like reversing transactions. Needless to say, any of
these moves would be highly controversial among existing developers.
A Microservice Overdose: When Engineering Trends Meet the Startup Reality
Microservices are not the only big engineering trend that is happening right
now. Another big trend that naturally comes together with microservices is
using a multi-repo version control approach. The multi-repo strategy enables
the microservice team to maintain a separate and isolated repository for each
responsibility area. As a result, one group may own a codebase end to end,
developing and deploying features autonomously. Multi-repo seems like a great
idea, until you realize that code duplication and configuration duplication
are still not solved. Apart from the code duplication that we already
discussed, there is a whole new area of repository configurations – access,
permissions, branch protection, and so on. Such duplications are expected with
a multi-repo strategy because multi-repo encourages a segmented culture. Each
team does its own thing, making it challenging to prevent groups from solving
the same problem repeatedly. In theory, a better alternative could be the
mono-repo approach, in which all services and their codebases are
kept in a single repository. But in practice, mono-repo is fantastic if you’re
Google / Twitter / Facebook. Otherwise, it doesn’t scale very well.
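One way teams try to tame that configuration duplication is to script a shared policy across every repository. Below is a hedged sketch that pushes a single branch-protection policy to a list of hypothetical repos via GitHub's branch-protection REST endpoint; the org name, repo names, and policy values are placeholders, not a recommendation.

```python
# Hypothetical sketch: applying one shared branch-protection policy to many repos,
# instead of hand-maintaining duplicated settings across a multi-repo setup.
# ORG, REPOS, and the policy values are placeholders.
import os
import requests

ORG = "example-org"                      # placeholder organization
REPOS = ["billing-svc", "orders-svc"]    # placeholder service repositories
TOKEN = os.environ["GITHUB_TOKEN"]

POLICY = {
    "required_status_checks": {"strict": True, "contexts": ["ci/build"]},
    "enforce_admins": True,
    "required_pull_request_reviews": {"required_approving_review_count": 2},
    "restrictions": None,
}

for repo in REPOS:
    # GitHub "update branch protection" endpoint for the main branch of each repo.
    resp = requests.put(
        f"https://api.github.com/repos/{ORG}/{repo}/branches/main/protection",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json=POLICY,
        timeout=30,
    )
    print(repo, resp.status_code)
```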
Talking Ethical AI with SuperBot’s Sarvagya Mishra
AI is the most transformative technology of our era. But it brings to the fore
some fundamental issues as well. One, a rapidly expanding and pervasive
technology powered by mass data may bring about revolutionary change in
society; two, the nature of AI is to process voluminous raw information that
can be used to automate decisions at scale; three, all of this is happening
while the technology is still at a nascent stage. If we think about it, AI
is a technology that can impact our lives in multiple ways – from being the
backbone of devices that we use to how our economies function and even how we
live. AI algorithms are already deployed across every major industry for every
major use case. Since AI algorithms are essentially sets of rules that can be
used to make decisions and operate devices, they could make judgement calls
that harm an individual or a larger population. For instance, consider the AI
algorithm for a self-driving car. It’s trained to be cautious and follow
traffic rules, but what happens if it suddenly decides that breaking the rules
is more beneficial? It could lead to a lot of accidents.
Data Science: How to Shift Toward More Transparency in Statistical Practice
A common misconception about statistics is that it can give us certainty.
However, statistics only describe what is probable. Transparency is best
achieved by conveying the level of uncertainty: quantifying the uncertainty in
research inferences builds a greater degree of trust. Some
researchers have done studies of articles in physiology, the social sciences,
and medicine. Their findings demonstrated that error bars, standard errors,
and confidence intervals were not always presented in the research. In some
cases, omitting these measures of uncertainty can have a dramatic impact on
how the information is interpreted. Areas such as health care have stringent
database compliance requirements to protect patient data. Patients could be
further protected by including these measures, and researchers can convey
their methodology and give readers insights into how to interpret their
data. When it comes to assessing data preprocessing choices, data scientists
are often confronted with massive amounts of unorganized data.
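For readers wondering what reporting the uncertainty measures mentioned above looks like in practice, here is a minimal sketch, using simulated data, that computes a standard error and a 95% confidence interval for a sample mean.

```python
# Minimal sketch: reporting a standard error and 95% confidence interval for a
# sample mean, the kind of uncertainty measure the surveyed articles often omitted.
# The data here are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=120.0, scale=15.0, size=40)   # e.g. 40 simulated measurements

mean = sample.mean()
sem = stats.sem(sample)                               # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.1f}, SE = {sem:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
```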
DAO regulation in Australia: Issues and solutions, Part 2
So, the role of the government is to introduce regulations and standards, to
make sure that people understand that when they publish a record — say, on
Ethereum — it will become immutable and protected by thousands of running
nodes all around the globe. If you publish it on some private distributed
ledger network controlled by a cartel, you basically need to rely on its
goodwill. The conclusion for this part of the discussion is the following.
With blockchain, you don’t need any external registry database, as blockchain
is the registry, and there is no need for the government to maintain this
infrastructure, as the blockchain network is self-sustainable. Users can
publish and manage records on a blockchain without a registrar, and there must
be standards that allow us to distinguish reliable blockchain systems. ... The
difference is that this must be designed as a standard requirement for the
development of a compliant DAO. Those who desire to work under the Australian
jurisdiction must develop the code of their decentralized applications and
smart contracts in compliance with these standards.
Data Governance Adoption: Bob Seiner on How to Empower Your People to Participate
When you consider the ADKAR model for change, any program adoption requires
personal activation. “You need to find a way to make that connection with
people,” Bob says. “ADKAR relies on personal traits and things that people
need to adjust to and adopt to further the way they’re able to govern and
steward data in their organization. Make it personable, make it reasonable,
and help them understand they play a big role in data governance.” But even
the most energized workforce can’t participate in active data governance
without the right tools — your drivers won’t win their race without cars,
after all. Like most large organizations, Fifth Third has a very divided data
platform ecosystem, with several dozen tools employing both old and new
technology. But as their vice president of enterprise data, Greg Swygart,
notes, where data consumption starts and ends — curation and interaction —
“the first step in the data marketplace is always Alation.” “Implementing an
effective data governance program really requires getting people involved,”
Bob concludes.
AI Regulatory Updates From Around the World
Under the proposed 'Artificial Intelligence Act,' all AI systems in the EU
would be categorized in terms of their risk to citizens' privacy, livelihoods,
and rights. 'Unacceptable risk' covers systems that are deemed to be a "clear
threat to the safety, livelihoods, and rights of people." Any product or
system which falls under this category will be banned. This category includes
AI systems or applications that manipulate human behavior to circumvent users'
free will and systems that allow 'social scoring' by governments. The next
category, 'High-risk,' includes systems for critical infrastructure which
could put life or health at risk, systems for law enforcement that may
interfere with people's fundamental rights, and systems for migration,
asylum-seeking, and border control management, such as verification of the
authenticity of travel documents. AI systems deemed to be high-risk will be
subject to “strict obligations” before they can be put on the market,
including risk assessments, high quality of the datasets, 'appropriate' human
oversight measures, and high levels of security.
SEC Breach Disclosure Rule Makes CISOs Assess Damage Sooner
The central question facing CISOs who've experienced a security incident will
be around how materiality is determined. The easiest way to assess whether an
incident is material is by looking at the impact to sales as a percentage of
the company's overall revenue or by tracking how many days a company's systems
or operations are down as the result of a ransomware attack, Borgia says. But
the SEC has pressured companies to consider qualitative factors such as
reputation and the centrality of a breach to the business, he says. For
instance, Pearson paid the SEC $1 million to settle charges that it misled
investors about a breach involving millions of student records. Though the
breach might not have been financially material, he says it put into doubt
Pearson's ability to keep student data safe. The impact of the proposed rule
will largely come down to how much leeway the SEC provides breach victims in
determining whether an incident is material. If the SEC goes after businesses
for initially classifying an incident as immaterial and then changing their
minds weeks or months later when new facts emerge, he says, companies will
start putting out vague and generic disclosures that aren't helpful.
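As a purely illustrative aside, a quantitative screen along the lines Borgia describes might look like the sketch below; the 5% revenue and 7-day downtime thresholds are invented placeholders, and the qualitative factors the SEC points to, such as reputation, would still require human judgment.

```python
# Hypothetical quantitative screen for incident materiality, illustrating the
# revenue-impact and downtime heuristics described above. The 5% revenue and
# 7-day thresholds are made-up placeholders, not SEC guidance.
def quantitatively_material(lost_sales: float, annual_revenue: float,
                            downtime_days: int,
                            revenue_threshold: float = 0.05,
                            downtime_threshold_days: int = 7) -> bool:
    revenue_impact = lost_sales / annual_revenue
    return revenue_impact >= revenue_threshold or downtime_days >= downtime_threshold_days

# Example: $12M in lost sales against $1B revenue (1.2%) but 10 days of downtime.
print(quantitatively_material(12_000_000, 1_000_000_000, downtime_days=10))  # True
```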
Quote for the day:
"Give whatever you are doing and
whoever you are with the gift of your attention." -- Jim Rohn