10 Key AI & Data Analytics Trends for 2022 and Beyond
Whilst most research is understandably focused on pushing the boundaries of
complexity, the reality is that training and running complex models can have a
big impact on the environment. It’s predicted that data centres will represent
15% of global CO2 emissions by 2040, and a 2019 research paper, “Energy
considerations for Deep Learning,” found that training a natural language
translation model emitted as much CO2 as four family cars over their lifetimes.
Clearly, the more training, the more CO2 is released. With a greater
understanding of environmental impact, organisations are exploring ways to
reduce their carbon footprint. Whilst we can now use AI to make data centres
more efficient, the world should expect to see more interest in simple models
that perform as well as complex ones for solving specific problems.
Realistically, why should we use a 10-layer convolutional neural network when a
simple Bayesian model performs equally well while using significantly less data,
training time, and compute power? “Model efficiency” will become a byword for
environmental AI, as creators focus on building simple, efficient, and usable
models that don't cost the earth.
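To make the efficiency point concrete, here is a minimal sketch (not from the article) of how cheaply a simple probabilistic classifier can be trained with scikit-learn; the toy data and labels are invented purely for illustration.

    # Illustrative sketch only: a small Bayesian text classifier trained on CPU.
    # The data below is made up; the point is the tiny training cost, not accuracy.
    import time

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["invoice overdue payment", "meeting agenda for tuesday",
             "your payment is pending", "lunch plans this week"]
    labels = [1, 0, 1, 0]  # 1 = billing-related, 0 = other (toy labels)

    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(texts)

    start = time.perf_counter()
    model = MultinomialNB().fit(features, labels)
    elapsed = time.perf_counter() - start

    print(f"Trained in {elapsed * 1000:.2f} ms "
          f"with {features.shape[1]} learned feature weights per class.")
    print(model.predict(vectorizer.transform(["payment reminder"])))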
“Digital Twin” with Python: A hands-on example
IBM defines a digital twin as follows: “A digital twin is a virtual model
designed to accurately reflect a physical object.” They go on to describe how
the main enabling factors for creating a digital twin are the sensors that
gather data and the processing system that inserts the data in some particular
format/model into the digital copy of the object. Further, IBM says “Once
informed with such data, the virtual model can be used to run simulations, study
performance issues and generate possible improvements.” ... So, how do we use
our favorite language, Python, to create a digital twin? Why do we even think it
will work? The answer is deceptively simple. Just look at the figure above and
then at the one below to see the equivalency between a Digital Twin model and a
classic Python object. We can emulate the sensors and data processors with
suitable methods/functions, store the gathered data in a database or internal
variables, and encapsulate everything into a Python class.
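A minimal sketch of that idea follows, assuming a hypothetical pump as the physical asset: a method emulates a sensor, an internal list stores the gathered readings, and a simulation method uses them. The class and method names are illustrative, not taken from the article.

    # Minimal digital-twin sketch: a sensor-emulating method gathers data,
    # an internal store holds it, and a simulation method studies performance.
    # The random "sensor" and all names here are illustrative assumptions.
    import random
    import statistics
    from datetime import datetime, timezone

    class PumpDigitalTwin:
        """Virtual model of a hypothetical physical pump."""

        def __init__(self, asset_id):
            self.asset_id = asset_id
            self.readings = []  # internal store for gathered sensor data

        def read_temperature_sensor(self):
            """Emulate a sensor; a real twin would poll hardware or an IoT feed."""
            reading = {
                "timestamp": datetime.now(timezone.utc),
                "temperature_c": random.gauss(60.0, 2.0),
            }
            self.readings.append(reading)
            return reading

        def run_simulation(self, overheat_threshold_c=70.0):
            """Use the gathered data to study performance and flag possible issues."""
            temps = [r["temperature_c"] for r in self.readings]
            if not temps:
                return "no data gathered yet"
            mean_temp = statistics.mean(temps)
            status = "overheating risk" if max(temps) > overheat_threshold_c else "nominal"
            return f"{self.asset_id}: mean {mean_temp:.1f} C, status: {status}"

    twin = PumpDigitalTwin("pump-001")
    for _ in range(10):
        twin.read_temperature_sensor()
    print(twin.run_simulation())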
Patterns for Authorization in Microservices
When you have a monolith, you generally only need to talk to one database to
decide whether a user is allowed to do something. An authorization policy in a
monolith doesn't need to concern itself too much with where to find the data
(such as user roles) — you can assume all of it is available, and if any more
data needs to be loaded, it can be easily pulled in from the monolith's
database. But the problem gets harder with distributed architectures. Perhaps
you're splitting your monolith into microservices, or you're developing a new
compute-heavy service that needs to check user permissions before it runs jobs.
Now, the data that determines who can do what might not be so easy to come by.
You need new APIs so that your services can talk to each other about
permissions: "Who's an admin on this organization? Who can edit this document?
Which documents can they edit?" To make a decision in service A, we need data
from service B. How does a developer of service A ask for that data? How does a
developer of service B make that data available?
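As a rough sketch of what such an API conversation might look like, the snippet below shows service A calling a hypothetical roles endpoint on service B before applying a local policy rule; the service URL, endpoint path, and role names are assumptions for illustration.

    # Sketch of service A asking service B for authorization data over HTTP.
    # The service URL, endpoint path, and role names are hypothetical.
    import requests

    AUTHZ_SERVICE_URL = "http://authz-service.internal"  # assumed internal address

    def can_edit_document(user_id: str, document_id: str) -> bool:
        """Service A's check: fetch the user's roles on the document from
        service B, then apply a local policy rule."""
        resp = requests.get(
            f"{AUTHZ_SERVICE_URL}/documents/{document_id}/roles",
            params={"user_id": user_id},
            timeout=2,
        )
        resp.raise_for_status()
        roles = set(resp.json().get("roles", []))
        # Local policy: admins and editors may edit.
        return bool(roles & {"admin", "editor"})

    # Usage inside service A, before running a compute-heavy job:
    # if not can_edit_document(user_id, document_id):
    #     raise PermissionError("user may not edit this document")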
How the use of AI and advanced technology is revolutionizing healthcare
In the payments realm, Mastercard® Healthcare Solutions optimizes the workflow
for payers and providers by automating repetitive and error-prone operations,
such as billing and claims processing. According to CIO magazine, many hospitals
are now using AI to automate mundane tasks, reduce workloads, eliminate errors
and speed up the revenue cycle. The author notes AI’s effectiveness for
reducing incorrect payments for erroneous billings, and for avoiding the
labor-intensive process of pulling files, resubmitting claims to payers, and
eventually negotiating payments. ... The successful use of AI for fraud, waste,
and abuse (FWA) prevention is increasing in popularity. A recent study by
PYMNTS revealed that approximately
12 percent of the 100 sector executives surveyed use AI in healthcare payments,
three times the number using AI in 2019. Nearly three-quarters of the 100 execs
plan to implement AI by 2023. ... These are all important factors when building
an AI model and show the need to demonstrate return on investment (ROI) through
a proof of concept.
IBM Brings AI to Monitor Petabytes of Network Traffic
“As we surround applications with our capabilities, we will understand the
traffic flow and the performance and what’s normal,” Coward says. “The longer
you run the AI within the network, the more you know about what typically
happens on a Tuesday afternoon in Seattle.” A key aspect of SevOne is the
ability to take raw network performance data from sources such as SNMP traps,
logs in Syslog format, and even packets captured from network taps, combine it
in a database, and then generate actionable insights from that blended data.
“The uniqueness of SevOne is really that we put it into a time-series database.
So we understand for all those different events, how are they captured [and] we
can correlate them,” Coward explains. “That sounds like an extraordinarily simple
thing to do. When you’re trying to do that at scale across a wide network where
you literally have petabytes of data being created, it creates its own
challenge.” The insights generated from SevOne can take the form of dashboards
that anyone can view to see if there’s a network problem, thereby eliminating
the need to call IT.
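The blending idea can be illustrated with a small sketch (this is not SevOne's implementation): events from different sources are normalised into one timestamped shape and bucketed per device and minute so they can be correlated.

    # Illustration only: normalise events from different sources into one
    # timestamped shape and bucket them per device and minute so that SNMP,
    # syslog, and packet-derived records can be correlated.
    from collections import defaultdict
    from datetime import datetime

    def normalise(source, timestamp, device, metric, value):
        return {"source": source, "ts": timestamp, "device": device,
                "metric": metric, "value": value}

    events = [
        normalise("snmp_trap", datetime(2022, 1, 4, 14, 3, 12), "rtr-sea-01", "link_down", 1),
        normalise("syslog",    datetime(2022, 1, 4, 14, 3, 40), "rtr-sea-01", "bgp_flap", 1),
        normalise("pcap",      datetime(2022, 1, 4, 14, 3, 55), "rtr-sea-01", "retransmits", 412),
    ]

    # Time-series style bucketing: key by (device, minute) so co-occurring
    # events from different sources land in the same bucket.
    buckets = defaultdict(list)
    for event in events:
        key = (event["device"], event["ts"].replace(second=0, microsecond=0))
        buckets[key].append(event)

    for (device, minute), bucket in buckets.items():
        sources = sorted({e["source"] for e in bucket})
        print(f"{minute} {device}: correlated events from {sources}")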
Ethics in Tech: 5 Actions to Lead, Sustain Growth
The rapid deployment of AI into societal decision-making—in areas such as health
care recommendations, hiring decisions, and autonomous driving—has catalyzed
ongoing ethics discussions regarding trustworthy AI. These considerations are
still in their early stages, and further issues could arise as technology moves
beyond AI. Focus is
intensifying on the importance of deploying AI-powered systems that benefit
society without sparking unintended consequences with respect to bias, fairness,
or transparency. Technology is increasingly a focal point in discussions about
efforts to deceive using disinformation, misinformation, deepfakes, and other
misuses of data to attack or manipulate people. Some tech companies are asking
governments to pass regulations clearly outlining responsibilities and
standards, and many organizations are cooperating with law enforcement and
intelligence agencies to promote vigilance and action. ... Many technology
organizations are facing demands from stakeholders to do more than required by
law to adopt sustainable measures such as promoting more efficient energy use
and supply chains, reducing manufacturing waste, and decreasing water use in
semiconductor fabrication.
The 5 Characteristics of a Successful Data Scientist
Everything is connected in some way, well beyond the obvious, which leads to
layer upon layer of real world complexity. Complex systems interact with other
complex systems to produce additional complex systems of their own, and so goes
the universe. This game of complexity goes beyond just recognizing the big
picture: where does this big picture fit into the bigger picture, and so on? But
this isn't just philosophical. Data scientists recognize this infinite
real-world web of complexity, and they want to know as much as possible about
relevant interactions, latent or otherwise, as they work through their problems.
They look for situation-dependent known knowns, known unknowns, and unknown
unknowns, understanding that any given change could have unintended consequences
elsewhere. It is the data scientist's job to know as much about their relevant
systems as possible, and leverage their curiosity and predictive analytical
mindset to account for as much of these systems' operations and interactions as
feasible, in order to keep them running smoothly even when being
tweaked.
PYMNTS DeFi Series: Unpacking DeFi and DAO
As with any public blockchain, the open-source code is viewable by the public.
Since there is no human being in control, users can be certain the code will
execute according to the rules it contains. As the industry saying goes, “code
is law.” DAOs are controlled by a type of cryptocurrency called governance
tokens, and these give token holders a vote on the project. The investment is
based on the idea that as the platform attracts more users and more funds are
deposited into its lending pools, the total value locked (TVL) increases and its
tokens become more valuable. Aave has nearly $14 billion in TVL, but the
AAVE token is not loaned out. The Aave protocol’s voters have allowed lenders to
lock 30 different cryptocurrencies, each of which has interest rates for lenders
and borrowers set by the smart contract rules. Different protocols have
different voting rules, but almost all come down to this: Token holders can
propose a rule change. If it gets enough support, a vote is scheduled; if enough
voters back it, the proposal passes and the code is updated, changing the
protocol’s rules.
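That flow can be sketched as a toy token-weighted state machine; the thresholds, quorum, and names below are illustrative and do not reflect any particular protocol's rules.

    # Toy sketch of the token-weighted governance flow described above.
    # Thresholds and names are illustrative; real protocols differ.
    class Proposal:
        def __init__(self, description, proposal_threshold=100, quorum=1000):
            self.description = description
            self.proposal_threshold = proposal_threshold  # backing needed to schedule a vote
            self.quorum = quorum                          # tokens that must vote for a valid result
            self.support = 0
            self.votes_for = 0
            self.votes_against = 0
            self.state = "proposed"

        def back(self, tokens):
            self.support += tokens
            if self.state == "proposed" and self.support >= self.proposal_threshold:
                self.state = "vote_scheduled"

        def vote(self, tokens, in_favour):
            if self.state != "vote_scheduled":
                raise RuntimeError("voting is not open")
            if in_favour:
                self.votes_for += tokens
            else:
                self.votes_against += tokens

        def tally(self):
            total = self.votes_for + self.votes_against
            if total >= self.quorum and self.votes_for > self.votes_against:
                self.state = "passed"   # in practice, the protocol's code is then updated
            else:
                self.state = "rejected"
            return self.state

    p = Proposal("Add a new collateral asset")
    p.back(150)                    # enough token backing: vote is scheduled
    p.vote(800, in_favour=True)
    p.vote(400, in_favour=False)
    print(p.tally())               # "passed" under these toy thresholds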
How Does Blockchain Help with Digital Identity?
It is well understood that blockchain-based digital identity management is
robust and encrypted, ensuring both security and portability. This makes a
strong case for incorporating it effectively to improve the socio-economic
well-being of users, which is closely tied to digital identity. With
time and advanced technologies, digital identity has become an essential entity
that enables users to have various rights and privileges. Although blockchain
has various benefits for managing digital identities, it cannot be considered
a panacea. Blockchain technology is still developing, and though it offers
multiple benefits, various challenges remain for anyone aiming to completely
replace traditional identity management methods with it.
Some of the known challenges include the constantly developing technology and
the lack of standardization of data exchange. Considering the benefits that come
with transparency and the trust earned through blockchain frameworks, numerous
organizations are coming together to ensure interoperability across their boundaries.
Why 2022 Will Be About Databases, Data Mesh, and Open Source Communities
Data lakes will remain dominant as essential enablers of analytics and
data visibility; 2022 will see rapid expansion of a thriving ecosystem around
data lakes, driven by enterprises seeking greater data integration. As
organizations work out how to introduce data from third-party systems and
real-time transactional production workloads into their data lakes, technologies
such as Apache Kafka and Pulsar will take on those workloads and grow in
adoption. Beyond introducing data to enable BI reporting and analytics,
technologies such as Debezium and Kafka Connect will also enable data lake
connectivity, powering services that require active data awareness. Expect that
approaches leveraging an enterprise message bus will become increasingly common
as well. Organizations in a position to benefit from the rise of integration
solutions should certainly move on these opportunities in 2022. Related to this
trend (and to Trend #1 as well): the emerging concept of a data mesh will really
come into its own in 2022.
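As a rough sketch of that kind of connectivity, the snippet below assumes a Debezium connector is already publishing change events to a Kafka topic and drains them into a data-lake style folder using kafka-python; the topic name, brokers, and output path are placeholders.

    # Sketch of feeding change-data-capture events into a data lake, assuming a
    # Debezium connector already publishes to the Kafka topic named below.
    # Topic name, brokers, and the output path are illustrative assumptions.
    import json
    from pathlib import Path

    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "dbserver1.inventory.orders",              # hypothetical Debezium CDC topic
        bootstrap_servers=["kafka:9092"],
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")) if raw else None,
        auto_offset_reset="earliest",
    )

    lake_path = Path("/data-lake/raw/orders")      # placeholder for object storage
    lake_path.mkdir(parents=True, exist_ok=True)

    for message in consumer:
        if message.value is None:                  # skip tombstone records
            continue
        # Append each change event as a JSON line, split by Kafka partition.
        out_file = lake_path / f"partition-{message.partition}.jsonl"
        with out_file.open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(message.value) + "\n")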
Quote for the day:
"The greatest leader is not necessarily
the one who does the greatest things. He is the one that gets the people to do
the greatest things." -- Ronald Reagan