5 Cloud Native Trends to Watch out for in 2022
With the emphasis on cybersecurity, I expect to see open source projects and
commercial offerings squarely focused on cloud native security. Two areas will
get attention: the software supply chain and eBPF. The software supply chain
closely mimics the supply chain of real-world commerce, where resources are
consumed and transformed through a series of steps and processes, and finally
supplied to the customer. Modern software development is about assembling and
integrating various components available in the public domain as open source
projects. In the complex supply chain of software, a compromised piece of
software can cause severe damage to multiple deployments. Recent incidents
involving CodeCov, Solarwinds, Kaseya, and the ua-parser-js NPM package
highlight the need to secure the software supply chain. In 2022, there will be
new initiatives, projects, and even new startups focusing on secure software
supply chain management. The other exciting trend is eBPF, which enables cloud
native developers to build secure networking, service mesh, and observability
components.
Second-generation AI-powered digital pills are changing the future of healthcare
Many chronic diseases move along a dynamic trajectory that creates a challenge
of unpredictable progression. This is often disregarded by first-generation AI
as it requires constant adaptation of therapeutic regimens. Also, many therapies
may not show loss of response for several months. The second-generation AI
systems are designed to improve response to therapies and facilitate analysing
inter-subject and intra-subject variabilities in response to therapies over
time. Most first-generation AI systems extract data from large databases and
artificially impose a rigid “one for all” algorithm on all subjects. Attempts to
constantly amend treatment regimens based on big data analysis might be
irrelevant for an individual patient. Imposing a “close to optimal” fit on all
subjects does not resolve difficulties associated with dynamicity and the
inherent variability of biological systems. The second-generation AI systems
focus on a single patient as the epicentre of the algorithm and adapt their
output in a timely manner.
How Walmart Canada Uses Blockchain to Solve Supply-Chain Challenges
A public blockchain network — one that anyone can join without asking for
permission — allows unlimited viewing of information stored on it, eliminates
intermediaries, and operates independently of any governing party. It is
well-suited for digital consumer offerings (like NFTs), cryptocurrencies, and
certifying information such as individuals’ degrees or certificates. But
private networks — those that require a party to be granted permission to join
it — are often far better suited for businesses because access is restricted
to verified members and only parties directly working together can see the
specific information they exchange. This better satisfies industrial-grade
security requirements. For these reasons, Walmart decided to go with a private
network built on Hyperledger Fabric, an open-source platform. ... For
Walmart and its carriers, this meant working with each carrier’s unique data
(vendor name, payment terms, contract duration, and general terms and
conditions), which is combined with governing master tables of information
such as fuel rates and tax rates. The parties should then jointly agree to the
formulas that the blockchain will use to calculate each invoice.
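To make that last point concrete, here is a minimal Python sketch of the kind of jointly agreed invoice formula the excerpt describes. In a Fabric network this logic would typically live in chaincode; the field names below (distance_km, base_rate, fuel_surcharge_pct, tax_rate) are illustrative assumptions, not Walmart's actual data model.

```python
# Hypothetical sketch of an agreed invoice formula combining a carrier's
# shipment data with governing master tables (fuel rates, tax rates).
# Field names are illustrative assumptions only.

def calculate_invoice(shipment: dict, master: dict) -> dict:
    """Compute one carrier invoice from shipment data plus master tables."""
    freight = shipment["distance_km"] * shipment["base_rate"]      # agreed per-km rate
    fuel = freight * master["fuel_surcharge_pct"]                  # from the fuel master table
    subtotal = freight + fuel + shipment.get("accessorial_charges", 0)
    tax = subtotal * master["tax_rate"]                            # from the tax master table
    return {"freight": freight, "fuel": fuel, "tax": tax, "total": subtotal + tax}

invoice = calculate_invoice(
    {"distance_km": 420, "base_rate": 1.85, "accessorial_charges": 35.0},
    {"fuel_surcharge_pct": 0.12, "tax_rate": 0.05},
)
print(invoice)
```

Because every party runs the same agreed formula against the same shared data, there is no separate reconciliation step for each invoice.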
Property Graphs vs. Knowledge Graphs
A property graph uses nodes, relationships, labels, and “properties.” Both the
relationships and the nodes they connect are named and can store properties.
Nodes can be labeled to mark them as members of a group. Property graphs use
“directed edges”: each relationship has a start node and an end node.
Relationships can also be assigned properties, which is useful for attaching
additional metadata to the relationships between nodes. ... Knowledge graphs
are very useful in working with a data fabric. The
semantics feature (and the use of graphs) supports discovery layers and data
orchestration in a data fabric. Combining the two makes the data fabric easier
to build out incrementally and more flexible, which lowers risk and speeds up
deployment. The process allows an organization to develop the fabric in
stages. It can start with a single domain or a high-value use case and then
expand incrementally with more data, users, and use cases. A data
fabric architecture, combined with a knowledge graph, supports useful
capabilities in many key areas.
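For readers new to the terminology, here is a minimal Python sketch of the property-graph structures described above: labeled nodes, directed relationships with a start node and an end node, and properties on both. The labels and property names are invented for illustration.

```python
# Minimal, hypothetical property-graph structure: labeled nodes, directed
# relationships with a start node and an end node, and properties on both.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    labels: set                                        # e.g. {"Person"}: labels group nodes
    properties: dict = field(default_factory=dict)

@dataclass
class Relationship:
    type: str                                          # the relationship is named, e.g. "WORKS_FOR"
    start: Node                                        # directed edge: start node ...
    end: Node                                          # ... and end node
    properties: dict = field(default_factory=dict)     # metadata attached to the edge itself

alice = Node("n1", {"Person"}, {"name": "Alice"})
acme = Node("n2", {"Company"}, {"name": "Acme"})
works = Relationship("WORKS_FOR", alice, acme, {"since": 2019})

print(works.start.properties["name"], works.type, works.end.properties["name"], works.properties)
```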
Executive Q&A: Getting the Most from Unstructured Data
The world of ten years ago was dominated by structured data. After 2012,
though, sensors became cheaper, cell phones gradually became smartphones, and
cameras became ubiquitous, making it easier to capture images and video. With
this, a large amount of unstructured data was generated, and enterprises
entered uncharted territory, where progress has been slow. Some of the
inhibitors to progress in this area include: Complexity: Unlike structured
data, which can be analyzed directly, unstructured data needs to be further
processed before it can be analyzed, usually with the help of artificial
intelligence. Machine learning algorithms classify and label its content.
However, it is not easy to identify high-quality data within the data set,
given the volume and complexity of unstructured data -- this has been painful for developer teams
and a key challenge to data architectures that are already complex. Cost:
Although the enterprise recognizes the value of unstructured data, cost can be
an obstacle to making use of it. The cost of enterprise
infrastructure, human resources, and time can hinder the implementation and
development of AI and the data it analyzes.
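As a concrete illustration of the "classify and label" step mentioned under Complexity, here is a small, hypothetical scikit-learn pipeline that turns unstructured text into labeled records. The documents and labels are made up; a real pipeline would add evaluation and data-quality checks.

```python
# Hypothetical sketch of classifying and labeling unstructured text so it
# becomes queryable, structured data. Documents and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "invoice overdue payment reminder",
    "server outage incident report",
    "quarterly payment received thank you",
    "database latency incident postmortem",
]
labels = ["finance", "ops", "finance", "ops"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)                                # learn labels from raw text
print(model.predict(["unpaid invoice escalation"]))    # -> ['finance']
```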
Google Cloud Attacks Supply Chain Crisis with Digital Twin
“It’s a digital representation of the physical supply chain,” said Hans
Thalbauer, the managing director for supply chain and logistics for Google
Cloud. “You model all the different locations of your enterprise. Then you
model all your suppliers, not just the tier one but tier two, three, and four.
You bring in the logistic service providers. You bring in manufacturing
partners. You bring in customers and consumers so that you have really the
full view.” Once a network of supply chain players has been built out, the
customer then starts loading data into their digital twin. The customer starts
with their private enterprise data, which typically includes past orders,
pricing, costs, and supply and demand forecasts, Thalbauer said. “Then you
also want to get information from your business partners,” Thalbauer told
Datanami last year. “You share your demands with your suppliers. And they
actually loop back to you what is the supply situation. You share the
information with the logistics service providers. You share sustainability
information with the service provider.”
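A rough Python sketch of the modelling sequence Thalbauer describes: build out the network of supply-chain players first (including tier-two-and-beyond suppliers and logistics partners), then load private enterprise data onto it. All names and fields here are illustrative assumptions, not Google Cloud's actual data model.

```python
# Hypothetical sketch of a supply-chain digital twin: model the network of
# players first, then load private enterprise data. Names/fields are invented.
from dataclasses import dataclass, field

@dataclass
class Partner:
    name: str
    role: str            # "supplier", "logistics", "manufacturer", "customer"
    tier: int = 1        # tier 1..4 suppliers, per the excerpt
    shared: dict = field(default_factory=dict)   # data partners loop back later

@dataclass
class DigitalTwin:
    locations: list = field(default_factory=list)
    partners: list = field(default_factory=list)

    def load_enterprise_data(self, orders, forecasts):
        # Private enterprise data comes first: past orders, forecasts, costs.
        self.orders, self.forecasts = orders, forecasts

twin = DigitalTwin(locations=["DC-Toronto", "Store-104"])
twin.partners += [Partner("Acme Plastics", "supplier", tier=2),
                  Partner("FastFreight", "logistics")]
twin.load_enterprise_data(orders=[{"sku": "A1", "qty": 500}], forecasts={"A1": 620})
print(len(twin.partners), "partners modelled;", twin.forecasts)
```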
Fighting fraud in the supply chain with blockchain
Private blockchain platforms are particularly suited to supply chain
management because they provide traceability, transparency, real-time
logistics tracking, electronic funds transfer and smart contract management.
Processes including negotiations support and procurement can also be connected
via blockchain to build trust and confidence with new suppliers, partners and
colleagues. While private blockchains adhere to the original principles of
blockchain and offer all the distributed benefits, they also retain some of
the characteristics of more centralised, controlled networks. This provides
greater privacy and eliminates many of the illicit activities often associated
with public blockchains and cryptocurrencies. No one can enter this type of
‘permissioned’ network without proper authentication, making it ideal for an
enterprise that does not want to give every participant full access to the
entire contents of the database.
Carbon Neutrality Requires Good Data – and Blockchain
Blockchain technology is effective in remedying these problems. As a
decentralized, immutable ledger where data can be inputted and shared at every
point of action, blockchain works by storing information in interconnected
blocks and provides a value-add for insuring carbon offsets. This creates a
chain of information that cannot be retroactively altered and can be transmitted between all
relevant parties throughout the supply chain. Key players can enter, view, and
analyze the same data points securely and with the assurance of the data’s
accuracy. In addition, the technology can identify patterns of error, giving
actionable insights into where systems or humans may be contributing to the
problem. Data needs to move with products throughout the supply chain to
create an overall number for carbon emissions. Blockchain’s decentralization
offers value to organizations and their respective industries by allowing more
reliable data to be shared between all parties, shining a light on the areas
they need to work on, such as manufacturing operations and the offsets of
buildings.
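For readers curious how "interconnected blocks" make emissions data tamper-evident, here is a deliberately minimal Python sketch: each block stores data plus a hash of the previous block, so a retroactive edit breaks verification. Real carbon-ledger platforms are of course far more involved.

```python
# Minimal sketch of hash-linked blocks carrying emissions data. Editing an
# earlier block changes its hash and breaks the chain. Purely illustrative.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = []
append_block(chain, {"site": "plant-7", "scope1_tCO2e": 12.4})
append_block(chain, {"carrier": "truck-31", "scope3_tCO2e": 3.1})
print(verify(chain))                      # True
chain[0]["data"]["scope1_tCO2e"] = 1.0    # tampering with an earlier block ...
print(verify(chain))                      # ... is detected: False
```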
Developing Deep Learning Systems Using Institutional Incremental Learning
Institutional Incremental Learning is one of the promising ways of addressing
data-sharing concerns. Using this approach, organizations can train the model
in a secure environment and can share the model without having to share
precious data. Institutional Incremental Learning differs from federated
learning. In federated learning, all the participants train simultaneously,
which is challenging because a centralized server must update and maintain the
models, resulting in complex technology and communication
requirements. ... After training a model locally, the model, along with the
metrics, is shared with the participating entities. In this way, the decision
to use a particular model lies with the organization that will use it, rather
than being imposed by anyone else. This truly enables decentralized machine
learning, where a model is not only trained but also used at the user's
discretion. Institutional incremental learning also helps to address
catastrophic forgetting.
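A small, hypothetical sketch of that workflow: the model is trained at one institution, then passed (together with its metrics) to the next, which continues training on its own local data; no raw data ever leaves an institution. The example uses scikit-learn's partial_fit on synthetic data, and a real deployment would need replay or regularisation to limit catastrophic forgetting.

```python
# Hypothetical institutional incremental learning: the model and its metrics
# move between institutions; the raw data never does. Data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
institutions = {
    "hospital_a": (rng.normal(0.0, 1, (200, 5)), rng.integers(0, 2, 200)),
    "hospital_b": (rng.normal(0.5, 1, (200, 5)), rng.integers(0, 2, 200)),
}

model = SGDClassifier(random_state=0)
shared_metrics = {}
for name, (X, y) in institutions.items():                 # sequential, not simultaneous
    model.partial_fit(X, y, classes=np.array([0, 1]))     # train locally, in-house
    shared_metrics[name] = model.score(X, y)               # share model + metrics only

print(shared_metrics)   # each institution decides for itself whether to adopt the model
```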
Understanding AI’s Limitations Is Key to Unlocking Its Potential
To discern where AI can improve business processes and where it cannot, it’s
important to take into account its legal and ethical considerations, its
biases and its transparency. Asking hard questions about certain AI
applications is critical to setting a project up for success and avoiding risk
down the line. From a legal perspective, we must decipher who carries
responsibility for a bad judgment call (e.g., a self-driving car hitting a
pedestrian). We also must recognize that there is bias when working with
cognitive-based technology. AI learns from the data it gets; however, it
doesn’t have the means to question this data, which means that data sets can
easily skew in one direction and leave AI to adopt bias. This can lead to
things like discrimination in recruiting processes or racial bias in
healthcare management. Businesses that work with AI will also find themselves
walking a fine line between trust and transparency. While the intention of
advanced AI is to make more independent decisions over time, engineers can run
into a “black box” scenario where it’s unclear how the application came to its
decision.
Quote for the day:
"To be the improver of improvements
you must challenge assumptions, starting with your own." --
Vala Afshar