What Is a Minimum Viable AI Product?
Most organizations don’t want to use a separate AI application, so a new
solution should allow easy integration with existing systems of record,
typically through an application programming interface (API). This allows AI
solutions to plug into existing data records and combine with transactional
systems, reducing the need for behavior change. Zylotech, another Glasswing
company, applies this principle to its self-learning B2B customer data
platform. The company integrates client data across existing platforms;
enriches it with a proprietary data set about what clients have browsed and
bought elsewhere; and provides intelligent insights and recommendations about
next best actions for clients’ marketing, sales, data, and customer teams. It
is designed specifically to directly complement clients’ existing software
suites, minimizing adoption friction. Another integration example is Verusen,
an inventory optimization platform also in the Glasswing portfolio. Given the
existence of large, entrenched enterprise resource planning players in the
market, it was essential for the platform to integrate with such systems. It
gathers existing inventory data and provides its AI-generated recommendations
on how to connect disparate data and forecast future inventory needs without
requiring significant user behavior change.
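As a concrete illustration of this integration pattern, here is a minimal Python sketch of an AI service plugging into an existing system of record over a REST API. The endpoints, field names, and services below are hypothetical stand-ins, not Zylotech's or Verusen's actual interfaces:

```python
import requests

# Hypothetical endpoints -- stand-ins for a CRM/ERP system of record
# and an AI recommendation service; neither is a real vendor API.
CRM_API = "https://crm.example.com/api/v1"
AI_API = "https://ai-service.example.com/api/v1"

def recommend_next_actions(account_id: str) -> list[dict]:
    """Pull a record from the system of record, ask the AI service
    for next-best-action recommendations, and write them back."""
    # 1. Read the existing record -- no data migration required.
    account = requests.get(f"{CRM_API}/accounts/{account_id}").json()

    # 2. Enrich and score it with the AI service.
    recs = requests.post(f"{AI_API}/recommendations", json=account).json()

    # 3. Push results back into the system users already work in,
    #    so no behavior change is needed on their side.
    requests.patch(f"{CRM_API}/accounts/{account_id}",
                   json={"next_best_actions": recs})
    return recs
```

The point of the pattern is visible in step 3: the AI output lands inside the tool the team already uses, rather than in a separate application.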
Half of 4 Million Public Docker Hub Images Found to Have Critical Vulnerabilities
A recent analysis of around 4 million Docker Hub images by cybersecurity firm
Prevasio found that 51% of the images had exploitable vulnerabilities. A large
number of the malicious images it uncovered contained cryptocurrency miners,
both open and hidden, and 6,432 of the images had malware. Prevasio’s team
performed both static and dynamic
analysis of the images. Static scanning includes dependency analysis, which
checks the dependency graph of the software present in the image for published
vulnerabilities. In addition, Prevasio’s team performed dynamic scanning,
i.e., running containers from the images and monitoring their runtime
behavior. The report groups images into vulnerable and malicious ones. Almost
51% of the images had critical vulnerabilities that could be exploited, and
68% were vulnerable to varying degrees, while 0.16% of the analyzed images
(the 6,432 noted above) contained malicious software. Windows images, which
accounted for 1% of the total, and images without tags, were excluded from the
analysis. Earlier this year, Aqua Security’s cybersecurity team uncovered a new
technique where attackers were building malicious images directly on
misconfigured hosts.
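To illustrate what dynamic scanning involves, here is a minimal Python sketch using the Docker SDK (the `docker` package): run a container from an image and inspect its process list for suspicious binaries. The image name and the watchlist of miner process names are illustrative assumptions, not Prevasio's actual methodology:

```python
import docker  # pip install docker

# Illustrative watchlist of process names associated with cryptominers;
# a toy heuristic, not Prevasio's actual detection logic.
SUSPICIOUS = {"xmrig", "minerd", "cpuminer"}

def dynamic_scan(image_name: str) -> list[str]:
    """Run a container from the image and flag suspicious processes.

    Assumes the image's default command keeps running long enough
    to take a process snapshot.
    """
    client = docker.from_env()
    client.images.pull(image_name)
    container = client.containers.run(image_name, detach=True)
    try:
        top = container.top()  # process snapshot from inside the container
        cmd_col = top["Titles"].index("CMD")
        return [row[cmd_col] for row in top["Processes"]
                if any(name in row[cmd_col] for name in SUSPICIOUS)]
    finally:
        container.stop()
        container.remove()

# e.g. print(dynamic_scan("example/some-public-image:latest"))
```

Static dependency analysis, by contrast, never runs the image: it inspects the installed packages and checks them against published vulnerability databases.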
Getting Started— A Coder’s Guide to Neural Networks
The world talks endlessly about machine learning and AI, yet hardly anyone
seems to know how it works; on the flip side, everyone makes it seem like
they're an expert. The unfortunate truth is that the knowledge and know-how
seem to be stuck with the academic elites. For the most part, the material
online for learning about machine learning and deep learning falls into one
of three categories: shallow tutorials with barely any explanation of why
certain patterns are followed; copy-and-paste material from those who want to
pretend to have a self-made portfolio; or such intimidating, math-heavy
lessons that you get lost in all the Greek. This book was written to get away
from all of that. It's meant to be an easy read that walks the reader through
the fundamentals of neural networks. Its purpose is to take the knowledge out
of the hands of the few and put it into the hands of any coder. Before
continuing, let's clear something up. From an outsider's perspective, the
world of AI consists of many terms that seem to mean the same thing: machine
learning, deep learning, artificial intelligence, neural networks. Why are
there so many seemingly synonymous terms? Let's take a look at the diagram
below.
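In the spirit of the book's coder-first pitch, the most basic building block of a neural network, a single neuron, can be sketched in a few lines of plain Python with NumPy. The weights, bias, and inputs below are made-up illustration values:

```python
import numpy as np

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# A single neuron: a weighted sum of inputs plus a bias,
# passed through a nonlinear activation function.
weights = np.array([0.5, -0.6, 0.1])  # illustrative values
bias = 0.2

def neuron(inputs: np.ndarray) -> float:
    return sigmoid(np.dot(weights, inputs) + bias)

print(neuron(np.array([1.0, 0.0, 1.0])))  # ~0.69
```

A deep neural network is, at heart, many of these neurons arranged in layers, with the weights learned from data rather than chosen by hand.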
Here's how opinions on the impact of artificial intelligence differ around the world
Views of AI are generally positive among the Asian publics surveyed: About
two-thirds or more in Singapore (72%), South Korea (69%), India (67%),
Taiwan (66%) and Japan (65%) say AI has been a good thing for society. Many
places in Asia have emerged as world leaders in AI. Most other places
surveyed fall short of a majority saying AI has been good for society. In
France, for example, views are particularly negative: Just 37% say AI has
been good for society, compared with 47% who say it has been bad for
society. In the U.S. and UK, about as many say it has been a good thing for
society as a bad thing. By contrast, Sweden and Spain are among a handful of
places outside of the Asia-Pacific region where a majority (60%) views AI in
a positive light. As with AI, Asian publics surveyed stand out for
their relatively positive views of the impact of job automation. Many Asian
publics have made major strides in the development of robotics and AI. The
South Korean and Singaporean manufacturing industries, for instance, have
the highest and second-highest robot densities in the world.
Artificial Intelligence In The New Normal – Attitude Of Public In India
There have been several discussions in society about AI posing a threat
to humanity and to our way of living and working. In our study, 42% of the
public believe that the impact of AI on net new jobs created will depend on
the industry, and on balance feel that, overall, more new jobs will be
created than lost (net score 1%). 63% of the public feel that humans
will always be more intelligent than AI systems. One puzzling trend that
emerges in the study is how younger people perceive AI. Those aged under 40
(net score -8%) are less optimistic that net new jobs will be created than
those aged over 40 (net score 14%). Further, respondents aged under 40 are
three times less confident than those aged over 40 that human intelligence
will not be overtaken by AI. What explains this apparent diffidence among the
youth? Or I wonder if they are being more prescient than the others about
reaching the singularity! I believe there is a need for appropriate education and
communication strategies for the youth in India about AI and its positive
potential. The public in India demonstrate a sense of optimism about the
future in the new normal and believe in science and technology to make their
lives better.
Data governance in the FinTech sector: A growing need
The neobanking model is another FinTech model that has seen significant
traction globally. In India, neobanks primarily operate in partnership with
one or more banking partners. This leads to data sharing between the
two entities across the multiple banking services provided to consumers. To ensure
regulated usage and security of customer data shared by banks with neobanks
and vice versa, proper data security and access guidelines would need to be in
place. Other FinTech segments, including payments and WealthTech, also require
strong data governance (DG) frameworks to ensure compliance both within the organisation and
across its partners. In recent times, the industry has seen the introduction
of several data-related laws and regulations aimed at ensuring the privacy and
security of an individual’s personally identifiable information (PII) and sensitive data. Some of the key focus
areas include data sharing, data usage, consent and an individual’s data
rights. Hence, there is increasing pressure on companies to remain compliant
while adopting rapidly evolving FinTech models. Considering the changing
regulatory landscape and requirements, some FinTech companies have already
performed readiness assessments and have started to adopt an enterprise DG
framework that would help them ensure effective data management ...
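As a toy illustration of what consent-gated data sharing between a bank and a neobank partner might look like in code, here is a minimal Python sketch. The consent schema, purposes, and partner names are invented for illustration and do not reflect any regulator's actual requirements:

```python
from dataclasses import dataclass

# Hypothetical consent record for partner data sharing.
@dataclass
class Consent:
    customer_id: str
    purpose: str    # e.g. "account_aggregation"
    partner: str    # e.g. "neobank_x"
    granted: bool

def can_share(consents: list[Consent], customer_id: str,
              partner: str, purpose: str) -> bool:
    """Share customer data only if an explicit, matching consent exists."""
    return any(c.granted and c.customer_id == customer_id
               and c.partner == partner and c.purpose == purpose
               for c in consents)

consents = [Consent("C42", "account_aggregation", "neobank_x", True)]
print(can_share(consents, "C42", "neobank_x", "account_aggregation"))  # True
print(can_share(consents, "C42", "neobank_x", "marketing"))            # False
```

The second call fails because consent is scoped to a purpose, which is the kind of fine-grained control a DG framework is meant to enforce.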
Ten Essential Building Blocks of a Successful Enterprise Architecture Program
The danger (or maybe, in some cases, the opportunities) for EAs is that they
may be expected to be conversant in any type of architecture. In other
words, some organizations may only hire one EA and expect her to be able to
do any kind of architecture work except that of licensed architects. If one
considers that EA work could be very different in, say, government
organizations compared to for-profit or non-profit ones, then one could
imagine specialized EAs (e.g., Government Enterprise Architect, Non-Profit
Architect, Conglomerate Architect) who require specialized training and
experience. In fact, there has been general recognition that doing EA
in government can be quite different from doing it in profit-driven
enterprises, and therefore specialized frameworks and training for
government-centric EA may be appropriate. Nonetheless, the leading generic,
openly available EA framework
for professional certification is The Open Group Architecture Framework
(TOGAF), which, with expert assistance, can be adapted to incorporate
elements of both DODAF and the FEA Framework (FEAF). With so many
frameworks, methods, and standards to choose from, why is customization
always required?
How Data Governance Can Improve Corporate Performance
While data governance is a systematic methodology for businesses to comply
with external regulations such as GDPR, HIPAA, Sarbanes-Oxley, and future
regulations, it can also establish a foundation and controls to strengthen
internal decision-making for determining product costs, inventory, consumer
demand, and more. While there are many factors to consider for building a
data governance program, two of the most pressing items that should be top
of mind are data quality and self-service analytics. It’s advantageous to
make data quality efforts part of your data governance program. Trying to
govern data that is old, corrupted, or duplicated can become quite messy.
Although the tools for managing quality and governance are generally
different, data governance provides a framework for data quality. Poor data
quality exists for many reasons, such as data spread across department silos,
different versions of the “same” data, or information lacking common name
identifiers. Without data quality, organizations also face a real possibility
of making faulty business decisions and having a substandard governance
program. Generally, the more
data governance a company has, the stronger its data quality will be.
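To make the data quality point concrete, here is a minimal Python sketch of the kind of checks a governance program might automate: records missing a common identifier and conflicting versions of the "same" data. The column names and sample customer data are hypothetical:

```python
import pandas as pd

# Hypothetical customer records pulled from two department silos.
records = pd.DataFrame({
    "customer_id": ["C1", "C2", "C2", None],
    "name":        ["Acme Co", "Beta LLC", "Beta, LLC", "Gamma Inc"],
    "source":      ["sales", "marketing", "sales", "support"],
})

# Check 1: records missing the common identifier the silos share.
missing_ids = records["customer_id"].isna().sum()

# Check 2: duplicate IDs -- possibly different versions of the "same" data.
dupes = records[records.duplicated("customer_id", keep=False)
                & records["customer_id"].notna()]

print(f"{missing_ids} record(s) missing customer_id")
print(f"{len(dupes)} record(s) sharing an ID with conflicting entries:")
print(dupes)
```

Here "Beta LLC" and "Beta, LLC" share an ID but differ in spelling, exactly the sort of silent inconsistency that leads to faulty business decisions downstream.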
5 Steps to Success in Data Governance Programs
What exactly does a successful data governance program look like? Author
Bhansali (2014) defines data governance as “the systematic management of
information to achieve objectives that are clearly aligned with and
contribute to the organization’s objectives” (p. 9). So, a successful data
governance program is one that achieves these aligned objectives and
furthers the interests of the organization to which it is applied. Our
reading for this week (Bhansali, 2014) outlined several key steps in the
creation of data governance platforms. These steps are by no means an
exhaustive road-map for a perfect data governance platform, nor are they
necessarily chronological. Still, they do provide a launching point for
useful discussion. A data governance program must be aligned with any
existing business strategies. This also involves being aware of the vision
of the future that guides and defines the business. If Apple were the
company under consideration, you might think of its vision as an iPhone
in the pocket of every person on Earth. Create a clear and logical
model of the data governance process that is specific to your organization.
This model should stand apart from any products or technologies created by
the company and must be based on any key processes or standards ...
Application Level Encryption for Software Architects
Unless well defined, the task of application-level encryption is frequently
underestimated and poorly implemented, and it results in haphazard
architectural compromises when developers find out that integrating a
cryptographic library or service is just the tip of the iceberg. Whoever is
formally assigned the job of implementing encryption-based data protection
faces thousands of pages of documentation on how to implement things better,
but very little on how to design things correctly. Design exercises turn into
a bumpy ride whenever you don't anticipate the need for design and instead
make a sequence of ad hoc decisions in the hope of getting things done
quickly. First, you face
key model and cryptosystem choice challenges, which hide under “which
library/tool should I use for this?” Hopefully, you chose a tool that fits
your use-case security-wise, not the one with the most stars on GitHub.
Hopefully, it contains only secure and modern cryptographic decisions.
Hopefully, it will be compatible with other teams’ choices when the encryption
has to span several applications/platforms. Then you face key storage and
access challenges: where to store the encryption keys, how to separate them
from data, what are the integration points where components and data meet for
encryption/decryption, and what is the trust/risk level toward these
components?
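As a minimal illustration of the key-separation point, here is a Python sketch using the widely used `cryptography` package's Fernet recipe. It is a toy, not a complete design: in practice the key would live in a KMS or HSM rather than an environment variable, which stands in here only for "stored away from the data":

```python
import os
from cryptography.fernet import Fernet

# The key is fetched from the environment -- a stand-in for a real
# key management service (KMS/HSM), kept separate from the data store.
# Generate one once with: Fernet.generate_key()
key = os.environ["APP_ENCRYPTION_KEY"].encode()
fernet = Fernet(key)

def encrypt_field(plaintext: str) -> bytes:
    """Encrypt a single sensitive field before it is persisted."""
    return fernet.encrypt(plaintext.encode())

def decrypt_field(token: bytes) -> str:
    """Decrypt a field after it is read back from storage."""
    return fernet.decrypt(token).decode()

token = encrypt_field("4111 1111 1111 1111")  # illustrative card number
print(decrypt_field(token))
```

Even this toy surfaces the design questions from the excerpt: who can read APP_ENCRYPTION_KEY, where in the data flow encryption happens, and how the key gets rotated without breaking old ciphertexts.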
Quote for the day:
"No one reaches a high position without daring." -- Publilius Syrus