Why it's vital that AI is able to explain the decisions it makes
The effort to open up the black box is called explainable AI. My research
group at the AI Institute at the University of South Carolina is interested in
developing explainable AI. To accomplish this, we work heavily with the
Rubik’s Cube. The Rubik’s Cube is basically a pathfinding problem: Find a path
from point A – a scrambled Rubik’s Cube – to point B – a solved Rubik’s Cube.
Other pathfinding problems include navigation, theorem proving and chemical
synthesis. My lab has set up a website where anyone can see how our AI
algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to
learn how to solve the cube from this website. This is because the computer
cannot tell you the logic behind its solutions. Solutions to the Rubik’s Cube
can be broken down into a few generalized steps – the first step, for example,
could be to form a cross while the second step could be to put the corner
pieces in place. While the Rubik’s Cube itself has over 10 to the 19th power
possible combinations, a generalized step-by-step guide is very easy to
remember and is applicable in many different scenarios. Approaching a problem
by breaking it down into steps is often the default manner in which people
explain things to one another.
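To make the pathfinding framing concrete, here is a minimal sketch, in Python, of the most naive approach: breadth-first search over a puzzle's state space. The toy integer puzzle and its neighbors function are invented for illustration and are not our lab's actual algorithm; at more than 10^19 states, blind search like this is exactly what a practical cube solver cannot afford, which is why learned guidance is needed.

```python
from collections import deque

def bfs_path(start, goal, neighbors):
    """Breadth-first search: return a shortest list of states from
    start to goal, or None if the goal is unreachable."""
    parent = {start: None}   # doubles as the visited set
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []        # walk parent pointers back to the start
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

# Toy stand-in for a puzzle: states are integers, "moves" are +1 and -1.
print(bfs_path(0, 3, lambda s: (s + 1, s - 1)))  # [0, 1, 2, 3]
```

For a real cube, each state would encode the sticker configuration and the neighbors function would apply the twelve face turns; the search structure is the same, but the state space is far too large to explore blindly.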
Why KubeEdge is my favorite open source project of 2020
The KubeEdge architecture allows autonomy on an edge computing layer, which
solves network latency and velocity problems. This enables you to manage and
orchestrate containers in a core data center as well as manage millions of
mobile devices through an autonomous edge computing layer. This is possible
because of how KubeEdge uses a combination of the message bus (in the Cloud
and Edge components) and the Edge component's data store to allow the edge
node to be independent. Through caching, data is synchronized with the local
datastore every time a handshake happens. Similar principles are applied to
edge devices that require persistence. KubeEdge handles machine-to-machine
(M2M) communication differently from other edge platform solutions. KubeEdge
uses Eclipse Mosquitto, a popular open source MQTT broker from the Eclipse
Foundation. While the edge and master nodes communicate over WebSocket,
Mosquitto brokers MQTT messages between the edge node and the devices attached
to it. Most importantly, Mosquitto allows developers to author custom
logic and enable resource-constrained device communication at the edge.
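To give a feel for the device side, here is a minimal sketch in Python of a sensor publishing telemetry through a Mosquitto broker, using the paho-mqtt client library. The hostname, port, topic name, and payload shape are my own illustrative assumptions, not KubeEdge-defined values.

```python
# pip install paho-mqtt
import json
import paho.mqtt.publish as publish

# One-shot publish of a sensor reading to the Mosquitto broker running
# alongside the edge node. Host, port, and topic are assumptions for this
# sketch, not values defined by KubeEdge.
reading = {"device": "temperature-sensor-01", "celsius": 21.5}
publish.single(
    topic="devices/temperature-sensor-01/telemetry",
    payload=json.dumps(reading),
    hostname="localhost",  # the edge node's MQTT broker
    port=1883,             # Mosquitto's default port
)
```

Because the broker lives on the edge node itself, a message like this can still be delivered when the link to the cloud is down, which is the autonomy the architecture is built around.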
DevOps, DevApps and the Death of Infrastructure
The godfather of the DevOps movement, Patrick Debois, often speaks about how
we are moving to a more service-oriented, or "serviceful," internet. I have been
calling this riff on DevOps deployment methodology, DevApps. This is an
emerging design pattern where cloud native applications are a combination of
off-the-shelf services (like Twilio, Salesforce, and many others) alongside custom
software deployed as functions on scale-to-zero web services like AWS
Lambda. Services are being managed with Terraform, just as the services of the
past had been managed by Chef or Puppet. Once organizations embrace the
well-accepted practice of automating deployment, the next frontier is to create
applications that are composable via automated means. What we’re talking about
here is layering integration-as-code on top of infrastructure-as-code. With a
wide variety of cloud services at their disposal, application developers need
not worry about the latter — just the former. At TriggerMesh, we are seeing
more and more organizations looking to create applications that are configured
with automated workflows on the fly.
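As a hedged sketch of the custom half of this pattern, here is what a scale-to-zero function might look like in Python: an AWS Lambda handler that reacts to an event by calling one off-the-shelf service (Twilio, in this example). The environment variable names and the event shape are illustrative assumptions, not TriggerMesh's implementation.

```python
# A sketch of the DevApps pattern: a scale-to-zero function gluing an
# event to an off-the-shelf service. Environment variable names and the
# triggering event's shape are illustrative assumptions.
import os
from twilio.rest import Client  # pip install twilio

def lambda_handler(event, context):
    """AWS Lambda entry point: forward an alert as an SMS via Twilio."""
    twilio = Client(os.environ["TWILIO_ACCOUNT_SID"],
                    os.environ["TWILIO_AUTH_TOKEN"])
    message = twilio.messages.create(
        to=os.environ["ALERT_PHONE_NUMBER"],
        from_=os.environ["TWILIO_PHONE_NUMBER"],
        body=event.get("message", "Something happened"),
    )
    return {"statusCode": 200, "body": message.sid}
```

The function owns no infrastructure of its own: declaring the Lambda, its trigger, and its configuration in Terraform is the infrastructure-as-code layer, and wiring events between services like this is the integration-as-code layer on top.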
5 Qualities Of Highly Engaged Teams
Trust is not just the cornerstone of leadership. It is also a fundamental
building block in high-performance teams. When teams trust each other, it
gives them more confidence in their abilities. They know they will get
support when needed. Also, they will be willing to provide support to teams
in need. This collaboration and cooperation encourage the sharing of best
practices, which raises the level of the whole team, or of multiple teams. Trust
is one of those reflexive qualities; the more the leader shows trust, the
more they will be trusted. The more we trust our teams, the more they will
trust themselves and each other. Leaders need to be role models here, but
they also need to go a step further: offering support and asking for
it. Leaders who can show this vulnerability make it OK for
their teams to ask for help when needed, as well as to give it. Teams that
consistently deliver are teams that feel empowered, teams that understand
what needs to be done and have the tools to achieve it. This empowerment
boosts self-confidence and the belief that the team will reach its goals.
Being engaged is great, but without empowerment it can lead to
frustration and disengagement.
Four key real world intelligent automation trends for 2021
In 2021, there will be an overdue re-think of how organisations choose RPA
and intelligent automation technologies. We’ll see greater selection rigour
fuelling more informed assessments of these technologies’ abilities to
successfully operate and scale in large, demanding, front-to-back-office
enterprise environments, where performance, security, flexibility,
resilience, usability, and governance are required. ... For an RPA or
intelligent automation programme to really deliver, a strategy and purpose
are needed. This could be improving data quality, operational efficiency,
process quality and employee empowerment, or enhancing stakeholder
experiences by providing quicker, more accurate responses. By examining the
experiences and proven outcomes of those organisations with mature automation
programmes, we'll see more meaningful methods of measuring
the impact of RPA and intelligent automation. ... This year, there will also
be a greater understanding of which vendor software robots really possess
the ability to be ‘the’ catalyst for digital transformation. These robots
are typically pre-built, smart, highly productive and self-organising
processing resources that perform joined-up, data-driven work across
multiple operating environments of complex, disjointed, difficult-to-modify
legacy systems and manual workflows.
Why North Korea Excels in Cybercrime
The cybercrime market's size and the scarcity of effective protection continue
to be a mouth-watering lure for North Korean cyber groups. The country's cyber
operations carry little risk, don't cost much, and can produce lucrative
results. Nam Jae-joon, the former director of South Korea's National
Intelligence Service, reports that Kim Jong Un himself said that cyber
capabilities are just as important as nuclear power and that "cyber warfare,
along with nuclear weapons and missiles, is an 'all-purpose sword' that
guarantees our [North Korea's] military's capability to strike
relentlessly." Other reports note that in May 2020, the North Koreans
recruited at least 100 top-notch science and technology university graduates
into its military forces to oversee tactical planning systems. Mirim College,
dubbed the University of Automation, churns out approximately 100 hackers
annually. Defectors have testified that its students learn to dismantle
Microsoft Windows operating systems, build malicious computer viruses, and
write code in a variety of programming languages. The focus on Windows may
explain the infamous North Korean-led 2017 WannaCry ransomware cyberattack,
which wreaked havoc on more than 300,000 computers across 150 countries by
exploiting vulnerabilities in the popular operating system.
To see the future more clearly, find your blind spots
There are multiple causes of blind spots. One is a persistent state of
denial, described in four parts by an emergency management professional
after Hurricane Katrina: “One is, it won’t happen. Two is, if it does
happen, it won’t happen to me. Three: If it does happen to me, it won’t be
that bad. And four: If it happens to me and it’s bad, there’s nothing I can
do to stop it anyway.” To this, I’m sure we can now add a fifth
rationalization: “It won’t happen again.” Denial, however, has never been a
successful strategy. An additional cause of blind spots is an overreliance
on available data. Executives have benefited greatly from increased insights
derived through analytics and other sophisticated methods of pattern
recognition. The limitation of these tools, however, is that they can’t
detect the “dog that didn’t bark,” a reference to a Sherlock Holmes case in
which the crucial clue is not what happened but what did not. Leading is, in
part, about bringing an organization into the future, and so executives
should sharpen their thinking to include not only what they can see clearly
but also what they can’t. A third cause is conditions that can tightly bind
thinking.
Being Future Ready Is The Only Way To Survive In Data Science Field
There are three key skills for any data scientist: first, a strong hold on
mathematics and statistics; second, a programming language base for tasks
such as data processing and storage; and lastly, domain knowledge. When you
are working in a company, you must think about what value you are adding.
Having acquired these skills, the next step is constant upgrading and
upskilling. There is a sea of resources available online.
For example, Coursera and edX are good sources for theoretical introductions
to a variety of topics. For a more practical approach, aspirants may check
Datacamp and Udemy. I would also suggest using Kaggle, participating in
hackathons, and undertaking internships to gain an edge. It is also
important to think from the perspective of being ready for future
challenges, given this field’s dynamic nature. It does get difficult to
catch up with every new model or concept. I find it difficult too. What I
tend to do is try to look at the bigger picture, and once a technology starts
picking up pace, I spend time understanding it. The secret lies in following
the broad macro trends, not just in data science but across the entire tech space.
How to implement a DevOps toolchain
A good DevOps toolchain is a progression of different DevOps tools used to
address a specific business challenge. Connected in a chain, they ensure a
productive cycle between front-end and back-end developers, quality
analysts, and customers. The goal is to automate development and deployment
processes to ensure the rapid, reliable, and budget-friendly delivery of
innovative solutions. We found that building a successful DevOps toolchain
is not a simple undertaking. It takes experimentation and continuous refinement
to guarantee that essential processes are fully automated. A DevOps toolchain
automates all of the technical elements in your workflow. It also gets
different teams on the same page so that you can focus on a business strategy
to drive your organization into the future. We have identified five more
compelling benefits of implementing a DevOps toolchain. ...
A fully enabled and properly implemented DevOps toolchain propels your
innovation initiatives from start to end and ensures prompt deployment. Your
toolchain will look different from ours, depending on your requirements, but I
hope seeing our workflow gives you a sense of how to approach automation as a
solution.
3 Essential Steps to Exploit the Full Power of AI
A key to generating a good ROI lies in how data, automation, analytics, and
AI initiatives are executed. Close to 23% of respondents have already set up or are in the
process of setting up an AI Center of Excellence that shares and coordinates
resources across different areas of the company. This number has risen from 18%
just a year back. Also, nearly 19% of companies have a company-wide AI leader
who oversees AI strategy and governance. The reason such an integrated
delivery model makes sense is that the convergence of the cloud infrastructure
that provides the storage and compute, the data that is the raw material for
the analysis, the automation that operates on the technology infrastructure,
the analytics that operate on the data to generate better insights, and the AI
that enhances both the automation and the analytics has resulted in decreased
costs and better revenues. In large companies (greater than $1 billion in
revenues), the existing data and analytics groups have expanded their remit to
include AI. Companies that currently have separate centers of excellence (COE)
for analytics and/or automation and/or AI must integrate, or at the very least
coordinate, their initiatives. Doing so would provide more seamless
integration and yield better
ROI. Companies that are just starting their journey in analytics and AI can
start with an analytics or automation COE that expands to include AI
capabilities.
Quote for the day:
"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera