How to scale AI with a high degree of customization
Scaling machine learning programs is very different from scaling traditional
software because ML models have to be adapted to fit each new problem you approach. As
the data you’re using changes (whether because you’re attacking a new problem or
simply because time has passed), you will likely need to build and train new
models. This takes human input and supervision. The degree of supervision
varies, and that is critical to understanding the scalability challenge. A
second issue is that the humans involved in training the machine learning model
and interpreting the output require domain-specific knowledge that may be
unique. So someone who trained a successful model for one business unit of your
company can’t necessarily do the same for a different business unit where they
lack domain knowledge. Moreover, the way an ML system needs to be integrated
into the workflow in one business unit could be very different from how it needs
to be integrated in another, so you can’t simply replicate a successful ML
deployment elsewhere. Finally, an AI system’s alignment to business objectives
may be specific to the group developing it. For example, consider an AI system
designed to predict customer churn.
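To make that retraining point concrete, here is a minimal sketch (not from the article) of rebuilding a churn classifier when a fresh batch of labeled data arrives; the CSV files, column names, and choice of scikit-learn model are all hypothetical.

    # Minimal sketch: retrain a churn model when new labeled data arrives.
    # The CSV files, column names, and model choice below are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def train_churn_model(df: pd.DataFrame) -> LogisticRegression:
        """Fit a simple churn classifier; 'churned' is assumed to be the label column."""
        X, y = df.drop(columns=["churned"]), df["churned"]
        X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
        return model

    # A model trained on last quarter's data ...
    model = train_churn_model(pd.read_csv("churn_2023_q1.csv"))
    # ... is rebuilt, with a human reviewing the validation metric, once behavior drifts.
    model = train_churn_model(pd.read_csv("churn_2023_q2.csv"))

The retraining step itself is cheap to script; deciding whether the new model is actually fit for purpose still takes the human supervision and domain knowledge described above.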
The snags holding back DevOps: culture, delivery and security
Cultural issues create a disjointed relationship between Dev and Ops.
"Culture is the number one missing component, but there is also a failure to
truly connect and automate across functional silos," Dawson says. "This results
in lack of shared visibility, consistent feedback to drive improvement and,
potentially, a negative experience which inhibits adoption." There are too many
tools competing for Dev and Ops teams' mindshare as well. "A single team may
have anywhere from 20 to 50 tools," says Kakran. "Separating signal from noise
when you are bombarded by hundreds of alerts per hour is quite challenging." The
continuous delivery piece is also a snag in the continuous integration/continuous
delivery (CI/CD) pipeline that should flow effortlessly through DevOps.
"Enterprises are lagging in test automation and are increasing efforts to
automate continuous testing, which is a core component of CD," says Venky
Chennapragada, DevOps architect with Capgemini North America. "Some enterprises
are unable to adopt a high level of CI/CD because their application portfolio
mostly consists of packaged software, legacy software or ERP systems."
PyTorch Code for Self-Attention Computer Vision
Self-attention is steadily gaining prominence, from sequence modeling in
natural language processing to medical image segmentation. It replaces
conventional recurrent and convolutional neural networks in many applications
to achieve new state-of-the-art results in the respective fields. Transformers
and their variants and extensions rely heavily on self-attention mechanisms.
Self-Attention Computer Vision, known technically as self_attention_cv, is a
PyTorch-based library that provides a one-stop solution for self-attention-based
requirements. It includes a variety of self-attention-based layers and
pre-trained models that can simply be dropped into any custom architecture.
Rather than building self-attention layers or blocks from scratch, this library
lets its users assemble models in no time. Heavy pre-trained models such as
TransUNet and ViT can also be incorporated into custom models and can finish
training in minimal time, even in a CPU environment. According to its
contributors, Nicolas Adaloglou and Sergios Karagiannakos, the library is still
under active development, with the latest models and architectures being added.
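The snippet below is a minimal sketch, in plain PyTorch, of the scaled dot-product self-attention layer that a library like self_attention_cv packages for reuse; it is not the library's own API, and the batch size, token count, and embedding dimension are arbitrary examples.

    # Minimal sketch of a single-head scaled dot-product self-attention layer.
    # It illustrates the kind of building block such libraries provide; dimensions are arbitrary.
    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.scale = dim ** -0.5
            self.to_qkv = nn.Linear(dim, dim * 3, bias=False)  # joint projection to queries, keys, values

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: [batch, tokens, dim], e.g. a sequence of image-patch embeddings
            q, k, v = self.to_qkv(x).chunk(3, dim=-1)
            attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # [batch, tokens, tokens]
            return attn @ v  # attention-weighted sum of values, same shape as x

    tokens = torch.rand(2, 16, 64)        # 2 images, 16 patches each, 64-dim embeddings
    out = SelfAttention(dim=64)(tokens)   # -> torch.Size([2, 16, 64])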
How to Choose the Right Cybersecurity Framework
Start by setting goals for your cybersecurity program that align with the
business's needs. Stakeholders from across the organization — from the C-suite
and upper management to support teams and IT — should be involved in the
initial risk-assessment process and setting a risk-tolerance level. While
deciding where to start your implementation can feel like trying to boil the
ocean, one way to make it less intimidating is to run a pilot program focused
on a single department. This can help uncover lessons about what does and
doesn't work, what tools will help you succeed, and best practices for a wider
rollout. From there, identify the type of data the organization processes and
map out its life cycle. A simple model will help lay a foundation for
understanding the organization's cybersecurity risk and for identifying points
along the supply chain where more time and resources should be invested.
Business tools and software are often important sources and collectors of data,
so ask vendors about their data privacy policies to ensure they reflect your
goals. ... A good cybersecurity framework will help you identify risks, protect
company assets (including customer data), and put steps in place to detect,
respond to, and recover from a cybersecurity event.
Is Data Science a science?
Before we tackle the idea of whether Data Science is a science or not,
something that doesn’t seem to have a definitive answer, let’s step back and
look at the idea of proof. This word is used quite loosely, as there are many
different kinds of proof: for example, there are scientific proofs, legal
proofs, and mathematical proofs. In mathematics, a proof is an
inferential argument that shows a statement is true as supported by axioms,
definitions, theorems, and postulates. Mathematicians normally use deductive
reasoning to show that the premises, also called statements, in a proof are
true. A direct proof is one that shows a given statement is always true and
the proof is usually written in a symbolic language. In an indirect proof,
mathematicians usually employ proof by contradiction, where they assume the
opposite statement is true and eventually reach a contradiction showing the
assumption is false. In science, an inherently inductive enterprise, we cannot
prove any hypothesis to be true, as that would require an infinite number of
observations; the best we can hope to do is use inductive reasoning as the
basis of our generalizations and hold them to be provisionally true.
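As a standard worked example of proof by contradiction (not taken from the article), consider the classic argument that $\sqrt{2}$ is irrational:

    Assume, for contradiction, that $\sqrt{2} = p/q$ with $p, q$ integers sharing no common factor.
    Squaring gives $p^2 = 2q^2$, so $p^2$ is even and therefore $p$ is even: write $p = 2k$.
    Substituting back, $4k^2 = 2q^2$, so $q^2 = 2k^2$ and $q$ must be even as well.
    Both $p$ and $q$ are then even, contradicting the assumption that they share no common
    factor; hence $\sqrt{2}$ cannot be written as a ratio of integers.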
Seven lessons on how technology transformations can deliver value
Not only do the transformations focused on talent strategy stand out in their
value potential, but they are also much more commonplace at top-performing
companies. Top-quartile respondents are more than three times likelier than
their bottom-quartile peers (41 percent, compared with 12 percent) to say
they’ve pursued a transformation of their talent strategy in recent years. Yet
the need to address talent is universal and urgent. Respondents believe that
more than 40 percent of their workforce will need to be either replaced or
fundamentally retrained to make up for their organizations’ skills gaps. But
only 15 percent of respondents say their companies plan to pursue a
talent-strategy transformation in the next two years, even though the talent
challenge remains considerable. At companies that have pursued recent
transformations, the top challenges to doing so continue to revolve around
talent as well as culture: namely, skill gaps and cultural differences, the
difficulty of changing cultures and ways of working, and difficulty finding
talent to fill new roles—which is as challenging for top performers as it is
for everyone else. Talent also appears to impede progress at the companies
that haven’t pursued technology transformations;
More Intelligent Medicine
The combination of human and machine intelligence could optimize the practice
of clinical medicine and streamline health care operations. Machine
learning-based AI tools could be especially valuable because they rely on
adaptive learning. This means that with each exposure to new data, the
algorithm gets better at detecting telltale patterns. Such tools have the
capacity to transcend the knowledge-absorption and information-retention
limits of the human brain because they can be “trained” to consider millions
of medical records and billions of data points. Such tools could boost
individual physicians’ decision-making by offering doctors accumulated
knowledge from billions of medical decisions, billions of patient cases, and
billions of outcomes to inform the diagnosis and treatment of an individual
patient. AI-based tools could alert clinicians to a suboptimal medication
choice, or they could triage patient cases with rare, confounding symptoms to
rare-disease experts for remote consults. AI can help optimize both diagnostic
and prognostic clinical decisions; it can help individualize treatment; and it
can identify patients at high risk of progressing to serious disease or of
developing a condition, allowing physicians to intervene preemptively.
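As a hedged illustration of what "adaptive learning" looks like in code (not from the article), the sketch below updates a scikit-learn model incrementally as each new batch of synthetic records arrives, so its decision rule keeps adjusting to new data; real clinical use would of course demand far more rigor.

    # Minimal sketch of adaptive (incremental) learning: the model is updated with each
    # new batch of records instead of being trained once and frozen. Data is synthetic.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier()
    classes = np.array([0, 1])                    # e.g. 1 = condition present

    for batch in range(5):                        # each loop stands in for newly arriving records
        X = rng.normal(size=(200, 10))            # 200 records, 10 features
        y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
        model.partial_fit(X, y, classes=classes)  # update the model with the new data
        print(f"batch {batch}: accuracy on this batch {model.score(X, y):.2f}")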
Surviving Zombie Scrum
There’s not one specific cause of Zombie Scrum, but in relation to the
symptoms we described earlier, we can share some common causes. Generally
speaking, Zombie Scrum occurs in organizations that optimize for something
other than actual agility. This creates problems that the teams usually cannot
solve on their own. For example, Scrum Teams that operate in
environments with Zombie Scrum rarely have a clear answer as to what makes
their product valuable. Much like zombies that stumble around without a sense
of direction, many Zombie Scrum Teams work hard on getting nowhere in
particular. While they still produce something, the question remains whether
they are actually effective. ... Another cause is the struggle many
organizations face with shipping fast. Often-heard excuses are that the
product is too complex, technology doesn’t support it, or customers aren’t
asking for it. Shipping fast is perceived as a “nice to have”, instead of a
necessary activity to manage risk and deliver value sooner. Without shipping
fast, Scrum’s loop of Empirical Process Control collapses. In Zombie Scrum,
organizations don’t create safety to fail. Teams can’t improve when they
experience no room for uncertainty, doubt, or criticism. They often develop
all kinds of defensive strategies to prevent uncertainty.
AI Must Play a Role in Data Cloud Management
Data intelligence, or the use of data to glean useful information, allows a
business both to increase revenue and to improve its position in the market.
But the continual multiplication of data and its sources is making an already
substantial challenge even more laborious. This emphasis on data is where
artificial intelligence (AI) can play an especially useful role. By leveraging
the cloud and AI for the storage, collection, and analysis of data, a business
can monetize information in a fast, effective manner. Indeed, mastering data
management through the use of the cloud will continue to be top of mind for
many IT groups as they are asked more and more to improve business agility
through the fostering of better business intelligence. Thus, data science --
the large umbrella under which AI, machine learning, automation, data storage,
and more all fall -- will see huge leaps in growth both this year and
in the years ahead. The cloud is perfectly positioned to assist organizations
with AI because of its unique ability to provide businesses with the
flexibility, agility, scalability, and speed that other infrastructure models
simply can't achieve at the same level. If the core of a business isn't managing a
datacenter, then the cloud is all the more appealing, since it allows IT teams
to focus on the value-driving projects that will truly make a difference for
employees and customers.
Why data privacy will be the catalyst for digital identity adoption
Firstly, the story around digital identities needs to change. What they won’t
be is a one-stop-shop to access every piece of personal information about you
at the touch of a button, shareable and stealable. What digital identities
could be, if we put data privacy at their core, is selective. We have the
opportunity to create a technology that lets people share only the specific
data they need at any one time, withholding everything else while still getting
the job done. This doesn't seem too big of an ask, either. Mastercard
recently partnered with Deakin University and Australia Post to test out a
digital ID solution enabling students to register for their exams digitally.
This removed the need for tiresome paperwork and trips to campus, but also
reduced the amount of data shared about each student. Students created a
digital identity with Australia Post, using this to gain access to their
university exam portal. With each registration, only specific personal
information was required to allow students entry to the exam portal – nothing
was shared that didn't need to be. Now imagine this in our banks, shops, and
workplaces. Rather than revealing most of your ‘identity’ with every purchase
of alcohol, you only show your ID documents when you first create the identity
– to verify that you are who you say you are.
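Purely as a conceptual sketch (not a description of the Mastercard/Australia Post system or of any real digital-identity standard), the idea of selective disclosure can be boiled down to releasing only the claims a verifier asks for:

    # Conceptual sketch of selective disclosure: the holder releases only the requested
    # claims, plus a hash commitment to the full record so it cannot be silently altered.
    # Attribute names and values are made up for illustration.
    import hashlib
    import json

    identity = {                                  # attributes captured once, at enrolment
        "name": "Alex Example",
        "date_of_birth": "1999-04-02",
        "student_id": "S1234567",
        "home_address": "12 Sample St",
    }

    def disclose(record: dict, requested: set) -> dict:
        """Return only the requested claims, with a commitment to the full record."""
        commitment = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        return {"claims": {k: v for k, v in record.items() if k in requested},
                "commitment": commitment}

    # An exam portal only needs the student ID; the birth date and address stay withheld.
    print(disclose(identity, {"student_id"}))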
Quote for the day:
"Don't dare to be different, dare to
be yourself - if that doesn't make you different then something is wrong."
-- Laura Baker