Benefits of RPA: RPA Best Practices for successful digital transformation
A key benefit of RPA solutions is that they reduce human error while enabling
employees to feel more human by engaging in conversations and assignments that
are more complex and potentially more rewarding. For instance, instead of
having a contact center associate enter information while also speaking with a
customer, an RPA solution can automatically collect, upload, or sync data with
other systems for the associate to approve, leaving the associate free to focus
on forming an emotional connection with the customer. RPA can also
facilitate and streamline employee onboarding and training. An RPA tool, for
instance, can pre-populate forms with the new hire’s name, address, and other
key data from the resume and job application form, saving the employee time.
For training, RPA can conduct and capture data from training simulations,
allowing a global organization to ensure all employees receive the same
information in a customized and efficient manner. RPA is not for every
department and it’s certainly not a panacea for retention and engagement
problems. But by thinking carefully about the benefits that it offers to
employees, RPA can transform workflows—making employees’ jobs less robotic and
more rewarding.
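To make the onboarding example concrete, here is a minimal sketch of the kind of form pre-population an RPA bot performs. The field names and parsed-application data are hypothetical, not drawn from any particular RPA product.

```python
# Toy illustration of RPA-style form pre-population; the field names and
# parsed-application data below are hypothetical.
application = {
    "name": "Jane Doe",
    "address": "123 Main St",
    "email": "jane@example.com",
}

ONBOARDING_FIELDS = ["name", "address", "email", "emergency_contact"]

def prefill(parsed_application: dict, fields: list) -> dict:
    # Copy whatever the bot already extracted; leave the rest blank
    # for the new hire to complete.
    return {field: parsed_application.get(field, "") for field in fields}

print(prefill(application, ONBOARDING_FIELDS))
# {'name': 'Jane Doe', 'address': '123 Main St',
#  'email': 'jane@example.com', 'emergency_contact': ''}
```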
Hey Alexa. Is This My Voice Or a Recording?
The idea is to quickly detect whether a command given to a device is live or
is prerecorded. It's a tricky proposition given that a recorded voice has
characteristics similar to a live one. "Such attacks are known as one of the
easiest to perform as it simply involves recording a victim's voice," says
Hyoungshick Kim, a visiting scientist at CSIRO. "This means that not only is
it easy to get away with such an attack, it's also very difficult for a victim
to work out what's happened." The impacts range from using someone else's
credit card details to make purchases, to controlling connected devices such as
smart appliances, to accessing personal data such as home addresses and
financial data, he says. Other research teams have also tackled the
voice-spoofing problem and proposed countermeasures. In 2017, 49 teams
submitted entries to the ASVspoof 2017 Challenge, a project aimed at developing
countermeasures for automatic speaker verification spoofing. The ASV
competition produced one technology that had a low error rate compared to the
others, but it was computationally expensive and complex, according to Void's
research paper.
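To give a flavor of how spectral characteristics can separate live speech from a replay, here is a toy sketch of one such cue: how total power is distributed across frequencies, since audio replayed through a loudspeaker tends to reshape the low-frequency balance. This is an illustration in the spirit of detectors like Void, not Void's published algorithm; the split frequency and threshold are made-up values.

```python
# Toy liveness cue, NOT Void's actual algorithm.
import numpy as np

def low_band_power_ratio(signal: np.ndarray, sample_rate: int,
                         split_hz: float = 1000.0) -> float:
    """Fraction of total spectral power that sits below split_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return spectrum[freqs < split_hz].sum() / spectrum.sum()

def looks_replayed(signal: np.ndarray, sample_rate: int,
                   threshold: float = 0.6) -> bool:
    # A real detector feeds features like this to a trained classifier;
    # the fixed threshold here is purely illustrative.
    return low_band_power_ratio(signal, sample_rate) > threshold
```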
Reduce these forms of AI bias from devs and testers
Cognitive bias means that individuals think subjectively, rather than
objectively, and therefore influence the design of the product they're
creating. Humans filter information through their unique experience, knowledge
and opinions. Development teams cannot eliminate cognitive bias in software,
but they can manage it. Let's look at the biases that most frequently affect
quality, and where they appear in the software development lifecycle. Use the
suggested approaches to overcome cognitive biases, including AI bias, and
limit their effect on software users. Consider the curse of knowledge: a person
knowledgeable about a topic finds it difficult to discuss it from a neutral
perspective. The more the person
knows, the harder neutrality becomes. That bias manifests within software
development teams when experienced or exceptional team members believe that
they have the best solution. Infuse the team with new members to offset some
of the bias that occurs with subject matter experts. Cognitive bias often
begins in backlog refinement. Preconceived notions about application design
can affect team members' critical thinking. During sprint planning, teams can
fall into the planning fallacy: underestimating the actual time necessary to
complete a user story.
Deploying the Best of Both Worlds: Data Orchestration for Hybrid Cloud
A different approach to bridging the worlds of on-prem data centers and the
growing variety of cloud computing services comes from Alluxio, a company that
has focused on this problem since its roots at UC Berkeley's AMPLab. Alluxio
brings data to compute in a different way: essentially, the technology provides
an in-memory cache that sits between cloud and on-prem environments. Think of
it as a new spin on
data virtualization, one that leverages an array of cloud-era advances.
According to Alex Ma, director of solutions engineering at Alluxio: "We
provide three key innovations around data: locality, accessibility and
elasticity. This combination allows you to run hybrid cloud solutions where
your data still lives in your data lake." The key, he said, is that "you can
burst to the cloud for scalable analytics and machine-learning workloads where
the applications have seamless access to the data and can use it as if it were
local--all without having to manually orchestrate the movement or copying of
that data."
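Reduced to a sketch, the core idea is a read-through cache that makes remote data look local. The snippet below is a conceptual illustration only, not Alluxio's API; `fetch_fn` stands in for whatever actually reads from S3 or the on-prem data lake.

```python
# Conceptual read-through cache illustrating "bring the data to compute."
# This is NOT Alluxio's API; fetch_fn stands in for a data-lake reader.
class ReadThroughCache:
    def __init__(self, fetch_fn):
        self._memory_tier = {}   # hot data kept close to the compute
        self._fetch_fn = fetch_fn

    def read(self, path: str) -> bytes:
        if path not in self._memory_tier:
            # Cache miss: pull from the remote data lake exactly once.
            self._memory_tier[path] = self._fetch_fn(path)
        # Subsequent reads are served at memory speed, so bursty cloud
        # analytics jobs neither hammer nor wait on the on-prem store.
        return self._memory_tier[path]

# Hypothetical usage: cache = ReadThroughCache(s3_client_read)
```

The real system adds tiering and distributed coordination on top, but this read-through pattern captures the locality and elasticity Ma describes.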
Redis and open source succession planning
Speaking of the intersection of open source software development and cloud
services, open source luminary Tim Bray has said, “The qualities that make
people great at carving high-value software out of nothingness aren’t
necessarily the ones that make them good at operations.” The same can be said
of maintaining open source projects. Just because you’re an amazing software
developer doesn’t mean you’ll be a great software maintainer, and vice versa.
Perhaps more pertinently to the Sanfilippo example, developers may be good at
both, yet not be interested in both. (By all accounts Sanfilippo has been a
great maintainer, though he’s the first to say he could become a bottleneck
because he liked to do much of the work himself rather than relying on
others.) Sanfilippo has given open source communities a great example of how
to think about “career” progression within these projects, but the same
principle applies within enterprises. Some developers will thrive as managers
(of people or of their code), but not all. As such, we need more companies to
carve out non-management tracks for their best engineers, so developers can
advance their careers without leaving the code they love.
How data science delivers value in a post-pandemic world
The uptick in the need for data science, across industries, comes with the
need for data science teams. While hiring may have slowed down in the tech
sector – Google slowed its hiring efforts during the pandemic – data
science professionals are still in high demand. However, it's important to
keep a close eye on how these teams continue to evolve. One position that is
increasingly in demand as businesses become more data-driven is the role of
the Algorithm Translator. This person is responsible for translating business
problems into data problems and, once the data answer is found, articulating
this back into an actionable solution for business leaders to apply. The
Algorithm Translator must first break down the problem statement into use
cases, connect these use cases with the appropriate data set, and understand
any limitations on the data sources so the problem is ready to be solved with
data analytics. Then, in order to translate the data answer into a business
solution, the Algorithm Translator must stitch the insights from the
individual use cases together to create a digestible data story that
non-technical team members can put into action.
Open source contributions face friction over company IP
Why the change? Companies that have established open source programs say the
most important factor is developer recruitment. "We want to have a good
reputation in the open source world overall, because we're hiring technical
talent," said Bloomberg's Fleming. "When developers consider working for us,
we want other people in the community to say 'They've been really contributing
a lot to our community the last couple years, and their patches are always
really good and they provide great feedback -- that sounds like a great idea,
go get a job there.'" While companies whose developers contribute code to open
source produce that code on company time, the company also benefits from the
labor of all the other organizations that contribute to the codebase. Making
code public also forces engineers to adhere more strictly to best practices
than if it were kept under wraps and helps novice developers get used to
seeing clean code.
How Ekans Ransomware Targets Industrial Control Systems
The Ekans ransomware begins the attack by attempting to confirm its target.
This is achieved by resolving the domain of the targeted organization and
comparing the resolved address against a preprogrammed list of IP addresses,
the researchers note. If the resolved address doesn't match the IP
list, the ransomware aborts the attack. "If the domain/IP is not available,
the routine exits," the researchers add. If the ransomware does find a match
between the targeted domain and the list of approved IP addresses, Ekans then
infects the domain controller on the network and runs commands to isolate the
infected system by disabling the firewall, according to the report. The
malware then identifies and kills running processes and deletes the shadow
copies of files, which makes recovering them more difficult, Hunter and
Gutierrez note. In the file stage of the attack, the malware uses RSA-based
encryption to lock the target organization's data and files. It also displays
a ransom note demanding an undisclosed amount in exchange for decrypting the
files. If the victim fails to respond within the first 48 hours, the attackers
threaten to publish the victim's data, according to the Ekans ransom note
recovered by the FortiGuard researchers.
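The target-confirmation gate the researchers describe reduces to a resolve-and-compare check, sketched below for illustration. The domain and address list are hypothetical placeholders, not indicators extracted from the actual malware.

```python
# Sketch of the resolve-and-compare gate described in the report; the
# domain and IP list are hypothetical placeholders.
import socket
import sys

TARGET_DOMAIN = "victim.example.com"                  # hypothetical
PREPROGRAMMED_IPS = {"203.0.113.10", "203.0.113.11"}  # hypothetical

def confirm_target() -> bool:
    try:
        resolved = socket.gethostbyname(TARGET_DOMAIN)
    except socket.gaierror:
        return False  # "if the domain/IP is not available, the routine exits"
    return resolved in PREPROGRAMMED_IPS

if not confirm_target():
    sys.exit(0)  # no match: the ransomware aborts the attack
```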
The best SSDs of 2020: Supersized 8TB SSDs are here, and they're amazing
If performance is paramount and price is no object, Intel’s Optane SSD 905P is
the best SSD you can buy, full stop—though the 8TB Sabrent Rocket Q NVMe SSD
discussed above is a strong contender if you need big capacities and big-time
performance. Intel’s Optane drive doesn’t use traditional NAND technology like
other SSDs; instead, it’s built around the futuristic 3D Xpoint technology
developed by Micron and Intel. Hit that link if you want a tech deep-dive, but
in practical terms, the Optane SSD 905P absolutely plows through our storage
benchmarks and carries a ridiculous 8,750TBW (terabytes written) endurance
rating, compared to the roughly 200TBW offered by many NAND SSDs. If that
rating holds true,
this blazing-fast drive is basically immortal—and it looks damned good, too.
But you pay for the privilege of bleeding edge performance. Intel’s Optane SSD
905P costs $600 for a 480GB version and $1,250 for a 1.5TB model, with several
additional options available in both the U.2 and PCI-E add-in-card form
factors. That’s significantly more expensive than even NVMe SSDs—and like
those, the benefits of Intel’s SSD will be most obvious to people who move
large amounts of data around regularly.
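To put those endurance ratings in perspective, a back-of-the-envelope calculation helps; the 50GB-per-day write volume below is an assumption, and a heavy one for a consumer workload.

```python
# Rough endurance math; 50 GB/day is an assumed, fairly heavy workload.
DAILY_WRITES_TB = 0.05  # 50 GB per day

for rating_tbw in (200, 8750):
    years = rating_tbw / DAILY_WRITES_TB / 365
    print(f"{rating_tbw:>5} TBW -> ~{years:,.0f} years at 50 GB/day")
# prints: 200 TBW -> ~11 years; 8750 TBW -> ~479 years
```

Even the 200TBW of a typical NAND drive outlasts most PCs at that rate, which is why the Optane drive's rating reads as "basically immortal."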
SRE: A Human Approach to Systems
Failure will happen, incidents will occur, and SLOs will be breached. These
things may be difficult to face, but part of adopting SRE is to acknowledge that
they are the norm. Systems are made by humans, and humans are imperfect. What’s
important is learning from these failures and celebrating the opportunity to
grow. One way to foster this culture is to prioritize psychological safety in
the workplace. The power of safety may seem obvious, but it is often overlooked.
Industry thought leaders like Gene Kim have been promoting the importance of
feeling safe to fail. He addresses the issue of psychological insecurity in his
novel, “The Unicorn Project.” Main character Maxine has been shunted from a
high-functioning team to Project Phoenix, where mistakes are punishable by
firing. Kim writes: “She’s [Maxine] seen the corrosive effects that a culture of
fear creates, where mistakes are routinely punished and scapegoats fired.
Punishing failure and ‘shooting the messenger’ only cause people to hide their
mistakes, and eventually, all desire to innovate is completely extinguished.”