AI can see things we can’t – but does that include the future?

“What we focus on is augmented intelligence for humans to take action [on],”
says Radtke when I raise this concern. “We are not prescribing the action to be
taken based on the insights that we get – we're trying to make sure that the
human has all the necessary intelligence to drive the behavior that they need to
drive. We're reporting facts back – this actually happened here, this is what
has happened in the past – and you can take action based on that. It's all about
driving improved safety for everyone in that area.” When I press him on the
possible human rights concern and the inevitable pushback that will arise if AI
is routinely used to pre-emptively police areas deemed as problematic, he
answers: “I think that with every technology that's ever been out there in
history there is always a way to use it for non-good. I think you have to focus
on the good that it can provide and make sure that you police the non-good
behavior that could happen from it.” This will entail some sort of oversight.
“There are consortiums out there to help drive the ethical adoption of AI
throughout the industry – we definitely keep aware of those.”
RPA vs. BPA: Which approach to automation should you use?
Where BPA and RPA overlap, according to Mullakara, is in the goal of eliminating
human intervention from processes. “The whole idea of
BPA was to remove people from the process and that's kind of what RPA is also
aiming for. In the sense of the simple workflow automation, both can do it. RPA
does it through a UI integration whereas BPA does it mostly with APIs. And you
know, automating the workflow with the systems by invoking the systems,” he
tells us. However, Taulli explains that automation won’t get rid of people at
this point; the usual suspects, such as recessions, will.
Mullakara agrees that this messaging around BPA and RPA is a common
misconception and has earned both technologies quite a bad rap. “So, what you
actually automate with RPA for example is tasks – it's not jobs. It's not an
entire job even if it's a process. It’s not jobs, so we still need people,” he
says. 
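The UI-versus-API distinction Mullakara draws can be sketched in a few lines. This toy Python example (all class and function names are hypothetical, not taken from any real RPA or BPA product) models one fake back-office system with both surfaces: the BPA path calls the system's API directly, while the RPA path replays the form-filling steps a human operator would perform in the UI.

```python
# Toy illustration (not a real RPA/BPA product): the same task (creating an
# invoice record) done two ways. BPA-style automation calls the system's API
# directly; RPA-style automation drives the same steps through the UI layer,
# just as a human operator would. All names here are hypothetical.

class InvoiceSystem:
    """A fake back-office system exposing both an API and a UI."""
    def __init__(self):
        self.records = []

    # --- API surface (what BPA integrates with) ---
    def api_create_invoice(self, customer, amount):
        self.records.append({"customer": customer, "amount": amount})

    # --- UI surface (what RPA clicks through) ---
    def ui_fill_field(self, form, field, value):
        form[field] = value

    def ui_click_submit(self, form):
        self.records.append(dict(form))

def bpa_create_invoice(system, customer, amount):
    # BPA: one direct API call, no human-facing steps involved.
    system.api_create_invoice(customer, amount)

def rpa_create_invoice(system, customer, amount):
    # RPA: replay the keystrokes/clicks a person would perform in the UI.
    form = {}
    system.ui_fill_field(form, "customer", customer)
    system.ui_fill_field(form, "amount", amount)
    system.ui_click_submit(form)

sys_a, sys_b = InvoiceSystem(), InvoiceSystem()
bpa_create_invoice(sys_a, "ACME", 100)
rpa_create_invoice(sys_b, "ACME", 100)
print(sys_a.records == sys_b.records)  # → True
```

Both paths leave the system in the same state, which is the overlap both experts describe; the difference is purely in which integration surface the automation touches.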
All the Things a Service Mesh Can Do

Many organizations have different teams and services dispersed across
  different networks and regions of a given cloud. Many also have services
  deployed across multiple cloud environments. Securely connecting these
  services across different cloud networks is a highly desirable function that
  typically requires significant effort by network teams. In addition,
  limitations that require non-overlapping Classless Inter-Domain Routing (CIDR)
  ranges between subnets can prevent network connectivity between virtual
  private clouds (VPCs) and virtual networks (VNETs). Service mesh products can
  securely connect services running on different cloud networks without
  requiring the same level of effort. HashiCorp Consul, for example, supports a
  multidata center topology that uses mesh gateways to establish secure
  connections between multiple Consul deployments running in different networks
  across clouds. Team A can deploy a Consul cluster on EKS. Team B can deploy a
  separate Consul cluster on AKS. Team C can deploy a Consul cluster on virtual
  machines in a private on-premises data center. 
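As a small illustration of the CIDR constraint mentioned above, Python's standard-library ipaddress module can detect overlapping address ranges before any peering is attempted. The example ranges below are hypothetical stand-ins for the three teams' networks.

```python
# Quick check for the CIDR-overlap problem described above: VPC/VNET peering
# generally fails when address ranges overlap. The stdlib ipaddress module
# can flag this before any network plumbing is attempted.
import ipaddress
from itertools import combinations

def overlapping_pairs(cidrs):
    """Return every pair of CIDR ranges that overlap."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [(str(a), str(b)) for a, b in combinations(nets, 2) if a.overlaps(b)]

# Hypothetical ranges for three teams' networks (e.g. EKS, AKS, on-prem):
ranges = ["10.0.0.0/16", "10.0.128.0/17", "192.168.0.0/24"]
print(overlapping_pairs(ranges))  # → [('10.0.0.0/16', '10.0.128.0/17')]
```

A mesh-gateway topology like Consul's sidesteps this check at the service layer, but flagging collisions early is still useful when planning direct VPC/VNET peering.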
Snowballing Ransomware Variants Highlight Growing Threat to VMware ESXi Environments
The proliferation of ransomware targeting ESXi systems poses a major threat to
  organizations using the technology, security experts have noted. An attacker
  that gains access to an ESXi host system can infect all virtual machines
  running on it and the host itself. If the host is part of a larger cluster
  with shared storage volumes, an attacker can infect all VMs in the cluster as
  well, causing widespread damage. "If a VMware guest server is encrypted at the
  operating system level, recovery from VMware backups or snapshots can be
  fairly easy," McGuffin says. "[But] if the VMware server itself is used to
  encrypt the guests, those backups and snapshots are likely encrypted as well."
  Recovering from such an attack would require first recovering the
  infrastructure and then the virtual machines. "Organizations should consider
  truly offline storage for backups where they will be unavailable for attackers
  to encrypt," McGuffin adds. Vulnerabilities are another factor that is likely
  fueling attacker interest in ESXi. VMware has disclosed multiple
  vulnerabilities in recent months.
5 typical beginner mistakes in Machine Learning
Tree-based models don’t need data normalization, since raw feature values are
  not used as multipliers and outliers don’t affect the splits. Neural networks
  may not need explicit normalization either, for example when the network
  already contains a layer that handles normalization internally (e.g. the
  BatchNormalization layer in Keras). And in some cases even linear regression
  can do without it: when all the features are already in similar value ranges
  and share the same meaning, for instance when the model is applied to
  time-series data and all the features are historical values of the same
  parameter. In practice, applying unneeded data normalization won’t
  necessarily hurt the model; mostly the results will be very similar to those
  without it. However, an additional, unnecessary data transformation
  complicates the solution and increases the risk of introducing bugs.
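The claim about tree-based models can be made concrete: a tree split of the form x <= t depends only on the ordering of the values, and a monotonic transformation such as min-max scaling preserves that ordering. A minimal stdlib-only sketch with made-up numbers:

```python
# Sketch of why tree models are insensitive to normalization: a split like
# "x <= t" depends only on the ordering of values, and min-max scaling is
# monotonic, so it preserves that ordering exactly.

def min_max_scale(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

raw = [3.0, 100.0, -7.0, 42.0, 0.5]
scaled = min_max_scale(raw)

# The rank order of the samples is identical before and after scaling,
# so any threshold split partitions them the same way.
rank = lambda xs: sorted(range(len(xs)), key=lambda i: xs[i])
print(rank(raw) == rank(scaled))  # → True
```

The same argument covers any monotone rescaling (standardization, log transform of positive data), which is why skipping normalization for tree ensembles is safe rather than merely convenient.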
Git for Network Engineers Series – The Basics
Version control systems, primarily Git, are becoming more and more prevalent
    outside of the realm of software development. The increase in DevOps,
    network automation, and infrastructure as code practices over the last
    decade has made it even more important to not only be familiar with Git, but
    proficient with it. As teams move into the realm of infrastructure as code,
    understanding and using Git is a key skill. ... Unlike other version
    control systems, Git uses a snapshot method to track changes instead of a
    delta-based method. Every time you commit in Git, it basically takes a
    snapshot of those files that have been changed while simply linking
    unchanged files to a previous snapshot, efficiently storing the history of
    the files. Think of it as a series of snapshots where only the changed files
    are referenced in the snapshot, and unchanged files are referenced in
    previous snapshots. Git operations are local, for the most part, meaning it
    does not need to interact with a remote or central repository. 
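Git's snapshot model can be sketched with a content-addressed store: each commit maps file names to object ids, and because identical content hashes to the same id, an unchanged file simply references the object that already exists. The following is a simplified illustration, not real Git (which stores blobs, trees, and commits, and compresses them into packfiles).

```python
# A minimal sketch of Git's snapshot model: each "commit" records a full
# snapshot (a mapping of file name -> object id), but objects are
# content-addressed, so an unchanged file in the next commit points to the
# exact same stored object rather than to a new delta.
import hashlib

objects = {}  # content-addressed object store

def store(content: bytes) -> str:
    oid = hashlib.sha1(content).hexdigest()
    objects[oid] = content        # identical content -> identical id, stored once
    return oid

def commit(files: dict) -> dict:
    """Snapshot: map every file name to the id of its current content."""
    return {name: store(data) for name, data in files.items()}

snap1 = commit({"a.txt": b"hello", "b.txt": b"world"})
snap2 = commit({"a.txt": b"hello", "b.txt": b"world!"})  # only b.txt changed

print(snap1["a.txt"] == snap2["a.txt"])  # → True: unchanged file reuses its object
print(snap1["b.txt"] == snap2["b.txt"])  # → False: changed file gets a new object
```

This is why Git's "snapshot of everything" model is still space-efficient: the second snapshot adds only one new object to the store.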
Deep learning delivers proactive cyber defense

The timing couldn’t be better. The increasing availability of
    ransomware-as-a-service offerings, such as ransomware kits and target lists,
    is making it easier than ever for bad actors—even those with limited
    experience—to launch a ransomware attack, causing crippling damage in the
    very first moments of infection. Other sophisticated attackers use targeted
    strikes, in which the ransomware is placed inside the network to trigger on
    command. Another cause for concern is the increasing disappearance of an IT
    environment’s perimeter as cloud compute and storage resources move to the
    edge. Today’s organizations must secure endpoints or entry points of
    end-user devices, such as desktops, laptops, and mobile devices, from being
    exploited by malicious hackers—a challenging feat, according to Michael
    Suby, research vice president, security and trust, at IDC. “Attacks continue
    to evolve, as do the endpoints themselves and the end users who utilize
    their devices,” he says. “These dynamic circumstances create a trifecta for
    bad actors to enter and establish a presence on any endpoint and use that
    endpoint to stage an attack sequence.”
Towards Geometric Deep Learning III: First Geometric Architectures

The neocognitron consisted of interleaved S- and C-layers of neurons (a
    naming convention reflecting its inspiration in the biological visual
    cortex); the neurons in each layer were arranged in 2D arrays following the
    structure of the input image (‘retinotopic’), with multiple ‘cell-planes’
    (feature maps in modern terminology) per layer. The S-layers were designed
    to be translationally symmetric: they aggregated inputs from a local
    receptive field using shared learnable weights, so that cells in a single
    cell-plane had receptive fields computing the same function, but at
    different positions. The rationale was to pick up patterns that could appear
    anywhere in the input. The C-layers were fixed and performed local pooling
    (a weighted average), affording insensitivity to the specific location of
    the pattern: a C-neuron would be activated if any of the neurons in its
    input are activated. Since the main application of the neocognitron was
    character recognition, translation invariance was crucial. 
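The S-layer/C-layer pair can be illustrated in one dimension: the S-layer is a convolution (the same weights applied at every position), and the C-layer pools over position, so the pooled response no longer depends on where the pattern occurs. A toy sketch with made-up numbers follows; for simplicity it uses max pooling rather than the neocognitron's weighted average, which gives the same qualitative effect.

```python
# Minimal 1-D sketch of the neocognitron idea: an "S-layer" applies the same
# learned weights at every position (a convolution), and a "C-layer" pools
# over position, making the response insensitive to where a pattern appears.

def s_layer(signal, weights):
    """Shared-weight local receptive fields: the same filter at every position."""
    k = len(weights)
    return [sum(w * x for w, x in zip(weights, signal[i:i + k]))
            for i in range(len(signal) - k + 1)]

def c_layer(responses):
    """Pooling over position: here a global max, for full translation invariance."""
    return max(responses)

weights = [1, -1, 1]                   # a toy pattern detector
sig_a = [0, 0, 1, 0, 1, 0, 0, 0, 0]    # pattern near the start
sig_b = [0, 0, 0, 0, 0, 1, 0, 1, 0]    # the same pattern shifted right

out_a = c_layer(s_layer(sig_a, weights))
out_b = c_layer(s_layer(sig_b, weights))
print(out_a == out_b)  # → True: the pooled response ignores the pattern's position
```

Shifting the pattern shifts the S-layer's response map, but pooling collapses position away, which is exactly the translation invariance character recognition required.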
Don’t Just Climb the Ladder. Explore the Jungle Gym

Most of us do not approach work (or life) with a master plan in mind, and
    many of the steps we take are beautiful accidents that help us become who we
    are. “I’m 67 years old,” Guy said, “and I think I finally found my true
    calling.” He was referring to his podcast, Remarkable People, where he
    interviews exceptional leaders and innovators (think Jane Goodall, Neil
    deGrasse Tyson, Steve Wozniak, and Kristi Yamaguchi) about how they got to
    be remarkable. “In a sense, my whole career has prepared me for this moment.
    I’ve had decades of experience in startups and large companies. So that
    gives me the data to ask great questions that my listeners really want the
    answers to,” Guy said. Guy is undeniably brilliant, and his success is no
    accident. But still, he believes that luck has played a part in his success.
    In his words, “Basically, I’ve come to the conclusion that it’s better to be
    lucky than smart.” Maybe Guy is right. Or perhaps, the smartest people know
    when to take advantage of luck and act on the opportunities that present
    themselves. Whatever the case, it’s important to take calculated risks.
Should You Invest in a Digital Transformation Office?
With the digital transformation office comes a transformation team, which
    initiates organizational change. Laute says that it’s crucial that everyone
    inside the organization stand behind the transformation team if they truly
    want to see changes happening. “You need to have an environment where these
    people, the transformation lead and the transformation team, are allowed and
    are not afraid to speak up. These people shouldn't be biased, not just
    following what the executive board says, but really [being] able to
    challenge and to speak up. And they should have the freedom to call out if
    something is going in the wrong direction, may it be content or
    behavioral-wise,” she explains. And while clearly there can be
    technology-related challenges, Laute tells us that digital transformation is
    also a people problem, and calls for a change in culture and mindset in
    order to find success. The cultural shift, she explains, is truly where
    everything starts to come together in order to get the transformation going.
    “Digital [transformation] is not only technology. You need to change
    behaviors and you need to change processes. And most of the time, you change
    your target operating model, right?”
Quote for the day:
"Uncertainty is a permanent part of
      the leadership landscape. It never goes away." --
      Andy Stanley