Move Fast Without Breaking Things in ML
The first step in responding to the problem happened even before you were
invited to the call with your CTO: the problem was discovered and the relevant
people were alerted. This is likely the result of a metric monitoring system
responsible for ensuring that important business metrics don’t go off track.
Next, using your ML observability tooling, which we will talk about in more
detail shortly, you determine that the problem is in your search model, since
the proportion of users engaging with the top-n links returned has dropped
significantly. After learning this, you rely on your model management system to
either roll back to your previous search ranking model or deploy a naive model
that can hold you over in the interim. This mitigation is what stops your
company from losing (as much) money, since every second counts when users are
being served incorrect products. Now that things are somewhat working again,
you need to turn back to your model observability tools to understand what
happened with your model.
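As a rough sketch of that mitigation step, the snippet below pairs a
hypothetical metric monitor with a toy model registry. The engagement
threshold, the metric value, and the ModelRegistry interface are all
assumptions standing in for whatever monitoring and model-management stack you
actually run.

```python
# A minimal sketch of the mitigation described above: a monitor that watches
# top-n engagement and rolls back the search model when it degrades.
# Threshold, model names, and the registry interface are all hypothetical.

ENGAGEMENT_THRESHOLD = 0.30  # assumed alert threshold; tune to your baseline


class ModelRegistry:
    """Toy model-management system holding a deployment history."""

    def __init__(self, deployments):
        self.deployments = deployments  # oldest first, newest last

    def current(self):
        return self.deployments[-1]

    def roll_back(self):
        """Retire the current model and redeploy the previous version."""
        retired = self.deployments.pop()
        print(f"rolled back {retired} -> {self.current()}")
        return self.current()


def check_and_mitigate(top_n_engagement, registry):
    """Roll back if engagement with the top-n links drops below threshold."""
    if top_n_engagement < ENGAGEMENT_THRESHOLD:
        return registry.roll_back()
    return registry.current()


registry = ModelRegistry(["search-rank-v41", "search-rank-v42"])
check_and_mitigate(top_n_engagement=0.12, registry=registry)  # rolls back
```

In a real system the rollback would be a call into your deployment platform
rather than a list pop, but the shape of the decision, a guardrail metric
gating which model serves traffic, is the same.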
Ransomware is the top cybersecurity threat we face, warns cyber chief
Not only are cyber-criminal ransomware groups encrypting networks and demanding
a significant payment in exchange for the decryption key, now it's common for
them to also steal sensitive information and threaten to release it unless a
ransom is paid – often leading victims to feel as if they have no choice but to
give in to the extortion demands. "As the business model has become more and
more successful, with these groups securing significant ransom payments from
large profitable businesses who cannot afford to lose their data to encryption
or to suffer the down time while their services are offline, the market for
ransomware has become increasingly professional," Cameron will say. Ransomware
is successful because it works; in many cases, because organisations still
don't have the appropriate cyber defences in place to prevent cyber criminals
from infiltrating their networks in the first place, in what the NCSC CEO
describes as "the cumulative effect of a failure to manage cyber risk and the
failure to take the threat of cyber criminality seriously".
Become software engineers, not software integrators.
Ever since its inception, the IT industry has been evolving every day,
delivering better and richer technology experiences to end-users. At the same
time, the industry has continually focused on reducing development time and
cycles for software engineering teams. A significant portion of IT engineers
& organizations are motivated to ease the development process. This in turn
has become a race to give the best technologies (frameworks, tools, etc.) to
engineering teams. In this race, the focus has gradually shifted from “ease of
development” to almost “no development at all”, i.e. making tools that allow
engineers to just integrate stuff to produce the final product. Essentially,
plug and play. Of course, the big advantages of this are that companies
building software for businesses can focus more on business ideas, and that
with a reduced development cycle they can build many more software products.
However, the concern starts when engineers who get used to the plug & play
tools start losing core engineering skills like optimizing, maturing, and
architecting code.
How External IT Providers Can Adopt DevOps Practices
The key is to overcome waterfall thinking. A modern supplier will work in small
batches and will use an experimental approach to product development. The
supplier’s product development team will create hypotheses and validate them
with small product increments, ideally in production. In my experience,
many IT suppliers use agile software development and Continuous Integration
these days. But they stop their iterative approach at the boundary to
production. One problem of having separated silos for development and operations
is that in most cases these two silos have different goals (dev = throughput,
ops = stability), Diener mentioned. In contrast, a DevOps team has a common
business goal. ... In order to adopt DevOps practices, the supplier has to find
out what its client’s goal is. It has to become the supplier’s goal as well. We
at cosee use product vision workshops to shape and document the client’s goal
(impact) and its users’ needs (outcome). That’s a prerequisite for an iterative
and experimental product development approach.
Blockchain in Space: What’s Going on 4 Years After the First Bitcoin Transaction in Orbit?
The growth in both scale and affordability of space exploration is creating a
whole new sector — the Space Economy, as the United Nations Office for Outer
Space Affairs already calls it. An inevitable question then arises: what money
will the players in this space economy use? ... Despite all the advances, space
exploration often remains a costly business, both in money and in science
capital. Because of that high cost, any large project in space requires the
cooperation of numerous private companies, each providing resources and talent.
And the most ambitious programs are collaborations between governments, not all
of which necessarily put a lot of trust in each other. This is where one of
blockchain’s key advantages comes in: it enables the exchange of value and data
between independent parties in a way that doesn’t require trust, through smart
contracts, peer-to-peer transaction settlement, and the transparency and
accountability enabled by public blockchain records.
Upcoming Trends in DevOps and SRE in 2021
Service meshes are quickly becoming an essential part of the cloud-native stack.
A large cloud application may require hundreds of microservices and serve a
million users concurrently. A service mesh is a low-latency infrastructure layer
that allows high-traffic communication between different components of a cloud
application (databases, frontends, etc.). This is done via application
programming interfaces (APIs). Most distributed applications today have a load
balancer that
directs traffic; however, most load balancers are not equipped to deal with a
large number of dynamic services whose locations/counts vary over time. To
ensure that large volumes of data are sent to the correct endpoint, we need
tools that are more intelligent than traditional load balancers. This is where
Service Meshes come into the picture. In typical microservice applications, the
load balancer or firewall is programmed with static rules. However, as the
number of microservices increases and the architecture changes dynamically,
these rules are no longer enough.
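To make the contrast concrete, here is a toy sketch (not any particular mesh
implementation) of the dynamic discovery a service mesh layer provides: the
router resolves a service name against whatever endpoints are currently
registered, so routing adapts as instances come and go without anyone
rewriting static rules. The ServiceRegistry class and the endpoints are
invented for illustration.

```python
# Toy dynamic service discovery: routing by service *name* against a live
# registry, instead of the static endpoint rules a traditional load
# balancer or firewall would hold.

import random


class ServiceRegistry:
    """Tracks the currently live endpoints of each named service."""

    def __init__(self):
        self.endpoints = {}  # service name -> set of "host:port" strings

    def register(self, service, endpoint):
        self.endpoints.setdefault(service, set()).add(endpoint)

    def deregister(self, service, endpoint):
        self.endpoints.get(service, set()).discard(endpoint)

    def resolve(self, service):
        """Pick one live endpoint, the way a mesh sidecar routes a request."""
        live = self.endpoints.get(service)
        if not live:
            raise LookupError(f"no live endpoints for {service!r}")
        return random.choice(sorted(live))


registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.9:8080")
print(registry.resolve("orders"))               # routes to a live instance

registry.deregister("orders", "10.0.0.5:8080")  # instance scaled away
print(registry.resolve("orders"))               # routing adapts, no rule edits
```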
How GPT-3 and Artificial Intelligence Will Destroy the Internet
As a natural language processor and generator, GPT-3 is a language learning
engine that crawls existing content and code to learn patterns, recognize
syntax, and produce unique outputs based on prompts, questions, and other
inputs. But GPT-3 is for more than just content marketers, as witnessed by the
recent OpenAI partnership with GitHub to create code using a tool dubbed
“Copilot.” The ability to use autoregressive language modeling doesn’t just
apply to human language, but also to various types of code. The outputs are
currently limited, but their future potential use could be vast and
far-reaching.
How GPT-3 is Currently Kept at Bay
With current beta access to the OpenAI API, we developed our own tool on top of
the API. The current application and submission process with OpenAI is
stringent. Once an application has been developed, before it can be released to
the public for use in any commercial application, OpenAI requires a detailed
submission and use case for approval by the OpenAI team.
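For context, a minimal sketch of what calling that beta API from Python looked
like is below. The engine name, prompt, and sampling parameters are
illustrative rather than a recommended setup, and the API key is only issued
once OpenAI approves your application.

```python
# A minimal sketch of text generation through the beta OpenAI API using the
# 2021-era `openai` Python client; engine, prompt, and parameters are
# illustrative, not a recommended configuration.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # granted only after approval

response = openai.Completion.create(
    engine="davinci",  # a base GPT-3 model available in the beta
    prompt="Write a short product description for a solar-powered lamp:",
    max_tokens=64,     # cap the length of the generated completion
    temperature=0.7,   # higher values produce more varied text
)

print(response.choices[0].text.strip())
```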
NFTs, explained
“Non-fungible” more or less means that it’s unique and can’t be replaced with
something else. For example, a bitcoin is fungible — trade one for another
bitcoin, and you’ll have exactly the same thing. A one-of-a-kind trading card,
however, is non-fungible. If you traded it for a different card, you’d have
something completely different. You gave up a Squirtle, and got a 1909 T206
Honus Wagner, which StadiumTalk calls “the Mona Lisa of baseball cards.” (I’ll
take their word for it.) At a very high level, most NFTs are part of the
Ethereum blockchain. Ethereum is a cryptocurrency, like bitcoin or dogecoin, but
its blockchain also supports these NFTs, which store extra information that
makes them work differently from, say, an ETH coin. It is worth noting that
other blockchains can implement their own versions of NFTs. (Some already have.)
NFTs can really be anything digital (such as drawings, music, your brain
downloaded and turned into an AI), but a lot of the current excitement is around
using the tech to sell digital art.
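A toy sketch (nothing like the real ERC-721 contract standard) can make the
fungible/non-fungible distinction concrete: fungible value is an
interchangeable amount in a balance, while each NFT is a unique token id
carrying its own metadata. All names and values below are invented.

```python
# Fungible vs. non-fungible in miniature: balances hold interchangeable
# amounts, while the NFT ledger maps each unique token id to an owner and
# metadata that no other token can substitute for.

fungible_balances = {"alice": 2.0, "bob": 1.0}  # e.g. ETH: any unit is equal

nft_ledger = {
    # token id -> (owner, metadata); no two entries are interchangeable
    1: ("alice", {"artwork": "pixel-cat.png"}),
    2: ("bob", {"artwork": "moon-doge.gif"}),
}


def transfer_fungible(src, dst, amount):
    """Moving an amount leaves both parties holding equivalent value."""
    fungible_balances[src] -= amount
    fungible_balances[dst] = fungible_balances.get(dst, 0.0) + amount


def transfer_nft(token_id, dst):
    """Moving an NFT moves that specific token; there is no substitute."""
    _owner, metadata = nft_ledger[token_id]
    nft_ledger[token_id] = (dst, metadata)


transfer_fungible("alice", "bob", 1.0)  # bob doesn't care which coin he got
transfer_nft(1, "bob")                  # bob now owns pixel-cat.png itself
print(fungible_balances)
print(nft_ledger)
```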
Demystifying AI: The prejudices of Artificial Intelligence (and human beings)
In a way, the results of these algorithms hold a mirror to human society. They
reflect and perhaps even amplify the issues already present. We know that these
algorithms need data to learn. Their predictions are only as good as the data
they are trained on and the goal they are set to achieve. The amount of data
needed to train these algorithms is huge (think millions of examples and
above). Suppose we are trying
to develop an algorithm to identify cats and dogs from pictures. Not only do we
need thousands of pictures of cats and dogs, but they should be labeled (say the
cat is class 0 and dog is class 1) so that the algorithm can understand. We can
download these images off the internet (the ethics of which is questionable),
but still, they need to be labeled manually. Now, consider the complexity and
effort required to correctly label a million images in one thousand classes.
Often this labeling task is done by “cheap labor” who may or may not have the
motivation to do it correctly, or they simply make mistakes. Another problem in
the data set is that of class imbalance: if, say, 95 percent of the images are
dogs, a model that always predicts “dog” scores 95 percent accuracy while never
finding a single cat.
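A few lines of Python make the trap concrete: with the hypothetical 95/5
dog/cat split above, a degenerate model that always answers “dog” reports high
accuracy while never identifying a single member of the minority class.

```python
# Class-imbalance trap in miniature; the counts are made up for illustration.
labels = [1] * 950 + [0] * 50    # 1 = dog, 0 = cat; heavily imbalanced
predictions = [1] * len(labels)  # degenerate model: always predict "dog"

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
cat_recall = sum(p == y == 0 for p, y in zip(predictions, labels)) / labels.count(0)

print(f"accuracy:   {accuracy:.0%}")    # 95% -- looks great on paper
print(f"cat recall: {cat_recall:.0%}")  # 0%  -- every cat is missed
```

This is why evaluation on imbalanced data should look at per-class metrics,
not overall accuracy alone.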
Three Mistakes That Will Ruin Your Multi-Cloud Project (and How to Avoid Them)
A multi-cloud strategy only increases the likelihood of experiencing one of these
errors. The complexity of multiple clouds provides an extended attack surface
for threat actors. An increased number of services means a higher chance of
experiencing a misconfiguration or data leak. Centralized visibility and
management are necessary to combat risk and ensure protection and compliance
across multi-cloud environments. Proper governance requires a full view of the
cloud, complete with resource consumption, how new services are accessed, and
systems in place for risk mitigation, including data and privacy policies and
processes. Rather than a cyclically executed process, risk management must be
continuous, comprising various coordinated actions and tasks to oversee and
manage risk. An ecosystem-wide framework going beyond traditional IT is
necessary for proper risk management. Enterprises must therefore prioritize
training and awareness within their organization, teaching team members how to
securely use multiple cloud services.
Quote for the day:
"Integrity is the soul of leadership!
Trust is the engine of leadership!" -- Amine A. Ayad