AI Cameras: Can They Replace IoT Sensors?
High-end cameras, especially on smartphones, are an easy, high-quality source
of data, captured directly in the field. When it comes to industrial
maintenance, any technician has a tool in their pocket to upload an image or
video and consult an AI for a solution to the issue. In agriculture, a farmer
can take a picture of a crop and immediately have information about a potential
disease. Drones have also become an important part of computer vision,
especially for agricultural or large industrial installations (power lines,
recycling plants, pipelines, etc.). The ability of drones to fly over large
areas means that cameras can collect images that would have been
cost-prohibitive to gather even a few years ago. ... Edge infrastructure
addresses this challenge by analyzing the footage locally and uploading only a
fraction of the data for further analysis. With video, data privacy and
security are extremely sensitive concerns, especially compared to devices such
as agricultural soil sensors. Storing the files locally on edge devices can
reduce the risk of hacking and, above all, clarifies who is responsible (the
site manager or the client) in the event of data theft.
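As a rough sketch of that local-filtering pattern, the snippet below scores frames on the device and uploads only those above a threshold; the scoring model and upload endpoint are placeholders invented for the illustration, not any particular product’s API.

```python
# Sketch of the edge-filtering pattern: analyze footage on the device and
# upload only the small fraction of frames that looks interesting.
# anomaly_score() and upload() are hypothetical placeholders.

def anomaly_score(frame: bytes) -> float:
    """Placeholder for an on-device model scoring a frame from 0.0 to 1.0."""
    return 0.0  # a real implementation would run local inference here

def upload(frame: bytes) -> None:
    """Placeholder for shipping a flagged frame to central storage."""
    ...

def filter_and_upload(frames, threshold: float = 0.8) -> int:
    """Keep everything local; upload only frames that cross the threshold."""
    uploaded = 0
    for frame in frames:
        if anomaly_score(frame) >= threshold:
            upload(frame)
            uploaded += 1
    return uploaded
```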
How to write tests that find bugs
Some people say not to focus on unit tests, but to focus more on integration
and system tests. I belong to team unit testing. So in my book, I do recommend
people try to focus on unit testing as much as possible, the reason being I
believe that if you design your code well, the core of your system, the
important parts of your system, will be just for loops and ifs and data
structures being manipulated. And those can be easily tested with unit
testing, and by easily, I mean it’s super easy and fast to write a test, they
run super fast, you can quickly explore different corner cases. You know, it’s
super easy to just instantiate a class, put some values in, call methods. That
is why I prefer unit [00:16:00] testing. But that requires, though, that you
develop your system with this, you know, unit testing, this testability in
mind, and this is not always the case. Now, some people prefer integration
testing, and they have a point there, because, you know, in lots of types of
systems, a lot of the bugs only happen when you put components together. ...
And if you’re really mocking out components, you know, when testing one
component, you kind of mock the rest, maybe you’re going to miss those bugs.
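To make the speaker’s point concrete, here is a minimal unit-test sketch in Python; the ShoppingCart class is invented for the example, but it shows the "instantiate a class, put some values in, call methods" workflow and how cheaply corner cases can be explored.

```python
import unittest

# Hypothetical class, used only to illustrate the speaker's point: core logic
# that is just loops, ifs, and data structures is cheap to unit test.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, price: float, quantity: int = 1) -> None:
        if price < 0 or quantity < 1:
            raise ValueError("invalid item")
        self.items.append((price, quantity))

    def total(self) -> float:
        return sum(price * qty for price, qty in self.items)

class ShoppingCartTest(unittest.TestCase):
    def test_total_sums_items(self):
        cart = ShoppingCart()            # instantiate a class
        cart.add(2.50, quantity=2)       # put some values in
        cart.add(1.00)
        self.assertEqual(cart.total(), 6.00)  # call methods, check results

    def test_rejects_invalid_quantity(self):  # corner case, fast to explore
        cart = ShoppingCart()
        with self.assertRaises(ValueError):
            cart.add(2.50, quantity=0)

if __name__ == "__main__":
    unittest.main()
```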
Why Everyone’s Talking About Event Streaming
To maximize the value of their data as it’s created, instead of waiting hours,
days, or even longer to analyze it once it’s at rest, Overstock needed a
streaming and messaging platform that would enable them to employ real-time
decision-making: delivering personalized experiences and recommending products
likely to be well-received by customers at the perfect time (really fast, in
other words). Data messaging and streaming is a key part of an event-driven
architecture, which is a software architecture or programming approach built
around the capture, communication, processing, and persistence of events—mouse
clicks, sensor outputs, and the like. Processing streams of data involves taking
actions on a series of data that originates from a system that continuously
creates “events.” The ability to query this non-stop stream and find anomalies,
recognize that something important has happened, and act on it quickly and in a
meaningful way, is what streaming technology enables. This is in contrast to
batch processing, where an application would store the data after ingesting
it, process it, and then store the processed result or forward it to another
application or tool.
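To make the contrast concrete, here is a generic sketch (invented for this note, not Overstock’s actual stack) that consumes a continuous event stream and reacts the moment an anomalous value appears, rather than persisting everything and analyzing it later in batch.

```python
import itertools
import random
import time

# Generic illustration: a made-up event source standing in for a messaging
# platform topic, consumed one event at a time so anomalies can be acted on
# the moment they appear.

def event_stream():
    """Endless source of events, e.g. clickstream or sensor readings."""
    while True:
        yield {"ts": time.time(), "value": random.gauss(100.0, 5.0)}

def process_stream(events, threshold: float = 115.0) -> None:
    """Act on each event as it arrives instead of storing it for later."""
    for event in events:
        if event["value"] > threshold:  # "something important has happened"
            print(f"anomaly at {event['ts']:.0f}: value={event['value']:.1f}")
        # a batch system would instead persist every event here and only
        # analyze the accumulated data once it is at rest

# Consume a bounded slice of the (conceptually endless) stream.
process_stream(itertools.islice(event_stream(), 10_000))
```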
3 Vectors of Artificial Intelligence and Machine Learning
Is MLOps for real? It’s happening. With services such as AutoML, it’s getting
easier to scale out machine learning models. OctoML is based upon Apache TVM,
an end-to-end machine learning compiler framework for CPUs, GPUs and
accelerators. Apache TVM is a project originating from the University of
Washington. TVM stands for tensor virtual machine. It provides a common layer
across targets that exposes a clean interface to the upper layers of the stack
and to machine learning frameworks such as TensorFlow and
PyTorch. ... What Apache TVM does is create a set of common primitives
across all sorts of different hardware, from embedded CPUs to server CPUs,
small GPUs, large GPUs, accelerators, and so on. And then it uses machine
learning internally to produce efficient machine learning code. So, in
essence, it uses machine learning for machine learning code optimization.
The reason that’s important is because, by and large today, the work done to
get a model ready for deployment is manual.
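For a sense of what this looks like in practice, the sketch below compiles a model through TVM’s Relay Python API, roughly as it stood around TVM 0.8; exact calls vary by version, and the ONNX file name and input shape are placeholders.

```python
# Minimal sketch of compiling a model with Apache TVM's Relay API, roughly
# as of TVM 0.8 (details vary by version; "model.onnx" is a placeholder).
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}  # assumed input name and shape

# Import the framework model into TVM's common intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a concrete target; "llvm" means the local CPU. The same model
# could instead be built for a GPU (e.g. target="cuda") or an embedded device.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module.
device = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](device))
```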
Critical ManageEngine Desktop Server Bug Opens Orgs to Malware
On the mobile side, users can deploy profiles and policies; configure
devices for Wi-Fi, VPNs, email accounts and so on; apply restrictions on
application installs, camera usage and the browser; and manage security with
passcodes and remote lock/wipe functionality. As such, the platform offers
far-reaching access into the guts of an organization’s IT footprint,
potentially making for an information-disclosure nightmare in the case of an
exploit. As well, the ability to install a .ZIP file paves the way for
the installation of malware on all of the endpoints managed by the Desktop
Central instance. In the case of the MSP version – which, as its name
suggests, allows managed service providers (MSPs) to offer endpoint
management to their own customers – the bug could be used in a supply-chain
attack. Cybercriminals can simply compromise one MSP’s Desktop Central MSP
edition and potentially gain access to the customers whose footprints are
being managed using it, depending on security measures the provider has put
in place.
DevOps: CI/CD Tools to Watch Out for in 2022
With Continuous Integration, we are able to find bugs as soon as they are
introduced. This leads to shipping releases faster and with better quality.
In addition, Continuous Integration eliminates manual handoffs and ensures
that releases are frequent. This way, developers can focus on writing code
for new features rather than fixing bugs all day long. It saves time and
effort and boosts developer morale. In most organizations, friction in
software delivery arises because most of the tasks are manual and error-prone.
For example, someone provisions or updates the environment as needed, a team
deploys a specific version of the software, and another team keeps track of
what’s running where. This creates dependency hell and a lack of visibility.
Ultimately, these inefficiencies lead to slower software releases and lost
revenue and opportunities. The best approach to achieving velocity is to
automate every step of the deployment pipeline. This leads to more frequent,
rapid, and predictable release cycles with far fewer errors, and to happier,
more productive engineering teams.
Cybersecurity, blockchain and NFTs meet the metaverse
One big difference is the maturity of regulation. Bitcoin was born out of
the financial crisis and the subprime mortgage meltdown of 2008/2009. It
tapped the visceral reaction to a system that favors large financial
institutions and hurts the average person. You remember the movie “The Big
Short”? Christian Bale’s character couldn’t understand why, when real estate
markets were cratering around him, his “insurance policy” wasn’t skyrocketing
in value. The reason was that the big banks, likely with government
knowledge, were unwinding their positions to reduce the damage. Once they
limited their downside exposure, the market crashed in dramatic fashion.
Watch this clip of Stephen Colbert interviewing Michael Lewis, author of
“The Big Short.” ... Bitcoin specifically (and cryptocurrency generally) is
the confluence of cryptography, software engineering and game theory – all
well-understood and applied disciplines. The blockchain and cryptocurrency
can cut out the so-called trusted third party and enable direct, highly
secure transactions between two parties.
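As a toy illustration (invented for this note, not from the article) of the data structure behind that claim: each block commits to the hash of the block before it, so altering any past record invalidates everything after it, which is what lets two parties verify a shared history without a trusted third party.

```python
import hashlib
import json

# Toy hash chain showing how a blockchain can replace a trusted third party:
# each block commits to the previous block's hash, so altering any past
# record breaks the chain and is immediately detectable.

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False  # some earlier block was tampered with
    return True

chain: list = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
assert verify(chain)

chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
assert not verify(chain)                 # detected immediately
```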
Edge computing set for growth – that is, when we can agree what it is
Perhaps the main issue with edge computing is that it is really a bunch of
diverse applications that have been lumped together into one category,
simply because they operate outside the bounds of the traditional data
centre. However you choose to define it, it looks like it isn't going
away. In Red Hat's 2022 Global Tech Outlook report, edge computing was
listed among the emerging technology workloads that organisations are most
likely to consider over the coming year. In fact, if you consider edge and
IoT to overlap somewhat, the two combined were the leading category, with 61
per cent of respondents saying they were considering one or both. Red Hat
itself defines edge computing as a distributed computing model in which data
is captured, stored, processed and analysed at or near the physical location
where it is created. The firm says that it views it as "an opportunity to
extend the open hybrid cloud all the way to the data sources and end users."
... Because workloads can reside across a continuum of core, edge, and
endpoint locations, edge computing requires a significant amount of
coordination among technology and service providers, it says.
Could Rust be the Future of JavaScript Infrastructure?
Rust helps developers write fast software that’s memory-efficient. It’s a
modern replacement for languages like C++ or C with a focus on code safety
and concise syntax. Rust is quite different from JavaScript. JavaScript tries
to find variables or objects that are no longer in use and automatically
clears them from memory. This is called garbage collection. The language
spares the developer from thinking about manual memory management. With Rust,
developers have more control over memory allocation without it being as
painful as C++. “Rust uses a relatively unique memory management approach
that incorporates the idea of memory ‘ownership’. Basically, Rust keeps
track of who can read and write to memory. It knows when the program is
using memory and immediately frees the memory once it is no longer needed.
It enforces memory rules at compile-time, making it virtually impossible to
have runtime memory bugs. You do not need to manually keep track of memory.
The compiler takes care of it.” — Discord
A new kind of old-school testing
Developers are writing less logic and spending more time gluing things
together. Today the average production system has interactions with multiple
databases, APIs, and other microservices and endpoints. Any time your
software has to talk to a different piece of software, you can no longer
make simple assumptions about how your system is going to behave. Every
database, message queue, cache, and framework has its own particular states,
rules, and constraints that determine its behavior. Developers need a way to
test these behaviors in advance of deployment, and this class of testing is
called integration testing. ... There will always be impassioned debates
about how to best balance speed versus software quality. One reason for the
great popularity of the Java compiler and similar technologies has been
their ability to help developers find failures closer to the point of
development so they can fix them quickly. There will always be diabolical
bugs that evade your testing, but with the increasing ease of software unit
testing and integration testing today, it’s getting harder to credibly argue
against investing more cycles into testing your code and its integration
surface before pushing to production.
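As a small illustration (invented here, with SQLite standing in for a production database) of the kind of integration test described above: the code under test talks to a real database engine, so the engine’s own rules and constraints get exercised instead of being mocked away.

```python
import sqlite3
import unittest

# The function under test talks to a real database engine (SQLite here), so
# the database's own constraints are exercised rather than mocked away. That
# is what distinguishes this from a unit test.

def add_user(conn: sqlite3.Connection, email: str) -> None:
    conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()

class AddUserIntegrationTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")  # real engine, throwaway DB
        self.conn.execute("CREATE TABLE users (email TEXT UNIQUE NOT NULL)")

    def test_inserts_user(self):
        add_user(self.conn, "a@example.com")
        rows = self.conn.execute("SELECT email FROM users").fetchall()
        self.assertEqual(rows, [("a@example.com",)])

    def test_duplicate_email_rejected_by_database(self):
        add_user(self.conn, "a@example.com")
        # The UNIQUE constraint lives in the database, not in our code;
        # only a test against the real engine catches this behavior.
        with self.assertRaises(sqlite3.IntegrityError):
            add_user(self.conn, "a@example.com")

if __name__ == "__main__":
    unittest.main()
```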
Quote for the day:
"Focusing on others will give you
more influence and power than focusing on yourself." --
Kevin Eikenberry