For websites and services where account security really matters, like your bank, passwords alone simply are not enough anymore. In those cases you need two-factor authentication (2FA) -- specifically, the kind where a mobile app generates login codes for you, not the kind where you are sent an SMS text message, because SMS codes can be intercepted or simply fail to arrive. With app-based 2FA, you log in to an app or website as normal, then open an authenticator app that generates a new six-digit code every 30 seconds. The authenticator is synced with the service during setup, so the code it produces matches the one the service expects to receive. You enter the code from the authenticator app into the app or website that's asking for it, and your login is complete. Google makes its own free authenticator app for iOS and Android. Unfortunately, there isn't a standardized method for setting up your account with 2FA: Amazon, PayPal, eBay and your bank will all use slightly different systems and terminology.
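Those six-digit, 30-second codes are typically time-based one-time passwords (TOTP, standardized in RFC 6238), which is why the app and the service stay in sync without talking to each other. A minimal sketch in Python using only the standard library (the secret below is the RFC test key, not a real account secret):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, digits=6, period=30):
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Both sides derive the same counter from the current 30-second window.
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both the authenticator app and the server hold the same shared secret and the same clock, they independently compute matching codes -- no network connection between them is needed at login time.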
A bottleneck is the point in a system where contention occurs, and these points usually surface during periods of high usage or load. Once identified, a bottleneck can be remedied, bringing performance back into an acceptable range. Synthetic load testing lets you exercise specific scenarios and identify potential bottlenecks, although it only covers contrived situations. In most cases it is better to analyze production metrics and look for outliers that signal trouble on the horizon. Key performance indicators from your application include requests/sec, latency, and request duration; indicators from the runtime or infrastructure include CPU time, memory usage, heap usage, garbage collection, and so on. This list isn't exhaustive; business metrics or other external metrics may factor into your optimizations as well.
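As a sketch of the outlier-hunting approach described above, here is a small Python helper that computes nearest-rank latency percentiles from a batch of request durations and flags samples above the 99th percentile (the function name and thresholds are illustrative, not from the article):

```python
def latency_summary(samples_ms):
    """Summarize request durations (ms) and flag slow outliers above p99."""
    ordered = sorted(samples_ms)

    def pct(p):
        # Nearest-rank percentile over the sorted samples.
        idx = round(p / 100 * (len(ordered) - 1))
        return ordered[idx]

    p50, p95, p99 = pct(50), pct(95), pct(99)
    return {
        "p50": p50,
        "p95": p95,
        "p99": p99,
        "outliers": [s for s in samples_ms if s > p99],  # candidates to investigate
    }
```

In production you would feed this from your metrics pipeline rather than a raw list, but the principle is the same: medians tell you the typical experience, while the tail above p99 points at the contended resource.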
Effective data governance can enable intelligent real-time business decision-making that will, in turn, drive organisations in a more profitable direction. One of the best approaches when it comes to unleashing big data’s potential is investing in a data lake: a central repository that allows organisations to collect everything — every bit of data, regardless of its structure and format — which can then be accessed, normalised, explored and enriched by users across multiple business units to reveal patterns across a shared infrastructure. The advantage of this approach is that organisations can gain end-to-end visibility of the enterprise data and actionable business insights. The disadvantage is that the data has to be kept up to date, which takes time and effort. Another downside is the GDPR compliance and data security risks that are associated with depositing the entirety of an organisation’s business-critical data into a data lake.
The promise of AI is in augmenting and enhancing human intelligence, expertise and experience. Think helping an aircraft mechanic make better, more accurate and more timely repairs – not automating the mechanic out of the picture. But the scope of what you can do is tempered by inherent limitations in today’s AI systems. I like to frame this as a recognition that computers don’t “understand” the world the way we do (if at all). I don’t want to get into an epistemological discussion about the definition or nature of understanding, but here’s what I think is a very illustrative and accessible example. One common application of AI is image processing: I show the machine an image – like one you might take with your phone – and the machine’s task is to report back what’s in the image. You build a system like this by feeding thousands, millions, or even billions of images into an AI program (such as a neural network), in the hope that somehow, as a result of processing all of those images, the software builds some kind of semantic representation of the world.
Dr Web researchers note that for now UC Browser represents a "potential threat" but warn that all users could be exposed to malware due to its design. "If cybercriminals gain control of the browser's command-and-control server, they can use the built-in update feature to distribute any executable code, including malware. Besides, the browser can suffer from MITM (man-in-the-middle) attacks," the security company notes. The MITM threat arises because UCWeb committed the security blunder of delivering updates to the browser over an unsecured HTTP connection. "To download new plug-ins, the browser sends a request to the command-and-control server and receives a link to a file in response. Since the program communicates with the server over an unsecured channel (the HTTP protocol instead of the encrypted HTTPS), cybercriminals can hook the requests from the application," explains Dr Web. "They can replace the commands with ones containing different addresses. ... "
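The mitigation Dr Web's analysis implies is twofold: fetch updates over TLS and verify their integrity before applying them. A hedged Python sketch of that check (the URL and the hash-pinning scheme are hypothetical illustrations, not UCWeb's actual update protocol):

```python
import hashlib
import hmac
from urllib.parse import urlparse

def safe_to_apply(update_url, payload, expected_sha256):
    """Return True only if the update came over HTTPS and matches its pinned hash."""
    if urlparse(update_url).scheme != "https":
        return False  # plain HTTP: a man-in-the-middle could swap the payload
    digest = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison avoids leaking hash prefixes via timing.
    return hmac.compare_digest(digest, expected_sha256)
```

Pinning a hash (or, better, a code-signing signature) means that even a compromised delivery channel cannot substitute its own executable, which is exactly the attack the researchers describe.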
In three separate experiments, research teams used electrocorticography (ECoG) to measure electrical impulses in the brains of human subjects while the subjects listened to someone speaking, or while the subjects themselves spoke. The data was then used to train neural networks to produce speech sound output. The motivation for this work is to help people who cannot speak by creating a brain-computer interface or "speech prosthesis" that can directly convert signals in the user's brain into synthesized speech sound. The first experiment, which was run by a team at Columbia University, used data from patients undergoing treatment for epilepsy. The patients had electrodes implanted in their auditory cortex, and ECoG data was collected from these electrodes while the patients listened to recordings of short spoken sentences. The researchers trained a deep neural network (DNN) with Keras and TensorFlow, using the ECoG data as the input and a vocoder/spectrogram representation of the recorded speech as the target.
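The actual experiments trained deep networks in Keras/TensorFlow; the NumPy sketch below only illustrates the shape of the problem -- a per-frame mapping from ECoG channel features to spectrogram bins -- using synthetic data and a closed-form ridge regression as a stand-in for the DNN (all dimensions and data here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes only: 1000 time frames, 64 ECoG channels, 32 spectrogram bins.
n_frames, n_channels, n_bins = 1000, 64, 32
ecog = rng.standard_normal((n_frames, n_channels))            # neural features (input)
true_map = rng.standard_normal((n_channels, n_bins))          # unknown brain-to-sound mapping
spectrogram = ecog @ true_map + 0.01 * rng.standard_normal((n_frames, n_bins))  # target

# Closed-form ridge regression as a linear stand-in for the DNN:
lam = 1e-3
W = np.linalg.solve(ecog.T @ ecog + lam * np.eye(n_channels), ecog.T @ spectrogram)
pred = ecog @ W   # predicted spectrogram frames, ready for a vocoder
```

A real speech prosthesis needs the nonlinear capacity of a DNN and a vocoder to turn predicted spectrogram frames back into audio, but the input/target layout is the same as in this toy regression.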
“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained Shashank Samala, Tempo’s co-founder and vice president of product, in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.” Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses Amazon Web Services (AWS) GovCloud to network everything in a bi-directional feedback loop. “After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production.
Value stream mapping purists may argue that the above exercise is not the real process because traditional components such as the time metrics, activity ratios and future state were omitted. Fear not, these components are included in a full-blown formal value stream mapping exercise. However, teams such as Thrasher’s have made substantial improvements with shorter versions of the exercise simply by making work visible. The net result is a compelling change in the right direction. Value stream management is the practice of improving the flow of the activities that deliver and protect business value -- and proving it. It’s a nascent digital concept that measures work artifacts in real time to visualize the flow of business value and expose bottlenecks so that it can be optimized. A significant strength of this practice centers on how and where work is undertaken. This activity is captured through the work items mentioned above in the toolchain, providing a traceable record of how software is planned, built and delivered.
Redis can be used widely in a microservices architecture, and it is probably one of the few popular software solutions that your application can leverage in so many different ways. Depending on the requirements, it can act as a primary database, a cache, or a message broker. Because it is also a key/value store, it can serve as a configuration server or discovery server in a microservices architecture. And although it is usually described as an in-memory data store, it can also run in persistent mode. ... If you have already built microservices with Spring Cloud, you probably have some experience with Spring Cloud Config, which provides a distributed configuration pattern for microservices. Unfortunately, Spring Cloud Config does not support Redis as a backend repository for property sources. That's why I decided to fork the Spring Cloud Config project and implement this feature. I hope my implementation will soon be included in the official Spring Cloud release, but, for now, you may use my forked repo to run it. It is available on my GitHub account: piomin/spring-cloud-config.
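As an illustration of the key/value-store-as-config-server idea (not Spring Cloud Config's actual contract), here is a minimal Python sketch. The `<app>:<profile>:<key>` naming scheme is an assumption for this example; in real use `client` would be a `redis.Redis` connection, but any object exposing `get`/`set` works:

```python
class RedisConfigSource:
    """Minimal config-source sketch: properties live under '<app>:<profile>:<key>'.

    The key layout is a hypothetical convention, not Spring Cloud Config's format.
    """

    def __init__(self, client, app, profile="default"):
        self.client = client                    # e.g. redis.Redis(host="localhost")
        self.prefix = f"{app}:{profile}:"       # namespace per application/profile

    def get(self, key, default=None):
        value = self.client.get(self.prefix + key)
        return value if value is not None else default

    def set(self, key, value):
        self.client.set(self.prefix + key, value)
```

Namespacing by application and profile is what lets one Redis instance serve many services, which is the same separation Spring Cloud Config achieves with application name and profile in its property-source lookup.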
Data visualization in VR and AR could be the next big use case for the technologies. It's early days, but examples of 3D data visualizations hint at big changes to come in how we interact with data. Recently, I spoke with Simon Wright, Director of AR/VR for Genesys, about one such experiment. Genesys helps companies streamline their customer service experience with automated phone menus and chatbots, for example, but in his role Wright has a lot of latitude to push the boundaries of Mixed Reality technologies for enterprise customers. "One of the things I'm personally excited about is the ability to create hyper visualizations," Wright tells me. "We capture massive amounts of data, and we've created prototypes to almost magically bring up a 3D model of Genesys data. This is where there could be huge opportunities for AR, which has advantages over a 2D screen." For one recent project, Wright and his team wanted to project data pertaining to website analytics onto the wall of a restaurant in a beautiful way. "It started as a marketing-led project," he explains.
Quote for the day:
"Leadership to me means duty, honor, country. It means character, and it means listening from time to time." -- George W. Bush