Daily Tech Digest - December 17, 2017

With 2018 upon us, the worlds of both business and personal software are ramping up to make the next few years something of an artificial intelligence arms race. On the consumer side of things, machine learning and AI make our lives easier in small ways. Case in point: many of us now have a smart speaker like an Amazon Echo or Google Home sitting on our countertops. While these kinds of AI applications are helpful and entertaining, their self-learning capabilities are limited, to say the least. In the world of business, there’s more immediate potential for self-learning software. “We are drowning in information,” says Vita Vasylyeva of Artsyl Technologies. “The biggest bottlenecks in any business process involve the handling of documents and manual input of data from those documents. At the heart of those bottlenecks is the transformation of unstructured content into structured data.”


A Review on Business Intelligence and Big Data

Technological advancements in IT have led to storing more data at lower cost and to drastically increased transmission rates. Parallel computing has also increased processing power by running workloads across multiple cores simultaneously. It is hard to find a device or application that does not generate data: sensors, plane engines, online transactions, emails, videos, audio, images, clickstreams, logs, posts, search queries, health records, social networking interactions, scientific instruments, and mobile phones all produce it. Together they generate data of such volume, velocity, and variety that it is impossible to store and process with classical technologies and programming paradigms. This kind of data is called big data. International Data Corporation (IDC) reports that the digital universe will keep expanding and growing more complex. The volume of data is expected to reach 8 ZB by 2020, and the speed of data generation is also increasing exponentially.


Deep learning is currently one of the main focuses of machine learning, and it has led to much speculation about AI and its possible impact on the future. Although deep learning garners a great deal of attention, people often fail to realize that it has inherent restrictions which limit its application and effectiveness in many industries and fields. Deep learning requires human expertise and significant time to design and train. Deep learning algorithms also lack interpretability: they are not able to explain their decision-making. In mission-critical applications such as medical diagnosis, airlines, and security, people must feel confident in the reasoning behind the program, and it is difficult to trust systems that do not explain or justify their conclusions. Another limitation is that minimal changes to the input can induce large errors. For example, in vision classification, slightly changing an image that was once correctly classified, in a way that is imperceptible to the human eye, can cause a deep neural network to label the image as something else entirely.
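
To make that fragility concrete, here is a minimal sketch, not taken from the article, of the idea behind such perturbations: a plain linear classifier stands in for the deep network, and every input value is nudged by a tiny amount in the worst-case direction (the same sign-of-the-gradient idea behind FGSM-style attacks). The weights, input, and epsilon are all illustrative.

```python
# Minimal sketch: a tiny, targeted perturbation flips a classifier's decision.
# A linear model stands in here for the deep network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 10_000
w = rng.normal(size=n_pixels)   # stand-in model weights (assumed, not trained)
x = rng.normal(size=n_pixels)   # an input the model currently classifies

def predict(v):
    return "cat" if v @ w > 0 else "dog"

original = predict(x)

# Nudge every pixel by at most eps in the direction that most hurts the current label.
eps = 0.06
x_adv = x - eps * np.sign(x @ w) * np.sign(w)

print("original prediction: ", original)
print("perturbed prediction:", predict(x_adv))
print("max per-pixel change:", np.abs(x_adv - x).max())  # exactly eps
```

Because the per-pixel change is bounded by eps, the perturbed input looks essentially identical to a human, yet the decision flips.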


The day when the computer becomes a data scientist

The data scientist usually starts every project by digging into the data (using charts, scatter plots, histograms, and other visual tools), then cleaning it by dropping irrelevant variables and filling in missing data, a step known as preprocessing. The next step is choosing the right classifier or regression method, followed by picking the right features in the data in order to get the most accurate prediction. In between, the data scientist tests different combinations of classifier parameters to obtain the most accurate and efficient prediction mechanism. All of these steps and methods demand strong analytical and comprehension skills from the person who applies them, and right now it doesn't look like a computer can do all of them better than a human being. Nevertheless, the computer plays an important role in many parts of the data scientist's projects. A good example is cross validation in the model-selection stage, where an algorithm 'finds' the best classifier or the best classifier parameters.
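
As an illustration of that last point, here is a minimal sketch of cross-validated model selection. It assumes scikit-learn is available, and the dataset, pipeline, and parameter grid are illustrative choices rather than anything prescribed by the article: GridSearchCV scores every parameter combination with k-fold cross validation and keeps the best one.

```python
# Minimal sketch of cross-validated model selection with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and classifier bundled so every CV fold is preprocessed consistently.
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# The computer 'finds' the best classifier parameters by searching this grid,
# scoring each combination with 5-fold cross validation.
param_grid = {"clf__C": [0.1, 1, 10], "clf__gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
print("held-out accuracy:", round(search.score(X_test, y_test), 3))
```

The human still decides which models and parameter ranges are worth searching; the machine only automates the search itself.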


Why telcos will soon be betting on Artificial Intelligence to build their networks
“As more reliable and affordable bandwidth is enabled, it unleashes a plethora of opportunities that can traverse over telecom networks. So, a convergence at network level becomes possible. This is then value enhanced by adding dynamism and intelligence into the systems through AI, which makes the solution intuitive, proactive as well as reactive to the situations,” said Faisal Kawoosa, Lead Analyst, CyberMedia Research. ... One may not see telecom the way we look at it presently, which also means a different set of revenue streams. “AI is expected to have an impact in a multitude of areas – the most important being traffic classification, anomaly detection and prediction, resource utilization and network optimization, along with network orchestration. Further, it will also assist mobile devices with virtual assistants and bots,” said Arjun Vishwanathan, Associate Director, Emerging Technologies, IDC.


2018: The Year Central Banks Begin Buying Cryptocurrency

In 2018, G7 central banks will witness bitcoin and other cryptocurrencies becoming the biggest international currency by market capitalization. This event, together with the global nature of cryptocurrencies and their 24/7 trading access, will make it intuitive to own cryptocurrencies as they become a de facto investment within a central bank's investment tranche. Cryptocurrencies will also fulfil a new requirement as digital gold. Furthermore, foreign reserves are used to facilitate international trade, which means holding reserves in a trading partner's currency makes trading simpler. In 2018, cryptocurrencies like bitcoin will be used for international trade on a moderate basis, because their high returns as an investment will encourage a 'hold' strategy among G7 countries. Foreign reserves are also used as a monetary policy tool: central banks may sell and buy foreign exchange currencies to control exchange rates.


Bluetooth 5 – the Biggest Breakthrough in the IoT in 20 Years

The capabilities of Bluetooth 5 are nothing short of remarkable. The new devices are twice as fast, with four times the range and eight times the broadcast messaging capacity of their Bluetooth 4 predecessors. These new devices are leading to IoT applications that we didn’t envision a year ago. Keyinsight predicted that the new IoT devices would be used in every industry from agriculture to transportation, and those predictions will finally come to fruition thanks to advances in Bluetooth technology. ... When Bluetooth first hit the market, it was one of the first IoT technologies available. People could use Bluetooth to connect to automobile CD players, radios, and other devices, an unprecedented level of connectivity between previously segregated devices. It was only the first major breakthrough for the IoT, but it wouldn’t be the last. Nearly 20 years later, Bluetooth is still a pioneer in the IoT.


The lesson behind 2017’s biggest enterprise security story


For one, security teams are overwhelmed. The average security team examines less than 5 percent of the alerts flowing in every day (and in many cases, far less than that). Ironically, some attempts to improve this situation may backfire. Automation is clearly required to help security teams prioritize their work and defend their environments, but many systems prioritize alerts based on the severity and impact of the threat itself rather than on its potential impact within the context of the business. In other words, while a human analyst may understand that a “simple” exploit of an unpatched vulnerability on a server that houses your crown jewels is a higher priority than a sophisticated zero-day attack targeting the machine hosting the cafeteria menu, automated tools may mistakenly conclude otherwise.
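
A minimal sketch of that idea, with hypothetical alert and asset names invented for illustration: the tool's raw severity score is weighted by a business-maintained measure of how critical the targeted asset is, so the "simple" exploit against the crown jewels outranks the flashy zero-day against the cafeteria-menu server.

```python
# Minimal sketch: prioritize alerts by business context, not raw severity alone.
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: float   # 0-10, as reported by the detection tool
    asset: str

# Hypothetical asset-criticality map maintained by the business, not the tool.
ASSET_CRITICALITY = {
    "crown-jewels-db": 1.0,
    "cafeteria-menu-server": 0.1,
}

def priority(alert: Alert) -> float:
    """Context-aware priority: tool severity scaled by business criticality."""
    return alert.severity * ASSET_CRITICALITY.get(alert.asset, 0.5)

alerts = [
    Alert("sophisticated zero-day", severity=9.8, asset="cafeteria-menu-server"),
    Alert("simple exploit of unpatched vuln", severity=6.0, asset="crown-jewels-db"),
]

# The "simple" exploit against the crown jewels now outranks the flashy zero-day.
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(a):4.1f}  {a.name} -> {a.asset}")
```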


Why do Decision Trees Work?

Decision trees are a type of recursive partitioning algorithm. They are built from two types of nodes: decision nodes and leaves. The tree starts with a node called the root; if the root is a leaf, the tree is trivial or degenerate and the same classification is made for all data. At a decision node we examine a single variable and move to another node based on the outcome of a comparison, and the recursion repeats until we reach a leaf. At a leaf node we return the majority value of the training data routed to that leaf as a classification decision, or the mean value of the outcomes as a regression estimate. ... For true conditions we move down and left; for false conditions we move down and right. The leaves are labeled with the predicted probability of account cancellation. The tree is orderly, and all nodes are in estimated-probability units, because Practical Data Science with R used a technique similar to y-aware scaling.
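
The article's example is built in R, but the same structure is easy to see in a quick sketch; the one below assumes scikit-learn and uses the bundled iris data purely for illustration. Printing the fitted tree shows the recursive partitions and the majority-class leaves described above.

```python
# Minimal sketch of a fitted decision tree and its recursive structure.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Fit a shallow tree: each decision node tests a single variable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Each indented level below is one more recursive partition of the training data;
# each leaf reports the majority class of the rows routed to it.
print(export_text(tree, feature_names=list(data.feature_names)))

print("prediction for one sample:", tree.predict(X[:1]))
```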


Q&A With Eberhard Wolff On the Book “A Practical Guide to Continuous Delivery”

The obvious and original goal of CD is to improve time to market for new features and thereby get better business results. But there is more to CD: constantly testing the software, with reproducible results and a high degree of automation, improves the quality of the software, and deploying more often with automated deployments decreases the risk of each deployment. This has a positive impact on software development and IT, and these benefits might be reason enough to implement CD. How far you can go with CD depends on the buy-in from the business as well as from software development, operations, and QA. With limited buy-in from the business you won’t be able to improve time to market; with limited buy-in from Ops you won’t be able to extend the automated pipeline to go directly into production. Still, even a limited implementation of CD will be worth it, and of course it can always grow. The early adopters were looking for a more agile way to work.



Quote for the day:


“If you’re not a risk taker, you should get the hell out of business.” -- Ray Kroc

