Daily Tech Digest - July 16, 2017

Getting Started With Apache Ignite

Although SQL is often associated with relational database systems, it is now used far more widely, with many non-relational database systems also supporting it to varying degrees. Furthermore, there is a huge market for a wide range of SQL-based tools that provide visualization, reports, and business intelligence. These use standards such as ODBC and JDBC to connect to data sources. ... The latest releases of the Apache Ignite project provide support for Data Manipulation Language (DML) commands, such as INSERT, UPDATE, and DELETE. Some Data Definition Language (DDL) support has also been added. Furthermore, index support is available and data can be queried both in RAM and on disk. A database in Apache Ignite is horizontally scalable and fault-tolerant, and its SQL is ANSI-99 compliant. Figure 1 shows the high-level architecture and vision.
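
To make the DML support concrete, here is a minimal sketch of running SQL INSERT and SELECT against an Ignite cache from Java. It assumes a single local node and a simple cache of String values registered as indexed types; the cache name and data are illustrative, not taken from the article.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgniteSqlSketch {
    public static void main(String[] args) {
        // Start a single local Ignite node with default configuration.
        try (Ignite ignite = Ignition.start()) {
            // Make the cache's key/value types visible to the SQL engine.
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("cities");
            cfg.setIndexedTypes(Long.class, String.class);
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);

            // DML: insert a row through SQL rather than the key-value API.
            cache.query(new SqlFieldsQuery(
                "INSERT INTO String (_key, _val) VALUES (?, ?)").setArgs(1L, "London")).getAll();

            // Query the same data back with standard SQL.
            cache.query(new SqlFieldsQuery("SELECT _key, _val FROM String"))
                .getAll()
                .forEach(row -> System.out.println(row));
        }
    }
}
```

The same query API accepts UPDATE and DELETE statements, and the cache can also be reached through Ignite's JDBC and ODBC drivers, which is what lets the SQL-based tools mentioned above connect to it.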


How a new wave of machine learning will impact today’s enterprise

Advances in deep learning and other machine learning algorithms are currently causing a tectonic shift in the technology landscape. Technology behemoths like Google, Microsoft, Amazon, Facebook and Salesforce are engaged in an artificial intelligence (AI) arms race, gobbling up machine learning talent and startups at an alarming pace. They are building AI technology war chests in an effort to develop an insurmountable competitive advantage. Today, you can watch a 30-minute deep learning tutorial online, spin up a 10-node cluster over the weekend to experiment, and shut it down on Monday when you’re done – all for the cost of a few hundred bucks. Betting big on an AI future, cloud providers are investing resources to simplify and promote machine learning to win new cloud customers. This has led to an unprecedented level of accessibility that is breeding grassroots innovation in AI.


Under the hood of machine learning

The key design point that allows Apache Mesos to scale is its two-level scheduler architecture. Unlike a monolithic scheduler that schedules every task or virtual machine itself, the two-level scheduler delegates the actual scheduling of tasks to the frameworks. The first level of scheduling lets the Mesos Master decide which framework gets resources, based on an allocation policy. The second level happens within the framework, which decides which of its tasks to execute. This lets data services run without contending for resources with other data services in the cluster, improving framework scheduling regardless of scale. It also keeps the Mesos Master a lightweight piece of code that is easy to scale as the cluster grows. Working with Apache Mesos, though, can be challenging in terms of building the framework and its components.
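
The snippet below is a toy illustration of the two-level idea only; the types and the round-robin allocation policy are hypothetical and do not use the real Mesos framework API, which is considerably more involved.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Level one: a toy "master" decides which framework receives a resource offer.
// Level two: the chosen framework decides which of its own tasks to launch.
class ResourceOffer {
    final double cpus;
    final double memGb;
    ResourceOffer(double cpus, double memGb) { this.cpus = cpus; this.memGb = memGb; }
}

interface Framework {
    String name();
    // Second-level decision: pick tasks that fit the offered resources.
    List<String> launchTasks(ResourceOffer offer);
}

class ToyMaster {
    private final List<Framework> frameworks = new ArrayList<>();
    private int next = 0;

    void register(Framework f) { frameworks.add(f); }

    // First-level decision: a trivial round-robin allocation policy.
    void offer(ResourceOffer offer) {
        Framework chosen = frameworks.get(next++ % frameworks.size());
        System.out.println(chosen.name() + " launched " + chosen.launchTasks(offer));
    }
}

public class TwoLevelSchedulingDemo {
    public static void main(String[] args) {
        ToyMaster master = new ToyMaster();
        master.register(new Framework() {
            public String name() { return "analytics"; }
            public List<String> launchTasks(ResourceOffer o) {
                return o.cpus >= 2 ? Arrays.asList("spark-stage-1") : Collections.<String>emptyList();
            }
        });
        master.register(new Framework() {
            public String name() { return "web"; }
            public List<String> launchTasks(ResourceOffer o) {
                return Arrays.asList("nginx-replica");
            }
        });
        master.offer(new ResourceOffer(4, 16));
        master.offer(new ResourceOffer(1, 2));
    }
}
```

The point of the split is visible in the code: the master never inspects tasks, so it stays small, while each framework applies its own placement logic to the offers it receives.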


5 Common Challenges to Building BI in the Cloud

Building successful Business Intelligence solutions is a well-documented process, with many successful and unsuccessful projects to learn from. The traditional BI/DW model has always been challenging, but a lot of good practices and patterns have emerged over the years that BI professionals can leverage. A net-new BI solution, or the migration of an existing on-prem BI solution into the cloud, creates a different set of challenges to be addressed. What I wanted to do here is come up with a top-five list of considerations that may help you plan your cloud BI project. I've been focused on building analytics, BI, and Big Data solutions in the cloud on Azure for the past two years, so I'm going to share a few of my findings with you here.


Blockchain: The Chain of Trust and its Potential to Transform the Insurance Industry

In the longer term, the potential disruption to the insurance industry from blockchain technology is staggering. Blockchain technologies will enable the creation of assets in a new, distributed form — such as documents, credentials, assessments, and transactions — that span the entire insurance value chain. These distributed assets will challenge the traditional insurance business model. IBM is helping insurers across the globe determine which use cases are best suited for blockchain, and how to make it easier to innovate on top of this middleware fabric. During our discussions, it has come out clearly that a majority of insurance CIOs are keen to understand how they can leverage blockchain to overcome the challenges they face today in the insurance industry.


What’s your risk appetite? Your robo-adviser has the answer

The wealth management industry has, over the past few years, been shifting its focus from mere product sales to higher value-added, service-based offerings, a result of segmenting different products and their underlying volatility based on financial advisers' feedback about what investors want, according to Barry Freeman. He said Xuanji, a robo-adviser platform launched by Pintec last year, was able to make suggestions on asset allocation across a full portfolio of mutual funds based on an investment target and risk-tolerance levels derived from a set of questions answered by the investor, powered by big data, quantitative modelling, and machine learning. Because the robo-advisory platform holds data on 80 per cent of mutual funds in China through partnerships with the fund houses, algorithms built on that data and the performance of different funds can segment different opportunities, making it a better performer than a human stockbroker, Freeman said.
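
As a purely hypothetical illustration of the general idea (none of the numbers, thresholds, or fund categories come from Xuanji or the article), a questionnaire-driven allocation might look something like this in code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: turn questionnaire answers into a risk score, then map
// that score to a coarse fund allocation. The weights and categories are
// invented for illustration; a real robo-adviser would rely on quantitative
// models and fund performance data.
public class RiskProfileSketch {
    static int riskScore(int[] answers) {           // each answer scored 1..5
        int total = 0;
        for (int a : answers) total += a;
        return total;
    }

    static Map<String, Double> allocate(int score, int maxScore) {
        double equityShare = Math.min(0.9, (double) score / maxScore); // cap equity at 90%
        Map<String, Double> alloc = new LinkedHashMap<>();
        alloc.put("equity funds", equityShare);
        alloc.put("bond funds", (1 - equityShare) * 0.7);
        alloc.put("money market funds", (1 - equityShare) * 0.3);
        return alloc;
    }

    public static void main(String[] args) {
        int[] answers = {3, 4, 2, 5, 3};            // five questionnaire answers
        int score = riskScore(answers);
        allocate(score, answers.length * 5).forEach(
            (fund, weight) -> System.out.printf("%s: %.0f%%%n", fund, weight * 100));
    }
}
```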


Bitcoin Crashes as Chain-Split Risks Increase

We tried to speak to Jeff Garzik, the lead maintainer of the new segwit2x client, to gain some clarity on the relationship between segwit2x and Bitcoin Core, but have received no response at the time of writing. Segwit2x implements segwit largely unchanged, but there are suggestions that, after activation, the client may accept only segwit blocks, while Bitcoin Core would accept both segwit and non-segwit blocks, which may lead to a split. However, as some 90% of miners seem to be supporting segwit2x, it appears unlikely any miner would produce non-segwit blocks, so they would probably remain in consensus. On the bigger-blocks side, there are Bitcoin Unlimited and BitcoinABC; the latter largely follows the approach of Bitcoin Unlimited but goes further, implementing a User Activated Hard Fork that will chain-split regardless of miner support.


A pervasive security solution that makes practical sense

First, the SDSN platform's automated threat remediation capability enforces security all the way down to the network layer, including end clients and data centers populated with switches and Wi-Fi access points from different vendors. With the SDSN platform, you can still quarantine or block infected hosts in a multivendor environment without swapping out your existing infrastructure. Imagine not having to write off thousands or even millions of dollars in equipment investments while taking your security game to the next level. ... The decision to migrate workloads to clouds, or to determine which applications run on which cloud, should not break your network's security posture. SDSN goes one step further, not only enforcing consistent policies across all deployments but also interoperating with native cloud technologies to maintain the same level of enforcement granularity available in physical networks.


5 Steps to Migrate Unisys Mainframes to AWS

The most effective way to exploit the value of Unisys mainframe applications and data is a transformative migration to modern systems frameworks on AWS, reusing as much of the original application source as possible. A least-change approach like this reduces project cost and risk (compared with rewrites or package replacements) and reaps the benefits of integration with new technologies to exploit new markets — all while leveraging a 20- or 30-year investment. The best part is that, once migrated, the application will resemble its old self enough for existing staff to maintain its modern incarnation; they have years of valuable knowledge they can reuse and pass on to new developers. The problem is that most Unisys shops, having been mainframe-focused for a very long time, don't know where to start. But don't let that stop you. The rest of this article offers some guidance.


Understanding the Basics of Biometrics

There is no one-size-fits-all answer to the optimal biometric modality, however. Each has a specific set of strengths and weaknesses that must be considered when planning a system, based on the requirements and the application context. Certain deployments may even require multiple biometric modalities (commonly referred to as multimodal biometrics), often with fusion of the results, to ensure the highest levels of accuracy and protection. In addition to budget and performance, other factors in selecting the right biometric modalities include accuracy, risk of error, user acceptance, and hygiene. For example, DNA is among the most accurate biometric modalities if the sample isn't degraded, but it demands proximity to the person, or to an actual DNA sample, in order to collect it—a requirement that isn't possible in every scenario.
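
As a hypothetical sketch of what "fusion of the results" can mean in practice, the snippet below combines per-modality match scores with a weighted sum and a decision threshold; the scores, weights, and threshold are invented for illustration only.

```java
// Hypothetical sketch of score-level fusion for multimodal biometrics:
// each modality returns a match score in [0, 1]; a weighted sum of the
// scores is compared against a single decision threshold.
public class MultimodalFusionSketch {
    public static void main(String[] args) {
        String[] modalities = {"fingerprint", "face", "iris"};
        double[] scores  = {0.92, 0.71, 0.88};   // per-modality match scores (illustrative)
        double[] weights = {0.4, 0.2, 0.4};      // weight more reliable modalities higher (illustrative)

        double fused = 0;
        for (int i = 0; i < modalities.length; i++) {
            fused += scores[i] * weights[i];
        }

        double threshold = 0.8;                  // illustrative decision threshold
        System.out.printf("fused score = %.2f -> %s%n",
            fused, fused >= threshold ? "accept" : "reject");
    }
}
```

In a real deployment the weights and threshold would be tuned against the accuracy and error-rate requirements mentioned above, rather than fixed by hand.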



Quote for the day:


"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

