January 23, 2013

For most companies, NoSQL and Big Data are at best misunderstood and at worst hype
Big Data, by definition, is too large to process with traditional databases and software techniques. The relational model and SQL still dominate today's database landscape, but alongside them are databases built without relations, designed for higher scalability. So we asked database expert Monty about the current and future state of SQL, NoSQL, and Big Data, and his answers were somewhat surprising:


Large companies run with BYOD as workers cover costs
Seventy-six percent of companies supporting bring-your-own-device policies have more than 2,000 employees, and there's a good reason for that: half of employees under BYOD policies have to foot the device and wireless costs themselves, according to a report from Good Technology.


Micron introduces its highest-endurance MLC SSD for servers
The P400m SSD's endurance is the equivalent to writing every picture posted to Facebook daily to the drive for 311 days straight (about 78 billion photos total), Micron said. By comparison, Micron's entry-level SSDs offer one to three full drive writes per day and a midrange SSD offers about five full drive writes, according to Matt Shaine, Micron's product marketing manager for enterprise SSDs.
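A quick sanity check on those endurance figures — the per-day photo count below is derived from the article's totals, not stated in it:

```python
# Micron's comparison: ~78 billion photos written over 311 days straight.
total_photos = 78e9
days = 311

# Implied daily write volume, in millions of photos per day.
photos_per_day_millions = round(total_photos / days / 1e6)
print(photos_per_day_millions)  # roughly 251 million photos per day
```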


Download: Information Governance Research Report
Discover what smart IT leaders are doing to make sure business executives fully grasp the wide range of strategic advantages information governance can bring.


Windows Workflow and Timeouts
Workflows make it easy to do something that's very hard to do in any other development environment: take action when something doesn't happen. Here's how to protect yourself when something isn't happening in your Windows Workflows.


Big Data for Development: From Information- to Knowledge Societies
The article uses an established three-dimensional conceptual framework to systematically review literature and empirical evidence related to the prerequisites, opportunities, and threats of Big Data analysis for international development. The opportunities and threats around big data have the potential to produce a new kind of digital divide: a divide in data-based knowledge to inform intelligent decision-making. This shows that exploiting data-based knowledge to improve development is not automatic; it requires tailor-made policy choices that help foster this emerging paradigm.


Has Ayasdi turned machine learning into a magic bullet?
At its core, Ayasdi’s product, a cloud-based service called the Insight Discovery Platform, is a mix of distributed computing, machine learning and user experience technologies. It processes data, discovers the correlations between data points, and then displays the results in a stunning visualization that’s essentially a map of the dataset and the connections between every point within it.


German Government Should Make its Software Available As Open Source, Committee Advises
Germany should change a law to enable public administrations to make their software available as free and open source, a German parliamentary committee has advised. German public administrations currently are not allowed to give away goods, including software, said Jimmy Schulz, a member of Parliament and chairman of the Interoperability, Standards and Free Software Project Group in an email Thursday. The current law prohibits governments from being part of the development process in the free software community, he said.


Indian insurers to spend 101 bn rupees on IT in 2013
Indian insurers are about to go on a spending spree! Market research and advisory firm Gartner predicts that insurance firms in India will spend 101 billion rupees on IT products and services in 2013, an increase of more than 9 per cent over 2012, when they spent approximately 92.5 billion rupees.


When Big Data Goes Bad
The break between the viability of Big Data and the efficacy of analysis lies not in the actual analysis, but in perspective. Ultimately, what we are talking about is not the value of a single data point, but how multiple data points interrelate with one another. How does one determine causation, rather than allowing a presumed causation to validate the analysis?


Quote for the day:

"If the rate of change on the outside exceeds the rate of change on the inside, the end is near." -- Jack Welch