High-performing companies are 2X more likely than underperformers to have at least half of their employee base using analytics tools. In my experience, training and empowering all employees is key to scale, as long as the right tools and business processes are in place to be inclusive of everyone. Organizations often limit visibility into analytics, and access to the tools, to management and business analysts alone, and by doing so they limit the insight and full potential of the entire organization. Systems integration, data quality, data consolidation, customization, and mobility are all key to the democratization of insights. Here's why an analytics platform is key to success.
"Accepting a risk doesn't mean it's going to happen," Hart said. "It means that if the thing happens, you accepted the risk and will take the steps to mitigate it." As CISO for the FBI, Hart said she is responsible for managing everything from governance to operational security in protecting the FBI's cloud infrastructure against internal and external threats. "I'm not packing heat," Hart quipped, clarifying that she is not an FBI agent in the field. Hart offered a few insights into the FBI's cloud infrastructure, noting that everything done by federal agencies must be compliant with the FedRAMP cloud framework. "The cloud is all about big data and being able to aggregate data, which are amazing things," Hart said. "But when the sword cuts, it cuts both ways."
Lisa Dolev, CEO of Qylur Intelligent Systems, explained that her company's technologies fit into the industrial Internet of Things (IIoT) space, with machines that are able to learn from each other and evolve in their decision-making capabilities to help stay ahead of threats. "For the Qylatron Entry Experience Solution, what we're doing is combining the aspects of greeting a person based on the entry ticket and doing security scanning," Dolev told eWEEK. The Qylatron is a self-service machine comprising multiple pods that can be used for screening bags and other items. It has a number of different sensors that use machine learning to come to automated decisions, according to Dolev. The automated decisions are intended to stop things defined by the system's administrators as dangerous, or even just items that are prohibited by the venue.
Bates does not blame interoperability issues for the healthcare industry's slow adoption of predictive analytics. "You can do a great deal with just your own data," he says. Rather, the problem has to do with personnel. "Healthcare organizations don't have groups with the right training to understand how to use data to reduce costs and improve care," he says. "If they do, the groups are relatively small and completely consumed with meeting external requirements, such as reporting quality data. They just don't have the bandwidth." Another problem is that up-to-date analytics software and tool kits—especially those that take a more "self-serve" approach to data—have not been available until recently.
From a logical perspective, virtual switches provide much of the same functionality as traditional top-of-rack switches. Today, for example, it's not uncommon to see a virtual switch with several virtual LANs. A handful of VMs communicating with each other via a virtual switch is a basic example of network virtualization. Inter-VLAN traffic, meanwhile, is carried via a trunk between the virtual switch and the physical network; the traffic traverses a physical port on the host server. Essentially, the physical server port serves as an uplink port for the virtual switch. If two VMs residing on the same physical host, but on separate VLANs, need to communicate, the traffic is routed through the physical network. At that point, a firewall could be used to filter traffic between the two hosts.
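The forwarding rule described above can be sketched as a toy model: frames between ports on the same VLAN are delivered locally by the virtual switch, while frames crossing VLANs are handed off to the physical uplink for external routing. This is a minimal illustration only; the class and method names are invented for the sketch and do not correspond to any real vSwitch API.

```python
# Toy model of the VLAN forwarding behavior described above: a virtual
# switch delivers frames only between ports on the same VLAN; traffic
# between VLANs must leave via the uplink to be routed externally.
# All names here are illustrative; no real vSwitch API is implied.

class VirtualSwitch:
    def __init__(self):
        self.ports = {}          # vm_name -> vlan_id
        self.uplink_frames = []  # frames sent out the physical uplink

    def connect(self, vm_name, vlan_id):
        self.ports[vm_name] = vlan_id

    def send(self, src, dst, payload):
        """Deliver locally if src and dst share a VLAN; else hand off to uplink."""
        if self.ports.get(src) == self.ports.get(dst):
            return ("local", dst, payload)
        # Different VLANs: tag the frame and push it out the trunk/uplink so
        # an external router (and possibly a firewall) can handle it.
        frame = {"src": src, "dst": dst, "vlan": self.ports[src], "payload": payload}
        self.uplink_frames.append(frame)
        return ("uplink", dst, payload)

vswitch = VirtualSwitch()
vswitch.connect("vm_a", vlan_id=10)
vswitch.connect("vm_b", vlan_id=10)
vswitch.connect("vm_c", vlan_id=20)

print(vswitch.send("vm_a", "vm_b", "hello")[0])  # same VLAN -> local
print(vswitch.send("vm_a", "vm_c", "hello")[0])  # different VLANs -> uplink
```

Note that the firewall filtering mentioned above would sit on the external path, inspecting exactly the frames this model pushes onto the uplink.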
Increasingly, you're going to be liable for any vulnerability you ship, and as we've seen, if you're a senior executive, you may have to take the fall for the hack. That puts a lot of pressure on companies to really rethink how they're doing security. So, to sum up the answer, it's the [problems] of the perimeter-less architecture; the emergence of a professional threat economy; and the impact of getting hacked, both from a personal, career-limiting perspective and from a regulatory compliance perspective. One of the other big things you're seeing evolve, in addition to the professional threat economy, is that now you've got people who built all the pieces, and there's almost an inverse correlation between the mental effort required and the criminality of certain things.
Consistently discovering alpha is the holy grail of investment management, an arena populated by two primary schools of thought. The first consists of active managers who proactively try to uncover investment opportunities that can generate higher returns; the other consists of passive managers who believe markets are efficient and invest in a diversified portfolio of securities mirroring the market. While there is growing acceptance, even amongst die-hard efficient-market finance theorists, that financial markets are not efficient to the level originally hypothesized, active managers have not consistently outperformed their passive counterparts in many asset classes in recent times. So can investment managers systematically uncover pockets of market inefficiency using Big Data analytics?
The thorniest problem for open data now is privacy. Governments rushing to release individual-level data such as tax, medical or education records are “walking into a massive minefield”, warns Martin Tisne of the Omidyar Network, a philanthropic outfit. Such data are among the most valuable: they can boost, for example, precision medicine, which tailors each patient’s treatment. But a privacy scandal can cause a backlash against all open data. A public outcry recently forced Britain’s National Health Service to rethink plans for making anonymised patient-level data available for reuse. Open-data activists have joined forces with bureaucrats and entrepreneurs to sort out all these problems. Their solutions are starting to work, and growing amounts of data are being put to good use.
Erlang concurrency is designed around the actor model and encourages an elegant style of programming in which problems are modelled as many isolated processes (actors) that communicate through immutable message passing. Each process has its own heap and is very lightweight by default (512 bytes), making it practical to spin up many hundreds of thousands of processes on commodity servers. These individual processes are scheduled by the virtual machine across all available processor cores in a soft real-time manner, ensuring that each process gets a fair share of processing time. Because each Erlang process has its own heap, it can crash independently without corrupting shared memory.
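The actor style described above can be loosely sketched outside Erlang. The following Python sketch uses a thread plus a queue as a stand-in for an Erlang process and its mailbox; unlike real Erlang processes, these actors share one heap and rely on OS threads rather than preemptive VM scheduling, and all names here are illustrative.

```python
# Loose Python analogy to the Erlang actor model: each "actor" owns a
# private mailbox (queue) and private state, and interacts with the rest
# of the program only via messages. This is an illustration of the style,
# not of Erlang's isolated-heap or scheduling guarantees.
import threading
import queue

class Actor:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self._handler = handler  # function: (state, msg) -> (new_state, reply)
        self._state = None
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        # The actor's state is touched only inside this single loop,
        # which is what gives the style its isolation.
        while True:
            msg, reply_to = self.mailbox.get()
            if msg == "stop":
                break
            self._state, reply = self._handler(self._state, msg)
            if reply_to is not None:
                reply_to.put(reply)

    def send(self, msg, reply_to=None):
        self.mailbox.put((msg, reply_to))

# A counter actor: accumulates the integers it receives.
def counter(state, msg):
    state = (state or 0) + msg
    return state, state

replies = queue.Queue()
c = Actor(counter)
c.send(5, reply_to=replies)
c.send(7, reply_to=replies)
print(replies.get())  # 5
print(replies.get())  # 12
c.send("stop")
```

Because the handler only ever runs inside its own loop, a crash in one actor's handler leaves every other actor's state intact, which mirrors the independent-crash property the paragraph attributes to Erlang's per-process heaps.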
Amazon, Microsoft, Google and other leading cloud providers are already adopting container technologies. We are also seeing the same approach among OS, hardware and application developers. For example, Intel, too, is supporting containerization with its Cloud Integrity Technology 3.0. It is therefore quite obvious that support for containers will continue to grow in the coming years, and we are likely to see more deployments in this ecosystem. An increasing number of microservice applications will be built on containers. In fact, experts predict that most cloud platforms will either switch to a new container stack or at least start supporting containers by 2017.
Quote for the day:
"Technology is just a tool. In terms of getting the kids working together and motivating them, the teacher is most important." -- Bill Gates