By contrast, the performance and capacity of NFV-defined functions are determined by the multi-tenant hosting infrastructure (think network server) and the current load from all of its applications. The workload on that infrastructure changes constantly with the state of the network at any point in time. Yes, NFV can enable flexibility and agility, but it's hard to monitor exactly what's going on and thereby manage it proactively. That lack of determinism frustrates traditional means of administering the network, and it pushes the need for real-time operations support systems (OSSs) deeper into the network itself.
After all, none of the hardware equipment has any intrinsic value for a company. Instead, the intelligence resides in the data itself and its potential for any business. Residing ‘in the cloud’, virtualised data can be rendered both user accessible and secure. The balancing act is substantially simplified through emerging technology, which reduces the number of physical data copies, thus representing a smaller attack surface, a tighter span of protection and greater control. Reducing the number of physical data copies also eliminates the necessity for continuously adding storage capacity. Copy data virtualisation delivers new frontiers of flexibility and speed to meet business objectives at higher performance levels and lower costs.
“What made it [Spark] game changing is it had cross-platform capability,” Glickman said. “It combined relational, functional, iterative APIs without going through all the boilerplate or all the conversions back and forth to SQL or not. It was storage agnostic, which I think was the key insight Hadoop had been missing, because people were thinking about how to put compute on HDFS.” Glickman also saw other advantages of Spark, including that it provides compute elasticity as well as the ability to scale storage and the number of application users. “The power of Spark is in the API abstractions,” said Glickman. “Spark is becoming the lingua franca of big data analytics. We should all embrace this.”
JSR 376 is, of course, the Java Specification Request that aims to define "an approachable yet scalable module system for the Java Platform." But Project Jigsaw actually comprises JSR 376 and four JEPs (JDK Enhancement Proposals), which are something like JSRs that allow Oracle to develop small, targeted features for the Java language and virtual machine outside the Java Community Process (JCP). (The JCP requires full JSRs.) JEP 200: The Modular JDK defines a modular structure for the JDK; Reinhold has described it as an "umbrella for all the rest of them." JEP 201: Modular Source Code reorganizes the JDK source code into modules. JEP 220: Modular Run-Time Images restructures the JDK and JRE run-time images to accommodate modules.
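To make the module system concrete, here is a minimal sketch of the kind of module declaration JSR 376 introduces. The module name and exported package below are hypothetical examples, not anything defined by the JEPs themselves:

```java
// module-info.java -- placed at the root of a module's source tree.
// "com.example.reports" and its package are invented for illustration.
module com.example.reports {
    // Depend on a platform module carved out of the modular JDK (JEP 200).
    requires java.sql;

    // Expose only this package; everything else in the module stays hidden
    // from other modules, which is the encapsulation JSR 376 is after.
    exports com.example.reports.api;
}
```

The point of the declaration is that dependencies and public API are stated explicitly, which is what lets JEP 220's restructured run-time images assemble only the modules an application actually needs.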
An insurance company wanted to investigate the relationship between good or bad habits and the propensity for buying life insurance. When the company realized "habits" was too general, it focused solely on smokers versus non-smokers, but even that didn't work. "In half a year, they closed this project, because they didn't find anything," Sicular said. The failure, in this case, was due to the complexity of the problem. There's a big gray area the insurance company didn't account for: People who smoked and quit, a nuance likely overlooked because, to put it simply, "they're not healthcare professionals," Sicular said.
After decades of research and development, artificial intelligence (AI) is finally becoming a part of daily life. While we may not be fully in the age of AI just yet, there's no denying that it's just around the corner. The evidence is clear in the consumer market with personal assistant apps like Siri and Google Now using AI to provide contextual, relevant information, and anticipate our needs. But, the enterprise is rife with AI as well, in the form of cognitive computing, machine learning, and more. Here are ten enterprise technologies that are setting the stage for the AI era to come.
There are plenty of tools for looking at the data, but each tool, and the data it exposes, is isolated from all of the other tools. And without the ability to correlate data across tiers, it is hard to understand why something is happening. Splunk itself is good at pulling in arbitrary data sources, allowing analysts to correlate data such as real-time sales figures with web server traffic and database health. But that alone isn’t enough. The ad hoc queries written by the analysts are, by their nature, non-repeatable. And with application developers being pulled in multiple directions, it can be hard to find one to build and maintain custom dashboards. That is the gap Splunk’s new product, IT Service Intelligence (ITSI), aims to fill: Splunk ITSI is designed to let analysts create their own dashboards.
Traditionally, HR and recruiting have been pressured to solve the problem of the talent war and the skills gap through ever-increasing compensation packages, poaching talent, offshoring and outsourcing, but clearly that hasn't worked to solve the whole problem. This "crowdsourcing" approach gives CIOs a new way to address these talent issues, says Harry West, vice president of services product management for Appirio. "The gig economy doesn't have to be threatening at all. There's a huge opportunity for businesses here, and a large pool of flexible, highly skilled workers almost on-demand. CIOs have the opportunity to tap into a scalable workforce that can help them meet IT needs and reduce costs," says West.
I’ve actually had an opportunity to talk to some individuals at some large enterprises who are worried about things like their ERP system, but also about how people operate and interoperate with it. I think there are probably interesting problems in all of these realms; you don’t have to be a public-facing website with an open API, or whatever, in order to have interesting problems to solve. I do think that running an Exchange server yourself in-house these days is possibly better not done. I would say that you’re probably going to add more value to your business by just going and getting one of the numerous cloud solutions available for that. Having people actually help, I don’t know, make the bring-your-own-device solutions work better for your enterprise, or something.
There are a number of out-of-the-box automation solutions that give you access to predefined production workflows, which can help cut out the need for custom development. This supports the idea that the operations team should be the team that drives automation. However, the term “predefined” often carries a secondary meaning: “limited.” You still have the ability to write the custom code needed to remove any limitations you encounter in the predefined workflows. The developer role will still be needed in any kind of DevOps model, but I strongly believe that you can teach the Dev to Ops, but cannot really teach the Ops to Dev.
Quote for the day: "You think you can win on talent alone? Gentlemen, you don't have enough talent to win on talent alone." -- Herb Brooks