Information Technology supply chains depend on complex, interrelated networks of component suppliers spanning a wide range of global partners. Suppliers deliver parts to OEMs or component integrators, who build products from them and offer those products to customers directly or to system integrators who combine them with products from multiple providers at a customer site. This complexity leaves ample opportunity for malicious components to enter the supply chain and introduce vulnerabilities that can later be exploited. As a result, organizations now need assurance that they are buying from trusted technology providers who follow best practices every step of the way.
ETSI NFV ISG Chairman Steven Wright of AT&T, commenting on the continued momentum, observed: “Among our most important 2015 goals was to foster interoperable implementations rather than creating new standards activity. Exiting our final meeting for the year, we are pleased with the progress and entertaining proposals for 2016 work items, which will continue to guide the entire industry on the direction for NFV.” A significant outcome resulting from NFV#12 was the completion of “Report on SDN Usage in NFV Architecture Framework.” This study was conducted over the past 12 months, with 40 contributors from across the industry, and motivated 35 recommendations for the ISG. The report analyzed SDN use cases for NFV, highlighting lessons learned from 14 ETSI NFV PoCs using SDN and NFV, along with open source SDN controllers.
There is a question as to whether the product owner is a value-adding role. In many organizations it appears that it is not: the product owner is a middleman with authority over prioritization. Sequencing and scheduling, when done well, generate value, but the act of sequencing one item ahead of another does not add value to the items themselves; from a Lean value-stream-mapping perspective, it is a non-value-adding role. It is curious, then, that Agile requires you to create non-value-adding positions. In some extreme cases, Agile methods appear to have added 2 new people in non-value-adding positions for every 6 in value-adding positions. Put another way, after the Agile transition, 25% of the workforce is additional "Agile" overhead for operating the Agile method.
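The 25% figure follows directly from the 2-per-6 ratio; a quick check of the arithmetic:

```python
# Verifying the overhead claim: 2 added non-value-adding roles per
# 6 value-adding roles means the new roles are 2 out of 8 total.
value_adding = 6
agile_overhead = 2
overhead_share = agile_overhead / (value_adding + agile_overhead)
print(f"{overhead_share:.0%}")  # 25%
```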
ECS’s geo-replication ensures that data is protected against site failures and disasters. ECS gives customers the option to link geographically dispersed systems and bi-directionally replicate data among these sites across the WAN. Several smart strategies, such as geo-caching, are used to reduce WAN traffic for data access. That leads to the next natural question: if data is replicated to multiple sites, will I incur a large storage overhead? To reduce the number of copies in a multi-site deployment, ECS implements a data chunk contraction model, which dramatically reduces storage overhead in multi-site ECS environments. In fact, storage efficiency increases as more sites are added for geo-replication!
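A toy model can illustrate why efficiency can improve with more sites. This is an assumption-laden sketch, not the actual ECS algorithm: it contrasts naive full replication (a copy at every site) with an XOR-style contraction scheme in which recovery chunks from different sites are combined, so the protection overhead shrinks as the site count grows.

```python
# Illustrative model only -- not ECS's real chunk contraction logic.
# Overheads are expressed as total stored data / usable data.

def naive_replication_overhead(sites: int) -> float:
    """Full copy at every site: N copies of the data for N sites."""
    return float(sites)

def xor_contraction_overhead(sites: int) -> float:
    """XOR-style contraction: each site keeps its own data plus a
    1/(N-1) share of combined recovery chunks, i.e. N/(N-1) total.
    (Assumed formula for illustration of the trend, not a spec.)"""
    if sites < 2:
        return 1.0
    return sites / (sites - 1)

for n in (2, 3, 4, 8):
    print(n, naive_replication_overhead(n), round(xor_contraction_overhead(n), 2))
```

Under this model, overhead falls from 2.0x at two sites toward 1.0x as sites are added, which matches the qualitative claim that efficiency improves with more geo-replicated sites.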
To enable a workflow that truly leverages the advantages of a Hadoop-based data lake, businesses need a set of tools that can open up all the assets in the data lake to everyone in the organization who needs them. They need to make analysis accessible and iterative. And they need a workflow that reduces the need for many specialized resources, placing core analytical capability into the hands of power business analysts. Businesses also need to empower these analysts to be citizen data scientists, who can free actual data scientists to pursue complex analysis rather than spending their time performing data preparation. With Spark, a data lake can become a true big data discovery environment. Spark’s emergence as a processing framework for big data is a game changer because its advanced analytics capabilities allow for large-scale data analysis across the enterprise.
This past year was no exception. Everybody talks about the promise and the potential of big data. Yet there's a sense of disenchantment as CIOs search for use cases to inspire change inside their own companies. They want to be shown, not told. They want the signal, not the noise. We noticed that 2015 was a noisy year, and 2016 seems likely to be just as loud. It's not something that CIOs can afford to tune out. With digital transformations and pure-play startups disrupting established industries -- Uber is the example everyone mentions first -- the pressure is on to leverage data in new ways for competitive advantage. CIOs need to straddle two different worlds -- satisfying their existing customer base while moving fast to deliver instant, data-driven services to customers -- or they risk losing ground to market upstarts.
Another way to avoid data issues across public and private clouds is to simply choose one or the other based on workload type and not have any particular workload straddle both. Some workloads have steady demand or sensitive data, which makes them better suited for the firewalled, fixed-capacity confines of a private cloud; financial analytics and Human Resources workloads are good examples. Other workloads see wide variations in demand and have publicly viewable data that make them a great fit for the elasticity of the public cloud. A customer-facing marketing website, or customer analytics that have been sanitized to remove Personally Identifiable Information, are typical candidates.
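The rule of thumb above can be sketched as a tiny placement heuristic. Everything here is illustrative -- the function name, the variability score, and the 0.3 threshold are assumptions, not from any product:

```python
# Toy placement heuristic for the steady/sensitive vs. bursty/public rule.
# demand_variability: 0.0 (perfectly steady) to 1.0 (highly bursty).

def place_workload(demand_variability: float, has_sensitive_data: bool) -> str:
    """Return 'private' for steady or sensitive workloads,
    'public' for bursty workloads with sanitized, publicly viewable data."""
    if has_sensitive_data or demand_variability < 0.3:
        return "private"
    return "public"

print(place_workload(0.1, True))   # HR analytics -> private
print(place_workload(0.8, False))  # marketing website -> public
```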
One other concept we should stop and discuss briefly is change saturation. It captures the idea that organizations in general, and certain individuals in particular, can only absorb so much change at one time. A frequent occurrence in change efforts is that more than one project or larger change effort requires the same human, financial, physical, information, or other resources at the same time. To become aware of this situation and to mitigate the effects of change saturation, you will want to build a heat map identifying the timing, duration, and intensity of the demands that each project and change effort will place on the different types of resources within the organization. This too is a prerequisite
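A minimal sketch of such a heat map, assuming made-up project names, resource types, and a saturation threshold of 1.0 (one "fully committed" resource) purely for illustration:

```python
from collections import defaultdict

# Each entry records the intensity (0-1) a project places on a resource
# type in a given month. Summing per (resource, month) cell flags hot
# spots where combined demand exceeds what the resource can absorb.
demands = [
    # (project, resource, month, intensity)
    ("ERP upgrade",   "finance team", "2016-03", 0.6),
    ("Agile rollout", "finance team", "2016-03", 0.5),
    ("ERP upgrade",   "IT staff",     "2016-03", 0.4),
    ("Agile rollout", "IT staff",     "2016-04", 0.7),
]

heat = defaultdict(float)
for project, resource, month, intensity in demands:
    heat[(resource, month)] += intensity

SATURATION_THRESHOLD = 1.0  # more than one full resource demanded
hot_spots = {cell: load for cell, load in heat.items()
             if load > SATURATION_THRESHOLD}
print(hot_spots)  # the finance team is oversubscribed in 2016-03
```

The same table can be extended with duration and timing columns; the point is that saturation only becomes visible when demands are aggregated across all concurrent efforts rather than reviewed project by project.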
Right now the focus of PX 2 is to detect and recognize objects, but Nvidia wants self-driving cars to also recognize circumstances. For example, a self-driving car may be able to distinguish an ambulance from a truck and slow down. A car may also recognize snowy conditions and operate on a road on which the lanes are hidden. But such learning patterns are complex, and it could be a while before self-driving cars can handle such situations. The Drive PX 2 has 12 CPU cores, offers 8 teraflops of floating-point performance, has two Pascal GPUs, and draws 250 watts. It is the equivalent of "150 MacBook Pros in your trunk," said Jen-Hsun Huang, Nvidia's CEO, during the press conference at CES.
Quote for the day:
"Products are made in the factory, but brands are created in the mind." -- Walter Landor