Daily Tech Digest - October 12, 2018

The first step in reducing TCO is understanding what it is and why current solutions are driving it so high. A data protection TCO analysis should do what its name implies: calculate the total cost of ownership. For data protection, this means adding up all the hard costs, such as data protection storage, the data protection network, and data protection software. It should also include periodic costs like hardware and software maintenance (including support), as well as subscription costs like cloud storage or cloud compute. Calculating data protection infrastructure TCO also means adding up the operating costs associated with learning and operating the data protection system. Most data protection solutions are not self-service or designed for IT generalists; they require a well-trained administrator who is familiar with the infrastructure. Operating costs are particularly important because certain complicated data protection tasks – like a full restore – require a knowledgeable person to complete.
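The cost roll-up described above can be sketched as a simple calculation. This is an illustrative model only; the cost categories follow the article, but every figure and parameter name here is a hypothetical placeholder.

```python
# Illustrative sketch of a data protection TCO roll-up over a planning horizon.
# Cost categories follow the article; all figures are made-up placeholders.

def data_protection_tco(hard_costs, periodic_costs_per_year,
                        subscription_costs_per_year,
                        operating_costs_per_year, years):
    """Total cost of ownership: one-time hard costs plus recurring costs."""
    recurring_per_year = (periodic_costs_per_year
                          + subscription_costs_per_year
                          + operating_costs_per_year)
    return hard_costs + recurring_per_year * years

# Hypothetical three-year horizon
tco = data_protection_tco(
    hard_costs=250_000,                  # storage, network, software licenses
    periodic_costs_per_year=40_000,      # hardware/software maintenance, support
    subscription_costs_per_year=30_000,  # cloud storage and compute
    operating_costs_per_year=60_000,     # admin time, training
    years=3,
)
print(tco)  # 640000
```

The point of modeling it this way is that recurring operating costs, multiplied over the life of the system, often dominate the one-time hardware spend.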

Taking Agile Transformations Beyond the Tipping Point

Not all leaders can make this transition. For example, one Asia-Pacific company undergoing an agile transformation replaced one-quarter of its top 40 leaders with individuals who better embodied agile values, such as collaboration and teamwork. Middle managers will also face challenges. Those who have grown up inside silos will need to learn how to manage cross-functional teams and delegate decision making to employees closer to the field. They may even need to return to doing the daily work rather than only managing other people. The coordination activities that consumed so much of managers’ time are increasingly handled within and between teams. While agile may be a fundamentally different way of working, many of the steps to become an agile organization are familiar to any executive who has gone through a successful corporate transformation. (See Exhibit 2.) The steps of committing, designing, preparing, and refining are variations of any large-scale change.

Detail of Dutch reaction to Russian cyber attack made public deliberately

The attackers used a rental car parked close to the OPCW building in The Hague. The hackers then attempted to use Wi-Fi Pineapples – devices typically used to intercept network traffic – to break into the organisation's Wi-Fi network. The hackers were also caught using antennas, signal amplifiers, and other equipment the MIVD considers “specifically used during hacking operations”. During the operation, the MIVD found laptops with extra batteries (which the MIVD said were purchased in the Netherlands) and mobile phones with 4G connectivity, which the hackers tried to destroy during their arrest. Eichelsheim reiterated that the excuse that the Russians might simply have been on holiday won’t fly. “They were caught with very specific equipment, entered on diplomatic visas, and were found carrying €20,000 and $20,000 in cash. That’s not a holiday.”

A Day In The Life Of Ms. Smith: How IoT And IIoT Enhance Our Lives

Ms. Smith walks out of the building. An RFID reader at the door scans her badge as she walks past it. Computer vision sees her approaching the exit and walking into the parking lot. The drive home is much like her drive to work. Computer vision devices on the road monitor and control traffic signals. Her ride home is slow—but again, she misses most of the red lights. Fifteen minutes before she gets home, the thermostat automatically turns on the heat (or cooling) so that the temperature is comfortable when she comes in the door. Finally at home, she walks inside, and the lights turn on. To relax, she turns on the TV, and the lights in the room automatically dim, making it easier for her to watch her favorite show. As she’s ready for bed, she says, “Turn down the lights,” to her digital assistant. “Oh, and wake me up at 5:30,” she says. “No, make it 6.” Lights in the other parts of her house dim, the lights in her bedroom slowly fade, and so does Ms. Smith.

5 CRM trends for 2018

Applying machine learning to CRM data has been a difficult process for most organizations. Traditionally, you would need machine learning expertise on staff, developers, and the drive to build the solution yourself. Alternatively, you would have to build and maintain an integration between your CRM system and an external machine learning service. That’s starting to change. “Machine learning is now built directly into CRM products,” explains Julian Poulter, research director for CRM and CX (customer experience) at Gartner. “We have seen about 30 use cases applying machine learning to CRM, but industry adoption is slow so far. The use cases include recommending alternative products, lead scoring and ecommerce recommendations.” That means the kinds of product recommendation features offered by Amazon and other ecommerce providers are within reach of many more organizations. But that’s not the only way machine learning can help.
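To make the lead-scoring use case concrete, here is a minimal hand-rolled sketch. Real CRM products use trained models over far richer data; the attribute names, weights, and bias below are entirely made-up placeholders, and the logistic squashing just maps a weighted sum onto a 0–100 score.

```python
import math

# Toy lead-scoring sketch. Weights, bias, and attribute names are
# hypothetical; a real product would learn these from historical data.
WEIGHTS = {
    "opened_emails": 0.4,          # count of marketing emails opened
    "visited_pricing_page": 1.5,   # 1 if the lead viewed pricing
    "company_size_over_100": 0.8,  # 1 if the company has >100 employees
}
BIAS = -2.0

def lead_score(lead):
    """Return a 0-100 score from a dict of lead attributes."""
    z = BIAS + sum(WEIGHTS[k] * float(lead.get(k, 0)) for k in WEIGHTS)
    return round(100 / (1 + math.exp(-z)))  # logistic curve -> 0..100

hot_lead = {"opened_emails": 5, "visited_pricing_page": 1,
            "company_size_over_100": 1}
print(lead_score(hot_lead))  # 91
print(lead_score({}))        # 12
```

The value of embedding this in the CRM, as the article notes, is that the scoring runs where the data already lives instead of through a separately maintained integration.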

Spinnaker is the Kubernetes of Continuous Delivery

Despite its humble and slow start, Spinnaker is enjoying widespread adoption. Today, Spinnaker is backed by industry leaders like Microsoft, Google, Netflix, and Oracle. It’s supported by all major cloud providers, including, but not limited to, AWS, Google Cloud Platform, Microsoft Azure, and OpenStack. Spinnaker users include big names like Capital One, Adobe, Schibsted, Lookout, and more. There is a growing vendor ecosystem around it, which includes players like Mirantis, Armory, and OpsMx. ... There were roughly 400 people at the event, representing over 125 companies and 16 countries. During the Summit, the community announced the governance structure for the project. “Initially, there will be a steering committee and a technical oversight committee. At the moment Google and Netflix are steering the governance body, but we would like to see more diversity,” said Steven Kim, Google’s Software Engineering Manager who leads the Google team that works on Spinnaker.

Anomaly detection methods unleash microservices performance

(Image: AKF cube diagram)
A symptom-manifestation-cause approach involves working back from external signs of poor performance to internal manifestations of a problem to then investigate likely root causes. For example, the symptom of increased response times can be tracked to the internal manifestation of excess latency in message passing between the app's services, which occurred because of a failing network switch. Other potential root causes exist for those same symptoms and manifestation, however. For example, an application design using overly large message requests, or too many small messages, would cause the same issue. These root causes would be found by different tools and resolved by different people. Change-impact analysis creates broad categories that lump together changes in component-level metrics based on their effect on external performance measures. These metric categories might include network link latency, database queue depth and CPU utilization, grouped according to assessments such as excessive resource usage, cost overages or response time.
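The first step in working back from a symptom is detecting the internal manifestation in component-level metrics. A common, simple approach (not specific to any product mentioned here) is to flag observations that deviate from a baseline by more than a few standard deviations; the data and threshold below are illustrative.

```python
import statistics

# Sketch of baseline-deviation anomaly detection on a component metric,
# e.g. message-passing latency between services. Data is illustrative.

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation more than `threshold` std devs from baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) / stdev > threshold

latency_ms = [12, 11, 13, 12, 14, 12, 13, 11]  # normal inter-service latency

print(is_anomalous(latency_ms, 45))  # True: a manifestation to investigate
print(is_anomalous(latency_ms, 13))  # False: within normal variation
```

A flagged metric is only the manifestation; as the article stresses, the same signal can trace back to a failing switch, oversized message requests, or message chattiness, each owned by different people and tools.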

Unlock distributed analytics with a microservices approach

Combining BI and analytics software with a microservices approach enables average end users to drill down into data with specific types of queries. When it comes time to visualize that data, organizations must decide whether to build customized visualization tools in-house or adopt a third-party option. A vast number of visualization options exist, including web-based platforms and stand-alone, open source tools. These tools cover a range of data interaction, from complex depictions of near-time data to simple renderings. However, big data sources have their limitations. Streaming and unstructured data sources present challenges that mainstream analytical tools struggle to handle. For example, some query connections won't accept data set blending, which limits exploratory analysis. Teams may also encounter system timeouts, out-of-memory exceptions, long query waits and rendering limitations. Distributed analytics approaches, however, can excel with big data.

Digital transformation in 2019: Lessons learned the hard way

Because of the focus on the technology components, the people side of the changes required for digital transformation often goes under-addressed, yet it is arguably the key success factor. That's because the people in the organization have to carry out the digital transformation, yet they are often inadequately equipped to do so from a skill, culture, mindset, inclination, and talent perspective. Many organizations have had their digital change initiatives crash upon the shoals of insufficient human capability to carry them out or an inadequately enabling environment. Currently, a lack of appropriately skilled personnel ranks in the top five obstacles to digital transformation and is reported by 39 percent of organizations. The good news is that improved organizational focus and improved techniques for upskilling workers to support digital transformation have been arriving. Expect to see more of both in 2019. The smart digital leader will use the resources of HR's L&D department to help drive them.

Multicloud does not eliminate vendor lockin

You might think you can avoid the trade-off by using containers or otherwise writing applications so they are portable. But there is a trade-off there as well. Containers are great, and they do provide cloud-to-cloud portability, but you’ll have to modify most applications to take full advantage of them. That could be an even bigger cost than going cloud-native. Is it worth the avoided lockin? That’s a question you’ll need to answer for each case. Moreover, writing applications so they are portable typically leads to a least-common-denominator approach so they can work with all platforms. And that means they will not work well everywhere, because they are not cloud-native. I suppose you could write portable applications that are cloud-native to multiple clouds, but then you’re really writing the application multiple times in advance and just using one instance at a time. That’s really complex and expensive. Lockin is unavoidable, but it is a choice we all must make in several areas: language, tooling, architecture, and, yes, platform.
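The least-common-denominator trade-off can be seen in code. A portable abstraction over cloud object storage can only expose operations every provider supports, which is exactly what gives up provider-specific features. This sketch is hypothetical; the class and method names are made up for illustration.

```python
from abc import ABC, abstractmethod

# Hypothetical portable storage interface: only the operations common to
# every provider appear here. Lifecycle rules, event triggers, and storage
# classes are provider-specific, so a portable app cannot use them.

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend for the sketch; a real one would wrap a cloud SDK."""
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

store = InMemoryStore()
store.put("report.csv", b"a,b\n1,2\n")
print(store.get("report.csv"))  # b'a,b\n1,2\n'
```

The application stays portable across any backend that implements `ObjectStore`, but everything the abstraction omits is the cloud-native capability the article says you forgo.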

Quote for the day:

"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson