Garmin’s Vivofit line drives home this point. Popping the display out of the band makes you realise what’s at the heart of these products when stripped of their bands. It’s like that scene in Return of the Jedi when you realise that, beneath all of that cool black armour, Darth Vader is really just an elderly bald man. Paired with the dozens of different bands the company offers, the Vivofit can become an entirely different product. Misfit’s Shine takes this idea to a compelling extreme: it’s essentially a little metal pebble that slots into various wearable form factors, including a wristband, a necklace and a simple clip.
Although most companies recognize the importance of customer relationships, they lack the necessary skills, processes and technology to use data to their advantage. Most businesses are drowning in customer and employee data, yet they're unable to quench their thirst for actionable insights that deliver customer value and successful business outcomes. IDC research shows that less than one percent of customer data is analyzed by businesses today. This inability to analyze customer data leaves 77% of customers disengaged from the companies they do business with. Companies cannot afford to ignore the connected customer.
Thiel has called Altman and Musk's fears "a little bit overdone at this point," but for years has admitted that the outcome of artificial intelligence research could be a mixed bag. An artificially intelligent computer "could be very good, it could be very bad, it could be somewhere in between," he told Business Insider in 2009. "Certainly we would hope that it would be friendly to human beings." Regardless of how friendly AI might be, Thiel says that with the technology developing, it might be best not to come off as an anti-computer human being, lest future synthetic entities turn out to be the type to hold a grudge.
Luthans’ research clearly demonstrates that boosting psychological capital in a company equates to improved productivity. In a paper titled “Positive psychological capital: Beyond human and social capital,” he states that “the value created when human capital is aligned with corporate strategy and fully engaged in making the enterprise effective has been researched extensively…and found to have a significant positive impact on performance outcomes.” As companies face tougher competition for both human capital and improved financial results, they would benefit by investing in programs that foster a resilient workforce. As the ever-changing workplace requires people to learn new skills and adapt to changing management styles, the importance of stress management is evident. But it is how a person responds to these situations that reflects his or her level of resiliency.
The first lesson Big Data can learn from the NoSQL world (and from other modern software domains like mobile, social and more) is that simplicity and ease of use are key – they are not nice-to-haves and do not take a back seat to anything else. Developers are viewed by the NoSQL world as the “masters” – and the technology needs to fit the way these masters will use it. Perhaps the main reason that NoSQL has been so successful is its appeal to developers, who find it easy to use and who feel they are an order of magnitude more productive in it than in other environments. The same is true for ops. The result is something that makes everyone more productive – developers, ops people and more.
Ensuring business continuity in a connected environment will require high availability. This refers to the operational duration of any system. A 100% uptime means your infrastructure never experiences any unexpected outage. As this is virtually impossible, reputable service providers aim for at least 99.999% uptime, which translates to just over five minutes of downtime in any given year. From the perspective of a connected business, this approach ensures the optimized performance of a website or enterprise platform. It detects points of failure that can potentially cause downtime and mitigates failure by distributing the load and traffic across the infrastructure. In the event of failure, a high availability infrastructure will have failover and recovery mechanisms.
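The downtime budget implied by an uptime percentage is simple arithmetic. As a rough sketch (the helper name is my own, not from any provider's SLA), the "nines" work out like this:

```python
# Downtime budget implied by an uptime percentage (the "nines").
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(uptime_percent: float) -> float:
    """Minutes of allowed downtime per year at the given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% uptime -> {downtime_minutes_per_year(nines):.2f} min/year")
```

At 99.999% ("five nines") the budget comes out to about 5.26 minutes per year, which is where the "five minutes of downtime" figure comes from.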
What those really old systems will do, however, is fail. I don’t know about you, but I sure wouldn’t want to try to restore data from a Windows 2000 system, never mind a VAX/VMS box, an AT&T 3B2 System V Release 3.2 Unix system, or a TRS-80 Color Computer (endearingly known as CoCo). I didn’t pick these computers at random. I know people who are using all of them for production. I can also guarantee that if you’re using a “modern” but out-of-date copy of Mac OS X, Linux or Windows, you will be attacked and hacked. If your system is on the Internet, it’s only a matter of days before it’s cracked. Worse still are those embedded devices, such as Wi-Fi access points, that never get their firmware updated. Many of these contain vulnerable software, such as OpenSSL builds exposed to the Heartbleed vulnerability.
Cybersecurity is more than a technological issue—it’s a business issue. In a BoardVision video moderated by Judy Warner—editor-in-chief of NACD Directorship magazine—Mary Ann Cloyd, leader of PwC’s Center for Board Governance, and Zan M. Vautrinot, former commander of the Air Forces Cyber Command and current director of Symantec, Ecolab, and Parsons Corp., discuss effective cyber-risk oversight, addressing the following questions: How can boards communicate with management about cyber risk? How does cyber risk fit into discussions about risk appetite?
The second pillar of a successful release was made possible by what we call the “meta-solution” situation, an Alice in Wonderland kind of paradox that occurs when you build monitoring solutions that you can use to monitor your own services. To give you an idea of how this was beneficial to us, let me briefly describe the solution we were building. Plumbr is designed to detect slow and failing user transactions in an application, and automatically link such transactions to the root cause in the source code. Building such a solution meant that the task of testing, and especially performance testing, new code was reduced to processing the alerts triggered by Plumbr (the instance that was monitoring Plumbr) and fixing the exposed root causes as they appeared during the development process.
Almost all latency benchmarks are broken because almost all benchmarking tools are broken. The number one cause of problems in benchmarks is something called “coordinated omission,” which Gil refers to as “a conspiracy we’re all a part of” because it’s everywhere. Almost all load generators have this problem. We can look at a common load-testing example to see how this problem manifests. With this type of test, a client generally issues requests at a certain rate, measures the response time for each request, and puts them in buckets from which we can study percentiles later. The problem is what if the thing being measured took longer than the time it would have taken before sending the next thing? What if you’re sending something every second, but this particular thing took 1.5 seconds?
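A toy simulation makes the distortion concrete. In this assumed scenario (the numbers are illustrative, not from any real benchmark), a closed-loop load generator aims for one request per second, but because it waits for each response before sending the next, a single long stall hides all the requests that should have been issued while it was blocked:

```python
# Toy illustration of coordinated omission (illustrative numbers only).
# The generator intends 1 request/second. Most requests take 10 ms, but one
# hits a 10-second stall. A closed-loop tool records only ONE slow sample,
# omitting the requests that would have queued up behind the stall.

intended_interval = 1.0  # seconds between intended sends
samples = [0.010] * 90 + [10.0] + [0.010] * 9  # what the naive tool records

# Naive view: only 1 of 100 samples is slow, so p99 still looks fast.
naive_p99 = sorted(samples)[int(len(samples) * 0.99) - 1]

# Corrected view: requests that should have been sent at 1 s intervals
# during the 10 s stall would have waited 9 s, 8 s, ... 1 s. Add them back.
corrected = list(samples)
stall = 10.0
t = stall - intended_interval
while t > 0:
    corrected.append(t)  # latency a blocked-but-intended request would see
    t -= intended_interval

corrected_p99 = sorted(corrected)[int(len(corrected) * 0.99) - 1]
print(f"naive p99:     {naive_p99 * 1000:.0f} ms")
print(f"corrected p99: {corrected_p99 * 1000:.0f} ms")
```

The naive tool reports a p99 of 10 ms; once the omitted requests are accounted for, the p99 jumps to 8 seconds. This back-filling of missed intervals is essentially what coordinated-omission-aware tools do automatically.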
Quote for the day:
"Boring is an attitude, not the truth. Possibility is where you decide it is." -- Seth Godin