First, you need rules that lay out who can do what with which information, through which providers, and in which regions and time zones. Losing control of that is one of the biggest stumbling blocks organizations come across, Cancila said. "You want to be able to position your organization to use cloud services effectively, but you have to retain some level of control so that you meet the requirements of the business," she said. Of course, organizations already use authentication mechanisms like Microsoft Active Directory so users and computers can access systems. If they're using Azure, they can also manage users directly in Azure AD, a separate directory of users that lives in the cloud.
Efficiency is a great place to start, and there are still thousands of colocation centers with PUE values of over 2.0, when a good PUE, depending on regional climate, should be 1.2 to 1.45. This means that, on average, colocation facilities are burning 30% or more energy than they should to support their hosted IT equipment. This inefficiency is bad enough, but when you combine it with the dirty energy mix that most of these 3,685 data centers run on, the story only gets worse. Consider that a data center running at a PUE of 2.0 on natural gas produces less carbon output than a 1.4 PUE data center running on coal-generated energy. Combining a high PUE with a dirty energy supply only exacerbates the situation.
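The comparison above follows from simple arithmetic: total facility carbon scales with PUE times the grid's carbon intensity, so a cleaner fuel mix can outweigh a better PUE. A minimal sketch, using rough public intensity estimates (not figures from the article):

```python
# Hypothetical illustration: facility carbon = IT load x PUE x grid intensity.
# The intensity values below are rough estimates for natural gas and coal.

def annual_co2_tonnes(it_load_kw: float, pue: float, kg_co2_per_kwh: float) -> float:
    """Estimate annual CO2 (tonnes) for a facility with the given IT load."""
    hours_per_year = 8760
    total_kwh = it_load_kw * pue * hours_per_year  # facility draw = IT load x PUE
    return total_kwh * kg_co2_per_kwh / 1000.0     # kg -> tonnes

# 1 MW of IT load, rough grid intensities (kg CO2 per kWh):
gas_at_2_0 = annual_co2_tonnes(1000, 2.0, 0.4)   # inefficient facility, gas power
coal_at_1_4 = annual_co2_tonnes(1000, 1.4, 0.9)  # efficient facility, coal power

print(round(gas_at_2_0), round(coal_at_1_4))  # 7008 11038
```

Even at a PUE of 2.0, the gas-powered site emits less than the efficient coal-powered one, which is why energy mix and efficiency have to be considered together.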
HDInsight, our Apache Hadoop-based service, is a key part of the Azure Data Lake. As one of the fastest growing services in Azure, HDInsight gives you the breadth of the Hadoop ecosystem in a managed service that’s monitored and supported by Microsoft. Furthering our commitment to productivity, we’ve also updated our Visual Studio tools for authoring, advanced debugging, and tuning of Hive queries and Storm topologies running in HDInsight. Today, we are announcing the general availability of HDInsight on Linux. We worked closely with Hortonworks and Canonical to provide the HDP™ distribution on the Ubuntu operating system that powers the Linux version of HDInsight in the Data Lake.
The first performance and scalability challenge then is how to keep up with the latest open software progressions, adopting them and getting them to work together. This challenge is where the IBM Open Platform for Apache Hadoop can help. It provides a collection of the latest versions of Hadoop ecosystem components that have been tested, tuned and packaged for easy consumption. This collection also paves the way to exploit even more advanced big data and analytics software tools offered with IBM InfoSphere BigInsights. The next performance and scalability challenge exists below the open software at the physical infrastructure—the full scale-out architecture that represents the compute, networking and storage sprawl.
Here's the kicker: Smith and Shmatikov are moving forward with this research with a grant from Google, the company that helped to put deep learning on the computing map with its research in the first place. In other circles, Google is also known as the company that has played a not-so-small role in making online privacy, or lack thereof, a growing concern. Google bestowed the grant -- the amount of which was not reported -- under its Faculty Research Awards program, which gives one-year awards structured as unrestricted gifts to universities to support research in a range of subjects that might benefit from collaboration with Google, according to Penn State.
Called Unite, the architecture is embodied in the company’s Junos operating system software and encompasses a handful of new and existing Juniper products. They include the EX9200 switch, the Junos Space Network Director management system, and third-party products integrated through Juniper’s Open Converged Framework. Unite is intended to enable enterprises to build private clouds and then interconnect them with public cloud infrastructures in a hybrid environment for application access and delivery. At the heart of it is Junos Fusion Enterprise, Junos software designed to provide a single point of network configuration and management for the enterprise network. Junos Fusion Enterprise allows customers to collapse multiple network layers into a single enterprise cloud, Juniper says.
“In IT we’re changing the ways of working from waterfall to agile,” says Shivanandan. Adopting agile has been the first step in turning the ship, and now approximately 70% of Aviva’s IT work is performed in an agile manner. Not all of this is taking place in the digital garage – the transformation is taking place across the entire business. But switching from traditional methods of working, where there is pressure to get it right the first time, to an agile approach, where staff are encouraged to “fail fast and learn”, requires time and effort. Agile coaches are being used across the business to train employees in methodologies, standup meetings and ways of working.
Nowadays, it is almost impossible to prevent employees from using social media sites – Facebook, Twitter, LinkedIn, Instagram, Pinterest – while at work. Some businesses are fine with that, even encouraging employees to promote the company and its products or services on social media. At the same time, however, they don’t want productivity to slip, or to have workers portray the company negatively on popular social media channels. So what steps can organizations realistically take to limit or control social media use while at work, without seeming like Big Brother or forbidding its use? Following are five expert tips, along with a sidebar on the legal ramifications of using social media for work or at the office.
To understand why this might be the case, you have to remember that under the current system there are (generally) two stages to a transaction: first you have the assignment of rights and responsibilities (X promises to pay Y $5; this obligation is recorded as a debit in X’s account and a credit in Y’s), then you have the settlement ($5 is actually transferred from X’s account to Y’s). With ACH (the system almost all U.S. banks use to transfer money between accounts), and the systems that run on it, settlement generally takes at least a couple of days, which means that if fraudulent activity is detected before the actual transfer happens, the transfer of funds can be stopped, and Y never gets his ill-gotten payment. This would not be possible in a real-time system, because the transfer of money would occur nearly instantly.
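The two-stage model above can be sketched as a toy ledger (names and structure are illustrative only, not any real ACH interface): the obligation is recorded immediately, but funds only move at settlement, so a transfer flagged as fraudulent during the delay can be cancelled before money changes hands.

```python
# Minimal sketch of the authorize-then-settle model and the fraud-reversal
# window it creates. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict
    pending: list = field(default_factory=list)

    def authorize(self, payer: str, payee: str, amount: int) -> int:
        """Stage 1: record the obligation (debit/credit). No money moves yet."""
        self.pending.append([payer, payee, amount])
        return len(self.pending) - 1  # transaction id

    def cancel(self, txn_id: int) -> None:
        """Fraud detected before settlement: drop the pending transfer."""
        self.pending[txn_id] = None

    def settle(self) -> None:
        """Stage 2: actually move funds for every surviving obligation."""
        for txn in self.pending:
            if txn is not None:
                payer, payee, amount = txn
                self.balances[payer] -= amount
                self.balances[payee] += amount
        self.pending.clear()

ledger = Ledger({"X": 100, "Y": 0})
txn = ledger.authorize("X", "Y", 5)  # X promises Y $5
ledger.cancel(txn)                   # caught during the multi-day window
ledger.settle()
print(ledger.balances)  # {'X': 100, 'Y': 0}
```

In a real-time system, settlement would effectively run inside `authorize`, leaving no window in which `cancel` could act.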
Service-managed keys can give you the assurances of per-tenant and per-subscription keys, with segregation of duties and auditing, without the headache of managing keys. “But with BYOK, we're requesting customers get involved in a significant way,” Plastina says. “That means setting up vaults, managing vaults; in some cases, that requires HSM-backed keys, so they’re purchasing an HSM on premises, they have to run their own quorums for administrators’ smart cards and PINs, they have to save smartcards in the right place. It definitely raises the burden on them.”
Quote for the day: "Organizations are most vulnerable when they are at the peak of their success." -- R.T. Lenz