The big question is: Can they keep up with the demand? Here's the thing: Amazon is dominating that space in ways that have every other company (even Google) shaking its head. So when you have the likes of Google, Microsoft, and IBM playing a serious game of catch-up with the big 'Zon, you know the demand is nowhere near the supply. No surprise, right? What is surprising, however, is that Google does not rank at the top of the heap. Considering the global dominance of the Android platform, one would think Google would lead the top seven providers, but it doesn't come close to Amazon's IaaS profit. In 2015, Amazon Web Services drew in over $7 billion in profit, compared to a mere $281 million for Google Compute Engine. Google knows it is lagging behind Amazon and is doing everything it can to shrink the gap.
The next BriefingsDirect expert panel discussion examines the value and direction of The Open Group IT4IT initiative, a new reference architecture for managing IT to help businesses become digitally innovative. ... This panel, conducted live at the event, explores how the reference architecture grew out of a need at some of the world's biggest organizations to make their IT departments more responsive and more agile. We’ll learn how those IT departments within an enterprise and the vendors that support them have reshaped themselves, and how others can follow their lead. The expert panel consists of Michael Fulton, Principal Architect at CC&C Solutions; Philippe Geneste, a Partner at Accenture; Sue Desiderio, a Director at PricewaterhouseCoopers; Dwight David, Enterprise Architect at Hewlett Packard Enterprise (HPE); and Rob Akershoek, Solution Architect IT4IT at Shell IT International.
A more promising approach is to use machine-vision algorithms that rely on neural networks to extract features from an image and use these to produce a sketch. In this area, machines have begun to rival and even outperform humans in producing accurate sketches. But what of the inverse problem? This starts with a sketch and aims to produce an accurate color photograph of the original face. That’s clearly a much harder task, so much so that humans rarely even try. Now the machines have cracked this problem. Today, Yagmur Gucluturk, Umut Guclu, and pals at Radboud University in the Netherlands have taught a neural network to turn hand-drawn sketches of faces into photorealistic portraits. The work is yet another demonstration of the way intelligent machines, and neural networks in particular, are beginning to outperform humans in an increasingly wide variety of tasks.
Traditional reliance on policies, procedures, and training to promote confidentiality is also no longer effective when data integrity is threatened because the data is not accessible, says Paul Bond, a partner in the Reed Smith law firm who specializes in IT and privacy issues. With the availability of health data in peril, organizations must have contingency plans in place so they know what to do when facing a ransomware incident. Should they pay the ransom and get their data back? Some organizations may not have an alternative if their data backup processes were not optimal. Some hospitals have paid ransom. For example, Hollywood Presbyterian Medical Center in Los Angeles struggled for 10 days to regain its data, then paid $17,000 in Bitcoin, a digital currency, to regain access to its data. Kansas Heart Hospital paid an undisclosed amount of ransom but did not get back all its data after the attackers demanded another ransom, and the hospital refused.
“Any multi-party process where shared information is necessary to the completion of transactions and the coordination of activity and the exchange of value — that’s where blockchain technology can be put to good use,” Ms. Masters told attendees of The Wall Street Journal’s CFO Network in Washington D.C. “It’s one of the great opportunities, I think, in the financial services sector,” Ms. Masters said. “We’re talking about billions of dollars in annual savings for the banking industry.” ... Blockchain can help companies in all industries manage the movement of money in exchange for goods and services across multiple parties in a secure, timely, and coordinated way. Instituting a centralized, encrypted repository for such information can help companies complete complicated transactions more efficiently, she explained.
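The tamper-evident shared record at the heart of this idea can be sketched in a few lines. Below is a toy illustration (plain Python, not any bank's production system, and the party names are invented): each entry commits to the hash of the previous entry, so no participant can quietly alter history without invalidating everything that follows.

```python
import hashlib
import json

def entry_hash(entry):
    """Stable SHA-256 digest of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger, payload):
    """Add a record that commits to the previous record's hash."""
    prev = entry_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"prev": prev, "payload": payload})

def verify(ledger):
    """True iff every entry still matches its successor's 'prev' hash."""
    return all(ledger[i + 1]["prev"] == entry_hash(ledger[i])
               for i in range(len(ledger) - 1))

ledger = []
append(ledger, {"from": "BankA", "to": "BankB", "amount": 100})
append(ledger, {"from": "BankB", "to": "BankC", "amount": 40})
print(verify(ledger))                 # True: the chain is intact
ledger[0]["payload"]["amount"] = 999  # a party tampers with history
print(verify(ledger))                 # False: tampering is detected
```

A real blockchain adds consensus and replication across the parties; the hash-chaining shown here is only the tamper-evidence piece.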
A probabilistic programming language is a high-level language that makes it easy for a developer to define probability models and then “solve” these models automatically. These languages incorporate random events as primitives, and their runtime environment handles inference. The result is a style of programming that enables a clean separation between modeling and inference. This can vastly reduce the time and effort associated with implementing new models and understanding data. Just as high-level programming languages transformed developer productivity by abstracting away the details of the processor and memory architecture, probabilistic languages promise to free the developer from the complexities of high-performance probabilistic inference. What does it mean to perform inference automatically? Let’s compare a probabilistic program to a classical simulation such as a climate model. A simulation is a computer program that takes some initial conditions such as historical temperatures, estimates of energy input from the sun, and so on, as an input.
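The modeling/inference separation can be sketched without any real probabilistic programming language. In this minimal sketch (plain Python, with rejection sampling standing in for the generic inference engine a PPL runtime would supply), the model only describes how data is generated; inference never appears inside it:

```python
import random

def model():
    """Generative model: a coin with unknown bias produces 10 flips."""
    bias = random.random()  # prior: bias ~ Uniform(0, 1)
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

def infer(observed_heads, samples=100_000):
    """Generic inference: keep prior draws whose simulated data
    match the observation, then average the accepted parameters."""
    accepted = [bias
                for bias, flips in (model() for _ in range(samples))
                if sum(flips) == observed_heads]
    return sum(accepted) / len(accepted)  # posterior mean of the bias

random.seed(0)
print(round(infer(8), 2))  # posterior mean after seeing 8 heads in 10 flips
```

Note that `infer` knows nothing about coins: swapping in a different `model` requires no change to the inference code, which is the productivity win the paragraph describes. Real PPL runtimes replace rejection sampling with far more efficient algorithms.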
The primary reason is really to expose information that's kind of "stove piped" in all our legacy systems and make that available [while also] protecting it from our adversaries. We're moving into the SOA environment precisely for that reason. Most of the legacy systems ... were built on a client-server framework. ... Data is kind of bottled up in those databases. And with the SOA middleware layer, we're exposing that data and making it available to other users without building custom interfaces that pretty quickly become expensive to manage. The success of [these] money-saving and time-saving innovations is critical to the Air Force's ability to operate, particularly in a fiscally constrained environment. We can show case after case of reuse of the SOA environment where we've been able to transition quickly to another operational need, make connections and make data available very rapidly.
More and more, the CIO is taking a leadership role in digital strategy. When it comes to digital transformation, however, the CIO can’t go it alone. Digital transformation requires collaboration, and a joint set of initiatives that combine business and technology. “We’re not just talking about IT for IT’s sake, but about innovation with the business around business capabilities,” says Snyder. Digital disruption, he explains, is no longer just about developing new business models — which was the biggest expectation last year. In 2016, expectations have shifted to focus on digital transformation in the form of new and innovative products and services, as well as new forms of customer engagement. “That’s why digital transformation must be done collaboratively,” says Snyder. “You can’t do this without the rest of the business...it is the business.”
For simple applications, external configuration for dependency addresses may well be sufficient. For applications of any size, though, it's likely that we'll want to move beyond simple point-to-point wiring and introduce some form of load balancing. If each of our services depends directly on a single instance of its downstream services, then any failure in the downstream chain is likely to be catastrophic for our end users. Likewise, if a downstream service becomes overloaded, then our users pay the price through increased response times. What we need is load balancing. Instead of depending directly on a downstream instance, we want to share the load across a set of downstream service instances. If one of these instances fails or becomes overloaded, the other instances can pick up the slack. The simplest way to introduce load balancing into this architecture is to use a load-balancing proxy.
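The idea can be sketched as a client-side round-robin balancer with failover. This is a minimal illustration (the instance addresses and the `send` transport are hypothetical stand-ins for a real network call, not any particular proxy's API):

```python
import itertools

class LoadBalancer:
    """Round-robin over a set of downstream instances, skipping failures."""

    def __init__(self, instances):
        self.instances = list(instances)
        self._cycle = itertools.cycle(self.instances)

    def call(self, request, send):
        """Try instances in round-robin order until one succeeds."""
        last_error = None
        for _ in range(len(self.instances)):
            instance = next(self._cycle)
            try:
                return send(instance, request)  # delegate the network call
            except ConnectionError as err:
                last_error = err                # instance down: try the next
        raise RuntimeError("all downstream instances failed") from last_error

# Usage with a fake transport in which one instance is down:
def fake_send(instance, request):
    if instance == "10.0.0.2:8080":
        raise ConnectionError(instance)
    return f"{instance} handled {request}"

lb = LoadBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
print(lb.call("GET /orders", fake_send))  # 10.0.0.1:8080 handled GET /orders
```

A dedicated load-balancing proxy (HAProxy, NGINX, or a cloud load balancer) moves this logic out of the client, but the behavior it provides — spreading requests and routing around failed instances — is the same.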
- Make sure all systems are promptly updated with the latest operating system security patches.
- Enforce anti-malware scanning across all departments, and ensure your malware signature databases are up to date.
- Implement content-based scanning and filtering on email servers, particularly where access to cloud services such as Gmail, Yahoo Mail, and Outlook.com is permitted from the enterprise network.
- Restrict users’ access to only those systems that are necessary for their roles; avoid “access sprawl.”
- Use two-factor authentication, so a stolen password isn’t enough to grant access.
- Ensure user accounts are de-provisioned promptly; there should be no orphaned accounts of former employees, especially those who served in a technical role.
- Deploy and maintain a comprehensive backup system, including offsite storage, in case files need to be restored.
Quote for the day:
"Be decisive. A wrong decision is generally less disastrous than indecision." -- Bernhard Langer