March 10, 2016

Designing a modern enterprise architecture

The reason enterprise architectures must change is the confluence of high-speed connectivity and decades of exponential Moore's Law improvements in computing power. This has enabled cheap smartphones to saturate the market and utility-scale IT service providers to create cloud services. Together, these technologies have catalyzed dramatic changes in business. Whether you call it the New Economics of Connections (Gartner) or the Unbounded Enterprise (AT&T Bell Labs), it means businesses, and consequently IT systems and applications, will increasingly interact not just with people, but with devices, virtual objects, and other software in the form of automated business processes and intelligent devices.


Biggest-Ever Blockchain Trial is Only the Beginning

Grant described the trial in similarly ambitious terms, indicating that specifications for the test, including design specs for three specific trading scenarios, were sent to four technology providers: Chain, IBM, Intel and Eris (which delivered versions of the concept on its own platform and on Ethereum). "We had [banks] issuing, trading and redeeming commercial paper, and we had every one of those banks do that in the platform," Grant said. He explained that all banks were encouraged to transact with at least one other bank over the course of the trial, suggesting that "at least 60 trades" were completed in the simulations. No real funds were exchanged as part of the test. Grant suggested that two of R3’s partners declined to participate due to what he called a "significant resource requirement".


Is DevOps good or bad for security?

Miller views that as one of the benefits of DevOps. “Because CD emphasizes having a code review process, small check-ins and rapid mitigation come with it. If you can deploy four or five times a day, you can mitigate something within hours.” The same applies to spotting breaches, says Sam Guckenheimer from Microsoft’s developer tools team. “With DevOps, you're worried about things like mean time to detect, mean time to remediate, how quickly can I find indicators of compromise. If something anomalous happens on a configuration, you have telemetry that helps you detect, and you keep improving your telemetry – so you get better detection, you get better at spotting indicators of compromise and you get better at remediation.” Continuous deployment makes life harder for attackers in two ways, Guckenheimer explains.
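
To make those metrics concrete, here is a minimal Python sketch (not taken from Microsoft's tooling; the incident records and field names are illustrative assumptions) that computes mean time to detect and mean time to remediate from a handful of incident timestamps:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when an issue started, when telemetry
# flagged it, and when the fix was deployed (all values are made up).
incidents = [
    {"started": "2016-03-01T02:10", "detected": "2016-03-01T02:25", "remediated": "2016-03-01T04:00"},
    {"started": "2016-03-04T11:00", "detected": "2016-03-04T11:05", "remediated": "2016-03-04T12:30"},
    {"started": "2016-03-08T20:40", "detected": "2016-03-08T21:20", "remediated": "2016-03-09T01:10"},
]

def hours_between(earlier, later):
    """Elapsed hours between two ISO-like timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(later, fmt) - datetime.strptime(earlier, fmt)).total_seconds() / 3600

# Mean time to detect: average gap between onset and detection.
mttd = mean(hours_between(i["started"], i["detected"]) for i in incidents)
# Mean time to remediate: average gap between detection and the deployed fix.
mttr = mean(hours_between(i["detected"], i["remediated"]) for i in incidents)

print(f"MTTD: {mttd:.2f} h   MTTR: {mttr:.2f} h")
```

The point of deploying several times a day is exactly to drive the second number down: once telemetry surfaces an indicator of compromise, the fix can ride the next routine deployment instead of waiting for a release window.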


Context is king: Aruba founder talks about future of wireless

Speaking about upcoming wireless standards, Melkote said that 802.11ad would rise to prominence within the next two years. The 60GHz technology doesn’t propagate over great distances or through thick barriers, but offers the possibility of very high throughput. “Initially, it was envisioned as a high-speed replacement for cable,” he said. “If you’re trying for coverage, it’s not the right technology, but if you’re trying to provide capacity, it can be a good technology.” But he cautioned that it is still very early in the game where 802.11ad is concerned, and that chipsets aren’t even available yet. “The big thing that I look for here is the economics – can you get to a price point that is palatable for the end user?” Melkote said.


The Data Science Puzzle, Explained

While one may not agree entirely (or even minimally) with my opinion on much of this terminology, there may still be something to be gained from it. Several concepts central to data science (central in my opinion, at least) will be examined. I will do my best to put forth how they relate to one another and how they fit together as individual pieces of a larger puzzle. As an example of somewhat divergent opinions, and prior to considering any of the concepts individually, KDnuggets' Gregory Piatetsky-Shapiro has put together the following Venn diagram, which outlines the relationship between the very same data science terminology we will be considering herein. The reader is encouraged to compare this Venn diagram with Drew Conway's now-famous data science Venn diagram, as well as with my own discussion below and the modified process/relationship diagram near the bottom of the post.


The Benefits of Hiring Freelance Big Data Experts

One of the major benefits gained from going the freelance route is flexibility. Instead of hiring a full-time data scientist to oversee all big data projects within an organization, the company hires on a per-project basis. This is especially important for smaller businesses, since the time between big data projects at that level can often be lengthy. Passing over the full-time option means a business wouldn’t have to worry about paying a big data expert when there is nothing for them to do. Hiring based on the project means a smarter use of limited resources. This added flexibility also means the business can choose data experts based on their individual talents. For example, if a big data project requires hiring a data scientist with expertise in sales, the small business can do so. Their fees aren’t based on a salary but rather on the milestones reached in the project.


Digital Hijackers – the rising threat of ransomware

Ransomware is a cyber version of kidnapping, with the same motive: money. It works like a virus that secretly encrypts files. Victims don’t get the key until they pay the ransom. It’s as if, instead of stealing your car, a thief took the car keys and put them in a safe left in your garage. You don’t get the combination to the safe, or the use of your car, unless you pay up. ... As the attacks have gotten more advanced and correspondingly expensive to develop, they have also become more costly for victims, with an average ransom of about $300 per infected host. What is an extortionate annoyance to someone trying to get their family photo library back can be a significant business expense, both in the ransom itself and in the indirect costs of operational disruption and cleanup, when faced with a data center full of affected systems.
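
To put the $300-per-host figure in perspective, here is a rough back-of-the-envelope estimate; the host count, downtime and hourly cost below are purely hypothetical assumptions, not figures from the article:

```python
# Illustrative assumptions only; the $300 average ransom is the article's figure.
ransom_per_host = 300            # USD per infected host
infected_hosts = 500             # hypothetical data-center footprint
downtime_hours = 24              # hypothetical operational disruption
cost_per_downtime_hour = 2_000   # hypothetical business impact, USD

ransom_total = ransom_per_host * infected_hosts
indirect_total = downtime_hours * cost_per_downtime_hour

print(f"Ransom demand:  ${ransom_total:,}")     # $150,000
print(f"Indirect costs: ${indirect_total:,}")   # $48,000
print(f"Total exposure: ${ransom_total + indirect_total:,}")
```

Even with modest assumptions, the indirect costs are of the same order as the ransom itself, which is the "significant business expense" the excerpt describes.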


Defining 'reachability' on the global Internet

Each geographic market has Internet Service Providers (ISPs) that connect customers to the Internet, and those local ISPs connect to larger ISPs that ultimately connect to geographies all over the world. Your website sits in data centers or in the cloud with its own Internet connectivity. This combined connection path between your website and these ISPs is how you reach different markets. These days, every business is Internet-based, which means your customers can come from any market. Even a North American-focused company is still concerned about dozens of important markets, and a global company may be connecting to customers in up to 800 markets. Knowing how well your web assets can reach a market allows you to plan business expansion, plan cloud, CDN and hosting investments, and tune your application and performance metrics by market.
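
As a rough illustration of what per-market reachability measurement might look like, here is a minimal Python sketch; the market names and probe URLs are hypothetical placeholders, and real measurements would come from vantage points inside each market rather than a single client:

```python
import time
import urllib.request

# Hypothetical probe endpoints, one per market of interest.
market_endpoints = {
    "us-east": "https://example.com/",
    "eu-west": "https://example.org/",
    "apac":    "https://example.net/",
}

def round_trip_ms(url, timeout=5):
    """Wall-clock time for a single HTTP GET, or None if it fails."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read(1024)  # read a little of the body to include first-byte time
    except OSError:
        return None
    return (time.monotonic() - start) * 1000

for market, url in market_endpoints.items():
    rtt = round_trip_ms(url)
    status = f"{rtt:.0f} ms" if rtt is not None else "unreachable"
    print(f"{market:8s} {status}")
```

A real reachability program would repeat measurements over time, use many locations per market, and fold in DNS, TLS and throughput figures, but even this toy version makes the per-market framing concrete.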


VMware Virtual SAN: The Technology And Its Future

The economics of storage are skewed in favor of all-flash for an increasing number of use cases. For me, our experience with the Virtual SAN cluster deployed as part of the Hands-On Lab (HOL) infrastructure at VMworld 2014 was an eye-opener. The storage workload generated by hundreds of concurrent, constantly churning labs is not very cache-friendly (no surprise here). As such, the VMware IT team used a large number of spindles for the capacity tier of Virtual SAN to deal with the workload “escaping” the cache. In other words, the spindles were needed for performance, not capacity. We realized that an all-flash configuration would require fewer capacity devices and would cost less! And that was already the case back in 2014. The main challenge with high-capacity, low-cost SSDs is their low endurance (typically below 1 device-write per day, guaranteed for 5 years).
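
To see why the endurance rating matters, here is a small worked example of how a drive-writes-per-day (DWPD) rating translates into total data written over the warranty period; the 3.84 TB drive size is an assumed, illustrative capacity, while the 1 DWPD and 5-year figures come from the excerpt:

```python
# Endurance math for a hypothetical 3.84 TB capacity-tier SSD.
capacity_tb = 3.84        # assumed drive capacity (illustrative)
dwpd = 1.0                # drive writes per day, per the excerpt's "below 1 DWPD"
warranty_years = 5

total_writes_tb = capacity_tb * dwpd * 365 * warranty_years
print(f"Rated endurance: ~{total_writes_tb:,.0f} TB written over {warranty_years} years")
# ~7,008 TB, roughly 7 PB, spread over five years; a write-heavy workload
# hitting the device directly would exhaust that budget far sooner.
```

That arithmetic is why, in an all-flash design, the write-intensive caching duties fall to higher-endurance flash, leaving the cheap, low-endurance devices to serve the capacity tier behind the cache.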


What is IT Service Brokering? Find out in this recent paper

In a very simple and easy-to-understand way, Moore explains the differences between cloud service brokering and service brokering, and why brokerage in IT is needed. He analyzes what makes up a service broker and which parts are IT’s responsibility, such as APIs, microservices and application services. Moore discusses where to start in becoming a service broker, as well as some of the initial challenges that IT needs to overcome. Service brokering is a new operating model for IT, and adopting it requires multiple steps, some of them substantial and time-consuming. Moore talks about navigating this transition through the automation, orchestration and transformation phases. Digital disruption is real, and for IT, among many other things, it brings a new type of integration delivery.



Quote for the day:


"There is only one thing that makes a dream impossible to achieve: the fear of failure." -- Paulo Coelho

