July 22, 2015

Java 9's New HTTP/2 and REPL

All of this means that support for HTTP/2 is a core piece of Java functionality for the next decade. It also allows us to revisit our old assumptions, rewrite the APIs and provide a "second bite of the apple". HTTP/2 will be a major API for every developer for years to come. The new API makes a clean break with the past by abandoning any attempt to maintain protocol independence. Instead, the API focuses solely on HTTP, but with the additional understanding that HTTP/2 does not fundamentally change HTTP's semantics. The API can therefore be independent of HTTP version, whilst still providing support for the new framing and connection-handling parts of the protocol.
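As a rough sketch of that version-independent style, here is what a simple GET looks like with the HttpClient API in the shape it eventually standardized on (java.net.http in JDK 11; it incubated in JDK 9 as jdk.incubator.http and some names differed during development, so treat the details below as illustrative rather than the exact API the article describes):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Sketch {
    public static void main(String[] args) throws Exception {
        // The client negotiates the protocol; application code stays version-independent.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)   // prefer HTTP/2, fall back to HTTP/1.1
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Reports whichever version the server actually negotiated.
        System.out.println(response.version() + " " + response.statusCode());
    }
}

The request and response handling are identical whether the server negotiates HTTP/2 or falls back to HTTP/1.1, which is exactly the version independence the excerpt describes.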


4 ways to manage an overwhelming number of IT initiatives

While it may be tempting to simply stop entertaining new initiatives, this course of action is fraught with risk. Many IT services can now be provisioned with little more than a credit card, and any gaps can be filled by armies of willing consultants. Hanging out a metaphorical "No room at the inn" sign may cause constituents to go elsewhere. Furthermore, technology is changing very rapidly, and a new initiative may invalidate one or more existing initiatives. A new cloud service being requested by operations could eliminate a costly application upgrade or reporting tool, just as a new request by marketing could finally gain support for a less exciting, but dependent, infrastructure upgrade.


What Is A Creative Data Scientist Worth?

For some time, pockets of IT innovators have been creating industrial art which appeals to the head as well as the heart. People like the late Steve Jobs and Apple design chief Jonathan Ive - true IT artists. I remember laying eyes on their ‘iLamp’ G4 iMac back in 2002. It was so original and ridiculously gorgeous. For the first time in my life, I forgot about MB, GB, or GHz. I just wanted an iMac. And now, some data science outputs are being considered fine art in their own right. As well as creating competitive advantage, spawning new products, identifying fraud patterns, and changing business processes in ways that, until now, could only live in the imagination, these beautiful, hypnotic images are adding a new dimension: bringing data analytics to life.


What’s behind Linux’s new Cloud Native Computing Foundation?

The CNCF is advancing the discussion to consider how containers should be managed, not just how they’re created. That’s a good thing for the industry, and for end users. Big enterprise buyers aren’t going to really use containers until there are mature platforms for managing them. ... Because the CNCF is attempting to create a reference architecture for running applications and containers, and Google’s Kubernetes will likely play a big role in that. AWS and Microsoft already have a reference architecture for running containers, and they’re not looking to support competitor Google’s.


Unpacking technical jargon in machine learning

Machine learning is a child of statistics, computer science, and mathematical optimization. Along the way, it took inspiration from information theory, neuroscience, theoretical physics, and many other fields. Machine learning papers are often full of impenetrable mathematics and technical jargon. To make matters worse, sometimes the same methods were invented multiple times in different fields, under different names: what statisticians call logistic regression, for example, resurfaces in natural language processing as the maximum-entropy classifier. The result is a new language that is unfamiliar even to experts in one of the originating fields. As a field, machine learning is relatively young. Large-scale applications of machine learning only started to appear in the last two decades, and that growth aided the development of data science as a profession.


Next-generation endpoint protection not as easy as it sounds

The value of endpoint protection platforms is that they can identify specific attacks and speed the response to them once they are detected. They do this by gathering information about communications that go on among endpoints and other devices on the network, as well as changes made to the endpoint itself that may indicate compromise. The database of this endpoint telemetry then becomes a forensic tool for investigating attacks, mapping how they unfolded, discovering what devices need remediation and perhaps predicting what threat might arise next.
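As a purely hypothetical sketch of that architecture (the types and field names below are invented for illustration, not taken from any product), the telemetry store might hold records like this, which can later be queried forensically:

import java.time.Instant;
import java.util.List;

// Hypothetical shape of one endpoint telemetry event: who talked to whom,
// and what changed on the host. Field names are illustrative only.
record EndpointEvent(Instant time, String host, String peer,
                     String processName, String fileChanged) { }

class ForensicQueries {
    // Forensic use of the telemetry database: given one known-bad peer,
    // map which hosts communicated with it and may need remediation.
    static List<String> hostsTouching(List<EndpointEvent> events, String badPeer) {
        return events.stream()
                .filter(e -> badPeer.equals(e.peer()))
                .map(EndpointEvent::host)
                .distinct()
                .toList();
    }

    public static void main(String[] args) {
        List<EndpointEvent> db = List.of(
                new EndpointEvent(Instant.now(), "laptop-17", "203.0.113.9",
                        "svchost.exe", null));
        System.out.println(hostsTouching(db, "203.0.113.9")); // [laptop-17]
    }
}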


The Importance Of Design Thinking For Big Data Startups

Rather than thinking about competing products, think about competing processes. If you are selling into the marketing department, what is their current process for accomplishing the task you are serving? Your goal should be to make that process faster and more efficient. Additional features are great but if a potential client can do X faster with their current process, your offering of Y & Z doesn’t matter. You may win early but it will be difficult to last. Slack has done a remarkable job of accomplishing this. They are in a super competitive space of workplace collaboration. They did not win because of features; they won because of ease and simplicity.


Information security governance maturing, says Gartner

"The primary reasons for establishing this reporting line outside of IT are to improve separation between execution and oversight, to increase the corporate profile of the information security function and to break the mindset among employees and stakeholders that security is an IT problem," said Scholtz. According to Gartner, organisations increasingly recognise that security must be managed as a business risk issue, and not just as an operational IT issue. There is also an increasing understanding that cyber security challenges go beyond the traditional realm of IT into areas such as operational technology and the internet of things (IoT).


Hadoop for HPC—It Just Makes Sense

An increasing number of companies that already use High Performance Computing (HPC) clusters running a Lustre file system for simulations see the value of their existing and future data, and are interested in what that data might reveal when they run Hadoop analytics on it. But building out a Hadoop cluster with massive amounts of local storage and replicating their data on the Hadoop Distributed File System (HDFS) is an extensive and expensive undertaking, especially when the data already resides in a POSIX-compliant Lustre file system. Today, these companies can adopt analytics written for Hadoop and run them on their HPC clusters.
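A minimal sketch of why this works: because Lustre is POSIX-compliant and mounted like an ordinary file system, Hadoop's generic FileSystem API can address it through file:// URIs instead of hdfs:// (the /mnt/lustre mount point below is made up, and production deployments typically use a dedicated Lustre adapter, which this sketch omits):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LustreAsHadoopFs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point Hadoop at the POSIX-mounted file tree instead of an HDFS namenode.
        conf.set("fs.defaultFS", "file:///");

        FileSystem fs = FileSystem.get(conf);
        // "/mnt/lustre" is an illustrative mount point, not a standard one.
        for (FileStatus status : fs.listStatus(new Path("/mnt/lustre/simulations"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
    }
}

The trade-off is that HDFS-style data locality is lost and reads cross the cluster interconnect; the usual argument is that HPC fabrics are fast enough to compensate.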


7 Habits of Highly Effective Monitoring Infrastructures

Monolithic monitoring tools, on the other hand, often assume that you’ll never need to export the data they collect for you. The classic example is Nagios, which is, as you probably know, a tool designed to collect availability data at around one-minute resolution. Because Nagios views itself as a monolithic monitoring tool, a plethora of single-purpose tools have sprung into being for no other purpose than to take data from Nagios and place it in X, where X is some other monitoring tool from which it is usually even more difficult to extract the monitoring data. What you end up with is the now-infamous anti-pattern of overly complex, difficult-to-repurpose, impossible-to-scale, single-purpose monitoring systems. Each tool we add to this chain locks us further into the rest of the toolchain by making it more difficult to replace any single piece.
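To make the contrast concrete, here is a hypothetical sketch of the opposite design, where each check result is written in a flat, self-describing form that any downstream consumer can read without a tool-specific extraction layer (the format and names are invented for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

public class PortableCheckLog {
    // Append one availability sample in a trivially parseable, tool-agnostic
    // form: epoch seconds, check name, and status. Nothing downstream needs
    // to know which monitoring tool produced it.
    static void record(Path log, String check, boolean up) throws IOException {
        String line = Instant.now().getEpochSecond() + " " + check + " "
                + (up ? "up" : "down") + System.lineSeparator();
        Files.writeString(log, line,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        record(Path.of("availability.log"), "web-frontend-http", true);
    }
}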



Quote for the day:

"Corporate governance is not a matter or right or wrong 'it is more nuanced than that." -- Advocate Johan Myburgh
