Daily Tech Digest - April 22, 2019

Fujitsu completes design of exascale supercomputer, promises to productize it

The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu and designed for exascale systems. The chip is based on the Armv8 architecture, which is popular in smartphones, with 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip. A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set extension specifically designed for Arm-based supercomputers. Fujitsu claims A64FX will offer peak double-precision (64-bit) floating-point performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack, which comes out to roughly one petaflop per rack. Contrast that with Summit, the top supercomputer in the world, built by IBM for Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs: a Summit rack has a peak compute of 864 teraflops.
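As a quick sanity check on that per-rack figure, using only the numbers quoted above:

\[
384 \,\text{nodes/rack} \times 2.7 \,\text{TFLOPS/node} \approx 1037 \,\text{TFLOPS} \approx 1 \,\text{petaflop per rack}
\]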



As attacks get worse and more commonplace, companies need cybersecurity professionals more and more. But because of a perfect storm of scarce skills and high demand, security jobs come with high salaries, meaning that businesses not only struggle to find the right people, they have to pay top dollar to get them. All of that means that cyber-criminals are having a field day, as the article illustrates. Attackers take advantage of ill-prepared companies, knowing that they are likely to be successful. It’s clear that the industry does need to improve, for the sake of customers and businesses alike. And to do that, we need good people with the right skills. The industry has known for a while that those people are not easy to come by – there are simply not enough of them. There are a lot of reasons for that shortage, and it’s worth bearing in mind that it’s not the easiest industry to work in; the stress of the work means that mental health issues are rife.


Node.js vs. PHP: An epic battle for developer mindshare

Suddenly, there was no need to use PHP to build the next generation of server stacks: one language was all it took to write both the Node.js code on the server and the frameworks running on the client. “JavaScript everywhere” became the mantra for some. Since that discovery, JavaScript has exploded. Node.js developers can now choose from an ever-expanding collection of excellent frameworks and scaffolding: React, Vue, Express, Angular, Meteor, and more. The list is long, and the biggest problem is choosing among excellent options. Some look at the boom in Node.js as proof that JavaScript is decisively winning, and there is plenty of raw data to bolster that view. GitHub reports that JavaScript is the most popular language in its collection of repositories, and JavaScript’s kissing cousin, TypeScript, is rapidly growing too. Many of the coolest projects are written in JavaScript, and many of the most popular hashtags refer to it. PHP, in the meantime, has slipped from third place to fourth in this ranking, and it has probably slipped even further in the count of press releases, product rollouts, and other heavily marketed moments.


Network analytics tools take monitoring to the next level

These tools help to identify problems as well as assist with capacity planning. Common tools include Simple Network Management Protocol (SNMP), syslog and Cisco NetFlow. While these tools provide some great information, they're siloed systems that work independently of one another. So, to perform the deep investigative work needed to determine the root cause of a particularly tricky network performance issue, IT staff would waste hours bouncing between tools. Modern network analytics tools provide a remedy to this time-consuming and complicated process. Network analytics software draws on traditional monitoring protocols and methods and then adds more sophisticated data flow collection methods. All collected data is then analyzed in real time using AI. By combining all data sources, the analytics platform can comb through far more information than ever before and draw accurate conclusions about network performance.
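As an illustration of the kind of cross-source correlation described here, below is a minimal, hypothetical Java sketch. The record shapes, field names, and thresholds are invented for this example; it simply assumes SNMP interface statistics and NetFlow records have already been collected into memory, flags interfaces that look unhealthy, and uses the flow data to suggest which conversations are responsible.

import java.util.*;
import java.util.stream.Collectors;

// Hypothetical, simplified telemetry records; real SNMP and NetFlow data is far richer.
record SnmpSample(String device, String iface, double utilization, double errorRate) {}
record FlowRecord(String device, String iface, String srcIp, String dstIp, long bytes) {}

public class CrossSourceCorrelation {
    public static void main(String[] args) {
        List<SnmpSample> snmp = List.of(
            new SnmpSample("edge-1", "Gi0/1", 0.97, 0.02),
            new SnmpSample("edge-1", "Gi0/2", 0.15, 0.00));
        List<FlowRecord> flows = List.of(
            new FlowRecord("edge-1", "Gi0/1", "10.0.0.5", "172.16.4.9", 700_000_000L),
            new FlowRecord("edge-1", "Gi0/1", "10.0.0.8", "172.16.4.9", 150_000_000L),
            new FlowRecord("edge-1", "Gi0/2", "10.0.1.2", "172.16.9.1", 90_000_000L));

        // Step 1: flag interfaces whose SNMP statistics look unhealthy (thresholds are arbitrary).
        List<SnmpSample> suspects = snmp.stream()
            .filter(s -> s.utilization() > 0.90 || s.errorRate() > 0.01)
            .collect(Collectors.toList());

        // Step 2: for each suspect interface, use the flow records to find the top talkers,
        // i.e. the sources most likely responsible for the congestion or errors.
        for (SnmpSample s : suspects) {
            Map<String, Long> bytesBySource = flows.stream()
                .filter(f -> f.device().equals(s.device()) && f.iface().equals(s.iface()))
                .collect(Collectors.groupingBy(FlowRecord::srcIp,
                         Collectors.summingLong(FlowRecord::bytes)));
            System.out.printf("Possible issue on %s %s (%.0f%% utilization, %.1f%% errors); top talkers: %s%n",
                s.device(), s.iface(), s.utilization() * 100, s.errorRate() * 100, bytesBySource);
        }
    }
}

A real analytics platform would, of course, ingest these feeds continuously and apply far more sophisticated analysis; the point here is only the basic pattern of joining independent data sources to shorten root-cause investigation.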


A Data Quality Framework for Big Data

Data profiling is a good first step in judging data quality, but it works differently for big data than for structured data. Structured methods of column, table, and cross-table profiling can’t easily be applied to big data. Data virtualization tools can create row/column views for some types of big data, and those views can then be profiled using relational techniques. This approach provides useful data content statistics but fails to give a full picture of the shape of the data. Visual profiling shows patterns, exceptions, and anomalies that are helpful in judging big data quality. Most “unstructured” data does have structure, but it is different from relational structure. Visual profiling will help to show the structure of document stores and graph databases, for example. Data samples can then be checked against the inferred structure to find exceptions, perhaps iteratively refining the understanding of the underlying structure. Data quality judgments and structural findings should be recorded in a data catalog, allowing data consumers to evaluate the usability of the data. With big data, quality must be evaluated as fit for purpose. With analytics, the need for data quality can vary widely by use case. The quality of data used for revenue forecasting, for example, may demand a higher level of accuracy than data used for market segmentation.
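To make the profiling idea concrete, here is a small, hypothetical Java sketch. It assumes the "big data" has already been parsed into a list of key/value records (stand-ins for JSON documents), and the field names are invented. The code infers the dominant type of each field across a sample and then flags records that deviate from that inferred structure, which is the check-samples-against-the-inferred-structure step described above, in miniature.

import java.util.*;

public class ProfileSketch {
    public static void main(String[] args) {
        // A tiny sample of semi-structured records (stand-ins for parsed JSON documents).
        List<Map<String, Object>> sample = List.of(
            Map.of("id", 1, "email", "a@example.com", "age", 34),
            Map.of("id", 2, "email", "b@example.com", "age", 28),
            Map.of("id", 3, "email", "c@example.com"),                // missing "age"
            Map.of("id", "four", "email", "d@example.com", "age", 41) // "id" has an unexpected type
        );

        // Step 1: infer the dominant type for each field across the sample.
        Map<String, Map<String, Integer>> typeCounts = new HashMap<>();
        for (Map<String, Object> rec : sample)
            rec.forEach((field, value) -> typeCounts
                .computeIfAbsent(field, f -> new HashMap<>())
                .merge(value.getClass().getSimpleName(), 1, Integer::sum));

        Map<String, String> inferred = new HashMap<>();
        typeCounts.forEach((field, counts) -> inferred.put(field,
            Collections.max(counts.entrySet(), Map.Entry.comparingByValue()).getKey()));
        System.out.println("Inferred structure: " + inferred);

        // Step 2: flag records that deviate from the inferred structure
        // (missing fields or unexpected types) as quality exceptions.
        for (Map<String, Object> rec : sample)
            inferred.forEach((field, type) -> {
                Object value = rec.get(field);
                if (value == null)
                    System.out.println("Exception: record " + rec + " is missing field '" + field + "'");
                else if (!value.getClass().getSimpleName().equals(type))
                    System.out.println("Exception: field '" + field + "' in " + rec
                        + " has type " + value.getClass().getSimpleName() + ", expected " + type);
            });
    }
}

In practice the inferred structure and the exception counts would be written to a data catalog rather than printed, so data consumers can judge fitness for purpose.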


Google Expands ML Kit, Adds Smart Reply and Language Identification

In a recent Android blog post, Google announced the release of two new Natural Language Processing (NLP) APIs for ML Kit, its mobile SDK that brings Google machine learning capabilities to iOS and Android devices: Language Identification and Smart Reply. In both cases, Google is providing domain-independent APIs that help developers analyze and generate text, speech, and other types of natural language. Both of these APIs are available in the latest version of the ML Kit SDK on iOS (9.0 and higher) and Android (4.1 and higher). ... Smart Reply allows contextually aware message response suggestions to be returned within a chat-based application. Using this feature allows for quick and accurate responses in a chat session. Gmail users have been using the Smart Reply feature for a couple of years now on the mobile and desktop versions of the service. Now developers can include Smart Reply capabilities within their own applications.
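For a sense of the developer-facing surface, here is a rough Java sketch of the Language Identification call as it appeared in the Firebase-flavored ML Kit of this period. The class and method names are recalled from that SDK and may differ in later, repackaged versions of ML Kit, so treat this as an approximation and check the current documentation before relying on it.

import android.util.Log;

import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage;
import com.google.firebase.ml.naturallanguage.languageid.FirebaseLanguageIdentification;

// A minimal sketch of on-device language identification with the Firebase ML Kit
// natural-language APIs of this era; names are approximate and may have changed.
public class LanguageIdSketch {
    private static final String TAG = "LanguageIdSketch";

    public static void identify(String text) {
        FirebaseLanguageIdentification identifier =
                FirebaseNaturalLanguage.getInstance().getLanguageIdentification();

        identifier.identifyLanguage(text)
                .addOnSuccessListener(languageCode -> {
                    if ("und".equals(languageCode)) {
                        Log.d(TAG, "Language could not be determined");
                    } else {
                        Log.d(TAG, "Identified language: " + languageCode); // e.g. "en", "fr"
                    }
                })
                .addOnFailureListener(e -> Log.e(TAG, "Identification failed", e));
        // Smart Reply follows the same asynchronous Task pattern: a client obtained from
        // FirebaseNaturalLanguage returns a short list of suggested responses for a conversation.
    }
}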


Can Blockchain Replace EDI In The Supply Chain?
“Blockchain in B2B integration brings more agility. Today, B2B integration requires that both parties know each other, at least on a technical level, to provide ways to solve issues such as nonrepudiation and acknowledgement,” writes Forrester Research principal analyst Henry Peyret in “The Future of B2B Integration.” “Forrester expects that, in the next three to five years, blockchain technologies could be used to provide additional agility in building dynamic ecosystems.” Although EDI has built a 20-year track record of reliability, the venerable technology’s main weak point is its cost. “If there’s going to be a rationale for replacement, it might just be that blockchain is cheaper,” Fearnley says. But not everyone says the transition from EDI to blockchain is a done deal. “There have been many contenders to overthrow EDI over the years, and none of them have succeeded because EDI does what it does pretty well,” says Simon Ellis, program vice president of supply chain strategies at IDC. He adds, however, “If you can make things more secure and faster, everyone will benefit.”




Despite that, Oracle stopped providing security updates for Java 8 in January 2019, in an attempt to force organizations into paid licensing agreements. Naturally, running out-of-date, insecure versions of Java is an exceptionally bad idea, presenting a conundrum to IT managers responsible for the deployment of Java applications: either pay to maintain support for something that was once used for free, or—if even possible—attempt to move an application off Java entirely. There is a viable third option, however: using a non-Oracle distribution of Java. Because Java is still fundamentally open source, any organization that wishes to ship its own patched version of OpenJDK can do so. Red Hat—which contributes to Java upstream and ships a number of its own products built on Java—is doing just that. Red Hat is taking up the mantle of OpenJDK maintainer for versions 8 and 11, which will be supported until June 2023 and October 2024, respectively. New features are not expected for either version, as both are essentially in maintenance mode.
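For teams inventorying whether their deployments are still running an Oracle build before switching distributions, a quick check is to read the standard JVM system properties. This is plain Java and works on any distribution; the vendor strings shown in the comments are only examples of what different builds report.

// Prints which Java distribution and version the current JVM is running.
public class JdkInventory {
    public static void main(String[] args) {
        // Typical values: "Oracle Corporation", "Red Hat, Inc.", and so on.
        System.out.println("Vendor:  " + System.getProperty("java.vendor"));
        // e.g. "1.8.0_202" for Java 8 or "11.0.2" for Java 11
        System.out.println("Version: " + System.getProperty("java.version"));
        // e.g. "OpenJDK 64-Bit Server VM" vs. "Java HotSpot(TM) 64-Bit Server VM"
        System.out.println("VM name: " + System.getProperty("java.vm.name"));
    }
}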




Data center workers happy with their jobs -- despite the heavy demands
Overall satisfaction is pretty good, with 72% of respondents generally agreeing with the statement “I love my current job,” and a third strongly agreeing. And 75% agreed with the statement, “If my child, niece or nephew asked, I’d recommend getting into IT.” There is also a feeling of significance among data center workers, with 88% saying they feel they are very important to the success of their employer. That’s despite some challenges, not the least of which is a skills and certification shortage. Survey respondents cite a lack of skills as the biggest area of concern. Only 56% felt they had the training necessary to do their job, and 74% said they had been in the IT industry for more than a decade. The industry offers certification programs (every major IT hardware provider has them), but 61% said they have not completed or renewed a certification in the past 12 months. There are several reasons why: a third (34%) said it was due to a lack of a training budget at their organization, while 24% cited a lack of time, 16% said management doesn’t see a need for training, and 16% cited the absence of a training plan within their workplace.




Closing the cyber security gender gap reflects the realities of the larger global cyber environment, where there is diversity in gender, politics, society, economics, and culture. The bad guys are not only diverse in their thinking and actions; so are potential foreign security partners. As such, different perspectives and experiences are a necessary complement to an industry that often hits an obstacle when it comes to language and terminology. More importantly, greater inclusion in cyber security starts to tear down the antiquated perception that the profession is geared toward males. This is almost ironic considering that women have played prominent roles in computing, including programming, designing the computer systems that ran the U.S. census, and writing the software that supported the Apollo 11 mission. Addressing the cultural perception of the cyber security industry is necessary in order to level out employment. Part of this requires a review to ensure that compensation levels are equal. According to a 2017 global information security study, women earned less than their male counterparts at every level.





Quote for the day:


"Surprise yourself every day with your own courage." -- Denholm Elliott

