Daily Tech Digest - August 01, 2018

What is WebAssembly? The next-generation web platform explained
WebAssembly, developed by the W3C, is, in the words of its creators, a “compilation target.” Developers don’t write WebAssembly directly; they write in the language of their choice, which is then compiled into WebAssembly bytecode. The bytecode is then run on the client—typically in a web browser—where it’s translated into native machine code and executed at high speed. WebAssembly code is meant to be faster to load, parse, and execute than JavaScript. When WebAssembly is used by a web browser, there is still the overhead of downloading the WASM module and setting it up, but, all other things being equal, WebAssembly runs faster. WebAssembly also provides a sandboxed execution model, based on the same security models that exist for JavaScript now. Right now, running WebAssembly in web browsers is the most common use case, but WebAssembly is intended to be more than a web-based solution. Eventually, as the WebAssembly spec shapes up and more features land in it, it may become useful in mobile apps, desktop apps, servers, and other execution environments.



Improving Testability of Java Microservices with Container Orchestration & Service Mesh


This article shows how container orchestration provides an abstraction over service instances and facilitates replacing them with mock instances. On top of that, service meshes enable us to re-route traffic and inject faulty responses or delays to verify our services' resiliency. We will use a coffee shop example application that is deployed to a container orchestration and service mesh cluster. We have chosen Kubernetes and Istio as the example environment technologies. Let’s assume that we want to test the application’s behavior without considering other, external services. The application runs and is configured in the same way as in production, so that later on we can be sure it will behave in exactly the same way. Our test cases will connect to the application by using its well-defined communication interfaces. External services, however, should not be part of the test scenario. In general, test cases should focus on a single object-under-test and mask out all the rest. Therefore, we substitute the external services with mock servers, as sketched below.
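To make the mock-server idea concrete, here is a minimal sketch of such a stand-in written with the JDK's built-in HTTP server; the class name, port, endpoint, and response payload are hypothetical and not taken from the article's coffee shop example.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a mock server that stands in for an external service during tests.
// The "/processes" endpoint and the response body are illustrative assumptions.
public class MockBaristaServer {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/processes", exchange -> {
            byte[] body = "{\"status\":\"ACCEPTED\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

Packaged as its own deployment, a mock like this can be reached under the same Kubernetes Service name the application already uses for the real dependency, and Istio routing rules can direct traffic to it or inject delays, so the application under test keeps its production configuration.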


Three steps to improve data fidelity in enterprises


Data fidelity requires the contextual evaluation of data in terms of security. This means examining the data objects within the context of the environment in which they were created. In order to gather this data, you must not only re-examine what you deem important, but you must do so within the context of the tasks that you are attempting to support. The task support piece is critical because this bounds the problem space in which you can work. If the problem space is not bounded, all of the solutions will remain brittle point solutions that continue to fail when new problems are introduced. The ways systems can fail seem endless, but the ways systems can perform correctly are limited. This characteristic is key in any analysis that requires accurate predictions. This same characteristic, however, is often overlooked when attempting to accurately predict outcomes in the cyber domain. Three disciplines can assist in creating the boundaries and gathering the contextual data required to ensure data fidelity: dependency modeling, resiliency and reliability.


Key steps to success with self-service analytics

Gartner predicts that by 2020 the number of data and analytics experts in business units will grow at three times the rate of those in IT units. With that in mind, isn’t creating a culture that values data an absolute imperative? Creating a community of practice (COP) is not as simple as ‘training’ often sounds. Just as Agile methods can quickly turn ‘tragile’ or ‘fragile’ if the team isn’t bought into the approach, self-service will fail if there isn’t a data-driven culture that champions best practices. A COP uses training first to promote consumption for the business, and second to build SMEs who will champion best practices for future builds. All areas of the enterprise are involved in creating this community: technical SMEs, novice developers and business consumers all interact during technical and tool-agnostic sessions. To further growth and development across varying levels of BI maturity, smaller break-out sessions are used to connect business units with similar use cases or audiences, so they can work together on their BI solutions. By creating a community of practice, you are fostering a culture that understands BI best practices and is encouraged to hone and develop new skills.


Which two companies will lead the enterprise Internet of Things?

The biggest opportunities, the survey said, were in platforms supporting manufacturing and service applications. These enterprise IoT platforms, according to data and analytics firm GlobalData, “have become important enablers across a wide swathe of enterprise and industrial operations” by helping businesses become more productive, streamline their operations, and gain incremental revenues by connecting their devices and products to IoT sensors that collect a wide variety of environmental, usage, and performance data. The platforms are designed to help businesses collect, filter, and analyze data in a variety of applications that can help organizations make data-driven business, technology, and operational decisions. But which eIoT platforms are best positioned to lead the “dynamic and highly competitive” eIoT market? To find out, U.K.-based GlobalData conducted a “comprehensive analysis … with profiles, rankings, and comparisons of 11 of the top global platforms,” including Amazon, Cisco, GE, Google, HPE, Huawei, IBM, Microsoft, Oracle, PTC, and SAP.


AI can deliver 'faster better cheaper' cybersecurity

"We need to be able to make good cybersecurity services accessible to small and medium businesses, and consumers, and so we see a great opportunity in that regard," Ractliffe said. "Bluntly, we can see 'better faster cheaper' means of delivering cybersecurity through artificial intelligence and automation." Australia's defence scientists are also turning to AI techniques in the military's increasingly complex networked environment. "When we look at a system like a warship, it is now completely networked ... so that in itself creates a vulnerability," said Australia's Chief Defence Scientist Dr Alex Zelinsky at the Defence Science and Technology Group (DSTG). The internet is a "best effort" network. Malicious actors can slow down network traffic, or even divert it to where it can be monitored. This can happen in real time, and the challenge is how to detect that, and respond as quickly as possible. "I think that's where the AI elements come in," Zelinsky said. But one of the challenges of using AI in a protective system, or in the potential offensive systems that Zelinsky hinted that DSTG is working on, is explainability.


Digital trust: Security pros, business execs and consumers see it differently

“We are at a crossroads in the information age as more companies are being pulled into the spotlight for failing to protect the data they hold, so with this research, we sought to understand how consumers feel about putting data in organizations’ hands and how those organizations view their duty of care to protect that data,” said Jarad Carleton, industry principal, Cybersecurity at Frost & Sullivan. “What the survey found is that there is certainly a price to pay – whether you’re a consumer or you run a business that handles consumer data – when it comes to maintaining data privacy. Respect for consumer privacy must become an ethical pillar for any business that collects user data.” Responses to the survey showed that the Digital Trust Index for 2018 is 61 points out of 100, a score that indicates flagging faith from consumers surveyed in the ability or desire of organizations to fully protect user data. The index was calculated based on a number of different metrics that measure key factors around the concept of digital trust, including how willing consumers are to share personal data with organizations and how well they think organizations protect that data.


Disruption: The True Cost of an Industrial Cyber Security Incident

The IoT threat facing industrial control systems is expected to get worse. In late 2016, Gartner estimated that there would be 8.4 billion connected things worldwide in 2017. The global research company said there could be approximately 20.5 billion web-enabled devices by 2020. An increase of this magnitude would give attackers plenty of new opportunities to leverage vulnerable IoT devices against industrial control systems. Concern over flawed IoT devices is justified. Attackers can misuse those assets to target industrial environments, disrupt critical infrastructure and jeopardize public safety. Those threats notwithstanding, many professionals don’t feel that the digital threats confronting industrial control systems are significant. Others are overconfident in their abilities to spot a threat. For instance, Tripwire found in its 2016 Breach Detection Study that 60 percent of energy professionals were unsure how long it would take automated tools to discover configuration changes in their organizations’ endpoints or for vulnerability scanning systems to generate an alert.


How to evolve architecture with a reactive programming model


At the top level, the reactive model demands that enterprise architects think in terms of steps rather than flows. Each step is a task that is performed by a worker, an application component or a pairing of the two. Steps are invoked by a message and generate one or more responses. For example, a customer number has to be validated, meaning it is confirmed to be associated with an active account. This step might be a part of a customer order, an inquiry, a shipment or a payment. Historically, enterprise architects might consider this sequence to be a part of each of the application flows cited above. In the reactive programming model, it's essential to break out and identify the steps. Only after that should architects compose them into higher-level processes. It's difficult to work with line organizations to define steps because they tend to think more in terms of workers and roles, which dictated the flow models of the past. If you're dealing with strict, top-down EA, you'd derive steps by looking at the functional components of the traditional tasks, such as answering customer inquiries.
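As a rough sketch of that idea (not code from the article), the customer-number validation step could be modeled in Java as a message-driven unit that knows nothing about the flow it participates in; the Step interface and the in-memory account set are assumptions for illustration.

```java
import java.util.List;
import java.util.Set;

// A step in the reactive model: it is invoked by a message and emits one or more
// responses, without being hard-wired into any particular application flow.
interface Step<M, R> {
    List<R> handle(M message);
}

// Hypothetical customer-number validation step; the account set stands in for a real lookup.
class ValidateCustomerNumber implements Step<String, String> {

    private final Set<String> activeAccounts = Set.of("C-1001", "C-1002");

    @Override
    public List<String> handle(String customerNumber) {
        return activeAccounts.contains(customerNumber)
                ? List.of("CUSTOMER_VALID")
                : List.of("CUSTOMER_INVALID");
    }
}
```

Order, inquiry, shipment and payment processes can then be composed from steps like this one, rather than each flow carrying its own copy of the validation logic.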


How Contract Tests Improve the Quality of Your Distributed Systems


In order to fail fast and start getting immediate feedback from our application, we do test-driven development and start with unit tests. That’s the best way to start sketching the architecture we’d like to achieve. We can test functionalities in isolation and get immediate responses from those fragments. With unit tests, it’s much easier and faster to figure out the reason for a particular bug or malfunction. Are unit tests enough? Not really, since nothing works in isolation. We need to integrate the unit-tested components and verify that they can work properly together. A good example is to assert whether a Spring context can be properly started and all required beans get registered. Let’s come back to the main problem – integration tests of the communication between a client and a server. Are we bound to use hand-written HTTP / messaging stubs and coordinate any changes with their producers? Or are there better ways to solve this problem? Let’s take a look at contract tests and how they can help us.
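As one possible shape of such a test (a sketch, not the article's own code), the consumer side can run against a stub generated from the producer's contract rather than a hand-written one, for example with Spring Cloud Contract's stub runner; the Maven coordinates, port, and endpoint below are illustrative assumptions.

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;
import org.springframework.http.ResponseEntity;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.web.client.RestTemplate;

import static org.junit.Assert.assertEquals;

// Consumer-side contract test: the stub started on port 8090 is generated from the
// producer's published contract, so the test breaks if client expectations and
// producer behaviour drift apart. Coordinates and URL are hypothetical.
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureStubRunner(
        ids = "com.example:order-service:+:stubs:8090",
        stubsMode = StubRunnerProperties.StubsMode.LOCAL)
public class OrderClientContractTest {

    @Test
    public void clientMatchesProducerContract() {
        ResponseEntity<String> response = new RestTemplate()
                .getForEntity("http://localhost:8090/orders/42", String.class);

        assertEquals(200, response.getStatusCodeValue());
    }
}
```

The producer runs the complementary half of the setup: tests generated from the same contract verify that the real service actually honours it, so neither side has to coordinate stub changes by hand.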



Quote for the day:


"If you don’t like the road you’re walking, start paving another one." -- Dolly Parton

