This article shows how container orchestration provides an abstraction over service instances and facilitates replacing them with mock instances. On top of that, service meshes enable us to re-route traffic and inject faulty responses or delays to verify our services' resiliency. We will use a coffee shop example application that is deployed to a container orchestration and service mesh cluster. We have chosen Kubernetes and Istio as the example environment technologies. Let’s assume that we want to test the application’s behavior without considering other, external services. The application runs and is configured in the same way as in production, so that later on we can be sure it will behave in exactly the same way. Our test cases will connect to the application through its well-defined communication interfaces. External services, however, should not be part of the test scenario. In general, test cases should focus on a single object-under-test and mask out everything else. Therefore, we substitute the external services with mock servers.
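Istio attaches fault-injection rules to routes declaratively. The following is a minimal sketch, assuming a hypothetical `barista` backend service (the name and percentages are invented for illustration); it delays half of the requests by five seconds and aborts ten percent of them with HTTP 500:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: barista
spec:
  hosts:
  - barista
  http:
  - fault:
      delay:
        percentage:
          value: 50.0
        fixedDelay: 5s
      abort:
        percentage:
          value: 10.0
        httpStatus: 500
    route:
    - destination:
        host: barista
```

Substituting a mock works the same way: the route's `destination.host` is simply pointed at the mock server's service name instead of the real backend.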
Data fidelity requires the contextual evaluation of data in terms of security. This means examining the data objects within the context of the environment in which they were created. In order to gather this data, you must not only re-examine what you deem important, but you must do so within the context of the tasks that you are attempting to support. The task support piece is critical because this bounds the problem space in which you can work. If the problem space is not bounded, all of the solutions will remain brittle point solutions that continue to fail when new problems are introduced. The ways systems can fail seem endless, but the ways systems can perform correctly are limited. This characteristic is key in any analysis that requires accurate predictions. Coincidentally, this same characteristic is oftentimes overlooked when attempting to accurately predict outcomes in the cyber domain. Three disciplines can assist in creating the boundaries and gathering the contextual data required to ensure data fidelity: dependency modeling, resiliency, and reliability.
Gartner predicts that by 2020 the number of data and analytics experts in business units will grow at three times the rate of those in IT units. With that in mind, isn’t creating a culture that values data an absolute imperative? Creating a community of practice (COP) is not as simple as ‘training’ often sounds. Just as Agile methods can quickly turn ‘tragile’ or ‘fragile’ if the team isn’t bought into the approach, self-service will fail if there isn’t a data-driven culture that champions best practices. A COP uses training first to promote consumption for the business, and second to build SMEs who will champion best practices for future builds. All areas of the enterprise are involved in creating this community: technical SMEs, novice developers and business consumers all interact during technical and tool-agnostic sessions. To further growth and development across varying levels of BI maturity, smaller break-out sessions are used to connect business units with similar use cases or audiences, so they can work together on their BI solutions. By creating a community of practice, you are fostering a culture that understands BI best practices and is encouraged to hone and develop new skills.
The biggest opportunities, the survey said, were in platforms supporting manufacturing and service applications. These enterprise IoT platforms, according to data and analytics firm GlobalData, “have become important enablers across a wide swathe of enterprise and industrial operations” by helping businesses become more productive, streamline their operations, and gain incremental revenues by connecting their devices and products to IoT sensors that collect a wide variety of environmental, usage, and performance data. The platforms are designed to help businesses collect, filter, and analyze data in a variety of applications that can help organizations make data-driven business, technology, and operational decisions. But which eIoT platforms are best positioned to lead the “dynamic and highly competitive” eIoT market? To find out, U.K.-based GlobalData conducted a “comprehensive analysis … with profiles, rankings, and comparisons of 11 of the top global platforms,” including Amazon, Cisco, GE, Google, HPE, Huawei, IBM, Microsoft, Oracle, PTC, and SAP.
"We need to be able to make good cybersecurity services accessible to small and medium businesses, and consumers, and so we see a great opportunity in that regard," Ractliffe said. "Bluntly, we can see 'better faster cheaper' means of delivering cybersecurity through artificial intelligence and automation." Australia's defence scientists are also turning to AI techniques in the military's increasingly complex networked environment. "When we look at a system like a warship, it is now completely networked ... so that in itself creates a vulnerability," said Australia's Chief Defence Scientist Dr Alex Zelinsky at the Defence Science and Technology Group (DSTG). The internet is a "best effort" network. Malicious actors can slow down network traffic, or even divert it to where it can be monitored. This can happen in real time, and the challenge is how to detect that, and respond as quickly as possible. "I think that's where the AI elements come in," Zelinsky said. But one of the challenges of using AI in a protective system, or in the potential offensive systems that Zelinsky hinted that DSTG is working on, is explainability.
“We are at a crossroads in the information age as more companies are being pulled into the spotlight for failing to protect the data they hold, so with this research, we sought to understand how consumers feel about putting data in organizations’ hands and how those organizations view their duty of care to protect that data,” said Jarad Carleton, industry principal, Cybersecurity at Frost & Sullivan. “What the survey found is that there is certainly a price to pay – whether you’re a consumer or you run a business that handles consumer data – when it comes to maintaining data privacy. Respect for consumer privacy must become an ethical pillar for any business that collects user data.” Responses to the survey showed that the Digital Trust Index for 2018 is 61 points out of 100, a score that indicates flagging faith among the consumers surveyed in organizations’ ability or desire to fully protect user data. The index was calculated based on a number of different metrics that measure key factors around the concept of digital trust, including how willing consumers are to share personal data with organizations and how well they think organizations protect that data.
The IoT threat facing industrial control systems is expected to get worse. In late 2016, Gartner estimated that there would be 8.4 billion connected things worldwide in 2017. The global research company said there could be approximately 20.5 billion web-enabled devices by 2020. An increase of this magnitude would give attackers plenty of new opportunities to leverage vulnerable IoT devices against industrial control systems. Concern over flawed IoT devices is justified. Attackers can misuse those assets to target industrial environments, disrupt critical infrastructure and jeopardize public safety. Those threats notwithstanding, many professionals don’t feel that the digital threats confronting industrial control systems are significant. Others are overconfident in their abilities to spot a threat. For instance, Tripwire found in its 2016 Breach Detection Study that 60 percent of energy professionals were unsure how long it would take automated tools to discover configuration changes in their organizations’ endpoints or for vulnerability scanning systems to generate an alert.
At the top level, the reactive model demands that enterprise architects think in terms of steps rather than flows. Each step is a task that is performed by a worker, an application component or a pairing of the two. Steps are invoked by a message and generate one or more responses. For example, a customer number has to be validated, meaning it's associated with an active account. This step might be a part of a customer order, an inquiry, a shipment or a payment. Historically, enterprise architects might consider this sequence to be a part of each of the application flows cited above. In the reactive programming model, it's essential to break out and identify the steps. Only after that should architects compose them into higher-level processes. It's difficult to work with line organizations to define steps because they tend to think more in terms of workers and roles, which dictated the flow models of the past. If you're dealing with strict, top-down EA, you'd derive steps by looking at the functional components of the traditional tasks, such as answering customer inquiries.
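The step-then-compose idea above can be sketched in code: each step is a function invoked by a message that emits one or more response messages, and higher-level processes are composed from such steps. A minimal Python sketch, where the step names, the validation rule, and the account numbers are invented for illustration:

```python
from typing import Callable, Dict, List

Message = Dict[str, object]
Step = Callable[[Message], List[Message]]

def validate_customer(msg: Message) -> List[Message]:
    # Hypothetical rule: a customer number is valid if it maps to an active account.
    active_accounts = {"C-100", "C-200"}
    return [{**msg, "valid": msg["customer"] in active_accounts}]

def record_order(msg: Message) -> List[Message]:
    # The same validation step could equally precede an inquiry, shipment or payment.
    status = "accepted" if msg["valid"] else "rejected"
    return [{**msg, "status": status}]

def compose(*steps: Step) -> Step:
    # Only after the steps are identified are they composed into a process:
    # every output message of one step is fed into the next step.
    def process(msg: Message) -> List[Message]:
        messages = [msg]
        for step in steps:
            messages = [out for m in messages for out in step(m)]
        return messages
    return process

order_process = compose(validate_customer, record_order)
print(order_process({"customer": "C-100", "item": "espresso"}))
```

Because the steps are independent of any one flow, the same `validate_customer` can be reused when composing the inquiry, shipment, and payment processes.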
In order to fail fast and start getting immediate feedback from our application, we do test-driven development and start with unit tests. That’s the best way to start sketching the architecture we’d like to achieve. We can test functionalities in isolation and get immediate feedback from those fragments. With unit tests, it’s much easier and faster to figure out the reason for a particular bug or malfunction. Are unit tests enough? Not really, since nothing works in isolation. We need to integrate the unit-tested components and verify that they work properly together. A good example is to assert whether a Spring context can be started properly and all required beans are registered. Let’s come back to the main problem – integration tests of the communication between a client and a server. Are we bound to use hand-written HTTP / messaging stubs and coordinate every change with their producers? Or are there better ways to solve this problem? Let’s take a look at contract tests and how they can help us.
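To make the idea concrete, here is a minimal consumer-side sketch in Python (in a Spring stack a tool such as Spring Cloud Contract would generate the stub from the contract; the endpoint, path, and payload below are invented): the contract is shared data, a stub server is derived from it for the consumer's tests, and the producer can verify its real implementation against the same contract.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import urlopen

# The contract shared between consumer and producer: for this request,
# the producer promises this response.
CONTRACT = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response": {"status": 200, "body": {"id": 42, "drink": "espresso"}},
}

class StubHandler(BaseHTTPRequestHandler):
    # A stub server derived from the contract, replacing the real producer
    # in the consumer's integration tests.
    def do_GET(self):
        if self.path == CONTRACT["request"]["path"]:
            body = json.dumps(CONTRACT["response"]["body"]).encode()
            self.send_response(CONTRACT["response"]["status"])
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("localhost", 0), StubHandler)
Thread(target=server.serve_forever, daemon=True).start()

# Consumer-side test: the client talks to the stub exactly as it would
# talk to the real producer.
with urlopen(f"http://localhost:{server.server_port}/orders/42") as resp:
    order = json.load(resp)
assert order["drink"] == "espresso"
server.shutdown()
```

Because both sides test against the same contract, a breaking change on the producer fails the producer's verification instead of silently breaking consumers that rely on a hand-written stub.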
Quote for the day:
"If you don’t like the road you’re walking, start paving another one." -- Dolly Parton