Daily Tech Digest - June 21, 2019

Defining a Test Strategy for Continuous Delivery

Defining test cases requires a different mindset than implementing code, so it's better that the test cases are not defined by the same person who implemented the feature. Implementing good automated tests requires serious development skills. This is why, if there are people on the team who are just learning to code (for example, testers who are new to test automation), it's a good idea to make sure the team gives them the right amount of support to skill up. This should be done through pairing, code reviews, and knowledge-sharing sessions.

Remember that the entire team owns the codebase. Don't fall into the split-ownership trap, in which production code is owned by the devs and test code is owned by the testers. This hinders knowledge sharing, introduces test case duplication, and can lead to a drop in test code quality. Developers and testers are not the only ones who care about quality. Ideally, the Product Owner should define most of the acceptance criteria: she is the one with the best understanding of the problem domain and its essential complexity, so she should be a major contributor when writing acceptance criteria.
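To make this concrete, here is a minimal sketch of how a Product-Owner-supplied acceptance criterion might become an executable test. The feature (a hypothetical order discount), its rule, and all function names are illustrative assumptions, not from the article:

```python
# Hypothetical acceptance criterion from the Product Owner:
# "orders over 100 get 10% off". A developer/tester pair turns
# the criterion into executable checks the whole team owns.

def apply_discount(total: float) -> float:
    """Toy implementation: 10% off orders strictly over 100."""
    return total * 0.9 if total > 100 else total

def test_orders_over_100_get_10_percent_off():
    assert apply_discount(200.0) == 180.0

def test_small_orders_are_unchanged():
    assert apply_discount(50.0) == 50.0

test_orders_over_100_get_10_percent_off()
test_small_orders_are_unchanged()
```

Because the criterion is written in the domain's language ("orders over 100"), the Product Owner can review the test names even without reading the implementation.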

Blockchain expert Alex Tapscott sees coming crypto war as 'cataclysmic'

Digital technology has had a profound impact on virtually every aspect of our lives – except for banking. The institutions we rely on as trusted intermediaries to move, store and manage value, exchange financial assets, enable funding and investment, and insure against risk are more or less unchanged since the advent of the internet. This is changing, thanks to blockchain. Libra is only the latest in a wave of revolutionary innovations that is beginning to disrupt the old model. Bitcoin remains the most consequential innovation in at least a generation. It laid the groundwork for a new internet of value that promises to do to value industries, like financial services, what the internet did to information industries, like publishing. At first, the impact on banks will be muted. In fact, Facebook will need to rely on some existing banking infrastructure to successfully launch Libra. Over time, however, Libra could cut banks out of many aspects of the industry altogether, and I share the deep belief that Bitcoin will do the same.

The downfall of the virtual assistant (so far)

We've talked plenty about the reasons why everyone and their mother wants you to get friendly with their flavor of robot aid — and why that, in turn, has led to what I call the post-OS era, in which a device's operating system is less important than the virtual assistant threaded throughout it. It's no coincidence that Google is slowly expanding Assistant into a platform of its own, and what we're seeing now is almost certainly just the tip of the iceberg. Something we haven't discussed much, though, is a painful reality that often gets overlooked in all the glowing coverage about this-or-that new virtual assistant gizmo or feature. And for anyone who ever tries to rely on this type of talking technology — be it for on-the-go answers from your phone, on-the-fly device control in your home, or hands-free help in your office — it's a reality that's all too apparent. The truth is, for all of their progress and the many ways in which they can be handy, voice assistants still fail far too frequently to be dependable. And the more Google and other companies push their virtual assistants and expand the areas in which they operate, the more pressing the challenge to correct this problem becomes.

Introduction to Reinforcement Learning

Why are we talking about all this? What does it mean for us, other than that we need to have pets if we want to become famous psychologists? What does it all have to do with artificial intelligence? Well, these topics explore a type of learning in which a subject interacts with its environment. This is the way we as humans learn as well. When we were babies, we experimented: we performed some actions and got a response from the environment. If the response was positive (a reward), we repeated those actions; otherwise (a punishment), we stopped doing them. In this article, we will explore reinforcement learning, a type of learning inspired by this goal-directed learning from interaction. ... Another type of learning is unsupervised learning. In this type of learning, the agent is provided only with input data and needs to make some sort of sense of it. The agent is basically trying to find patterns in otherwise unstructured data. This approach is typically used for clustering and other pattern-discovery problems.
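The act-observe-reinforce loop described above can be sketched as a tiny epsilon-greedy agent. Everything here is a hypothetical illustration: the action names, reward probabilities, and hyperparameters are made up, and the hidden reward table stands in for "the environment":

```python
import random

# Minimal sketch of goal-directed learning from interaction:
# the agent repeatedly acts, observes a reward, and reinforces
# actions that paid off. REWARD_PROB plays the environment's
# role and is hidden from the agent's decision rule.

REWARD_PROB = {"lever_a": 0.2, "lever_b": 0.8}  # hypothetical environment

def run(episodes: int = 5000, epsilon: float = 0.1, seed: int = 0):
    rng = random.Random(seed)
    values = {a: 0.0 for a in REWARD_PROB}  # estimated value per action
    counts = {a: 0 for a in REWARD_PROB}
    for _ in range(episodes):
        if rng.random() < epsilon:               # explore occasionally
            action = rng.choice(list(values))
        else:                                    # exploit best estimate
            action = max(values, key=values.get)
        reward = 1.0 if rng.random() < REWARD_PROB[action] else 0.0
        counts[action] += 1
        # Incremental mean: nudge the estimate toward the observed reward.
        values[action] += (reward - values[action]) / counts[action]
    return values

print(run())
```

After enough episodes the agent's value estimates rank the rewarding action ("lever_b") above the unrewarding one, which is the reinforcement effect the excerpt describes.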

Cyberwarfare escalation just took a new and dangerous turn

In the murky world of espionage and cyberwarfare, it's never entirely clear what's going on. Does the US really have the capabilities to install malware in Russian energy systems? If so, why would the intelligence agencies be comfortable (as they seem to be) with the story being reported? Is this an attempt to warn Russia and make its government worry about malware that might not even exist? But beyond the details of this particular story, there are a number of major concerns here -- particularly around unexpected consequences and the escalation of cyberwarfare risks. It's very hard for a company (or a government) to tell the difference between hackers probing a network as part of general reconnaissance and the early stages of an attack itself. So even probing critical infrastructure networks could raise tensions. There's significant risk in planting malware inside another country's infrastructure with the aim of using it in future. The code can be discovered, which is at the very least embarrassing and, worse, could be seen as a provocation. It could even be reverse-engineered and used against the country that planted it.

Nutanix Xi IoT: An Overview For Developers

By distributing the computing part of the problem to the edge, we can execute detection-decision-action logic with limited latency. For example, immediate detection might mean a defective product never leaves the production line, much less makes it to the customer. The consequences of receiving a defective item can range from inconvenient to catastrophic. If it is an article of clothing, it might require a return. While this may have a range of negative consequences for the business, it does not compare to the consequences of having a defective part installed in an aircraft. Edge computing of data created by IoT edge devices can clearly benefit businesses, but as we mentioned earlier, as the number and diversity of devices grows, so does the workload for developers attempting to write applications for these devices. Configuring devices, networking devices, managing devices and data streams … these are all tasks that distract developers from the primary task at hand: creating the applications that use IoT data to serve the needs of your business.
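The detection-decision-action pattern mentioned above can be sketched as a simple local loop. This is a hypothetical illustration only: the threshold, the sensor readings, and the stand-in `detect` function are all assumptions, with `detect` playing the role of whatever defect-detection model actually runs on the edge node:

```python
# Illustrative detection-decision-action loop as it might run on an
# edge node: each item is inspected and acted on locally, with no
# round trip to the cloud. Threshold and readings are made up.

DEFECT_THRESHOLD = 0.8  # hypothetical defect-confidence cutoff

def detect(reading: float) -> float:
    """Stand-in for a local defect-detection model; returns a score."""
    return reading

def decide_and_act(score: float) -> str:
    # Decision and action happen at the edge, so a defective item
    # can be pulled before it ever leaves the production line.
    return "reject" if score >= DEFECT_THRESHOLD else "pass"

readings = [0.1, 0.95, 0.4, 0.85]
actions = [decide_and_act(detect(r)) for r in readings]
print(actions)  # defective items are rejected in-line
```

The point of the sketch is the shape of the loop, not the model: because every step runs on the device, latency is bounded by local compute rather than network round trips.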

Blockchain and AI combined solve problems inherent in each

Best known as the technology that powered bitcoin, blockchain offers an immutable record of every transaction, ensuring that all nodes have the same version of the truth and no records are tampered with. That makes it a relatively fail-safe and hack-proof method for storing and transferring monetary value. But to ensure this safety, the nodes have to go through huge calculations to ensure the validity of the transactions. Blockchain's mechanism for ensuring safety is also its weakness, as it limits scalability. The same is true for blockchain's immutability; every node needs to store the entire history of all transactions. The problems associated with AI are different. AI needs data to operate, but getting good data can be problematic. For instance, hackers can alter the data a machine is trained on with a data poisoning attack. Collecting data from clients is also problematic, especially in light of data privacy laws such as Europe's GDPR. Finally, most of the data needed for effective AI is owned by large organizations, such as Google and Facebook.
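The immutability property the excerpt describes comes from hash chaining: each block stores the hash of its predecessor, so altering any past record invalidates every later link. Here is a minimal sketch of that mechanism; the block structure and transactions are illustrative, and real blockchains add consensus, signatures, and proof-of-work on top:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(transactions):
    # Each block records the hash of the previous block.
    chain, prev = [], "0" * 64
    for tx in transactions:
        block = {"tx": tx, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain) -> bool:
    # Recompute every link; any tampering breaks a later prev_hash.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob:5", "bob->carol:2"])
print(is_valid(chain))             # True
chain[0]["tx"] = "alice->bob:500"  # tamper with history
print(is_valid(chain))             # False
```

This is also why "every node needs to store the entire history": validation means re-deriving the whole chain of hashes from the start.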

In an effort to ensure the UK’s resilience to attacks that exploit vulnerabilities in network-connected cameras, the SCC said the minimum requirements were an important step forward for manufacturers, installers and users alike. The work has been led by Mike Gillespie, cyber security advisor to the SCC and managing director of information security and physical security consultancy Advent IM, along with Buzz Coates, business development manager at CCTV distributor Norbain. The standard was developed in consultation with surveillance camera manufacturers Axis, Bosch, Hanwha, Hikvision and Milestone Systems. Speaking ahead of the official launch, Gillespie said that if a device came out of the box in a secure configuration, there was a good chance it would be installed in a secure configuration. “Encouraging manufacturers to ensure they ship their devices in this secure state is the key objective of these minimum requirements for manufacturers,” he said. Manufacturers benefit, said Gillespie, by being able to demonstrate that they take cyber security seriously and that their equipment is designed and built to be resilient.

3 top soft skills needed by today’s data scientists

Data scientists who can understand the business context, plus the technical side of the equation, will be invaluable. This kind of “bilingual” talent can turn data streams into a predictive model, and then translate that model into a working reality, such as for financial forecasting. Core skills in storytelling, problem solving, agile development, and design thinking are critical to operating within different business contexts as well. The key is to develop T-shaped skillsets, as opposed to being I-shaped. While I-shaped people have a deep, narrow understanding of one area (like data engineering or data science), T-shaped people have both in-depth knowledge in one area and a breadth of understanding of several others. It is easier for T-shaped people to apply their data expertise across a broad range of use cases and industries. ... The communication side will be especially important as data expertise gets pulled into interdisciplinary use cases. Data scientists will have to be able to talk to people with different backgrounds. This goes back to the need to be more T-shaped to effectively translate highly technical ideas to different business contexts.

Using OpenAPI to Build Smart APIs for Dumb Machines

OpenAPI isn’t the only spec for describing APIs, but it is the one that seems to be gaining prominence. It started life as Swagger and was rebranded OpenAPI with its donation to the OpenAPI Initiative. RAML and API Blueprint have their own adherents. Other folks like AWS, Google, and Palantir use their own API specs because they predate those other standards, had different requirements, or found even opinionated specs like OpenAPI insufficiently opinionated. I’ll focus on OpenAPI here because its surging popularity has spawned tons of tooling. The act of describing an API in OpenAPI is the first step in the pedagogical process. Yes, documentation for humans to read is one obvious output, but OpenAPI also lets us educate machines about the use of our APIs to simplify things further for human consumers and to operate autonomously. As we put more and more information into OpenAPI, we can start to shift the burden from humans to the machines and tools they use. With so many APIs and so much for software developers to know, we’ve become aggressively lazy by necessity. APIs are a product; reducing friction for developers is a big deal.
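To show the kind of machine-readable description the excerpt means, here is a minimal OpenAPI 3.0 document built as a plain Python dict. The API, its path, and its schema are hypothetical; the same document could be fed to doc renderers, client generators, or request validators:

```python
import json

# Minimal OpenAPI 3.0 description of a hypothetical /widgets endpoint.
# The structure follows the OpenAPI 3.0 spec: info, paths, operations,
# responses, and a media-type schema for the response body.

spec = {
    "openapi": "3.0.0",
    "info": {"title": "Widget API", "version": "1.0.0"},
    "paths": {
        "/widgets": {
            "get": {
                "summary": "List widgets",
                "responses": {
                    "200": {
                        "description": "A JSON array of widget names",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"type": "string"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

# Serialize for tooling that consumes JSON (many tools also accept YAML).
print(json.dumps(spec, indent=2))
```

Even this tiny document is enough for a tool to know the endpoint exists, what verb it accepts, and what shape its success response takes, which is exactly the "educating machines" step the article describes.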

Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.
