Change is the key concept of regression testing. The reasons for these changes usually fall into four broad categories:

- New functionality. This is the most common trigger for regression testing. The old and new code must be fully compatible, but when developers introduce new code, they rarely focus on its compatibility with the existing code; it is up to regression testing to find possible issues.
- Functionality revision. In some cases, developers revise existing functionality, discarding or editing some features. Here regression testing checks whether the feature in question was removed or edited without damage to the rest of the functionality.
- Integration. In this case, regression testing assures that the software product performs flawlessly after integration with another product.
- Bug fixes. Surprisingly, developers' efforts to patch known bugs may generate even more bugs. Bug fixing requires changing the source code, which in turn calls for re-testing and regression testing.
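The bug-fix case can be made concrete with a minimal sketch. The `slugify` function and its test below are hypothetical examples, not from the article: a regression test pins the old expected behaviour with assertions, so a bug fix cannot silently break existing functionality.

```python
# Minimal regression-test sketch (hypothetical example).
# After a bug fix, assertions pin the *existing* behaviour so the
# fix cannot silently break functionality that used to work.

def slugify(title):
    """Turn a title into a URL slug; the bug fix added strip()."""
    return title.strip().lower().replace(" ", "-")

def test_slugify_regression():
    # Existing behaviour that must not change:
    assert slugify("Hello World") == "hello-world"
    assert slugify("Already-Slugged") == "already-slugged"
    # New behaviour introduced by the bug fix:
    assert slugify("  Padded Title  ") == "padded-title"

test_slugify_regression()
print("all regression checks passed")
```

In practice such tests accumulate in a suite (e.g. under pytest) and are re-run after every change, which is exactly the re-testing the bug-fix category calls for.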
It doesn’t take much paranoia to see how this is obviously beneficial to the airlines: your type of credit card gives a rough idea of your credit score, your billing address can give an idea of your social status, and even your email address says something about you. Plus, it’s easy to spot whether you regularly fly alone. Or is your family with you? Is a certain financially-unconnected person always in the seat next to you? Are you flying to a ‘romantic’ location? Did you book a nice hotel, or are you a cheapskate? Are any of your Facebook friends or Twitter followers on the flight? What have you been looking at on the in-flight WiFi? And what events are happening in the area you bought your flight to? All this data allows airlines to develop better models of their customers, and therefore gives them ever better ways of refining their pricing models. Certain airlines are already running reverse auctions on upgrades, but this could be taken further.
The volatility in cryptocurrencies is well-known and not for the faint-hearted, especially over recent weeks. Blockchain-based payment network Havven sets out to provide the first decentralized solution to price stability. Designed to provide a practical cryptocurrency, Havven uses a dual token system to reduce price volatility. The fees from transactions within the system are used to collateralise the network, secured by blockchain and supposedly enabling the creation of an asset-backed stablecoin. Think of Tether, but without being tied to the dollar. Each transaction generates fees that are paid to holders of the collateral token, and as transaction volume grows, the value of the platform increases. Havven is a low-fee, stable payment network that wants to enable anyone anywhere to transact with anyone else. It's an interesting addition to the increasingly crowded crypto space.
Proof-of-work is the main model for cryptocurrency mining and blockchain, especially for Bitcoin. Basically, the way to guarantee the order of transactions is to slow down the system and make it computationally onerous to add a new block – i.e. it takes time and computing capacity. If two blocks are added simultaneously, then it is basically a competition to see who can perform the calculation tasks faster and add more to the chain, because the longer fork wins. The reward for adding a block is to receive some tokens (e.g. Bitcoins). SHA-256 (Secure Hash Algorithm), which came with Bitcoin, is a commonly used model, and there are targets for the hash value that force miners to perform a lot of calculations for each block to achieve the targeted value. The benefit of the current algorithm is that the results are easy to check, making it easy to see whose block is added to the chain. It would probably need quite a lot of work to develop models in which miners perform some otherwise useful computation as proof of work.
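The hash-target idea can be sketched in a few lines. This toy miner is a simplification (a string plus a nonce instead of Bitcoin's 80-byte block header, and a "leading hex zeros" target far easier than real difficulty), but it shows both halves of the argument: finding a valid nonce is expensive, while verifying it takes a single hash.

```python
import hashlib

# Toy proof-of-work sketch (illustrative only; real Bitcoin mining
# hashes a binary block header against a much harder numeric target).
def mine(block_data, difficulty=4):
    """Find a nonce whose SHA-256 digest starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1  # each failed attempt is the "wasted" work that slows block creation

nonce, digest = mine("block with transactions")

# Verification is cheap: one hash call confirms the winner's claim.
assert hashlib.sha256(f"block with transactions{nonce}".encode()).hexdigest() == digest
print(nonce, digest)
```

With four hex zeros the miner needs roughly 16^4 ≈ 65,000 attempts on average; each extra zero multiplies the expected work by 16, which is how difficulty tuning keeps block times steady as hardware gets faster.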
The reason for this fear is that deep-learning programs do their learning by rearranging their digital innards in response to patterns they spot in the data they are digesting. Specifically, they emulate the way neuroscientists think that real brains learn things, by changing within themselves the strengths of the connections between bits of computer code that are designed to behave like neurons. This means that even the designer of a neural network cannot know, once that network has been trained, exactly how it is doing what it does. Permitting such agents to run critical infrastructure or to make medical decisions therefore means trusting people’s lives to pieces of equipment whose operation no one truly understands. If, however, AI agents could somehow explain why they did what they did, trust would increase and those agents would become more useful. And if things were to go wrong, an agent’s own explanation of its actions would make the subsequent inquiry far easier. Even as they acted up, both HAL and Eddie were able to explain their actions.
A key driver behind multi-cloud adoption is increased reliability. In 2017, Amazon's Simple Storage Service went down due to a typo in a command executed during routine maintenance. In the pre-cloud era, the consequences of an error like that would have been relatively negligible. But, due to the growing dependence on public cloud infrastructure, that one typo reportedly cost upwards of $150 million in losses across many companies. A multi-cloud app -- or an app designed to run on various cloud-based infrastructures -- helps mitigate these risks; if one platform goes down, another steps in to take its place. ... Infrastructure changes should take days, not months. Regardless of the reason -- to save money, to prevent vendor lock-in or simply to run your app in a development environment without design compromises -- writing code without a specific cloud platform in mind helps ensure it will run on any server.
You will get a fully automated health checkup every time you take a bath or use the toilet at your house. Body fluids and temperature will be analyzed by sensors and the data will be forwarded to an “AI doctor” that will be able to inform you if there is something wrong with you and how to proceed. Ok, maybe this one will take a little longer than a decade. ASIMO-like droids will begin to be sold as “physical personal assistants” – and they’re not so different from what you see as the “common” robots in the movie AI; mainly to perform nursing support for the aging population. Cognitive Augmentation – As Maurice Conti explained, we are already “augmented”. Each and every one of us has a smartphone which is connected to the Internet and can easily reach out to a simple service like Google to get immediate knowledge about some unknown fact of life upon needing it.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world. Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution — and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots. While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse.
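The selection mechanism described above can be illustrated with a toy evolutionary loop. The single-number "genome" standing in for a behavioural trait, and the fitness function that simply rewards that trait, are illustrative assumptions, not an actual neuroevolution system; real systems evolve neural-network weights and score agents in simulated environments.

```python
import random

random.seed(0)

# Toy evolutionary loop: each "agent" is one number in [0, 1] standing
# in for a behavioural trait (say, cooperativeness). Fitness rewards
# higher values, so selection pushes the population toward the
# favoured behaviour across generations.
def evolve(pop_size=50, generations=30, mutation=0.05):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half (here, fitness is the trait itself).
        survivors = sorted(population, reverse=True)[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = [
            min(1.0, max(0.0, parent + random.gauss(0, mutation)))
            for parent in survivors
            for _ in range(2)
        ]
    return sum(population) / len(population)

mean_trait = evolve()
print(f"mean trait after evolution: {mean_trait:.2f}")
```

Swapping in a fitness function that scores honesty or empathy in simulation is exactly the "evolutionary advantage for kindness" idea from the excerpt; the loop structure stays the same, only the scoring changes.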
There's no one single roadblock that exists for the journey, which is ongoing. But the biggest hurdle is one of people, to have your people ready with the skills needed for this. We looked at this and asked: What are the types of skills we need resident in our team to live in this world? Do we want to hire people or leverage contractors? Then we built some programs around efforts to upskill our people; it's incumbent on us to help them learn new skills. But we had a mix of all three [new hires, contractors and upskilled staff]. I don't think it's pragmatic to think you can do one versus the other. I think you need to think all three of those. [On the other hand] just giving it to a provider saying, 'Go figure this out,' is a recipe for disaster. You have to stay very engaged.
Creating an innovative culture requires strong leaders who realise that changes in the culture have to start with themselves. We speak to many executives who think they can change the culture by creating a special team to foster innovation. This is not a "make it so" change. It requires everyone (including the executive) to behave differently in order to change the culture. Most executives and upper management are not motivated to change their behaviour, as their reward system is usually based on short-term financial measures rather than value delivered to customers and other stakeholders. Organisational risk aversion is another big barrier to innovation. We are frequently asked to provide executives with stories of how their competitors, or other organisations much like their own, have implemented innovation. No one wants to be the first to try something new or different for fear of failure.
Quote for the day:
"Leaders who won't own failures become failures." -- Orrin Woodward