While conversational AI tools such as chatbots are now common, voice interfaces have been slower to arrive, according to Hayley Sutherland, a senior research analyst at IDC. But advances in the underlying natural language processing (NLP) technology have made voice-based assistants accurate enough to support regular interactions. “We've seen huge leaps in natural language processing, even in the last year,” she said. That’s important because it means the assistants are less likely to misunderstand commands, which can quickly annoy users. “If I'm working with a voice assistant and it works 80% of the time, that remaining 20% is a lot in my day-to-day job; that can add up to a lot,” she said. Although advances in NLP usually come from big tech companies like Microsoft, Amazon and Google, with deep pockets for research and development, the availability of voice APIs gives more companies access to the technology. And those firms can create AI assistants better tailored to specific workplace scenarios.
The Visitor pattern separates data from the operations to be performed on it. In this case, a dialog form needs to update or populate the fields of an object. The Visitor achieves this with a class of overloaded methods, each accepting an argument of a specific object type (class). Thus, when a "Visit" method is invoked with a specific argument type, the correct overload is automatically chosen. These Visit methods are responsible for creating the correct dialog window, assigning the argument object as the DataContext of the dialog, and showing the dialog (we're assuming a modal dialog here). The Visitor is injected into the ViewModel (VM) objects in the MainWindow's Loaded event handler by property injection (the ViewModel objects have a public Visitor property to hold the reference to the Visitor). I believe this is simpler than using a mediator, since there is no need for events to pass between the layers. This example does not require a Dependency Injection container, although one could be applied with little difficulty. It does not reference Prism Behaviors or other external frameworks, and it can be added to an existing code base with no disruption.
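The dispatch mechanism described above can be sketched in Python. Note this is only an illustrative analogue of what is clearly a C#/WPF design: functools.singledispatchmethod stands in for C# method overloading, and the ViewModel classes and dialog behavior below are hypothetical stubs that return a string instead of showing a modal window.

```python
from dataclasses import dataclass
from functools import singledispatchmethod

# Hypothetical model/ViewModel classes that the dialogs would edit.
@dataclass
class CustomerVM:
    name: str = ""

@dataclass
class OrderVM:
    order_id: int = 0

class DialogVisitor:
    """Emulates the overloaded Visit methods: dispatching on the
    argument's type picks the correct dialog automatically."""

    @singledispatchmethod
    def visit(self, vm):
        raise TypeError(f"No dialog registered for {type(vm).__name__}")

    @visit.register
    def _(self, vm: CustomerVM):
        # In the real design: create the customer dialog, assign vm as
        # its DataContext, and show it modally. Here we just report it.
        return f"CustomerDialog shown for {vm.name}"

    @visit.register
    def _(self, vm: OrderVM):
        return f"OrderDialog shown for order {vm.order_id}"

# Property injection: the ViewModel exposes a public reference
# that is filled in later (e.g. in a window-loaded handler).
class MainViewModel:
    def __init__(self):
        self.visitor = None

vm = MainViewModel()
vm.visitor = DialogVisitor()
print(vm.visitor.visit(CustomerVM(name="Ada")))  # prints: CustomerDialog shown for Ada
```

Because dispatch happens on the argument's type, adding support for a new object type means registering one more Visit method, with no changes to the ViewModels that call it.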
Unfortunately, the Financial Accounting Standards Board and other regulatory bodies have not yet addressed the implications of these technologies. Blockchain is widely associated with Bitcoin and cryptocurrencies, but it will transform auditing, because blockchain is by its nature an ecosystem of incredibly secure transactions. Even now, startups and large financial services companies are developing solutions to make over old-school industries, like gas and oil, changing when and how their accounting gets done. If we step back and see what is going on with blockchain, AI, and machine learning, it is quite probable that accounting will be dramatically altered in our lifetimes. As it stands, these innovations remain ahead of the standard-setting process. When and how these bodies will address these advancements remains to be seen. But we are already seeing technology disrupt allied fields, like logistics and supply chain management. Our standards bodies have no choice but to get ahead of the story—before the story writes its own plot.
One is that they continue to need lower-cost, easier-to-manage and highly scalable solutions. That's why people are shifting to cloud and looking at either public or hybrid/private. Related to that point, I think we're seeing a Cloud 2.0, where a lot of companies now realize the public cloud is not the be-all, end-all and it's not going to solve all their problems. They look at a combination of cloud-native technologies and use the different tools available wisely. I think there's the broad brush of people needing scalable solutions and lower costs -- and that will probably always be there -- but the undertone is people getting smarter about private and hybrid. Point number two is around data protection. We're now seeing more and more customers worried about ransomware. They're keeping backups for longer and longer, and there is a strong need for write-once-compliant storage.
In one sense, a typical container does not need to have its running state backed up; it is not unique enough to warrant such an operation. Furthermore, most containers are stateless – no data is stored in the container. Each is just another running instance of a given container image that is already saved via some other operation. Many container advocates are quick to point out that high availability is built into every part of the container infrastructure: Kubernetes is always run in a cluster, and containers are spawned and killed off as needed. Unfortunately, many confuse this high availability with the ability to recover from a disaster. To change the conversation, ask someone how they would replicate their entire Kubernetes and Docker environment should something take out their entire cluster, container nodes and associated persistent storage. Yes, there are reasons Kubernetes, Docker and associated applications need to be backed up. First, to recover from disasters: what do you do if the worst happens? Second, to replicate the environment, such as when moving from a test/dev environment to production, or from production to staging before an upgrade.
Although not a new concept, we are now looking at the opportunity for those who have private servers with excess capacity to rent that capacity to a cloud service provider, which can dole out those compute and storage systems on demand to anyone who needs them. If you’re thinking ride-sharing for servers, you’re not far off. In this scenario the cloud service provider is really just a broker sitting between those needing cloud services and those who have servers that can be shared. You may be leveraging servers with excess capacity in Las Vegas on Monday and perhaps servers in London on Tuesday. You don’t care, since you’re abstracted away from the physical servers, never knowing their location or true ownership. Peer-to-peer networks are nothing new, and in this use case there is a clear benefit for both parties. Those with excess server capacity make money by renting it out, creating a revenue stream from capacity that would normally go unused. Those consuming the service would likely pay less than they would for most public cloud services, at least it would seem, while still living up to SLAs preset by the consumers.
"DApps will pool resources across numerous machines globally," said Juniper senior analyst Lauren Foye. "The results are applications which do not belong to a sole entity, [but] rather are community-driven." Bitcoin was arguably the first dApp, enabling anyone in the world to download a bit of open-source code, join a blockchain network and verify transactions using a “mining” algorithm, thereby generating digital currency (cryptocurrency) as a reward. Like a RAIDed storage array, if one of the computers (or nodes) running the dApp software goes down, another node instantaneously resumes the task. Because smart contracts, or self-executing business automation software, can interact with dApps, they're able to remove administrative overhead, making them one of the most attractive features associated with blockchain. While blockchain acts as an immutable electronic ledger, confirming that transactions have taken place, smart contracts execute predetermined conditions; think of a smart contract as a computer executing "if/then," or conditional, programming.
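The "if/then" behavior of a smart contract can be illustrated with a toy sketch, here in plain Python rather than an actual on-chain language. The escrow scenario, the class, and all names below are hypothetical, chosen only to show conditional execution of a predetermined agreement.

```python
# Toy illustration of a smart contract's "if/then" logic: a hypothetical
# escrow that releases payment only once delivery has been confirmed.
class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.paid = False

    def confirm_delivery(self):
        # On a real blockchain this condition would be verified by the
        # network's consensus, not asserted by a single party.
        self.delivered = True

    def execute(self):
        # The predetermined condition: IF delivered, THEN pay the seller.
        if self.delivered and not self.paid:
            self.paid = True
            return f"Released {self.amount} to {self.seller}"
        return "Conditions not met; no funds released"

contract = EscrowContract("alice", "bob", 100)
print(contract.execute())   # prints: Conditions not met; no funds released
contract.confirm_delivery()
print(contract.execute())   # prints: Released 100 to bob
```

The point of the sketch is that no administrator has to check the condition and trigger the payment; the contract's own code does both, which is where the removed administrative overhead comes from.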
Open source is increasingly influencing the analytics space, which has evolved beyond things like Hadoop and MapReduce – very text-oriented and big-data-lake-centric – to an understanding that the world is shifting to what is termed small data sprawl. The proliferation of IoT, remote sites and offices means that organisations want to process or analyse data remotely, while enriching that data with information from the centre. With this change there have been many more vertical offerings that integrate the analytics with the storage itself. Manley explained: “Somebody doesn’t just want to store data for IoT. The point of IoT is that I’m processing and analysing, and we’re seeing a lot more integrated pipelines, of which storage becomes a component. And open source is by far the most popular way, whether you look at Spark or Elasticsearch, because they can evolve quickly and people can adjust them to meet the specific needs of their particular industry.”
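The "process remotely, enrich from the centre" pattern can be sketched in a few lines of Python. The sensor names, sites and fields below are invented for illustration; a real pipeline would do this join in a streaming engine such as Spark rather than in plain dictionaries.

```python
# Hypothetical sketch of the edge-enrichment pipeline described above:
# small-data readings at a remote site are joined with reference data
# replicated from a central catalogue.
central_metadata = {
    "sensor-1": {"site": "refinery-a", "units": "bar"},
    "sensor-2": {"site": "refinery-b", "units": "bar"},
}

edge_readings = [
    {"sensor": "sensor-1", "value": 3.2},
    {"sensor": "sensor-2", "value": 2.9},
]

def enrich(readings, metadata):
    """Attach central reference data to each remote reading."""
    return [{**r, **metadata.get(r["sensor"], {})} for r in readings]

for row in enrich(edge_readings, central_metadata):
    print(row)
```

Storage sits underneath both sides of this join, which is the sense in which it "becomes a component" of the pipeline rather than the destination.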
CSS architecture is a complex subject that is often overlooked by developers, as it's possible to encapsulate CSS per component and avoid many of the common pitfalls related to CSS. While this 'workaround' can make developers' lives simpler, it does so at the cost of reusability and extensibility. When a developer defines a CSS class, it automatically affects the global scope, modifying all related elements (and their children). This works well for simple applications, where developers can predict the results, but it can quickly become a problem as the size of the application and the team grows and unintended results start to happen. Initially, this problem was solved by Block Element Modifier (BEM), a methodology and set of naming conventions that helped avoid clashes and gave developers strong indications of what each class does, e.g. form__submit--disabled tells us we are within a form, handling a submit button, and applying the disabled state.
Quote for the day:
"No organization should be allowed near disaster unless they are willing to cooperate with some level of established leadership." -- Irwin Redlener