RPA can assist with compliance by helping create more robust and effective compliance programs. Its benefits range from a reduced volume of legal issues to better retention of employees and customers and improved business operations. It enables organisations to take greater control over their own operations and deal with compliance issues more easily if they arise. It also delivers higher levels of compliance because, once a process is established as an automated workflow with RPA, it is executed the same way every time without errors, whether the process concerns data transfer and migration, invoice processing, or purchase order issuing. This means RPA empowers companies to establish unparalleled levels of process accuracy, especially compared to the work that can be done by human employees. Consequently, businesses can maintain higher levels of compliance across all business processes.
For Richard Porter, director of technology and innovation at UK self-driving hub organization Zenzic, project CAVForth is evidence enough that the government will effectively meet its deadline for 2021. "We will have an automated bus service commercially carrying a large number of passengers," he told ZDNet. "It will be up and running by 2021, and it will be the main project through which we will be delivering on that deadline." "We interpreted the government's deadline as a commitment to prove by 2021 that the technology can actually start to deliver commercial services. Then, we can start delivering those services at a significant, visible scale." Is the smart car anticlimax simply due to misinterpretation of the government's commitments? Perhaps. But it is worth noting that the industrial strategy's vision does not include human safety operators monitoring the vehicle, and that none of CAVForth's buses will have such a degree of autonomy. Whether or not experts agree on the politics of the government's promise, there is one point of consensus across the industry: even if connected car technology looks like it will be ready to go by 2021, the UK – and other countries, for that matter – is still a long way from having all the necessary frameworks to make sure autonomous cars can be deployed safely.
The critical message to digest from the Microsoft deep dive into this threat is that not all ransomware is the same. The automated, bot-driven worm-like ransomware that spits out across the interwebs like a cyber-blunderbuss is damaging enough, for sure. However, the Microsoft threat protection intelligence team is warning about the type of hands-on, human-operated, highly targeted threat that is more commonly associated with the credential-stealing and data exfiltration antics of nation-state actors. Indeed, there is a similarity beyond the targeting; some of these ransomware attack methodologies have evolved to exfiltrate as well as encrypt data. DoppelPaymer, which recently hit the headlines when I reported how Lockheed Martin, SpaceX and Tesla had all been caught in the crossfire of one cyber-attack on a business in their supply chains, is an excellent example of the breed. More on that in a moment, though. First, let's look at the attack tactics and techniques Microsoft is alerting users to.
SLIDE doesn’t need GPUs because it takes a fundamentally different approach to deep learning. The standard “back-propagation” training technique for deep neural networks requires matrix multiplication, an ideal workload for GPUs. With SLIDE, Shrivastava, Chen and Medini turned neural network training into a search problem that could instead be solved with hash tables. This radically reduces the computational overhead for SLIDE compared to back-propagation training. For example, a top-of-the-line GPU platform like the ones Amazon, Google and others offer for cloud-based deep learning services has eight Tesla V100s and costs about $100,000, Shrivastava said. ... Deep learning networks were inspired by biology, and their central feature, artificial neurons, are small pieces of computer code that can learn to perform a specific task. A deep learning network can contain millions or even billions of artificial neurons, and working together they can learn to make human-level, expert decisions simply by studying large amounts of data.
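SLIDE itself is a C++ engine with multiple specialized hash tables and adaptive rehashing; the toy Python sketch below only illustrates the core idea the paragraph describes. Using locality-sensitive hashing (here a single SimHash-style table, with made-up dimensions), a layer can look up just the neurons whose weight vectors are likely to align with the input, instead of computing a dense matrix product over every neuron.

```python
import random

random.seed(0)

DIM = 16       # input dimensionality (illustrative)
NEURONS = 200  # neurons in the layer (illustrative)
BITS = 8       # hash signature length

# Random hyperplanes shared by all signatures (signed random projections).
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def signature(vec):
    """LSH signature: the sign pattern of the vector against each hyperplane."""
    return tuple(sum(p * v for p, v in zip(plane, vec)) >= 0 for plane in planes)

# Each neuron is a weight vector; bucket neurons by the signature of their weights.
weights = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NEURONS)]
buckets = {}
for idx, w in enumerate(weights):
    buckets.setdefault(signature(w), []).append(idx)

def active_neurons(x):
    """Probe one bucket: only neurons whose weights likely align with x."""
    return buckets.get(signature(x), [])

x = [random.gauss(0, 1) for _ in range(DIM)]
sampled = active_neurons(x)
# Only the sampled neurons would be activated and updated for this input --
# a hash-table lookup has replaced a dense pass over all 200 neurons.
print(len(sampled), "of", NEURONS, "neurons probed")
```

Because similar vectors tend to fall on the same side of each random hyperplane, the bucket lookup retrieves high-dot-product neurons with high probability, which is what turns training into the search problem the article mentions.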
The reasons why waterfall methodology is not as successful as agile seem clear. But the underlying causes are not necessarily down to reckless approaches to managing the software development project. The waterfall approach does not arrogantly dismiss early and frequent integration testing. Everyone would love to be able to detect significant risks as early as possible. The issue is the inability to integrate services and components that are not ready for testing (yet). As we progress on a project, we prefer to utilize the divide-and-conquer approach. Instead of doing the development and building sequentially (one thing at a time), we naturally prefer to save time by doing as many things as possible in parallel. So, we split the teams into smaller units that specialize in performing dedicated tasks. As those specialized teams are working, they are aware of or are discovering various dependencies. However, as Michael Nygard says in Architecture Without an End State: "The problem with dependencies is that you can't depend on them." So the project starts slowing down as it gets bogged down by various dependencies that are not available for integration testing.
Enter containers, with new challenges and opportunities regarding state retention. In the world of containers, we are taught to be stateless. In container design, including courses I’ve taught, the idea is that a container emerges as an instance, does what it’s programmed to do, and goes away without maintaining state. If it does work on data from some external source, it’s handed the data by another process or service and returns its results to another process before being removed from memory. Still, no state is maintained. The core issue is that containers, as originally conceived, simply could not save state information. There was no notion of persistent storage, so maintaining state was impossible. We were taught early on that containers were for operations that did not require state retention. Some people still argue the need for statelessness when building container-based applications, contending that it’s the cleanest approach and that thinking stateful means thinking in outmoded ways. However, that may not be acceptable to most enterprise developers who are using containers. Traditional applications are not purpose-designed and built for containers.
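The stateless pattern described above can be sketched in a few lines. In this minimal illustration (all names hypothetical), the handler holds nothing in process memory between calls; anything durable lives in an external store, represented here by a plain dict standing in for a service such as Redis or a database.

```python
# A stateless handler in the container-design sense: each invocation receives
# its input, optionally reads/writes an *external* store, and retains nothing
# in process memory between calls.

external_store = {}  # stands in for Redis/a database outside the container

def handle(event, store):
    """Process one event and return a result; no module or global state is touched."""
    count = store.get(event["user"], 0) + 1
    store[event["user"]] = count  # durable state lives only in the store
    return {"user": event["user"], "seen": count}

# Two interchangeable "container instances" can process events in any order,
# because neither instance holds state of its own.
r1 = handle({"user": "alice"}, external_store)
r2 = handle({"user": "alice"}, external_store)
print(r1, r2)
```

Because the handler is a pure function of its input plus the external store, an orchestrator can kill, restart, or scale out instances freely, which is exactly why the stateless style was taught as the default for containers.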
A key benefit of server-side processing is that it doesn't offload data processing to the client. Instead, the browser does what it's designed to do best: rendering static HTML. This removes the variability of the user's device processing power from the equation, and server-side processing performance becomes more predictable. Single page applications and responsive web apps that rely heavily on client-side rendering significantly reduce the number of round trips to the server because most of the state management and page transitions happen on the client. Unfortunately, when a page relies heavily on client-side state management, the server is no longer informed as the end user moves from page to page, clicks on buttons or otherwise interacts with the site. This means key metrics such as time on page, exit page counts and bounce rate are either impossible to collect, or are calculated incorrectly.
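The metrics point can be made concrete with a minimal sketch (function and page names are hypothetical). With server-side rendering, every navigation is a request the server sees, so a per-request log falls out for free; in an SPA, the second navigation would happen entirely in the browser and never reach this code.

```python
import time
from string import Template

PAGE = Template("<html><body><h1>$title</h1>$body</body></html>")

access_log = []  # one entry per navigation: the raw material for time-on-page,
                 # exit pages, and bounce rate

def render(path):
    """Server-side render: the server observes (and can log) every page view."""
    access_log.append({"path": path, "ts": time.time()})
    return PAGE.substitute(title=path.strip("/") or "home",
                           body="<p>static HTML</p>")

html = render("/pricing")
render("/about")

# Time on /pricing can be derived from consecutive log entries; with
# client-side routing, the /about transition would never hit the server.
```

This is why analytics for SPAs need extra client-side beacons to report transitions back to the server, rather than relying on the request log alone.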
Is this the new normal? Can there be any expectation of security and privacy when even the most stringent of data privacy regulations appear to have little effect? Companies, government agencies and consumers must change their behavior if they expect to stem this tide. They must adopt disruptive defenses to make it extremely hard for attackers to compromise data. What is a disruptive defense? It is an uncommon defense, based on existing industry standards, that raises application security to higher levels than what most applications currently use. There are six disruptive defenses that, when deployed, create significant barriers to attackers. ... Cryptography represents the last bastion of defense when protecting sensitive data. As such, cryptographic keys are the only objects standing between an attacker and a major headache for your company. While convenient, keys kept in files are protected only by passwords and are subject to the same attacks that compromise user passwords. By using cryptographic hardware -- present in all modern systems -- applications create major barriers to attacks.
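The weakness of password-protected key files can be shown in a few lines. A file-wrapped key is typically derived from, or encrypted under, a password-derived key (e.g. PBKDF2, shown here with Python's standard library), so anyone who guesses the password derives the identical key: the key is only as strong as the password. This is an illustrative sketch, not any particular product's key format.

```python
import hashlib
import os

salt = os.urandom(16)  # stored alongside the key file

def key_from_password(password: str) -> bytes:
    # PBKDF2-HMAC-SHA256: the standard way a password becomes a key-wrapping key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

legit = key_from_password("correct horse battery staple")
guessed = key_from_password("correct horse battery staple")  # attacker's dictionary hit

# Same password -> same key: the password, not the 256-bit key, is the real barrier.
assert legit == guessed

# Hardware-held keys (TPM, HSM, Secure Enclave) avoid this entirely: the key
# never leaves the device, so there is no wrapped file to brute-force offline.
```

Iteration counts slow down offline guessing but do not change the fundamental reduction from key strength to password strength, which is the argument for moving keys into cryptographic hardware.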
Picocli is a modern library and framework for building command line applications on the JVM. It supports Java, Groovy, Kotlin and Scala. It is less than 3 years old but has become quite popular, with over 500,000 downloads per month. The Groovy language uses picocli to implement its CliBuilder DSL. Picocli aims to be "the easiest way to create rich command line applications that can run on and off the JVM". It offers colored output, TAB autocompletion, subcommands, and some features unique among JVM CLI libraries, such as negatable options, repeating composite argument groups, repeating subcommands and sophisticated handling of quoted arguments. Its source code is in a single file, so it can optionally be included as source to avoid adding a dependency. Picocli prides itself on its extensive and meticulous documentation. Picocli uses reflection, so it is affected by GraalVM native image's limitations on reflection, but it offers an annotation processor that generates the configuration files addressing this limitation at compile time. ... We can make our application more user-friendly by using colors on supported platforms. This doesn’t just look good; it also reduces the cognitive load on the user: the contrast makes important information like commands, options, and parameters stand out from the surrounding text. The usage help message generated by a picocli-based application uses colors by default.
The framework is an expansion to the Distributed Disaggregated Chassis (DDC) white box architecture that AT&T submitted to the Open Compute Project last September. The expansion delivers a dynamically programmable fabric with embedded security at the edge of the network, AT&T said. Specifically, the framework embeds AI and machine learning in the network fabric to prevent attacks. "Security has always been at the forefront of AT&T's network initiatives," said Michael Satterlee, VP of network infrastructure and services for AT&T. "Traditionally, we have had to rely on centralized security platforms or co-located appliances which are either not directly in the path of the network or are not cost effective to meet the scaling requirements of a carrier. This new design embeds security on the fabric of our network edge that allows control, visibility and advanced threat protection." AT&T said the framework -- which uses an open hardware and software design to support flexible deployment models -- also represents its white box approach to network design and deployment.
Quote for the day:
"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destinies." -- Anyaele Sam Chiyson