What makes a service mesh unique is that it is built to accommodate the distinctive nature of distributed microservice environments. In a large-scale application built from microservices, there might be multiple instances of any given service, running across various local or cloud servers. All of these moving parts make it difficult for individual microservices to find the other services they need to communicate with. A service mesh automatically takes care of discovering and connecting services on a moment-to-moment basis so that neither human developers nor individual microservices have to. Think of a service mesh as the equivalent of software-defined networking (SDN) for Layer 7 of the OSI networking model. Just as SDN creates an abstraction layer so network admins don’t have to deal with physical network connections, a service mesh decouples the underlying infrastructure of the application from the abstract architecture that you interact with. The idea of a service mesh arose organically as developers began grappling with the problems of truly enormous distributed architectures. Linkerd, the first project in this area, was born as an offshoot of an internal project at Twitter.
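To make the discovery problem concrete, here is a minimal sketch of the bookkeeping a mesh automates. It is a toy in-memory registry, not any real mesh's API; the class and method names are invented for illustration, and real meshes add health checks, TLS, retries and dynamic updates on top.

```python
import itertools

class ServiceRegistry:
    """Toy registry: instances announce themselves, callers resolve a
    service name to one live instance. A mesh does this continuously
    so neither developers nor services hard-code addresses."""

    def __init__(self):
        self._instances = {}  # service name -> list of "host:port" strings
        self._cursors = {}    # service name -> round-robin iterator

    def register(self, service, address):
        # A new instance announces itself under its service name.
        self._instances.setdefault(service, []).append(address)
        self._cursors[service] = itertools.cycle(self._instances[service])

    def resolve(self, service):
        # A caller asks "where is this service right now?" and gets
        # one instance, rotating through them round-robin.
        if service not in self._instances:
            raise LookupError(f"no instances of {service!r} registered")
        return next(self._cursors[service])

registry = ServiceRegistry()
registry.register("payments", "10.0.0.5:8443")
registry.register("payments", "10.0.1.9:8443")
print(registry.resolve("payments"))  # alternates between the two instances
```

In a real mesh this lookup happens transparently in a sidecar proxy next to each service, which is what lets the application code stay unaware of where its peers are running.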
With conventional automation technology, testers have to invest considerable time in learning how to script each test scenario. With autonomous software testing, by contrast, testers can spend more time training tools and contributing to QA management initiatives, said Theresa Lanowitz, co-founder and analyst at Voke. Autonomous testing frees testers to spend more time, for instance, helping the CIO or CEO tackle critical objectives around bringing AI into the organization to benefit the customer. And as autonomous tools mature, their capabilities will enable testers to spend more time exploring the nonfunctional requirements of a project, such as performance and security. Once these tools fulfill their promise and have a proven track record, many software quality engineers will ditch test tools with scripted interfaces. "Capabilities [of traditional test tools] are going to be so far eclipsed by what these autonomous testing tools can do that you will leave that tool behind," she said.
Momenta won’t make cars or hardware, Cao assured. Rather, it gives cars autonomous features by building their brains: their deep-learning capabilities. It’s in effect a so-called Tier 2 supplier, akin to Intel’s Mobileye, that sells to the Tier 1 suppliers who actually produce automotive parts. It also sells directly to original equipment manufacturers that design cars, order parts from suppliers and assemble the final product. In both cases, Momenta works with clients to specify the final piece of software. Momenta believes this asset-light approach will allow it to develop state-of-the-art driving tech. By selling software to car and parts makers, it not only brings in income but also sources mountains of data, including how and when humans intervene, to train its algorithms at relatively low cost. The company declined to share who its clients are but said they include top carmakers and Tier 1 suppliers in China and overseas. There won’t be many of them, because a “partnership” in the auto sector demands deep, resource-intensive collaboration; here, fewer partners are believed to be better. What we do know is that Momenta counts Daimler AG as a backer. It is also the first Chinese startup the Mercedes-Benz parent has ever invested in, though Cao would not disclose whether Daimler is a client.
While different nations often see matters of national policy in very different terms, there are times of nearly universal agreement. That’s the case today when it comes to commitments to fuel the advancement of artificial intelligence. Governments around the world agree on the importance of investing in AI initiatives. This point is underscored in a recent report by the McKinsey Global Institute. The briefing notes that China and the United States are the leaders in AI-related research activities and investments, followed by a second group of countries that includes Germany, Japan, Canada and the United Kingdom. Other countries on the path to AI readiness include Belgium, Singapore, South Korea, Sweden, Brazil, India, Italy and Malaysia. There are many reasons for the focus on AI. One of them is economic growth. McKinsey says its survey data suggests AI adoption could raise global gross domestic product (GDP) by as much as $13 trillion by 2030, which equates to about 1.2 percent additional GDP growth per year. Numbers like these suggest that nations have a lot to gain from AI investments.
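The two McKinsey figures are consistent with simple compounding. As a rough sketch, assuming a baseline global GDP of about $86 trillion (an assumed figure, not from the article), 1.2 percent of extra annual growth compounded over the 12 years to 2030 yields additional output on the order of $13 trillion:

```python
baseline_gdp = 86e12   # assumed baseline global GDP in dollars (illustrative)
extra_growth = 0.012   # 1.2% additional annual growth attributed to AI
years = 12             # 2018 -> 2030

# Extra output in 2030 relative to a baseline with no AI-driven growth.
uplift = baseline_gdp * ((1 + extra_growth) ** years - 1)
print(f"${uplift / 1e12:.1f} trillion")  # roughly $13 trillion
```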
The Scottish company does this by using non-recyclable plastic waste to extend the bitumen used in road production. Not only does this give a new lease of life to plastic that would otherwise have been incinerated or ended up in landfill, it also reduces the amount of fossil fuels needed for road production and results in a higher-quality finished product. MacRebur’s roads can be found all over the UK, where the Department for Transport recently allocated £1.6m to extend the use of plastic roads in Cumbria. The company has also begun operations in various countries around the world. ... North America-based Recleim styles itself as a next-generation recycling company. In partnership with German recycling technology company Adelmann Umwelt GmbH, it offers closed-loop recycling to businesses and organisations. This involves collecting materials to be recycled, processing them, and repurposing them to be used again. Recleim’s proprietary system includes a logistics operation to recover items from businesses and take them to its de-manufacturing plant, where they are cleanly and safely taken apart. This process recovers 95 per cent of components by weight from items such as refrigerators, other large appliances and electronics.
Wi-Fi 6 and 5G compete with each other in specific enterprise situations that depend on location, application and device type, so IT managers should carefully evaluate their current and emerging connectivity requirements. Wi-Fi will continue to dominate indoor environments, while cellular wins for broad outdoor coverage. Some of the overlap cases occur in stadiums, hospitality venues and other large event spaces with many users competing for bandwidth. Government applications, including aspects of smart cities, can suit both Wi-Fi and cellular. Health care facilities have many distributed medical devices and users that need connectivity, and large distributed manufacturing environments share similar characteristics. Emerging IoT deployments are perhaps the most interesting “competitive” environment, with many overlapping use cases. While the wireless technologies enabling them are converging, Wi-Fi 6 and 5G are fundamentally distinct networks, both of which have a role in enterprise connectivity. Enterprise IT leaders should focus on how Wi-Fi and cellular can complement each other, with Wi-Fi continuing as the in-building technology to connect PCs and laptops, offload phone and tablet data, and serve some IoT connectivity.
The most critical piece, he says, is that today most AI systems require a substantial investment in data science, with heavyweight data scientists and engineers needed to build the systems and deploy them for enterprise use. “If you want to extend AI to a wide swath of users, what we need to get to over time — and it’s not going to happen overnight — is some semi-autonomous tools,” Gold explains. “The equivalent of a word processor or PowerPoint that brings it down to the user level, instead of having to go out and buy 5,000 data scientists that you can’t get anyway.” In other words, a tool in which you can define a problem you want to solve, or want information on, and which then goes out and builds the AI system, the learning system and the inference system that will let you do that. ... All the major chip players are adding an NNP (neural network processor) to their chips, Gold says, and the next question becomes how best to do that. There are a number of arguments about that as well: some companies are focusing on the training side and others on the inference side, which are two different ways of optimizing the architecture. Ultimately, he says, you’ll need both.
The problem, Ormandy writes, starts within SymCrypt, which is the primary library for implementing symmetric cryptographic algorithms in Windows 8 and newer operating systems. These algorithms use a single secret key for both encryption and decryption. The bug essentially creates a never-ending loop within this cryptographic library, Ormandy says. "There's a bug in the SymCrypt multi-precision arithmetic routines that can cause an infinite loop when calculating the modular inverse on specific bit patterns with bcryptprimitives!SymCryptFdefModInvGeneric," Ormandy writes. As part of his research, Ormandy constructed a special X.509 certificate - a recognized public key infrastructure standard - that triggers the bug by preventing the system from completing the verification process. Because the certificate can be embedded in a secure message or protocol, it can bypass security measures. If one system triggers the flaw, it can go on to affect an entire fleet of Windows devices, he writes. In addition to a denial-of-service attack, the flaw could also force Windows devices to reboot, the researcher says.
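The routine at the heart of the bug computes a modular inverse: the value x such that a·x ≡ 1 (mod m), an operation certificate verification relies on. A textbook implementation via the extended Euclidean algorithm always terminates, because the remainder strictly decreases on every iteration; SymCrypt's optimized multi-precision variant evidently lost that guarantee on specific bit patterns. As a sketch of the operation itself (this is the standard algorithm, not SymCrypt's code):

```python
def mod_inverse(a, m):
    """Return x with (a * x) % m == 1, via the iterative extended
    Euclidean algorithm. Terminates because the remainder r strictly
    decreases toward zero on every iteration."""
    old_r, r = a % m, m   # remainders
    old_s, s = 1, 0       # coefficients of a in old_r = old_s*a + t*m
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError(f"{a} has no inverse modulo {m} (gcd != 1)")
    return old_s % m

print(mod_inverse(3, 11))  # 4, since 3 * 4 = 12 ≡ 1 (mod 11)
```

The certificate Ormandy built supplied inputs to SymCrypt's equivalent of this loop that never reached the exit condition, so a single malformed certificate could hang the verifying machine.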
AIOps can augment enterprise IT ops teams as they cope with ever-larger numbers of increasingly complex IT infrastructure components. But AIOps tools are only as good as the data they're given. The earliest days of AIOps stoked fears that advanced data analytics algorithms attached to automated machines would replace human IT experts, but those fears remain far-fetched at best. Early adopters say AIOps tools are far from a magic bullet and that IT ops jobs are safe, even as organizations use artificial intelligence and machine learning tools to sort through infrastructure monitoring data, reduce alert noise and, in some cases, investigate or resolve the causes of incidents. The effectiveness of AIOps software also remains limited by how solidly human IT pros build the data pipelines that feed it and how well human operators in IT and business interpret its results. "In many situations, we help customers realize they don't actually have the right data in place," said Amer Deeba, COO of Moogsoft, an AIOps software vendor in San Francisco.
"Through their cybercrime laws, the GCC countries have sought to get a stronger grip on social media and to stymie the potential for spillover via online platforms of political unrest from other Arab countries," Hakmeh notes. Other countries are following suit. The Palestinian Authority blocked several news websites in June 2017, a month before a new cybercrime law was enacted. Meanwhile, in Egypt, a 2018 law classified social-media accounts with more than 5,000 followers as media outlets. "Under the new law, social-media users with a large following can be subject to prosecution for spreading false news or inciting crime," Arab News explained. "The law prohibits the establishment of websites without first obtaining a license from the Supreme Council for the Administration of the Media, a government body with authority to legally suspend or block websites in violation of the country's strict laws, and penalize editors with hefty fines."
Quote for the day:
"Always and never are two words you should always remember never to use." -- Wendell Johnson