5G is on its way to serving vertical industries, not just individual customers, who care mostly about a faster mobile network or richer smartphone functionality. When it comes to serving vertical industries, security requirements may vary from one service to another. As the Internet of Things (IoT) continues to gain momentum, more people will be able to operate networked devices remotely, which will call for stricter user-authentication methods to prevent unauthorized access to IoT devices. For example, biometric identification systems can be installed in smart homes. ... 5G networks are expected to be enhanced by the deployment of new, cost-effective IT technologies such as virtualization and Software Defined Networking (SDN)/Network Functions Virtualization (NFV). However, 5G services can be equipped with appropriate security mechanisms only if the network infrastructure is robust enough to support them. In legacy networks, the security of network elements depends to a large extent on how well their physical entities can be separated from each other.
As IoT devices grow in popularity, they create greater security vulnerabilities for consumers. Service providers and consumer electronics manufacturers can now leverage the USP standard to perform lifecycle management of connected devices and deliver upgrades that address critical security issues. Newly installed or purchased devices and virtual services can also be added easily, while customer support is improved by remote monitoring and troubleshooting of connected devices, services and home network links. Additionally, the specification enables secure control of IoT, smart home and smart networking functions, and helps map the home network to manage service quality and monitor threats. Work on the USP specification was carried out by the Broadband User Services (BUS) Work Area, led by Co-Directors John Blackford of Arris, who is also a Broadband Forum board member, and Jason Walls of QA Cafe. AT&T, Axiros, Google, Greenwave Systems, Huawei, NEC, Nokia, and Orange also participated in developing USP.
Neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units,” loosely modeling the way that neurons interact in the brain. Computational models inspired by neural connections have been studied since the 1940s and have returned to prominence as computer processing power has increased and large training data sets have been used to successfully analyze input data such as images, video, and speech. AI practitioners refer to these techniques as “deep learning,” since neural networks have many (“deep”) layers of simulated interconnected neurons. ... Deep learning’s capacity to analyze very large amounts of high-dimensional data can take existing preventive maintenance systems to a new level. By layering in additional data, such as audio and image data from other sensors—including relatively cheap ones such as microphones and cameras—neural networks can enhance and possibly replace more traditional methods. AI’s ability to predict failures and allow planned interventions can be used to reduce downtime and operating costs while improving production yield.
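To make the idea of “layers of simulated interconnected neurons” concrete, here is a minimal sketch in plain Python: a 2-2-1 feedforward network trained with backpropagation on the toy XOR problem. The architecture, learning rate, and epoch count are illustrative assumptions, not anything from the article; a real deep-learning system would use a framework and far more data.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # The classic squashing activation used in early neural networks.
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A 2-2-1 feedforward net: two inputs, one hidden layer of two
    'neural units', one output. Purely illustrative."""
    def __init__(self):
        self.w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        self.b1 = [0.0, 0.0]
        self.w2 = [random.uniform(-1, 1) for _ in range(2)]
        self.b2 = 0.0

    def forward(self, x):
        # Each hidden unit sums its weighted inputs and applies sigmoid.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.out = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)) + self.b2)
        return self.out

    def train_step(self, x, y, lr=0.5):
        # Backpropagation: push the squared error back through the layers.
        out = self.forward(x)
        d_out = (out - y) * out * (1 - out)
        for j in range(2):
            d_h = d_out * self.w2[j] * self.h[j] * (1 - self.h[j])
            self.w2[j] -= lr * d_out * self.h[j]
            for i in range(2):
                self.w1[j][i] -= lr * d_h * x[i]
            self.b1[j] -= lr * d_h
        self.b2 -= lr * d_out
        return (out - y) ** 2

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
net = TinyNet()
for _ in range(5000):
    for x, y in data:
        net.train_step(x, y)
```

XOR is the standard example because no single-layer model can learn it; the hidden layer is what makes the “deep” part earn its keep.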
Distributed agile makes it possible to escape the constraints of location and of the skills and experience available in your immediate area. Modern collaboration tools like Slack, Skype, Teams, and Hangouts have made this possible. You can work together on stories without being in the same place and ask questions without disturbing your coworkers’ flow. Trust, rapport and communication are still essential. That’s why distributed agile works best when you have at least two teammates in any given location, they meet face to face periodically, and they understand each other’s language and culture well. It’s helpful to have the whole team within a short flight and in similar time zones so you can easily collaborate physically as well as virtually when needed. That team solidarity makes all the difference when you’re trying to crack a tough problem, get business or user feedback, or just onboard new team members. Agile works best when there is fast, frequent communication through standups and other formal and informal collaboration.
Protecting data, intellectual property (IP), and finances has become an increasing priority at the board room level as fraudsters proliferate and constantly adapt to more sophisticated controls and monitoring. While most organizations are susceptible to seemingly boundless criminal ingenuity, those lacking antifraud controls are predictably worse off, suffering twice the median fraud losses of those with controls in place. However, even organizations with antifraud controls can have their investigative efforts impeded by several factors. Reliance on rules-based testing is a primary culprit. Rules-based tests typically assess and monitor fraud risks across a single data set, giving only a yes or no answer. Information silos further impede analytics-aided investigative efforts. Organizations often struggle to balance the need for locally tailored processes with the potential benefits of integrated data sharing, unintentionally creating barriers to investigative exploration as a result. The vast and growing volumes of unstructured data amassing in organizations, such as videos, images, emails, and text files, add yet another obstacle to investigative efforts.
It’s a different story, though, for workers located remotely, such as in the Asia-Pacific region. For those individuals, the experience can be very frustrating. I have first-hand experience with this. Prior to being an analyst, I spent some time as a consultant, and I remember that opening PowerPoint and Word documents out of region would often take minutes. Sometimes the process would go “not responding,” forcing me to shut down the application and start over. The most frustrating part was that there was no way to tell whether the file was still downloading or whether the process had died. I would often “open” the files, go do something else for a while, and come back hoping they had finished opening. Bandwidth speeds have increased, but so have the sizes of Office documents. This is the situation that remote City & Guilds workers were facing. For example, users in Wellington, New Zealand, saw extremely slow response times when accessing files from the corporate SharePoint drive, leading to a number of user complaints and a loss of productivity.
In the future, enterprises will be able to feed automatically generated transcripts of business conversations into virtual assistants like IBM Watson or Google Assistant, helping those machines learn how to assist workers or customers better. "If you have your VP of marketing provide an overview of what a particular product does, that video is captured, that audio is converted into text, that text becomes searchable, and, ultimately, that text can be fed into machine intelligence systems," Vonder Haar said. Vendors are continually improving their speech-to-text tools, but enterprises shouldn't wait until those platforms are perfect before experimenting with them, said Jon Arnold, principal of Toronto-based research and analysis firm J Arnold & Associates. "To me, the big takeaway is these platforms definitely provide a lot of exciting possibilities," Arnold said. "Do some harmless in-house trials, get a feel for it, because the use cases will come out of the woodwork once you start getting comfortable with it."
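The “that text becomes searchable” step in Vonder Haar’s pipeline can be sketched with a toy inverted index. The transcripts and document IDs below are invented for illustration; a real deployment would use a search engine and a speech-to-text service rather than hand-rolled indexing.

```python
from collections import defaultdict

# Hypothetical meeting transcripts, standing in for speech-to-text output.
transcripts = {
    "product-overview": "our product syncs files across devices securely",
    "q3-review": "support tickets dropped after the secure sync release",
}

# Build a simple inverted index: each word maps to the documents containing it.
index = defaultdict(set)
for doc_id, text in transcripts.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(term):
    """Return the IDs of transcripts containing the term."""
    return sorted(index.get(term.lower(), set()))
```

Once transcripts are indexed like this, the same text corpus can be handed to downstream machine-learning systems, which is exactly the progression the excerpt describes.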
Knowing where to focus your likely very limited resources is key, and this can be tackled by performing application risk assessments and threat modeling. By better understanding where your product or service may have unacceptable risk exposure, you can focus your time and resources appropriately. - Vijay Bolina, Blackhawk Network

As with any collaborative endeavor that brings together people from different backgrounds, experiences and outlooks, it’s important to acknowledge the possibility of conflict up front and deal with it head-on. Senior leaders should be involved to explain why the DevSecOps ethos is so vital to the company’s future, and should hold everyone accountable for advancing its success. - Todd DeLaughter, Automic Software, owned by CA Technologies (NASDAQ: CA)

One of the most effective ways to embed security into software is to initiate security checks on boot-up. When a user restarts their device or software, the manufacturer should run a series of boot tests to detect any changes to the software and verify that it is entirely authentic.
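The boot-test idea can be sketched as an integrity check: compare each installed file’s hash against a known-good digest recorded at build time. This is a simplified illustration only; production secure boot additionally signs the manifest and anchors trust in hardware, which is omitted here, and the function names are invented.

```python
import hashlib

def file_digest(path):
    """SHA-256 of a file, read in chunks so large binaries are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def boot_check(manifest):
    """Compare each file's current digest with the known-good digest
    recorded at build time; any mismatch means the software changed."""
    return {path: file_digest(path) == good for path, good in manifest.items()}
```

A boot sequence would refuse to start (or fall back to a recovery image) whenever `boot_check` reports any `False` entry.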
If there is any language that is a known and proven quantity for developers, it’s Java. Enterprise developers, web developers, mobile developers, and plenty of others besides have made Java ubiquitous and contributed to the massive culture of support around Java. What’s more, the Java runtime, or Java Virtual Machine (JVM), has become a software ecosystem all its own. In addition to Java, a great many other languages have leveraged the Java Virtual Machine to become powerful and valuable software development tools in their own right. Using the JVM as a runtime brings with it several benefits. The JVM has been refined over multiple decades and can yield high performance when used well. Applications written in different languages on the JVM can share libraries and operate on the same data structures, while programmers take advantage of different language features. Below we profile several of the most significant programming languages created for the JVM.
A service mesh is an infrastructure layer for service-to-service communication. It ensures reliable delivery of your messages across the entire system and is separate from the business logic of your services. Service meshes are often built from sidecar proxies. As software fragments into microservices, service meshes go from being nice-to-have to essential. With a service mesh, not only will you ensure resilient network communications, you can also instrument for observability and control, without changing the application runtime. ... Interpreted literally, the term could describe both the network of microservices that make up distributed applications and the interactions between them. However, recently the term has mostly been applied to a dedicated infrastructure layer for handling service-to-service communication, usually implemented as lightweight network proxies (sidecars) that are deployed alongside application code. The application code can treat any other service in the architecture as a single logical component running on a local port on the same host.
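The sidecar pattern can be illustrated without any real networking: the application hands every request to a local proxy object (standing in for a sidecar listening on a local port), which adds retries and telemetry before forwarding to the real service. All class, method, and metric names below are invented for the sketch; real meshes such as Istio or Linkerd do this with actual network proxies.

```python
class FlakyService:
    """Stand-in for a remote microservice that fails intermittently."""
    def __init__(self, fail_times):
        self.fail_times = fail_times
        self.calls = 0

    def handle(self, request):
        self.calls += 1
        if self.calls <= self.fail_times:
            raise ConnectionError("transient failure")
        return f"ok:{request}"

class Sidecar:
    """Toy sidecar proxy: the application sends every request here, and
    the proxy adds retries and telemetry before forwarding upstream."""
    def __init__(self, upstream, max_retries=3):
        self.upstream = upstream
        self.max_retries = max_retries
        self.metrics = {"requests": 0, "retries": 0}

    def send(self, request):
        self.metrics["requests"] += 1
        for attempt in range(self.max_retries + 1):
            try:
                return self.upstream.handle(request)
            except ConnectionError:
                if attempt == self.max_retries:
                    raise
                self.metrics["retries"] += 1

proxy = Sidecar(FlakyService(fail_times=2))
result = proxy.send("GET /orders")
```

Note that the application code never changes: resilience (retries) and observability (metrics) live entirely in the proxy layer, which is the service-mesh value proposition the excerpt describes.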
Quote for the day:
"You never will be the person you can be if pressure, tension and discipline are taken out of your life." -- Dr James G Bilkey