This year, in response to the new imperatives brought by the COVID-19 pandemic, the rally has focused on evaluating the ability of AI systems to reliably collect and match images of individuals wearing an array of different face masks, with a view to eventually deploying the technology in international airports around the country. ... The results, however, varied greatly from one system to another: for example, the best-performing technology correctly identified individuals 96% of the time, even when they were wearing a mask. The worst-performing system tested during the rally, for its part, identified only 4% of masked individuals. "This isn't a perfect 100% solution," said Arun Vemury, director of S&T's Biometric and Identity Technology Center, "but it may reduce risks for many travelers, as well as the frontline staff working in airports, who no longer have to ask all travelers to remove masks." Facial recognition is currently used in a select number of US airports as part of a program called Simplified Arrival, deployed by Customs and Border Protection (CBP). Under Simplified Arrival, the identity of international travelers entering and exiting the country can be verified at inspection points in the airport with the snap of a picture, rather than by presenting a travel document.
The first thing you need to think about before adopting the multicloud approach is whether you are actually ready for it. There are a number of things you need to have in place. For example, one non-negotiable element of your IT team is a DevOps culture. By committing to agile processes and cross-team collaboration, you can make sure you’re able to continuously make any necessary changes or updates to your product while the transition is underway. A DevOps culture will also enable teams to quickly adopt the cutting-edge technologies that multicloud setups rely on, such as Spinnaker or Kubernetes. Next, you need to understand how to achieve high availability, resilience, and zero-downtime strategies within your existing architecture. In addition, any legacy architecture will need to be modernized before launching a multicloud strategy. This will allow you to make use of modern cloud features like microservices and containerization, as well as achieve interoperability between clouds. For instance, applications that need to be split into multiple parts to run in separate clouds must be modernized, as legacy architectures are unlikely to support this.
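As a small illustration of what that modernization can look like in practice, here is a minimal, hypothetical Python sketch of a twelve-factor-style service that keeps provider-specific details out of the code and reads them from the environment, so the same container image can run unchanged on any cloud. The variable names and defaults are assumptions for illustration, not anything prescribed by the article.

```python
import os

# Hypothetical sketch: provider-specific settings come from the environment,
# not from code, so the same container can be deployed to any cloud.
DB_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/app")      # assumed name
OBJECT_STORE = os.environ.get("OBJECT_STORE_URL", "http://localhost:9000") # assumed name
REGION = os.environ.get("CLOUD_REGION", "unspecified")

def health() -> dict:
    """Liveness/readiness payload that a load balancer in any cloud can probe."""
    return {"status": "ok", "region": REGION}

if __name__ == "__main__":
    print(health())
```

Once configuration is externalized this way, splitting an application across clouds becomes largely a deployment decision rather than a code change.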
The roadmap document is partly based on 450 responses to a call, in October 2019, for input from what is described as an AI “ecosystem” of individuals interested in artificial intelligence. The introduction states “we need to ‘double down’ on recent investment the UK has made in AI [and] we must look to the horizon and be adaptable to disruption”. It says the council stands ready “to convene workshops with the wider ecosystem to capture more detail and work together to ensure that a future National AI Strategy enables the whole of the UK to flourish”. The Alan Turing Institute has a central place in the document. The council advises the government to “provide assured long-term public sector funding that will give the Turing Institute and others the confidence to plan and invest in strategic leadership for the UK in AI research, development and innovation”. On the skills front, the council advocates a decade-long programme of “research fellowships, AI-relevant PhDs across disciplines, industry-led masters and level 7 apprenticeships”. And it suggests tracking diversity data to “invest and ensure that underrepresented groups are given equal opportunity and included in all programmes”.
The Microsoft 365 Advanced Audit solution makes a range of data available that is focused on what will be useful for responding to crucial events and forensic investigations. It retains this data for one year (rather than the standard 90-day retention), with an option to extend the retention to ten years. This keeps the audit logs available for long-running investigations and for responding to regulatory and legal obligations. These crucial events can help you investigate possible breaches and determine the scope of compromise. ... In an account takeover, an attacker uses a compromised user account to gain access and operate as that user. The attacker may or may not have intended to access the user’s email, and even if they did, they may or may not have had the chance to do so. This is especially true if the defense-in-depth and situational awareness discussed above are in place: the attack may have been detected, the password changed, the account locked, and more. If the user’s email contains confidential information about customers or other stakeholders, we need to know whether that email was accessed. We need to separate legitimate access by the mailbox owner during the account takeover from access by the attacker.
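As a rough illustration of that last step, the sketch below assumes the relevant audit records have already been exported as plain Python dicts; the field names and IP addresses are hypothetical, not the exact Microsoft 365 audit schema. It splits mail-access events inside the compromise window into those from the owner's known locations and those that warrant attacker-focused review.

```python
from datetime import datetime

# Hypothetical known-good addresses for the mailbox owner (e.g. office, VPN).
OWNER_IPS = {"203.0.113.10", "203.0.113.11"}

def triage(records, mailbox, window_start, window_end):
    """Split exported mail-access records into owner vs. suspect buckets.

    `records` is a list of dicts with assumed keys: mailbox, timestamp, client_ip.
    """
    owner, suspect = [], []
    for rec in records:
        if rec["mailbox"] != mailbox:
            continue
        when = datetime.fromisoformat(rec["timestamp"])
        if not (window_start <= when <= window_end):
            continue
        (owner if rec["client_ip"] in OWNER_IPS else suspect).append(rec)
    return owner, suspect
```

In a real investigation the same idea would be extended with session IDs, user agents and geolocation rather than IP addresses alone.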
To ensure successful data democratization and extract the maximum value from an organization’s investment in data and analytics, organizations can no longer ignore data literacy. We wouldn’t let people drive cars without passing a test. So, let’s exercise some caution to ensure employees have the necessary training and understanding of data, analysis, and foundational statistical knowledge before reaching conclusions from their data. Building data literacy within an organization will require resources and a structure for ongoing training and development. Upskilling employees and keeping their knowledge current should be at the top of the agenda if businesses want to remain competitive. This is critical, especially when you want to use an employee’s analysis and the resulting insights as the basis for making business decisions. ... We often read and hear that artificial intelligence (AI) and machine learning will deliver significant advances in automation and replace jobs in many industries. And while this is certainly a possibility, there are still humans behind the algorithms. And humans carry biases – we all do – so there’s a chance that biases are introduced into the algorithms we are exposed to on a daily basis.
With the rise of AI, conventional electronic computing approaches are gradually reaching their performance limits and lagging behind the rapid growth of data available for processing. Among the various approaches to AI, artificial neural networks are widely used because of their excellent performance. These networks perform complex mathematical operations using many layers of interconnected artificial neurons. The fundamental operation that consumes most of the computational resources is matrix–vector multiplication. Various efforts have been made to design and implement specific electronic computing systems to accelerate processing in artificial neural networks. In particular, considerable success has been achieved using custom chips known as application-specific integrated circuits, brain-inspired computing and in-memory computing, whereby processing is performed in situ with an array of memory devices called memristors. Electrons are the carriers of information in electronic computing, but photons have long been considered an alternative option. Because the spectrum of light covers a wide range of wavelengths, photons of many different wavelengths can be multiplexed (transmitted in parallel) and modulated (altered in such a way that they can carry information) simultaneously without the optical signals interfering with each other.
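To make that concrete, the toy NumPy snippet below shows how a single fully connected layer of a neural network reduces to a matrix–vector multiplication; this is the operation that photonic processors aim to carry out in the optical domain. The sizes and values are arbitrary and purely illustrative.

```python
import numpy as np

# One fully connected layer: the weight matrix W maps an input vector x
# to pre-activations W @ x + b, i.e. a matrix-vector multiplication.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # 4 neurons, 3 inputs (toy sizes)
b = rng.standard_normal(4)
x = rng.standard_normal(3)

pre_activation = W @ x + b              # the matrix-vector multiplication
output = np.maximum(pre_activation, 0)  # a simple nonlinearity (ReLU)
print(output.shape)                     # (4,)
```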
Computer vision trains computers to interpret and understand the visual world. Using deep learning models, machines can accurately identify objects in videos or images in documents, and react to what they see. The practice is already having a big impact on industries like transportation, healthcare, banking and manufacturing. For example, a camera in a self-driving car can identify objects in front of the car, such as stop signs, traffic signals or pedestrians, and react accordingly, said Jung. Computer vision has also been used to analyze scans to determine whether tumors are cancerous or benign, avoiding the need for a biopsy. In banking, computer vision can be used to spot counterfeit bills or to process document images, rapidly automating cumbersome manual processes. In manufacturing, it can improve defect detection rates by up to 90 per cent. And it is even helping to save lives: cameras monitor and analyze power lines to enable early detection of wildfires. At the core of machine learning is the idea that computers are not simply trained on a static set of rules but can learn to adapt to changing circumstances. “It’s similar to the way you learn from your own successes and failures,” said Jung. “Business is going to be moving more and more in this direction.”
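For readers who want to see what identifying objects with a deep learning model looks like in code, here is a minimal sketch using a pretrained image classifier from torchvision (assuming a recent torchvision release; the image path is a placeholder and the weights download on first use). It illustrates the general technique, not the specific systems described above.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained classifier and put it in inference mode.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")   # placeholder input image
batch = preprocess(img).unsqueeze(0)             # shape (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_class = probs.max(dim=1)
print(int(top_class), float(top_prob))           # predicted class index and confidence
```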
Traditionalists often cling to waterfall methodology, which has long been favored in enterprise environments for its rigorous requirements capture, documentation and governance. While there are times when waterfall may be appropriate, such as instances where customers want to see a clear product roadmap over a set time period, this is rarely the way the world works today. Upstarts are disrupting traditional business models at breakneck speed, with innovative, cutting-edge software applications being rolled out quickly. If an organization is to compete in this climate, it cannot afford the time spent using waterfall to manage and implement DevOps methods and features. That’s like trying to learn to speed-row on a frozen lake. We believe that using agile and DevOps practices will help you transition to a faster and higher-quality software delivery organization. The faster you can deliver new capabilities and features, the more competitive you’ll be. So, it’s best not to waste time using waterfall to implement DevOps if your ultimate goal is to produce software products that delight customers, ahead of your competition. The goal should always be progress, not perfection. There are many features and capabilities you can implement that will yield positive benefits.
The use of no-code and low-code platforms might give organizations ways to finally close the talent gap that threatens to stall growth, says Katherine Kostereva, CEO and managing partner of low-code platform provider Creatio. She says there are almost 1 million IT jobs that remain unfilled in the UK alone. “The demand for IT staff is going to grow,” Kostereva says. “The only way out is to get technology into the hands of employees, of power users, and that is exactly what low-code is doing.” Giving people who primarily come from the business operations side access to these platforms can help narrow the talent demand and address a common point of discord in many organizations. Kostereva says there is a continued misalignment where business teams have their own ideas about how interfaces and business processes should work, while IT teams must contend with limitations on resources and growing backlogs of change requests. The emerging market for low code, she says, can help business professionals take on more developer duties to a certain extent. This may be an inevitable trend as more organizations explore ways to use no-code and low-code platforms.
As the threat grows, it’s important to take action to prevent state-sponsored cyber-attacks. For some companies, surviving the impact of this type of cyber-assault simply isn’t possible, says Amanda Finch, CEO of the Chartered Institute of Information Security. This is partly because the fines that come in the wake of an attack can be “crippling”, she warns, adding: “The incident can lead to a loss of confidence from investors and stakeholders. Being cut off from financial resources can stall a company into inactivity, and even cause a collapse.” To protect themselves, organisations need to construct threat models to drive their cyber threat intelligence (CTI) collection plan, says Thornton-Trump. At the same time, Thornton-Trump says, a firm’s CTI team should be equipped to analyse threat-actor activity against the organisation’s security controls. “Knowing what a threat actor may use to target the organisation and applying that knowledge can provide a massive defensive advantage.” He explains that the ultimate goal of a CTI programme is to understand key mistakes, exploits or unfortunate circumstances that have occurred in the past. “This information can be used to prevent similar attacks on the organisation.”
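A toy sketch of that idea, using made-up example techniques and controls rather than real intelligence, might simply compare what an actor is known to use against what the organisation already mitigates and surface the gaps. In practice this mapping would be driven by the CTI collection plan and a framework such as MITRE ATT&CK, not a hard-coded dictionary.

```python
# Illustrative only: technique IDs follow the MITRE ATT&CK naming style,
# and the controls listed are placeholders, not a real security posture.
actor_techniques = {
    "T1566": "Phishing",
    "T1110": "Brute Force",
    "T1486": "Data Encrypted for Impact",
}
controls_in_place = {
    "T1566": "Mail filtering + user awareness training",
    "T1110": "MFA + account lockout policy",
}

# Any technique the actor uses that has no mapped control is a gap to prioritise.
gaps = {tid: name for tid, name in actor_techniques.items() if tid not in controls_in_place}
for tid, name in gaps.items():
    print(f"Uncovered technique {tid} ({name}) - add a mitigation to the defence plan")
```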
Quote for the day:
"What good is an idea if it remains an idea? Try. Experiment. Iterate. Fail. Try again. Change the world." -- Simon Sinek