Another important factor that governments and businesses will need to address is devising methods to prevent the rise of AI used with malicious intent, e.g. for hacking or fraudulent sales. Most cyber-experts predict that cyberattacks powered by AI will be one of the biggest challenges of the 2020s, which means that regulations and preventative measures should be implemented as in any other industry: designed specifically for the application. Stringent qualification processes will also need to be established for certain industries. For example, Broadway show producers have been driving ticket sales through an automated chatbot, with the show Wicked boasting ROI increases of up to 700 percent. This has also allowed producers to sell tickets for 20 percent more than the average weekly price. Regulations will need to address the fact that AI and bots have the potential to take advantage of consumers’ wallets, which means that policymakers will need to work closely with firms that are gradually beginning to rely on chatbots to make sure that consumer rights are not being breached.
Through smart home devices, homeowners are able to remain connected to their property 24/7, whether at home, at work or on holiday. In turn, this constant connectivity instils a psychological shift in householders, encouraging them to take a more proactive approach to home security and protection. ... For example, while water damage may not top the list of worries for homeowners, it can cost thousands of pounds to repair and is one of the most common types of domestic property damage claims. However, with a leak sensor installed, escaping water can be caught quickly and customers will even be alerted via a notification to their smartphone. This knowledge is critical, as homeowners are able to call out a plumber on the same day – at a fixed fee – and contain the damage. This proactivity benefits both sides. For insurers, responsible and safe homeowners pose less of a risk, resulting in lower premiums. It’s a win-win all round. Moreover, the additional information gained from the steady stream of signals sent to the insurer from in-home sensors and monitors can allow claim handlers to remain better informed in the event of an incident.
It is important to recognize that principles-based regulation is not a euphemism for “deregulation” or a “light-touch” approach—far from it. Principles-based regulation is a different way of achieving the same regulatory outcomes as rules-based regulation; it simply does so in what is, in many cases, a more efficient and flexible manner. That flexibility also prevents subversion of those outcomes through the kind of loopholes that revealed the inherent vulnerability of rules-based regulation in the run-up to the financial crisis. Of course, in practice, it is rare to have either a purely principles-based or a purely rules-based regulation. Rather, they represent two ends of the regulatory spectrum. Every principles-based regulatory regime has some rules, and every rules-based regime has some element of principle. For this reason, we frequently see hybrid regulatory systems of principles and rules.
"Domestically, our private and public sectors will use AI decisively to generate economic gains and improve lives. Internationally, Singapore will be recognised as a global hub in innovating, piloting, test-bedding, deploying and scaling AI solutions for impact," said the SNDGO, which is part of the Prime Minister's Office. To kick off its efforts, the government identified five national projects that focused on key industry challenges, including intelligent freight planning in transport and logistics, chronic disease prediction and management in healthcare, and border clearance operations in national safety and security. These form part of nine sectors that have been earmarked for heightened deployment as AI is expected to generate high social and economic value for Singapore. These verticals include manufacturing, finance, cybersecurity, and government. The national AI strategy also outlined five key enablers that the government deemed essential in building a "vibrant and sustainable" ecosystem for AI innovation and adoption. A robust data architecture, for instance, would be necessary for the public and private sectors to manage and exchange information securely, so AI algorithms can have access to quality datasets for training and testing.
In his book, The Shallows, Nicholas Carr demonstrates how our internet usage has rewired our brains. We think superficially, skimming, glancing and scanning rather than reading or processing more deeply. Cal Newport, in his book Deep Work, advocates for focusing, contemplating and concentrating. His contention is that this distraction-free thinking has become increasingly rare and is a skill we must learn (or relearn). In fact, empathy—so critical to our humanity—is impossible without deeply considering others’ situations. And the ability to solve problems and develop ideas cannot happen effectively without depth of thought. Tell stories. While communicating facts tends to engage limited portions of the brain, hearing a story engages multiple parts of the brain. One study in particular, using MRI scans, found that participants had greater understanding and retention of concepts when multiple parts of the brain were engaged. Other researchers, including Dr. Paul Zak, have demonstrated that hearing stories that include conflicts and meaningful characters tends to engage us emotionally. The resulting release of oxytocin leads us to trust the messages and morals the story is trying to convey.
Check Point's research and development expenses increased 20% year over year while selling and marketing expenses rose nearly 10.5%. Both of these metrics outpaced the company's actual revenue growth. In fact, Check Point has stepped up its investment in both of these line items in the past year or so, and the positive impact is visible in the company's subscription growth. The company is now looking to get into lucrative cybersecurity niches as well. Check Point recently announced the acquisition of Internet of Things (IoT)-focused cybersecurity start-up Cymplify. Check Point will integrate Cymplify's expertise into its Infinity cybersecurity architecture so that clients can protect their IoT devices -- such as smart TVs, medical devices, and IP cameras -- against cyberattacks. This should open up a big growth opportunity for Check Point because, according to IHS Markit, cybersecurity is the fastest-growing IoT niche. The firm predicts that the IoT data security market will grow from $3 billion in revenue this year to $7 billion in 2022 as more original equipment manufacturers (OEMs) move to secure their IoT devices.
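That forecast implies a steep compound annual growth rate. A minimal sketch of the arithmetic, assuming "this year" refers to 2019 and therefore a three-year span:

```python
# Implied compound annual growth rate (CAGR) for the IoT data security
# market forecast: $3bn -> $7bn. The three-year span is an assumption
# based on the article's likely 2019 publication date.
start, end, years = 3.0, 7.0, 3  # $bn, $bn, years
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # → 32.6%
```

In other words, the forecast assumes the market grows by roughly a third every year, which underlines why Check Point sees the niche as lucrative.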
"When we've done our tests on our 5G network, they're typically 1,000 to 10,000 times less than what we get from other devices. So when you add all of that up together, it's all very low in terms of total emission. But you're finding that 5G is in fact a lot lower than many other devices we use in our everyday lives." Wood added there is no evidence for cancer or non-thermal effects from radio frequency EME. "There's some evidence for biological effects, but none of these are adverse," Wood told the committee. "So they've really looked at all of the research they need to set a safety standard, and in summary what they said is that, if you follow the guidelines, they're protective of all people, including children." On the issue of governmental revenue raising from its upcoming spectrum sale, Optus said it would be wrong of government to view it as a cash cow, as every dollar spent on spectrum is a dollar not spent on building networks. "Critically, in order to achieve the coverage and deployment required, 5G networks will require significant amounts of spectrum," the Singaporean-owned telco wrote.
Starting from the very beginning of the process, CIOs can help AI be “good” by ensuring that the data being used to create the algorithms is itself ethical and unbiased. Gathering and using data from ethical sources significantly reduces the risk of harbouring toxic datasets which may infect systems with problematic biases further down the line. This is especially crucial for highly regulated industries, which will need to identify biases already present and remedy them accordingly. Using insurance as an example, CIOs should take care not to include data that heavily features one particular demographic, gender and so on, which might skew averages and inform non-representative policies. Collecting a rich sample of ethical, GDPR-compliant, representative data from consenting customers actually benefits the accuracy of the AI it powers, and it also reduces the work needed to “clean” it.
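One practical way to catch the over-represented-demographic problem described above is to compare each group's share of the dataset against its share of the target population before training. The sketch below is purely illustrative; the field name, group labels and tolerance are hypothetical, not from the article:

```python
# Hypothetical sketch: flag any demographic group whose share of the
# training data deviates sharply from its share of the population the
# policy is meant to cover. Names and thresholds are illustrative.
from collections import Counter

def flag_imbalance(records, population_shares, tolerance=0.10):
    """Return groups whose dataset share differs from the expected
    population share by more than `tolerance` (absolute difference)."""
    counts = Counter(r["demographic"] for r in records)
    total = sum(counts.values())
    flagged = {}
    for group, pop_share in population_shares.items():
        data_share = counts.get(group, 0) / total
        if abs(data_share - pop_share) > tolerance:
            flagged[group] = (data_share, pop_share)
    return flagged

# Toy dataset in which group "A" is heavily over-represented.
records = [{"demographic": "A"}] * 80 + [{"demographic": "B"}] * 20
print(flag_imbalance(records, {"A": 0.5, "B": 0.5}))
# → {'A': (0.8, 0.5), 'B': (0.2, 0.5)}
```

A check like this is cheap to run on every data refresh, which fits the article's point that keeping data representative up front reduces the cleaning work later.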
The suit can lift upwards of 30kg. While it won’t do the lifting on its own, it can take that weight off its wearer. It offers support in the form of hydraulically controlled artificial muscles which are housed in an aluminum backpack linked to the waist joints. The pack provides two axes of movement: one for bending at the waist and another for supporting the thighs. The suit can be controlled in two ways: the wearer can either blow into a tube or touch a control surface with their chin, creating a hands-free control system for the exoskeleton. The muscle suit is wrapped inside a custom, water-repellent bag. This protects the device from the elements and gives it a softer appearance. ... Many other Japanese companies have also taken up the challenge of producing suits to assist in physical labor. Companies like HAL have already established a firm foothold in the exoskeleton industry with their series of robotic suits. Nevertheless, the Muscle Suit is an awe-inspiring invention by this venture company from the Tokyo University of Science.
Yes—at least in some circumstances, both researchers said. Bordes’s group, for example, is creating a benchmark test that can be used to train a machine learning algorithm to automatically detect deepfakes. And Rossi said that, in some cases, A.I. could be used to highlight potential bias in models created by other artificial intelligence algorithms. While technology could produce useful tools for detecting—and even correcting—problems with A.I. software, both scientists emphasized that people should not be lulled into complacency about the need for critical human judgment. “Addressing this issue is really a process,” Rossi told me. “When you deliver an A.I. system, you cannot just think about these issues at the time the product is ready to be deployed. Every design choice ... can bring unconscious bias.” You can read more about our discussion and watch a video here. ... “Yes, it is true that A.I. is only as good as the data it has been fed,” she said. But, she argued, this potentially gave people tremendous power.
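The idea of one program auditing another's outputs can be made concrete with a standard fairness metric. A minimal sketch, computing the demographic-parity gap (the spread in positive-prediction rates across groups) over a model's predictions; the data and group labels are hypothetical, and this is just one of many bias measures, not the specific method Rossi described:

```python
# Illustrative audit: measure how unevenly a model's positive predictions
# (1 = approve, 0 = deny) fall across demographic groups. A large gap is
# a red flag worth surfacing to a human reviewer, not proof of bias.
def parity_gap(predictions, groups):
    """Return (max rate difference across groups, per-group rates)."""
    tallies = {}
    for pred, grp in zip(predictions, groups):
        pos, n = tallies.get(grp, (0, 0))
        tallies[grp] = (pos + pred, n + 1)
    rates = {g: pos / n for g, (pos, n) in tallies.items()}
    return max(rates.values()) - min(rates.values()), rates

preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = parity_gap(preds, groups)
print(round(gap, 2), rates)  # → 0.6 {'A': 0.8, 'B': 0.2}
```

Because the audit only needs predictions and group labels, it can run against a black-box model, which is what makes automated checks like this useful alongside human judgment rather than in place of it.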
Quote for the day:
"Whenever you see a successful business, someone once made a courageous decision." -- Peter F. Drucker