“AI is the next wave of innovation, and overlooking this opportunity will pose a threat to a region’s economic and national security,” said Center for Data Innovation director Daniel Castro in a statement. “The EU has a strong talent pool and an active research community, but unless it supercharges its current AI initiatives, it will not keep pace with China and the United States.” The center focused on six categories: talent, research, development, adoption, data, and hardware. On a 100-point scale, researchers found that the U.S. led overall with 44.2 points, China was second at 32.3, and the European Union placed third with 23.5. The study found that the U.S. shows clear leadership in four of the six categories: talent, research, development, and hardware, while China leads in adoption and data. The findings may appear worrisome for the EU, which has placed great emphasis on its AI efforts in recent years. But the region does place second in four categories: talent, research, development, and adoption; of those, it is particularly strong in research.
"It is common for attackers to find common utilities such as FTP clients or video conversion software, package or wrap malicious code into the installer, and then upload their packed installer to a free software download site, knowing that users may find their malicious version of the software installer before they find the legitimate original," Murphy says. Additionally, employees increasingly feel entitled to work from anywhere and to have access to anything at any time, according to Nick Bennett, director of professional services at FireEye Mandiant. "Employees [also] feel entitled to use work assets for non-work activities, and they are bypassing protections that are in place, making themselves more susceptible to phishing attacks," Bennett says. The issue is twofold. Employees are using corporate-issued workstations for personal use, even at home; when they bring those workstations back to the enterprise, they put the business at risk, Bennett explains. In addition, "employees are also using non-corporate assets to access the corporate network on a device that is unmanaged by [the] enterprise," Bennett says.
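One routine defense against the repackaged-installer attack Murphy describes is to verify a download against the checksum the vendor publishes on its official site, rather than trusting a third-party mirror. A minimal sketch in Python (the function names are illustrative, not from any vendor tooling):

```python
import hashlib
import hmac

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Hash the file in chunks so large installers need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_checksum(path: str, published: str) -> bool:
    """Compare against the SHA-256 checksum published by the vendor.

    A repackaged ("wrapped") installer produces a different digest,
    so a mismatch means the download should not be run.
    """
    return hmac.compare_digest(sha256_of_file(path), published.lower())
```

A wrapped installer cannot preserve the original digest, so this catches the attack as long as the checksum itself comes from the vendor's site over HTTPS and not from the same mirror as the download.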
While machine learning is a powerful tool, it is not a magical box. Bias in the data or the model can train machine learning systems in inappropriate ways, and strange artifacts may be lurking within, arising from factors that have nothing to do with the phenomenon being modeled. The lack of transparency in the process can make it difficult to determine why a model returns what it does, because emergent behaviors lurking in the background are often hard to ferret out. Finally, the deeper the learning, the more energy is required to maintain the multiple levels of abstraction, and that cost can be significant enough to make using such systems uneconomical. This is not to say that such tools are useless: most of the evolution of machine learning systems in the last decade has proven highly useful and effective for a wide variety of applications, and forms an integral part of the artificial intelligence toolkit. The danger comes in thinking that such systems are truly intelligent, rather than simply the clever application of high-speed, and occasionally non-linear, solutions.
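The point about biased data can be made concrete with a toy example: when a training sample is skewed, even a trivial model will latch onto a feature that has no causal link to the outcome. The dataset and the crude scoring function below are invented purely for illustration:

```python
from collections import Counter

# Toy loan dataset: each row is (zip_code, income_band, repaid).
# The sample is skewed so that zip code "A" always co-occurs with
# repayment, even though zip code says nothing about creditworthiness.
train = [
    ("A", "high", 1), ("A", "high", 1), ("A", "low", 1), ("A", "low", 1),
    ("B", "high", 0), ("B", "high", 0), ("B", "low", 0), ("B", "low", 0),
]

def feature_label_agreement(rows, feature_idx):
    """Fraction of rows where the majority label for the row's feature
    value matches the row's label -- a crude proxy for predictive power."""
    by_value = {}
    for row in rows:
        by_value.setdefault(row[feature_idx], []).append(row[2])
    correct = 0
    for row in rows:
        majority = Counter(by_value[row[feature_idx]]).most_common(1)[0][0]
        correct += majority == row[2]
    return correct / len(rows)

# In this skewed sample, zip code "predicts" repayment perfectly while
# income band looks useless -- a model trained here learns the artifact.
print(feature_label_agreement(train, 0))  # zip code -> 1.0
print(feature_label_agreement(train, 1))  # income band -> 0.5
```

Nothing about zip code is genuinely informative; the score of 1.0 is an artifact of how the sample was drawn, which is exactly the kind of lurking bias described above.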
Technology – as in all industries – has an important role to play in the transformation of insurance. New opportunities are being created by the likes of the IoT, telematics, advanced data analytics, and technologies which support consumption-based offerings and insurance for the sharing economy. These technological advances are particularly welcome in a sector which has faced increasing regulation. As Paton notes, many modern regulatory challenges began with the EU’s Solvency II Directive in 2009, and other directives have followed. “The big impact of Solvency II from an innovation perspective was that it diverted the insurer’s ability to invest in new areas,” he says. “It was followed by the Insurance Distribution Directive and regulatory scrutiny on conduct in areas such as personal lines. So now with things like point of renewal, it’s necessary to make people aware of how much their insurance premium has gone up.” “There will be more initiatives of this type,” Paton adds, “right through to changes in expectations around operational resilience, cybersecurity and third party supply management.”
Although moving away from blame culture and the idea that people are the problem should be a goal of today’s security teams, most organizations still see employees as a chink in company defenses. A massive 98 percent of respondents agreed with the statement that ‘the human employee is the weakest link when it comes to cybersecurity’, and over two-thirds agreed strongly. Social engineering, phishing, and business email compromise – all attacks which rely on people falling prey to manipulation and trickery – were listed among the top threats organizations are most concerned about. This dim view of the role people play in security likely feeds into why only 13 percent of organizations would rate internal cybersecurity awareness as very good. A further 40 percent of respondents said awareness was merely adequate, suggesting there is still much work to be done to improve education, raise awareness, and reduce people-based risks. While organizations may still view humans as the problem, security teams also recognize that people-based problems require people-based solutions: 85 percent of the companies surveyed stated they were using awareness training to reduce human error.
Most agile teams do sprint retrospectives at least once a month to iterate on and improve their software development process and workflow. However, many of those same teams rely only on their feelings to “know” whether they have actually improved. You need an unbiased reference system if you want to compare how two sprints went. ... Commit frequency and active days serve the same purpose. An active day is a day in which an engineer contributed code to the project, which includes specific tasks such as writing and reviewing code. Those two metrics are interesting if you want to introduce a best practice of committing every day. They are also a great way to see the hidden cost of interruptions. Non-coding tasks such as planning, meetings, and chasing down specs are inevitable, and teams often lose at least one day each week to these activities. Monitoring commit frequency enables you to see which meetings have an impact on your team’s ability to push code. It’s important to keep in mind that pushing code is the primary way your team provides value to your company.
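Both metrics are straightforward to compute from version-control history. A minimal sketch, assuming commit records parsed from something like `git log --pretty=format:"%an,%aI"` (the sample data here is invented):

```python
from datetime import datetime

# Hypothetical commit log for one sprint: (author, ISO 8601 timestamp).
commits = [
    ("alice", "2024-03-04T09:15:00"),
    ("alice", "2024-03-04T16:40:00"),
    ("alice", "2024-03-06T11:02:00"),
    ("bob",   "2024-03-05T10:30:00"),
]

def sprint_metrics(commits):
    """Per-author commit count and active days (days with >= 1 commit)."""
    raw = {}
    for author, ts in commits:
        day = datetime.fromisoformat(ts).date()
        entry = raw.setdefault(author, {"commits": 0, "days": set()})
        entry["commits"] += 1
        entry["days"].add(day)
    return {
        author: {"commits": m["commits"], "active_days": len(m["days"])}
        for author, m in raw.items()
    }

print(sprint_metrics(commits))
# {'alice': {'commits': 3, 'active_days': 2}, 'bob': {'commits': 1, 'active_days': 1}}
```

Comparing these numbers across two sprints gives the unbiased reference point the retrospective needs: a sprint where active days drop while meeting load rises makes the cost of interruptions visible.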
While even the experts can’t predict when widespread adoption of the technology will take place, they can suggest what it will take before widespread adoption is possible. The most important lesson is understanding: understanding that blockchain technology should work as part of an ecosystem of technologies. No matter what industry you’re in, it starts with customer experience, and the CX depends on a choreography of technologies. In terms of display, there has to be a web and touch experience, or an AI component with chatbots, for example. Similarly, with blockchain, it’s not just one thing that’s going to impact the use case. “One obvious pitfall of blockchain is that people look at it as the only solution for realising anything and everything. But this is wrong. You have to really understand the end-to-end experience and see where blockchain technology fits in,” said Jitendra Thethi, AVP, Altran.
This focus on needs and outcomes turns out to be a great way to integrate the key frameworks and models of the Lean-Agile landscape. This is a worthwhile goal, and it’s worth noting that Agile is often introduced not right-to-left but left-to-right – not needs-first, but backlog-first or solution-first. When the focus is on ploughing through backlogs of requirements, the likely result is mediocrity (or worse) – hardly a great advertisement for Agile. And the dissonance of imposing solutions on teams, rather than seeking to meet their and the wider organisation’s needs, is potentially fatal to Agile adoptions. Right to Left should not be understood as an attack on branded process frameworks; neither does it elevate any one framework over the others. However, as well as calling into question how they are often rolled out, I do voice the regret that they are so often described in left-to-right terms, which leads me to wonder how Agile is then supposed to be understood as a departure from 20th-century ways of thinking and working. I demonstrate that Scrum and even SAFe are readily described in right-to-left terms – “iterated self organisation around goals” is the five-word summary.
“When it comes to healthcare and voice design, we have several challenges we face every day,” Freddie Feldman, voice design director at Wolters Kluwer Health, said at The Voice of Healthcare Summit at Harvard Medical School last week. “HIPAA is a big topic on everyone’s mind nowadays, and it is one we take seriously. The first thing most people think about when they hear HIPAA is securing server platforms, but there is more to it. We have to consider things like the unintended audience for a call.” He said that, given the nature of voice, even leaks not expressly prohibited by HIPAA can be inappropriate. For example, if a voice technology intended for home use announces a message from the radiology department to the whole house, it is giving away too much information, he said. Much of it comes down to appropriate use; putting smart speakers into a hospital room setting poses a different set of challenges. “I think as far as smart speakers and virtual assistants [go], Amazon right now only has HIPAA-eligible environments, so basically turning on and off HIPAA for specific skills, enabling HIPAA for a particular voice app or voice skill,” he said.
The largest and most basic need in the data science hierarchy is data collection. While every bank and credit union collects data daily on transactions, product use, customer demographics, and even external insights from social media and other sources, an organization needs to determine what specific insight is required to get a complete picture. Are you collecting insight on channel use, geolocational data, and consumer beliefs and behaviors? While you can build a plan for future collection, the success of any machine learning or AI initiative hinges on the scope and quality of the data collected. As important as collecting the right data is, Rogati stresses that it is equally important to have an ongoing flow of real-time data that is easy to access, store, and analyze. This can be a major challenge for financial services organizations, which are notorious for data silos. Beyond internal data flows, it is important that any external or unstructured data can also be collected, stored, and analyzed. Storage was once a major problem, but cloud technology has simplified some of those challenges.
Quote for the day:
"To be a good leader, you don't have to know what you're doing; you just have to act like you know what you're doing." -- Jordan Carl Curtis