The problem here [is] the apathy from companies who are breached. We don’t do enough to train people who work with networked computers how to handle them. … But couple that with a lack of spending by companies on basic security, and you have a situation that is ripe for exploitation. … We've got all kinds of alphabet agencies and other miscellaneous government spooks … so why not just let them take the gloves off and sort things out? From a marketing perspective, I don't think it's too hard to spin the attacks against hospitals, etc. [to get] about half of the country behind it. … Once a few bodies pile up, I think that people will start to get the message. It won't stop the state actors targeting the state or military, but that's a separate ball game anyway. … [And] I think people would generally be on board with our own government agencies using U.S. companies and utilities for practice to help find and patch vulnerabilities. Normally the laws prevent well-meaning individuals from doing those things … but if the government does it there's a lot less protest, particularly since they're probably already spying on most of the country anyway.
The calculus for the AI industry is the same as that of the private healthcare industry in the US. Extricating biased, black-box AI from the world would probably put dozens of companies out of business and likely result in hundreds of billions of dollars lost. The US industrial law enforcement complex runs on black-box AI – we’re unlikely to see the government end its deals with Microsoft, Palantir, and Amazon any time soon. So long as lawmakers are content to profit from the use of biased, black-box AI, it’ll remain embedded in society. And we also can’t rely on businesses themselves to end the practice. Our desire to extricate black-box systems simply means companies can’t “blame the algorithm” anymore, so they’ll hide their work entirely. With transparent AI, we’ll get opaque developers. Instead of choosing not to develop dual-use or potentially dangerous AI, they’ll simply lawyer up. ... When things go wrong and AI runs amok, the lawyers will be there to tell us the most company-friendly version of what happened. Most importantly, they’ll protect companies from having to share how their AI systems work.
Overall, IT leaders see this as a good thing. More than three out of four (78%) feel the shift in technology spending is a positive for their organizations. "Decentralized IT spending is likely to deliver a number of positive outcomes, as a shift in roles offers tactical decision-making among business units via SaaS and IaaS procurement," the IDG authors state. This "leaves time for IT leaders to focus on more strategic tasks. However, these positive outcomes can only be achieved if IT departments build the right framework to enable different business units to procure their own technology." User-driven IT really works out only if, in the process, IT costs are kept transparent. If business users can weigh the costs of their technology against the benefits they are receiving, that's a real plus. "A critical part of this framework is ensuring that the costs, security and compliance requirements of software purchases are visible and understood throughout the procurement process. Only by doing this can the LOB feel empowered to procure and 'own' their technology, while reducing the burden on senior IT leaders."
Our research demonstrates clear, actionable paths forward to help resolve the epidemic of workplace exclusion. Even the most effective recruiting strategy for diversity won’t lead to long-term change if new talent isn’t supported to succeed. Fortunately, our findings show that we are not powerless in the face of exclusion. Individuals coping with feelings of being left out can adopt new evidence-based tools: gaining perspective from others, mentoring those in a similar situation, and devising strategies for improving things. For team leaders and colleagues who want to help others feel included, our research suggests that serving as a fair-minded ally — someone who treats everyone equally — can buffer the exclusionary behavior of others. They can also share stories about how they have coped with similar challenges and ask what suggestions teammates have for improving the situation. These strategies can help workers not only navigate tricky workplace dynamics, but also drive their own version of change, especially when the system isn’t working for everyone.
The future of insurance was a front-line topic this week from three disparate sources: the UK, the US, and a setting in Bolivia. Not the entire globe speaking, but certainly diverse locales; the coverage touched on insurance prospects that have global reach, and the sources were uniform in the principle that innovation is important, but customer and risk changes are more so. Consider insurance in the near future: more intangible risks; data sources that embrace forward-looking techniques and a breadth of external data; customers that expect more than an annual ‘touch’ from their agents/carriers (if there are agents at all); and product/service evolution that is measured in days, not years. Denise Garth of Majesco writes in a recent article, “Are Insurers Prepared to Meet Future Customer Needs?”, that today’s consumers are becoming accustomed to fluid transactions: order on an app, have delivery (and possibly put-away) within a day, track the purchase on one’s phone, tablet, television, smart watch, or computer.
To fully understand AIoT, you must start with the internet of things. When “things” such as wearable devices, refrigerators, digital assistants, sensors, and other equipment are connected to the internet, can be recognized by other devices, and can collect and process data, you have the internet of things. Artificial intelligence is when a system can complete a set of tasks or learn from data in a way that seems intelligent. When artificial intelligence is added to the internet of things, those devices can analyze data, make decisions, and act on that data without human involvement. ... In a smart retail environment, a camera system equipped with computer vision can use facial recognition to identify customers as they walk through the door. The system gathers intelligence about customers (gender, product preferences, traffic flow, and more), analyzes the data to predict consumer behavior, and then uses that information to make decisions about store operations, from marketing to product placement.
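The sense, analyze, decide, act loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the function names, labels, and thresholds are all invented for the sketch, not taken from any real retail system.

```python
# Minimal AIoT sketch: a device reads a sensor value, runs a trivial
# on-device "model," and turns the result into an action with no human
# in the loop. All names and thresholds here are hypothetical.

def classify_occupancy(people_count: int) -> str:
    """Stand-in for an on-device model: map a sensor reading to a label."""
    if people_count == 0:
        return "empty"
    return "busy" if people_count > 10 else "normal"

def recommend_action(label: str) -> str:
    """Turn the model's output into an operational decision."""
    return {
        "empty": "dim displays to save power",
        "normal": "run standard promotions",
        "busy": "open another checkout lane",
    }[label]

# Simulated readings from a store camera's people counter
for reading in [0, 4, 17]:
    label = classify_occupancy(reading)
    print(reading, label, "->", recommend_action(label))
```

In a real deployment the classifier would be a trained model running at the edge, but the shape of the loop — data in, inference, automated action — is the same.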
The corporate, academic, and military proponents of “ethical AI” have collaborated closely for mutual benefit. For example, Ito told me that he informally advised Schmidt on which academic AI ethicists Schmidt’s private foundation should fund. Once, Ito even asked me for second-order advice on whether Schmidt should fund a certain professor who, like Ito, later served as an “expert consultant” to the Pentagon’s innovation board. In February, Ito joined Carter at a panel titled “Computing for the People: Ethics and AI,” which also included current and former executives of Microsoft and Google. The panel was part of the inaugural celebration of MIT’s $1 billion college dedicated to AI. Other speakers at the celebration included Schmidt on “Computing for the Marketplace,” Siegel on “How I Learned to Stop Worrying and Love Algorithms,” and Henry Kissinger on “How the Enlightenment Ends.” As Kissinger declared the possibility of “a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms,” a protest outside the MIT auditorium called attention to Kissinger’s war crimes in Vietnam, Cambodia, and Laos, as well as his support of war crimes elsewhere.
In the manufacturing sector, IoT deployments generate an enormous amount of data, and businesses that can tap into that data step into a whole new world that lets them look back into the past, observe the present, and glimpse the future — or at least get a 360-degree view of what the future can bring. Boeing, a globally recognized company, is using edge and IoT technology to optimize manufacturing by gathering large amounts of data from operational aircraft. A twin-engine Boeing 737, for example, creates 333 gigabits of information per minute. Boeing can use all that data to build simulations and models, fueling use cases such as predictive maintenance, software improvements, and new product development. Brembo, an Italian manufacturer and world leader in high-performance disc-brake technology, and a client of Dell Technologies Edge and IoT, embarked on the development of an advanced smart-manufacturing plant. By installing Dell Technologies IoT Gateways to gather data from sensors that report real-time production-line performance, company leaders can drive their business forward with data-driven decision-making.
As of September 24, 2019, Microsoft officials said that more than 900 million active devices were running Windows 10. That figure includes 40-50 million Xbox One consoles, an insignificant number of HoloLens and Surface Hub devices, and a rapidly shrinking population of Windows Phones. After making those adjustments, let's call it 850 million Windows PCs. That number has been increasing by about 100 million every six months, and usage statistics I've reviewed show that the pace is ticking up slightly as the Windows 7 deadline nears. Given those trends, it's reasonable to project that the number of active Windows 10 devices will be over a billion by the end of the first calendar quarter of 2020. But how does that number compare to the current Windows installed base? After reviewing all the available evidence, I'm convinced that the current installed base of Windows PCs as we head into 2020 is down significantly since its peak and is probably close to 1.2 billion today.
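The projection above is easy to sanity-check with back-of-the-envelope arithmetic, using only the figures quoted in the excerpt (roughly 900 million active Windows 10 devices in September 2019, growing by about 100 million every six months):

```python
# Figures taken directly from the excerpt; this is a rough sanity check
# of the projection, not an independent forecast.
active_sept_2019 = 900_000_000       # active Windows 10 devices, all types
growth_per_six_months = 100_000_000  # approximate recent growth rate

# End of Q1 2020 is roughly six months after September 24, 2019
projected_q1_2020 = active_sept_2019 + growth_per_six_months
print(f"{projected_q1_2020:,}")  # prints 1,000,000,000
```

With the pace "ticking up slightly," crossing one billion active devices by the end of the first calendar quarter of 2020 is consistent with these numbers.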
Companies should ask themselves: “Is data the core asset that I monetize?” or “Is data the glue that connects the processes that have made my products or services successful?” This is especially urgent as companies start to use third-party data sources to train their algorithms — data about which they know relatively little. Companies also need to ask themselves: What is the quality of the internal and external data we’re using to train, and to feed into, our algorithms? What unknown and unintended biases could our data train into algorithms? How will machines know which biases they operate under if we don’t share how algorithms arrive at their answers? And what will the impact of this automation be on our business, people, and society? How can we detect and quickly mitigate unanticipated impacts? ... In the end, the answer around ethics in AI seems to boil down to transparency: having applications that can demonstrate what data was used, who trained the AI, and how the AI came to its answers.
Quote for the day:
"The secret of success is to know something nobody else knows." -- Aristotle Onassis