In making your case to business leaders on the need to address technical debt, it’s important to adopt a campaign approach. Think like advertisers, who measure their impact in terms of reach and frequency: how many people they reach and how many times those people are exposed to their message. While you’re not running an advertising campaign, you need to be prepared to make your case over time, and to reach both the decision makers and the people who influence them. One email, or one presentation to management, isn’t going to get your message across. Technical debt is inevitable and, in some cases, makes sound business sense (for example, when speed-to-market is critical, resources are limited, or information is incomplete). Once technical debt reaches a certain level, though, it makes good business sense to forgo immediate-gratification projects in order to pay it down. That’s why framing technical debt in terms business leaders understand makes them more inclined to manage it, as they do other risks facing the business.
While the WHO is one of the most high-profile agencies targeted by cybercriminals and nation-state hacking groups, other organizations have seen a dramatic rise in various security incidents, especially around phishing attempts. This week, security firm Zscaler released a report concerning phishing campaigns and malicious domains using COVID-19 as a lure. In January, the company reported about 1,200 of these incidents, but that number increased to 380,000 incidents in March. That's an eye-popping 30,000 percent increase, according to the report. In addition, Zscaler found that since the start of the healthcare crisis in January, about 130,000 suspicious domains have been registered. These domains include keywords such as "test," "mask," "Wuhan" and "kit," according to the report. And while attackers have focused on using COVID-19 as a lure, Brock Bell, principal consultant with the Crypsis Group, an incident response and risk management firm, notes that these tactics are likely to change over time as cybercriminal and hacking groups adjust their messages based on the news of the day.
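As a quick sanity check on the report's headline figure, the growth can be computed directly from the two month totals quoted above (1,200 incidents in January, 380,000 in March); the exact value is closer to 31,567 percent, which the report evidently rounds to roughly 30,000:

```python
# Percent increase in COVID-19-themed phishing incidents,
# using the month totals quoted from the Zscaler report above.
january_incidents = 1_200
march_incidents = 380_000

percent_increase = (march_incidents - january_incidents) / january_incidents * 100
print(f"{percent_increase:,.0f}% increase")  # prints "31,567% increase"
```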
Another good use case for ML is contract management, specifically automating the signing process. Software company Conga helps businesses automate contract lifecycle management (CLM), including the need for multiple signatures on a paper document. The platform allows Salesforce users to manage contracts directly in the application, while automating CLM from creation to signature. The software also automates reporting, tracking, and reminders. Conga's Digital Transformation Officer, Aishling Finnegan, said that the best approach to using ML is to map technology to a company's existing processes and build an individualized road map for digital transformation. "If you have a more programmatic approach, you're more in control, and it feels less overwhelming," she said, adding that demos of AI software are often too complicated. Finnegan said that automating the contract process is especially important now that entire companies are working remotely. "Sales teams are able to generate vital documents at home and get them to clients quickly," she said.
One interesting insight comes not from AI, but rather from another technology that aimed to replace human activity: the Automated Teller Machine (ATM). When ATMs were first put into place in the 1980s, there was widespread concern that they would eliminate the jobs of ordinary bank tellers and bank operations staff. However, according to Davenport, "One of my favorite statistics is that there are roughly the same number of bank tellers now, as there was in 1980 despite all the ATMs, internet banking, and other such changes." From this perspective, he expects AI not to have the disruptive effects on employment that many might at first assume. From Davenport’s point of view, introducing technology that automates and performs tasks previously accomplished by humans actually creates more jobs for people who take the time to learn how these systems work. For example, these new machines create opportunities for technicians and programmers, and whole new industries that are enabled by new technology.
Of course, most of us would be reluctant to give up on procedural fairness entirely. If a referee penalises every minor infringement by one team, while letting another get away with major fouls, we’d think something had gone wrong — even if the right team wins. If a judge ignores everything a defendant says and listens attentively to the plaintiff, we’d think this was unfair, even if the defendant is a jet-setting billionaire who would, even if found guilty, be far better off than a more deserving plaintiff. We do care about procedural fairness. Yet substantive fairness often matters more — at least, many of us have intuitions that seem to be consistent with this. Some of us think that presidents and monarchs should have the discretion to offer pardons to convicted offenders, even though this applies legal rules inconsistently — letting some, but not others, off the hook. Why think this is justified? Perhaps because pardons help to ensure substantive fairness where procedurally fair processes result in unfairly harsh consequences. Many of us also think that affirmative action is justified, even when it looks, on the face of it, to be procedurally unfair, since it gives some groups greater consideration than others.
Some of the themes that came to light included a lack of hardware to support a larger number of remote workers, the struggle between organizational priorities for quick deployment of remote technology and the commensurate level of security to protect systems, and helping end users understand and abide by security policies outside the office. One respondent commented, “Security at this point is a best effort scenario. Speed has become the primary decision-making factor. This has led to more than a few conversations about how doing it insecurely will result in a worse situation than not doing it at all.” ... “COVID-19 hit us with all the necessary ingredients to fuel cybercrime: 100% work from home [WFH] before most organizations were really ready, chaos caused by technical issues plaguing workers not used to WFH, panic and desire to ‘know more’ and temptation to visit unverified websites in search of up-to-the-minute information, remote workforce technology supported by vendors driven by ‘new feature time to market’ and NOT security, employees taking over responsibilities for COVID-19 affected co-workers, and uncertainty regarding unexpected communication supposedly coming from their employers.”
“There’s a lot of progress that could be achieved by bringing together things like grounded language learning, where we’re jointly trying to understand a model of the world and how high-level concepts are related to each other. This is a kind of joint distribution,” said Bengio. “I believe that human conscious processing is exploiting assumptions about how the world might change, which can be conveniently implemented as a high-level representation. Those changes can be explained by interventions, or … the explanation for what is changing — what we can see for ourselves because we come up with a sentence that explains the change.” Another missing piece in the human-level intelligence puzzle is background knowledge. As LeCun explained, most humans can learn to drive a car in 30 hours because they’ve intuited a physical model about how the car behaves. By contrast, the reinforcement learning models deployed on today’s autonomous cars started from zero — they had to make thousands of mistakes before figuring out which decisions weren’t harmful.
While we were expecting something along the lines of a series of neurons misfiring over a theremin, overall, the songs are fairly impressive. At a low volume, these jams could pass in most environments without raising any eyebrows; however, once you take a more discerning listen, or even a slight gander at the lyrics, the wheels start to fall off a bit. To assist, the lyrics in the released songs "have been co-written by a language model and OpenAI researchers." The lyrics for the most part pass muster, aside from maybe a line or two in the Sinatra nod. This song, in particular, opens with: "It's Christmas time, and you know what that means, Ohhh, it's hot tub time!" The overall quality and clarity of the "rudimentary singing" varies wildly from track to track. As noted in an OpenAI release, "singing voices generated by those models, while often sung in a compelling melody, are mostly composed of babbling, rarely producing recognizable English words." The Sinatra track sounds more or less like ol' Blue Eyes. The country ode to Alan Jackson passes and, in all honesty, could even slide inconspicuously into the middle of a few classic saloon hits.
First and foremost, firms have been putting technical design ahead of economic design. They prioritize hiring technical teams and developing code, and then delay important discussions about the value that the product delivers and users’ incentives to adopt it. By the time they address incentive design, teams have boxed themselves into a narrow set of economic design options that are compatible with the existing code, or face deleting and rewriting huge chunks of the platform. Firms want to make a return on their investments, and these questions reflect that desire. However, they betray a fundamental misunderstanding of the economics of blockchain networks and the path to creating long-term monetization. Like social networks, blockchain consortia derive much of their value from network effects: the value of the network to each participant increases with each additional participant. Many teams are familiar with this concept, which was popularized by Google Chief Economist Hal Varian and UC Berkeley Professor Carl Shapiro in the late 1990s.
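One common (and admittedly simplified) way to formalize the network-effect claim above is Metcalfe's law, which is not named in the passage but models total network value as proportional to the number of possible pairwise connections, n(n-1)/2. The per-connection unit value below is a hypothetical parameter for illustration only:

```python
# Metcalfe's law sketch: total network value grows with the number of
# possible pairwise connections, n * (n - 1) / 2, so each additional
# participant adds more value than the one before.
def network_value(participants: int, value_per_connection: float = 1.0) -> float:
    connections = participants * (participants - 1) / 2
    return connections * value_per_connection

print(network_value(10))  # 45.0
print(network_value(11))  # 55.0 -> the 11th participant adds 10 connections
print(network_value(12))  # 66.0 -> the 12th adds 11
```

The increasing marginal value (10, then 11 new connections) is the mechanism the passage describes: early participants join a nearly empty network, so economic design, not just code, determines whether anyone has an incentive to be first.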
More than just data analytics, more than just big data insight, more than just the ability to handle new streams of raw unstructured data, and more than just knowing how to drive a database while blindfolded, data scientists have to understand business and be flexible super-performers. So what core attributes make a good data scientist? “The work of data scientists is, by definition, experimental. They need to be allowed to experiment and the outcomes may or may not be successful, but do enough experiments in the right areas... and you will find the value,” said Asplen-Taylor. “Considering problem solving experimentation further, data scientists need to follow, not to lead, i.e. they need to be given a problem to fix, which means they need business analysts to define the problem… and, after their experimentation phase, they need someone to test the outcome of their projects, validate the results (so they are not marking their own homework) and they need IT people who will put their models into a production environment…”
Quote for the day:
"Don't measure yourself by what you have accomplished. But by what you should have accomplished with your ability." -- John Wooden