Testing machine learning interpretability techniques
Originally, researchers proposed evaluating machine learning explanations by their capacity to help humans identify modeling errors, discover new facts, reduce discrimination in model predictions, or correctly predict a model's output from its input values. Human confirmation is probably the highest bar for machine learning interpretability, but recent research has highlighted concerns about pre-existing expectations, preferences for simplicity, and other biases in human evaluation. Given that specialized human evaluation studies are likely impractical for most commercial data science or machine learning groups anyway, several automated approaches for testing model explanations have been proposed, here and elsewhere: we can use simulated data with known characteristics to test explanations; we can compare new explanations to older, trusted explanations for the same data set; and we can test explanations for stability under small perturbations of the input data.
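The first and third of these checks are easy to automate together. Below is a minimal sketch, assuming a linear model fit to simulated data with known coefficients; the `explain` function and the tolerance values are illustrative choices for this toy setup, not part of any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with known characteristics: y depends only on x0 and x1.
true_coefs = np.array([3.0, -2.0, 0.0, 0.0])
X = rng.normal(size=(500, 4))
y = X @ true_coefs + rng.normal(scale=0.1, size=500)

# Fit a linear model by least squares (stand-in for any model + explainer).
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

def explain(x, coefs):
    """Local attribution for a linear model: each feature's contribution."""
    return coefs * x

# Check 1: explanations recover the known structure of the simulated data.
# Irrelevant features (x2, x3) should receive near-zero attribution.
x = X[0]
attr = explain(x, coefs)

# Check 2: stability -- a tiny perturbation of the input should not
# change the explanation much.
x_pert = x + rng.normal(scale=1e-3, size=4)
attr_pert = explain(x_pert, coefs)
drift = np.max(np.abs(attr - attr_pert))
```

The same harness extends to any explainer: swap `explain` for a different attribution method while keeping the simulated-data and perturbation checks unchanged.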
As the leader “learns” more about the team and its members, through observation and coaching results, he or she will start to gain (or perhaps lose) confidence in their progress. Depending on the magnitude of that progress, the leader can dose the level of enablement of the team and/or the individuals. Good progress means more enablement, until the team becomes what we call Autonomous; the best of the breed in the industry, acknowledged by many since the time of Hirotaka Takeuchi and Ikujiro Nonaka, the authors of “The New New Product Development Game”. But what if progress is slow, or below the acceptable norm? Practically, this is the tough part of the story. The answer will certainly depend on a deeper look into the reasons, as well as on the organization's ability to practice patience in developing its people. But in all cases, there is a threshold for those who have neither the guts nor the desire to improve.
For a strategy to be sound, it should be preceded by a warts-and-all look at the effectiveness and maturity of the as-is position and a clear line of sight to where it needs to get to. This requires a deep understanding of the business within which security operates, alongside measuring the effects of the myriad security jigsaw pieces across the organisation. This almost never happens. If it did, security teams would recognise that investment needs to be made primarily and almost solely in fixing the crap that is already there. How can this be? Well, let’s go through some of the jigsaw pieces that just about every organisation will have in its security picture. Policy – we all have policy. If you work in government, you will have more policy than you can shake a stick at, and in other organisations or industries, hopefully less so. However, almost every policy is the equivalent of the Ten Commandments: thou shalt not commit adultery; thou shalt not share thy password.
'It's going to create a revolution': how AI is transforming the NHS
Computer engineers are fond of asserting that data is the fuel of AI. It is true: some modern approaches to AI, notably machine learning, are powerful because they can divine meaningful patterns in the mountains of data we gather. If there is a silver lining to the fact that everyone falls ill at some point, it is that the NHS has piles of data on health problems and diseases that are ripe for AI to exploit. Tony Young, a consultant urological surgeon at Southend University hospital and the national clinical lead for innovation at NHS England, believes AI can make an impact throughout the health service. He points to companies using AI to diagnose skin cancer from pictures of moles; eye disorders from retinal scans; heart disease from echocardiograms. Others are drawing on AI to flag up stroke patients who need urgent care, and to predict which patients on a hospital ward may not survive. “I think it’s going to create a revolution,” he says.
The art of finding a good data scientist
In the heated competition for data science talent, it’s important to fish in the ponds where not everyone else is fishing, so we’ve found ourselves focusing less on the expected targets like those Stanford and MIT computer science types and more on schools that seem to produce graduates with a robust outlook on applying science in daily life. Carnegie Mellon University and the University of California, Berkeley, are among the institutions that have particularly impressed us. In fact, on May 10, Carnegie Mellon announced it would launch the nation’s first Bachelor of Science program in AI this fall. Many U.S. universities offer an AI track within their computer science or engineering programs, but Carnegie Mellon is establishing a distinct undergraduate major, with a practical focus. Meanwhile, the University of California, San Diego, announced it will begin limiting enrollment in the data science major it started in fall 2017 due to overwhelming demand. What a terrific indication of the soaring interest in data science and a much-needed boost for the pipeline of data science expertise.
Nokia to build & test 5G apps in China with Tencent
5G presents an opportunity to revisit Nokia’s role once again, both as a network services provider as well as a developer of services to run on those networks. “This collaboration with Tencent is an important step in showing webscale companies around the globe how they can leverage the end-to-end capabilities of Nokia’s 5G Future X portfolio,” said Marc Rouanne, president of Mobile Networks at Nokia. “Working with them we can deliver a network that will allow them to extend their service offer to deliver myriad applications and services with the high reliability and availability to support ever-growing and changing customer demands.” Tencent already has a huge number of users, and last year it was part of a consortium (with Alibaba, Didi and Baidu) that took a $12 billion stake in mobile operator China Unicom. That partnership will give the company — which has made its fortune in software: messaging apps, games and other services — a stronger place in building services that are more tightly integrated with networks. And this deal with Nokia will extend that kind of work specifically in the area of 5G.
Reskilling facilitates agile IT in the digital era
Reskilling happens organically around agile software development at John Hancock, says Derek Plunkett, who runs application development for the financial services firm's retirement plan services. There, application developers, engineers, quality assurance analysts, cybersecurity talent and other IT staffers work with an array of business workers in small, nimble teams to build various digital products and services, including the company's websites and retirement calculators, says Plunkett. Key to this endeavor is ensuring that IT's culture is aligned around building the best business outcomes for the company's plan participants. "We want to be strategic partners and in order to do that, we need to understand the goals of the business,” Plunkett says, adding that he doesn’t employ a formal rotational program. John Hancock’s IT is moving toward a more engineering-focused, startup culture, which includes pair programming, where two developers code from one keyboard and computer.
UK announces creation of London cybercrime court
The purpose-built court will deal with civil, business, and property cases. Lord Chancellor David Gauke said the deal represents a "message to the world that Britain both prizes business and stands ready to deal with the changing nature of 21st-century crime." "This is a hugely significant step in this project that will give the Square Mile its second iconic courthouse after the Old Bailey," added Catherine McGuinness, Policy Chairman of the City of London Corporation. "I'm particularly pleased that this court will have a focus on the legal issues of the future, such as fraud, economic crime, and cybercrime." According to the Office for National Statistics' latest Crime Survey for England and Wales (CSEW), 4.7 million incidents of criminal fraud and cybercrime were experienced by UK residents in the past year, with bank and credit card fraud forming the majority of cases. Norton suggests that in 2017, £130 billion was stolen from the general public by cybercriminals, of which £4.6 billion in losses were experienced specifically by British consumers.
Data Citizens: Why We All Care About Data Ethics
In the world of data citizenship, these mechanisms are less well defined. Even discovering that bias exists can be challenging, since so many data science outcomes are proprietary knowledge. It may not be obvious to anyone who does not have the resources to conduct a large-scale study that hiring algorithms are unintentionally leading to vicious poverty cycles, or that criminal risk assessment software is consistently poor at assessing risk but great at categorising people by race, or that translation software imposes gendered stereotypes even when translating from a non-gendered language. These are, of course, all examples that have been discovered and investigated publicly, but many others exist unnoticed or unchallenged. In her book “Weapons of Math Destruction,” Cathy O'Neil describes one young man who is consistently rejected from major employers on the basis of a common personality test.
Top six security and risk management trends
New detection technologies, activities and authentication models require vast amounts of data that can quickly overwhelm current on-premises security solutions. This is driving a rapid shift toward cloud-delivered security products. These are more capable of using the data in near real time to provide more-agile and adaptive solutions. “Avoid making outdated investment decisions,” advised Mr. Firstbrook. “Seek out providers that propose cloud-first services, that have solid data management and machine learning (ML) competency, and that can protect your data at least as well as you can.” ... The shift to the cloud creates opportunities to exploit ML to solve multiple security issues, such as adaptive authentication, insider threats, malware and advanced attackers. Gartner predicts that by 2025, ML will be a normal part of security solutions and will offset ever-increasing skills and staffing shortages. But not all ML is of equal value. “Look at how ML can address narrow and well-defined problem sets, such as classifying executable files, and be careful not to be suckered by hype,” said Mr. Firstbrook.
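That last point about narrow, well-defined problems can be made concrete. The toy sketch below, which is in no way Gartner's method, fits a one-feature logistic regression to synthetic byte-entropy values to separate "packed" executables from benign files; the feature distributions, labels, and learning-rate settings are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set for a narrow task: is a file "packed"?
# Feature: byte entropy in bits/byte. These distributions are illustrative.
benign = rng.normal(loc=4.5, scale=0.6, size=200)   # typical text/code files
packed = rng.normal(loc=7.8, scale=0.15, size=200)  # compressed/encrypted payloads
X = np.concatenate([benign, packed])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Logistic regression on one feature, fit by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * X + b)))   # predicted probability of "packed"
    w -= 0.1 * ((p - y) * X).mean()      # gradient of the log loss w.r.t. w
    b -= 0.1 * (p - y).mean()            # gradient of the log loss w.r.t. b

def predict(entropy: float) -> bool:
    """Classify a file as packed from its byte entropy."""
    return 1 / (1 + np.exp(-(w * entropy + b))) > 0.5
```

The problem is narrow (one feature, two classes, a crisp decision), which is exactly the kind of well-defined setting where a simple ML model is verifiable rather than hype.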
Quote for the day:
"Leadership does not always wear the harness of compromise." -- Woodrow Wilson