Police departments use predictive algorithms to strategize about where to send their ranks. Law enforcement agencies use face recognition systems to help identify suspects. These practices have garnered well-deserved scrutiny for whether they in fact improve safety or simply perpetuate existing inequities. Researchers and civil rights advocates, for example, have repeatedly demonstrated that face recognition systems can fail spectacularly, particularly for dark-skinned individuals—even mistaking members of Congress for convicted criminals. But the most controversial tool by far comes after police have made an arrest. Say hello to criminal risk assessment algorithms. Risk assessment tools are designed to do one thing: take in the details of a defendant’s profile and spit out a recidivism score—a single number estimating the likelihood that he or she will reoffend. A judge then factors that score into a myriad of decisions that can determine what type of rehabilitation services particular defendants should receive, whether they should be held in jail before trial, and how severe their sentences should be.
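To make concrete what "take in a profile and spit out a score" means, here is a minimal sketch of how such a tool might work under the hood. Everything here is invented for illustration — the features, weights, and 1–10 scale are hypothetical and do not reflect any real risk assessment product:

```python
import math

# Invented weights for a toy logistic model -- purely illustrative,
# not taken from any actual risk assessment tool.
WEIGHTS = {
    "age": -0.04,          # older defendants score lower in this toy model
    "prior_arrests": 0.35, # each prior arrest pushes the score up
    "employed": -0.50,     # employment pushes the score down
}
BIAS = 0.2

def recidivism_score(profile: dict) -> int:
    """Map a defendant's profile to a single 1-10 risk score."""
    z = BIAS + sum(WEIGHTS[k] * profile[k] for k in WEIGHTS)
    probability = 1 / (1 + math.exp(-z))            # logistic link: 0..1
    return max(1, min(10, round(probability * 10))) # bucket into 1..10

print(recidivism_score({"age": 23, "prior_arrests": 4, "employed": 0}))  # → 7
```

The controversy stems from exactly this reduction: a handful of weighted inputs collapses a person into one number, and any bias embedded in the training data or the chosen features flows straight into the score a judge sees.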
Almost 90 percent of the business leaders surveyed as part of the study believed that cognitive diversity in the workplace is extremely important for running a successful organization. Managers in the contemporary workplace want employees to think differently and experiment beyond their habitual approaches to problem solving. While such cognitive diversity was difficult to expect in the past, the role AI can play in the workforce means that organizations can expect greater rewards in the future. AI mechanisms will help augment human efforts in the workplace and stimulate cognitive diversification that benefits the organization. The study also revealed that 75 percent of respondents expected AI to create new roles for employees. This is a clear indication that AI is not going to replace human jobs; instead, it will increase efficiency, shift humans’ roles, and even create new positions that provide meaningful work better suited to humans’ strengths.
To put it bluntly, it appears the insurance sector has not been able to keep up with cyber threats. As new threats pop up in cyberspace, new policies typically lag behind in a confused state. Insurers are also challenged by a lack of visibility into their clients’ cyber health. Visibility matters enormously to insurers: in health insurance, for example, establishing whether an applicant smokes or has hereditary diseases running in their family is vital to setting the premium. The visibility issue isn’t just one affecting insurers. Many firms don’t have the tools to adequately assess and respond to the rising levels of cyber risk they’re exposed to. A recent report from the insurer Hiscox claimed that nearly three-quarters (73%) of global firms are “cyber-novices” when it comes to the quality and execution of their security strategy. If it’s the case (and it is) that cyber insurance policies are confusing and have room for improvement, the best thing a company can do is first understand the cyber risks it faces, and then secure a bespoke policy to meet its needs.
Unfortunately, this is par for the course for private-sector businesses and NGOs. Sometimes the breach is to obtain a critical piece of political or military information to be used later. Sometimes it's to steal intellectual property or research so that the hacking nation can gain a competitive boost in economic and/or military might. Sometimes it's to cull personal information about someone with the right security clearance — which may mean orchestrating a super-breach, compromising several million other accounts along the way. Notably, these breaches aren't about anything so pedestrian as identity theft or credit card fraud. Instead, the goal is to use the information gleaned as a jumping-off point, allowing escalated access to yet more critical information. This is especially the case with healthcare organizations, where the right juicy health-record tidbit about a well-placed employee (or family member thereof) of a government arm can be used to extort some small amount of extra information or escalated access, turning that employee into an inside-attack threat.
König and the AI research team showed that quantum outperforms classical computing and that quantum effects can “enhance information-processing capabilities and speed up the solution of certain computational problems.” In their research, the team demonstrated that parallel quantum algorithms running in constant time outperform classical computers. The scientists showed that quantum computers required only a fixed number of steps to solve problems and were better at “solving certain linear algebra problems associated with binary quadratic forms.” Forward-thinking organizations recognize the synergistic boost that the combination of quantum computing and artificial intelligence may herald. Microsoft CEO Satya Nadella stated in a WSJ Magazine interview, “What’s the next breakthrough that will allow us to keep up this exponential growth in computing power and to solve problems—whether it’s about climate or food production or drug discovery?”
This isn't easy. Interoperability and standards simply don't exist in IoT or Edge Computing, which makes life miserable for anyone working in these areas. LF Edge's founders hope that this pain will bring vendors, OEMs, and developers together to create true open standards. For the broader IoT industry to succeed, the fragmented edge technology players must work together to advance a common, constructive vision. Arpit Joshipura, the Linux Foundation general manager for Edge and IoT, said, "In order for the broader IoT to succeed, the currently fragmented edge market needs to be able to work together to identify and protect against problematic security vulnerabilities and advance a common, constructive vision for the future of the industry." LF Edge is realizing this vision with five projects. These support emerging Edge applications in non-traditional video and connected things that require lower latency (up to 20 milliseconds), faster processing, and mobility.
A global organisation that produces medical devices for the healthcare market used IoT technology to monitor and record the usage of every individual device for product development and preventative maintenance. Despite the relatively benign purpose, because of the nature of these medical devices and the broad approach to data collection, the usage data that the developers were collecting was inherently sensitive. Healthcare data is classified as “special category” data under the GDPR and similar regulations, which brings additional prohibitions on its use and heightened penalties for its mishandling. More concerning was that neither the patients, the healthcare professionals, nor the business itself was aware of the collection and use of the data. No framework was in place to govern its collection, use or storage. No processes were documented. Furthermore, the business had not yet appointed a data protection officer. Once the legal teams began their GDPR preparations, they quickly discovered this data use.
In the banking industry’s quest towards open banking, standardisation has become the name of the game for achieving global applicability. There is a consistent push and pull over whether these standards should come from regulators or industry players. On one hand, regulators can future-proof standards, designing them around principles that ensure safety in the ecosystem. On the other hand, industry players may be better suited to producing standards or platforms that encourage innovation and growth, as they are often instrumental in making it happen. Many of the standards listed below apply only to one region or another, but as Australia’s adoption of the EU’s interchange fee regulation shows, there is something to be said about the ripple effect of regulations, particularly when regulators attempt to implement what works in other countries. The following is a list of initiatives, regulations and standards listed in the World Payments Report 2018 by Capgemini and BNP Paribas.
Employees who are considered “essential” are still on the job, but the loss of supporting staff could prove costly, in both the short and long term. More immediately, the shutdown places a greater burden on the employees deemed essential enough to stick around. These employees are tasked with both longer hours and expanded responsibilities, leading to a higher risk of critical oversight and mission failure, as weary agents find themselves increasingly stretched beyond their capabilities. The long-term effects, however, are frankly far more alarming. There’s a serious possibility our brightest minds in cybersecurity will consider moving to the private sector following a shutdown of this magnitude. Even ignoring that the private sector pays better, furloughed staff are likely to reconsider just how valued they are in their current roles. After the 2013 shutdown, a significant segment of the intelligence community left their posts for the relative stability of corporate America. The current shutdown bears those risks as well. A loss of critical personnel could result in institutional failure far beyond the present shutdown, leading to cascading security deterioration.
Most data was of a similar breed in the past. By and large, it was structured and easy to collate. Not so today. Now, some data lives in on-premises databases while other data resides in cloud applications. A given enterprise might collect data that is structured, unstructured, and semi-structured. The variety keeps widening. According to one survey, enterprises use around 1,180 cloud services, many of which produce unique data. In another example, we integrated over 400 applications for a major enterprise IT firm. Integrating all this wildly disparate data is, on its own, too great a task for legacy systems. Within a legacy data architecture, you often have to hand-code your data pipelines, which then need repairing as soon as an API changes. You might also have to oversee an amalgam of integration solutions, ranging from limited point-to-point tools to bulky platforms that must be nurtured through scripting. These traditional approaches are slow, fraught with complexity, and ill-matched to the growing variety of data today.
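The fragility of hand-coded pipelines is easy to demonstrate. In this hypothetical sketch (the API, field names, and payloads are all invented), a pipeline hard-coded against one response shape raises an error the moment the upstream vendor restructures its JSON, forcing another round of manual repair:

```python
import json

# Yesterday a (fictional) CRM API returned this shape...
old_payload = json.loads('{"customer": {"name": "Acme", "plan": "pro"}}')

# ...and the pipeline was hard-coded against it:
def extract_plan_v1(payload: dict) -> str:
    # Breaks with KeyError the moment the API nests fields differently.
    return payload["customer"]["plan"]

# Today the vendor moves "plan" under a new "subscription" object,
# so extract_plan_v1 now raises KeyError on live data.
new_payload = json.loads(
    '{"customer": {"name": "Acme", "subscription": {"plan": "pro"}}}'
)

# The inevitable hand-coded patch: tolerate both shapes explicitly.
def extract_plan_v2(payload: dict) -> str:
    customer = payload.get("customer", {})
    return customer.get("plan") or customer.get("subscription", {}).get("plan", "unknown")

print(extract_plan_v2(old_payload))  # prints "pro"
print(extract_plan_v2(new_payload))  # prints "pro"
```

Multiply this one patch across 400-plus applications, each with its own drifting schema, and the maintenance burden on a legacy architecture becomes clear.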
Quote for the day:
"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer