“There’s a couple of aspects to it: the ability to coordinate approval and review from the same location, and the ability to organize and connect our policy to the rest of our risk and compliance data points,” said Nicholas Melas, senior global policy manager at Revolut. According to Evgeny Likhoded, CEO and founder of ClauseMatch, the platform allows for real-time content collaboration and workflow management, and lets users map content across the platform. The tool also uses natural language processing and machine learning to suggest relevant content. ClauseMatch allows Revolut to coordinate input, approvals and workflow in a single location, without having to send users links or separate passwords, and lets the company automate the policy-approval process. This offers two benefits: consistency of approach and the ability to change policies with minimal legwork. While ClauseMatch streamlines policy approvals and changes, it doesn’t eliminate humans entirely, emphasized Melas. Instead, it takes over menial tasks, freeing staff members to focus on more complex roles, including oversight and verification.
The backdoors were inserted for law enforcement use into carrier equipment like base stations, antennas and switching gear, the Journal said, with US officials reportedly alleging they were designed to be accessible by Huawei. "We have evidence that Huawei has the capability secretly to access sensitive and personal information in systems it maintains and sells around the world," Robert O'Brien, national security adviser, reportedly said. O'Brien also called less-expensive Chinese solutions "too tempting of a gift to turn down" for some countries, according to CNN, but said they come "with a price" of the Chinese company having access to information on the network. Huawei denied the reports, saying it's the US government that's been "covertly accessing telecom networks worldwide, spying on other countries." "US allegations of Huawei using lawful interception are nothing but a smokescreen," Huawei said in an emailed statement Wednesday. "Huawei has never and will never covertly access telecom networks, nor do we have the capability to do so."
Cyber security and data security risks have climbed up UK plcs’ boardroom agenda to become a top-five issue following recent high-profile cyber attacks, such as the ransomware attack on Travelex. This shift is largely because the business consequences of such an event can be catastrophic — loss of revenue and major disruption, plus steep fines under GDPR, damage to reputation and a hit to the share price. Depending on the severity of the breach, jobs — including those of the CEO and CISO — could be put at risk. The C-suite must live up to its responsibility for protecting the business by taking whatever action is necessary to prevent it from suffering an attack. But what form should this action take? The C-suite needs to ensure the right cyber security policies and procedures are in place, as well as a response plan should the worst happen.
One of the biggest challenges in implementing AI today is its inflexibility and lack of adaptability. Traditional deep neural networks (DNNs), the dominant paradigm in today’s AI, are fixed models that must be trained before deployment: the algorithms can be trained on huge amounts of data, when available, and can be fairly robust if all of that data is captured beforehand. But unfortunately, this is not how the world works. We humans are so adaptable because our brains have figured out that lifelong learning (learning every day) is key, and we can’t rely solely on the data we are born with. That’s why we do not stop learning after our first birthday: We continuously adapt to the changing environments and scenarios we encounter throughout our lives and learn from them. As humans, we do not discard data; we use it constantly to fine-tune our own AI. In that sense, humans are a prime example of edge learning-enabled machines. If human brains acted the same way as a DNN, our knowledge would be restricted to our college years: We would go about our 9-to-5s and daily routines only to wake up the next morning without having learned anything new.
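The contrast between a train-once model and a lifelong learner can be sketched in a few lines. This is an illustrative toy (not from the article), using scikit-learn's `SGDClassifier`, whose `partial_fit` method supports the kind of incremental updating the passage describes; the drifting-Gaussian data is made up for demonstration.

```python
# Toy contrast between a "fixed" model trained once before deployment and a
# model that keeps learning as the environment drifts (lifelong learning).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def batch(shift):
    # Two Gaussian classes whose means drift over time ("changing environment").
    X0 = rng.normal(loc=-1 + shift, scale=0.5, size=(200, 2))
    X1 = rng.normal(loc=1 + shift, scale=0.5, size=(200, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * 200 + [1] * 200)
    return X, y

# Fixed model: trained once on the initial distribution, then frozen.
X_init, y_init = batch(shift=0.0)
fixed = SGDClassifier(random_state=0).fit(X_init, y_init)

# Lifelong learner: keeps calling partial_fit as new data arrives.
online = SGDClassifier(random_state=0)
online.partial_fit(X_init, y_init, classes=[0, 1])
for step in range(1, 6):
    X_new, y_new = batch(shift=0.8 * step)  # the world keeps moving
    online.partial_fit(X_new, y_new)

# Evaluate both on the latest (shifted) distribution: the frozen model's old
# decision boundary no longer separates the drifted classes.
X_test, y_test = batch(shift=4.0)
print("fixed model accuracy :", fixed.score(X_test, y_test))
print("online model accuracy:", online.score(X_test, y_test))
```

The frozen model's accuracy collapses toward chance once the data drifts past its training distribution, while the incrementally updated model tracks the change — the gap the passage attributes to lifelong learning.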
Ericsson’s eSIM solution comprises the Ericsson Secure Entitlement Server (SES) and the Ericsson eSIM Manager (a GSMA-certified SM-DP+), which together handle the onboarding of eSIM consumer devices. The solution provides a fully automated, end-to-end device and subscription orchestration procedure covering device detection, user authorization for onboarding the eSIM device, creation of the user and subscription profile, provisioning of both the eSIM device and network elements, and updating of the Service Provider’s back-office systems as relevant. It simplifies the experience for end users while, at the same time, saving the Service Provider the operational expense of managing eSIM devices over their life cycle. Ericsson’s eSIM solution gives the Service Provider the opportunity to launch many attractive services for a wide range of eSIM devices, and users can instantly enable new services on a new eSIM device with minimal effort. It removes the need to pre-provision eSIM profiles, create batch processes or use middleware solutions.
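The orchestration procedure above can be sketched as a sequence of steps. This is a purely hypothetical illustration of the flow — the class, step names and EID value are invented for this sketch and are not Ericsson's actual SES/eSIM Manager API.

```python
# Hypothetical sketch of the end-to-end eSIM onboarding flow described in the
# text. Names are illustrative only, not a real vendor API.
from dataclasses import dataclass, field

@dataclass
class EsimOnboarding:
    eid: str   # eSIM identifier reported by the device (made-up value below)
    user: str
    log: list = field(default_factory=list)

    def run(self):
        # Each step mirrors one stage of the orchestration procedure.
        self.log.append(f"device detected: eSIM {self.eid}")
        self.log.append(f"user {self.user} authorized onboarding")
        self.log.append("user and subscription profile created (SM-DP+)")
        self.log.append("device and network elements provisioned")
        self.log.append("service provider back office updated")
        return self.log

steps = EsimOnboarding(eid="EXAMPLE-EID", user="alice").run()
print("\n".join(steps))
```

The point of automating this chain end to end, as the article notes, is that no batch process or middleware sits between the steps — each stage triggers the next.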
“There seem to be countless stories of ways that bias in AI is manifesting itself, and there are many thought pieces out there on what contributes to this bias,” says Fay Payton, a professor of information systems/technology and University Faculty Scholar at NC State. “Our goal here was to put forward guidelines that can be used to develop workable solutions to algorithmic bias against women, African American and Latinx professionals in the IT workforce.” “Too many existing hiring algorithms incorporate de facto identity markers that exclude qualified candidates because of their gender, race, ethnicity, age and so on,” says Payton, who is co-lead author of a paper on the work. “We are simply looking for equity – that job candidates be able to participate in the hiring process on an equal footing.” Payton and her collaborators argue that an approach called feminist design thinking could serve as a valuable framework for developing software that reduces algorithmic bias in a meaningful way. In this context, applying feminist design thinking would mean incorporating the idea of equity into the design of the algorithm itself.
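One concrete, auditable check for the kind of hiring-algorithm bias described above is comparing selection rates across groups — for example, the "four-fifths rule" long used in US employment analysis. The sketch below is illustrative and not from Payton's paper; the data is made up.

```python
# Compare per-group selection rates and apply the four-fifths rule:
# flag disparate impact if any group's selection rate falls below 80%
# of the highest group's rate. All data here is fabricated for illustration.
def selection_rates(decisions):
    # decisions: iterable of (group, selected) pairs
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if picked else 0)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_ok(rates):
    top = max(rates.values())
    return all(rate >= 0.8 * top for rate in rates.values())

decisions = ([("A", True)] * 50 + [("A", False)] * 50
             + [("B", True)] * 20 + [("B", False)] * 80)
rates = selection_rates(decisions)
print(rates)                  # {'A': 0.5, 'B': 0.2}
print(four_fifths_ok(rates))  # False: group B is selected at 40% of A's rate
```

A check like this measures outcomes rather than intentions — which is why removing explicit identity fields is not enough when, as Payton notes, de facto identity markers remain in the data.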
Security experts predict that 2020 could be the year hackers really begin to unleash attacks that leverage AI and machine learning. “The bad [actors] are really, really smart,” said Burg of EY Americas. “And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad. And this is one of the reasons why I think this space is going to get increasingly dangerous. Incredibly powerful tools are being used to basically do the inverse of what the defenders [are] trying to do on the offensive side.” In an experiment back in 2016, cybersecurity company ZeroFox created an AI algorithm called SNAPR that was capable of posting 6.75 spear phishing tweets per minute that reached 800 people. Of those, 275 recipients clicked on the malicious link in the tweet. These results far outstripped the performance of a human, who could generate only 1.075 tweets per minute, reaching only 125 people and convincing just 49 individuals to click.
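Working through the figures above shows that the AI's edge was throughput rather than per-tweet persuasiveness:

```python
# Click-through rates from the ZeroFox experiment figures quoted above.
bot_rate = 275 / 800      # SNAPR: 275 clicks out of 800 recipients
human_rate = 49 / 125     # human: 49 clicks out of 125 recipients
print(f"bot click rate:   {bot_rate:.1%}")    # 34.4%
print(f"human click rate: {human_rate:.1%}")  # 39.2%
```

The human's messages actually converted slightly better per recipient; the algorithm won by reaching more than six times as many people in the same time.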
“One of the main abilities of Emotet is that it stays topical, and we will see campaigns similar to those leveraging fear of the coronavirus throughout the year. As the US enters tax season, for example, Emotet is gearing up to offer the public help to file the forms on their behalf. “The email messages will not be sophisticated and can contain a link to download infected files or will have an attachment of a fake W9 form. We can anticipate that malware campaigns related to tax season will continue towards the filing date in April.” The best way for users to protect themselves against threats exploiting the coronavirus is to trust only official government or health service guidance, or legitimate news services. In IT terms, the standard guidance applies: use antivirus programs with automatic updates, download and apply patches and software updates, and do not open suspicious or unsolicited emails.
The hyperscale cloud providers (AWS, Azure, GCP) are able to offer a lower total cost of ownership while delivering superior features, from scalability to security. It doesn’t make financial sense to build everything in-house when you can get it off the shelf for only the time you need it. The cloud vendors are constantly innovating, with solutions such as serverless functions that cost money only while they are handling requests, as opposed to servers that bill for the whole time they are up and waiting for requests. They are also able to attract talent specialized in areas such as scalability and security in ways that would be impossible for most other vendors on their own. ... Most machine learning experimentation starts with understanding your data on your laptop and doesn’t require much computation power. But very quickly you will need more than your local CPU can provide. The cloud is by far the more scalable place to do machine learning: you get access to the latest GPUs, or even TPUs, that you wouldn’t be able to afford and maintain on your own.
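The pay-per-use point is easy to see with back-of-the-envelope arithmetic. The prices below are invented round numbers for illustration, not any vendor's actual rates, and the sketch ignores per-duration and memory charges that real serverless billing adds.

```python
# Rough cost comparison (hypothetical prices): an always-on server bills for
# every hour it is up, while a serverless function bills per request served.
hours_per_month = 730
always_on_hourly = 0.10          # assumed $/hour for a small always-on VM
per_invocation = 0.0000002       # assumed $/request for a serverless function
requests_per_month = 1_000_000

always_on_cost = hours_per_month * always_on_hourly
serverless_cost = requests_per_month * per_invocation
print(f"always-on:  ${always_on_cost:.2f}/month")   # $73.00/month
print(f"serverless: ${serverless_cost:.2f}/month")  # $0.20/month
```

At low or bursty traffic the idle hours dominate the always-on bill, which is the gap the pay-per-use model eliminates; at sustained high traffic the comparison can flip.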
By taking a look under the hood, they've found that many apps are sending data that goes beyond what people agree to under privacy policies and permissions requests. "In the end, you're left with a policy that's essentially meaningless because it doesn't describe what's accurately happening," said Serge Egelman, director of usable security and privacy research at the International Computer Science Institute. "The only way to answer that question is going in and seeing what the app is doing with that data." Sometimes, the data is just headed to advertisers, who think they can use it to sell you products. Phone location data can be a gold mine for advertisers, who tap it to figure out where people are at certain times. But it may also be going to government agencies that leverage the technology to surveil people using data collected by apps that never disclosed what they were doing. Recently, The Wall Street Journal reported that government agencies were using such data to track immigrants. These researchers are shining a light on a hidden world of data tracking, and raising concerns about how much information people are giving away without knowing it.
Quote for the day:
"The most important quality in a leader is that of being acknowledged as such." -- Andre Maurois