Daily Tech Digest - February 29, 2020

Why your brain is not a computer

The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it. Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology.

How Machine Learning Can Strengthen Insider Threat Detection

To mitigate insider threats, experts suggest that enterprises develop their own risk algorithms by coupling machine learning capabilities with behavioral analytics to understand discrepancies in employee activities. Companies can use human resources data to help create these new algorithms, said Dawn Cappelli, CISO of Rockwell Automation. "The key is having HR data. You can build your risk models by taking the contextual employee data along with their online activity and create risk algorithms." But the real challenge is refining and contextualizing this data in order to correctly identify potential threats, said Solomon Adote, CISO for the state of Delaware. "Data without context might not tell you the full story," Adote said. "It has to be about identifying what is abnormal about a particular activity." Once the data is contextualized, Adote noted, enterprises then can use this information to create alerts, advise employees about their activities and make them aware that the company is aware of what's happening internally. "That's sometimes all you need to prevent a significant catastrophe," Adote said.
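The approach Cappelli and Adote describe can be reduced to a toy sketch: score how far today's activity deviates from an employee's own baseline, then weight that deviation by HR context. A minimal Python illustration (the weighting scheme, thresholds, and function names are invented for illustration and are not any vendor's actual model):

```python
from statistics import mean, stdev

def risk_score(baseline_activity, today_activity, hr_context):
    """Toy insider-threat score: activity anomaly weighted by HR context.

    baseline_activity: per-day counts of some monitored action
    (e.g. off-hours file accesses) for this employee.
    hr_context: illustrative multiplier, e.g. higher for an employee
    who has recently given notice.
    """
    mu, sigma = mean(baseline_activity), stdev(baseline_activity)
    z = (today_activity - mu) / sigma if sigma else 0.0
    return max(z, 0.0) * hr_context  # only above-baseline activity raises risk

# A quiet baseline week, then a sudden spike in off-hours accesses:
baseline = [2, 3, 2, 4, 3, 2, 3]
print(risk_score(baseline, 20, hr_context=1.5))  # elevated score -> alert
print(risk_score(baseline, 3, hr_context=1.5))   # near baseline -> low score
```

The point of the HR multiplier is exactly the "context" Adote describes: the same spike scores differently for different employees.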

Microsoft's Blazor for building mobile apps gains traction

While the mobile bindings project is still considered experimental, it is encouraging for fans of Blazor that Microsoft appears set to update it regularly and fix bugs. Microsoft has previously demonstrated how Blazor can be used to build all types of apps, including server-based web apps, offline web apps with WebAssembly, progressive web apps, hybrid .NET native apps that render to Electron and work offline, as well as native desktop and mobile apps. Blazor fan Chris Sainty, a UK-based developer, has posted a helpful explanation of Blazor on the Stack Overflow blog. He details what sets Blazor and its mobile bindings apart from other popular JavaScript UI frameworks, such as Angular and ReactJS, and how it leans towards web developers' existing work processes. "By using different renderers Blazor is able to create not only web based UIs, but also native mobile UIs as well," he notes. "This does require components to be authored differently, so components written for web renderers can't be used with native mobile renderers. However, the programming model is the same. Meaning once developers are familiar with it, they can create UIs using any renderer."

AI is helping Microsoft rethink Office for mobile

Microsoft this week launched an Office app that replaces Word, Excel, and PowerPoint on Android and iOS. Merging three apps into one, while adding more features, is quite the achievement. The new Office app is not just for consuming content and maybe a little light editing on the side, but actually creating content on the go. Most interestingly, a lot of these features fundamentally require AI and machine learning to achieve this new mobile productivity paradigm. Microsoft has been adding AI-driven features to its once most profitable product line for years now — we did a recap of just a handful last year. This week's Office launch, however, showed Microsoft embracing AI not merely to augment what you can already do with the productivity suite, but to add new use cases altogether. Most of the new features are not simply traditional desktop features ported to mobile. They are use cases that are better on mobile, or not even possible on desktop. For example, Office lets you take a picture of a document and turn it into a Word file.

3 ways AI is transforming the insurance industry

In the car insurance sector, insurers use telematics to collect real-time driving data from vehicles. As opposed to the past, where they had to rely on basic information about the vehicle and driver to craft their insurance policies, they can now analyze telematics data with machine learning algorithms to create personalized risk profiles for drivers. Many insurers use this data to give discounts to drivers who have safe driving habits and penalize dangerous behavior such as speeding, hard braking, harsh acceleration, and hard cornering. The same data can help reconstruct accident scenes and enable insurers to better understand and assess what happened, which results in much faster claims processing. In the health insurance sector, service providers use machine learning to help patients choose the best health insurance coverage options to fit their needs. Data collected from wearables such as fitness trackers and heart rate monitors helps insurers monitor, track and reward healthy habits such as regular exercise, and encourage preventive care by providing healthy nutrition tips.
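As a rough illustration of how telematics events might feed a personalized premium, here is a toy Python sketch; the event weights and discount thresholds are invented for illustration and bear no relation to any insurer's real pricing model:

```python
def driver_risk(events, miles):
    """Toy usage-based-insurance score: weighted harsh events per 100 miles.

    events: list of labels from the telematics feed. All weights below
    are illustrative placeholders, not actuarial values.
    """
    weights = {"speeding": 3.0, "hard_brake": 2.0,
               "harsh_accel": 1.5, "hard_corner": 1.0}
    penalty = sum(weights[e] for e in events)
    return penalty / miles * 100

def premium_multiplier(risk, base=1.0):
    # Safe drivers earn a discount; risky behaviour raises the premium.
    if risk < 1.0:
        return base * 0.9   # 10% discount (illustrative threshold)
    if risk > 5.0:
        return base * 1.2   # 20% surcharge (illustrative threshold)
    return base

# One hard brake over 500 miles: low risk, discounted premium.
print(premium_multiplier(driver_risk(["hard_brake"], 500)))
```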

Seven cybersecurity and privacy forecasts for 2020

Over the past ten years, personal medical devices such as insulin pumps, heart and glucose monitors, defibrillators and pacemakers have been connected to the internet as part of the Internet of Medical Things (IoMT). At the same time, researchers have identified a growing number of software vulnerabilities and demonstrated the feasibility of attacks on these products. This can lead to targeted attacks on both individuals and entire product classes. In some cases, the health information generated by the devices can also be intercepted. So far, the healthcare industry has struggled to respond to the problem – especially when the official life of the equipment has expired. As with so many IoT devices of this generation, networking was more important than the need for cybersecurity. The complex task of maintaining and repairing equipment is badly organized, inadequate or completely absent. Through the development of software and hardware platforms, vehicles and transport infrastructure are increasingly connected. 

Turner explained, "In our own research we have shown that it is conceivable that the roots of trust pre-installed in all iOS devices can be a very fertile ground for attacking mobile devices in the way that the FTI Consulting report outlined. It is also very convenient that Apple does not allow for third party monitoring of their devices or operating systems, allowing attackers to completely remove any forensic evidence by merely forcing a shutdown of the device, with nearly all evidence destroyed once it has finished rebooting." But you can't stop some cyberattacks from happening. "Unfortunately, in the case of zero-day exploits like the ones that were probably used in the Bezos case, even the best threat defense tools cannot protect users from that class of attacks. We have worked with several organizations to build programs to protect executives from these types of attacks, but they require resources and operational discipline to be effective," he said. Turner said that anyone without a properly maintained mobile device, meaning security updates installed within three weeks of release, is at risk. First and foremost, get rid of WhatsApp on anyone's phone at your company.

The Need for a 'Collective Defense'

Breaching private companies can create doorways into government networks as the two heavily rely on each other, he notes in an interview with Information Security Media Group. For example, Granicus, one of the largest IT service providers for U.S. federal and local government agencies, recently left a massive Elasticsearch database exposed to the internet. Alexander says private sector organizations need to share anonymized information on cybersecurity issues with the government so that further attacks can be prevented. "In cyber, each company works by itself and shares what is important. But you don't get the whole picture so you don't see what's going on," Alexander says. A "collective defense" approach means the entire cybersecurity community would work together, he explains. The Cybersecurity Information Sharing Act of 2015 provides a legal framework for government agencies and private sector organizations to voluntarily share cybersecurity information and other security data, Alexander points out.

4 fundamental microservices security best practices

Defense-in-depth is a strategy in which several layers of security control are introduced in an application. Sensitive services get layers of security cover, so a potential attacker who has exploited one of the microservices in the application may not be able to compromise another microservice or other layers of the application. Rather than depending on a single, seemingly robust security measure, use all the security measures at your disposal to create layers of security that potential attackers will have to break through. For instance, even if you already have a strong network perimeter firewall in place, ensure that you still practice strong token-based identification, keep addresses of sensitive microservices private and maintain a strong monitoring layer that diligently identifies unusual behavior. In a typical microservices-based application, it's ideal that service consumers do not communicate with microservices directly.
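The token-based identification layer mentioned above can be sketched in a few lines of Python. This is a deliberately minimal illustration (the shared-secret scheme and all names are invented); a real deployment would sit behind a gateway and use signed, expiring tokens such as JWTs, with this check forming just one layer among several:

```python
import hmac

# In practice this would come from a vault and be rotated regularly.
SERVICE_TOKEN = "example-shared-secret"

def authorized(headers):
    """One defense layer: verify the bearer token at the service boundary.

    hmac.compare_digest avoids leaking information through timing
    differences during the comparison.
    """
    supplied = headers.get("Authorization", "").removeprefix("Bearer ")
    return hmac.compare_digest(supplied, SERVICE_TOKEN)

def handle_request(headers, payload):
    if not authorized(headers):
        return 401, "unauthorized"  # fail closed; monitoring layer logs this
    return 200, f"processed {payload}"

print(handle_request({"Authorization": "Bearer example-shared-secret"}, "order"))
```

Even if the perimeter firewall is bypassed, an attacker without a valid token still cannot call the service; that is the layering the excerpt describes.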

Three things CISOs need to do differently in 2020

The entire security team needs to be a learning organization to attract talent and keep up with new threats and new defenses, Michaux said. Developing this attitude will let prospective employees know that they are joining a company that is open to innovation and experimentation, not one that is hyper-risk-averse and slow-moving. To reinforce this culture, security leaders should think small, act fast and use the cloud to break things, rebuild, and improve. "Security teams have to realize that it's OK to break things as long as you learn something from it quickly and apply that knowledge productively," said Caleb Queern, a director of KPMG cybersecurity services. CISOs should take an honest look at automation in 2020 as well. Ask what artificial intelligence can handle and what requires human attention. The goal should be to automate at least 50% of the basic controls of the security environment. Finally, security professionals should be able to read and write basic code. This has two benefits: it will earn the respect of DevOps engineers and it will help security pros know when to influence the development process.

Quote for the day:

"Even the demons are encouraged when their chief is 'not lost in loss itself.'" -- John Milton

Daily Tech Digest - February 28, 2020

Google says Microsoft Edge isn't secure

"Google recommends switching to Chrome to use extensions securely," says a pop-up. Oh, so Edge is insecure? That's terrible. Oddly, when I tried the browser, I found it a touch faster and privacy-friendlier than Google's. It didn't seem so insecure. Why would Google be so worried on my behalf? Worse, Techdows reported that Google is also offering more desperate warnings for users of Google Docs, Google News and Google Translate. The essential message: don't pair these with Edge. This verged on terrible mean-spiritedness, I feared. After all, Edge is based on Google's own Chromium platform. Just as I was about to punish Google by using Bing for a day, another piece of troubling information assaulted me. According to PC World, Microsoft is apparently telling those who use Edge and go to the Chrome web store to get an extension: "Extensions installed from sources other than the Microsoft Store are unverified, and may affect browser performance." Can't we rely on anything these days? Naturally, I instantly contacted Google to ask in what way Edge was insecure. Without pausing for breath or to curse at the new space bar issues with my MacBook Air, I asked Microsoft why extensions from the Chrome store might make Edge a little edgy.

Multi-Runtime Microservices Architecture

One of the well-known traditional solutions satisfying an older generation of the above-listed needs is the Enterprise Service Bus and its variants, such as Message Oriented Middleware, lighter integration frameworks, and others. An ESB is a middleware that enables interoperability among heterogeneous environments using a service-oriented architecture. While an ESB would offer you a good feature set, the main challenge with ESBs was the monolithic architecture and tight technological coupling between business logic and platform, which led to technological and organizational centralization. When a service was developed and deployed into such a system, it was deeply coupled with the distributed system framework, which in turn limited the evolution of the service. This often only became apparent later in the life of the software. Here are a few of the issues and limitations in each category of needs that make ESBs less useful in the modern era. In traditional middleware, there is usually a single supported language runtime, which dictates how the software is packaged, what libraries are available, how often they have to be patched, etc.

Intel takes aim at Huawei 5G market presence

Intel on Monday introduced a raft of new processors, and while updates to the Xeon Scalable lineup led the parade, the real news is Intel's efforts to go after the embattled Huawei Technologies in the 5G market. Intel unveiled its first ever 5G integrated chip platform, the Atom P5900, for use in base stations. Navin Shenoy, executive vice president and general manager of the data platforms group at Intel, said the product is designed for 5G's high bandwidth and low latency and combines compute, 100Gb performance and acceleration into a single SoC. "It delivers a performance punch in packet security throughput, and improved packet balancing throughput versus using software alone," Shenoy said in the video accompanying the announcement. Intel claims the dynamic load balancer native to the Atom P5900 chip is 3.7 times more efficient at packet balancing throughput than software alone. Shenoy said Ericsson, Nokia, and ZTE have announced that they will use the Atom P5900 in their base stations. Intel hopes to be the market leader for silicon base station chips by 2021, aiming for 40% of the market and six million 5G base stations by 2024.

Can Deutsche Bank’s PaaS help turn the bank around?

It’s a rapid success story for a highly leveraged and highly regulated international bank – which is in the midst of a turnaround effort and that registered a loss of €5.7 billion ($7.4 billion) last year – and one that even has management considering whether Fabric is good enough to sell to rival banks to eventually turn its technology investments into a revenue stream. A key problem Fabric helped solve was one that confronted the bank’s new leadership when it arrived in 2015: a sizeable virtual machine (VM) estate that was only being utilised at a rate of around eight percent. “The CIOs got together and realised they had a problem to fix because this is just money that’s bleeding out to the organisation,” platform-as-a-service product owner at Deutsche Bank, Emma Williamson, said during a recent Red Hat OpenShift Commons event in London. So the bank set out to drastically modernise its application estate around cloud native technologies like containers and Kubernetes, all with the aim of cutting this waste tied to its legacy platforms and help drive a broader shift towards the cloud.

The 9 Best Free Online Data Science Courses In 2020

You don't have to spend a fortune and study for years to start working with big data, analytics, and artificial intelligence. Demand for "armchair data scientists" – those without formal qualifications in the subject but with the skills and knowledge to analyze data in their everyday work – is predicted to outstrip demand for traditionally qualified data scientists in the coming years. ... Some of these might require payment at the end of the course if you want official certification or accreditation of completing the course, but the learning material is freely available to anyone who wants to level up their data knowledge and skills. ... As it is a Microsoft course, its cloud-based components focus on the company's Azure framework, but the concepts that are taught are equally applicable in organizations that are tied to competing cloud frameworks such as AWS. It assumes a basic understanding of R or Python, the two most frequently used programming languages in data science, so it may be useful to look at one of the courses covering those that are mentioned below, first.

Microsoft Makes Progress on PowerShell Secrets Management Module

The idea behind the module is that it has been difficult for organizations to manage secrets securely, especially when running scripts across heterogeneous cloud environments. Developers writing scripts want them to run across different platforms, but that might involve handling multiple secrets and multiple secret types. The team sees PowerShell serving as a connection point between different systems. Consequently, it built an abstraction layer in PowerShell that can be used to manage secrets, both with local vaults and remote vaults, Smith explained in a November Ignite talk. The module helps manage local and remote secrets in a unified way, Smith added. It might be used to run a script in various environments, where just the vault parameter would need to be changed. Scripts could be shared across an organization, and it wouldn't be necessary to know the local vaults of the various users. Keys could be shared with users in test environments, but deployment keys could be individualized. It would be less necessary to hard-code secrets into scripts. The PowerShell Secret Management Module is being designed to work with various vault extensions.
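The module itself is PowerShell, but the abstraction-layer idea (swap the vault parameter, leave the script body untouched) can be sketched in any language. A hypothetical Python analogy, with every class and method name invented for illustration:

```python
class LocalVault:
    """Developer's machine: secrets held in process, for test runs."""
    def __init__(self):
        self._secrets = {}

    def set_secret(self, name, value):
        self._secrets[name] = value

    def get_secret(self, name):
        return self._secrets[name]

class RemoteVault(LocalVault):
    """Stand-in for a cloud vault client exposing the same interface.
    A real extension would call out to a key-vault service instead of
    reading local state."""
    pass

def deploy(vault):
    # The script body never changes; only the vault argument does,
    # so the same script runs in test and production environments.
    key = vault.get_secret("deploy-key")
    return f"deploying with {key}"

test_vault = LocalVault()
test_vault.set_secret("deploy-key", "test-key-123")
print(deploy(test_vault))  # deploying with test-key-123
```

No secret is hard-coded in `deploy`; which vault backs `get_secret` is decided entirely by the caller, which is the unification the module aims for.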

Facebook sues SDK maker for secretly harvesting user data

According to court documents obtained by ZDNet, the SDK was embedded in shopping, gaming, and utility-type apps, some of which were made available through the official Google Play Store. "After a user installed one of these apps on their device, the malicious SDK enabled OneAudience to collect information about the user from their device and their Facebook, Google, or Twitter accounts, in instances where the user logged into the app using those accounts," the complaint reads. "With respect to Facebook, OneAudience used the malicious SDK – without authorization from Facebook – to access and obtain a user's name, email address, locale (i.e. the country that the user logged in from), time zone, Facebook ID, and, in limited instances, gender," Facebook said. Twitter was the first to expose OneAudience's secret data harvesting practices on November 26 last year. Facebook confirmed on the same day. In a blog post at the time, Twitter also confirmed that besides itself and Facebook, the data harvesting behavior also targeted the users of other companies, such as Apple and Google.

Product Development with Continuous Delivery Indicators

Software product delivery organizations deliver complex software systems on an evermore frequent basis. The main activities involved in the software delivery are Product Management, Development and Operations (by this we really mean activities as opposed to separate siloed departments that we do not recommend). In each of the activities many decisions have to be made fast to advance the delivery. In Product Management, the decisions are about feature prioritization. In Development, it is about the efficiency of the development process. And in Operations, it is about reliability. The decisions can be made based on the experience of the team members. Additionally, the decisions can be made based on data. This should lead to a more objective and transparent decision making process. Especially with the increasing speed of the delivery and the growing number of delivery teams, an organization’s ability to be transparent is an important means for everyone’s continuous alignment without time-consuming synchronization meetings.
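As a concrete example of decisions based on data, a continuous delivery indicator such as lead time per change can be computed directly from commit and deploy timestamps. A simplified Python sketch follows; the pairing rule (each commit matched to the first deploy at or after it) is a deliberate simplification of how real pipelines attribute changes to releases:

```python
from datetime import datetime

def lead_times(commits, deploys):
    """Lead time per change (commit -> next deploy), one common
    continuous-delivery indicator used in development efficiency
    discussions."""
    result = []
    for c in commits:
        after = [d for d in deploys if d >= c]
        if after:
            result.append(min(after) - c)
    return result

commits = [datetime(2020, 2, 24, 9, 0), datetime(2020, 2, 25, 14, 0)]
deploys = [datetime(2020, 2, 24, 17, 0), datetime(2020, 2, 26, 10, 0)]
hours = [lt.total_seconds() / 3600 for lt in lead_times(commits, deploys)]
print(hours)  # [8.0, 20.0]
```

Published automatically, a number like this gives every delivery team the same transparent view without a synchronization meeting.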

Can Machines And Artificial Intelligence Be Creative?

If AI can enhance creativity in visual art, can it do the same for musicians? David Cope has spent the last 30 years working on Experiments in Musical Intelligence, or EMI. Cope is a traditional musician and composer but turned to computers to help get past composer's block back in 1982. Since that time, his algorithms have produced numerous original compositions in a variety of genres as well as created Emily Howell, an AI that can compose music based on her own style rather than just replicate the styles of yesterday's composers. In many cases, AI is a new collaborator for today's popular musicians. Sony's Flow Machines and IBM's Watson are just two of the tools music producers, YouTubers, and other artists are relying on to churn out today's hits. Alex Da Kid, a Grammy-nominated producer, used IBM's Watson to inform his creative process. The AI analyzed the "emotional temperature" of the time by scraping conversations, newspapers, and headlines over a five-year period. Then Alex used the analytics to determine the theme for his next single.

Educating Educators: Microsoft's Tips for Security Awareness Training

The key challenge is creating an engaging, relatable training course that effectively teaches employees the concepts they need to know, Sexsmith said. Sexsmith pointed to a few tricks he uses in his programs. One of these is the "Social Proof Theory," a social and psychological concept that describes how people copy other people's behavior – if your colleagues are doing a training, you'll do it, too. Gamification also helps: "People want to learn; people want to master skills, but there's also a competitive nature around that," he said. Some trainings use videos that make security concepts more accessible. One problem, he said, is lessons that aren't reinforced aren't retained. Humans forget half of new information learned within an hour and 70% of new information within a day. "By lunchtime, you're going to forget 50% of the stuff I'm up here saying," he joked to his morning audience. To fight this, Microsoft uses a training reinforcement platform called Elephants Don't Forget to help employees build muscle memory around new concepts. During the gap between trainings, the program sends participants two daily emails with a link to questions tailored to the course.
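The retention figures Sexsmith quotes echo the Ebbinghaus forgetting curve, often modeled as R = exp(-t/s). A single exponential cannot match both the one-hour and one-day figures at once, so treat the stability parameter in this sketch as purely illustrative; the point is that each reinforcement quiz resets t to zero:

```python
import math

def retention(hours_since_review, stability=1.44):
    """Ebbuinghaus-style forgetting curve, R = exp(-t/s).

    The default stability is an illustrative value chosen so that
    roughly half of new material is lost one hour after training.
    """
    return math.exp(-hours_since_review / stability)

print(round(retention(1), 2))   # ~0.5: half gone within an hour
# A daily quiz email resets the clock; two hours after answering it,
# retention is measured from the quiz, not from the original training:
print(round(retention(2), 2))
```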

Quote for the day:

"Eventually relationships determine the size and the length of leadership." -- John C. Maxwell

Daily Tech Digest - February 27, 2020

Unpatched Security Flaws Open Connected Vacuum to Takeover

Researchers have discovered several high-severity vulnerabilities in a connected vacuum cleaner. The security holes could give remote attackers the capability to launch an array of attacks — from a denial of service (DoS) attack that renders the vacuum unusable, to viewing private home footage through the vacuum’s embedded camera. The Ironpie M6, which is available for $230 on Amazon, comes equipped with a corresponding mobile app and a security camera. The vacuum cleaner is built by artificial intelligence home robot company Trifo, which first launched the IronPie at CES 2019. Researchers on Wednesday said that they uncovered six flaws, stemming from the vacuum’s mobile app and its connectivity protocol, at RSA Conference 2020, this week in San Francisco. “The most severe vulnerability allows attackers to access any video stream from any Trifo device across the world,” Erez Yalon, director of security research with Checkmarx, told Threatpost. “Through this vulnerability, every single user – whether in a home or office setting as shown in our PoC video – is at risk of a hacker obtaining a live video feed. Needless to say, this represents a total loss of privacy.”

The Amazing Ways Goodyear Uses Artificial Intelligence And IoT For Digital Transformation

Regardless if it's an autonomous, electric, or a traditional vehicle, they all need a solid foundation of the right tire for the specific demands of the vehicle. Goodyear uses internet of things technology in its Eagle 360 Urban tire. The tire is 3D printed with super-elastic polymer and embedded with sensors. These sensors send road and tire data back to the artificial intelligence-enhanced control panel that can then change the tread design to respond to current road conditions on the fly and share info about conditions with the broader network. If the tire tread is damaged, the tire moves the material and begins self-repair. Goodyear’s intelligent tires are in use on a new pilot program with Redspher, a European transportation and logistics company operating in 19 countries. The fleet benefits from the tire's ability to monitor and track tire pressure, vehicle data, and road conditions. This data is then analyzed by Goodyear’s algorithms to gain insights about maintenance needs and ways to improve the safety and performance of the fleet.

Google Teaches AI To Play The Game Of Chip Design

One of the promising frontiers of research right now in chip design is using machine learning techniques to actually help with some of the tasks in the design process. We will be discussing this at our upcoming The Next AI Platform event in San Jose on March 10 with Elias Fallon, engineering director at Cadence Design Systems. The use of machine learning in chip design was also one of the topics that Jeff Dean, a senior fellow in the Research Group at Google who has helped invent many of the hyperscaler’s key technologies, talked about in his keynote address at this week’s 2020 International Solid State Circuits Conference in San Francisco. Google, as it turns out, has more than a passing interest in compute engines, being one of the large consumers of CPUs and GPUs in the world and also the designer of TPUs spanning from the edge to the datacenter for doing both machine learning inference and training. So this is not just an academic exercise for the search engine giant and public cloud contender – particularly if it intends to keep advancing its TPU roadmap and if it decides, like rival Amazon Web Services, to start designing its own custom Arm server chips or decides to do custom Arm chips for its phones and other consumer devices.

JFrog touts DevSecOps edge in CI/CD tools

Most CI/CD tools integrate with package managers for similar purposes. But JFrog could differentiate its Pipelines product based on its experience developing the Artifactory artifact repository manager, as well as its messaging. "Everyone is really doing the same thing -- transforming code into software packages and then shipping those packages to production," said Tom Petrocelli, an analyst at Amalgam Insights. "But there are security advantages as a side effect of the way [JFrog thinks]." This relates to the fact that enterprise DevOps shops in the Linux world increasingly use package managers to centralize corporate governance, explained Charles Betz, an analyst at Forrester Research. "There's a heck of a lot of digital management that revolves around artifacts when you don't own the source code, when that code is written by open source communities and vendors," Betz said.

Hidden cost of cloud puts brakes on migration projects

More than half (58%) of the IT decision-makers surveyed believe the cloud over-promised and under-delivered, while 43% admit that the cloud is more costly than they thought. Only 27% of IT decision-makers surveyed claim they have been able to reduce labour and logistical costs by moving to the cloud. Mark Cook, divisional executive officer at Capita, said: “Every migration journey is unique in both its destination and starting point. While some organisations are either ‘born’ digital or can gather the resources to transform in a relatively short space of time, the majority will have a much slower, more complex path. “Many larger organisations will have heritage technology and processes that can’t simply be lifted and converted, but will need some degree of ‘hybrid by design’,” he added. When asked what unforeseen factors had delayed cloud migration projects, 39% cited cost as the main factor, followed by workload and application rearchitecting issues (38%) and security concerns (37%).

IoT Can Put Your Data at Risk, Here’s How

The data processed by IoT devices is potentially extremely sensitive. With office and home security systems increasingly mediated by IoT (doorbells and surveillance cameras being just a couple of examples), criminal attacks can pose a serious problem. The huge volume of data habitually collected by IoT devices was exposed this year when a database owned by the Chinese firm Orvibo, who offer a smart home appliance platform, was found to have no password protection despite containing logs relating to 2 million worldwide users, including individuals and hotel chains. The data included insufficiently-protected user passwords, reset codes, precise locations, and even a recorded conversation. Botnets are another way for cybercriminals to wreak havoc using IoT devices. Botnets consist of, as their name suggests, networks of bots running on Internet-connected devices. They are primarily known for their role in DDoS (Distributed Denial of Service) attacks, in which a stream of network requests is sent to a network that a malicious entity wishes to bring down.

DesignOps — scaling design to create a productive internal environment for IBMers

DesignOps is a collective term for creating a productive workforce, by addressing challenges such as: growing and evolving design teams, finding and hiring people with the right skills, creating efficient workflows and improving the quality and impact of design outputs. It’s a method of optimising people, processes and workflow, and at IBM, the practice has been deployed to increase efficiency, productivity and general well-being across the whole organisation, including the thousands-strong IT team. Satisfying this level of individuals and teams is no easy feat, which is why IBM has a specific department dedicated to creating great experiences for IBMers. Kristin Wisnewski — who is on the advisory board for Information Age’s Women in IT Summit in New York on March 25th 2020 at the Grand Hyatt Hotel — leads the CIO Design team at IBM as vice president, whose purpose is to create a productive internal environment at IBM. “We’re here to create, design and improve the experience of employees in their daily jobs. Our team is made up of 140 people, and so it is a big mission to help the hundreds of thousands of employees here at IBM,” she said.

Cloud misconfigurations are a new risk for the enterprise

Cloud misconfigurations are becoming another risk for corporations. At RSA 2020, Steve Grobman, senior vice president and chief technology officer at McAfee, explained how easy it is to take advantage of cloud misconfigurations, an expensive security problem for corporations. He compared cyber security to infectious disease control: an imperfect science. ... In addition to making sure cloud configurations are secure, security teams have to address tomorrow's security risks today, Grobman said. Advances in quantum computing will be a double-edged sword, with the downside being the threat to existing encryption systems. "Nation-states will use quantum computing to break our public key encryption systems," he said. "Our adversaries are getting the data today and counting on quantum to unlock in tomorrow." Grobman said that companies need to think about how long data will need to be protected. "Even in 2020, there are documents in the National Archives in relation to the Kennedy assassination that still have redacted information due to national security concerns of today," he said.

Data Science Is A Team Sport: Oracle’s New Cloud Platform Provides The Playing Field

Unlike other data science products that focus on helping individual data scientists, Oracle Cloud Infrastructure Data Science helps improve the effectiveness of data science teams with capabilities like shared projects, model catalogs, team security policies, and reproducibility and auditability features. “Data scientists are experimenters. They want to try stuff and see how it works,” says Pavlik. “They grab sample datasets, they pull in all kinds of open source tools, and they're doing great stuff. What we want to do is let them keep doing that, but improve their productivity by automating their entire workflow and adding strong team support for collaboration to help ensure that data science projects deliver real value to businesses.” The starting point for data science to deliver value is doing more with machine learning, and being more efficient with the data and algorithms involved.  “Effective machine learning models are the foundation of successful data science projects,” Pavlik says, but the volume and variety of data facing data science teams “can stall these initiatives before they ever get off the ground.”

Getting closer to no-battery devices

The technique being exploited takes advantage of backscattering. That's a way of parasitically using radio signals inherent in everyday environments. In this case, the chip piggybacks on existing Wi-Fi transmissions to send its data. This method of sending data is power-light, because the carrier needed for the radio transmission is already created—it doesn’t need new energy for the message to be sent. Interestingly, two principal scientists involved in this backscattering project, which was announced by UC San Diego's Jacobs School of Engineering, have also been heavily involved in the development of "wake-up" radios. Wake-up is when a Wi-Fi or other radio comes alive to communicate only when it has something to transmit or receive. The technology uses two radios. One radio is for the wake-up signaling; that radio's only purpose is to listen for a signature. The second is a more heavy-duty radio for the data send. Power is saved because the main radio isn't on all the time. Dinesh Bharadia, now a professor of electrical and computer engineering at UC San Diego, was at Stanford University working on a wake-up radio that I’ve written about.

Quote for the day:

"The greatest good you can do for another is not just share your riches, but reveal to them their own." -- Benjamin Disraeli

Daily Tech Digest - February 25, 2020

5G's impact: Advanced connectivity, but terrifying security concerns

Despite the enthusiasm, professionals are also concerned about some of the negative aspects of 5G, specifically security and cost. The top barriers to adopting 5G in the next three years included security concerns (35%) and upfront investment (31%), the report found. The relationship between 5G and security is complex. Overall, the majority of respondents (68%) do believe 5G will make their businesses more secure. However, security challenges are also inherent to the network infrastructure, according to the report. These concerns involve user privacy (41%), the number of connected devices (37%), service access (34%), and supply chain integrity (29%). On the connected devices front, some 74% of respondents said they are worried that having more connected devices will bring more avenues for data breaches. With that said, the same percentage of respondents understand that adopting 5G means they will need to redefine security policies and procedures. To prepare for both security and cost challenges associated with 5G, the report recommended users seek external help. The partners businesses will most likely work with include software and services companies (44%), cloud companies (43%), and equipment providers (31%). 

What if 5G fails? A preview of life after faith in technology

"If it gets to a point where it's a broad decoupling of the developed from the emerging economies," said Sec. Lew, "that's not good for anyone. The growth of emerging economies would not be very impressive if they didn't have very active, robust trading relationships with developed economies. And the costs in developed economies would go up considerably, which means that the impact on consumers would be quite dramatic." "We know, from the early days when there was CDMA and GSM," remarked Greg Guice, senior vice president at Washington, DC-based professional consultancy McGuireWoods, "that made it very difficult to sell equipment on a global basis. That not only hurt consumers, but it hurt the pace of technology." He continued: I think what the companies that are building the equipment, and seeking to deploy the equipment, are trying to figure out is, in a world where there may be fragmentation, how do we manage this? I don't see people Balkanizing into their own camps; I think everybody is trying to preserve, as best they can, international harmonization of a 5G platform. Those efforts are in earnest.

Greenpeace takes open-source approach to finish web transformation

“The vision is to help people take action on behalf of the planet,” said Laura Hilliger, a concept architect at Greenpeace who is a leading member of the Planet 4 project. “We want to provide a space that helps people understand how our ecological endeavours are successful, and to show that Greenpeace’s work is successful because of people working collectively.” She met Red Hat representatives after work was already underway on the project in May 2018, which culminated in consultants, technical architects and designers from the company coming in to do a “design sprint” with Greenpeace exactly a year later. This helped Red Hat better understand Planet 4 users and how they interact with the platform, as well as the challenges of integration and effectively visualising data. Hilliger said variations in the tech stacks deployed across Greenpeace’s 27 national and regional offices, on top of its 50-plus websites and platforms, had created a complex data landscape that made integrations difficult.

Evolution of the data fabric

Personally, the fabric concept also began to change my thinking when discussing infrastructure design. For too long the focus was on technology, infrastructure and location, which would then be delivered to a business upon which they would place their data. The issue with this was that the infrastructure could then limit how we used our data to solve business challenges. Data fabric changes that focus, building our strategy based on our data and how we need to use it: a focus on information and outcomes, not technology and location. Over time, as our data strategies evolved with more focus on data and outcomes, it became clear that a consistent storage layer, while a crucial part of a modern data platform design, does not in itself deliver all we need. A little while ago I wrote a series of articles about Building a Modern Data Platform, which described how a platform is multi-layered: it requires not just consistent storage but must also be intelligent enough to understand our data as it is written, provide insight, apply security, and do these things immediately across our enterprise.

Legal Tech May Face Explainability Hurdles Under New EU AI Proposals

Horrigan noted the transparency language in the European Commission’s proposal is similar to the transparency principles outlined in the EU’s General Data Protection Regulation (GDPR). While the European Commission is still drafting its AI regulations, legal tech companies have fallen under the scope of the GDPR since mid-2018. Legal tech companies have also fielded questions regarding predictive coding’s accuracy and transparency with technology-assisted review (TAR), Horrigan added. TAR has become increasingly accepted by courts after then-U.S. Magistrate Judge Andrew Peck of the Southern District of New York granted the first approval of TAR in 2012. In Peck’s order, he discussed predictive coding’s transparency that provides clarity regarding AI-powered software’s “black box.” “We’ve addressed the black box before with technology-assisted review and we will do it again with other forms of artificial intelligence. The black box issue can be overcome,” Horrigan said. However, Hudek disagreed. While Hudek said the proposed regulation doesn’t make him hesitant to develop new AI-powered features to his platform, it does make it more challenging.

Thinking About ‘Ethics’ in the Ethics of AI

Ethics by Design is “the technical/algorithmic integration of reasoning capabilities as part of the behavior of [autonomous AI]”. This line of research is also known as ‘machine ethics’. The aspiration of machine ethics is to build artificial moral agents, which are artificial agents with ethical capacities that can thus make ethical decisions without human intervention. Machine ethics thus answers the value alignment problem by building autonomous AI that by itself aligns with human values. To illustrate this perspective with the examples of AVs and hiring algorithms: researchers and developers would strive to create AVs that can reason about the ethically right decision and act accordingly in scenarios of unavoidable harm. Similarly, hiring algorithms are supposed to make non-discriminatory decisions without human intervention. Wendell Wallach and Colin Allen classified three types of approaches to machine ethics in their seminal book Moral Machines.

Cisco goes to the cloud with broad enterprise security service

Cisco describes the new SecureX service as an open, cloud-native system that will let customers detect and remediate threats across Cisco and third-party products from a single interface. IT security teams can then automate and orchestrate security management across enterprise cloud, network, application and endpoint environments. “Until now, security has largely been piecemeal with companies introducing new point products into their environments to address every new threat category that arises,” wrote Gee Rittenhouse, senior vice president and general manager of Cisco’s Security Business Group, in a blog about SecureX. “As a result, security teams that are already stretched thin have found themselves managing massive security infrastructures and pivoting between dozens of products that don’t work together and generate thousands of often conflicting alerts. In the absence of automation and staff, half of all legitimate alerts are not remediated.” Cisco pointed to its own 2020 CISO Benchmark Report, also released this week, as more evidence of the need for better, more tightly integrated security systems.

Evolution of Infrastructure as a Service

Some would say that IaaS, SaaS, and PaaS are part of a family tree. SaaS is the most widely known as-a-service model: cloud vendors host business applications and deliver them to customers online. It enables customers to take advantage of the service without maintaining the infrastructure required to run the software on-premises. In the SaaS model, customers pay for a specific number of licenses and the vendor manages the behind-the-scenes work. The PaaS model is more focused on application developers, providing them with a space to develop, run, and manage applications. PaaS models do not require developers to build additional networks, servers or storage as a starting point for developing their applications. ... IaaS is now enabling more disruption across all markets and industries, as the same capabilities available to larger companies are now also available to the smallest startup in a garage. This includes advances in AI and machine learning (as a service), data analytics, serverless technologies, IoT and much more. It is also pushing large companies to be as agile as startups.

AI Regulation: Has the Time Arrived?

Karen Silverman, a partner at international business law firm Latham & Watkins, noted that regulation risks include stifling beneficial innovation, selecting business winners and losers without any basis, and making it more difficult for startups to succeed. She added that ineffective, erratic, and uneven regulatory efforts or enforcement may also lead to unintended ethics issues. "There's some work [being done] on transparency and disclosure standards, but even that is complicated, and ... to get beyond broad principles, needs to be done on some more industry- or use-case specific basis," she said. "It’s probably easiest to start with regulations that take existing principles and read them onto new technologies, but this will leave the challenge of regulating the novel aspects of the tech, too." On the other hand, a well-designed regulatory scheme that zeroes in on bad actors and doesn't overregulate the technology would likely mark a positive change for AI and its supporters, Perry said.

Functional UI - a Model-Based Approach

User interfaces are reactive systems specified by the relation between the events received by the user interface application and the actions the application must undertake on the interfaced systems. Functional UI is a set of implementation techniques for user interface applications which emphasizes clear boundaries between the effectful and purely functional parts of an application. A user interface's behavior can be modeled by a state machine that, on receiving events, transitions between the different behavior modes of the interface. A state machine model can be visualized intuitively and economically in a way that is appealing to diverse constituencies (product owners, testers, developers), and surfaces design bugs earlier in the development process. Having a model of the user interface makes it possible to auto-generate both the implementation and the tests for the user interface, leading to more resilient and reliable software. Property-based testing and metamorphic testing leverage the auto-generated test sequences to find bugs without having to define the complete and exact response of the user interface to a test sequence. Such testing techniques have found 100+ new bugs in two popular C compilers (GCC and LLVM).
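The state-machine idea can be sketched in a few lines of Python. The states, events, and actions below are hypothetical, but they show the core pattern: a pure transition function maps (state, event) to (next state, action), keeping effects at the boundary and making the behavior easy to visualize and to test with generated event sequences.

```python
# Minimal sketch of a UI modeled as a state machine. States are the
# interface's behavior modes; actions are what the app should perform.
TRANSITIONS = {
    # (state, event) -> (next_state, action)
    ("idle",    "SUBMIT"):  ("loading", "send_request"),
    ("loading", "SUCCESS"): ("done",    "render_results"),
    ("loading", "FAILURE"): ("error",   "render_error"),
    ("error",   "RETRY"):   ("loading", "send_request"),
}

def step(state, event):
    """Pure transition function: no side effects, trivially testable."""
    # Events that are invalid in the current mode are simply ignored.
    return TRANSITIONS.get((state, event), (state, None))

def run(events, state="idle"):
    """Replay an event sequence, collecting the actions it triggers."""
    actions = []
    for event in events:
        state, action = step(state, event)
        if action is not None:
            actions.append(action)
    return state, actions

state, actions = run(["SUBMIT", "FAILURE", "RETRY", "SUCCESS"])
print(state, actions)  # done ['send_request', 'render_error', 'send_request', 'render_results']
```

Because `step` is pure, a property-based testing tool can feed it random event sequences and check invariants (e.g., "every `loading` state was entered via `send_request`") without specifying the full expected output for each sequence.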

Quote for the day:

"There is no 'one' way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer

Daily Tech Digest - February 24, 2020

Why data literacy needs to be part of a company's DNA

"Companies with lower levels of data literacy in the workforce will be at a competitive disadvantage," said Martha Bennett, vice president and principal analyst, Forrester. "It's also important to stress that different roles have different requirements for data literacy; advanced firms also understand that increasing data literacy is not a once-and-done training exercise, it's a continuous process." These days, everyone in an organization needs to be data literate, and the organization must establish a well-rounded data literacy program to ensure effective decision making. The programs must address the capacity to collect, analyze, and disseminate data tailored to the needs of diverse organizational roles. "Lack of data literacy puts you at a disadvantage, and can lead to potentially disastrous outcomes," Bennett said, "and we're not just talking about a business context here, the same applies in our personal lives." Numbers play a role in daily decisions, both in business and in our personal lives. Quantitative information must be evaluated, whether it's predicting an event, considering the increased risk of developing disease, how people lean politically, or how popular a product or service is.

5 reasons to choose PyTorch for deep learning

One of the primary reasons that people choose PyTorch is that the code they look at is fairly simple to understand; the framework is designed and assembled to work with Python instead of often pushing up against it. Your models and layers are simply Python classes, and so is everything else: optimizers, data loaders, loss functions, transformations, and so on. Due to the eager execution mode that PyTorch operates under, rather than the static execution graph of traditional TensorFlow (yes, TensorFlow 2.0 does offer eager execution, but it’s a touch clunky at times) it’s very easy to reason about your custom PyTorch classes, and you can dig into debugging with TensorBoard or standard Python techniques all the way from print() statements to generating flame graphs from stack trace samples. This all adds up to a very friendly welcome to those coming into deep learning from other data science frameworks such as Pandas or Scikit-learn. PyTorch also has the plus of a stable API that has only had one major change from the early releases to version 1.3 (that being the change of Variables to Tensors).
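The point that models and layers are simply Python classes, executed eagerly, can be seen in a minimal sketch (the network shape and sizes here are arbitrary, not from the article):

```python
import torch
from torch import nn

# A PyTorch model is just a Python class; under eager execution each
# line runs immediately, so ordinary debugging techniques apply.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # print(h.shape)  # a plain print() works mid-forward-pass
        return self.fc2(h)

model = TinyNet()
out = model(torch.randn(3, 4))  # runs immediately, no graph-compilation step
print(out.shape)                # torch.Size([3, 2])
```

There is no separate session or static graph to build: calling `model(...)` is an ordinary Python method call, which is what makes stepping through it with a debugger or `print()` straightforward.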

AI: It's time to tame the algorithms and this is how we'll do it

To achieve this objective, the Commission wants to create an "ecosystem of trust" for AI. And it starts with placing a question mark over facial recognition. The organisation said it would consider banning the technology altogether. Commissioners are planning to launch a debate about "which circumstances, if any" could justify the use of facial recognition. The EU's white paper also suggests having different rules, depending on where and how an AI system is used. A high-risk system is one used in a critical sector, like healthcare, transport or policing, and which has a critical use, such as causing legal changes, or deciding on social-security payments. Such high-risk systems, said the Commission, should be subject to stricter rules, to ensure that the application doesn't transgress fundamental rights by delivering biased decisions. In the same way that products and services entering the European market are subject to safety and security checks, argues the Commission, so should AI-powered applications be controlled for bias. The dataset feeding the algorithm could have to go through conformity assessments, for instance. The system could also be required to be entirely retrained in the EU.

Why You Should Revisit Value Discovery

There are at least two reasons for the shift. The first is because we are in a digital world. Now the cost of creating new products can be extraordinarily low (a developer, a laptop). And the cost factor has given rise to new methodologies like Lean Startup and concepts like Fail Fast, Fail Cheap. As enterprises adopt these techniques, they push more projects into corporate innovation pipelines. More on the impact of that later. The second reason relates to software development and delivery methods. It is now possible, often necessary, to chunk software into smaller and smaller units of work and push these into a live test environment with users relatively quickly. Both of these approaches are creating problems. They reinforce the view that more is better. And both also reinforce a challenging proposition: enterprises can be experimental laboratories. Are you starting to get the picture? More ideas of dubious and yet-to-be tested value find their way into your workflow! Perhaps enterprises can convert this negative into a positive but to do so means stitching together a value discovery process with very good value management and delivery.

More And More Organizations Injecting Emotional Intelligence Into Their Systems

A growing number of organizations are injecting emotional intelligence into their systems. These include AI capabilities, such as machine learning and voice and facial recognition, which can better detect and appropriately respond to human emotion, according to Deloitte’s 11th annual Tech Trends 2020 report. The trends also indicate more and more organizations using digital twins, human experience platforms and new approaches to enterprise finance, which can redefine the future of tech innovation. Deloitte’s 11th annual Tech Trends 2020 report captures the intersection of digital technologies, human experiences, and increasingly sophisticated analytics and artificial intelligence technologies in the modern enterprise. The report explores digital twins, the new role technology architects play in business outcomes, and affective computing-driven “human experience platforms” that are redefining the way humans and machines interact. Tech Trends 2020 also shares key insights and prescriptive advice for business and technology leaders so they can better understand what technologies will disrupt their businesses during the next 18 to 24 months.

7 Tips to Improve Your Employees' Mobile Security

"A bit of a trade-off has to happen, as they're managing an aspect of something that is personally owned by the employee, and they're using it for all kinds of things besides work," says Sean Ryan, a Forrester analyst serving security and risk professionals. On nights and weekends, for example, employees are more likely to let their guards down and connect to public Wi-Fi or neglect security updates. Sure, some people are diligent about these things, while some "just don't care," Ryan adds. This attitude can put users at greater risk for phishing, which is a common attack vector for mobile devices, says Terrance Robinson, head of enterprise security solutions at Verizon. Employees are also at risk for data leakage and man-in-the-middle attacks, especially when they hop on public Wi-Fi networks or download apps without first checking requested permissions. Mobile apps are another hot attack vector for smartphones, used in nearly 80% of attacks. A major challenge in strengthening mobile device security is changing users' perception of it. Brian Egenrieder, chief risk officer at SyncDog, says he sees "negativity toward it, as a whole."

Recent ransomware attacks define the malware's new age

Over the past two years, however, ransomware has come back with a vengeance. Mounir Hahad, head of the Juniper Threat Labs at Juniper Networks, sees two big drivers behind this trend. The first has to do with the vagaries of cryptocurrency pricing. Many cryptojackers were using their victims' computers to mine the open source Monero currency; with Monero prices dropping, "at some point the threat actors will realize that mining cryptocurrency was not going to be as rewarding as ransomware," says Hahad. And because the attackers had already compromised their victim's machines with Trojan downloaders, it was simple to launch a ransomware attack when the time was right. "I was honestly hoping that that prospect would be two to three years out," says Hahad, "but it took about a year to 18 months for them to make that U-turn and go back to their original attack." The other trend was that more attacks focused on striking production servers that hold mission-critical data. "If you get a random laptop, an organization may not care as much," says Hahad. "But if you get to the servers that fuel their day-to-day business, that has so much more grabbing power."

To Disrupt or Not to Disrupt?

First, consider the choice of technology. Clayton Christensen long distinguished between disruptive technologies and sustaining technologies (which do not disrupt). Most companies pursue sustaining technologies as a way of retaining existing customers and keeping a healthy profit margin. The reason to choose a technology that is “worse” initially is its potential to outperform older technologies in the relatively near future. Moreover, disruptive technologies tend to be what established companies either are not good at or do not want to adopt for fear of alienating their customer base. In other words, the very existence of disruptive technologies represents an opportunity for startups. Which brings us to the choice of customer for a disruptive entrepreneur. Christensen noted that, if you want to sell a product that underperforms existing products in some dimension (say, a laptop with less computing power), you need to find either a way of selling at a discount so that the lack of performance can be compensated for, or a set of customers who do not value that performance as strongly as some other feature (for example, longer battery life).

New Wi-Fi chip for the IoT devices consumes 5,000 times less energy

A set of ultra-low power Wi-Fi radios integrated in small chips, each measuring 1.5 square millimeters in area
The invention is based on a technique called backscattering. The transmitter does not generate its own signal, but takes the incoming signals from the nearby devices (like a smartphone) or Wi-Fi access point, modifies the signals and encodes its own data onto them, and then reflects the new signals onto a different Wi-Fi channel to another device or access point. This approach requires much less energy and gives electronics manufacturers much more flexibility. With the tiny Wi-Fi chip, the IoT devices will no longer need to charge frequently or need large batteries, but can also allow smart home devices to work completely wirelessly and even without batteries in some cases. The developers note that the new transmitter will significantly increase the operating time on a single charge of various Wi-Fi battery sensors and IoT devices, including, for example, portable video cameras, smart voice speakers, and smoke detectors. Reducing energy consumption in some cases will allow manufacturers of sensors to make their devices even more compact by switching to using less capacious batteries.
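A toy simulation can make the backscattering idea concrete. This is not the chip's actual modulation scheme; it is a deliberately simplified sketch in which the tag encodes bits by switching between reflecting and absorbing an ambient carrier, so it never spends energy generating a signal of its own:

```python
import math

def ambient_carrier(n, freq=0.1):
    """Samples of a carrier that is already on the air (e.g., Wi-Fi)."""
    return [math.sin(2 * math.pi * freq * i) for i in range(n)]

def backscatter(carrier, bits, chip_len=50):
    """Encode bits by toggling the tag's reflection coefficient:
    reflect the carrier for a '1', absorb it for a '0'."""
    out = []
    for b, bit in enumerate(bits):
        coeff = 1.0 if bit else 0.0
        segment = carrier[b * chip_len:(b + 1) * chip_len]
        out.extend(sample * coeff for sample in segment)
    return out

def demodulate(signal, chip_len=50):
    """Receiver side: high reflected energy in a chip period => '1'."""
    bits = []
    for b in range(len(signal) // chip_len):
        segment = signal[b * chip_len:(b + 1) * chip_len]
        energy = sum(sample * sample for sample in segment)
        bits.append(1 if energy > chip_len * 0.1 else 0)
    return bits

bits = [1, 0, 1, 1, 0]
carrier = ambient_carrier(len(bits) * 50)
print(demodulate(backscatter(carrier, bits)))  # [1, 0, 1, 1, 0]
```

The real device additionally shifts the reflected signal onto a different Wi-Fi channel so it does not interfere with the original transmission; the energy saving comes from the same principle shown here, that only a passive reflection switch is needed, not a transmitter.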

The importance of talent and culture in tech-enabled transformations

Many industrial companies may assume that top technology talent is out of reach and that their brand and even location might prevent them from attracting the kind of people they need. But technology professionals are less biased against industrial companies than might be expected. Only 7.4 percent of the respondents to a 2018 survey of technology professionals considered their employer’s industry important. Compensation, the work environment, and professional development—all factors within an industrial company’s control—were the factors that matter most to technology talent ... One leading North American industrial company looking to embark on a tech-enabled transformation prioritized bringing in a chief digital officer (CDO) who had credibility among technologists. The company hired a CDO who previously had led businesses at major technology companies and was able to attract three leading product managers and designers from similar organizations. The company used these new hires—who were intimately familiar with rapid, user-centric design—to signal its commitment to world-class digital development.

Quote for the day:

"If you care enough for a result, you will most certainly attain it." -- William James

Daily Tech Digest - February 23, 2020

Robots are not the job killers we all feared

Not only can digital workers contribute to a more effective workforce overall, they can also make for happier employees. More often than not, automation relieves employees of the tedious parts of their jobs that take considerable time and effort to accomplish. In return, they have more opportunities to pursue projects they truly enjoy and are passionate about. One example of this is at S&P, where financial journalists produce reports on the businesses they are assigned to cover. Their work to develop insightful analyses was hindered by the need to first write lengthy stock reports, until they leveraged Blue Prism’s connected-RPA to automate stock report production. This has given the journalists more time to produce thoughtful analysis, which is not only a more rewarding part of their roles but is also a more valuable offer to S&P’s clients. In some cases, digital workers are even introduced as part of a broader effort to improve employee happiness and engagement. According to our research, 87% of knowledge workers are comfortable with re-skilling in order to work alongside a digital workforce.

FBI recommends passphrases over password complexity

For more than a decade now, security experts have had discussions about what's the best way of choosing passwords for online accounts. There's one camp that argues for password complexity by adding numbers, uppercase letters, and special characters, and then there's the other camp, arguing for password length by making passwords longer. This week, in its weekly tech advice column known as Tech Tuesday, the FBI Portland office positioned itself on the side of longer passwords. "Instead of using a short, complex password that is hard to remember, consider using a longer passphrase," the FBI said. "This involves combining multiple words into a long string of at least 15 characters," it added. "The extra length of a passphrase makes it harder to crack while also making it easier for you to remember." The idea behind the FBI's advice is that a longer password, even if relying on simpler words and no special characters, will take longer to crack and require more computational resources. Even if hackers steal your encrypted password from a hacked company, they won't have the computing power and time needed to crack the password.
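A back-of-the-envelope calculation shows why length beats complexity. For a uniformly random string, entropy is length times log2 of the character-pool size, so the 15-character passphrase the FBI suggests outpaces a short password even when it uses only lowercase letters (the pool sizes below are illustrative):

```python
import math

def entropy_bits(pool_size, length):
    """Bits of entropy for a uniformly random string: length * log2(pool)."""
    return length * math.log2(pool_size)

# 8 characters drawn from ~95 printable ASCII symbols (short but "complex"):
short_complex = entropy_bits(95, 8)    # ~52.6 bits

# 15 lowercase letters (simple words, but long):
long_simple = entropy_bits(26, 15)     # ~70.5 bits

print(f"{short_complex:.1f} bits vs {long_simple:.1f} bits")
```

Every extra bit doubles the brute-force search space, so the ~18-bit gap here means the longer, simpler passphrase is on the order of 200,000 times more expensive to crack, which is the FBI's point. (Real passphrases built from dictionary words have less entropy than uniformly random letters, but length still dominates.)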

How the IRS Audits Cryptocurrency Tax Returns

The presence of a new crypto question on 2019’s Schedule 1 form has individuals concerned about reporting their crypto assets correctly more than ever, and according to experts, this is for good reason. “That is massive” says Enrolled Agent Clinton Donnelly of Donnelly Tax Law. “This question in the 2019 return … it forces every taxpayer in the United States to make a decision whether or not they’re going to be honest or not on this question, because its a yes or no and when you sign the tax return … it’s in small print, it says ‘under penalty of perjury I have reviewed this return and it’s true, complete and correct,’ so failing to check the box is incomplete.” Donnelly went on to explain that by reporting crypto gains in light of the new question, many crypto holders will inadvertently reveal that they first acquired their digital assets years back, which calls their previous years’ returns into suspicion and makes an IRS investigation more likely. Donnelly’s service has so far seen two cryptocurrency audits with its clients, and the tax professional is interested in learning more about what triggers an IRS investigation.

Why AI companies don’t always scale like traditional software startups

For AI companies, knowing when you’ve found product-market fit is just a little bit harder than with traditional software. It’s deceptively easy to think you’ve gotten there – especially after closing 5-10 great customers – only to see the backlog for your ML team start to balloon and customer deployment schedules start to stretch out ominously, drawing resources away from new sales. The culprit, in many situations, is edge cases. Many AI apps have open-ended interfaces and operate on noisy, unstructured data (like images or natural language). Users often lack intuition around the product or, worse, assume it has human/superhuman capabilities. This means edge cases are everywhere: as much as 40-50% of intended functionality for AI products we’ve looked at can reside in the long tail of user intent. Put another way, users can – and will – enter just about anything into an AI app. Handling this huge state space tends to be an ongoing chore. Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar – two auto manufacturers doing defect detection, for example – may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.

Cloud misconfigurations cost companies nearly $5 trillion

"Data breaches caused by cloud misconfigurations have been dominating news headlines in recent years, and the vast majority of these incidents are avoidable," said Brian Johnson, chief executive officer and co-founder of DivvyCloud. Using data from a 2019 Ponemon Institute report that said the average cost per lost record globally is $150, DivvyCloud researchers estimated that cloud misconfiguration breaches cost companies upwards of $5 trillion over those two years. "Breaches caused by cloud misconfigurations have been dominating news headlines in recent years. DivvyCloud researchers compiled this report to substantiate the growing trend of breaches caused by cloud misconfigurations, quantify their impact to companies and consumers around the world and identify factors that may increase the likelihood a company will suffer such a breach," the report said. "Year over year from 2018 to 2019, the number of records exposed by cloud misconfigurations rose by 80%, as did the total cost to companies associated with those lost records," according to the report Unfortunately, the report added, experts expect this upward trend to persist, as companies continue to adopt cloud services rapidly but fail to implement proper cloud security measures.

When Money Becomes Programmable – Part 1

Digital scarcity, when applied to a token such as bitcoin or some other digitally tokenized medium of exchange, allows a new approach to managing our increasingly digitized economy and its micro-economies within. With scarce digital tokens, communities with a common interest in value generation can embed their shared values into the software’s governance and use these meta-assets as instruments of those values. Once they associate scarce tokens with rights to scarce resources, they can develop controls over token usage that help manage that public good. Here’s one hypothetical example: A local government that wants to reduce pollution, traffic congestion, and the town’s carbon footprint might reward households that invest in local solar generation with negotiable digital tokens that grant access to electric mass-transit vehicles but not to toll roads or parking lots. The tokens would be negotiable, with their value tied to measures of the town’s carbon footprint, creating an incentive for residents to use them.
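The solar-incentive example above can be sketched as a token whose transfers are free but whose spending is policy-gated. This is a minimal, hypothetical illustration only; the class, the use categories, and the policy sets are invented for this sketch and do not come from any real token system:

```python
# Hypothetical sketch of a usage-restricted "green token", as in the
# solar-incentive example: negotiable to trade, but spendable only on
# sanctioned uses. All names here are illustrative assumptions.
from dataclasses import dataclass

ALLOWED_USES = {"electric_transit"}          # token grants access here
BLOCKED_USES = {"toll_road", "parking_lot"}  # explicitly excluded

@dataclass
class GreenToken:
    owner: str
    balance: int = 0

    def transfer(self, recipient: "GreenToken", amount: int) -> None:
        """Tokens are negotiable: holders may trade them freely."""
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount
        recipient.balance += amount

    def spend(self, use: str, amount: int) -> bool:
        """Spending is policy-gated: only sanctioned uses are honored."""
        if use in BLOCKED_USES or use not in ALLOWED_USES:
            return False
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

household = GreenToken(owner="solar-household", balance=10)
print(household.spend("electric_transit", 3))  # True  - sanctioned use
print(household.spend("toll_road", 3))         # False - blocked by policy
```

In a real deployment the policy would live in on-chain governance rules rather than a hard-coded set, but the separation it demonstrates, free transferability versus restricted usage, is the core of the "programmable money" idea.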

How Fintech Startups Are Disrupting the Payments Industry

Banks have invested huge sums to build legacy payment systems. However, financial institutions must now not only design processes and systems that incorporate cutting-edge innovations but also meet higher customer expectations. Legacy infrastructure is often incompatible with that of other banks or payment processors, which leads to high fees, long delays and frustration for customers when sending and receiving payments. Tokenization solves the issue of interoperability by leveraging a standard token that participants use to transfer value (or data) quickly and efficiently. In the case of Soramitsu’s Project Bakong, its platform allows participants (i.e. banks) to transact directly using token transfers. This method drastically speeds up settlements by eliminating traditional business processes such as transfer instructions, liquidation and payment confirmations at a later date. Cambodia, Malaysia and Thailand are also experimenting with QR codes to improve remittances among those countries. The QR codes are EMVCo compatible and may be used to send and receive payments denominated in local currencies.

Banking for Humanity: Technology to Increase the Human Touch

As Gen Z is more concerned with authenticity and persistence, banks will need to understand that there is no difference between the offline and online worlds when building their omnichannel strategies. Banks can also consider creating educational channels to promote discourse with Gen Z. By digitalising their services, banks can bridge the gap between financial institutions and the older generations as well. Staff can assist older customers with self-service devices so that they have greater control over their money, and branch designs can incorporate personal consultation spaces that cater to their needs. Likewise, video banking can be used within branches to increase access to financial services and assistance for customers who need help with self-service products and technology whenever they want. A bank’s physical services can be carefully merged with the latest digital technologies.

Understanding the Impact of the Cybersecurity Skills Shortage on Business

The impact of the skills shortage is too powerful to ignore and requires intervention. This is where an effective strategy driven by the CISO comes in. The evolution of the CISO has expanded the role from being a technologist solely focused on managing an organization’s security risks, to also being a business strategist able to reach across organizational boundaries to shape and mobilize resources to enable things like secure digital transformation. In today’s threat landscape, security solutions alone are no longer enough to withstand modern cyber threats. The expanding responsibilities of the CISO and the organizational impact of today’s cybersecurity skills shortage both play a critical role in the success of an organization’s digital transformation efforts and security strategies. While an effective CISO can provide essential guidance, a skills shortage can present uncertainties that adversely affect the productivity and morale of the security team, which in turn can directly impact the overall security of the organization. By investing time and effort in existing team members, security leaders can actively provide more value to their organizations without having to rely solely on seeking new talent.

AI for CRE: Is Cybersecurity a Friend or Foe?

While AI could help lower cybersecurity spending in terms of money and manpower, it could also cost companies money. Last year, Juniper Research predicted that the cost of data breaches would increase from $3 trillion in 2019 to $5 trillion in 2024. A number of factors will play into those costs, such as lost business, recovery costs and fines, but so will AI. “Cybercrime is increasingly sophisticated; the report anticipates that cybercriminals will use AI, which will learn the behavior of security systems in a similar way to how cybersecurity firms currently employ the technology to detect abnormal behavior,” Juniper’s report said. “The research also highlights that the evolution of deep fakes and other AI-based techniques is also likely to play a part in social media cybercrime in the future.” Security experts have also pointed to this year as when hackers will begin launching attacks that leverage AI and machine learning. “The bad [actors] are really, really smart,” Burg of EY Americas told VentureBeat. “And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad.”

Quote for the day:

"Leadership is, among other things, the ability to inflict pain and get away with it - short-term pain for long-term gain." -- George Will