Daily Tech Digest - February 25, 2020

5G's impact: Advanced connectivity, but terrifying security concerns


Despite the enthusiasm, professionals are also concerned about some of the negative aspects of 5G, specifically security and cost. The top barriers to adopting 5G in the next three years included security concerns (35%) and upfront investment (31%), the report found. The relationship between 5G and security is complex. Overall, the majority of respondents (68%) do believe 5G will make their businesses more secure. However, security challenges are also inherent to the network infrastructure, according to the report. These concerns involve user privacy (41%), the number of connected devices (37%), service access (34%), and supply chain integrity (29%). On the connected devices front, some 74% of respondents said they are worried that having more connected devices will bring more avenues for data breaches. With that said, the same percentage of respondents understand that adopting 5G means they will need to redefine security policies and procedures. To prepare for both security and cost challenges associated with 5G, the report recommended users seek external help. The partners businesses will most likely work with include software and services companies (44%), cloud companies (43%), and equipment providers (31%). 


What if 5G fails? A preview of life after faith in technology


"If it gets to a point where it's a broad decoupling of the developed from the emerging economies," said Sec. Lew, "that's not good for anyone. The growth of emerging economies would not be very impressive if they didn't have very active, robust trading relationships with developed economies. And the costs in developed economies would go up considerably, which means that the impact on consumers would be quite dramatic." "We know, from the early days when there was CDMA and GSM," remarked Greg Guice, senior vice president at Washington, DC-based professional consultancy McGuireWoods, "that made it very difficult to sell equipment on a global basis. That not only hurt consumers, but it hurt the pace of technology." He continued: I think what the companies that are building the equipment, and seeking to deploy the equipment, are trying to figure out is, in a world where there may be fragmentation, how do we manage this? I don't see people Balkanizing into their own camps; I think everybody is trying to preserve, as best they can, international harmonization of a 5G platform. Those efforts are in earnest.


Greenpeace takes open-source approach to finish web transformation


“The vision is to help people take action on behalf of the planet,” said Laura Hilliger, a concept architect at Greenpeace who is a leading member of the Planet 4 project. “We want to provide a space that helps people understand how our ecological endeavours are successful, and to show that Greenpeace’s work is successful because of people working collectively.” She met Red Hat representatives after work was already underway on the project in May 2018, which culminated in consultants, technical architects and designers from the company coming in to do a “design sprint” with Greenpeace exactly a year later. This helped Red Hat better understand Planet 4 users and how they interact with the platform, as well as the challenges of integration and effectively visualising data. Hilliger said variations in the tech stacks deployed across Greenpeace’s 27 national and regional offices, on top of its 50-plus websites and platforms, had created a complex data landscape that made integrations difficult.


Evolution of the data fabric


Personally, the fabric concept also began to change my thinking when discussing infrastructure design. For too long, that discussion focused on technology, infrastructure and location, which would then be delivered to a business upon which they would place their data. However, the issue with this was that the infrastructure could then limit how we used our data to solve business challenges. Data fabric changes that focus, building our strategy around our data and how we need to use it: a focus on information and outcomes, not technology and location. Over time, as our data strategies evolved with more focus on data and outcomes, it became clear that a consistent storage layer, while a crucial part of a modern data platform design, does not in itself deliver all we need. A little while ago I wrote a series of articles about Building a Modern Data Platform, which described how a platform is multi-layered: it requires not just consistent storage but must also be intelligent enough to understand our data as it is written, provide insight, apply security and do these things immediately across our enterprise.


Legal Tech May Face Explainability Hurdles Under New EU AI Proposals


Horrigan noted the transparency language in the European Commission’s proposal is similar to the transparency principles outlined in the EU’s General Data Protection Regulation (GDPR). While the European Commission is still drafting its AI regulations, legal tech companies have fallen under the scope of the GDPR since mid-2018. Legal tech companies have also fielded questions regarding predictive coding’s accuracy and transparency with technology-assisted review (TAR), Horrigan added. TAR has become increasingly accepted by courts after then-U.S. Magistrate Judge Andrew Peck of the Southern District of New York granted the first approval of TAR in 2012. In Peck’s order, he discussed how predictive coding’s transparency provides clarity regarding AI-powered software’s “black box.” “We’ve addressed the black box before with technology-assisted review and we will do it again with other forms of artificial intelligence. The black box issue can be overcome,” Horrigan said. However, Hudek disagreed. While Hudek said the proposed regulation doesn’t make him hesitant to develop new AI-powered features for his platform, it does make it more challenging.


Thinking About ‘Ethics’ in the Ethics of AI

Ethics by Design is “the technical/algorithmic integration of reasoning capabilities as part of the behavior of [autonomous AI]”. This line of research is also known as ‘machine ethics’. The aspiration of machine ethics is to build artificial moral agents: artificial agents with ethical capacities that can make ethical decisions without human intervention. Machine ethics thus answers the value alignment problem by building autonomous AI that by itself aligns with human values. To illustrate this perspective with the examples of AVs and hiring algorithms: researchers and developers would strive to create AVs that can reason about the ethically right decision and act accordingly in scenarios of unavoidable harm. Similarly, the hiring algorithms are supposed to make non-discriminatory decisions without human intervention. Wendell Wallach and Colin Allen classified three types of approaches to machine ethics in their seminal book Moral machines.


Cisco goes to the cloud with broad enterprise security service

Cisco describes the new SecureX service as offering an open, cloud-native system that will let customers detect and remediate threats across Cisco and third-party products from a single interface. IT security teams can then automate and orchestrate security management across enterprise cloud, network, applications and endpoints. “Until now, security has largely been piecemeal with companies introducing new point products into their environments to address every new threat category that arises,” wrote Gee Rittenhouse, senior vice president and general manager of Cisco’s Security Business Group, in a blog about SecureX. “As a result, security teams that are already stretched thin have found themselves managing massive security infrastructures and pivoting between dozens of products that don’t work together and generate thousands of often conflicting alerts. In the absence of automation and staff, half of all legitimate alerts are not remediated.” Cisco pointed to its own 2020 CISO Benchmark Report, also released this week, as more evidence of the need for better, more tightly integrated security systems.


Evolution of Infrastructure as a Service


Some would say that IaaS, SaaS, and PaaS are part of a family tree. SaaS is one of the more widely known as-a-service models, where cloud vendors host the business applications and then deliver them to customers online. It enables customers to take advantage of the service without maintaining the infrastructure required to run software on-premises. In the SaaS model, customers pay for a specific number of licenses and the vendor manages the behind-the-scenes work. The PaaS model is more focused on application developers and providing them with a space to develop, run, and manage applications. PaaS models do not require developers to build additional networks, servers or storage as a starting point to developing their applications. ... IaaS is now enabling more disruption across all markets and industries as the same capabilities available to larger companies are now also available to the smallest startup in a garage. This includes advances in AI and Machine Learning (as a service), data analytics, serverless technologies, IoT and much more. This is also requiring large companies to be as agile as a startup.


AI Regulation: Has the Time Arrived?


Karen Silverman, a partner at international business law firm Latham & Watkins, noted that regulation risks include stifling beneficial innovation, the selection of business winners and losers without any basis, and making it more difficult for start-ups to achieve success. She added that ineffective, erratic, and uneven regulatory efforts or enforcement may also lead to unintended ethics issues. "There's some work [being done] on transparency and disclosure standards, but even that is complicated, and ... to get beyond broad principles, needs to be done on some more industry- or use-case specific basis," she said. "It’s probably easiest to start with regulations that take existing principles and read them onto new technologies, but this will leave the challenge of regulating the novel aspects of the tech, too." On the other hand, a well-designed regulatory scheme that zeroes in on bad actors and doesn't overregulate the technology would likely mark a positive change for AI and its supporters, Perry said.


Functional UI - a Model-Based Approach


User interfaces are reactive systems which are specified by the relation between the events received by the user interface application and the actions the application must undertake on the interfaced systems. Functional UI is a set of implementation techniques for user interface applications which emphasizes clear boundaries between the effectful and purely functional parts of an application. User interfaces' behavior can be modeled by state machines that, on receiving events, transition between the different behavior modes of the interface. A state machine model can be visualized intuitively and economically in a way that is appealing to diverse constituencies (product owners, testers, developers), and surfaces design bugs earlier in the development process. Having a model of the user interface makes it possible to auto-generate both the implementation and the tests for the user interface, leading to more resilient and reliable software. Property-based testing and metamorphic testing leverage the auto-generated test sequences to find bugs without having to define the complete and exact response of the user interface to a test sequence. Such testing techniques have found 100+ new bugs in two popular C compilers (GCC and LLVM).
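The state-machine view described above can be sketched in a few lines. This is a minimal, hypothetical example (a media player UI is assumed, not taken from the article): transitions are plain data, so test sequences can be generated and checked as properties rather than as exact expected outputs.

```python
# Hypothetical media-player UI modeled as a finite state machine.
# The transition table is plain data, which makes it easy to visualize
# and to drive with auto-generated event sequences.
TRANSITIONS = {
    ("stopped", "PLAY"):  "playing",
    ("playing", "PAUSE"): "paused",
    ("playing", "STOP"):  "stopped",
    ("paused",  "PLAY"):  "playing",
    ("paused",  "STOP"):  "stopped",
}

def step(state, event):
    # Events with no defined transition leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

def run(events, state="stopped"):
    for event in events:
        state = step(state, event)
    return state

# A property in the spirit of property-based testing: rather than
# specifying the exact response to every sequence, we assert an
# invariant that must hold for any event sequence ending in STOP.
assert run(["PLAY", "PAUSE", "STOP"]) == "stopped"
assert run(["PLAY", "PLAY"]) == "playing"   # redundant PLAY is ignored
```

In a real Functional UI setup, the same table would also feed a visualization tool and a test-sequence generator, so the model, the implementation, and the tests never drift apart.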




Quote for the day:


"There is no 'one' way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer


Daily Tech Digest - February 24, 2020

Why data literacy needs to be part of a company's DNA

"Companies with lower levels of data literacy in the workforce will be at a competitive disadvantage," said Martha Bennett, vice president and principal analyst, Forrester. "It's also important to stress that different roles have different requirements for data literacy; advanced firms also understand that increasing data literacy is not a once-and-done training exercise, it's a continuous process." These days, everyone in an organization needs to be data literate, and the organization must establish a well-rounded data literacy program to ensure effective decision making. The programs must address the capacity to collect, analyze, and disseminate data tailored to the needs of diverse organizational roles. "Lack of data literacy puts you at a disadvantage, and can lead to potentially disastrous outcomes," Bennett said, "and we're not just talking about a business context here, the same applies in our personal lives." Numbers play a role in daily decisions, both in business and in our personal lives. Quantitative information must be evaluated, whether it's predicting an event, considering the increased risk of developing disease, how people lean politically, or how popular a product or service is.



5 reasons to choose PyTorch for deep learning

One of the primary reasons that people choose PyTorch is that the code they look at is fairly simple to understand; the framework is designed and assembled to work with Python instead of often pushing up against it. Your models and layers are simply Python classes, and so is everything else: optimizers, data loaders, loss functions, transformations, and so on. Due to the eager execution mode that PyTorch operates under, rather than the static execution graph of traditional TensorFlow (yes, TensorFlow 2.0 does offer eager execution, but it’s a touch clunky at times) it’s very easy to reason about your custom PyTorch classes, and you can dig into debugging with TensorBoard or standard Python techniques all the way from print() statements to generating flame graphs from stack trace samples. This all adds up to a very friendly welcome to those coming into deep learning from other data science frameworks such as Pandas or Scikit-learn. PyTorch also has the plus of a stable API that has only had one major change from the early releases to version 1.3 (that being the change of Variables to Tensors).
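The "models are simply Python classes" point is easiest to see in code. A minimal sketch (the network shape and names are made up for illustration): because PyTorch executes eagerly, the tensors inside `forward` have concrete values, so an ordinary `print()` works mid-computation.

```python
import torch
import torch.nn as nn

# A model is just a Python class subclassing nn.Module;
# hypothetical two-layer network, chosen only for illustration.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # Eager execution: h is a real tensor right now, so plain
        # Python debugging (print, pdb, etc.) works mid-forward.
        print("hidden activations shape:", h.shape)
        return self.fc2(h)

net = TinyNet()
out = net(torch.randn(3, 4))   # batch of 3 inputs, 4 features each
print(out.shape)               # torch.Size([3, 2])
```

Optimizers, data loaders, and loss functions follow the same pattern: ordinary Python objects you can subclass, inspect, and step through.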


AI: It's time to tame the algorithms and this is how we'll do it


To achieve this objective, the Commission wants to create an "ecosystem of trust" for AI. And it starts with placing a question mark over facial recognition. The organisation said it would consider banning the technology altogether. Commissioners are planning to launch a debate about "which circumstances, if any" could justify the use of facial recognition. The EU's white paper also suggests having different rules, depending on where and how an AI system is used. A high-risk system is one used in a critical sector, like healthcare, transport or policing, and which has a critical use, such as causing legal changes, or deciding on social-security payments. Such high-risk systems, said the Commission, should be subject to stricter rules, to ensure that the application doesn't transgress fundamental rights by delivering biased decisions. In the same way that products and services entering the European market are subject to safety and security checks, argues the Commission, so should AI-powered applications be controlled for bias. The dataset feeding the algorithm could have to go through conformity assessments, for instance. The system could also be required to be entirely retrained in the EU.


Why You Should Revisit Value Discovery

There are at least two reasons for the shift. The first is because we are in a digital world. Now the cost of creating new products can be extraordinarily low (a developer, a laptop). And the cost factor has given rise to new methodologies like Lean Startup and concepts like Fail Fast, Fail Cheap. As enterprises adopt these techniques, they push more projects into corporate innovation pipelines. More on the impact of that later. The second reason relates to software development and delivery methods. It is now possible, often necessary, to chunk software into smaller and smaller units of work and push these into a live test environment with users relatively quickly. Both of these approaches are creating problems. They reinforce the view that more is better. And both also reinforce a challenging proposition: enterprises can be experimental laboratories. Are you starting to get the picture? More ideas of dubious and yet-to-be tested value find their way into your workflow! Perhaps enterprises can convert this negative into a positive but to do so means stitching together a value discovery process with very good value management and delivery.


More And More Organizations Injecting Emotional Intelligence Into Their Systems

A growing number of organizations are injecting emotional intelligence into their systems. These include AI capabilities, such as machine learning and voice and facial recognition, which can better detect and appropriately respond to human emotion, according to Deloitte’s 11th annual Tech Trends 2020 report. The trends also indicate more and more organizations using digital twins, human experience platforms and new approaches to enterprise finance, which can redefine the future of tech innovation. The report captures the intersection of digital technologies, human experiences, and increasingly sophisticated analytics and artificial intelligence technologies in the modern enterprise. It explores digital twins, the new role technology architects play in business outcomes, and affective computing-driven “human experience platforms” that are redefining the way humans and machines interact. Tech Trends 2020 also shares key insights and prescriptive advice for business and technology leaders so they can better understand what technologies will disrupt their businesses during the next 18 to 24 months.


7 Tips to Improve Your Employees' Mobile Security

"A bit of a trade-off has to happen, as they're managing an aspect of something that is personally owned by the employee, and they're using it for all kinds of things besides work," says Sean Ryan, a Forrester analyst serving security and risk professionals. On nights and weekends, for example, employees are more likely to let their guards down and connect to public Wi-Fi or neglect security updates. Sure, some people are diligent about these things, while some "just don't care," Ryan adds. This attitude can put users at greater risk for phishing, which is a common attack vector for mobile devices, says Terrance Robinson, head of enterprise security solutions at Verizon. Employees are also at risk for data leakage and man-in-the-middle attacks, especially when they hop on public Wi-Fi networks or download apps without first checking requested permissions. Mobile apps are another hot attack vector for smartphones, used in nearly 80% of attacks. A major challenge in strengthening mobile device security is changing users' perception of it. Brian Egenrieder, chief risk officer at SyncDog, says he sees "negativity toward it, as a whole."


Recent ransomware attacks define the malware's new age

Over the past two years, however, ransomware has come back with a vengeance. Mounir Hahad, head of the Juniper Threat Labs at Juniper Networks, sees two big drivers behind this trend. The first has to do with the vagaries of cryptocurrency pricing. Many cryptojackers were using their victims' computers to mine the open source Monero currency; with Monero prices dropping, "at some point the threat actors will realize that mining cryptocurrency was not going to be as rewarding as ransomware," says Hahad. And because the attackers had already compromised their victim's machines with Trojan downloaders, it was simple to launch a ransomware attack when the time was right. "I was honestly hoping that that prospect would be two to three years out," says Hahad, "but it took about a year to 18 months for them to make that U-turn and go back to their original attack." The other trend was that more attacks focused on striking production servers that hold mission-critical data. "If you get a random laptop, an organization may not care as much," says Hahad. "But if you get to the servers that fuel their day-to-day business, that has so much more grabbing power."


To Disrupt or Not to Disrupt?

First, consider the choice of technology. Clayton Christensen long distinguished between disruptive technologies (which upend existing markets) and sustaining technologies (which do not). Most companies pursue sustaining technologies as a way of retaining existing customers and keeping a healthy profit margin. The reason to choose a technology that is “worse” initially is its potential to outperform older technologies in the relatively near future. Moreover, disruptive technologies tend to be what established companies either are not good at or do not want to adopt for fear of alienating their customer base. In other words, the very existence of disruptive technologies represents an opportunity for startups. Which brings us to the choice of customer for a disruptive entrepreneur. Christensen noted that, if you want to sell a product that underperforms existing products in some dimension (say, a laptop with less computing power), you need to find either a way of selling at a discount so that a lack of performance can be compensated for or a set of customers who do not strongly value that performance more than some other feature (for example, longer battery life).


New Wi-Fi chip for the IoT devices consumes 5,000 times less energy

A set of ultra-low power Wi-Fi radios integrated in small chips, each measuring 1.5 square millimeters in area
The invention is based on a technique called backscattering. The transmitter does not generate its own signal, but takes the incoming signals from nearby devices (like a smartphone) or a Wi-Fi access point, modifies the signals and encodes its own data onto them, and then reflects the new signals onto a different Wi-Fi channel to another device or access point. This approach requires much less energy and gives electronics manufacturers much more flexibility. With the tiny Wi-Fi chip, IoT devices will no longer need frequent charging or large batteries, and smart home devices can work completely wirelessly, in some cases even without batteries. The developers note that the new transmitter will significantly increase the operating time on a single charge of various battery-powered Wi-Fi sensors and IoT devices, including, for example, portable video cameras, smart voice speakers, and smoke detectors. The reduced energy consumption will in some cases allow sensor manufacturers to make their devices even more compact by switching to lower-capacity batteries.


The importance of talent and culture in tech-enabled transformations


Many industrial companies may assume that top technology talent is out of reach and that their brand and even location might prevent them from attracting the kind of people they need. But technology professionals are less biased against industrial companies than might be expected. Only 7.4 percent of the respondents to a 2018 survey of technology professionals considered their employer’s industry important. Compensation, the work environment, and professional development—all factors within an industrial company’s control—were the factors that matter most to technology talent ... One leading North American industrial company looking to embark on a tech-enabled transformation prioritized bringing in a chief digital officer (CDO) who had credibility among technologists. The company hired a CDO who previously had led businesses at major technology companies and was able to attract three leading product managers and designers from similar organizations. The company used these new hires—who were intimately familiar with rapid, user-centric design—to signal its commitment to world-class digital development.



Quote for the day:


"If you care enough for a result, you will most certainly attain it." -- William James


Daily Tech Digest - February 23, 2020

Robots are not the job killers we all feared


Not only can digital workers contribute to a more effective workforce overall, they can also make for happier employees. More often than not, automation relieves employees of the tedious parts of their jobs that take considerable time and effort to accomplish. In return, they have more opportunities to pursue projects they truly enjoy and are passionate about. One example of this is at S&P, where financial journalists produce reports on the businesses they are assigned to cover. Their work to develop insightful analyses was hindered by the need to first write lengthy stock reports, until they leveraged Blue Prism’s connected-RPA to automate stock report production. This has given the journalists more time to produce thoughtful analysis, which is not only a more rewarding part of their roles but is also a more valuable offer to S&P’s clients. In some cases, digital workers are even introduced as part of a broader effort to improve employee happiness and engagement. According to our research, 87% of knowledge workers are comfortable with re-skilling in order to work alongside a digital workforce.



FBI recommends passphrases over password complexity

For more than a decade now, security experts have had discussions about what's the best way of choosing passwords for online accounts. There's one camp that argues for password complexity by adding numbers, uppercase letters, and special characters, and then there's the other camp, arguing for password length by making passwords longer. This week, in its weekly tech advice column known as Tech Tuesday, the FBI Portland office positioned itself on the side of longer passwords. "Instead of using a short, complex password that is hard to remember, consider using a longer passphrase," the FBI said. "This involves combining multiple words into a long string of at least 15 characters," it added. "The extra length of a passphrase makes it harder to crack while also making it easier for you to remember." The idea behind the FBI's advice is that a longer password, even if relying on simpler words and no special characters, will take longer to crack and require more computational resources. Even if hackers steal your encrypted password from a hacked company, they won't have the computing power and time needed to crack the password.
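The "length beats complexity" argument comes down to entropy arithmetic, and a passphrase generator is only a few lines. A minimal sketch (the tiny eight-word list is a stand-in; real tools such as Diceware draw from lists of thousands of words, and 4 words from a 7,776-word list gives about 51.7 bits):

```python
import math
import secrets  # cryptographically secure randomness, not random

# Toy word list for illustration only; a real generator would use
# a large curated list (e.g. the 7,776-word Diceware list).
WORDS = ["correct", "horse", "battery", "staple",
         "river", "cloud", "maple", "stone"]

def passphrase(n_words=4, sep="-"):
    """Join randomly chosen words into a long, memorable passphrase."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

# Entropy in bits grows linearly with the number of words:
# n_words * log2(list size). With this toy 8-word list,
# 4 words give 4 * log2(8) = 12 bits (tiny, because the list is tiny).
bits = 4 * math.log2(len(WORDS))
p = passphrase()
print(p, f"({len(p)} chars, {bits:.1f} bits with this toy list)")
```

Even with short words, four of them plus separators comfortably clears the FBI's 15-character suggestion, and each added word multiplies the attacker's search space by the full size of the word list.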



How the IRS Audits Cryptocurrency Tax Returns

The presence of a new crypto question on 2019’s Schedule 1 form has individuals concerned about reporting their crypto assets correctly more than ever, and according to experts, this is for good reason. “That is massive,” says Enrolled Agent Clinton Donnelly of Donnelly Tax Law. “This question in the 2019 return … it forces every taxpayer in the United States to make a decision whether or not they’re going to be honest or not on this question, because it’s a yes or no and when you sign the tax return … it’s in small print, it says ‘under penalty of perjury I have reviewed this return and it’s true, complete and correct,’ so failing to check the box is incomplete.” Donnelly went on to explain that by reporting crypto gains in light of the new question, many crypto holders will inadvertently reveal that they first acquired their digital assets years back, which calls their previous years’ returns into suspicion and makes an IRS investigation more likely. Donnelly’s service has so far seen two cryptocurrency audits with its clients, and the tax professional is interested in learning more about what triggers an IRS investigation.


Why AI companies don’t always scale like traditional software startups

For AI companies, knowing when you’ve found product-market fit is just a little bit harder than with traditional software. It’s deceptively easy to think you’ve gotten there – especially after closing 5-10 great customers – only to see the backlog for your ML team start to balloon and customer deployment schedules start to stretch out ominously, drawing resources away from new sales. The culprit, in many situations, is edge cases. Many AI apps have open-ended interfaces and operate on noisy, unstructured data (like images or natural language). Users often lack intuition around the product or, worse, assume it has human/superhuman capabilities. This means edge cases are everywhere: as much as 40-50% of intended functionality for AI products we’ve looked at can reside in the long tail of user intent. Put another way, users can – and will – enter just about anything into an AI app. Handling this huge state space tends to be an ongoing chore. Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar – two auto manufacturers doing defect detection, for example – may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.


Cloud misconfigurations cost companies nearly $5 trillion

"Data breaches caused by cloud misconfigurations have been dominating news headlines in recent years, and the vast majority of these incidents are avoidable," said Brian Johnson, chief executive officer and co-founder of DivvyCloud. Using data from a 2019 Ponemon Institute report that said the average cost per lost record globally is $150, DivvyCloud researchers estimated that cloud misconfiguration breaches cost companies upwards of $5 trillion over those two years. "Breaches caused by cloud misconfigurations have been dominating news headlines in recent years. DivvyCloud researchers compiled this report to substantiate the growing trend of breaches caused by cloud misconfigurations, quantify their impact to companies and consumers around the world and identify factors that may increase the likelihood a company will suffer such a breach," the report said. "Year over year from 2018 to 2019, the number of records exposed by cloud misconfigurations rose by 80%, as did the total cost to companies associated with those lost records," according to the report Unfortunately, the report added, experts expect this upward trend to persist, as companies continue to adopt cloud services rapidly but fail to implement proper cloud security measures.


When Money Becomes Programmable – Part 1

Digital scarcity, when applied to a token such as bitcoin or some other digitally tokenized medium of exchange, allows a new approach to managing our increasingly digitized economy and its micro-economies within. With scarce digital tokens, communities with a common interest in value generation can embed their shared values into the software’s governance and use these meta-assets as instruments of those values. Once they associate scarce tokens with rights to scarce resources, they can develop controls over token usage that help manage that public good. Here’s one hypothetical example: A local government that wants to reduce pollution, traffic congestion, and the town’s carbon footprint might reward households that invest in local solar generation with negotiable digital tokens that grant access to electric mass-transit vehicles but not to toll roads or parking lots. The tokens would be negotiable, with their value tied to measures of the town’s carbon footprint, creating an incentive for residents to use them.
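The hypothetical above can be sketched as code: a token whose transfer logic enforces policy. This is a toy illustration of the idea, not any real token standard; all names (`GreenToken`, the allowed-use set) are invented for the example.

```python
# Toy sketch of "programmable money": spending rules are embedded in
# the token itself. Hypothetical names; not a real token standard.
ALLOWED_USES = {"transit"}  # valid for electric mass transit,
                            # not toll roads or parking lots

class GreenToken:
    def __init__(self, balance):
        self.balance = balance

    def spend(self, amount, use):
        # The policy check lives in the token's own logic, so the
        # issuing community's values travel with the asset.
        if use not in ALLOWED_USES:
            raise ValueError(f"tokens are not valid for {use!r}")
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount
        return self.balance

wallet = GreenToken(10)
wallet.spend(3, "transit")      # allowed: balance is now 7
# wallet.spend(1, "parking")    # would raise ValueError: policy forbids it
```

A real deployment would put logic like this in a smart contract so the rules are enforced by the network rather than by any one party, and could tie token value to an external measure (here, the town's carbon footprint) via an oracle.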


How Fintech Startups Are Disrupting the Payments Industry

Banks have invested huge sums to build legacy payment systems. However, financial institutions must now not only design processes and systems that incorporate cutting-edge innovations but also meet higher customer expectations. A bank’s legacy infrastructure is often incompatible with that of other banks or payment processors, which leads to high fees, long delays and frustration for customers when sending and receiving payments. Tokenization solves the interoperability problem by using a standard token that participants transfer to move value (or data) quickly and efficiently. In the case of Soramitsu’s Project Bakong, its platform allows participants (i.e. banks) to transact directly using token transfers. This method drastically speeds up settlement by eliminating traditional business processes such as transfer instructions, liquidation and later payment confirmations. Cambodia, Malaysia and Thailand are also experimenting with QR codes to improve remittances between these countries. The QR codes are EMVCo-compatible and may be used to send and receive payments denominated in local currencies.
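The settlement speed-up can be illustrated with a toy shared ledger (this is not Project Bakong's actual design; the participant names and balances are invented): a transfer debits and credits in one atomic step, with no pending instructions or later confirmations.

```python
# Toy shared ledger: participants (banks) transact directly by moving
# tokens, so settlement happens in a single atomic step.
class SharedLedger:
    def __init__(self, balances):
        self.balances = dict(balances)  # tokens held by each participant

    def transfer(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient tokens")
        # Debit and credit settle together; there is no intermediate
        # "awaiting confirmation" state to reconcile later.
        self.balances[sender] -= amount
        self.balances[receiver] += amount

ledger = SharedLedger({"bank_a": 100, "bank_b": 50})
ledger.transfer("bank_a", "bank_b", 30)
print(ledger.balances)  # {'bank_a': 70, 'bank_b': 80}
```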


Banking for Humanity: Technology to Increase the Human Touch

Because Gen Z is more concerned with authenticity and consistency, banks will need to understand that there is no difference between the offline and online worlds when building their omnichannel strategies. Banks can also consider creating educational channels to promote discourse with Gen Z. By digitalising their services, banks can bridge the gap between financial institutions and the older generations as well. Staff can help older customers use self-service devices so that they have greater control over their money. Branch designs can also take into consideration the personal-consultation aspect that caters to their needs. Likewise, video banking can be used within branches to increase access to financial services and assistance for customers who need help with self-service products and technology whenever they want. A bank’s physical services can be carefully merged with the latest digital technologies.


Understanding the Impact of the Cybersecurity Skills Shortage on Business

The impact of the skills shortage is too powerful to ignore and requires intervention. This is where an effective strategy driven by the CISO comes in. The evolution of the CISO has expanded the role from being a technologist solely focused on managing an organization’s security risks, to also being a business strategist able to reach across organizational boundaries to shape and mobilize resources to enable things like secure digital transformation. In today’s threat landscape, security solutions alone are no longer enough to withstand modern cyber threats. The expanding responsibilities of the CISO and the organizational impact of today’s cybersecurity skills shortage both play a critical role in the success of an organization’s digital transformation efforts and security strategies. While an effective CISO can provide essential guidance, a skills shortage can present uncertainties that can still adversely affect the productivity and morale of the security team – which can directly impact the overall security of the organization. By investing time and efforts into existing team members, security leaders can actively provide more value to their organizations without having to rely solely on seeking new talent.


AI for CRE: Is Cybersecurity A friend or foe?

While AI could help lower cybersecurity spending in terms of money and manpower, it could also cost companies money. Last year, Juniper Research predicted that the cost of data breaches would increase from $3 trillion in 2019 to $5 trillion in 2024. A number of factors will play into those costs, like lost business, recovery costs and fines, but so will AI. “Cybercrime is increasingly sophisticated; the report anticipates that cybercriminals will use AI, which will learn the behavior of security systems in a similar way to how cybersecurity firms currently employ the technology to detect abnormal behavior,” Juniper’s report said. “The research also highlights that the evolution of deep fakes and other AI-based techniques is also likely to play a part in social media cybercrime in the future.” Security experts have also pointed to this year as when hackers will start launching attacks that leverage AI and machine learning. “The bad [actors] are really, really smart,” Burg of EY Americas told VentureBeat. “And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad.”



Quote for the day:


"Leadership is, among other things, the ability to inflict pain and get away with it - short-term pain for long-term gain." -- George Will


Daily Tech Digest - February 21, 2020

Cloud-enabled threats are on the rise, sensitive data is moving between cloud apps

“We are seeing increasingly complex threat techniques being used across cloud applications, spanning from cloud phishing and malware delivery, to cloud command and control and ultimately cloud data exfiltration,” said Ray Canzanese, Threat Research Director at Netskope. “Our research shows the sophistication and scale of the cloud enabled kill chain increasing, requiring security defenses that understand thousands of cloud apps to keep pace with attackers and block cloud threats. For these reasons, any enterprise using the cloud needs to modernize and extend their security architectures.” 89% of enterprise users are in the cloud, actively using at least one cloud app every day. Cloud storage, collaboration, and webmail apps are among the most popular in use. Enterprises also use a variety of apps in those categories – 142 on average – indicating that while enterprises may officially sanction a handful of apps, users tend to gravitate toward a much wider set in their day-to-day activities. Overall, the average enterprise uses over 2,400 distinct cloud services and apps.


Move beyond digital transformation — and improve your ROI


How do you achieve value across an entire digital enterprise and make sure all investments give you that coveted, but sometimes elusive, ROI? You need to do more than transform. You need to transcend traditional approaches to growth and change. As part of PwC’s 2020 Global Digital IQ research, we studied thousands of companies and their digital behaviors. We found that just 5 percent are getting moderate or significant payback from their digital efforts in all areas measured: growth, profits, innovation, customer experience, brand lift, attracting and retaining talent, disrupting their own industry, using data to improve decisions, cutting costs, and combating new industry entrants. This elite group of companies — what we call Transcenders — achieve real payback across their enterprises. They embrace innovation, and they don’t fear change. If this were high school, they’d reign as prom queen, star quarterback, and valedictorian all rolled into one. What does it take to transcend? Four core differentiators deliver consistent, standout performance. And they’re elements many leaders talk about but don’t always act on or get full value from.



It's easy to see a digital transformation business strategy as a fun-filled ride into the future and envision the onslaught of high-fives when new technology (and the associated technology leader) has repositioned the company for change, growth and becoming a digital business. Reading the marketing pitch on digital transformation, it's easy to assume that you buy the right technology and perhaps some services, and after a few months, you arrive in the land of rainbows and unicorns. What's often left out of these stories are two salient facts. First, technology by itself has rarely transformed a business. Kodak invented many of the core technologies for digital photography, but chose to shelve them for a variety of reasons, not the least of which was a concern about cannibalizing its core business. The DVD was a widely available technology, but using it to create a novel business model of sending movies by mail helped Netflix -- with its super-easy customer experience -- overtake video giant Blockbuster, which clung to its store-based ways.


Home Affairs pushes back against encryption law proposals


The Independent National Security Legislation Monitor (INSLM), Dr James Renwick, went further during public hearings in Canberra this week. Not only did he propose tougher independent oversight of TOLA actions, he repeatedly expressed his concern that the Attorney and the Minister didn't constitute an independent "double lock" for authorising TCNs. Such a double lock is required in the UK, where the equivalent to a TCN must be approved by both the Secretary of State for Home Affairs and the independent Investigatory Powers Commissioner's Office (IPCO). "Leaving aside the personalities and the people who might fill those offices from time to time, nevertheless the Attorney and the Minister for Communications are both members of the same government and the same cabinet," Renwick said on Friday. "There's at least some administrative law which suggests that in those circumstances, they might both be bound by a cabinet decision." Hamish Hansford, DHA's Acting Deputy Secretary for Policy, rejected that view. "Notwithstanding both an Attorney and Minister for Communications are members of a cabinet, they are also independent decision-makers under statute, and they need to exercise those responsibilities independently, if you like," he said.


Looking at the future of identity access management (IAM)


MFA is already popular among some enterprise technologies and consumer applications handling sensitive, personal data (e.g., financial, healthcare), and will continue to transform authentication attempts. A lot has been said about increased password complexities, but human error is still persistent. The addition of MFA immediately adds further security to authentication attempts by having the user enter a temporarily valid pin code or verify their identity by other methods. An area to watch within MFA is the delivery method. For example, SMS notifications were the first stand-out but forced some organizations to weigh added costs that messaging might bring on their mobile phone plans. SMS remains prevalent, but all things adapt, and hackers’ increased ability to hijack these messages has made their delivery less secure. Universal one-time password (OTP) clients, such as Google Authenticator, have both increased security and made the adoption of MFA policies much easier through time-sensitive pin codes. Universal OTPs also do away with the requirement for every unique resource to support its own MFA method.
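The time-sensitive codes these OTP clients generate are standardized in RFC 6238 (TOTP), which keys RFC 4226's HMAC-based one-time password to a 30-second time step. A minimal sketch:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the standard test secret at T=59 yields 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because both sides derive the code from a shared secret and the current time, nothing needs to be delivered over SMS, which is what removes the hijacking risk.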


Forget the Internet of Things. Here’s what IoT really stands for


Intelligence of things looks less like the restroom in Ethiopia, and more like Hartsfield-Jackson Atlanta International Airport, where the world’s largest toilet maker, Toto, has taken things a step further. There, too, the bathrooms are studded with sensors, from the urinals to the faucets. But they don’t just flush automatically; they all report back to a central cloud database. The volume of data is astounding – a single toilet may flush 5,000 times per day. In aggregate, the airport can use this data to predict “rush hour” for the airport bathrooms, and deploy custodians before and after to make sure the toilets are clean, the paper towels are stocked, and everything’s running smoothly. “The last decade was about connectivity, and we describe that dynamic with the Internet of Things,” Steve Koenig, vice president of research at the Consumer Technology Association, told Digital Trends. “This decade is really about adding intelligence to different devices, services, etc. We’re confronted with a new IoT: The intelligence of things.” ... “Without intelligence, there is no value,” Kiva Allgood, head of IoT and automotive at Ericsson, told Digital Trends.
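The aggregation step is simple in principle; a sketch with invented sensor data shows how flush timestamps reported to the cloud could be bucketed by hour to predict bathroom "rush hours":

```python
# Illustrative sketch of the aggregation step: bucket flush-sensor
# timestamps by hour to find the bathrooms' peak usage (data is made up).
from collections import Counter
from datetime import datetime

flush_events = [  # hypothetical sensor timestamps reported to the cloud
    "2020-02-21T08:05:00", "2020-02-21T08:40:00", "2020-02-21T08:55:00",
    "2020-02-21T12:10:00", "2020-02-21T17:20:00", "2020-02-21T08:15:00",
]

by_hour = Counter(datetime.fromisoformat(ts).hour for ts in flush_events)
peak_hour, peak_count = by_hour.most_common(1)[0]
print(f"Peak hour: {peak_hour}:00 with {peak_count} flushes")
```

A real deployment would correlate this with flight schedules and staff rosters, but the "intelligence" is the same: turning raw event streams into a staffing decision.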


How healthcare CIOs can keep their organisations secure

For healthcare environments, ransomware poses one of the scariest types of threats in the entire cyber security arena. Physicians-in-training get a taste of the potential reality during routine training exercises at Maricopa Medical Center. As trainees attempt to use diagnostic equipment, like CT scanners, in resuscitating “patient” dummies, they’re greeted with ransomware lockout messages onscreen demanding Bitcoin payments before the equipment can be used again. They must use their intuition to treat the patient without the correct equipment. The price for this can be (again, for a dummy patient) serious brain damage. The Internet of Things (IoT) unlocks huge potential for organisations, including healthcare entities. But this dependence on internet-connected infrastructure also poses a risk. Avoiding ransomware attacks in healthcare requires a multifaceted approach ... The Health Insurance Portability and Accountability Act (HIPAA) was an important step forward for healthcare security and organisations as well as patients.


Cybersecurity: Hacking victims are uncovering cyberattacks faster


"The buzz around the topic leading up to the GDPR deadline helped to get it in front of senior execs outside of the IT team. Many of them saw the importance of GDPR compliance and they supported measures to improve defences and breach identification," Grout said. While the legislation only applies to the European Union, the impact is also felt by global organisations that do business or transfer data in Europe. That appears to have had an impact on the median dwell time across the globe, which is down from 78 days to 56 days. However, one in ten FireEye investigations still involve organisations that had cyber attackers intruding on the network for over two years, indicating that cyber criminals -- and in some cases, nation-state backed hacking operations -- can still remain very stealthy when compromising networks. "Some of them are being targeted by highly skilled APT [Advanced Persistent Threat] groups that are able to hide themselves for a long time after the initial breach," said Grout. One of the most common weaknesses exploited by attackers -- as identified in the report -- is the failure to enforce multi-factor authentication (MFA) on the enterprise network. A lack of MFA means that cyber criminals who successfully breach or steal passwords can easily gain access to networks.


AI’s bias problem: Why Humanity Must be Returned to AI


If an AI system is built in a contrived laboratory environment with data that isn’t representative of the target audience, or worse, with patterns in the data that reflect prejudice, the AI’s decisions will also be prejudiced. It is incredibly difficult for algorithms to ‘unlearn’ these patterns, so it is important that biases are not built into the algorithm from the first phases of implementation. Origins of bias can be nuanced and hard to spot, ranging from historic biases based on race and gender to a lack of diversity within training sets. As a consequence, certain groups are disproportionately under-represented. A study by the National Institute of Standards and Technology (NIST) found that facial recognition misidentified African-American and Asian faces ten to 100 times more often than Caucasian faces, while Native Americans were misidentified more than any other group. The study also revealed that women were falsely identified more often than men, and senior citizens had more than 10 times the issues faced by middle-aged adults. According to a report by the AI Now Institute at New York University, the lack of diverse training data also threatens to worsen the historic underemployment of disabled people.
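Disparities like these are typically surfaced by comparing error rates per demographic group. A toy sketch with synthetic counts (not NIST's data):

```python
# Toy illustration (synthetic numbers, not NIST's measurements): compare
# false identification rates across demographic groups to surface bias.
false_matches = {"group_a": 2, "group_b": 45}    # hypothetical error counts
attempts      = {"group_a": 1000, "group_b": 1000}

rates = {g: false_matches[g] / attempts[g] for g in attempts}
disparity = rates["group_b"] / rates["group_a"]
print(f"False-match rates: {rates}, disparity: {disparity:.1f}x")
```

A per-group breakdown like this is the minimum audit needed to catch the kind of 10x-100x gaps the NIST study reports; an aggregate accuracy figure would hide it entirely.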


Achieving SOC 2 Compliance in DevOps

If you are wondering whether AWS complies with SOC 2 at this point, you are not alone. AWS as a cloud environment is designed to comply with SOC 2 requirements; at the very least, the ecosystem offers tools that make compliance easy. SOC 2 compliance is something that AWS takes seriously. In fact, AWS keeps the location of data centers confidential to ensure maximum security. It also offers high resilience with multiple redundancies and automated disaster recovery measures. Through AWS Artifact, you can gain access to all SOC reports, including SOC 2 Security, Availability, and Confidentiality Reports generated by AWS. All controls are provided and you have the complete list of services in scope for maximum compliance. AWS has an extensive set of tools for maintaining controls and ensuring compliance. Amazon CloudWatch is a good example of a comprehensive monitoring tool that you can use across the AWS ecosystem. The same is true for AWS CloudTrail and Amazon GuardDuty. You also have AWS Shield offering security measures that are ready to deploy.



Quote for the day:


"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray


Daily Tech Digest - February 18, 2020

Artificial Human Beings: The Amazing Examples Of Robotic Humanoids And Digital Humans

Digital human beings are photorealistic digitized virtual versions of humans. Consider them avatars. While they don't necessarily have to be created in the likeness of a specific individual (they can be entirely unique), they do look and act like humans. Unlike digital assistants such as Alexa or Siri, these AI-powered virtual beings are designed to interact, sympathize, and have conversations just like a fellow human would. Here are a few digital human beings in development or at work today: Neons: AI-powered lifeforms created by Samsung’s STAR Labs and called Neons include unique personalities such as a banker, K-pop star, and yoga instructor. While the technology is still young, the company expects that, ultimately, Neons will be available on a subscription basis to provide services such as customer service or concierge assistance. Digital Pop Stars: In Japan, new pop stars are getting attention—and these pop stars are made of pixels. One of the band members of AKB48, Amy, is entirely digital and was made by borrowing features from the human artists in the group. Another Japanese artist, Hatsune Miku, is a virtual character from Crypton Future Media.



Edge computing enables near-real-time application engagements. While local computing is not new, edge computing has emerged because technologies such as content delivery networks and local edge devices and gateways can now aggregate IoT sensor and mobile device insights to enable on-demand actions where people and physical processes exist, need them, and benefit from them. Want to dramatically improve customer experience, employee experience, and business achievements? This is powerful empowerment. Edge computing architectures have three major building blocks. Edge computing varies across different solution use cases and value scenarios, so it's difficult to define a single pattern for everyone. Forrester's research does find three general building blocks core to most scenarios: edge management layers, edge networks, and edge intelligence fabric software. The research also covers enterprise and government use cases and case studies of how your peers are empowering their customers and advancing their market value with these technologies, as well as the functions and components of edge computing and the vendor landscape across all industries and the services already offered.


It isn’t just the engineering team that should focus on developing the product offering or key consumer touchpoints. Employees across the organisation are valuable as they all interact with different stages of the customer journey and can provide valuable insights into pain points. They are capable of delivering a constant flow of new ideas to improve the digital customer experience, asking what will help add value for your customers while engineering teams integrate a process to make it a reality. It’s no longer about the waterfall approach of working in segments, but rather about coming together as a collaborative business and empowering the devops team to make the technical decisions needed to turn the ideas into reality. Never underestimate the importance of collaboration in innovation. Giving employees at all levels the opportunity to get involved with their own ideas, perhaps via collaborative brainstorming sessions with the engineering team, helps avert analysis paralysis, as everyone is involved from the beginning. It is essential for the management team to provide employees with not only the opportunity to share their thoughts about ways to develop the business, but also the training to help them use data and technology to bring these ideas to life.


Bala is right to call out that one of the primary benefits of serverless and "single-purpose microservices" architectures is that "You can use the right tool for the right job rather than being constrained to one language, one framework or even one database." This is immensely freeing for developers, because now instead of writing monolithic applications that likely have very low utilization with spiky workloads, they can build microservices tied to ephemeral serverless functions. When the system is idle, it shuts down and costs nothing to run. Everyone wins. This can also make maintaining code more straightforward. For monolithic applications, updating code can present a major burden because of the difficulty inherent in covering all dependencies. As Ophir Gross has noted, "Spaghetti code is full of checks to see what interface version is being used and to make sure that the right code is executed. It's often disorganized and usually results in higher maintenance efforts as changes in code affect functionality in areas that are challenging to predict during development stages."
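A single-purpose serverless function can be very small indeed. The sketch below follows the common event/context handler convention (the event fields and job shape are invented): it does exactly one job, holds no state between invocations, and runs nothing while idle.

```python
# Minimal sketch of a single-purpose serverless function in the common
# Lambda-style event/context shape. It turns a thumbnail request into a
# work order and nothing else -- no version checks, no shared state.
def handler(event: dict, context: object = None) -> dict:
    width = int(event.get("width", 128))
    height = int(event.get("height", 128))
    return {
        "statusCode": 200,
        "job": {"action": "resize", "width": width, "height": height},
    }

print(handler({"width": 640, "height": 480}))
```

Because the function owns one narrow responsibility, updating it cannot ripple into unrelated features the way a change inside a monolith can.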



DDoS Attacks Nearly Double Between Q4 2018 and Q4 2019

DDoS attackers continued to leverage non-standard protocols for amplification attacks in the last quarter of 2019, researchers found. Adversaries have also adopted Apple Remote Management Service (ARMS), part of the Apple Remote Desktop (ARD) application for remote administration. This tactic was first spotted in June 2019; by October, attacks were widespread. The fourth quarter of 2019 brought multiple high-profile DDoS attacks, including threats against financial organizations in South Africa, Singapore, and nations across Scandinavia. DDoS attacks aimed to cause disruption for the United Kingdom's Labour party and also targeted Minecraft servers at the Vatican. In a more recent case, just last week the FBI warned of a potential DDoS attack targeting a state-level voter registration and information site. "This demonstrates that DDoS is still a common attack method among cybercriminals driven by ideological motives or seeking financial gain, and organizations should be prepared for such attacks and have a deep understanding of how they evolve," researchers said in a statement.


Keeping up with disruptors through hybrid integration


For consumers of the digital era, experience is everything. They expect newfound convenience and flexibility and will have no problem looking elsewhere if this cannot be provided. This raises the question: how can the traditional players hope to keep up? Yet things aren’t as complex as they seem. One reason these new companies can drive such positive results comes down to the fact that they have no reliance on legacy databases and can take advantage of existing third-party systems. For example, Citymapper leverages open data from Transport for London to retrieve journey information and provide real-time visibility over transport schedules, allowing customers to make the best choice of journey based on timings. Meanwhile, Uber uses Google’s APIs to run its mapping software and match customers with the drivers closest to them. From there, the data is stored and used to predict supply and demand, as well as set fares. In both cases, these services have been built on existing integrations, meaning they don’t run into the same problems as many of the established players.
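The integration pattern itself is straightforward: pull journey options from a third-party source and rank them for the customer. The sketch below uses invented data rather than a real Transport for London API call:

```python
# Illustrative sketch of the Citymapper-style pattern (no real API call;
# the journey options are invented): take third-party journey data and
# surface the best choice for the customer.
journeys = [
    {"mode": "tube", "departs": "08:02", "duration_min": 24},
    {"mode": "bus",  "departs": "08:00", "duration_min": 41},
    {"mode": "rail", "departs": "08:05", "duration_min": 19},
]

best = min(journeys, key=lambda j: j["duration_min"])
print(f"Best option: {best['mode']} ({best['duration_min']} min)")
```

All of the hard work (timetables, live positions) lives in the third-party data source; the new entrant only adds the ranking and the experience on top.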


What Do Facebook's Quiet AI Acquisitions Across the UK Signify?

Amid all the controversies and roadblocks in its drive to attain AI leadership, the company is moving forward with innovation and tech developments. These developments are largely the result of its acquisitions: small but significant. Facebook’s M&A activities are proving to be quite beneficial in its AI journey. Recently, the company acquired Scape Technologies, a London-based computer vision startup working on location accuracy beyond the capabilities of GPS. Full terms of the deal remain as yet unknown, although a Companies House update reveals that Facebook Inc. now has majority control of the company (more than 75%). Further, regulatory filings show that Scape’s previous venture capital representatives have resigned from the Scape board and been replaced by two Facebook executives. ... Meanwhile, the acquisition by Facebook, no matter what form it takes, looks like a good fit given the US company’s investment in next-generation platforms, including VR and AR. It is also another — perhaps worrying — example of US tech companies hoovering up UK machine learning and AI talent early.


Why AI systems should be recognized as inventors


It’s important to note that the Artificial Inventor Project doesn’t want AI systems to own the patents for their creations. Such an interpretation of the case confuses ownership of patent rights with inventorship. Hence the DABUS applications list the AI as the inventor, with the AI’s owner listed as the patent applicant and prospective owner of any issued patents. It will be many years before they learn the full outcome of their applications. The team is appealing the rulings of both the EPO and the UK IPO. Other decisions in jurisdictions including the US, Germany, Israel, Taiwan, China, and Korea are still pending, as well as one filed under the Patent Cooperation Treaty, which facilitates the patent application process in more than 150 states. The World Intellectual Property Organization and the United States Patent and Trademark Office, meanwhile, have both requested comments on how they could develop policies for such applications. They may need to address any ambiguity over who owns the patents for AI-generated inventions when both the creator of the system and an individual user have contributed to its output. But granting ownership to the person who made the AI operable may be the most straightforward solution.


Mac attacks on the rise

"We saw a significant rise in the overall prevalence of Mac threats in 2019, with an increase of over 400% from 2018,'' the report by Malwarebytes Labs stated. Part of that increase can be attributed to an increase in its Malwarebytes for Mac user base, the report noted. To see if that increase reflected what was actually happening in the Mac threat landscape, Malwarebytes said, it examined threats per endpoint on both Macs and Windows PCs. "In 2019, we detected an average of 11 threats per Mac endpoint--nearly double the average of 5.8 threats per endpoint on Windows,'' the report said. Another key finding was that overall, consumer threat detections were down by 2% from 2018, but business detections increased by 13% in 2019, the report said. This resulted in a mere 1% increase in threat volume year-over-year. The sophistication of threat capabilities in 2019 increased, with many using exploits, credential stealing tools, and multi-stage attacks involving mass infections of a target, the report said. While seven of 10 top consumer threat categories decreased in volume, HackTools--a threat category for tools used to hack into systems and computers--increased against consumers by 42% year-over-year, bolstered by families such as MimiKatz, which also targeted businesses, the report said.


4 principles of analytics you cannot ignore


Data is a resource. If you are not analyzing it, it is an unused resource. At SAS, we often say, “Data without analytics is value not yet realized.” Naturally, then, wherever there is data, there needs to be analytics. But what does that mean today, when we are generating more data, and more diverse data, than ever before? And all of that data streams or moves about many different networks. The first principle of analytics is about bringing the right analytics technology to the right place at the right time. Whether your data is on-premises, in a public or private cloud, or at the edges of the network – analytics needs to be there with it. ... You should pay great attention to the quality, robustness and performance of your algorithms. But the value of analytics is not in the features and functions of the algorithm – not anymore. The value is in solving data-driven business problems. The analytics platform is a commodity – everybody has algorithms. But operationalizing analytics is not a commodity. Everybody is challenged with bringing analytics to life. When you deploy analytics in production, it drives value and decisions.



Quote for the day:


"To be able to lead others, a man must be willing to go forward alone." -- Harry Truman