Daily Tech Digest - June 12, 2020

IT Careers: Planning Your Future When the Future Is Uncertain

Right now, a lot of businesses are operating in crisis mode so they're prioritizing cost control out of necessity. Some of those companies will make staff cuts across the board to be "fair." Others realize that because the future is increasingly digital, they'll need to make cuts with a scalpel rather than an axe. Those companies are taking inventory of the skills they have and are comparing that with what they'll need to survive and thrive in the short term and over the long term. "Managing experts and navigating those who live in silos is one of the most challenging and vexing issues of our day," said Vikram Mansharamani ... Mansharamani also recommends planning for several possible futures as opposed to "the future," which is the same advice major consulting firms are providing client companies. In both cases it's wise to do scenario planning for each possible circumstance. "There's a lack of understanding of what the range of possibilities is," said Mansharamani. "A lot of people have thought of career paths as climbing corporate ladders, which I think is wrong." Instead, it might be wiser at times to make a lateral move in order to shift one's career to a different track. Alternatively, one might consider what appears to be a temporary digression as part of a longer-term strategy.


The Future Will Be Both Agile and Hardened

In short, IT became agile but security did not. Then the pandemic hit, which put our situation into stark relief. Overnight, we went from a 10% to 20% remote workforce to more than 90% remote. In a hot second, business continuity became something we did, not something we met about. Peter was robbed and Paul was paid as we diverted budget, changed priorities, and stood up VPNs and reconfigured networks to allow remote access to our critical systems. In a few frenetic weeks, we put many assumptions to the test and learned a lot. Many of our legacy on-premises applications simply aren't elastic enough to support this new remote workforce. Our massive overnight changes shed new light on our security's worst enemy — human error — as system misconfigurations skyrocketed to record highs, leaving us exposed. Predictably, bad actors saw opportunity in the pandemic and took advantage. Now what? As the weeks turn to months, it's increasingly clear that there is no going back. As Satya Nadella, CEO of Microsoft, recently noted, "We've seen two years of digital transformation in two months."


16 Tech Experts Weigh In On The Potential Of Edge Computing

Edge computing has big implications for machine learning. While training a machine learning model can be very data-intensive and may require the scale of public cloud infrastructure, inference and prediction can be pushed to edge devices, close to where new data is collected. - Sean Maday, Google ... Edge AI is where edge computing and artificial intelligence come together to provide intelligence to the edge. This is the next gold mine. There is a lot of innovation happening at the edge in terms of low-power technology—for example, the way DNN training is done with reinforcement agents. It is this innovation that will bring a revolution to such industries as precision medicine, Industry 4.0 and Intelligent IoT. - Shailesh Manjrekar, WekaIO ... Edge computing will play a key role for companies looking to get ahead in the experience economy. Core benefits like low latency, scalability and security create superior digital experiences. Adoption has been hindered without a standard set of tools to build and deploy edge-enabled apps, but once these emerge, edge computing will transform business and digital services across all verticals. - Kris Beevers, NS1
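Maday's split (heavy training in the cloud, lightweight inference at the edge) can be sketched in a few lines of Python. Everything here, from the closed-form "training" to the JSON hand-off and the edge-side predictor, is an illustrative stand-in rather than any real framework's API:

```python
import json

# "Cloud" side: fit a one-feature linear model with the closed-form
# least-squares solution, standing in for data-intensive training.
def train(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return {"slope": slope, "intercept": my - slope * mx}

# Hand-off: serialize the tiny model so it can ship to edge devices.
def export_model(model):
    return json.dumps(model)

# "Edge" side: cheap inference close to where new data is collected.
def predict(model_json, x):
    m = json.loads(model_json)
    return m["slope"] * x + m["intercept"]
```

In practice the hand-off would be a compiled model artifact (TFLite, ONNX, and the like) rather than JSON, but the shape of the pipeline is the same: expensive fitting happens centrally, and only the small trained artifact travels to the device.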


The second wave of fintech disruption: three trends shaping the future of payments

Fortunately, we are standing on the cusp of fintech’s second major wave of disruption – and this one is going to be the real game-changer. Products, processes and ways of working are designed for digital and, crucially, have payments technology embedded in the user experience from start to finish. If you call an Uber, for example, you never think about the payment – you just request a ride, get in and get out. It’s completely frictionless. Why, then, can we not have that experience in everything we do? When shopping online, sites typically ask me for different information, deliver varying experiences and operate payments in a range of ways. As a consumer that’s frustrating, often confusing, and encourages me to take my money elsewhere. Extracting services like payments and re-bundling them into the tech stack will help consumer-facing companies overcome many of these issues and provide a far better experience to their customers. Digital wallets will be at the heart of this change. They are the enabling technology that will allow payments to sit in the background, independent of the banking system, making everything more seamless.


Exploding Security Perimeter, Remote Worker Ramp Spotlights SD-WAN Limits

While it’s certainly possible to deploy SD-WAN hardware to every employee, it isn’t always economically or operationally feasible, let alone necessary. Instead, many enterprises are scaling up their use of virtual private networks (VPNs), already used by remote workers, to meet demand. This approach, however, isn’t without challenges, said Fortinet CMO John Maddison, in an interview with SDxCentral. A typical enterprise with 10,000 employees might have had 1,000 workers who needed remote access to the data center, he said. With the onset of the pandemic, “suddenly everybody in the company needs SSL VPN access.” “A lot of our customers actually were able to spin up a teleworker solution very quickly,” Maddison said. Fortinet’s enterprise and data center firewalls, which feature purpose-built security ASICs, can support tens of thousands of concurrent VPN tunnels, which Maddison says few others can achieve. “Most of our customers were able to switch on almost 10x worth of SSL VPN in the data center without a drop for their systems,” he said. “A lot of systems, that our competitors have, had a lot of problems because it was just doing that in CPU or through a standalone system.”



3 common misconceptions about PCI compliance

The first misconception primarily impacts vendors. It’s the misconception that just because a piece of equipment doesn’t process or transmit credit card data, it’s not in the scope of PCI. This simply isn’t true. There are essentially two types of systems in scope. One type is any system that directly touches credit card information. The second is any larger connected system that touches the first type of system. ... The second misconception involves what PCI compliance fundamentally tries to protect. While the PCI DSS guidelines have good recommendations for general security, they’re specifically trying to protect payment-related information. If you’re implementing the controls well, they do a solid job of increasing overall security. But at the end of the day, the scope is intentionally narrow. That’s why one of the biggest issues I see companies struggling with is how to adequately define their card data environment (CDE). Getting the scope of the CDE right is the most essential thing you can do, and everything else builds on top of that. This is where understanding the card data flow comes into play. You must be able to articulate how a credit card transaction is created and transmitted from beginning to end.


Amazon puts one-year moratorium on police use of facial recognition software

Much of the dispute over police departments using it boils down to the confidence threshold that users set for Rekognition. After the study from Buolamwini and Raji made headlines, Amazon repeatedly said in documents that all police departments should use it at a 95% threshold. Police departments have already said they do not do this, with most using the software at its default threshold of 80%. All of the studies done by researchers use the 80% threshold as the benchmark. Despite the issues with Rekognition, Amazon has openly sold it widely to police departments and security forces across the world. The company tried to sell the program to the Immigration and Customs Enforcement agency but will not officially say how many police departments are using the software. When pressed on the issue in February, Amazon Web Services CEO Andy Jassy told PBS that company officials would stop any police department from using Rekognition if they found out it was being misused, but the company has released no further information about how this would work or how it would even know how a police department was using it.


The ten competitive technology-driven influencers for 2020

FinTech disruptors have been finding a way in. Disruptors are fast-moving companies, often start-ups, focused on a particular innovative technology or process in everything from mobile payments to insurance. And, they have been attacking some of the most profitable elements of the financial services value chain. This has been particularly damaging to the incumbents who have historically subsidized important but less profitable service offerings. In our recent PwC Global FinTech Survey, industry respondents told us that a quarter of their business, or more, could be at risk of being lost to standalone FinTech companies within 5 years. ... Around the world, the middle class is projected to grow by 180% between 2010 and 2040; Asia’s middle class is already larger than Europe’s. By 2020, the majority share of the population considered “middle class” is expected to shift from North America and Europe to Asia-Pacific. And over the next 30 years, some 1.8 billion people will move into cities, mostly in Africa and Asia, creating one of the most important new opportunities for financial institutions. These trends are directly linked to technology-driven innovation. 


What is NLP? Why does your business need an NLP based chatbot?

When it comes to Natural Language Processing, developers can train the bot on the many interactions and conversations it will go through, and provide multiple examples of the content it will come in contact with; this gives it a much wider basis from which to assess and interpret queries effectively. So, while training the bot sounds like a very tedious process, the results are very much worth it. Royal Bank of Scotland uses NLP in its chatbots to enhance customer experience through text analysis, interpreting trends from customer feedback in multiple forms such as surveys, call center discussions, complaints and emails. This helps it identify the root cause of customer dissatisfaction and improve its services accordingly. ... NLP-based chatbots can help enhance your business processes and elevate customer experience to the next level while also increasing overall growth and profitability. They provide technological advantages that help you stay competitive in the market, saving time, effort and costs, which in turn leads to increased customer satisfaction and engagement in your business.


State at the Edge: An Interview with Peter Bourgon

Arguably the hardest part of distributed systems is dealing with faults. Computers are ephemeral, networks are unreliable, topologies change — the fallacies of distributed computing are well-known, and accommodating them tends to dominate the engineering effort of successful systems. And if your system is managing state, things get much more difficult: maintaining a useful consistency model for users requires extremely careful coordination, with stronger consistency typically demanding commensurate effort. This inevitably corresponds to more bugs and less reliability. CRDTs, or conflict-free replicated data types, are a relatively novel state primitive that give us a way to skirt around a lot of this complexity. I think of them as carefully constructed data types, each combined with a specific set of operations. Over-simplifying, if you make sure the operations are associative, commutative, and idempotent, then CRDTs allow you to apply them in any order, including with duplicates, and get the same, deterministic results at the end. Said another way, CRDTs have built-in conflict resolution, so you don’t have to do that messy work in your application.
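The properties Bourgon lists (associative, commutative, idempotent operations whose merge is order-insensitive) are easy to see in the simplest textbook CRDT, a grow-only counter. A minimal Python sketch, purely illustrative:

```python
class GCounter:
    """Grow-only counter CRDT: each node increments only its own
    slot; merge takes the per-node max, which makes merging
    associative, commutative, and idempotent."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        merged = GCounter(self.node_id)
        for k in set(self.counts) | set(other.counts):
            merged.counts[k] = max(self.counts.get(k, 0),
                                   other.counts.get(k, 0))
        return merged
```

Replicas can exchange and merge state in any order, including with duplicated messages, and still converge on the same value: exactly the built-in conflict resolution described above.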



Quote for the day:

"People will follow you when you build the character to follow through." -- Orrin Woodward

Daily Tech Digest - June 11, 2020

How to decode a data breach notice

Data breach notifications are meant to tell you what happened, when and what impact it may have on you. You’ve probably already seen a few this year. That’s because most U.S. states have laws that compel companies to publicly disclose security incidents, like a data breach, as soon as possible. Europe’s rules are stricter, and fines can be a common occurrence if breaches aren’t disclosed. But data breach notifications have become an all-too-regular exercise in crisis communications. These notices increasingly try to deflect blame, obfuscate important details and omit important facts. After all, it’s in a company’s best interest to keep the stock markets happy, investors satisfied and regulators off their backs. Why would it want to say anything to the contrary? ... Hackers aren’t always caught in the act. In a lot of cases, most hackers are long gone by the time a company learns of a breach. When a company says it took immediate steps, don’t assume it’s from the moment of the breach. Equifax said it “acted immediately” to stop its intrusion, which saw hackers steal nearly 150 million consumers’ credit records. But hackers had already been in its system for two months before Equifax found the suspicious activity. What really matters is when did the security incident start; when did the company learn of the security incident; and when did the company inform regulators of the breach?


Uber researchers investigate whether AI can behave ethically

While reinforcement learning is a powerful technique, it often must be constrained in real-world, unstructured environments so that it doesn’t perform tasks unacceptably poorly. (A robot vacuum shouldn’t break a vase or harm a house cat, for instance.) Reinforcement learning-trained robots in particular have affordances with ethical implications insofar as they might be able to harm or to help others. Realizing this, the Uber team considered the possibility that there’s no single ethical theory (e.g., utilitarianism, deontology, and virtue ethics) an agent should follow, and that agents should instead act with uncertainty as to which theory is appropriate for a given context. The researchers suggest ethical theories can be treated according to the principle of Proportional Say, under which the theories have influence proportional only to their credence and not to the particular details of their choice-worthiness in the final decision. They devise several systems based on this that an agent might use to select theories, which they compare across four related grid-world environments designed to tease out the differences between the various systems.
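As a loose intuition for Proportional Say, imagine each ethical theory casting one vote for its top-ranked action, weighted only by the agent's credence in that theory and not by how strongly the theory cares about the outcome. The sketch below is a toy reading of the principle, not the paper's actual voting systems:

```python
def proportional_say(theories, actions):
    """Each theory votes for its top-ranked action, weighted only by
    the agent's credence in that theory, never by the magnitude of
    the theory's choice-worthiness judgments (a toy reading of the
    Proportional Say principle)."""
    scores = {a: 0.0 for a in actions}
    for credence, rank in theories:
        scores[rank(actions)] += credence  # one credence-weighted vote
    return max(scores, key=scores.get)
```

With a 0.6-credence theory preferring one action and a 0.4-credence theory preferring another, the first wins, however extreme the minority theory's stakes are claimed to be; that insulation from stake-inflation is the point of the principle.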


Realigning Priorities and Building a Bridge Between Security and Development

It’s a multifaceted issue that should be understood from both angles. Misaligned business priorities and processes can create an array of problems, from a lack of innovation for fear of increased risk to unforeseen vulnerabilities falling through the cracks during the development process. And when developers aren’t empowered to improve their skills with educational tools like Security Labs, there’s less of a chance that they’ll feel prepared or appreciated when security comes knocking. To begin addressing these concerns, changes must come from the top down, trickling through each team to impact their goals and methods for an overall healthier AppSec program. When they have direction, developers and security leaders can find common ground by building a working relationship that benefits both teams (and ultimately, the entire organization). Three key steps to fixing the misalignment between security and development include: shifting to a security-focused mindset across the business; implementing a security champions program to encourage developer participation; and making it easier for the development team to write secure code.


Working From Home With Robots

To prepare for working from home, the company’s safety team wrote new guidelines for engineers taking Spot back with them, though they mainly involve keeping the public a safe distance from the robots. Seifert recalls one incident when someone who didn’t know Spot came up and gave it a bear hug. “People unfamiliar with robots want to treat Spot like a dog, and calmly approaching a dog before bending over for pets and hugs is a reasonable thing to do,” he says. “Thankfully no one got hurt, but Spot has some really powerful motors and a lot of pinch points.” Now, engineers know to warn anyone who approaches the robots to keep a safe distance. ... Seifert says he gets a few more stares than this. “More than once I’ve witnessed a car drive by, only to see it a few seconds later reverse back into view and then stop for a few minutes while the driver records a video on their cell phone,” he says. But his parents live in a friendly neighborhood, so most neighbors have just gotten used to the sight of him and Spot, out for a walk. Like Seifert, Barry’s workflow involves writing code, loading it into Spot, testing out the robot, and then analyzing the results. But instead of having Spot navigate homemade mazes, he’s been flexing its robotic arm, scattering whatever random items he can find around the house to act as a picking challenge.


Digital transformation: A map for the path forward

Organizations need a new cloud-enabled supply chain to back up the ambition at the digital edge. Moving to cloud-native application development and leveraging API-driven microservice architectures can increase agility and reduce time to value. Once again, there are two distinct journeys, which also have the potential to be interlocked to create compound benefit for the organization. The first journey is to renovate legacy platform architectures and convert the IT supply chain into a more agile and scalable services engine. This is powered by a shift to software-defined and cloud-based service delivery models, which is required to address the siloed nature of legacy back-end architectures. As organizations move to explore the scale of the digital edge, it is possible that the transactional systems that support core functionality―such as ordering, payment, supply chain, ERP, HR, and finance―will struggle to cope with the unpredictable demand. From online shopping to unresponsive e-learning platforms, many of the back-end systems and services that underpin these experiences were not designed to scale on unexpected demand.


Rebooting Education For The Digital Age

“Working in collaboration with businesses across engineering and technology industries, we create exciting projects about these sectors and turn them into free bootcamps for schools. We then map out these projects to national curriculum standards, deliver them through our e-learning platform, and train teachers to sustainably embed them into their subjects.” “Our focus is on creating more exciting projects, personalising the experience for learners, and opening up the platform for other people and organisations to deliver workshops and bootcamps,” he adds. By design, the Dicey Tech model relies on collaborating with universities and other companies to deliver modern learning experiences. The business has a particularly good relationship with Manchester City Council and Manchester Science Partnerships, through which it is helping students from disadvantaged backgrounds experience new ways of learning and teaching, and access equipment and further resources. During the pandemic, Dicey Tech has been putting its 3D printing capabilities to use by making visors for frontline NHS staff. Also conscious of the need to keep children engaged in education at home, the company created a free learning challenge.



Tackling the curve: 7 IT experts share new working predictions for businesses

Steve Blow, UK systems engineering manager at Zerto, points out that: “Google reported that it had blocked more than 18 million COVID-19 related phishing emails every day during the first week of April. It is not surprising that cybercriminals are taking advantage by executing ransomware attacks amidst this pandemic, as many organisations, especially those in healthcare or public sector, face enormous pressures to keep systems up and running.” Blow goes on to explain that: “Cybercriminals often exploit vulnerabilities in employee emails, so it is crucial to have the right cyber-defences in place to avoid a disaster where critical data could be at risk – especially when it comes to government or healthcare organisations. Having appropriate role-based access control and an extensive tiered security model will help minimise risk. But, the attack itself is only half of the problem because, without sufficient recovery tools, the resulting outage will cause loss of data and money, as well as reputational harm. Over the coming months it is important that we see more organisations utilising tools that allow them to roll back and recover all of their systems to a point in time just before an attack.”



Turns out artificial brains need "sleep" too, but do they dream?

The researchers found the spiking neural network became increasingly unstable after extended periods of unsupervised dictionary learning. The team then used spiking neural network computer simulations to better understand exactly what led to this instability. The researchers discovered that, after extended training, the neurons within the system began to fire regardless of the input signals they received. In an attempt to stabilize the networks, the team implemented various types of noise, with Gaussian noise having the best results. The research team postulates that this is because Gaussian noise may mimic the inputs biological neurons receive throughout slow-wave sleep. "Why is slow-wave sleep so indispensable?" said senior author of the study Garrett Kenyon. "Our results make the surprising prediction that slow-wave sleep may be essential for any spiking neural network, or indeed any organism with a nervous system, to be able to learn from its environment." Although further research is necessary, artificial "sleep" may be imperative to maintaining stability in spiking neural networks. Next, the researchers plan to use this algorithm on Intel's Loihi neuromorphic chip.
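As a cartoon of the mechanics (and only that; this is not the study's algorithm), here is a single leaky integrate-and-fire neuron driven by zero-mean Gaussian noise, the kind of input the researchers compare to slow-wave sleep:

```python
import random

def lif_step(v, i_in, leak=0.9, threshold=1.0):
    """One leaky integrate-and-fire update; returns (new_v, spiked)."""
    v = leak * v + i_in
    if v >= threshold:
        return 0.0, True          # spike, then reset the membrane
    return v, False

def noise_phase(v, steps=1000, sigma=0.3):
    """Drive the neuron with zero-mean Gaussian noise for a while,
    loosely mimicking slow-wave-sleep-like input."""
    spikes = 0
    for _ in range(steps):
        v, fired = lif_step(v, random.gauss(0.0, sigma))
        spikes += fired
    return v, spikes
```

In the actual study the noise is injected into a full spiking network during dictionary learning; the single neuron above only illustrates what "driving with Gaussian noise" means at the unit level.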



DeepMind hopes to teach AI to cooperate by playing Diplomacy

DeepMind, the Alphabet-backed machine learning lab that’s tackled chess, Go, StarCraft II, Montezuma’s Revenge, and beyond, believes the board game Diplomacy could motivate a promising new direction in reinforcement learning research. In a paper published on the preprint server arXiv.org, the firm’s researchers describe an AI system that achieves high scores in Diplomacy while yielding “consistent improvements.” AI systems have achieved strong competitive play in complex, large-scale games like Hex, shogi, and poker, but the bulk of these are two-player zero-sum games where a player can win only by causing another player to lose. That doesn’t necessarily reflect the real world; tasks like route planning around congestion, contract negotiations, and interacting with customers all involve compromise and consideration of how the preferences of group members coincide and conflict. Even when AI software agents are self-interested, they might gain by coordinating and cooperating, so interacting among diverse groups requires complex reasoning about others’ goals and motivations.



Minimising corporate security risks with (XaaS) Everything-as-a-service

The sudden demand for remote working as a result of social distancing to reduce the spread of COVID-19 was something that many businesses had not prepared for, and it left lots of us rushing to find a solution. However, in the hurry to implement a solution, businesses may have failed to carefully consider the potential for cyber threats, and as a result nearly three-quarters of UK businesses now think that home working is putting their organisations at risk. What’s more, many organisations have overridden their security rules to ensure workers are quickly set up to work from home. Private end devices such as laptops, tablets and smartphones, which are not protected by the corporate network and uniform security standards, are being used now more than ever. Not to mention, there are no IT professionals on-site to monitor traffic and watch for suspicious activity. There are a number of solutions that businesses can employ to ensure that their workforce continues to work as normal with all their applications seamlessly integrated, and the security of these solutions must be the number one priority.




Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower

Daily Tech Digest - June 10, 2020

AI: Its Implications and Threats

Evolution of the workforce, for example, can pose a risk. AI can replace much of the workforce, which means loss of employment for much of the labour force. The uncertainty of how exactly AI would affect the economy can also be challenging. Since the world is getting smaller, AI would need to work by rules that stand globally, rules that allow for effective interaction all over the world. Imposing such rules isn't at all an easy task. Regulation of AI is tricky too; with the introduction of new technologies, the older regulatory rules can easily become obsolete. The development of AI also allows for malpractices, such as hacking or AI trafficking. Built-in bias allows the programmer of an AI to introduce, either intentionally or unintentionally, a bias. An artificial intelligence working with a bias, or learning from biased data, will also produce biased results. This can give an arbitrary group, in some cases, an unfair advantage over the others, although the outcome of a biased AI being ‘unpredictable’ isn't any less of a nuisance. “It’s really easy to give AI the wrong problem to solve,” which, she says, can be quite destructive.


Bankers say artificial intelligence will separate winners and losers

Banks recognise the importance of investing in technology to improve customer services, with AI’s potential to personalise customer experience seen as an attractive prospect. Some 77% of respondents said AI will separate the winners and the losers. Digital advisers and voice-assisted engagement channels will be the destination for a large part of AI investments, said the report. Beyond AI, there has been an increased acceptance that new technology will drive banking over the next five years, with 66% of banking executives agreeing, compared with 42% in the same survey in 2019. Almost half (45%) of the 300 senior executives questioned globally are planning to transform into digital ecosystems to improve customer experience and introduce new revenue streams. This will see a shift in the way banks develop software with an increase in use of DevOps. Most respondents (84%) agreed that DevOps will drive transformation in core banking. The report also said that the Covid-19 pandemic has accelerated digital transformation at banks.


Measuring AI Performance On Mobile Devices And Why It Matters

The key here is to better understand what a specific benchmark metric is actually testing. Does the test represent as close to real-world workloads as possible? An ideal benchmark uses actual applications that a consumer would use, but short of that it could employ the same core software components of popular apps instead, to represent realistic performance expectations. And in this case, that means we need to understand which NNs these benchmark tools are testing against, and what mathematical precision and AI algorithms are being used to process workloads on them. What makes for a good AI benchmark for mobile devices is a relatively deep, nuanced subject, but the long and short of it is that virtually all mobile NPUs (Neural Processing Units, or dedicated AI engines) employ either quantized INT8 precision or FP16 floating-point precision, making use of popular NNs like ResNet-34 or Google’s DeepLab-v3 for image classification and segmentation in apps, for example. Is that a cat or a dog? What sort of color balance should be applied in this camera shot? These are the kinds of questions the AI is trying to infer answers to from the phone’s environment, in an imaging workload at least, though there are many others.
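The INT8 option mentioned above amounts to mapping floating-point tensor values onto an 8-bit integer grid plus a scale factor. A minimal symmetric-quantization sketch follows; it is illustrative only, as real NPUs add zero-points, per-channel scales, and calibration steps:

```python
def quantize_int8(values):
    """Symmetric INT8 quantization: map floats into [-127, 127]
    using a single scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0:
        return [0] * len(values), 1.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate floats; rounding error is at most scale / 2."""
    return [q * scale for q in quantized]
```

The appeal on mobile is that 8-bit multiplies are far cheaper in silicon and memory bandwidth than 32-bit floats, at the cost of this bounded rounding error, which is why benchmark numbers at INT8 and FP16 are not directly comparable.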


Blazor RenderTree Explained

Blazor is a new single-page application (SPA) framework from Microsoft. Unlike other SPA frameworks such as Angular or React, Blazor relies on the .NET framework instead of JavaScript. Blazor supports many of the same features found in these frameworks, including a robust component development model. The departure from JavaScript, especially when exiting a jQuery world, is a shift in thinking about how components are updated in the browser. Blazor’s component model was built for efficiency and relies on a powerful abstraction layer to maximize performance and ease of use. Abstracting the Document Object Model (DOM) sounds intimidating and complex; however, with modern web applications it has become the norm. The primary reason is that updating what has rendered in the browser is a computationally intensive task, and DOM abstractions are used to mediate between the application and the browser to reduce how much of the screen is re-rendered.
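The general idea behind such an abstraction is to diff two versions of a lightweight node tree and apply only the resulting patches to the real DOM. The Python sketch below illustrates that diffing step with toy (tag, text, children) tuples; it is not Blazor's actual RenderTree format, which is built from C# render frames:

```python
def diff(old, new, path="root"):
    """Compare two toy node trees and emit minimal patch operations.
    Nodes are (tag, text, children) tuples; `path` identifies where
    in the tree each patch applies."""
    patches = []
    if old is None:
        patches.append(("insert", path, new))
    elif new is None:
        patches.append(("remove", path))
    elif old[0] != new[0]:
        patches.append(("replace", path, new))
    else:
        if old[1] != new[1]:
            patches.append(("set_text", path, new[1]))
        for i in range(max(len(old[2]), len(new[2]))):
            o = old[2][i] if i < len(old[2]) else None
            n = new[2][i] if i < len(new[2]) else None
            patches.extend(diff(o, n, f"{path}/{i}"))
    return patches
```

Blazor performs an analogous comparison between the previous and newly built render trees after each component render, sending only the differences to the browser instead of re-rendering the whole page.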


Email is biggest security risk

With organisations spending big on cloud, and not so much on keeping older on-premises kit up to date, there has been an increase in obsolete and unpatched network devices that contain software vulnerabilities, which NTT said introduces risk and exposes organisations to information security threats. The remarks were made in a report from the global giant that was based on more than 1,000 clients, covering over 800,000 network devices in five regions, across multiple industry sectors. In the report, NTT found 46.3% of organisations' network assets were ageing or obsolete. It said obsolete devices had, on average, twice as many vulnerabilities per device when compared with ageing and current ones, at 42.2 security advisories per device. It said such risk is intensified when a business does not patch a device or revisit the operating system version for the duration of its lifetime, which NTT said many do not do. "In this 'new normal' many businesses will need, if not be forced, to review their network and security architecture strategies, operating, and support models to better manage operational risk," NTT executive vice president of intelligent infrastructure Rob Lopez said, in light of more people working remotely due to the COVID-19 pandemic.


CSO's Guide to 'Employee-First' Security Operations During COVID-19 & Beyond

Schedule regular (if not daily) meetings to ensure issues are being addressed and strategies are being changed as needed in real time. This team should have full business representation, including executive staff, regional leaders, and security operations representatives. Although many businesses may currently have these teams in place, it's important that proactive planning remains a top priority even as offices begin to reopen. This team, and the lessons it provides, will be crucial for any future pandemics or crises that pose a threat to business continuity, allowing employees to act faster and make informed decisions. Due to the rise of remote work and expanded attack surfaces, phishing attacks have also seen a significant acceleration, with employees being enticed by fake password management, executive updates, and GoFundMe messages. To decrease the impact of these attacks, it's important to keep employees informed of the latest threats and how they can protect themselves or seek support if they have become a victim. Employee education is essential, including training on how to lock down home routers with complex passwords and leverage data loss prevention (DLP) technologies.


Managing the Security of Cloud-Native Architectures

It’s a DevOps world—everyone’s trying to move faster. Productivity increases, but so does the security risk. Yesterday, the best practice was to re-architect the code before it went into production on the standard operations platform chosen by IT. Today, in the interest of speed, organizations are deploying applications developed on containers straight into production, managing them with Kubernetes and running them somewhere in the cloud (potentially still on-premises, but frequently on a public cloud service). In this model, both the developers and the operations team need to become more security-aware, and security must be fully integrated into the software life cycle. Many of our customers are experimenting with technologies from different vendors, running on multiple cloud providers, and even deploying applications across multiple platforms at once. This keeps options open for cost optimization or for using the stack that best fits a given need, and avoids vendor lock-in, but it can be hard on developers, particularly at the serverless level, where standards are still emerging.


AI has a big data problem. Here's how to fix it

With data from a pre-COVID environment no longer matching the real world, supervised algorithms are running out of examples to base their predictions on. And to make matters worse, AI systems don't flag their uncertainties to their human operators. "The AI won't tell you when it actually isn't confident about the accuracy of its prediction and needs a human to come in," said Barber. "There are many uncertainties in these systems. So it is important that the AI can alert the human when it is not confident about its decision." This is what Barber described as an "AI co-worker situation", where humans and machines interact to make sure that no gaps are left unfilled. In fact, it is a method within artificial intelligence that is slowly emerging as a particularly efficient one. Dubbed "active learning", it consists of establishing a teacher-learner relationship between AI systems and human operators. Instead of feeding the algorithm a huge labeled dataset and letting it draw conclusions – often in a less-than-transparent way – active learning lets the AI system do the bulk of data labeling on its own and, crucially, ask questions when it is in doubt.
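The "teacher-learner" loop described here is typically implemented as uncertainty sampling: the model labels what it is confident about and escalates the rest to a human. A minimal sketch follows — the nearest-centroid model, margin-based confidence measure, and threshold are illustrative choices of mine, not details from the article:

```python
def centroid(points):
    # Mean of a list of equal-length feature vectors.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_with_confidence(centroids, x):
    # Confidence = normalized margin between the two nearest class centroids;
    # a near-zero margin means the model genuinely cannot tell the classes apart.
    dists = sorted((distance(x, c), label) for label, c in centroids.items())
    (d1, best), (d2, _) = dists[0], dists[1]
    return best, (d2 - d1) / (d2 + d1 + 1e-9)

def active_learning_pass(labeled, unlabeled, ask_human, threshold=0.2):
    # "Train" on the labeled pool: one centroid per class.
    by_class = {}
    for x, y in labeled:
        by_class.setdefault(y, []).append(x)
    centroids = {y: centroid(pts) for y, pts in by_class.items()}
    # Self-label confident examples; escalate uncertain ones to the human.
    for x in unlabeled:
        label, conf = predict_with_confidence(centroids, x)
        labeled.append((x, ask_human(x) if conf < threshold else label))
    return labeled
```

The point of the pattern is visible in the last loop: the human is consulted only for the low-margin examples, so labeling effort concentrates exactly where the model admits it is unsure.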



Work from Home: Changing Enterprise Risk and Careers

Interestingly, about three-fourths of the organizations surveyed said they have more than 76% of their employees working from home — that’s up from 25% in 2019. Still, a third of those surveyed said their organization is ill-prepared or not prepared to support remote working. Yet 75% of businesses transitioned to remote working within 15 days. “Surprisingly, less than a third expressed cost or budget problems, demonstrating the urgency to support their business. Additionally, more than half (54%) expressed that COVID-19 has accelerated migration of users’ workflows and applications to the cloud,” stated the report. How are survey respondents securing their staff who work from home? The survey found the most common measures to be endpoint security, firewalls, virtual private networks, and multi-factor authentication. The 2020 Remote Work-From-Home Cybersecurity Report is based on a survey of 413 security decision makers, conducted in May of this year, within multiple industries, including financial services, healthcare, manufacturing, high-tech, government, and education.


Q&A on the Book Learning to Scale

If we go back to its origins, "lean" refers to the study of Toyota management practices outside of Toyota. It started with an MIT research project in 1985, which compared the Japanese and Western approaches to automotive manufacturing. At that time Toyota was already showing exceptional performance, and it ended up becoming the world's largest manufacturer 20 years later. What Toyota understood early on is that the Western approach to industrialization, with a strong focus on processes and management by objectives, leads to employee disengagement and poor performance. An industrial operation is a very complex system, involving thousands of people for a single car, and subject to tens of thousands of daily problems. You need a skilled and creative workforce to be able to adapt to the resulting complexity. Toyota managers realized that most of these problems were the result of people's misconceptions about their work. They developed the Toyota Production System, which we now call the Thinking People System, as a comprehensive approach to developing team members by helping them study these problems in depth ...




Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - June 09, 2020

Exploring Edge Computing as a Complement to the Cloud

The acceleration of how organizations use edge computing may lead to new possibilities in cloud computing. “[The edge] is a complement to public cloud and your private data center,” said Wen Temitim, CTO for StackPath. “It’s not replacing public cloud.” He said the edge can be where network-sensitive applications run, for example. That will require sizing up how those applications might run differently based on traffic flows and how much of the population needs to be served, Temitim said. “The biggest challenge is rethinking that application architecture.” The first step will be to identify components of the application that need to evolve to run at the edge, he said. The definition of the edge can be relative, Temitim said. For example, hyperscalers and organizations may have a data center-focused edge. Others may see the edge as the collection of Tier 1 carrier hotels where different companies interconnect. Price said Cisco sees different slices of what the edge is; his primary portfolio item is a control center for cellular enablement and management for more than 150 million devices. “The edge is really the devices connecting to the cellular network and managing traffic flows from those customers,” Price said.


Why a Crisis Calls for Bulletproofing Your Applications and Infrastructure

It’s not a time to be taking chances, because any time you have rapid changes in demand, there is a set of ripple effects on other applications — you may get the noisy-neighbor effect. In the era of virtualization, you may have had 100 percent of your virtual CPU allocated to your application. If, however, the underlying physical CPU resources were also shared with an unmonitored compute resource hog, your applications would be negatively impacted. The same concepts hold true today, but now it’s writ large in the cloud. Capacity-on-demand cloud architectures are only as performant as the underlying guaranteed, actual resources. Over-size and over-reserve them, and you will waste money. Under-size and you will impact performance — thus the importance of monitoring everything, all the time. You’ve got extra demands, and it’s impacting your shared infrastructures. The potential for noisy neighbors goes up as you increase your number of apps. It becomes even more imperative to monitor.


Data Monetization: New Value Streams You Need Right Now

Data and insights don’t have to be sold or exchanged directly. Sometimes baking data or analytics into one of your existing products or services can instead bolster its competitiveness and benefits, and command a price premium. For example, a forecasting tool that has access to external datasets such as open data, syndicated data, social media streams and web content, and can automatically generate leading indicators of business performance, will set itself apart from stand-alone “dumb” forecasting tools that consider only a company’s own transaction history. Another good example is IoT-enabled automobile components that continually integrate data collected from other automobiles and drivers, and can tune their own performance and/or prolong their lifespan. Rather than merely infusing existing products or services with data, go a step further and digitalize them altogether. For example, Kaiser Permanente implemented secure messaging, image sharing, video consultations and mobile apps, and now has more virtual patient visits than in-person doctor visits in some geographies. In addition, it can connect patients with specialists more quickly than ever, and 90 percent of physicians say this digitalization has allowed them to provide higher-quality care for their patients. Digitalizing solutions often requires the wholesale redesign of products, services, processes and customer journeys to integrate and take advantage of data.


Refactor vs. rewrite: Deciding what to do with problem software

When a programmer refactors software, the goal is to improve the internal structure of the code without altering its external behavior. For example, developers remove redundant code or break a particularly task-heavy application component into several objects, each with a single responsibility. The Extreme Programming development approach, a concept known as merciless refactoring, stresses the need to continuously refactor code. Theoretically, programmers who refactor continuously make sections of code more attractive with every change. Refactored code should be easily understood by other people, so developers can turn code that scares people into code that people can trust and feel comfortable updating on their own. ... Rather than read and analyze complex, ugly code for refactoring, programmers can opt to just write new code altogether. Unlike refactoring, code rewrites sound relatively straightforward, since the programmers just start over and replace the functionality. However, it isn't nearly that simple. To successfully rewrite software, developers should form two teams: one that maintains the old application and another that creates the new one.
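The single-responsibility refactoring described above can be made concrete with a toy example (mine, not from the article): the "before" function mixes validation, computation, and formatting; the "after" version extracts each concern. Any test asserting on the output passes unchanged — which is exactly what "without altering its external behavior" means.

```python
# Before: one function doing validation, math, and formatting.
def invoice_before(items):
    total = 0.0
    for name, qty, price in items:
        if qty < 0 or price < 0:
            raise ValueError("negative quantity or price")
        total += qty * price
    return f"Invoice total: {total:.2f}"

# After: each responsibility extracted, external behavior unchanged.
def validate(items):
    for _, qty, price in items:
        if qty < 0 or price < 0:
            raise ValueError("negative quantity or price")

def subtotal(items):
    return sum(qty * price for _, qty, price in items)

def invoice_after(items):
    validate(items)
    return f"Invoice total: {subtotal(items):.2f}"
```

The payoff is that `validate` and `subtotal` can now be reused and tested in isolation, and a future reader can trust each small piece without re-deriving the whole function.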


How modern IT operational services enable self-managing, self-healing, and self-optimizing

The classic issue is that when there’s a problem, the finger-pointing or the blame-game starts. Even triaging and isolating problems in these types of environments can be a challenge, let alone having the expertise to fix the issue. Whether it’s in the hardware, the software layer, or on somebody else’s platform, it’s difficult. Most vendors, of course, have different service level agreements (SLAs), different role names, different processes, and different contractual and pricing structures. So the whole engagement model, even the vocabulary they use, can be quite different; ourselves included, by the way. The more vendors you have to work with, the more dimensions you have to manage. And then, of course, COVID-19 hits, and our customers working with multiple vendors have to rely on how all those vendors are reacting to the current climate. And they’re not all reacting in a consistent fashion.


You don’t need SRE like Google. You need your own SRE.

You are not replacing your current ops team, your sysadmins, with software engineers. You need your ops team. They know how your custom-built infrastructure and systems work. They know their idiosyncrasies. They know that when Chicago opens a ticket to say they are offline again, it's the network. Yes, it's always the network, but the sysadmins know who to ping at Equinix to get it restored pronto. They know how the options trading desk system slows to a grind on Expiration Friday, and that you just ignore those tickets from traders that day. And even if you wanted to get rid of all the sysadmins, could you afford to hire that many software engineers to replace them all? You can barely fill all your open slots on the dev teams. What you need to do is complement your ops teams with software engineers who can understand what the teams do day in, day out and which tasks are repetitive and typical, and then develop tools for automated remediation. These software engineers should be embedded in the ops team, not a separate team on the outside. Think Squads.


Overcome Privacy Shaming During and After Pandemic

Crises elevate the demand for data while increasing the risk of data misuse. Data and analytics leaders can overcome the inherent reluctance around data sharing by developing trusted internal and external data sharing programs. One of the ways to do this is to combat privacy shaming. The hype around third parties encroaching on individual data protection is oftentimes played out through that emotional tactic of privacy shaming to deter and even stop data sharing. Often this leads to a one-size-fits-all mentality that data sharing is bad. Data and analytics leaders must overcome this reluctance by aligning privacy practices with business value resiliency, while maximizing societal benefit. Champion a new culture around data sharing that illustrates how applying privacy awareness to decisions involving personal data sharing creates value. Change the emotional response of privacy shaming to one that is grounded in a proper understanding of organizational data protection requirements and policies. Armed with such knowledge, enterprise leaders will be able to better communicate what privacy is and is not. More importantly, they’ll be better able to convey the need to balance personal data rights with the freedom to conduct business and be innovative to solve complex challenges like coronavirus.


REST API Security Vulnerabilities

Authentication attacks are processes with which a hacker attempts to exploit the authentication process and gain unauthorized access. Bypass attacks, brute-force attacks (on passwords), verifier impersonation, and reflection attacks are a few types of authentication attacks. Basic authentication, authorization with default keys, and authorization with credentials are a few protection measures to safeguard our APIs. Cross-site scripting, also known as an XSS attack, is the process of injecting malicious code as part of the input to web services, usually through the browser, to reach a different end user. The malicious script, once injected, can access any cookies, session tokens, or sensitive information retained by the browser, or can even rewrite the whole content of the rendered pages. XSS is categorized into server-side XSS and client-side XSS. Traditionally, XSS consists of three types: reflected XSS, stored XSS, and DOM-based XSS. Cross-site request forgery, also known as CSRF, sea-surf, or XSRF, is a vulnerability through which an end user of a web application is forced (by forged links, emails, or HTML pages) to execute unwanted actions on a currently authenticated session.
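Two of the mitigations implied above can be sketched in a few lines of framework-agnostic Python (an illustration of the principles, not a complete defense): escape untrusted input before rendering it into HTML so an injected `<script>` tag becomes inert text, and validate a per-session CSRF token using a constant-time comparison.

```python
import hmac
import html
import secrets

def render_comment(user_input: str) -> str:
    # Escaping neutralizes injected markup before it reaches the browser,
    # turning <script> into harmless visible text.
    return f"<p>{html.escape(user_input)}</p>"

def issue_csrf_token(session: dict) -> str:
    # One unguessable token per session, embedded in each form the app serves.
    session["csrf"] = secrets.token_hex(16)
    return session["csrf"]

def check_csrf(session: dict, submitted: str) -> bool:
    # A forged cross-site request cannot know the token; constant-time
    # comparison also avoids leaking it through timing differences.
    return hmac.compare_digest(session.get("csrf", ""), submitted)
```

Real frameworks (Django, Rails, Spring, etc.) ship both defenses built in; the sketch only shows why they work.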


The World’s Best Banks: The Future Of Banking Is Digital After Coronavirus

“Banking has changed irrevocably as a result of the pandemic. The pivot to digital has been supercharged,” says Jane Fraser, president of Citigroup and CEO of its gigantic consumer bank. “We believe we have the model of the future – a light branch footprint, seamless digital capabilities and a network of partners that expand our reach to hundreds of millions of customers.” ... If there’s any doubt that digital-first banks are the way forward, Velez offers a surprising statistic: Since the pandemic began, Nubank has seen a surge in customers aged sixty and over, the types of clients many bankers once believed would never leave traditional branch networks. Over the past 30 days, for instance, some 300 clients above the age of 90 have become Nubank customers. Digital banks rated well in the United States as well. Online-only Discover and Capital One ranked #23 and #30, while neobank Chime ranked #36. All three beat out mega-lenders JPMorgan Chase, at #36, and Citigroup, at #71. The other two big-four lenders, Bank of America and Wells Fargo, didn’t make the top 75.


Facilitating Threat Modelling Remotely

To aid in prioritisation of mitigations, Gumbley suggested dot-voting on the threats which have been identified. He suggested that this would "yield good risk decisions for low investment, reflecting the diverse perspectives in the group." Handova wrote that there is "no one-size-fits-all DevSecOps process" across enterprises and development teams. For every risk, he wrote that teams need to "provide the appropriate level of security assurance and security coverage." Handova wrote that this decision may determine the shape of any resulting investment in security testing: that is, whether it is "automated within the DevOps workflow, performed out of band or some combination of the two." Handova cautioned of the need to minimise "friction for developers," in order to avoid security being bypassed in favour of "expediting coding activities." He wrote of the value of catching security issues "earlier and more easily while developers are still thinking about the code." Handova’s focus was on using test automation to mitigate "the risk of developers not remembering the code context at a later date."



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - June 08, 2020

Once again, Australian government agencies fail cyber security audit

Australian government agencies have turned in yet another poor showing in the latest audit of their information-security controls, but newly implemented cloud technology could help protect them against their ineptitude by locking data against compromise. ... Ten of the examined agencies complied with requirements around restricting administrative privileges, four were using application whitelisting for security protections, and three were on top of patching operating systems and applications. Just two agencies complied with guidance around multi-factor authentication, while just one agency had successfully implemented application hardening and one had successfully implemented controls over the use of macros in productivity suites. Although all entities regularly back up “financially significant data”, their lack of compliance with PSPF guidance around backups—only six entities were conducting daily backups in line with requirements—suggests many remain exposed to cyber attacks such as ransomware, defence against which has been tied to having a strong backup framework and effective data recovery mechanisms. Many of the examined entities cited complexities in existing systems as the reason they had failed to implement so many controls, with many progressing application consolidation plans for “lowering their attack surface and minimising risk.”


Tom Peters does not have all the answers

Peters visited our Salesforce office in Boston and spent four hours talking with our employees. He listened with interest and was completely present in the moment. He simply volunteers his time, hoping to teach and be taught. He is fiercely curious, practices radical transparency, and believes every word and every sentence that he tweets. I have been following Peters for nearly a decade, and I have admired his generosity, patience, integrity, benevolence, and unwavering commitment to sharing his knowledge, including mistakes and lessons learned. If I could describe Peters in three words, they would be: honest, passionate, and caring. ... You will see a masterclass by Peters on humanity, leadership, business core values, and important guiding principles for entrepreneurs and community leaders. Personally speaking, my conversations with Tom Peters are equivalent to reading several meaningful books or attending a couple of semesters of graduate school in the humanities. The best teachers are lifelong students. As you watch the video with Peters, you will notice the tall bookshelves behind him and the ladder to the right of the room. I can only imagine Peters climbing the ladder to find and re-read his favorite books on the top shelves.


Office Everywhere: Remote Work Going Forward

If creating a positive company culture is at the top of the list, leaders need to find innovative ways to engage employees when after-work happy hours are no longer possible. The absence of face-to-face interactions can cause people to feel unmotivated and disengaged, so reinforce your organization’s values through virtual celebrations, team-building exercises, and increased merit recognition to substitute for in-person social interactions. Helping teams understand their role in the success of the company will help retain talent even when the market rebounds. Organizations may also look at recruiting efforts, as a remote workforce can level the playing field for those who can’t physically be in the office, like parents who are primary caretakers for their children. At home, parents have the flexibility to meet the school bus or drop kids off at practice while also doing their job. Employers who embrace a more flexible and family-friendly working environment can attract a diverse set of talent in the long run. Advancements in technology -- pervasive high-speed internet, cloud infrastructure, security, collaboration platforms, devices, and services -- empower people in the office.


FCC Delays Law Banning Your ISP From Charging You 'Rental Fees' For Hardware You Already Own

Several things here. One, keep in mind this FCC did absolutely nothing for nearly two years as a major telecom monopoly charged users $10 for absolutely nothing. And the very first time they take substantive action on the issue, it involves delaying implementation of a law that actually helps. This is, for those playing along at home, the kind of "hands off approach" to regulation that the FCC loves to (falsely) claim spurs investment and innovation. In reality, finding creative new ways to rip off captive customers is as innovative as US telecom tends to get. Two, there's really nothing about a pandemic that would make it difficult to stop charging people bullshit fees. Three, the FCC's effort to "keep people connected" during this crisis consists of an entirely voluntary, temporary pledge to not kick users offline during the pandemic. It's a pledge many ISPs are simply ignoring, knowing full well the FCC just gutted much of its authority over telecom as part of the net neutrality repeal. Keep in mind the only reason anybody is doing anything about this is thanks to a law that required a miracle to pass.


Chief AI Officer: Executives discuss the role, pitfalls, and business philosophy

From telehealth chatbots to smart elevators, an increasing number of organizations across industries are looking to leverage AI to enhance their business model. As companies begin to adopt these technologies, there's a steep learning curve and numerous legal and ethical concerns to consider. "With the explosion of data we've seen over the last decade, many companies are struggling with how to use AI and automation to better access and utilize all of this information, in a safe, efficient, and ethical way," said Vijay Narayanan, chief AI officer at ServiceNow. "For example, businesses need to ensure customer data is never used without getting their permission first, and bias always needs to be eliminated. The role of the CAIO is to help lead a business through these steps to ensure the technology is used correctly." It's clear that many organizations will look to adopt a CAIO or similar roles to cater to these needs. However, there are pitfalls organizations can encounter when incorporating a new executive alongside the existing suite. As is the case with any position, cultural fit and philosophy are key. Business philosophy and long-term objectives will certainly play a central role as companies recruit CAIOs or promote individuals internally for this new position. It's imperative that organizations also ensure that the CAIO complements the existing executive suite.


Crank - a New Front-End Framework with Baked-In Asynchronous Rendering

Because Crank decouples the idea of local state from rerendering, I think it unlocks a lot of advanced rendering patterns which simply aren’t possible in other frameworks. For instance, you can imagine an architecture where child components have local state but aren’t rerendered individually, and are instead rendered all at once by a single parent component which renders in a requestAnimationFrame loop. Components that are stateful but don’t have to rerender every time they’re updated are easy to do in Crank because we’ve decoupled state from rerendering. As an example, you can check out this quick demo I put together, wherein I implement the 3D cubes/sphere demo which React and Svelte people were discussing on Twitter last year. I’m excited about Crank’s performance ceiling, because updating a component is just stepping through generators, and there are lots of interesting optimizations you can do in user space when state is just local variables and statefulness itself isn’t tightly coupled to a reactive system which forces every stateful component to rerender even if an ancestor component would have rerendered it anyway.
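Crank's actual API is JavaScript, but the core idea — component state living in ordinary local variables inside a generator, advanced only when the renderer chooses to step it — can be sketched language-agnostically. A loose Python analogy of my own, not Crank itself:

```python
def counter_component():
    # State is just a local variable; nothing is coupled to a reactive store.
    count = 0
    while True:
        # Execution pauses at the yield: the component keeps its state but
        # does not "rerender" until the renderer steps the generator again.
        step = yield f"<div>count: {count}</div>"
        count += step if step is not None else 1

# The "renderer" decides when updates happen, e.g. once per animation frame.
component = counter_component()
initial = next(component)       # initial render
updated = component.send(5)     # pass data in, advance state, render again
```

This is the decoupling the author describes: the component is stateful between steps, but rendering only happens when the caller pulls the next value.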


Infrastructure Design Principles For Architecture On AWS Cloud

The end user interacts with our infrastructure starting from Route53, to get IP addresses for our services. Next, the user contacts CloudFront to get an optimized, cached frontend website. We use a single-page approach for our frontend apps, so they don’t need any server rendering and can be delivered to the user in an efficient way. Our frontend applications contact the backend API using API Gateway, which not only caches some responses but also provides throttling and authentication and authorization of the requests. For secure ingress traffic, we use a VPC Link from API Gateway to an NLB; then the traffic gets into the Kubernetes cluster. The cluster itself is configured to be highly available and can auto-scale depending on the load. Depending on the case, applications in the cluster contact multiple backend services such as Redis, Kafka, or RDS. Whenever a project doesn’t require stateful services, we suggest going with a serverless architecture, which provides better OPEX than stateful services. The serverless architecture is very similar to our Kubernetes-based architecture on the user-facing side; the changes are in the backend, where we use API Lambdas and sometimes ECS Fargate.
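The throttling role API Gateway plays in this flow is conceptually a token bucket: a steady request rate plus a burst allowance. A minimal sketch of that mechanism (illustrative only — in practice API Gateway throttling is configured, not hand-written, and the rate/burst numbers here are mine):

```python
import time

class TokenBucket:
    """Allow `rate` requests/second on average, with bursts up to `burst`."""
    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate
        self.burst = burst
        self.tokens = float(burst)   # start with a full bucket
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill in proportion to elapsed time, never above the burst cap.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True              # request passes
        return False                 # request throttled (HTTP 429)
```

The injectable `clock` parameter is just a testing convenience; the behavior matches the gateway semantics of rejecting traffic that exceeds the sustained rate once the burst allowance is spent.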


Data Management Hasn’t Failed, but Data Management Storytelling Has

The “need for high-quality data” has been the dominant rallying cry from data practitioners for decades. Redman references his Sloan Management Review piece stating, “Our ultimate goal has been to improve data and information quality by orders of magnitude.” Although it was published in 1995, it reads like it was written yesterday. That’s kind of the problem. These messages and lessons have been the same forever. Business leadership is just not inspired by the concept of “high-quality data.” If Data Quality was a successful way to pitch for senior-level engagement, it would have worked by now. It hasn’t. It never will. Quality is an emotional, subjective, intangible word that evokes soft-focus imagery of hand-crafted products and a Ricardo Montalbán-like voiceover cooing about “fine Corinthian leather.” Similar concepts, such as data hygiene, cleansing, and freshness, are rarely strategic and hardly holistic. Most data hygiene exercises are ad-hoc campaign-based projects isolated to a siloed use case. Although Data Quality metrics are important, and extremely valid within data departments, senior business leaders do not care about Data Quality. They care about results.


Singapore's move to introduce wearable devices for contact tracing sparks public outcry

"The only thing that stops this device from potentially being allowed to track citizens' movements 24 by 7 are: if the wearable device runs out of power; if a counter-measure device that broadcasts a jamming signal masking the device's whereabouts; or if the person chooses to live 'off the grid' in total isolation, away from others and outside of any smartphone or device effective range," he noted.  Others also have voiced their concerns about the potential implementation of wearable devices, taking to Balakrishnan's Facebook page to urge the government against taking this route.  One user, Francis Lum, said: "Can the government explore technologies that doesn't interfere with people's daily living? We are not one big giant high surveillance prison, are we? Too intrusive. This is like an electronic tag for prisoners or offenders." Chong Wen Hao also wrote: "With the rapid advancement of technology, we know that such level of surveillance is unavoidable. Even without this wearable device. it will come sooner or later in other forms. However, the idea of a wearable worn for tracking purposes is just too intrusive from a usability standpoint."


Building AMQP-Based Messaging Framework on MongoDB

With the growing trend toward microservices, engineers are looking for more lightweight, independently deployable, and less costly options in the market. Every messaging framework comes with the baggage of additional infrastructure and maintenance headaches. In one of my projects there was a proposal to use the capped collection feature of MongoDB, along with its tailable cursor, as an alternative to deploying any real messaging infrastructure. ... Not to mention that this feature of MongoDB is quite old and well known in the market, and you will find a lot of articles about it. However, I believe those articles have just shown the basic way of enabling it without going deep into it. A real messaging framework has many more challenges than just delivering messages asynchronously. In this series of articles, we will try to address them and see if we can really build messaging infrastructure on MongoDB by considering all the needs of a messaging framework.
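The semantics the series relies on — a capped collection is a fixed-size, insertion-ordered buffer that evicts its oldest documents, read by a cursor that "tails" along for new arrivals — can be modeled in-memory in a few lines. This is a sketch of the semantics only, not MongoDB itself, and the class and method names are mine:

```python
from collections import deque

class CappedCollection:
    """Fixed-size, insertion-ordered buffer where the oldest entries are
    evicted on overflow, mimicking a MongoDB capped collection."""
    def __init__(self, max_docs):
        self._docs = deque(maxlen=max_docs)
        self._next_id = 0

    def insert(self, doc):
        self._next_id += 1
        self._docs.append({"_id": self._next_id, **doc})

    def tail(self, after_id=0):
        # Like a tailable cursor: return only documents newer than the
        # last _id this consumer has already seen.
        return [d for d in self._docs if d["_id"] > after_id]
```

With real MongoDB, the equivalents are `db.create_collection(name, capped=True, size=...)` and a cursor opened with pymongo's `CursorType.TAILABLE_AWAIT`, which blocks awaiting new documents instead of closing at the end of the collection.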



Quote for the day:

"Challenges in life always seek leaders and leaders seek challenges." -- Wayde Goodall