Daily Tech Digest - November 19, 2020

Japan Is Using Robots As A Service To Fight Coronavirus And For Better Quality Of Life

Connected Robotics has developed a number of other food-related robots, including a machine that produces soft-serve ice cream, another that prepares deep-fried foods often sold at Japanese convenience stores, and yet another that cooks bacon and eggs for breakfast. Along the way, it was selected for the Japanese government’s J-Startup program, which highlights promising startups in Japan, and has raised over 950 million yen ($9.1 million) from investors including Global Brain, Sony Innovation Fund and 500 Startups Japan. Connected Robotics wants robots to do more than just prepare food in the kitchen. It is collaborating with the state-backed New Energy and Industrial Technology Development Organization (NEDO) to tackle the task of loading and unloading dishwashers. Under the project, one robot arm will prewash dishes and load them into a dishwashing machine, and another arm will store the clean dishes. The company aims to roll out the machine next spring, part of its goal to have 100 robot systems installed in Japan over the next two years. From that point, it wants to branch out overseas into regions such as Southeast Asia.


DevOps Chat: Adopting Agile + DevOps in Large Orgs

The first step towards creating a change is to acknowledge that you need to change, because once everyone acknowledges that you need to change you can start to think about, well, what sort of change do we want to have? And this has to be a highly collaborative effort. It’s not just your CXOs locking themselves in a conference room at a resort for two days and whiteboarding this out. This has to be a truly collaborative exercise across the entire organization. There may need to be some constraints that are in place, and what I’ve typically seen is a board of directors might say, “Look, this is where we want to be five years from now,” not necessarily saying how that needs to be achieved but saying this is where we want to be. That in combination with creating that sense of urgency says, “Look, if we need to hit this sort of a target we need to be able to develop software faster or close out outages quicker or have a more Agile procurement process or whatever else it might be,” and that’s when you start to identify these sort of strategic and tactical initiatives. Because you’re working collaboratively you’re dispersing that cognitive effort, if you will. So it’s not just somebody from one particular vantage point saying, “As a CFO I think we are giving away too much money so we’re going to be cost-cutting only.”


How empathy drives IT innovation at Thermo Fisher Scientific

The development of an instrument here has always involved an incredible amount of scientific innovation and industrial design. Over the last few years, we have had the added complexity of building software and IoT sensors right into the products. We also develop software that enables our customers’ lab operations. My digital engineering organization has agile capabilities that we can carry over to the development we are doing on customer software solutions. The product groups make the decisions about what solution to develop, but our agile teams, who have been developing software for years, now help the product teams with delivery. ... Now that cloud services are mature, we are all in. We are also very excited about artificial intelligence and its transformational potential, particularly in life sciences. We are putting AI into our customer service operations so that agents spend more time helping a customer and aren’t worrying about how quickly they can finish their service call. AI is also becoming very important for gene sequencing and diagnostics in drug manufacturing. We are only scratching the surface there, but by creating hybrid AI teams made up of both IT and product people, we avoid reinventing the wheel.


CAP Theorem, PACELC, and Microservices

These architectural decisions have impacts that extend beyond data centers and cloud providers. They impact how users interact with the system and what their expectations are. It's important in a system that is eventually consistent that users understand that when they issue a command and get a response, that doesn't necessarily mean their command completed, but rather that their command was received and is being processed. User interfaces should be constructed to set user expectations accordingly. For example, when you place an order from Amazon, you don't immediately get a response in your browser indicating the status of your order. The server doesn't process your order, charge your card, or check inventory. It simply returns letting you know you've successfully checked out. Meanwhile, a workflow has been kicked off in the form of a command that has been queued up. Other processes interact with the newly placed order, performing the necessary steps. At some point the order actually is processed, and yet another service sends an email confirming your order. This is the out-of-band response the server sends you, not through your browser, synchronously, but via email, eventually. And if there is a problem with your order, the service that checked out your cart on the web site doesn't deal with it.
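
To make the pattern concrete, here is a minimal Python sketch (an in-process queue stands in for a durable message broker such as Kafka or SQS, and every name is illustrative rather than Amazon's actual design): the synchronous handler only acknowledges receipt, while a separate worker completes the order and responds out of band.

```python
import queue
import uuid

# Stand-in for a durable message broker (Kafka, SQS, etc.).
command_queue = queue.Queue()

def checkout(cart: dict) -> dict:
    """Accept the order and acknowledge receipt; processing happens later."""
    order_id = str(uuid.uuid4())
    # Enqueue a command instead of charging the card or checking inventory now.
    command_queue.put({"type": "PlaceOrder", "order_id": order_id, "cart": cart})
    # The synchronous response only confirms receipt, not completion.
    return {"order_id": order_id, "status": "received"}

def send_confirmation_email(order_id: str) -> None:
    # Stand-in for the eventual, out-of-band response to the user.
    print(f"Email: order {order_id} confirmed")

def order_worker() -> None:
    """Out-of-band processor: charges the card, checks inventory, emails the user."""
    while not command_queue.empty():
        cmd = command_queue.get()
        # ... charge card, reserve stock, handle failures here ...
        send_confirmation_email(cmd["order_id"])

receipt = checkout({"sku-123": 1})   # returns immediately with "received"
order_worker()                       # the order is actually processed later
```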


How to Evolve and Scale Your DevOps Programs and Optimize Success

First, there is the fact that simple DevOps managerial topologies might be effective at integrating the workflows of small teams dedicated to the development of a particular software product, but they are far less so when applied to a firm’s total software output. Techniques such as normalized production processes make it easier for large firms to catch bugs, but an over-eager application of these techniques runs the risk of homogenizing otherwise highly adapted teams. Secondly, there is the challenge of inconsistency. Because, as we’ve explained above, DevOps techniques generally arise within small, forward-looking teams, they are generally also highly adapted to the particular needs of these same teams. This can mean that managers trying to standardize DevOps across an organization are faced with dozens (if not hundreds) of different approaches, many of which have proven to be very effective. Finally, there is a more basic problem – that of communication. Where DevOps works best, it facilitates close and continuous communication between operations and development staff. Achieving this level of communication not only requires that communication technologies and platforms are in place; it also demands that teams be small enough that they can genuinely talk to each other.


Majority of APAC firms pay up in ransomware attacks

The complexity of having to operate cloud architectures also had a significant impact on an organisation's ability to recover following a ransomware attack, according to Veritas. Some 44% of businesses with fewer than five cloud providers in their infrastructure needed fewer than five days to recover, compared to just 12% of those with more than 20 providers. And while 49% of businesses with fewer than five cloud providers could restore 90% or more of their data, only 39% of their peers running more than 20 cloud services were able to do likewise. In Singapore, 49% said their security had kept pace with their IT complexity. Their counterparts in India, at 55%, were the most confident in the region about their security measures keeping pace with their IT complexity. Just 31% in China said likewise, along with 36% in Japan, 39% in South Korea, and 43% in Australia. With ransomware attacks expected to continue to increase amidst accelerated digital transformation efforts and the normalisation of remote work, enterprises in the region will need to ensure they can detect and recover from such attacks. Andy Ng, Veritas' Asia-Pacific vice president and managing director, underscored the security vendor's recommended three-step layered approach to detect, protect, and recover.


How to speed up malware analysis

The main part of dynamic analysis is the use of a sandbox: a tool for executing suspicious programs from untrusted sources in an environment that is safe for the host machine. There are different approaches to sandbox analysis; sandboxes can be automated or interactive. Online automated sandboxes let you upload a sample and get a report about its behavior. This is a good solution, especially compared to assembling and configuring a separate machine for these needs. Unfortunately, modern malicious programs can detect whether they are running on a virtual machine or a real computer, and some require active user input before they will execute. And you need to deploy your own virtual environment, install operating systems, and set up the software needed for dynamic analysis to intercept traffic, monitor file changes, etc. Moreover, adjusting settings for every file takes a lot of time, and even then you can’t influence the analysis directly. We should keep in mind that analysis doesn’t always go to plan for a given sample. Finally, automated analysis lacks the speed we need, as we have to wait up to half an hour for the whole cycle of analysis to finish. All of these drawbacks can harm security if an unusual sample goes undetected. Thankfully, we now have interactive sandboxes.


Graph Databases Gaining Enterprise-Ready Features

Enterprise graph database vendor TigerGraph recently unveiled the results of performance benchmark tests conducted on representative enterprise uses of its scalable application. Touted by the company as a comprehensive graph data management benchmark study, the tests used almost 5TB of raw data on a cluster of machines to show the performance of TigerGraph. The study used the Linked Data Benchmark Council Social Network Benchmark (LDBC SNB), which is a reference standard for evaluating graph technology performance with intensive analytical and transactional workloads. The results and performance numbers showed that graph databases can scale with real data, in real time, according to the vendor. TigerGraph claims it is the first industry vendor to report LDBC benchmark results at this scale. The data showed that TigerGraph can run deep-link OLAP queries on a graph of almost nine billion vertices (entities) and more than 60 billion edges (relationships), returning results in under a minute, according to the announcement. TigerGraph’s performance was measured with the LDBC SNB Benchmark scale-factor 10K dataset on a distributed cluster for the analysis. The implementation used TigerGraph’s GSQL query language; the queries were compiled and loaded into the database as stored procedures.


Solving the performance challenges of web-scale IT

Two other components to be considered when it comes to performance monitoring for web-scale IT are an in-memory database and a strong testing protocol. Kieran Maloney, IT services director at Charles Taylor, explained: “Utilising an in-memory database and caching are ways to improve the performance of web-scale applications, particularly for things like real-time or near-time analytics. “The majority of cloud infrastructure service providers now offer PaaS services that include in-memory capabilities, which increase the speed at which data can be searched, accessed, aggregated and analysed – examples include Azure SQL, Amazon ElastiCache for Redis, Google Memorystore, and Oracle Cloud DB. “The other consideration for solving performance management is a testing approach for identifying what the actual performance is, and also determining whether there are specific aspects of the application that are not performing, or could be ‘tuned’ to improve performance or the cost of operation. “There are a number of application performance testing tools available that provide business and IT teams with real-time usage, performance and capacity analysis – allowing them to proactively respond and make interventions to improve performance, as required.”
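
As a rough sketch of the cache-aside pattern those in-memory services enable, the Python below uses the redis-py client; the connection details, key scheme and five-minute TTL are assumptions for illustration, and the expensive query is a stand-in for the system of record.

```python
import json
import redis  # assumes the redis-py client is installed

# Connection details are placeholders for your managed in-memory endpoint
# (e.g., an Amazon ElastiCache for Redis or Google Memorystore instance).
cache = redis.Redis(host="localhost", port=6379)

def run_expensive_query(report_id: str) -> dict:
    # Stand-in for a slow aggregation over the system of record.
    return {"id": report_id, "total": 42}

def get_report(report_id: str) -> dict:
    """Cache-aside lookup: try the in-memory store before the slow path."""
    key = f"report:{report_id}"          # illustrative key scheme
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)        # cache hit: microseconds, not seconds
    report = run_expensive_query(report_id)      # cache miss: hit the database
    cache.setex(key, 300, json.dumps(report))    # keep for five minutes
    return report

print(get_report("q3-sales"))  # first call misses; repeat calls are served from memory
```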


Optimising IT spend in a turbulent economic climate

Having complete transparency and visibility of an organisation’s entire IT ecosystem is an essential first step to optimising costs. This includes having a full, holistic view across all solutions, whether they are on-premises or in the cloud. The reality, however, is that many businesses have a fragmented view of the technology applications within their organisation, which makes identifying inefficiencies extremely difficult. Even before the shift to remote work, the evolution of department-led technology purchasing had caused many IT teams to lose visibility of their technology estate, including accounting for what’s being used, how much, and which tools are left inactive but still paid for. ... Once a clear view of all technology assets has been defined, IT teams can then start to assess the organisation’s current usage and spend. With many employees working from home, it is likely they will be using a variety of new tools to work effectively. Whilst it can be difficult to determine exactly what is being used and by whom when many workers are remote, having this information is crucial to effectively reducing redundancies.



Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley

Daily Tech Digest - November 18, 2020

ThreatList: Pharma Mobile Phishing Attacks Turn to Malware

“The reason that mobile devices have become a primary target is because a well-crafted attack can be close to impossible to spot,” said Schless. “Mobile devices have smaller screens, simplified user interfaces, and people generally exercise less caution on them than they do on computers.” Meanwhile, where cybercriminals previously relied on phishing attacks that attempted to carry out credential harvesting, in 2020 the aim shifted to malware delivery. For instance, in the fourth quarter of 2019, 83 percent of attacks aimed at credential harvesting while 50 percent aimed to deliver malware. However, in the first quarter of 2020, only 40 percent of attacks targeted credentials, while 78 percent aimed to deliver malware. And, in the third quarter of 2020, 27 percent targeted credentials, and 81 percent looked to load malware. Researchers believe this shift signifies that attackers are investing more in malware when targeting pharmaceutical companies. For one, successful delivery of spyware or surveillanceware to a device could result in longer-term success for the attacker. Furthermore, said researchers, attackers want to be able to observe everything the user is doing and look into the files their device accesses and stores. ...


Don't put data science notebooks into production

Putting a notebook into a production pipeline effectively puts all the experimental code into the production code base. Much of that code isn’t relevant to the production behavior, and thus will confuse people making modifications in the future. A notebook is also a fully powered shell, which is dangerous to include inside a production system. Safe operations require reproducibility and auditability, and generally eschew manual tinkering in the production environment. Even well-intentioned people can make a mistake and cause unintended harm. What we need to put into production is the concluding domain logic and (sometimes) visualizations. In most cases, this isn’t difficult since most notebooks aren’t that complex. They only encourage linear scripting, which is usually small and easy to extract and put into a full codebase. If it’s more complex, how do we even know that it works? These scripts are fine for a few lines of code but not for dozens. You’ll generally want to break that up into smaller, modular and testable pieces so that you can be sure that it actually works and, perhaps later, reuse code for other purposes without duplication. So we’ve argued that having notebooks running directly in production usually isn’t that helpful or safe. It’s also not hard to incorporate that concluding logic into a structured code base.
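
To illustrate that extraction, here is a hedged Python sketch: a few linear notebook lines (shown as comments) refactored into a small, pure function plus a unit test. The column names and margin formula are invented for the example.

```python
# Notebook-style script (linear, hard to test):
#   df = pd.read_csv("sales.csv")
#   df = df[df["amount"] > 0]
#   df["margin"] = (df["amount"] - df["cost"]) / df["amount"]
#   print(df["margin"].mean())

import pandas as pd

def compute_average_margin(df: pd.DataFrame) -> float:
    """Pure domain logic extracted from the notebook, small enough to unit-test."""
    valid = df[df["amount"] > 0].copy()           # drop zero/negative amounts
    valid["margin"] = (valid["amount"] - valid["cost"]) / valid["amount"]
    return float(valid["margin"].mean())

def test_compute_average_margin():
    df = pd.DataFrame({"amount": [100.0, 200.0, 0.0],
                       "cost":   [50.0, 150.0, 10.0]})
    # (0.5 + 0.25) / 2 == 0.375; the zero-amount row is excluded.
    assert abs(compute_average_margin(df) - 0.375) < 1e-9
```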


Why the CMO and CIO are no longer strange bedfellows

The CIO’s mandate is all systems, both customer-facing and internal. We know that, more and more, this involves capturing and interpreting market and customer data through artificial intelligence applied to data from sensors. In turn, IT leaders supply the capabilities needed to meet Line of Business demands for agility and speed. The CMO’s mandate is to apply the derived customer intelligence, needs, and habits, and to profile customers down to the individual level, to create an experience that meets the customer wherever, whenever, and on any device. Understanding the customer is therefore central to both mandates. The CIO needs to connect technology capabilities all the way from the customer interaction back to the workload related to the customer, sitting on the chosen infrastructure platform. The CMO needs an entire profile of the customer, and the CIO builds the systems that create the profile. In the current climate, businesses that fail to understand the importance of the digital customer experience will undoubtedly fall behind. Embracing the customer as a digital experience is essential for business competitiveness and even survival.


Understanding Microsoft .NET 5

Technically this new release should be .NET Core 4, but Microsoft is skipping a version number to avoid confusion with the current release of the .NET Framework. At the same time, moving to a higher version number and dropping Core from the name indicates that this is the next step for all .NET development. Two projects still retain the Core name: ASP.NET Core 5.0 and Entity Framework Core 5, since legacy projects with the same version numbers still exist. It’s an important milestone, marking the point where you need to consider starting all new projects in .NET 5 and moving any existing code from the .NET Framework. Although Microsoft isn’t removing support for the .NET Framework, it’s in maintenance mode and won’t get any new features in future point releases. All new APIs and community development will be in .NET 5 (and 2021’s long-term support .NET 6). Some familiar technologies such as Web Forms and the Windows Communication Foundation are being deprecated in .NET 5. If you’re still using them, it’s best to remain on .NET Framework 4 for now and plan a migration to newer, supported technologies, such as ASP.NET’s Razor Pages or gRPC. There are plans for community support for alternative frameworks that will offer similar APIs.


Top 8 trends shaping digital transformation in 2021

Consumers want consistent engagement with brands across their preferred channels. Seventy-three percent of shoppers use more than one channel during their shopping journey. Per Deloitte, seventy-five percent of consumers expect consistent interactions across all departments of a company. Eighty-six percent of consumers say they want the ability to move between channels when talking to a brand. Ninety-two percent of customers are satisfied using live chat services -- making it the support channel that leads to the highest customer satisfaction. And 78% of consumers use mobile devices to connect with brands for customer service -- the number jumps to 90% for Millennials. Organizations need to invest in new digital methods of customer service. ... Research shows that lines of business (LoBs) are participating in digital transformation, with 68% of LoB users believing IT and LoBs should jointly drive digital transformation. In addition, 51% of LoB users are frustrated with the speed at which their organizations' IT department can deliver digital projects. Outside of IT, the top three business roles with integration needs are business analysts, data scientists, and customer support.


Q&A on the Book Virtual Teams Across Cultures

Firstly, it is important to understand the meaning of culture. In the book, I go into more detail, but for now we can say that culture is the meaning that a group of people give to understand life and interpret their experience. Culture is a social construct, meaning that it develops through the interaction of people. As humans, we are influenced by many cultures, such as company culture. The book focuses on country or location culture. When we work with people from the same culture, things tend to go smoothly. In general, we understand each other’s communication style, work approach, reactions and ideas. It all makes sense because the assumptions that drive us are similar. However, when we meet someone from a different culture, we may not understand or we may be surprised by their communication style, work approach, reactions and ideas. The assumptions that drive their behavior are fundamentally different. This is what we call culture shock – that feeling of confusion because the other person does not make sense to us. People who work internationally have most likely experienced culture shock. The critical aspect is how we respond to it. 


Can Low Code Measure Up to Tomorrow’s Programming Demands?

There is some disagreement on whether AI and machine learning will be able to write code, says Forrester’s Jeffrey Hammond, vice president and principal analyst serving CIO professionals. “One camp is saying, ‘In the future, AI is going to write a lot of the code that developers might write today,’” he says. That could lead to less demand for developers, with fewer positions to be filled. The counter view, Hammond says, is that software development is a creative process and profession. For all its capabilities, AI has limits that might not match the novel thinking of developers, he says. “Some of the most valuable code that’s written is also the most creative code.” Today AI is used successfully in testing, Hammond says, an area where many developers are loath to write test cases themselves. He sees market adjacencies in development tools such as Microsoft Visual Studio, which has a feature that can predict what a developer may type next, then make that available for the developer to click. “You’ve got examples of where these tools are augmenting developers’ working habits and making them more productive,” Hammond says. In the creative space, Adobe Sensei technology can help designers automate tedious tasks, he says, such as stitching together photos or removing undesired artifacts from content.


Vulnerability Prioritization Tops Security Pros' Challenges

This should come as no surprise to anyone working in software development. Software development organizations are using more application security tools than ever before and from the earliest stages of development. Most are on top of detection, but that's only the first step. Next comes prioritization: Once you've detected the security issues, how can you make sure you are addressing the most critical issues first? While prioritization is essential for organizations that want to get ahead of their backlog, they are still struggling to formulate a standardized prioritization process. Even though vulnerability prioritization rated very high on application security professionals' list of top challenges, the WhiteSource survey found that most security and development teams don't follow a shared process for prioritization. The survey asked to what extent the security and development teams in their organization agree on which vulnerabilities need to be fixed, and the results were concerning: 58% of respondents said they sometimes agree, but each team follows ad hoc practices and separate guidelines. Only 31% of respondents said they have an agreed-upon process to determine priorities.
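
One way to replace ad hoc practices with a shared, reviewable process is to encode the prioritization policy as code. The Python sketch below is purely illustrative — the weighting factors are invented, and a real policy would also consider exploitability data, asset criticality and fix effort:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float          # base severity score, 0-10
    reachable: bool      # is the vulnerable code path actually called?
    exploit_known: bool  # is a public exploit available?

def priority(v: Vulnerability) -> float:
    """Shared, explicit scoring rule both security and dev teams can agree on."""
    score = v.cvss
    if v.exploit_known:
        score += 2.0     # invented weight: known exploits jump the queue
    if not v.reachable:
        score *= 0.3     # invented weight: unreachable issues are downgraded
    return score

findings = [
    Vulnerability("CVE-2020-0001", 9.8, reachable=False, exploit_known=False),
    Vulnerability("CVE-2020-0002", 7.5, reachable=True, exploit_known=True),
]
for v in sorted(findings, key=priority, reverse=True):
    print(v.cve_id, round(priority(v), 2))   # the lower-CVSS but exploited bug wins
```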


Fast-Tracking AI Ethics Is Dicey And Shortsighted, Especially For Self-Driving Cars

Somehow, there needs to be a balance found that can appropriately make use of the AI Ethics precepts and yet allow for flexibility when there is a real and fully tangible basis to partially cut corners, as it were. Of course, some would likely abuse the possibility of a slimmer version and always go that route, regardless of any truly needed urgency of timing. Thus, there is a chance of opening a Pandora’s box whereby a less-than-full AI Ethics protocol becomes the default norm, rather than serving as a break-glass exception for the rare occasions when it is needed. It can be hard to put the genie back into the bottle. In any case, there are already some attempts at trying to craft a fast-track variant of AI Ethics principles. We can perhaps temper those that leverage the urgent version with both a stick and a carrot. The carrot is that they are seemingly able to get their AI completed sooner, while the stick is that they will be held wholly accountable for not having gone the whole nine yards on the use of the AI Ethics. This is a crucial point that might be used against those taking such a route, and a means to extract penalties via a court of law, along with penalties in the court of public opinion.


How to boost your enterprise's immunity with cyber resilience

Cyber security and cyber resilience are often used interchangeably. While they are related concepts, they're far from being synonyms, and it's crucial for everyone to understand the difference. Security is like wearing a mask or using other forms of personal protective equipment to reduce your risk of being infected with a virus. Resiliency is, after having been infected, fighting through the illness and giving your body a chance to return to good health. This means that cyber security is the protection and restoration of IT assets—hardware and software, in the cloud and on premises—and the data they contain, to ensure their availability and integrity. Resiliency, on the other hand, focuses on the ability of the business to withstand and recover from these breaches. The scope extends beyond IT and information to business operations and processes. The U.S. National Institute of Standards and Technology (NIST) defines cyber resilience as "the ability of an information system to continue to operate under adverse conditions or stress, even if in a degraded or debilitated state, while maintaining essential operational capabilities; and to recover to an effective operational posture in a time frame consistent with mission needs."



Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." -- Jamie Paolinetti

Daily Tech Digest - November 17, 2020

SD-WAN needs a dose of AIOps to deliver automation

In some ways, SD-WAN exacerbates the troubleshooting problem. It adds a level of resiliency to the network via multi-path networking that can hide outages. This leads to a situation where the network operations dashboard can show everything is "green," but apps are performing poorly. Network performance issues have become glaringly obvious with the rise of video, and they are causing network engineers to constantly scramble to try and remediate issues. Here is where AI can make a difference. AI systems can ingest the massive amounts of data provided by network infrastructure (LAN, WLAN and WAN) to "see" things that even the savviest network engineer can't see. At one time, when networks were fairly simple and traffic volumes were lower, it was possible for a seasoned network professional to "know" a network and quickly find the root of problems through a combination of domain knowledge and rapid inspection of traffic. But not so today as the numbers of devices, applications and volume of information have skyrocketed. One of the big changes is that periodic polling data has been replaced by real-time streaming telemetry that increases data by an order of magnitude or more.
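
As a toy stand-in for the statistical baselining that AIOps platforms perform at far larger scale, the Python sketch below flags outliers in a stream of WAN latency samples; the threshold and data are invented for illustration.

```python
import statistics

def detect_anomalies(latencies_ms, threshold=2.5):
    """Flag telemetry samples more than `threshold` standard deviations
    from the mean -- a toy version of what AIOps does across millions
    of real-time streaming telemetry points."""
    mean = statistics.mean(latencies_ms)
    stdev = statistics.pstdev(latencies_ms)
    if stdev == 0:
        return []
    return [(i, x) for i, x in enumerate(latencies_ms)
            if abs(x - mean) / stdev > threshold]

# Streamed WAN latency samples (ms); multi-path resiliency keeps the
# dashboard "green" while one path quietly degrades.
samples = [20, 21, 19, 22, 20, 21, 20, 95, 20, 19]
print(detect_anomalies(samples))  # -> [(7, 95)]
```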


Ripe for digital disruption: Which industries are most at risk and why

The changing demographics favor workers who are much more open to gig work and who place greater trust in digital platforms to create marketplaces. This has opened the door to changes in typically cohesive industries, such as higher education. The increased demand for digital skills has led many students to decouple academic interest and professional credentialing. This will lead to an exodus from costlier schools in favor of boutique schools that cater to narrower interests. Students will earn digital credentials from specific, technology-heavy institutions like Lambda School in their early career, and pursue further growth and learning throughout their career from organizations such as Coursera or LinkedIn Learning. Generation Z has grown up with democratized value creation, like YouTube channels or Twitch streamers that organically found their base and built their audience using digital techniques. These new, digital entities can see the most valuable part of a business process and align themselves to those while sourcing out the other aspects with great velocity. Tesla, for example, has done away with its PR department and is relying on its outspoken CEO to directly message the market.


The seven elements of successful DDoS defence

Because multiple computers from a globally dispersed botnet “zombie army” of hijacked internet-connected devices are attempting to flood a server with fake traffic to knock it offline, DDoS attacks are already more destructive than Denial of Service (DoS) attacks perpetrated from one machine. However, in recent years we’ve monitored a disturbing trend: DDoS used as a smokescreen. The service disruption draws the IT team’s attention away from a separate and more sophisticated incursion, such as account takeover or phishing. The damage of just the DDoS can be bad enough. It takes a targeted website minutes to go down in a strike, but hours to recover. In fact, 91% of organisations have experienced downtime from a DDoS attack, with each hour of downtime costing an average of $300,000. Beyond the revenue loss, DDoS can erode customer trust, force businesses to spend large amounts in compensations, and cause long-term reputational damage; particularly if it leads to other breaches. ... A comprehensive defence is essential, but with attacks ranging from massive volumetric bombardments to sophisticated and persistent application layer threats, what are the most important elements of potential solutions to consider?


Breakdown of a Break-in: A Manufacturer's Ransomware Response

At the 2020 (ISC)² Security Congress, SCADAfence CEO Elad Ben-Meir took the virtual stage to share details of a targeted industrial ransomware attack against a large European manufacturer earlier this year. His discussion of how the attacker broke in, the collection of forensic evidence, and the incident response process offered valuable lessons to an audience of security practitioners. The firm learned of this attack late at night when several critical services stopped functioning or froze altogether. Its local IT team found ransom notes on multiple network devices and initially wanted to pay the attackers; however, after the adversaries raised their price, the company contacted SCADAfence's incident response team. ... Before it arrived on-site, the incident response team instructed the manufacturer to contain the threat to a specific area of the network and prevent the spread of infection, minimize or eliminate downtime of unaffected systems, and keep the evidence in an uncontaminated state. "The initial idea was to try to understand where this was coming from, what machines were infected and what machines those machines were connected to, and if there was the ability to propagate additionally from there," said Ben-Meir in his talk.


Sustainability: The growing issue of supply chain disruption

There is likely to be more disruption ahead as extreme weather events appear to be on the rise. According to McKinsey, climate disruptions to supply chains are going to become increasingly frequent and more severe. Kern said: “It’s a mathematical effect that the number of natural catastrophes has been increasing massively in recent years. If you look at Hurricanes Katrina, Harvey, Irma and Maria as well as the Japanese earthquake and the Thai floods you can see that we are getting loss events far above the previous average of around $50bn. We’re seeing nat cats causing losses up to $150bn of insured value, so as you can imagine this is a very big concern for us.” Baumann pointed out that as well as more extreme weather, other future trends could play a role. He said: “There are several drivers of disruption. The complexity of supply chains is increasing, and more complexity means more potential points of failure. Even simple goods can have as many as ten suppliers. That in turn adds to the risk that transportation and production may be disrupted.” At the same time, practices such as just-in-time delivery or lean manufacturing can also introduce risks, particularly when organisations are focused purely on reducing costs.


Figuring out programming for the cloud

The trick, says Rosoff, is to give the programmer enough of a language to express the authorization rule, but not so much freedom that they can break the entire application if they have a bug. How does one determine which language to use? Rosoff offers three decision criteria: Does the language allow me to express the complete breadth of programs I need to write? (In the case of authorization, does it let me express all of my authZ rules?); Is the language concise? (Is it fewer lines of code and easier to read and understand than the YAML equivalent?); Is the language safe? (Does it stop the programmer from introducing defects, even intentionally?). We still have a ways to go to make declarative languages the easy and obvious answer to infrastructure-as-code programming. One reason developers turn to imperative languages is that they have huge ecosystems built up around them with documentation, tooling, and more. Thus it’s easier to start with imperative languages, even if they’re not ideal for expressing authorization configurations in IaC. We also still have work to do to make the declarative languages themselves approachable for newbies. This is one reason Polar, for example, tries to borrow imperative syntax.
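
To make the trade-off concrete, here is a hedged Python sketch — not Polar or any real policy language — in which rules are plain data evaluated by a deliberately limited interpreter: expressive enough to state who may do what, but with no loops or I/O that could take down the host application.

```python
# Rules are plain data: no loops, no I/O, nothing that can hang or crash the app.
RULES = [
    {"role": "admin",  "action": "*",    "resource": "*"},
    {"role": "editor", "action": "edit", "resource": "document"},
    {"role": "viewer", "action": "read", "resource": "document"},
]

def is_allowed(role: str, action: str, resource: str) -> bool:
    """Tiny evaluator: the 'language' can only match fields, so a bad rule
    may allow or deny incorrectly, but it cannot break the application."""
    for rule in RULES:
        if (rule["role"] == role
                and rule["action"] in ("*", action)
                and rule["resource"] in ("*", resource)):
            return True
    return False

assert is_allowed("editor", "edit", "document")
assert not is_allowed("viewer", "edit", "document")
```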


A Cloud-Native Architecture for a Digital Enterprise

Cloud-native applications are all about dynamism, and microservice architecture (MSA) is critical to accomplish this goal. MSA helps to divide and conquer by deploying smaller services focusing on well-defined scopes. These smaller services need to integrate with different software as a service (SaaS) endpoints, legacy applications, and other microservices to deliver business functionalities. While microservices expose their capabilities as simple APIs, ideally, consumers should access these as integrated, composite APIs to align with business requirements. A combination of API-led integration platform and cloud-native technologies helps to provide secured, managed, observed, and monetized APIs that are critical for a digital enterprise. The infrastructure and orchestration layers represent the same functionality that we discussed in the cloud-native reference architecture. Cloud Foundry, Mesos, Nomad, Kubernetes, Istio, Linkerd, and OpenPaaS are some examples of current industry-leading container orchestration platforms. Knative, AWS Lambda, Azure Functions, Google Functions, and Oracle Functions are a few examples of functions-as-a-service (FaaS) platforms.


New streaming and digital media rules by Indian government rattles industry

So, what exactly does this rule portend? It's not entirely clear. To some who earn their bread and butter monitoring these industries, the prognosis is dire. Nikhil Pahwa, a digital rights activist and founder of prominent website MediaNama that writes about these industries, said this to the Guardian: "The fear is that with the Ministry of Information and Broadcasting -- essentially India's Ministry of Truth -- now in a position to regulate online news and entertainment, we will see a greater exercise of government control and censorship." If this becomes reality it would wreck the plans of companies such as Netflix and Amazon that have seen their fortunes rise dramatically in the last few years with the spectacular boom of smartphones and cheap data, both goldmines that keep on giving. The COVID era has only added more fuel to this trend. Eager to capitalise on this nascent market, Netflix has already pumped $400 million into the country and amassed 2.5 million precious subscribers. Consulting outfit PwC predicts that India's media and entertainment industry will grow at a brisk 10.1% clip annually to reach $2.9 billion by 2024.


Executive Perspective: Privacy Ops Meets DataOps

PrivacyOps is emerging because privacy considerations can no longer be an afterthought in an organization’s software development lifecycle -- they need to be tightly integrated. There is pressure on organizations to prove they are taking responsibility for personal data and acting in compliance with regulations, and it’s only going to increase. The real opportunity that the emergence of PrivacyOps presents is bringing security and privacy processes together, and standardizing best practices that need to be implemented across organizations. It’s far too easy for engineering, analytics, and compliance teams to talk over each other. Bringing these domains together through software will help to set expectations across the industry about privatizing data assets. Techniques such as k-anonymization, for example, are practiced by some of the best teams in healthcare, but they are hardly commonplace, despite being relatively easy to implement. To deliver compliant analytics, you need data engineers that can reliably ship the data from place to place, while implementing the appropriate transformations. However, what actually needs to be done is often not very clear to the engineering team. Data scientists want as much data as possible; compliance teams are pushing to minimize the data footprint. Regulations are in flux and imprecise.
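
For readers unfamiliar with the technique, a minimal Python sketch of a k-anonymity check follows. The quasi-identifier columns and generalized values are illustrative assumptions, and real implementations also perform the generalization and suppression, not just the verification:

```python
import pandas as pd

QUASI_IDENTIFIERS = ["zip_code", "age_band", "gender"]  # illustrative choice

def is_k_anonymous(df: pd.DataFrame, k: int) -> bool:
    """A dataset is k-anonymous if every combination of quasi-identifier
    values is shared by at least k rows, so no individual stands out."""
    group_sizes = df.groupby(QUASI_IDENTIFIERS).size()
    return bool((group_sizes >= k).all())

records = pd.DataFrame({
    "zip_code":  ["021**", "021**", "100**", "100**"],   # already generalized
    "age_band":  ["30-39", "30-39", "40-49", "40-49"],
    "gender":    ["F", "F", "M", "M"],
    "diagnosis": ["a", "b", "c", "d"],   # sensitive attribute, not generalized
})
print(is_k_anonymous(records, k=2))  # True: each quasi-identifier group has 2 rows
```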


2021 predictions for the Everywhere Enterprise

While people will eventually return to the office, they won’t do so full-time, and they won’t return in droves. This shift will close the circle on a long trend that has been building since the mid-2000s: the dissolution of the network perimeter. The network and the devices that defined its perimeter will become even less special from a cybersecurity standpoint. ... Happy, productive workers are even more important during a pandemic. Especially as on average, employees are working three hours longer since the pandemic started, disrupting the work-life balance. It’s up to employers to focus on the user experience and make workers’ lives as easy as possible. When the COVID-19 lockdown began, companies coped by expanding their remote VPN usage. That got them through the immediate crisis, but it was far from ideal. On-premises VPN appliances suffered a capacity crunch as they struggled to scale, creating performance issues, and users found themselves dealing with cumbersome VPN clients and log-ins. It worked for a few months, but as employees settle in to continue working from home in 2021, IT departments must concentrate on building a better remote user experience.



Quote for the day:

"At first dreams seem impossible, then improbable, then inevitable." -- Christopher Reeve

Daily Tech Digest - November 16, 2020

System brings deep learning to “internet of things” devices

To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight — instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. “It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine. The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time. “We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.” In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM. TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
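
The idea behind in-place depth-wise convolution can be sketched outside of TinyEngine (this is an illustration of the principle, not MIT's code): each output channel depends only on the matching input channel, so results can be written back over the input with a single channel of scratch space, roughly halving peak activation memory.

```python
import numpy as np

def depthwise_conv_inplace(x: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """Per-channel 3x3 convolution that overwrites the input buffer channel
    by channel, so peak memory is ~1x the activation plus one channel of
    scratch, instead of ~2x for separate input and output buffers.
    x: (C, H, W) activations; kernels: (C, 3, 3)."""
    C, H, W = x.shape
    scratch = np.empty((H, W), dtype=x.dtype)   # one channel of scratch, not C
    for c in range(C):
        scratch.fill(0)
        padded = np.pad(x[c], 1)                # zero-pad to keep H x W output
        for i in range(3):
            for j in range(3):
                scratch += kernels[c, i, j] * padded[i:i + H, j:j + W]
        x[c] = scratch   # safe: no other output channel reads input channel c
    return x

acts = np.random.rand(16, 32, 32).astype(np.float32)
ks = np.random.rand(16, 3, 3).astype(np.float32)
out = depthwise_conv_inplace(acts, ks)  # `out` is the same buffer as `acts`
```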


Beyond the Database, and Beyond the Stream Processor: What's the Next Step for Data Management?

The breadth of database systems available today is staggering. Something like Cassandra lets us store a huge amount of data for the amount of memory the database is allocated; Elasticsearch is different, providing a rich, interactive query model; Neo4j lets us query the relationship between entities, not just the entities themselves; things like Oracle or PostgreSQL are workhorse databases that can morph to different types of use case. Each of these platforms has slightly different capabilities that make it more appropriate to a certain use case but at a high level, they’re all similar. In all cases, we ask a question and wait for an answer. This hints at an important assumption all databases make: data is passive. It sits there in the database, waiting for us to do something. This makes a lot of sense: the database, as a piece of software, is a tool designed to help us humans — whether it's you or me, a credit officer, or whoever — interact with data.  But if there's no user interface waiting, if there's no one clicking buttons and expecting things to happen, does it have to be synchronous? In a world where software is increasingly talking to other software, the answer is: probably not.
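
A tiny Python sketch of that contrast, with invented names: a passive store answers only when asked, while an event-driven design pushes changes to registered handlers with no caller blocked waiting.

```python
# Passive data: the caller asks and waits (the classic database interaction).
accounts = {"alice": 120, "bob": -30}

def query_balance(name):
    return accounts[name]

# Active data: software registers interest and is notified on change.
subscribers = []

def subscribe(handler):
    subscribers.append(handler)

def update_balance(name, amount):
    accounts[name] = amount
    for handler in subscribers:
        handler(name, amount)   # pushed to consumers; nobody blocks waiting

subscribe(lambda name, bal: print(f"ALERT: {name} overdrawn") if bal < 0 else None)
print(query_balance("alice"))   # synchronous pull: 120
update_balance("carol", -10)    # push: triggers the alert with no query issued
```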


Data warehousing workloads at data lake economics with lakehouse architecture

Data lakes in the cloud have high durability, low cost, and unbounded scale, and they provide good support for the data science and machine learning use cases that many enterprises prioritize today. But, all the traditional analytics use cases still exist. Therefore, customers generally have, and pay for, two copies of their data, and they spend a lot of time engineering processes to keep them in sync. This has a knock-on effect of slowing down decision making, because analysts and line-of-business teams only have access to data that’s been sent to the data warehouse rather than the freshest, most complete data in the data lake. ... The complexity from intertwined data lakes and data warehouses is not desirable, and our customers have told us that they want to be able to consolidate and simplify their data architecture. Advanced analytics and machine learning on unstructured and large-scale data are one of the most strategic priorities for enterprises today, – and the growth of unstructured data is going to increase exponentially – therefore it makes sense for customers to think about positioning their data lake as the center of data infrastructure. However, for this to be achievable, the data lake needs a way to adopt the strengths of data warehouses.


What to Learn to Become a Data Scientist in 2021

Apache Airflow, an open source workflow management tool, is rapidly being adopted by many businesses for the management of ETL processes and machine learning pipelines. Many large tech companies such as Google and Slack are using it, and Google even built its Cloud Composer tool on top of this project. I am noticing Airflow being mentioned more and more often as a desirable skill for data scientists in job adverts. As mentioned at the beginning of this article, I believe it will become more important for data scientists to be able to build and manage their own data pipelines for analytics and machine learning. The growing popularity of Airflow is likely to continue at least in the short term, and as an open source tool, it is definitely something that every budding data scientist should learn. ... Data science code is traditionally messy, not always well tested and lacking in adherence to styling conventions. This is fine for initial data exploration and quick analysis, but when it comes to putting machine learning models into production, a data scientist will need a good understanding of software engineering principles. If you are planning to work as a data scientist, it is likely that you will either be putting models into production yourself or at least be heavily involved in the process.
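
For readers who have not yet tried Airflow, a minimal DAG sketch follows; the imports use the Airflow 2.x module layout (the 1.10.x paths differ slightly), and the DAG id, schedule and task bodies are placeholder assumptions.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task bodies; real tasks would pull, clean and model data.
def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and reshape the data")

def train():
    print("retrain the model on fresh data")

with DAG(
    dag_id="daily_model_refresh",        # illustrative name
    start_date=datetime(2020, 11, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_train = PythonOperator(task_id="train", python_callable=train)

    # Dependencies read left to right: the ETL steps feed model training.
    t_extract >> t_transform >> t_train
```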


WhatsApp Pay: Game changer with new risks

The payment instruction itself is a message to the partner bank, which then triggers a normal UPI transaction from the customer’s designated UPI bank to the destination partner bank through the National Payments Corporation of India (NPCI). The destination partner bank forwards the payment to the addressee’s default UPI bank registered with WhatsApp. A confirmation of credit is also sent through WhatsApp and reaches the message box of the recipient. It is possible that at either end, the WhatsApp partner bank may not be the customer’s bank. Hence, there may be the involvement of four banks, the NPCI and WhatsApp in completing the transaction. As far as the user is concerned, the system is managed by WhatsApp and none of the other players is visible. Though WhatsApp is not licensed to undertake UPI transactions directly, it engages the services of its partner banks to initiate the transaction. As these partner banks are not bankers for the customers, they engage two more banks to assist them. Finally, NPCI acts as the agent of the two banks through which the money actually passes through to the right bank. Thus, there is a chain of principal–agent transactions, and the roles of the customer, WhatsApp, banks, etc., need to be clarified.


New Circuit Compression Technique Could Deliver Real-World Quantum Computers Years Ahead of Schedule

“By compressing quantum circuits, we could reduce the size of the quantum computer and its runtime, which in turn lessens the requirement for error protection,” said Michael Hanks, a researcher at NII and one of the authors of a paper, published on November 11, 2020, in Physical Review X. Large-scale quantum computer architectures depend on an error correction code to function properly, the most commonly used of which is the surface code and its variants. The researchers focused on the circuit compression of one of these variants: the 3D-topological code. This code behaves particularly well for distributed quantum computer approaches and has wide applicability to different varieties of hardware. In the 3D-topological code, quantum circuits look like interlacing tubes or pipes, and are commonly called “braided circuits.” The 3D diagrams of braided circuits can be manipulated to compress and thus reduce the volume they occupy. Until now, the challenge has been that such “pipe manipulation” is performed in an ad hoc fashion. Moreover, there have only been partial rules for how to do this. “Previous compression approaches cannot guarantee whether the resulting quantum circuit is correct,” said co-author Marta Estarellas, a researcher at NII.


Microsoft Warns: A Strong Password Doesn’t Work, Neither Does Typical MFA 

“Remember that all your attacker cares about is stealing passwords... That’s a key difference between hypothetical and practical security.” — Microsoft’s Alex Weinert In other words, the bad guys will do whatever is necessary to steal your password, and a strong password isn’t an obstacle when criminals have a lot of time and a lot of tools at their disposal. ... MFA based on phones, aka the public switched telephone network or PSTN, is not secure, according to Weinert. (What is typical MFA? It’s when, for example, a bank sends you a verification code via a text message.) “I believe they’re the least secure of the MFA methods available today,” Weinert wrote in a blog. “When SMS (texting) and voice protocols were developed, they were designed without encryption... What this means is that signals can be intercepted by anyone who can get access to the switching network or within the radio range of a device,” Weinert wrote. Solution: use app-based authentication, for example, Microsoft Authenticator or Google Authenticator. An app is safer because it doesn’t rely on your carrier. The codes are in the app itself and expire quickly.
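
Those authenticator apps typically implement RFC 6238 time-based one-time passwords, which involve no carrier at all; here is a stdlib-only Python sketch of the algorithm (the base32 secret is a well-known demo value, not one to reuse):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password, as used by authenticator apps.
    The shared secret lives only on the device and the server, so there is
    no SMS or voice channel for an attacker to intercept."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                  # codes expire every 30s
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints the current 6-digit code
```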


Defining data protection standards could be a hot topic in state legislation in 2021

Once the immediacy of the pandemic dissipates and the political heat cools, cybersecurity issues will likely surface again in new or revived legislation in many states, even if weaved throughout other related matters. It’s difficult to separate cybersecurity per se from adjoining issues such as data privacy, which has generally been the biggest topic to involve cybersecurity issues at the state level over the past four years. “You really don’t have this plethora of state cybersecurity laws that would be independent of their privacy law brethren,” Tantleff said. According to the National Conference of State Legislatures, at least 38 states, along with Washington, DC, and Puerto Rico, had introduced or considered more than 280 bills or resolutions that deal significantly with cybersecurity as of September 2020. Setting aside privacy and some grid security funding issues, there are two categories of cybersecurity legislative issues at the state level to watch during 2021. The first and most important is spelling out more clearly what organizations need to do to meet security and privacy regulations. The second is whether states will pick up election security legislation left over from the 2020 sessions.


The Case for Combining Next Generation Tech with Human Oversight

Human error is the main cause of security breaches, wrong data interpretation, mistaken insights, and a variety of other damning experiences the insights industry has had to wade through ever since its conception. Zooming out to take a wider look, human error is the cause of flawed elections, aviation accidents, and cybersecurity issues, but also of scientific breakthroughs across the world. While some mistakes yield true results, most have dangerous consequences that could have been avoided if we were more careful. To err is human, but in an industry where mistakes have real-world consequences, to err is to potentially cost a business its life. If we stick with the artificial intelligence and automation example, automated processes built on next generation technology are the most poignant example of humans trying to make up for their mistakes, and can help minimise human error at all stages ... The main benefit of combining human oversight with this next generation technology is that we can catch and fix any bugs that arise before they harm the research process and the projects that rely on said technology. But we need to be wary that humans cannot catch every mistake, and when one slips through, that is when oversight takes on a whole new, disappointing meaning.


Important Considerations for Pushing AI to the Edge

The decision on where to train and deploy AI models can be determined by balancing considerations across six vectors: scalability, latency, autonomy, bandwidth, security, and privacy. In terms of scalability, in a perfect world, we’d just run all AI workloads in the cloud where compute is centralized and readily scalable. However, the benefits of centralization must be balanced out with the remaining factors that tend to drive decentralization. For example, if you depend on edge AI for latency-critical use cases and for which autonomy is a must, you would never make a decision to deploy a vehicle’s airbag from the cloud when milliseconds matter, regardless of how fast and reliable your broadband network may be under normal circumstances. As a general rule, latency-critical applications will leverage edge AI close to the process, running at the Smart and Constrained Device Edges as defined in the paper. Meanwhile, latency-sensitive applications will often take advantage of higher tiers at the Service Provider Edge and in the cloud because of the scale factor. In terms of bandwidth consumption, the deployment location of AI solutions spanning the User and Service Provider Edges will be based on a balance of the cost of bandwidth, the capabilities of devices involved and the benefits of centralization for scalability.



Quote for the day:

"If you want to do a few small things right, do them yourself. If you want to do great things and make a big impact, learn to delegate." -- John C. Maxwell

Daily Tech Digest - November 14, 2020

Data Scientist vs Business Analyst. Here’s the Difference.

Perhaps the biggest similarity between a Business Analyst and a Data Scientist lies in the words used to describe the roles. A Data Scientist is expected to perform business analytics in their role, as it essentially dictates their Data Science goals. A Business Analyst can expect to focus not on Machine Learning algorithms to solve business problems, but instead on surfacing anomalies, shifts and trends, and key points of interest for a business. ... Of course, there are some key differences between these two roles. One of the biggest differences is the use of Machine Learning by Data Scientists only. Another difference is that a Business Analyst can expect to communicate more with stakeholders than a Data Scientist would (sometimes Data Scientist work can be more heads-down and not involve as many meetings). Here is a summary of the differences you can expect to find between these positions. ... These two roles share goals with one another. Each requires a deep dive into data, with similar tools as well. The process of communication is similar, too — working with stakeholders from the company to go over the business problem, solution, results, and impact. Here is a summary of the key similarities between a Data Scientist and a Business Analyst.


CISA Director Expects to Be Fired Following Secure Election

US officials delivered a statement emphasizing the security of this year's election as news of these firings began to unfold. Members of the Election Infrastructure Government Coordinating Council (GCC) Executive Committee and the Election Infrastructure Sector Coordinating Council (SCC) say this election "was the most secure in American history." Across the country, they add, officials are reviewing the election process, and states with close calls will recount ballots. "This is an added benefit for security and resilience," they wrote. "This process allows for the identification and correction of any mistakes or errors. There is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised." Security measures included pre-election testing, state certification of voting equipment, and the US Election Assistance Commission's (EAC) certification of voting equipment contribute to confidence in voting systems used in 2020, they said. Officials acknowledged the "many unfounded claims and opportunities for misinformation" about the election process and emphasize they have the "utmost confidence" in the election's security and integrity.


Security Awareness: Preventing Another Dark Web Horror Story

Our research from last year has already revealed that 1 in 4 people would be willing to pay to get their private information taken down from the dark web – and this number jumps to 50% for those who have experienced a hack. While only 13% have been able to confirm whether a company with which they’ve interacted has been involved in a breach, the reality is it’s much more likely than you’d think – since 2013, over 9.7 billion data records have been lost or stolen, and this number is only rising. Most of us would have no way of knowing whether our information is up for sale online. However, solutions now exist which proactively check for email addresses, usernames and other exposed credentials against third-party databases, alerting users should any leaked information be found. ...  Detection is undoubtedly pivotal in keeping ahead of fraudsters, but the foundations begin with awareness. The majority of breaches take place as a result of simple mistakes which can be easily addressed – using your Facebook password at work or failing to change the default settings of connected devices. But at the same time, businesses must stress the importance of being cyber-aware and foster a culture of security awareness throughout the organisation.


14 Finance Experts Share Their Biggest Fintech Predictions For 2021

There will be more “bank in a box” tech layers between fintechs and banks to allow spinning up partnerships on a faster timeline. I also see more back-end companies automating essential compliance capabilities such as Know Your Customer and regulatory change management. I also think we will see many more “normal” companies offering financial services, as well as growing consolidation among fintech companies. – Jeanette Quick ... A big trend will be a renewed need for financial literacy. Covid-19 forced everyone to think about both their long- and short-term financial outlooks. What we have seen in the auto refinancing sector is that people don’t even know you can refinance a car. You’ll find consumers who want to sharpen their finances and companies trying to reach and educate them. – Tom Holgate ... The rise of insurance tech will revolutionize the health insurance industry, with innovations ranging from digital health records to fitness tracking. The rise of smart contracts gives insurance companies a way to update their infrastructure and cut long-term costs while providing consumers with superior service. – Joseph Safina


How to Keep Up With Big Tech's Hiring Spree

If you’re realizing you need more tech skills to handle the new digital demands of your industry, look first at your existing workforce. Instead of spending time and money on hiring, look for ways to upskill employees who are interested in a more technical career path and have demonstrated an aptitude for learning. For example, someone in an administrative role who has quickly adapted to remote work might be a good candidate for a scrum master or project management role. If you don’t have the ability to train employees in-house, consider a partnership. ... Hiring, in general, is starting to pick up again. When the pandemic finally subsides and companies begin hiring in full force, most will be looking for talent in the same places. Instead of sourcing recent college grads, look for graduates of coding boot camps and other alternative skilling programs, or target self-taught learners. This crisis has demonstrated that online learning isn’t just possible; it’s a critical part of how today’s young people develop. The talent acquisition team at IBM has made a point of targeting so-called “new collar” workers to bolster its 360,000-employee workforce. The company has developed a robust learning program for people both inside and outside the company who are interested in learning new technical skills.


Digital Robber Barons and Digital Vertical Integration

These Robber Barons leveraged vertical integration to create “economic moats” that locked out and blocked potential competitors. The term “economic moat”, popularized by Warren Buffett, refers to a business’s ability to maintain competitive advantages that protect its long-term profits and market share from competing firms while it charges monopoly-like prices to its customers and imposes onerous terms on its suppliers. Just like a medieval castle’s moat, the economic moat serves to protect the riches of those inside the castle from outsiders. Andrew Carnegie is an example of a Robber Baron who used vertical integration to create economic moats for Carnegie Steel. Carnegie Steel (later U.S. Steel) became the dominant steel supplier in the U.S. through vertical integration of the steel value chain. Carnegie owned not only the mills that produced the different grades and types of steel, but also the iron ore mines that supplied the main ingredient in steel production, the coke and coal mines that fueled the blast furnaces in which steel was produced, and the railroads and shipping lines that transported the iron ore and coke to the steel mills and the finished steel products to customers.


Building a secure hybrid cloud

If all your computing assets are stored in a single location that then experiences an extended power, phone or internet outage, a natural disaster, or a terrorist attack, your business essentially grinds to a halt. Many larger organizations invest in constructing and maintaining multiple data centers for just that reason; for most small businesses, this added cost is beyond their means. Cloud technology removes this challenge by placing the business continuity requirement entirely on the provider. Along the same lines as business continuity, the cloud’s ubiquity gives businesses a competitive advantage over companies that still rely on legacy on-premises hardware-based solutions. Case in point: I recently worked with a company that had the phone lines at one of its locations go down. It took 3 days for 2 different phone companies to figure out whose fault it was and finally fix the problem. During those 3 days, a busy office was completely down, with no phone service whatsoever. This kind of service level might have been acceptable in 1992; in the 2020s it is beyond unacceptable. A cloud communications provider with a guaranteed service-level agreement would have ensured that such a serious outage never happened.


Testing in Production 101

To start, deploy your first feature to production with the default rule off for safety. This ensures that only targeted users will have access to the feature. Next, run your automation scripts in production with targeted test users, along with the regression suite, to guarantee previously released features are not affected by your changes. With the feature flag off and only your targeted team members having access to the feature, you will officially be testing in production. This is the time to resolve any bugs and validate all proper functionality. It’s important to remember that because end users do not yet have access to your feature, they will not be impacted if anything does go wrong. After you’ve resolved the issues that appeared in your first test and you’re confident the feature will work properly, it’s time to use a canary release to open up the feature to 1% of your user base. The next few days are spent monitoring error logs and growing your confidence in the feature until you feel it’s appropriate to increase the percentage of users who can access it. Once you reach 100% of users and you know without a doubt that the feature works, it’s time to turn on the default rule for the feature.
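
Concretely, here is a minimal sketch of how this kind of staged rollout is often implemented. The class and names are hypothetical; hosted feature-flag services provide the same mechanics out of the box.

```python
# A minimal sketch of a feature flag with targeted users, a percentage
# canary, and a default rule. Hashing the user ID keeps each user's
# assignment stable as the rollout percentage grows.
import hashlib

class FeatureFlag:
    def __init__(self, name, targeted_users=None, rollout_percent=0, default_on=False):
        self.name = name
        self.targeted_users = set(targeted_users or [])  # e.g. internal testers
        self.rollout_percent = rollout_percent           # canary slice, 0-100
        self.default_on = default_on                     # the "default rule"

    def is_enabled(self, user_id: str) -> bool:
        if user_id in self.targeted_users:   # test users always see the feature
            return True
        if self.default_on:                  # flipped only at full confidence
            return True
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < self.rollout_percent

flag = FeatureFlag("new-checkout", targeted_users={"qa-alice", "qa-bob"})
assert flag.is_enabled("qa-alice") and not flag.is_enabled("customer-42")
flag.rollout_percent = 1      # canary release: ~1% of users
flag.default_on = True        # final step: default rule on for everyone
```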


Digital Twins: Bridging the Physical and Digital World

In short, a digital twin is a precise replica of a physical object or process, kept current through real-time updates. It uses virtual reality, 3D data, and graphics to create virtual buildings and other models of products, services, systems, processes, and so on. According to Thomas Kaiser, SAP Senior Vice President of IoT, digital twins are “becoming a business imperative, covering the entire lifecycle of an asset or process and forming the foundation for connected products and services.” ... The concept of a digital twin has been around since 2002 but was overshadowed by IoT. It has since made a resurgence, and in 2017 it was part of Gartner’s Top 10 Strategic Technology Trends. IoT has made digital twins cost-effective to implement, and they have become imperative in today’s business, combining the virtual and physical worlds to enable data analysis and system monitoring. Digital twins also help forestall problems before they occur, avoid interruptions, open up new opportunities, and support planning for the future through simulations. They draw on real-world data to create simulations that predict how a production process will behave, incorporating IoT, Industry 4.0, Artificial Intelligence (AI), and software analytics to deliver better results.
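
In code, the core idea is small: a virtual object that mirrors live sensor readings from its physical counterpart and reasons about them. The pump, fields and threshold below are hypothetical, a minimal sketch rather than any particular platform’s API.

```python
# A minimal digital-twin sketch: a virtual pump that mirrors real-time
# sensor readings and flags trouble before it becomes a failure.
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    asset_id: str
    max_temp_c: float = 80.0   # assumed safe operating limit
    temp_c: float = 0.0
    rpm: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, temp_c: float, rpm: float) -> None:
        """Mirror a real-time reading streamed from the physical asset."""
        self.temp_c, self.rpm = temp_c, rpm
        self.history.append((temp_c, rpm))   # retained for simulation/analysis

    def needs_maintenance(self) -> bool:
        """Forestall a problem: flag the asset as it nears its limit."""
        return self.temp_c > 0.9 * self.max_temp_c

twin = PumpTwin("pump-17")
twin.ingest(temp_c=75.2, rpm=1450)           # e.g. from an IoT sensor feed
if twin.needs_maintenance():
    print(f"{twin.asset_id}: schedule inspection before failure")
```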


Self-Service Security for Developers Is the DevSecOps Brass Ring

The ability of organizations to fold self-service security functionality into these internal platforms tends to be highly correlated with the degree to which security integration has been achieved across the software delivery life cycle. The survey asked respondents to pick the phases of the life cycle in which security is integrated: requirements, design, building, testing, and deployment. It found the share of organizations with two or more phases integrated has gone up from 63% last year to 70% this year, while the share with complete integration now stands at 12%. As the report explains, the self-service offering of security and compliance validation is intertwined with the push for greater integration. Among those with three to four phases of development integrated with security, 42% offer self-service security and compliance validation, and 58% of those that have achieved full security integration across all five phases say they provide self-service security. Companies that have fully integrated security are more than twice as likely to offer self-service security as firms with no security integration.
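
In practice, “self-service security and compliance validation” means a check developers can run themselves, on demand, rather than waiting on a security team. Below is a minimal, hypothetical sketch of such a command; the two checks are placeholders (pip-audit is a real dependency auditor, while the secret scan is a crude illustrative grep).

```python
# A minimal sketch of a self-service validation command a platform team
# might expose to developers. Check contents are illustrative placeholders.
import subprocess
import sys

def dependency_audit() -> bool:
    """Audit third-party packages for known vulnerabilities via pip-audit."""
    return subprocess.run(["pip-audit"]).returncode == 0

def secret_scan() -> bool:
    """Crude scan for committed credentials; real tools go much further."""
    result = subprocess.run(
        ["git", "grep", "-nE", "(AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY)"],
        capture_output=True,
    )
    return result.returncode != 0   # git grep exits non-zero when nothing matches

if __name__ == "__main__":
    failures = [name for name, check in
                [("dependency audit", dependency_audit), ("secret scan", secret_scan)]
                if not check()]
    if failures:
        sys.exit("self-service validation failed: " + ", ".join(failures))
    print("all security and compliance checks passed")
```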



Quote for the day:

"When I finally got a management position, I found out how hard it is to lead and manage people." -- Guy Kawasaki