Daily Tech Digest - April 23, 2020

Indian IT desperately needed a new business model and coronavirus gave it one

Some IT companies have implemented "employee productivity trackers like webcam-based movement capture, hourly timesheet entry, tracking of keyboards, and so on, to ensure employees are working at home," Yugal Joshi, vice-president at Texas-based consultancy Everest Group, told Quartz. "This indicates a deep-rooted malaise in Indian IT/ITes industry where the senior management generally mistrusts people," he added. Second, unlike the retail or manufacturing sectors that cannot operate under current social distancing norms, the top-tier Indian IT companies and their mid-sized brethren are responsible for keeping the lights on for a large collection of global companies -- some of which depend on people every second of the day. This includes banks, utility companies, retailers and, of course, pharmaceuticals. With the ongoing coronavirus outbreak, all of these industries are now being serviced from the apartments and houses of India's IT workforce, which, as you can imagine, is a supremely difficult and exasperating task for everyone involved. Most of IT's clients have ironclad regulatory and privacy riders that have needed to be tweaked considerably in light of the coronavirus.



How a basic cross-training program can ease disruptions on the IT team

If the coronavirus hasn't disrupted your business operations yet, there's a good chance it will soon. This first wave of illness will not be the last time the coronavirus disrupts daily business operations. First companies had to adjust to remote work for all employees. The next challenge may be filling in for colleagues who are out sick or caring for family members or friends who are ill. A cross-training program can make this transition go smoothly. Sam Maley, an IT operations manager at Bailey & Associates, an IT consultancy, said cross-training can minimize disruptions and reduce stress levels due to absenteeism. "Cross-training programs are designed to build versatility and skill overlaps in your team members," he said. Jeff Fleischman, CMO at the consulting firm Altimetrik, said cross-training needs to be part of business continuity plans. "To receive buy-in from top management, quantify the impact disruption has on the business such as revenue loss, reputational risk, defaulting on contractual obligations, and failing to meet regulatory requirements, and then explain how cross-training would eliminate these risks," Fleischman said.


Kubernetes vs. VMware: Drive the choice with IT architecture


The choice between running containers in VMs and running VMs in containers is an architectural design decision, because there's a line of thought that containers are the ideal abstraction for multi-cloud application delivery. Though VMware assures admins that containers and VMs are the same in vSphere, it's difficult to draw a similar comparison for Kubernetes and VMs. Kubernetes is an orchestration product that admins use primarily for containers. In theory, Kubernetes could manage compute resources other than containers. However, with containers as the primary abstraction layer, traditional VM management tools don't map directly. Networking is where this mismatch shows most clearly, and KubeVirt could be the answer. KubeVirt uses Kubernetes network architecture and plugins rather than hypervisor abstractions, such as vSwitches, to manage networking. As a result, products must switch to network management based on Kubernetes namespaces. That's not necessarily a bad thing; it's just an overall change from a VM-centric operating model to a container-centric operating model.
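That namespace-centric model can be made concrete. In Kubernetes, traffic rules are expressed against namespaces and pod labels rather than vSwitch constructs; a minimal NetworkPolicy sketch (all names here are illustrative, not from the article):

```yaml
# Illustrative NetworkPolicy: the namespace, not a vSwitch, is the
# unit of network management. Only pods in namespaces labelled
# team=frontend may reach the selected workloads.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: app-vms
spec:
  podSelector:
    matchLabels:
      app: legacy-vm   # could be a KubeVirt-managed VM pod
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: frontend
```

A KubeVirt VM lands in exactly this model because each VM runs inside a pod, so the same selectors apply to VMs and containers alike.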



Researchers Release Open Source Counterfactual Machine Learning Library

Figure 1: Three Counterfactuals for Loan Application Scenario
Exactly what machine learning counterfactuals are, and the reasons why they are important, are best explained by example. Suppose a loan company has a trained ML model that is used to approve or decline customers' loan applications. The predictor variables (often called features in ML terminology) are things like annual income, debt, sex, savings, and so on. A customer submits a loan application. Their income is $45,000, their debt is $11,000, their age is 29, and their savings are $6,000. The application is declined. A counterfactual is a change to one or more predictor values that results in the opposite result. For example, one possible counterfactual could be stated in words as, "If your income was increased to $60,000 then your application would have been approved." In general, there will be many possible counterfactuals for a given ML model and set of inputs. Two other counterfactuals might be, "If your income was increased by $50,000 and your debt was decreased to $9,000 then your application would have been approved" and, "If your income was increased to $48,000 and your age was changed to 36 then your application would have been approved." Figure 1 illustrates three such counterfactuals for a loan scenario.
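The counterfactual search itself can be sketched in a few lines. The scoring rule below is a hypothetical stand-in for a trained model (it is not the released library's API), and the search simply raises one feature until the decision flips:

```python
# Toy counterfactual search: find the smallest income increase that
# flips a declined loan application to approved. The scoring rule is
# a hypothetical stand-in for a trained ML model.

def approve(income, debt, savings):
    score = 0.5 * income - 1.2 * debt + 0.8 * savings
    return score >= 20_000

def income_counterfactual(income, debt, savings, step=1_000, limit=200_000):
    """Raise income in fixed steps until the model's decision flips."""
    new_income = income
    while new_income <= limit:
        if approve(new_income, debt, savings):
            return new_income
        new_income += step
    return None  # no counterfactual found within the search limit

applicant = dict(income=45_000, debt=11_000, savings=6_000)
print("approved:", approve(**applicant))
print("income needed:", income_counterfactual(**applicant))
```

Real counterfactual libraries generalize this idea, searching over several features at once while keeping the proposed changes as small and plausible as possible.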


What is value stream mapping? A lean technique for improving business processes

Before you can start building a value stream map, you need to objectively evaluate your organization’s business processes, products and systems. Start by talking to leadership, department heads and other key stakeholders who can give you more insight into what can be improved. You’ll need to get hands-on experience with the process, product or system yourself and have other employees walk you through their part. It’s important to collect as much data as possible — for example, any inefficiencies in the process, how many workers are involved, what resources are used and any downtime. Any potentially relevant or noteworthy data is helpful in fleshing out your final VSM flow chart and achieving insights into what can be refined or improved. You’ll then create two separate VSM flow charts — a current state value stream map and a future state value stream map. Your current state VSM will be used to establish how the process currently runs and functions in the business. This is where you will demonstrate issues, significant findings and establish key requirements. The future state VSM, on the other hand, focuses on what your process will look like once your organization has completed all of the necessary improvements.
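One number a current state map typically surfaces is process cycle efficiency: value-added time as a share of total lead time (value-added plus waiting). A minimal sketch, with step names and durations invented purely for illustration:

```python
# Toy current-state value stream: (step, value_added_min, wait_min).
# Step names and durations are invented for illustration.
steps = [
    ("intake",   15, 120),
    ("review",   30, 480),
    ("approval", 10, 240),
]

value_added = sum(va for _, va, _ in steps)
lead_time = sum(va + wait for _, va, wait in steps)
efficiency = value_added / lead_time

print(f"lead time: {lead_time} min, value-added: {value_added} min")
print(f"process cycle efficiency: {efficiency:.1%}")
```

A future state map then targets the biggest wait times, since those, not the value-added steps, usually dominate the lead time.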


Ethernet consortium announces completion of 800GbE spec 

Based on many of the technologies used in the current top-end 400 Gigabit Ethernet protocol, the new spec is formally known as 800GBASE-R. The consortium that designed it (then known as the 25 Gigabit Ethernet Consortium) was also instrumental in developing the 25, 50, and 100 Gigabit Ethernet protocols and includes Broadcom, Cisco, Google, and Microsoft among its members. The 800GbE spec adds new media access control (MAC) and physical coding sublayer (PCS) methods, which tweak these functions to distribute data across eight physical lanes running at a native 106.25Gbps. (A lane can be a copper twisted pair or, in optical cables, a strand of fiber or a wavelength.) The 800GBASE-R specification combines two 400GbE PCS instances to create a single MAC that operates at a combined 800Gbps. And while the focus is on eight 106.25G lanes, it's not locked in: it is possible to run 16 lanes at half the speed, or 53.125Gbps. The new standard offers half the latency of the 400GbE specification, and it also cuts the forward error correction (FEC) overhead on networks running at 50Gbps, 100Gbps, and 200Gbps in half, reducing the packet-processing load on the NIC.
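The lane arithmetic is easy to sanity-check: eight lanes at 106.25Gbps and sixteen lanes at 53.125Gbps yield the same 850Gbps aggregate signalling rate, with the headroom above the 800Gbps MAC rate going to coding and FEC overhead:

```python
# Sanity-check of the 800GBASE-R lane options described above.
lanes_8 = 8 * 106.25    # Gbps aggregate, eight lanes
lanes_16 = 16 * 53.125  # Gbps aggregate, sixteen lanes at half speed

print(lanes_8, lanes_16)  # both 850.0
```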


Application performance for remote workers becomes primary network issue for businesses


In addition to the top-line finding of dealing with complexity and performance, the study also highlighted that cost had become less of an issue for respondents, who also cited significant investment in automation, security, cloud connectivity and the potential of 5G. Drilling deeper into the pressing issues for firms, Aryaka found that as the number of remote workers increases across the globe, productivity and remote application performance have become more important for organisations across Europe, the Middle East and Africa (EMEA). Some 45% of UK businesses noted that slow application performance led to a poor user experience for remote and mobile users, and that it was a significant issue faced by IT and support teams. Accessing and integrating cloud and software-as-a-service (SaaS) applications was one of the most pressing issues for UK IT departments, cited by 39%.


Ransomware is now the biggest online menace you need to worry about - here's why


One of the reasons why ransomware attacks have risen so much is that cyber criminals increasingly view them as the simplest and quickest means of making money from compromised networks. With ransomware, attackers can lock down an organisation's entire network and demand a bitcoin payment in exchange for the decryption key. Ransomware attacks are often successful because organisations opt to pay the ransom demand, viewing it as the quickest and easiest way to restore functionality to the network, despite authorities warning never to give in to the demands of extortionists. These ransom demands commonly reach six-figure sums and, because the transfer is made in bitcoin, it's relatively simple for the criminals to launder it without it being traced back to them. "The 'beauty' of the ransomware model is you only need to write the ransomware once and its potential to infect is only limited by its reach, which with the internet is unlimited," Ed Williams, EMEA director of SpiderLabs, the research division at Trustwave, told ZDNet.


Remote business continuity techniques to implement now


This is not just an issue when facing a pandemic. If your business continuity plan addresses only short-term disruptions, such as those that last less than a month, it may not be prepared for an extended outage. Your technology disaster recovery plan may need to be activated if outages occur because too few IT staff are available, or because a shortage of vendor personnel causes technology disruptions. Fortunately, many data centers are designed to operate without human intervention or with remote access to system administration functions. Technology vendors frequently use managed IT resources, such as cloud-based systems, to support their service offerings. This reduces the likelihood of outages as long as the managed service providers are able to keep their systems operational. Because many organizations use remotely hosted applications, users can keep using those systems so long as their vendors are able to keep their operations working. The real challenge is for organizations with mostly locally hosted systems and databases to manage those assets remotely.


New Enterprise Graph Framework for Data Scientists Leverages Machine Learning

The new Neo4j for Graph Data Science framework is designed to enable data scientists to operationalize better analytics and machine learning models that infer behavior based on connected data and network structures, Frame described. The framework, she said in a statement announcing the product release, is intended to provide the most expeditious way to generate better predictions. "A common misconception in data science is that more data increases accuracy and reduces false positives," she explained. "In reality, many data science models overlook the most predictive elements within data -- the connections and structures that lie within. Neo4j for Graph Data Science was conceived for this purpose -- to improve the predictive accuracy of machine learning, or answer previously unanswerable analytics questions, using the relationships inherent within existing data."



Quote for the day:


"Leadership is the wise use of power. Power is the capacity to translate intention into reality and sustain it." -- Warren Bennis


Daily Tech Digest - April 22, 2020

Cisco integrates SD-WAN connectivity with Google Cloud

The Cisco/Google platform is important because software- and infrastructure-as-a-service (SaaS and IaaS) offerings have been driving SD-WAN implementations in the past year, experts say. “One of the key drivers of SD-WAN has been the increasing consumption of cloud services in the enterprise, across both IaaS and SaaS applications,” said Rohit Mehra, vice president, network infrastructure at IDC. “With some of the largest public cloud providers playing an increasing role in how these enterprise apps are consumed and delivered, and bringing their vast global networks to bear, they will increasingly have a role to play with how WANs are architected going forward.” For enterprises, one of the key takeaways from this announcement is that “SD-WANs will now be able to play a better functional role in the delivery of cloud services such as IaaS and SaaS, and likewise, the large public-cloud purveyors will benefit from providing a stronger value proposition towards multi-cloud deployments,” Mehra said. "Secondly, enterprises will benefit in terms of extending policy and governance beyond applications to other attributes such as locations/geo and multiple clouds.”



The new normal: A step-by-step guide for the enterprise

From a business perspective, we need to identify and understand the negative effects that occurred during the lockdown. What additional damage will likely occur in the short and long terms? This can range from relatively minor problems, such as a slowdown of some customer deliveries or lack of materials for manufacturing, to a complete shutdown of some operations due to on-premises systems that could not be maintained or fixed during the lockdown. You need to assign dollar amounts to each issue. Keep in mind that some of these will be hard costs, meaning sales and billing. Others will be soft costs, such as reputation. What points hurt the business the most? We need this information to prioritize triage. For most enterprises, this step will immediately identify the need to migrate some assets to cloud. The migration will typically target existing on-premises systems that managed to limp through the crisis. Based on historical migration data, the most common move will involve a “lift and shift” of resources, such as storage and compute, to a public cloud provider. Most enterprises will opt to refactor the applications at a later date; a few will refactor as the applications migrate.


Here are six tech roles companies want to fill now, despite the coronavirus lockdown


"The fact that recruitment is still continuing with relative strength in IT is perhaps unsurprising due to the on-going need across most sectors to conduct operations remotely," said Ann Swain, CEO of APSCo. John Gaughan, managing director of technology recruitment firm Finlay James, said he has a number of clients who are hiring and using remote on-boarding when filling SaaS tech sales roles and technology leadership positions. Recruiters are switching from in-person interviews to video meetings with candidates, and in some cases, with everyone working from home, it may be some time before new recruits actually meet the people they are working with. The APSCo report noted that recruitment for marketing has also held up surprisingly well, which it said is probably down to businesses ramping up their digital marketing and communications activities. There has also been an increase in roles involving employee engagement. "With many teams now working from home, the challenge of keeping remote employees engaged and operating as a cohesive unit has never been greater," the report said.


Contactless Payments: Healthy COVID-19 Defense


From a fraud-fighting standpoint, compared with swiping a card and signing a paper receipt, contactless is much more secure. And while some call these capabilities "tap and go," in reality, there's no contact required: You just have to wave your card or compatible smartphone close to the card reader until it beeps. Cards with this capability began to be rolled out in the U.K. in 2008, and the vast majority of payment terminals in stores now work with them. Other systems that don't get refreshed very often - for example, inside buses - have been slowly catching up. Here in the Scottish city of Dundee, last year most buses finally got upgraded with the ability to accept contactless payments. Many newer smartphones also have contactless capability via Apple Pay, Android Pay or Samsung Pay. Just load a payment card and use your smartphone to pay without touching anything, up to certain amounts. As a bonus, the smartphone-based approaches add additional layers of security, such as needing to use your fingerprint or face to unlock the contactless payment capability.


Remote Agile (Part 4): Anti-Patterns

Hybrid events create two classes of teammates — remote and co-located — where the co-located folks are calling the shots. Beware of distance bias — when out of sight means out of mind — and avoid creating a privileged subclass of teammates: “Distance biases have become all too common in today’s globalized world. They emerge in meetings when folks in the room fail to gather input from their remote colleagues, who may be dialing in on a conference line.” To avoid this scenario, make sure that once a single participant joins remotely, all other participants “dial in,” too, to level the playing field. Every communication feels like a (formal) meeting. ... Instead, put trust in people, uphold the prime directive, and be surprised what capable, self-organizing people can achieve once you get out of their way. Trust won’t be built by surveilling and micro-managing team members. Therefore, don’t go rogue; the prime directive rules more than ever in a remote agile setup. Trust people and do not spy on them — no matter how tempting it might be. Read more about the damaging effect of a downward-spiraling trust dynamic from Esther Derby.


COVID-19 & The Digital Imperative


In a recent interview, John Chambers, former Cisco CEO and now a venture capitalist, said the pandemic will force many “companies to use this moment to make the transition to digital. Things will get worse before they get better — that is the realistic optimist in me speaking,” said Chambers, who has predicted up to 40% of the Fortune 500 and 70% of startups will no longer be around in a decade if they don’t make the digital transition. The disruptions brought about by the pandemic can be expected to accelerate the shift to digital that has already been underway. It is not just that organizations the world over have radically altered their work environments to accommodate work from home and technologies such as video conferencing and remote networking on a massive scale. It is also that the consequences of the pandemic are likely creating digital disruption opportunities and imperatives across the economy, in industries as diverse as food and beverage, hospitality, real estate, travel, and government.


How microsegmentation architectures differ

It's important to remember that microsegmentation is not just a data center-oriented technology. "Many security incidents start on end-user workstations, because employees click on phishing links or their systems become compromised by other means," Cross says. From that initial point of infection, attackers can spread throughout an organization's network. "A microsegmentation platform should be able to enforce policies in the data center, on cloud workloads, and on end-user workstations from a single console," he explains. "It should also be able to stop attacks from spreading in any of these environments." As with many emerging technologies, vendors are approaching microsegmentation from various directions. Three traditional microsegmentation types are host-agent segmentation, hypervisor segmentation and network segmentation. ... This microsegmentation type relies on agents positioned in the endpoints. All data flows are visible and relayed to a central manager, an approach that can help reduce the pain of discovering challenging protocols or encrypted traffic.


Google wants to make it easier to analyse health data in the cloud


Dr John Halamka, president of Mayo Clinic Platform, said: "We're in a time where technology needs to work fast, securely, and most importantly in a way that furthers our dedication to our patients. Google Cloud's Healthcare API accelerates data liquidity among stakeholders and, in return, will help us better serve our patients." The issue of interoperability remains a tricky subject within healthcare. Battles over data formats and ownership stymie efforts to join up healthcare systems and make patient data available to healthcare professionals whenever and wherever they need it. In the US, inroads have been made recently through the passing of rules by the Centers for Medicare and Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC) to make it easier for healthcare organisations to exchange patient data, and for patients to access their own information. Google said its Cloud Healthcare API was designed to scale and support interoperability and patient access. It added that the COVID-19 pandemic had made the need for increased data interoperability more important than ever.


How developer teams went remote overnight

Remote work isn’t new for communications API specialist Twilio, but the pandemic has caused a massive shift. Prior to the coronavirus outbreak, CEO Jeff Lawson told TechCrunch that around 10 percent of the company worked remotely. “For a company like us to go from partially virtual to fully virtual in a short period of time, it’s not without its hiccups, but it has worked pretty well,” he said. Some of that 10 percent of remote workers included the team of Marcos Placona, manager for developer evangelism at Twilio. “My team has always worked on a distributed basis with direct reports in the US, UK, and across Europe,” Placona told InfoWorld. The various time zones involved make it “tough to work this way,” he admits, “but we have regular check-ins with the team and individuals with weekly one-to-ones.” Developer evangelists at Twilio still contribute code and have to track contributions, alongside writing documentation and filtering through reams of customer feedback. During the pandemic this team has shifted to holding daily remote stand-ups.


A Tale of 3 Breaches: Incident Response Challenges

Three recently disclosed health data security incidents - including the discovery of a large email hack that happened nearly a year ago - serve as reminders of the ongoing incident response challenges facing healthcare organizations. A 2019 email hacking incident that affected 112,000 individuals was disclosed last week by Dearborn, Michigan-based Beaumont Health. Also recently reported were a February ransomware attack on Wilmington, Del.-based substance abuse treatment provider Brandywine Counseling and Community Services that affected clinical records of an undisclosed number of patients, and a phishing scam impacting more than 27,000 patients and employees of Wisconsin-based Advocate Aurora Health. The COVID-19 crisis is likely to make it even more difficult for healthcare organizations to respond to security incidents, some observers say. "As long as COVID-19 drives IT activities in supporting remote workers and setting up patient triage tents with access to technology infrastructure, IT may have difficulty monitoring network activity for anomalous events unless a security operations center is in place to monitor around the clock, along with centralized log event management that can automate detection of and alerting on activities of concern," notes Keith Fricke.



Quote for the day:


"Many men may see the King in a Kid but it takes a true leader to nurture it" -- Bernard Kelvin Clive


Daily Tech Digest - April 21, 2020

Stay Ahead of the 5G and DevOps Race with Continuous Network Monitoring

Automobiles aside, another industry that benefits from being proactive rather than reactive is telecommunications. Not only does the telecoms world require routine checks and maintenance, it also needs to identify problems before they cause larger issues or disruptions. Networks are evolving rapidly, and this will continue as 5G deployments expand; as will the need for regularly scheduled maintenance and examinations. DevOps–a set of practices that integrates software development (Dev) and IT operations (Ops)–along with continuous delivery (CD), allows for a level of agility that enables new features and services to be deployed within weeks or days. There are four stages in establishing these services–design, deploy, test and operate–all of which demand a constant pace of work and continuous network monitoring. To maximize DevOps and CD, including the speed benefits that come with both, predictive network monitoring (PNM) is vital.


Deploying Edge Cloud Solutions Without Sacrificing Security  

First, let's think about the structure of edge cloud systems. In most implementations, edges are within organizations' computing boundaries, and so they will be protected by a wide variety of tools that focus on perimeter scanning and intrusion detection. However, that's not quite the whole story: in most systems, there will also be a tunnel from the edge straight to cloud storage. Sending data from the edge to the cloud in a secure way is fairly straightforward, because organizations control the infrastructure that is used to encrypt and verify it. The problem arises when the cloud needs to send data back to the edge for processing. The challenge here is to ensure that this data is authenticated and verified, and is therefore safe to enter an organization's internal systems. First, and most obviously, edge cloud systems fragment data. Having each device connected directly to cloud services might incur a performance loss, but at least that data is centralized and can be covered by a single cloud security policy. Because edge cloud servers – almost by definition – need to be connected to many different devices, they represent a nightmare when it comes to securing all of those connections.


DDoS in the Time of COVID-19: Attacks and Raids


Unfortunately, or fortunately, cyber security is an essential business. As a result, those working in the field are not getting to experience any downtime during a quarantine. Many of us have been working around the clock, fighting off waves of attacks and helping other essential businesses adjust to a remote work force as the global environments change. Along the way we have learned a few things about how a modern society deals with a pandemic. Obviously, a global Shelter-in-Place resulted in an unanticipated surge in traffic. As lockdowns began in China and worked their way west, we began to see massive spikes in streaming and gaming services. These unanticipated surges in traffic required digital content providers to throttle or downgrade streaming services across Europe, to prevent networks from overloading.  The COVID-19 pandemic also highlights the importance of service availability during a global crisis. Due to the forced digitalization of the work force and a global Shelter-in-Place, the world became heavily dependent on a number of digital services during isolation. Degradation or an outage impacting these services during the pandemic could quickly spark speculation and/or panic.



Governing by data: Limits and opportunities

Healthcare is perhaps the most obvious area of public service for the adoption of data analysis, given that medical science is largely built on this. The UK government has been led by data and science in reacting to the coronavirus epidemic over recent weeks, making a celebrity out of the UK’s chief medical officer Chris Whitty. But politics can trump data analysis. David Nutt, professor of neuropsychopharmacology at Imperial College London, was sacked as the government’s chief advisor on drugs in 2009 after saying policy in this area was not based on evidence. Nutt’s research found that legal alcohol was more harmful to society than illegal drugs, although heroin was rated as having the greatest damage on individuals. “The logical conclusion is, if government drugs policy is about harms, alcohol should be the primary focus,” Nutt writes in his new book Drink? The new science of alcohol and your health. “But for political reasons, this evidence has been ignored.”


IT directors plan to protect cloud budgets and consolidate vendors during downturn


According to the survey, agile delivery and cloud cost optimization are the most important priorities for tech leaders at the moment. IT managers will be using these tools to respond more quickly to customer demands and increase fiscal discipline. Agile and DevOps practices will drive faster software releases with lower failure rates and quicker recovery from incidents. IT leaders need to pay attention to internal customers as well. The report recommends that teams should move from reactive infrastructure management to proactive support of digital transformation efforts by working closely with business owners, developers, product managers, and tech partners. The financial crunch due to the coronavirus will motivate financial teams to track down redundant, unused, and underused cloud services and turn them off. IT managers also reported that they will analyze workloads and identify the right pricing models—on-demand, spot, or reserved—to maximize savings. The survey also found that the gap between public cloud platform providers is closing, with Google Cloud, Amazon Web Services, and Microsoft Azure each getting an equal share of votes as a preferred cloud provider. Tech leaders are looking for providers that can deliver on business needs.


The Bootstrap 4 Grid Deconstructed

While upgrading my skillset and implementing an Angular-based website, I again looked at the Bootstrap grid system and decided to deep-dive into it and see what makes it work. I'll be using my original article as a kind of template for the structure of this article and will sometimes reference it for things explained there. I will also assume a basic knowledge of HTML and CSS: that you know what a <div>, <span>, etc. are, that you know about CSS inheritance rules, and so on. I also assume you have read the article about the Bootstrap 3 grid system, so you are familiar with responsive breakpoints and the like. ... The Grid: It's Still All About Rows and Columns. Nothing has changed here: we still need to define a container with rows, which in turn contain columns. However, where the Bootstrap 3 grid required you to always specify the width of your columns and make them add up to a total of 12, this is no longer true for the Bootstrap 4 grid. The Bootstrap 4 grid defines a simple col class that lets you spread your columns evenly across the width of your page without specifying widths.
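A short sketch of the difference (the class names are Bootstrap's own; the content is illustrative):

```html
<!-- Bootstrap 4: three equal-width columns, no explicit widths needed -->
<div class="container">
  <div class="row">
    <div class="col">First</div>
    <div class="col">Second</div>
    <div class="col">Third</div>
  </div>
  <!-- Explicit 12-unit widths still work when you want them -->
  <div class="row">
    <div class="col-8">Main</div>
    <div class="col-4">Sidebar</div>
  </div>
</div>
```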


USB-C power for laptops is still complicated - and here's why

The problem is that while USB-C can support any and all of those, what actually works depends on the capabilities of the port and of the cable itself (more specifically, the control chips at either end of the cable). Some laptops have one USB-C port that supports the PD (Power Delivery) standard and one that doesn't, because that way you can use a cheaper controller chip and only have to route the power down one path on the motherboard. Different protocols have different licensing requirements, so not every cable supports Thunderbolt. And you need specific controller chips in the cable to support PD. That's why the UNO interchangeable cable we looked at recently didn't support PD, making it an almost, but not quite, universal cable. The £46/$55 Infinity Cable (also from Chargeasap) has some nice tweaks: a cord wrap; a smaller, less bright LED on the cable so you know when power is flowing but don't get dazzled by your phone cable at night; and the 15-year warranty that presumably inspired the name. But the big change is that it supports PD up to 100W. The Infinity cable has USB-C on one end, with an optional ($5) USB-A adapter for when you need to use an older port; the other end is a magnet with interchangeable connectors for USB-C, Micro-USB and Lightning. The magnets are strong -- get the tip close to the cable and it snaps on securely, but if you yank on the cable the tip will come off before you pull your device off the table.


The Internet Only Works During A Pandemic Because We Killed Net Neutrality

In fact, networks in China and Italy, like here in the States, have (with a few exceptions) held up reasonably well under the massive load of telecommuting and home learning. Not because of net neutrality policy, but because network engineers are generally good at their jobs. While there have been some network problems, they're usually of the "last mile" variety in both the EU and US. As in, your ISP never upgraded that "last mile" to your house, so you're still stuck on a DSL line from around 2007 that struggles to handle Zoom teleconferencing. But most core networks around the world have held up rather admirably. The claim that the EU was suffering some kind of exceptional congestion problem appears to have originated with some EU regulators who simply urged Netflix to reduce bandwidth consumption by 25% to pre-emptively lighten the load. No supporting public evidence of actual harm was provided; the move was precautionary.


How to overcome application modernisation barriers


“We’re talking about IT estates that have grown up over the past 30 to 40 years, and you find that many of these organisations have not invested in technology over time,” he says, adding that a lack of integration between these applications is a major barrier to building agile, modern application portfolios. Like Mendix’s Ford, Fairclough recommends that modernisation projects be divided into “prioritised chunks”, which he says enables IT teams to tackle the most important things first. “Maybe there are some things that you don't even need to tackle, so actually you segment and decide that we can run those IT systems over there for another few years and then just retire them,” he says. Describing a challenging modernisation project he worked on, Fairclough says the amount of work required to complete it had been “totally underestimated”. The project involved an IT estate of more than 500 applications, so large that the customer did not understand how everything was connected. As a consequence, project costs were pushed up “exponentially”.


Failover Conf Q&A on Building Reliable Systems: People, Process, and Practice

The biggest challenge associated with the topic of reliability is knowing where to invest your time and energies. We’re never ‘done’ making a system reliable, so how do we know what components are most critical? Where will we get the highest ROI? Furthermore, how do we decide that a system is reliable enough? To answer that last question, set recovery time and recovery point objectives (RTOs and RPOs) and let yourself be guided by them. Based on those metrics, decide where you should be investing your time. To decide where to start improving the overall reliability of your system, you need to know how all of the components interact, and identify the most critical components and bottlenecks. You can spend all of your time making a database reliable, but that won’t matter if it sits behind a heavily used but unreliable caching layer. Dependency graphs are great for visualising how the components of your service fit together and will allow you to identify the places where you will reap the biggest reliability rewards. The challenge here is that dependency graphs get stale ridiculously quickly unless they are automated.
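That last point can be made concrete with a toy sketch. Everything below (the service names, the counting heuristic) is invented for illustration; the idea is simply that if the graph is regenerated automatically, you can also compute criticality from it automatically — the components that the most services transitively depend on are where reliability work pays off most:

```typescript
// Toy dependency graph: service -> the services it depends on.
// Names are invented for illustration.
const deps: Record<string, string[]> = {
  web: ["api"],
  api: ["cache", "db"],
  cache: ["db"],
  db: [],
};

// Count how many services transitively depend on each component.
// A solid database behind a heavily used but unreliable cache still
// takes everything above the cache down, so the cache's count matters.
function dependants(graph: Record<string, string[]>): Map<string, number> {
  const counts = new Map<string, number>();
  for (const svc of Object.keys(graph)) counts.set(svc, 0);
  for (const svc of Object.keys(graph)) {
    const seen = new Set<string>();
    const stack = [...graph[svc]];
    while (stack.length > 0) {
      const dep = stack.pop()!;
      if (seen.has(dep)) continue;
      seen.add(dep);
      counts.set(dep, (counts.get(dep) ?? 0) + 1);
      stack.push(...(graph[dep] ?? []));
    }
  }
  return counts;
}

// db is a transitive dependency of web, api and cache, so its count is 3.
console.log(dependants(deps));
```

On a real estate the `deps` map would come from service discovery or tracing data rather than being written by hand — that is what keeps the graph from going stale.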



Quote for the day:


"When you can't make them see the light, make them feel the heat." - Ronald Reagan


Daily Tech Digest - April 20, 2020

The SingularityNET Foundation continues to provide and maintain tools, such as a command-line interface (CLI), to help AI developers create and publish services on the platform directly, irrespective of whether these services appear on the Marketplace. This is key to the decentralized methodology, vision and ethos that have guided SingularityNET since its founding. However, AI services that appear on the platform via routes other than the Publisher Portal will not be listed on the Marketplace UI and cannot make use of the Marketplace's tools for easy deployment, monitoring, maintenance, fiat/crypto conversion and so forth. The AI Publisher Portal enables developers to register themselves and submit their services for curation, seamlessly validates developer identities, and provides a guided and intuitive way to create and manage services on the Marketplace. Only services curated and published via the Publisher Portal, and in this way approved by the Foundation, will appear on the Marketplace.


COVID-19 Has United Cybersecurity Experts, But Will That Unity Survive the Pandemic?


“A nurse or doctor can’t do what we do, and we can’t do what they do,” Espinosa said. “We’ve seen a massive rise in threats and attacks against healthcare systems, but it’s worse if someone dies due to a malicious cyberattack when we have the ability to prevent that. A lot of people are involved because they’re emotionally attached to the idea of helping this critical infrastructure stay safe and online.” Using threat intelligence feeds donated by dozens of cybersecurity companies, the CTC is poring over more than 100 million pieces of data about potential threats each day, running those indicators through security products from roughly 70 different vendors. If at least 10 of those flag a specific data point — such as a domain name — as malicious or bad, it gets added to the CTC’s blocklist, which is designed to be used by organizations worldwide for blocking malicious traffic. “For possible threats, meaning between five and nine vendors detect an indicator as bad, our volunteers manually verify that the indicator is malicious before including it in our blocklist,” Espinosa said. ... Mark Rogers, one of several people helping to manage the CTI League’s efforts, told Reuters the top priority of the group is working to combat hacks against medical facilities and other frontline responders to the pandemic, as well as helping defend communication networks and services that have become essential as more people work from home.
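The triage rule described above amounts to a very small piece of logic. The thresholds (10+ vendors for automatic blocking, 5–9 for manual review) come from the article; the function and type names are my own:

```typescript
type Verdict = "block" | "manual-review" | "ignore";

// Triage rule as described: an indicator flagged as malicious by 10 or
// more vendors goes straight to the blocklist; 5-9 detections send it
// to a volunteer for manual verification; fewer are ignored.
function triage(detections: number): Verdict {
  if (detections >= 10) return "block";
  if (detections >= 5) return "manual-review";
  return "ignore";
}

// Tally per-indicator detections from the ~70 vendors' verdicts.
function tally(
  verdicts: Array<{ indicator: string; malicious: boolean }>,
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const v of verdicts) {
    if (v.malicious) {
      counts.set(v.indicator, (counts.get(v.indicator) ?? 0) + 1);
    }
  }
  return counts;
}

console.log(triage(12)); // "block": straight to the blocklist
console.log(triage(7));  // "manual-review": a volunteer verifies it first
```

The interesting property of this scheme is that no single vendor's false positive can poison the blocklist; it takes broad agreement or a human in the loop.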


Machine Learning Playing An Important Role In Data Management


With advances in machine learning, cloud computing and storage, enterprises are finally breaking the data-management logjam. At stake are breakout gains in business efficiency, revenue realization, product innovation and competitive differentiation. The outcomes could be transformational. For CIOs and CISOs stressed over security, compliance and meeting SLAs, it's essential to understand that with ever-expanding volumes and varieties of data, it's not humanly possible for an administrator, or even a team of administrators and data scientists, to tackle these challenges. Luckily, machine learning can help. A variety of machine learning and deep learning strategies may be used to achieve this. Broadly, machine/deep learning methods may be classified as unsupervised learning, supervised learning, or reinforcement learning. The choice of strategy is driven by the problem being solved. For instance, supervised learning mechanisms such as random forests may be used to establish a baseline, or what constitutes "typical" behavior for a system, by observing relevant attributes, then use that baseline to identify anomalies that deviate from the norm.
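The baseline-then-anomaly idea can be shown in miniature. The article suggests learned models such as random forests; the z-score below is only a stand-in to keep the shape of the approach visible, and the sample numbers are invented:

```typescript
// Establish a baseline ("typical" behaviour) from historical
// observations of one attribute, then flag new values that stray
// too far from it.
function baseline(history: number[]): { mean: number; std: number } {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  return { mean, std: Math.sqrt(variance) };
}

function isAnomaly(
  value: number,
  base: { mean: number; std: number },
  threshold = 3, // "three sigma" rule of thumb; tune per attribute
): boolean {
  if (base.std === 0) return value !== base.mean;
  return Math.abs(value - base.mean) / base.std > threshold;
}

// e.g. minutes taken by a nightly backup job over the past week
const base = baseline([100, 102, 98, 101, 99]);
console.log(isAnomaly(100, base)); // -> false (within the baseline)
console.log(isAnomaly(160, base)); // -> true (far outside it)
```

A real deployment would replace the z-score with a model trained on many attributes at once, but the workflow — learn what normal looks like, alert on deviation — is the same.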


How can businesses ensure ROI from 5G services?

The unprecedented speed and capacity of 5G will dramatically increase the productivity of a typical business, paying dividends in terms of increased efficiency and therefore tangible ROI. In the short term, 5G will deliver agile and fast fixed wireless connections that enable organisations to "cut the cord" while extending the reach and reliability of their WAN. While businesses today operate networks as many individual domains (branch, mobile and IoT), an advanced orchestration and automation system can make the entire network operate as a single unified network fabric. Looking further ahead, the power of edge computing will provide the processing power that will move artificial intelligence-powered solutions from niche to mainstream. From a cost-benefit perspective, AI automates and simplifies data analysis of any type, which can clearly offload work from human staff and increase productivity. While AI solutions are currently housed mainly in data centres, 5G will enable rapidly accelerated data processing at the network edge, providing the real-time and ubiquitous connectivity that AI requires to function.


Data-Driven Decision Making – Optimizing the Product Delivery Organization

With the Indicators Framework defined, it was clear to us that its introduction to the organization of 16 development teams could only be effective if sufficient support could be provided to the teams. We introduced Hypotheses first. Six months later we introduced SRE, and six months after that we introduced Continuous Delivery Indicators to the organization. We chose a staged approach to introducing these changes in order to have the organization focus on one change at a time. In terms of preparation, Hypotheses were the easiest; they took an extension of our Business Feature Template and a workshop with each team. To prepare for the SRE introduction, we implemented basic infrastructure for two fundamental SLIs: Availability and Latency. The infrastructure can generate SLI and Error Budget dashboards for each service of each team. Most importantly, it can alert on Error Budget consumption in all deployment environments.
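The error-budget arithmetic behind that kind of alerting is simple enough to sketch. This assumes a request-based availability SLI; the SLO target and the 80% alerting threshold are invented for the example:

```typescript
// With a 99.9% availability SLO, the error budget is the 0.1% of
// requests that are allowed to fail in the SLO window.
function errorBudgetConsumed(
  total: number,  // requests in the window
  failed: number, // failed requests in the window
  slo: number,    // e.g. 0.999
): number {
  const budget = total * (1 - slo); // failures the SLO permits
  return failed / budget;           // 1.0 = budget exhausted
}

// Alert when a chosen fraction of the budget is gone, so the team can
// react before the SLO itself is violated.
function shouldAlert(consumed: number, alertAt = 0.8): boolean {
  return consumed >= alertAt;
}

const consumed = errorBudgetConsumed(1_000_000, 850, 0.999);
console.log(consumed);              // ~0.85 of the budget used
console.log(shouldAlert(consumed)); // true
```

Production alerting policies usually also look at how fast the budget is burning, not just how much is left, but the budget itself is computed exactly like this.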


Is a free VPN a good idea for your IoT devices?

While some of the free VPNs available are secure, others aren't. Some free VPNs have been reported to sell users' data to third parties, thereby undermining your privacy. There are also a few cases where VPNs have been used to facilitate malware attacks by housing the malware elements. Some may also try to access apps that they should not, such as Maps. For these reasons, it is recommended to use free VPNs only from tried-and-tested, reliable providers. VPN providers throw different features into their free products. Generally, most include the basic functionality, i.e. privacy and encryption, with the more advanced features reserved for the premium plans. Truth be told, you can hardly find a free VPN that has all the features you need; you might be forced to forgo some. It goes without saying, then, that the best free VPN is the one that offers the most of the features you need. The Commonwealth Scientific and Industrial Research Organisation conducted a study of over 280 Android VPN apps. The study revealed that 67% of the apps had trackers embedded in their code.


The Way Forward: Digital Resiliency Wins

McKinsey & Company advised CIOs to keep their focus on stabilizing emergency measures: strengthening remote-working capabilities, improving cybersecurity, adjusting ways of working with agile teams, and preparing for a breakdown of parts of the vendor ecosystem (supply chain). In the interim, CIOs need to address immediate IT cost pressures and creatively redeploy the IT workforce, while also pivoting to new areas of focus for the future. According to McKinsey & Company, many organizations are successfully engaging digitally with customers; it cited a government in Western Europe that embarked on an "express digitization" of quarantine-compensation claims to deal with a more than 100-fold increase in volume. "Sometimes this effort is about taking loads from call centers, but more often it addresses real new business opportunities. To engage with consumers, for example, retailers in China increasingly gave products at-home themes in WeChat," McKinsey & Company wrote.


Windows 10 turns five: Don't get too comfortable, the rules will change again

Despite the occasional twists and turns that Windows 10 has taken in the past five years, it has accomplished its two overarching goals. First, it erased the memory of Windows 8 and its confusing interface. For the overwhelming majority of Microsoft's customers who decided to skip Windows 8 and stick with Windows 7, the transition was reasonably smooth. Even the naming decision, to skip Windows 9 and go straight to 10, was, in hindsight, pretty smart. Second, it offered an upgrade path to customers who were still deploying Windows 7 in businesses. That alternative became extremely important when we zoomed past the official end-of-support date for Windows 7 in January 2020. In mid-2019, when I checked usage data from the U.S. Government's Data Analytics Program, the migration to Windows 10 appeared to be stalled. As of July 31, 2019, Windows 7 still accounted for 26% of all visits to U.S. government websites from Windows PCs. Nine months later, that number has been cut in half. For the six weeks ending April 15, that same metric shows the number of visits from Windows 7 PCs is down to 12.7% and continuing to slide.


What is TypeScript? Strongly typed JavaScript

TypeScript is a superset of JavaScript. While any correct JavaScript code is also correct TypeScript code, TypeScript also has language features that aren’t part of JavaScript. The most prominent feature unique to TypeScript—the one that gave TypeScript its name—is, as noted, strong typing: a TypeScript variable is associated with a type, like a string, number, or boolean, that tells the compiler what kind of data it can hold. In addition, TypeScript does support type inference, and includes a catch-all any type, which means that variables don’t have to have their types assigned explicitly by the programmer; more on that in a moment. TypeScript is also designed for object-oriented programming—JavaScript, not so much. Concepts like inheritance and access control that are not intuitive in JavaScript are simple to implement in TypeScript. In addition, TypeScript allows you to implement interfaces, a largely meaningless concept in the JavaScript world. That said, there’s no functionality you can code in TypeScript that you can’t also code in JavaScript. That’s because TypeScript isn’t compiled in a conventional sense—the way, for instance, C++ is compiled into a binary executable that can run on specified hardware.
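A few lines are enough to show the three features just mentioned (explicit annotation, inference, and an interface); the names are invented for the example:

```typescript
// Explicit annotation: the compiler rejects assigning any other type.
let greeting: string = "hello";
// greeting = 42;  // <- would be a compile-time error in TypeScript

// Inference: no annotation needed, 'count' is still typed as number.
let count = 1;

// An interface describes a shape; plain JavaScript has no equivalent.
interface User {
  name: string;
  admin: boolean;
}

// The compiler checks that every argument actually has the User shape.
function describe(u: User): string {
  return u.admin ? `${u.name} (admin)` : u.name;
}

console.log(describe({ name: "Ada", admin: true })); // "Ada (admin)"
console.log(greeting, count);
```

Strip out the annotations and the interface and what remains is ordinary JavaScript — which is exactly what the TypeScript compiler emits.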


Fostering Smart Cities based on an Enterprise Architecture Approach

Figure 1: Proposed EAF for the +CityxChange project.
The research aims to develop an overall ICT architecture and service-based ecosystem to ensure that service providers in the +CityxChange project can develop, deploy and test their services through integrated and interconnected approaches. For the purposes of this research, a city can be seen as a big enterprise with different departments. With its ability to model the complexities of the real world in a practical way and to help users plan, design, document, and communicate IT and business-oriented issues, the Enterprise Architecture (EA) method has become a popular approach to business and IT system management. The decision support that it offers makes EA an ideal approach for sustainable smart cities, and it is being increasingly used in smart city projects. This approach allows functional components to be shared and reused, and infrastructure and technologies to be standardised. EA can enhance the quality and performance of city processes and improve productivity across a city by integrating and unifying data linkages.



Quote for the day:


"Don't believe what your eyes are telling you. All they show is limitation. Look with your understanding." -- Richard B


Daily Tech Digest - April 19, 2020

Robotic Process Automation: The Ultimate Way Forward for Smart Data Centers

As we enter this new shift in how companies work, every bit of data must be treated and properly used to maximize its value. This would not be possible without cost-effective storage and increasingly powerful hardware, digital transformation, and the associated new business models. For quite a while, experts have anticipated that the automation developments introduced in manufacturing plants worldwide would later be extended to data centers. In reality, with the use of Robotic Process Automation (RPA) and machine learning in the data center setting, we are fast approaching this possibility. Human error is by a wide margin the leading cause of network failure, followed by software defects and breakdowns. With little visibility into how the equipment is operating, remedial steps can only be taken after the downtime has already occurred. The cost impact is then much higher, as attention is diverted from other issues to deal with the cause of the problem, on top of the impact of the actual network downtime. A more efficient data center requires reliability, cost, and management to be balanced, and that can be supported by automation.


How The Remote Workforce Impacts GDPR & CCPA Compliance

So to achieve GDPR and CCPA compliance, organizations must ensure not only that explicit policies and procedures are in place for handling personal information, but also that they can prove those policies and procedures are being followed and operationally enforced. The new normal of remote workforces is a critical challenge that must be addressed. What has always been needed is immediate visibility into unstructured, distributed data across the enterprise, including data on laptops and other unstructured data maintained by remote workforces: the ability to search and report across several thousand endpoints and other unstructured data sources, and to return results within minutes instead of days or weeks. The need for such an operational capability, provided by best-practices technology, is further heightened by the urgency of CCPA and GDPR compliance. Solving this collection challenge is X1 Distributed Discovery, which is specially designed to address the challenges presented by remote and distributed workforces.


Thinking about Microservices

Monolithic Architecture
As the name implies, this architecture is based on services, and it goes a step beyond SOA. Services are typically separated by business capability or sub-domain. Once modules/components are defined, they can be implemented by different teams, and those teams can share a technology stack or each choose their own. In this way, individual components can be scaled up when needed and quickly scaled down once the need is over. ... Now we have talked about the benefits of microservices, but that does not mean every application should be architected as microservices. Before adopting microservice architecture, ask yourself: “Do you really need a microservices-based application?” Judge your decision by asking a simple set of questions before moving ahead with microservices. ... Now you have a good overview of microservice architecture but, having said that, a practical implementation still differs in many ways from a traditional monolithic architecture.


3 Keys to Efficient Enterprise Microservices Governance

An enterprise normally has a few thousand microservices, with each team having the autonomy to select its own technology stack. It is therefore inevitable that an enterprise needs a microservices governance mechanism to avoid building an unmanageable and unstable architecture. Any centralized governance goes against the core principle of microservices architectures, i.e. “provide autonomy and agility to the team.” But that doesn’t mean we should not have centralized policies, standards, and best practices that each team should follow. With enterprise-scale integrations across multiple systems and complex operations, the question is, “How do we effectively provide decentralized governance?” We need a paradigm shift in our thinking while implementing a microservices governance strategy. The governance strategy should align with core microservices principles – independent and self-contained services, single responsibility, and cross-functional teams aligned with the business – as well as with policies and best practices.


Artificial intelligence is evolving all by itself


Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI. “While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.” Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.


HowTo Secure Distributed Cloud Systems in Enterprise Environments


Rapidly increasing workloads call for improved IT infrastructure scaling in businesses. Cloud resources are designed to be scalable by changing several lines of code and increasing spending. This ease of scaling, however, can lull organizations into scaling too much without considering the side effects. Scaling cloud resources would require an equal expansion in security systems. If an enterprise’s security measures cannot keep up with the rate at which its cloud environment is growing, it’s only going to increase the attack surface for costly breaches. To avoid this problem, enterprises should consider the scalability of their security systems first before expanding cloud environments. Security applications should also be integrated into the environment, not as a separate or external resource, to maintain business continuity. Automation is a must for good distributed cloud management. Again, an increased number of applications and dependencies make it almost impossible for it to be done efficiently by hand. The time saved from automation can then be funneled towards higher-level, strategic work.


3 Steps for Deploying Robotic Process Automation

The first step to adopting RPA is discerning which processes in your organization can, and should, be automated. Look at which tasks require critical thinking, emotional intelligence and add the most value to your customer. Then, automate tasks that are manual, repetitive and prone to error. For example, you could automate processes like collecting data, monitoring and prioritizing emails and filling out forms, which are tedious tasks that would otherwise take hours of your employees’ time. We thought critically about how to use RPA to better support our people -- allowing them to dedicate more time advising customers, while bots pulled the information needed to assist in that counsel. ... To note, deploying RPA is not a one-and-done initiative. Adopting RPA is a dynamic process that you need to continually update to support your company’s unique and growing business needs. We deployed a timeboxed approach over the course of 20 weeks. Rather than attempt to deploy as many bots as possible, we first established a sound foundation for RPA within our operations from which we could scale in automated measures.


OpenTelemetry Steps up to Manage the Mayhem of Microservices


The goal with OpenTelemetry is not to provide a platform for observability, but rather to provide a standard substrate to collect and convey operational data so it can be used in monitoring and observability platforms, whether open source or commercial. Historically, when an enterprise purchased a package for systems monitoring, all the agents attached to its resources would be specific to that provider’s implementation. If a customer wanted to switch, the applications and infrastructure would have to be entirely re-instrumented, Sigelman explained. By using OpenTelemetry, users can instrument their systems once, pick the best visualization and analysis products for their workloads, and not worry about lock-in. In addition to Honeycomb and Lightstep, some of the largest vendors in the monitoring field, as well as some of the largest end users, are participating, including Google, Microsoft, Splunk, Postmates, and Uber. The new collector is crucial, explained Honeycomb’s Fong-Jones, in that it narrows the minimum scope of what vendors must support in order to ingest telemetry.
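The "instrument once, swap the backend" idea can be sketched with a toy exporter interface. To be clear, this is not the real OpenTelemetry API — the interface, classes, and span shape below are invented for the sketch — but the decoupling it illustrates is the point of the project:

```typescript
// A span as a plain record, and an exporter interface that any
// backend can implement. (Invented for illustration; the real
// OpenTelemetry API is much richer.)
interface SpanExporter {
  export(span: { name: string; durationMs: number }): string;
}

class VendorAExporter implements SpanExporter {
  export(span: { name: string; durationMs: number }): string {
    return `A:${span.name}:${span.durationMs}`; // vendor A's wire format
  }
}

class VendorBExporter implements SpanExporter {
  export(span: { name: string; durationMs: number }): string {
    return JSON.stringify(span); // vendor B wants JSON
  }
}

// Application code is instrumented against the interface only, so
// changing vendors means swapping the exporter, not re-instrumenting
// every service.
function handleRequest(exporter: SpanExporter): string {
  const span = { name: "GET /orders", durationMs: 12 };
  return exporter.export(span);
}

console.log(handleRequest(new VendorAExporter())); // "A:GET /orders:12"
console.log(handleRequest(new VendorBExporter()));
```

In real OpenTelemetry the collector plays a similar role at the infrastructure level: it ingests one standard telemetry format and fans it out to whichever backends are configured.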


Steps to Implementing Voice Authentication and Securing Biometric Data


Fraud prevention is a key driver for implementation, and companies are looking both internally and externally. Insider threats can be reduced as staff access privileges are tightened up alongside the introduction of voice biometrics. What are the steps to implementing a voice verification system, and how should the voiceprint data be secured while ensuring compliance? Before implementing, the current system of authentication needs to be analyzed and compared to the desired process. Companies need to answer a number of questions. What is the current authentication process? For example: passwords, PINs, set questions. How will this process change with voice biometrics? Will voice biometrics replace or extend current authentication steps? This depends on the geography. EU regulations such as PSD2 require strong authentication, such as a biometric factor plus something in your possession, such as an app. It also depends on motivation. Some banks want voice biometrics to help with compliance; some want it to slash verification time – for example, if a bank currently asks five questions, it can safely cut that down to only one.


Working With Data in Microservices

A computer program is a set of instructions for manipulating data. Data is stored (and transferred) in a machine-readable, structured way that is easy for programs to process. Every year, programming languages, frameworks, and technologies emerge to optimize data processing in computer programs. Without proper support from languages or frameworks, developers can't write programs that process data easily and extract meaningful information from it. Languages such as Python and R have evolved to specialize in data processing jobs, while Matlab and Octave specialize in numerical computing. However, for microservice development, where programs are distributed across the network, traditional languages have yet to specialize for the unique needs involved. Ballerina is a new open-source programming language which provides a unique developer experience for working with data in network-distributed programs.



Quote for the day:


"Leadership is getting someone to do what they don't want to do, to achieve what they want to achieve." -- Tom Landry