Daily Tech Digest - April 24, 2020

Data: The Fabric of Developers’ Lives

Storage-as-a-Service—we hardly knew about it. Thanks in large part to containers, which offer exceptional scalability, simplicity and high availability, the speed of application development has increased dramatically. Developers need to be able to quickly provision their own data, in just the right amounts, to match that velocity. And, like containers, that data needs to be portable. Provisioning quickly means no more going through storage administrators to get the services they need, which can be a cumbersome and time-consuming process. Solutions like Kubernetes’ on-demand clusters enable developers to procure the data they need when they need it. The abstraction layer provided by a data fabric can empower developers even further. They can write their own APIs, provision data services as needed and move that data between clouds with ease. This is particularly important when dealing with cloud providers that offer different services. Sometimes a developer may need a service that exists in one cloud but not another. It’s critical to have an underlying storage infrastructure that enables applications and their data to be transferred as needs require.
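The self-service provisioning described above is typically expressed as a declarative claim rather than a ticket to a storage administrator. As a minimal sketch (the claim name, size and storage class are hypothetical placeholders, not taken from the article), a Kubernetes PersistentVolumeClaim manifest built in code:

```python
# Minimal sketch: a Kubernetes PersistentVolumeClaim manifest built in code.
# The claim name, size and storage class are hypothetical placeholders.
import json

def pvc_manifest(name: str, size_gi: int, storage_class: str) -> dict:
    """Declare 'just the right amount' of storage, no storage admin required."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

manifest = pvc_manifest("feature-branch-db", 10, "fast-ssd")
print(json.dumps(manifest, indent=2))
```

A developer would submit such a manifest with `kubectl apply` or the Kubernetes API; the point is that provisioning becomes declarative and self-service, at the velocity the paragraph describes.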


Remember when open source was fun?

When Daniel Stenberg set out to make currency exchange rates available to IRC users, he wasn’t trying to “do open source.” It was 1996 and the term “open source” hadn’t even been coined yet (that came in February 1998). No, he just wanted to build a little utility (“how hard can it be?”), so he started from an existing tool (httpget), made some adjustments, and released what would eventually become known as cURL, a way to transfer data using a variety of protocols. It wasn’t Stenberg’s full-time job, or even his part-time job. “It was completely a side thing,” he says in an interview. “I did it for fun.” Stenberg’s side project has lasted for over 20 years, attracted hundreds of contributors, and has a billion users. Yes, billion with a B. Some of those users contact him with urgent requests to fix this or that bug. Their bosses are angry and they need help RIGHT NOW. “They are getting paid to use my stuff that I do at home without getting paid,” Stenberg notes. Is he annoyed? No. “I do it because it’s fun, right? So I’ve always enjoyed it. And that’s why I still do it.”


Dark data to pump 5.8 million tonnes of CO2 into the atmosphere this year, says Veritas

New research by data protection and management software supplier Veritas has found that 5.8 million tonnes of carbon dioxide will be pumped into the atmosphere this year as a result of using storage systems to house and process dark data. Veritas derived the figure by mapping industry data on power consumption from data storage and on emissions from datacentres against its own research. On average, 52% of all data stored by organisations worldwide is likely to be dark data, according to Veritas. With the amount of data growing from 33 zettabytes in 2018 to 175 zettabytes by 2025, there will be 91 zettabytes of dark data in five years’ time – over four times the volume of dark data today. Ravi Rajendran, vice-president and managing director for the Asia South region at Veritas Technologies, said that although companies are trying to reduce their carbon footprint, dark data is often neglected. And with dark data producing more carbon dioxide than 80 countries do individually, Rajendran called for organisations to start taking it seriously.


How different generations approach remote work

Maybe it's more millennials that are really pushing the work from home, but you would think it would be more of your generation. I say that I'm Gen X. Veronica and I both are, of course. But, you would think that it'd be the younger ones that would be all for working from home, to have that freedom. ... When I'm in an office, as you both know, I tend to be a bit of a chatterbox, so it's good for me to have that alone time to really lock things down. But it's different for people. But, Veronica, you and I would be able to speak on this for Gen X, at least, in the research that I saw, NRG found that most Gen X-ers enjoyed working from home because they were really comfortable, and they liked that independence. And they also liked being around their families, and having that quality time, and felt a little more relaxed. Would you say that's accurate? ... You can get up and take a break whenever, and reset your brain to shift tasks, or to find inspiration if you're stuck on something. I think if you can close the door or close your family off, it's OK. My kids are older now, but if they were little, it would be so hard to work from home now. I have an 11-year-old and a 15-year-old, so they can make their own lunch, and walk the dog, and be self-sufficient while I'm down here.



Netgear sees surge in home WiFi upgrades amid pandemic disruption

Netgear is ahead of the game with its WiFi 6 router portfolio, and it is paying off as the company sees a surge in home network upgrades. The catch for Netgear is that its supply chain, sales channels and markets have all been upended by the COVID-19 pandemic. CEO Patrick Lo outlined the moving parts of Netgear's first quarter: "We saw two distinct phenomena during the COVID-19 pandemic. Whenever a shelter-in-place lockdown was declared, business activities fell and demand for our SMB products dropped significantly. At the same time, consumers are quickly finding out that high-performance WiFi at home is a necessity and are rushing to upgrade their home WiFi, driving upticks in our consumer WiFi and mobile hotspot sales. We also saw significant channel shift from physical retail channel purchases to online purchases, which put strain on the logistics of some of our online sales partners." On an earnings conference call, it became clear that Netgear had a lot to navigate as it pulled its guidance due to COVID-19. The company reported a first-quarter net loss of $4.17 million on revenue of $229.96 million, down from $249 million a year ago. On a non-GAAP basis, Netgear's earnings of 21 cents a share were a nickel better than estimates.


Researchers say deep learning will power 5G and 6G ‘cognitive radios’


For decades, amateur two-way radio operators have communicated across entire continents by choosing the right radio frequency at the right time of day, a luxury made possible by having relatively few users and devices sharing the airwaves. But as cellular radios multiply in both phones and Internet of Things devices, finding interference-free frequencies is becoming more difficult, so researchers are planning to use deep learning to create cognitive radios that instantly adjust their radio frequencies to achieve optimal performance. As explained by researchers with Northeastern University’s Institute for the Wireless Internet of Things, the increasing varieties and densities of cellular IoT devices are creating new challenges for wireless network optimization; a given swath of radio frequencies may be shared by a hundred small radios designed to operate in the same general area, each with individual signaling characteristics and variations in adjusting to changed conditions. The sheer number of devices reduces the efficacy of fixed mathematical models when predicting what spectrum fragments may be free at a given split second.


Outsourced DevOps brings benefits, and risks, to IT shops


When IT teams outsource DevOps planning to a third-party service provider, it only exacerbates existing planning issues. Another option is to hire a contract Scrum Master or product manager with DevOps experience to work with the in-house teams. Either way, proceed with an end game of knowledge transfer to build in-house planning expertise. Depending on the organization's attitude toward contractors, the addition of an outside contractor to work on planning can bring some cultural challenges. Some organizations treat contractors as valued members of the team, while others treat them as outsiders -- which makes it challenging to have a contractor in any subject matter expert position. Planning tools, however, are ripe for outsourcing. For example, if an organization lacks the in-house expertise to implement and maintain Atlassian Jira or another planning tool, it can outsource that platform and use a managed version. While it's more common to outsource the build phase of DevOps than it is the planning phase, it still has risks.


Tech Leaders Map Out Post-Pandemic Return to Workplace

Businesses will be turning to enterprise technology to smooth out the process of getting employees back to the workplace in the wake of the coronavirus pandemic, according to a report by Forrester Research. Technology leaders say safety will be a top priority. The information-technology research firm’s report lays out an early-stage road map for IT executives preparing to reopen corporate offices—a process that will vary by industry, but for most businesses will involve multiple stages. Chief information officers and their teams will likely be in the first wave of employees returning to the job site, said Andrew Hewitt, a Forrester analyst serving infrastructure and operations professionals. He said their initial task will be to develop a strategy for keeping employee tech tools—including PCs, mobile devices, monitors, keyboards and mice—germ-free without damaging them. “IT teams will need to have a staging area that’s outside of the front door of the office where employees can bring their home technology in and sanitize it,” Mr. Hewitt said.


Five Attributes of a Great DevOps Platform

Culture plays a significant role in establishing the guidelines while embracing DevOps in any organization. Through DevOps culture, companies seek to bring dev and ops teams into harmony to promote collaboration, automation, process improvements, and continuous iterative development and deployment methodologies. But above everything else, a sound DevOps culture fundamentally solves one of IT’s biggest people problems: bridging the gap between dev and ops teams to get them to stop working in silos and have common goals. According to a Gartner estimate, DevOps efforts fail 90% of the time when infrastructure and operations teams try to drive a DevOps initiative without nurturing a cultural shift in the first place. It is not just about efficient tools or expert staff; it is about the behavioral and mindset changes necessary to effect a cultural shift. Hence, it is important for firms to weigh company culture before selecting a potential DevOps tool for their development work.


Use tokens for microservices authentication and authorization


STS enables clients to obtain the credentials they need to access multiple services that live across distributed environments. It issues digital security tokens that stay with users from the beginning of their session and continuously validate their permission for each service they call. An STS can also reissue, exchange and cancel security tokens as needed. The STS must connect with an enterprise user directory that contains all the details about user roles and responsibilities. This directory, and any connection made to it, should be properly secured as well, otherwise users could elevate their permissions just by editing policies on their own. Consider segmenting user access policies based on roles and activities. For instance, identify the individuals who have administrative capabilities. Or, you might limit a developer's access permissions to only include the services they are supposed to work on. ... Not all microservices permission and security checks are based around a human user.
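The issue/validate/expire cycle described above can be sketched with a signed token. This is a minimal illustration using an HMAC-signed token; a real STS would typically issue asymmetrically signed JWTs, and the secret, claims and role names here are invented:

```python
import base64
import hashlib
import hmac
import json
import time

# Invented signing key for illustration; a real STS would use asymmetric keys.
SECRET = b"demo-signing-key"

def issue_token(user: str, roles: list, ttl_seconds: int = 300) -> str:
    """Issue a signed token carrying the user's roles and an expiry time."""
    claims = {"sub": user, "roles": roles, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def validate_token(token: str, required_role: str) -> bool:
    """Re-check the token on every service call, as an STS-backed gateway would."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return False  # expired; the client must return to the STS for reissue
    return required_role in claims["roles"]

token = issue_token("dev-alice", ["orders-service"])
print(validate_token(token, "orders-service"))  # True: role matches
print(validate_token(token, "admin"))           # False: permissions are scoped
```

Scoping the roles claim per user mirrors the suggestion above: a developer's token grants access only to the services they are supposed to work on, and nothing in the token can be edited without invalidating the signature.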



Quote for the day:


"I'm not crazy about reality, but it's still the only place to get a decent meal." -- Groucho Marx


Daily Tech Digest - April 23, 2020

Indian IT desperately needed a new business model and coronavirus gave it one

Some IT companies have implemented "employee productivity trackers like webcam-based movement capture, hourly timesheet entry, tracking of keyboards, and so on, to ensure employees are working at home," Yugal Joshi, vice-president at Texas-based consultancy Everest Group, told Quartz. "This indicates a deep-rooted malaise in Indian IT/ITes industry where the senior management generally mistrusts people," he added. Two, unlike the retail or manufacturing sectors that cannot operate under current social distancing norms, the top-tier Indian IT companies and their mid-sized brethren are responsible for keeping the lights on for a large collection of global companies -- some of which depend on their people every second of the day. This includes banks, utility companies, retailers, and, of course, pharmaceuticals. With the ongoing coronavirus outbreak, all of these industries are now being serviced from the apartments and houses of India's IT workforce, which, as you can imagine, is a supremely difficult and exasperating task for everyone involved. Most of IT's clients have ironclad regulatory and privacy riders that have needed to be tweaked considerably in light of coronavirus.



How a basic cross-training program can ease disruptions on the IT team

If the coronavirus hasn't disrupted your business operations yet, there's a good chance it will soon. This first wave of illness will not be the last time the coronavirus disrupts daily business operations. First companies had to adjust to remote work for all employees. The next challenge may be filling in for colleagues who are out sick or caring for family members or friends who are ill. A cross-training program can make this transition go smoothly. Sam Maley, an IT operations manager at Bailey & Associates, an IT consultancy, said cross-training can minimize disruptions and reduce stress levels due to absenteeism. "Cross-training programs are designed to build versatility and skill overlaps in your team members," he said. Jeff Fleischman, CMO at the consulting firm Altimetrik, said cross-training needs to be part of business continuity plans. "To receive buy-in from top management, quantify the impact disruption has on the business such as revenue loss, reputational risk, defaulting on contractual obligations, and failing to meet regulatory requirements, and then explain how cross-training would eliminate these risks," Fleischman said.


Kubernetes vs. VMware: Drive the choice with IT architecture


The choice between running containers in VMs and running VMs in containers is an architectural design decision. This is because there's a line of thought that containers are the ideal abstraction for multi-cloud application delivery. Though VMware assures admins that containers and VMs are the same in vSphere, it's difficult to draw a similar comparison for Kubernetes and VMs. Kubernetes is an orchestration product that admins use primarily for containers. In theory, Kubernetes could manage compute resources other than containers. However, with a container as the primary abstraction layer, traditional VM management tools don't map directly. Networking can help bridge this gap, and KubeVirt could be the answer. KubeVirt uses Kubernetes network architecture and plugins, rather than hypervisor abstractions such as vSwitches, to manage networking. As a result, products must switch to network management based on Kubernetes namespaces. That's not necessarily a bad thing; it's just an overall change from a VM-centric operating model to a container-centric operating model.



Researchers Release Open Source Counterfactual Machine Learning Library

Figure 1: Three counterfactuals for a loan application scenario
Exactly what machine learning counterfactuals are, and the reasons why they are important, are best explained by example. Suppose a loan company has a trained ML model that is used to approve or decline customers' loan applications. The predictor variables (often called features in ML terminology) are things like annual income, debt, sex, savings, and so on. A customer submits a loan application. Their income is $45,000, their debt is $11,000, their age is 29 and their savings are $6,000. The application is declined. A counterfactual is a change to one or more predictor values that results in the opposite result. For example, one possible counterfactual could be stated in words as, "If your income was increased to $60,000 then your application would have been approved." In general, there will be many possible counterfactuals for a given ML model and set of inputs. Two other counterfactuals might be, "If your income was increased to $50,000 and debt was decreased to $9,000 then your application would have been approved" and, "If your income was increased to $48,000 and your age was changed to 36 then your application would have been approved." Figure 1 illustrates three such counterfactuals for a loan scenario.
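A brute-force search makes the idea concrete. In this sketch the model weights, threshold and search step are invented stand-ins for a trained model, not the one behind the article's example:

```python
def approve(income: float, debt: float, savings: float) -> bool:
    """Toy stand-in for a trained model: approve when a linear score clears a threshold."""
    score = 0.5 * income - 1.2 * debt + 0.8 * savings
    return score >= 25_000

def income_counterfactual(income, debt, savings, step=1_000, cap=200_000):
    """Find the smallest income increase (to the nearest step) that flips a decline."""
    if approve(income, debt, savings):
        return income  # already approved; no counterfactual needed
    for new_income in range(int(income), cap, step):
        if approve(new_income, debt, savings):
            return new_income
    return None  # no counterfactual found along this single feature

# The article's declined applicant, flipped by raising income alone.
print(approve(45_000, 11_000, 6_000))                # False
print(income_counterfactual(45_000, 11_000, 6_000))  # 67000 with these toy weights
```

Practical counterfactual libraries search across multiple features at once and optimize for proximity and diversity of the suggested changes, rather than stepping along a single feature as this sketch does.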


What is value stream mapping? A lean technique for improving business processes

Before you can start building a value stream map, you need to objectively evaluate your organization’s business processes, products and systems. Start by talking to leadership, department heads and other key stakeholders who can give you more insight into what can be improved. You’ll need to get hands-on experience with the process, product or system yourself and have other employees walk you through their part. It’s important to collect as much data as possible — for example, any inefficiencies in the process, how many workers are involved, what resources are used and any downtime. Any potentially relevant or noteworthy data is helpful in fleshing out your final VSM flow chart and achieving insights into what can be refined or improved. You’ll then create two separate VSM flow charts — a current state value stream map and a future state value stream map. Your current state VSM will be used to establish how the process currently runs and functions in the business. This is where you will demonstrate issues, significant findings and establish key requirements. The future state VSM, on the other hand, focuses on what your process will look like once your organization has completed all of the necessary improvements.
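One concrete number the collected data yields for the current-state map is flow efficiency: value-added time divided by total lead time. A minimal sketch, with step names and durations invented for illustration:

```python
# Sketch: compute lead time and flow efficiency for a current-state value stream map.
# Step names and durations are invented placeholders.
steps = [
    # (step name, value-added hours, waiting/downtime hours)
    ("intake", 2.0, 24.0),
    ("build",  8.0,  4.0),
    ("review", 1.0, 16.0),
    ("deploy", 0.5,  2.0),
]

value_added = sum(v for _, v, _ in steps)
lead_time = sum(v + w for _, v, w in steps)
efficiency = value_added / lead_time  # share of elapsed time spent adding value

print(f"lead time: {lead_time} h, flow efficiency: {efficiency:.1%}")
```

A low efficiency figure points straight at the waiting time between steps, which is typically where the future-state map targets its improvements.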


Ethernet consortium announces completion of 800GbE spec 

Based on many of the technologies used in the current top-end 400 Gigabit Ethernet protocol, the new spec is formally known as 800GBASE-R. The consortium that designed it (then known as the 25 Gigabit Ethernet Consortium) was also instrumental in developing the 25, 50, and 100 Gigabit Ethernet protocols and includes Broadcom, Cisco, Google, and Microsoft among its members. The 800GbE spec adds new media access control (MAC) and physical coding sublayer (PCS) methods, which tweak these functions to distribute data across eight physical lanes running at a native 106.25Gbps. (A lane can be a copper twisted pair or, in optical cables, a strand of fiber or a wavelength.) The 800GBASE-R specification is built on two 400GbE PCSs, combined to create a single MAC that operates at 800Gbps. And while the focus is on eight 106.25G lanes, it's not locked in: it is possible to run 16 lanes at half the speed, or 53.125Gbps. The new standard offers half the latency of the 400GbE specification, and it also cuts the forward error correction (FEC) overhead on networks running at 50Gbps, 100Gbps, and 200Gbps by half, thus reducing the packet-processing load on the NIC.
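The quoted lane rates are consistent with the per-lane encoding overheads of the 400GbE-style PCS, namely 256b/257b transcoding plus RS(544,514) FEC. Treating that overhead chain as an assumption (the article does not state it), the arithmetic reproduces both quoted figures:

```python
# Check that the quoted lane rates follow from an 800Gbps payload plus the
# per-lane encoding overheads of the 400GbE-style PCS (assumed here):
# 256b/257b transcoding and RS(544,514) forward error correction.
PAYLOAD_GBPS = 800
TRANSCODE = 257 / 256   # 256b/257b transcoding overhead
FEC = 544 / 514         # RS(544,514) FEC overhead

line_rate = PAYLOAD_GBPS * TRANSCODE * FEC  # total signalled rate on the wire

print(round(line_rate / 8, 3))    # per-lane rate across 8 lanes:  106.25
print(round(line_rate / 16, 3))   # per-lane rate across 16 lanes: 53.125
```

The same calculation shows why 16 lanes run at exactly half the 8-lane rate: the total signalled rate is fixed, and only the number of lanes it is striped across changes.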


Application performance for remote workers becomes primary network issue for businesses


In addition to the top-line finding of dealing with complexity and performance, the study also highlighted that cost had become less of an issue for respondents, who also cited significant investment in automation, security, cloud connectivity and the potential of 5G. Drilling deeper into the pressing issues for firms, Aryaka found that as the number of remote workers increases across the globe, productivity and remote application performance have become more important for organisations across Europe, the Middle East and Africa (EMEA). Some 45% of UK businesses noted that slow application performance led to a poor user experience for remote and mobile users, and that it was a significant issue faced by IT and support teams. Accessing and integrating cloud and software-as-a-service (SaaS) applications was one of the most pressing issues for UK IT departments, cited by 39%.


Ransomware is now the biggest online menace you need to worry about - here's why


One of the reasons ransomware attacks have risen so much is that cyber criminals increasingly view them as the simplest and quickest means of making money from compromised networks. With ransomware, attackers can lock down an organisation's entire network and demand a bitcoin payment in exchange for the decryption key. Ransomware attacks are often successful because organisations opt to pay the ransom demand, viewing it as the quickest and easiest way to restore functionality to the network, despite authorities warning never to give in to the demands of extortionists. These ransomware demands commonly reach six-figure sums and, because the transfer is made in bitcoin, it's relatively simple for the criminals to launder it without it being traced back to them. "The 'beauty' of the ransomware model is you only need to write the ransomware once and its potential to infect is only limited by its reach, which with the internet is unlimited," Ed Williams, EMEA director of SpiderLabs, the research division at Trustwave, told ZDNet.


Remote business continuity techniques to implement now


This is not just an issue when facing a pandemic. If your business continuity plan addresses only short-term disruptions, such as those that last less than a month, it may not be prepared for an extended outage. Your technology disaster recovery plan may need to be activated if outages occur because too few IT staff are available, or because of technology disruptions caused by a shortage of vendor personnel. Fortunately, many data centers are designed to operate without human intervention or with remote access to system administration functions. Technology vendors frequently use managed IT resources, such as cloud-based systems, to support their service offerings. This reduces the likelihood of outages as long as the managed service providers are able to keep their systems operational. Because many organizations use remotely hosted applications, users can keep using those systems so long as their vendors are able to keep their operations working. The real challenge for organizations with mostly locally hosted systems and databases is to manage those assets remotely.


New Enterprise Graph Framework for Data Scientists Leverages Machine Learning

The new Neo4j for Graph Data Science framework is designed to enable data scientists to operationalize better analytics and machine learning models that infer behavior based on connected data and network structures, Frame described. The framework, she said in a statement announcing the product release, is intended to provide the most expeditious way to generate better predictions. "A common misconception in data science is that more data increases accuracy and reduces false positives," she explained. "In reality, many data science models overlook the most predictive elements within data -- the connections and structures that lie within. Neo4j for Graph Data Science was conceived for this purpose -- to improve the predictive accuracy of machine learning, or answer previously unanswerable analytics questions, using the relationships inherent within existing data."



Quote for the day:


"Leadership is the wise use of power. Power is the capacity to translate intention into reality and sustain it." -- Warren Bennis


Daily Tech Digest - April 22, 2020

Cisco integrates SD-WAN connectivity with Google Cloud

The Cisco/Google platform is important because software- and infrastructure-as-a-service (SaaS and IaaS) offerings have been driving SD-WAN implementations in the past year, experts say. “One of the key drivers of SD-WAN has been the increasing consumption of cloud services in the enterprise, across both IaaS and SaaS applications,” said Rohit Mehra, vice president, network infrastructure at IDC. “With some of the largest public cloud providers playing an increasing role in how these enterprise apps are consumed and delivered, and bringing their vast global networks to bear, they will increasingly have a role to play with how WANs are architected going forward.” For enterprises, one of the key takeaways from this announcement is that “SD-WANs will now be able to play a better functional role in the delivery of cloud services such as IaaS and SaaS, and likewise, the large public-cloud purveyors will benefit from providing a stronger value proposition towards multi-cloud deployments,” Mehra said. "Secondly, enterprises will benefit in terms of extending policy and governance beyond applications to other attributes such as locations/geo and multiple clouds.”



The new normal: A step-by-step guide for the enterprise

From a business perspective, we need to identify and understand the negative effects that occurred during the lockdown. What additional damage will likely occur in the short and long terms? This can range from relatively minor problems, such as a slowdown of some customer deliveries or lack of materials for manufacturing, to a complete shutdown of some operations due to on-premises systems that could not be maintained or fixed during the lockdown. You need to assign dollar amounts to each issue. Keep in mind that some of these will be hard costs, meaning sales and billing. Others will be soft costs, such as reputation. What points hurt the business the most? We need this information to prioritize triage. For most enterprises, this step will immediately identify the need to migrate some assets to cloud. The migration will typically target existing on-premises systems that managed to limp through the crisis. Based on historical migration data, the most common move will involve a “lift and shift” of resources, such as storage and compute, to a public cloud provider. Most enterprises will opt to refactor the applications at a later date; a few will refactor as the applications migrate.


Here are six tech roles companies want to fill now, despite the coronavirus lockdown


"The fact that recruitment is still continuing with relative strength in IT is perhaps unsurprising due to the ongoing need across most sectors to conduct operations remotely," said Ann Swain, CEO of APSCo. John Gaughan, managing director of technology recruitment firm Finlay James, said he has a number of clients who are hiring and using remote on-boarding when filling SaaS tech sales roles and technology leadership positions. Recruiters are switching from in-person interviews to video meetings with candidates, and in some cases, with everyone working from home, it may be some time before new recruits actually meet the people they are working with. The APSCo report also noted that recruitment for marketing has held up surprisingly well, which it said is probably down to businesses ramping up their digital marketing and communications activities. There has also been an increase in roles involving employee engagement. "With many teams now working from home, the challenge of keeping remote employees engaged and operating as a cohesive unit has never been greater," the report said.


Contactless Payments: Healthy COVID-19 Defense


From a fraud-fighting standpoint, compared with swiping a card and signing a paper receipt, contactless is much more secure. And while some call these capabilities "tap and go," in reality, there's no contact required: You just have to wave your card or compatible smartphone close to the card reader until it beeps. Cards with this capability began to be rolled out in the U.K. in 2008, and the vast majority of payment terminals in stores now work with them. Other systems that don't get refreshed very often - for example, inside buses - have been slowly catching up. Here in the Scottish city of Dundee, last year most buses finally got upgraded with the ability to accept contactless payments. Many newer smartphones also have contactless capability via Apple Pay, Android Pay or Samsung Pay. Just load a payment card and use your smartphone to pay without touching anything, up to certain amounts. As a bonus, the smartphone-based approaches add additional layers of security, such as needing to use your fingerprint or face to unlock the contactless payment capability.


Remote Agile (Part 4): Anti-Patterns

Hybrid events create two classes of teammates — remote and co-located — where the co-located folks are calling the shots. Beware of distance bias — when out of sight means out of mind — and avoid creating a privileged subclass of teammates: “Distance biases have become all too common in today’s globalized world. They emerge in meetings when folks in the room fail to gather input from their remote colleagues, who may be dialing in on a conference line.” To avoid this scenario, make sure that once a single participant joins remotely, all other participants “dial in,” too, to level the playing field. Every communication feels like a (formal) meeting. ... Instead, put trust in people, uphold the prime directive, and be surprised by what capable, self-organizing people can achieve once you get out of their way. Trust won’t be built by surveilling and micro-managing team members. Therefore, don’t go rogue; the prime directive rules more than ever in a remote agile setup. Trust in people and do not spy on them — no matter how tempting it might be. Read more about the damaging effect of a downward-spiraling trust dynamic from Esther Derby.


COVID-19 & The Digital Imperative


In a recent interview, John Chambers, former Cisco CEO and now Venture Capitalist, said the pandemic will force many “companies to use this moment to make the transition to digital. Things will get worse before they get better— that is the realistic optimist in me speaking,” said Chambers, who has predicted up to 40% of the Fortune 500 and 70% of startups will no longer be around in a decade if they don’t make the digital transition. The disruptions brought about by the pandemic can be expected to accelerate the shift to digital that has already been underway. It is not just that organizations the world over have radically altered their work environments to accommodate work from home and technologies such as video conferencing and remote networking on a massive scale. It is also that the consequences of the pandemic are likely creating digital disruption opportunities and imperatives across the economy, in industries as diverse as food and beverage, hospitality, real estate, travel, and government.


How microsegmentation architectures differ

It's important to remember that microsegmentation is not just a data center-oriented technology. "Many security incidents start on end-user workstations, because employees click on phishing links or their systems become compromised by other means," Cross says. From that initial point of infection, attackers can spread throughout an organization's network. "A microsegmentation platform should be able to enforce policies in the data center, on cloud workloads, and on end-user workstations from a single console," he explains. "It should also be able to stop attacks from spreading in any of these environments." As with many emerging technologies, vendors are approaching microsegmentation from various directions. Three traditional microsegmentation types are host-agent segmentation, hypervisor segmentation and network segmentation. ... This microsegmentation type relies on agents positioned in the endpoints. All data flows are visible and relayed to a central manager, an approach that can help reduce the pain of discovering challenging protocols or encrypted traffic.
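The "single console" idea above amounts to one central policy evaluated wherever traffic shows up: in the data center, on a cloud workload, or on a workstation. A toy sketch of such a default-deny policy check, with labels and rules invented for illustration:

```python
# Toy sketch of a central microsegmentation policy: traffic between workload
# labels is denied unless an explicit rule allows it. Labels and rules are
# invented placeholders, not from any particular vendor's product.
ALLOWED_FLOWS = {
    ("web", "api"),
    ("api", "db"),
    ("workstation", "web"),
}

def flow_permitted(src_label: str, dst_label: str) -> bool:
    """Default-deny: only explicitly allowed label pairs may communicate."""
    return (src_label, dst_label) in ALLOWED_FLOWS

# A compromised workstation cannot reach the database directly, which is
# exactly the lateral movement microsegmentation is meant to stop.
print(flow_permitted("workstation", "web"))  # True
print(flow_permitted("workstation", "db"))   # False
```

Whether the rule is enforced by a host agent, a hypervisor, or the network fabric, the policy itself stays in one place, which is the property the paragraph argues a microsegmentation platform should provide.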


Google wants to make it easier to analyse health data in the cloud


Dr John Halamka, president of Mayo Clinic Platform, said: "We're in a time where technology needs to work fast, securely, and most importantly in a way that furthers our dedication to our patients. Google Cloud's Healthcare API accelerates data liquidity among stakeholders and, in return, will help us better serve our patients." The issue of interoperability remains a tricky subject within healthcare. Battles over data formats and ownership stymie efforts to join up healthcare systems and make patient data available to healthcare professionals whenever and wherever they need it. In the US, inroads have been made recently through the passing of rules by the Centers for Medicare and Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC) to make it easier for healthcare organisations to exchange patient data, and for patients to access their own information. Google said its Cloud Healthcare API was designed to scale and support interoperability and patient access. It added that the COVID-19 pandemic had made the need for increased data interoperability more important than ever.


How developer teams went remote overnight

Remote work isn’t new for communications API specialist Twilio, but the pandemic has caused a massive shift. Prior to the coronavirus outbreak, CEO Jeff Lawson told TechCrunch that around 10 percent of the company worked remotely. “For a company like us to go from partially virtual to fully virtual in a short period of time, it’s not without its hiccups, but it has worked pretty well,” he said. That 10 percent of remote workers included the team of Marcos Placona, manager for developer evangelism at Twilio. “My team has always worked on a distributed basis with direct reports in the US, UK, and across Europe,” Placona told InfoWorld. The various time zones involved make it “tough to work this way,” he admits, “but we have regular check-ins with the team and individuals with weekly one-to-ones.” Developer evangelists at Twilio still contribute code and have to track contributions, alongside writing documentation and filtering through reams of customer feedback. During the pandemic this team has shifted to holding daily remote stand-ups.


A Tale of 3 Breaches: Incident Response Challenges

Three recently disclosed health data security incidents - including the discovery of a large email hack that happened nearly a year ago - serve as reminders of the ongoing incident response challenges facing healthcare organizations. A 2019 email hacking incident that affected 112,000 individuals was disclosed last week by Dearborn, Michigan-based Beaumont Health. Also recently reported were: a February ransomware attack on Wilmington, Del.-based substance abuse treatment provider Brandywine Counseling and Community Services that affected clinical records of an undisclosed number of patients, and a phishing scam impacting more than 27,000 patients and employees of Wisconsin-based Advocate Aurora Health. The COVID-19 crisis is likely to make it even more difficult for healthcare organizations to respond to security incidents, some observers say. "As long as COVID-19 drives IT activities in supporting remote workers and setting up patient triage tents with access to technology infrastructure, IT may have difficulty monitoring network activity for anomalous events unless a security operations center is in place to monitor around the clock, along with centralized log event management that can automate detection of and alerting on activities of concern," notes Keith Fricke.



Quote for the day:


"Many men may see the King in a Kid but it takes a true leader to nurture it" -- Bernard Kelvin Clive


Daily Tech Digest - April 21, 2020

Stay Ahead of the 5G and DevOps Race with Continuous Network Monitoring

Automobiles aside, another industry that benefits from being proactive rather than reactive is telecommunications. Not only does the telecoms world require routine checks and maintenance, but it also needs to identify problems before they cause larger issues or disruptions. Networks are evolving rapidly and this will continue as 5G deployments expand, as will the need for regularly scheduled maintenance and examinations. DevOps, a set of practices that automates the work between software development (Dev) and IT operations (Ops), combined with continuous delivery (CD), allows for a level of agility that enables new features and services to be deployed within weeks or days. There are four stages in establishing these services: design, deploy, test and operate, all of which demand a constant pace and constant network monitoring. To get the most out of DevOps and CD, including the speed benefits that come with both, predictive network monitoring (PNM) is vital.


Deploying Edge Cloud Solutions Without Sacrificing Security  

First, let's think about the structure of edge cloud systems. In most implementations, edges are within organizations' computing boundaries, and so they will be protected by a wide variety of tools that focus on perimeter scanning and intrusion detection. However, that's not quite the whole story: in most systems, there will also be a tunnel from the edge straight to cloud storage. Sending data from the edge to the cloud in a secure way is fairly straightforward, because organizations control the infrastructure used to encrypt and verify it. The problem arises when the cloud needs to send data back to the edge for processing. The challenge here is to ensure that this data is authenticated and verified, and is therefore safe to enter an organization's internal systems. First, and most obviously, edge cloud systems fragment data. Having each device connected directly to cloud services might incur a performance loss, but at least the data is centralized and can be covered by a single cloud security policy. Because edge cloud servers – almost by definition – need to be connected to many different devices, they represent a nightmare when it comes to securing all of these connections.
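One common way to meet the authenticate-and-verify requirement for data returning from the cloud is a message authentication code over the payload, checked at the edge before the data enters internal systems. The sketch below uses an HMAC with a shared secret; the key handling, message shape and names are illustrative assumptions, not a specific product's design.

```typescript
// Sketch: the cloud signs the payload it sends back; the edge node verifies
// the tag before processing. Node's built-in crypto module is used.
import { createHmac, timingSafeEqual } from "node:crypto";

const sharedKey = "edge-cloud-shared-secret"; // in practice: provisioned securely and rotated

function sign(payload: string): string {
  return createHmac("sha256", sharedKey).update(payload).digest("hex");
}

function verify(payload: string, tag: string): boolean {
  const expected = Buffer.from(sign(payload), "hex");
  const received = Buffer.from(tag, "hex");
  // timingSafeEqual avoids leaking information through comparison timing.
  return expected.length === received.length && timingSafeEqual(expected, received);
}

const message = JSON.stringify({ device: "sensor-42", result: "ok" });
const tag = sign(message);

console.log(verify(message, tag));               // true: safe to process at the edge
console.log(verify(message + "tampered", tag));  // false: reject before it enters internal systems
```

A real deployment would typically use per-device keys or public-key signatures rather than one shared secret, but the verify-before-processing step is the same.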


DDoS in the Time of COVID-19: Attacks and Raids


Unfortunately, or fortunately, cyber security is an essential business. As a result, those working in the field are not getting to experience any downtime during a quarantine. Many of us have been working around the clock, fighting off waves of attacks and helping other essential businesses adjust to a remote workforce as the global environment changes. Along the way we have learned a few things about how a modern society deals with a pandemic. Obviously, a global shelter-in-place resulted in an unanticipated surge in traffic. As lockdowns began in China and worked their way west, we began to see massive spikes in streaming and gaming services. These unanticipated surges in traffic required digital content providers to throttle or downgrade streaming services across Europe to prevent networks from overloading. The COVID-19 pandemic also highlights the importance of service availability during a global crisis. Due to the forced digitalization of the workforce and a global shelter-in-place, the world became heavily dependent on a number of digital services during isolation. Degradation or an outage impacting these services during the pandemic could quickly spark speculation and/or panic.



Governing by data: Limits and opportunities

Healthcare is perhaps the most obvious area of public service for the adoption of data analysis, given that medical science is largely built on this. The UK government has been led by data and science in reacting to the coronavirus epidemic over recent weeks, making a celebrity out of the UK’s chief medical officer Chris Whitty. But politics can trump data analysis. David Nutt, professor of neuropsychopharmacology at Imperial College London, was sacked as the government’s chief advisor on drugs in 2009 after saying policy in this area was not based on evidence. Nutt’s research found that legal alcohol was more harmful to society than illegal drugs, although heroin was rated as having the greatest damage on individuals. “The logical conclusion is, if government drugs policy is about harms, alcohol should be the primary focus,” Nutt writes in his new book Drink? The new science of alcohol and your health. “But for political reasons, this evidence has been ignored.”


IT directors plan to protect cloud budgets and consolidate vendors during downturn


According to the survey, agile delivery and cloud cost optimization are the most important priorities for tech leaders at the moment. IT managers will be using these tools to respond more quickly to customer demands and increase fiscal discipline. Agile and DevOps practices will drive faster software releases with lower failure rates and quicker recovery from incidents. IT leaders also need to pay attention to internal customers. The report recommends that teams move from reactive infrastructure management to proactive support of digital transformation efforts by working closely with business owners, developers, product managers, and tech partners. The financial crunch due to the coronavirus will motivate financial teams to track down redundant, unused, and underused cloud services and turn them off. IT managers also reported that they will analyze workloads and identify the right pricing models—on-demand, spot, or reserved—to maximize savings. The survey also found that the gap between public cloud platform providers is closing, with Google Cloud, Amazon Web Services, and Microsoft Azure each getting an equal share of votes as a preferred cloud provider. Tech leaders are looking for providers that can deliver on business needs.
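The pricing-model comparison behind such workload analysis is simple arithmetic. The rates below are made-up assumptions, not any provider's actual prices; the point is that the cheapest model depends on how many hours a workload actually runs.

```typescript
// Back-of-envelope monthly cost under three hypothetical pricing models.
function monthlyCost(hoursUsed: number): { onDemand: number; spot: number; reserved: number } {
  const onDemandRate = 0.10;   // $/hour, hypothetical
  const spotRate = 0.03;       // $/hour, hypothetical; capacity can be interrupted
  const reservedMonthly = 45;  // flat $/month commitment, hypothetical
  return {
    onDemand: hoursUsed * onDemandRate,
    spot: hoursUsed * spotRate,
    reserved: reservedMonthly, // paid whether or not the instance is used
  };
}

// Always-on workload (~730 h/month): reserved ($45) beats on-demand ($73);
// spot ($21.90) is cheaper still, if interruptions are tolerable.
console.log(monthlyCost(730));
// Lightly used workload (100 h/month): on-demand ($10) beats reserved ($45).
console.log(monthlyCost(100));
```

This is exactly the kind of calculation the surveyed IT managers describe: classify each workload by usage pattern, then pick the model that minimizes its cost.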


The Bootstrap 4 Grid Deconstructed

While upgrading my skillset and implementing an Angular-based website, I again looked at the Bootstrap grid system and decided to deep-dive into it and see what makes it work. I'll be using my original article as a kind of template for the structure of this article and will sometimes reference it for things explained there. I will also assume a basic knowledge of HTML and CSS: that you know what a <div>, <span>, etc. are, that you know about CSS inheritance rules, and so on. I also assume you have read the article about the Bootstrap 3 grid system, so you are familiar with responsive breakpoints and the like. ... The Grid: It's Still All About Rows and Columns. Nothing has changed here: we still need to define a container with rows which in turn contain columns. However, where in the Bootstrap 3 grid you always had to specify the width of your columns and make them add up to a total of 12, this is no longer true for the Bootstrap 4 grid. The Bootstrap 4 grid defines a simple col class which allows you to spread your columns evenly over the width of your page without having to specify explicit widths.
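The difference can be seen in a minimal sketch: in Bootstrap 4, equal-width columns need only the bare col class, with no widths that must sum to 12.

```html
<!-- Three equal-width columns with the Bootstrap 4 `col` class.
     In Bootstrap 3 this would have required col-xs-4 on each. -->
<div class="container">
  <div class="row">
    <div class="col">One third</div>
    <div class="col">One third</div>
    <div class="col">One third</div>
  </div>
</div>
```

Adding or removing a column redistributes the widths automatically, which is exactly what the explicit 12-unit arithmetic of Bootstrap 3 could not do.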


USB-C power for laptops is still complicated - and here's why

The problem is that while USB-C can support any and all of those, what actually works is down to the capabilities of the port and of the cable itself (more specifically, the control chips at either end of the cable). Some laptops have one USB-C port that supports the PD (Power Delivery) standard and one that doesn't, because that way you can use a cheaper controller chip and only have to route the power down one path on the motherboard. Different protocols have different licencing requirements, so not every cable supports Thunderbolt. And you need specific controller chips in the cable to support PD. That's why the UNO interchangeable cable we looked at recently didn't support PD, making it an almost, but not quite, universal cable. The £46/$55 Infinity Cable (also from Chargeasap) has some nice tweaks: a cord wrap; a smaller, less bright LED on the cable so you know when power is flowing but you don't get dazzled by your phone cable at night; and the 15-year warranty that presumably inspired the name. But the big change is that it supports PD up to 100W. The Infinity cable has USB-C on one end, with an optional ($5) USB-A adapter for when you need to use an older port; the other end is a magnet with interchangeable connectors for USB-C, Micro-USB and Lightning. The magnets are strong -- get the tip close to the cable and it snaps on securely, but if you yank on the cable the tip will come off before you pull your device off the table.


The Internet Only Works During A Pandemic Because We Killed Net Neutrality

In fact, networks in China and Italy, like here in the States, have (with a few exceptions) held up reasonably well under the massive load of telecommuting and home learning. Not because of net neutrality policy, but because network engineers are generally good at their jobs. While there have been some network problems, they're usually of the "last mile" variety in both the EU and US. As in, your ISP never upgraded that "last mile" to your house, so you're still stuck on a DSL line from around 2007 that struggles to handle Zoom teleconferencing particularly well. But most core networks around the world have held up rather admirably. The claim that the EU was suffering some kind of exceptional congestion problems appears to have originated among some EU regulators who simply urged Netflix to reduce bandwidth consumption by 25% to pre-emptively help lighten the load. There was no supporting public evidence provided of actual harm. The move was precautionary.


How to overcome application modernisation barriers


“We’re talking about IT estates that have grown up over the past 30 to 40 years, and you find that many of these organisations have not invested in technology over time,” he says, adding that a lack of integration between these applications is a major barrier to building agile, modern application portfolios. Like Mendix’s Ford, Fairclough recommends that modernisation projects be divided into “prioritised chunks”, which he says enables IT teams to tackle the most important things first. “Maybe there are some things that you don't even need to tackle, so actually you segment and decide that we can run those IT systems over there for another few years and then just retire them,” he says. Describing a challenging modernisation project he worked on, Fairclough says the amount of work required to complete it had been “totally underestimated”. The project involved an IT estate of more than 500 applications, which meant the customer did not understand how everything was connected. As a consequence, project costs were pushed up “exponentially”.


Failover Conf Q&A on Building Reliable Systems: People, Process, and Practice

The biggest challenge associated with the topic of reliability is knowing where to invest your time and energies. We’re never ‘done’ making a system reliable, so how do we know what components are most critical? Where will we get the highest ROI? Furthermore, how do we decide that a system is reliable enough? To answer that last question, set recovery time and recovery point objectives (RTOs and RPOs) and let yourself be guided by them. Based on those metrics, decide where you should be investing your time. To decide where to start improving the overall reliability of your system, you need to know how all of the components interact, and identify the most critical components and bottlenecks. You can spend all of your time making a database reliable, but that won’t matter if it sits behind a heavily used but unreliable caching layer. Dependency graphs are great for visualising how the components of your service fit together and will allow you to identify the places where you will reap the biggest reliability rewards. The challenge here is that dependency graphs get stale ridiculously quickly unless they are automated.
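As a toy illustration of reading reliability priorities off a dependency graph (service names and edges below are invented), count how many services transitively depend on each component: the ones with the most dependents are where reliability work pays off most.

```typescript
// Hypothetical service dependency graph: key depends on the listed services.
const dependsOn: Record<string, string[]> = {
  web: ["cache", "auth"],
  auth: ["db"],
  cache: ["db"],
  reports: ["db"],
};

// All services that directly or transitively depend on `target`,
// found by expanding the dependent set to a fixed point.
function transitiveDependents(target: string): string[] {
  const result = new Set<string>();
  let changed = true;
  while (changed) {
    changed = false;
    for (const [svc, deps] of Object.entries(dependsOn)) {
      if (result.has(svc)) continue;
      if (deps.some(d => d === target || result.has(d))) {
        result.add(svc);
        changed = true;
      }
    }
  }
  return [...result].sort();
}

// Everything ultimately leans on the database -- and on the cache in front of it,
// which is the article's point about a reliable database behind an unreliable cache.
console.log(transitiveDependents("db"));    // ["auth", "cache", "reports", "web"]
console.log(transitiveDependents("cache")); // ["web"]
```

In practice the graph would be generated automatically from service metadata or tracing data, since, as noted above, hand-maintained graphs go stale almost immediately.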



Quote for the day:


"When you can't make them see the light, make them feel the heat." - Ronald Reagan


Daily Tech Digest - April 20, 2020

The SingularityNET Foundation continues to provide and maintain tools, such as a command-line interface (CLI), to help AI developers create and publish services on the platform directly, irrespective of whether these services appear on the Marketplace. This is key to the decentralized methodology, vision and ethos which has guided SingularityNET since its founding. However, AI services that appear on the platform via routes other than the Publisher Portal will not be listed on the Marketplace UI and cannot make use of the Marketplace’s tools for easy deployment, monitoring, maintenance, fiat/crypto conversion and so forth. The AI Publisher Portal enables developers to register themselves and submit their services for curation, seamlessly validates developer identities, and provides a guided and intuitive way to create and manage services on the Marketplace. Only services curated and published via the Publisher portal, and in this way approved by the Foundation, will appear on the Marketplace. 


COVID-19 Has United Cybersecurity Experts, But Will That Unity Survive the Pandemic?


“A nurse or doctor can’t do what we do, and we can’t do what they do,” Espinosa said. “We’ve seen a massive rise in threats and attacks against healthcare systems, but it’s worse if someone dies due to a malicious cyberattack when we have the ability to prevent that. A lot of people are involved because they’re emotionally attached to the idea of helping this critical infrastructure stay safe and online.” Using threat intelligence feeds donated by dozens of cybersecurity companies, the CTC is poring over more than 100 million pieces of data about potential threats each day, running those indicators through security products from roughly 70 different vendors. If at least 10 of those flag a specific data point — such as a domain name — as malicious or bad, it gets added to the CTC’s blocklist, which is designed to be used by organizations worldwide for blocking malicious traffic. “For possible threats, meaning between five and nine vendors detect an indicator as bad, our volunteers manually verify that the indicator is malicious before including it in our blocklist,” Espinosa said. ... Mark Rogers, one of several people helping to manage the CTI League’s efforts, told Reuters the top priority of the group is working to combat hacks against medical facilities and other frontline responders to the pandemic, as well as helping defend communication networks and services that have become essential as more people work from home.


Machine Learning Playing An Important Role In Data Management


With advances in machine learning, cloud computing and storage, enterprises are finally breaking the data-management logjam. At stake are breakout gains in business efficiency, revenue realization, product innovation and competitive differentiation. The outcomes could be transformational. For CIOs and CISOs stressed over security, compliance and scheduling SLAs, it’s essential to understand that with ever-expanding volumes and varieties of data, it’s not humanly possible for an administrator, or even a team of administrators and data scientists, to tackle these challenges alone. Luckily, machine learning can help. A variety of machine learning and deep learning strategies may be used to achieve this. Broadly, machine/deep learning methods may be classified as unsupervised learning, supervised learning or reinforcement learning. The choice of strategy is driven by the problem being solved. For instance, supervised learning mechanisms such as random forests may be used to establish a baseline, or what constitutes “typical” behavior for a system, by observing relevant attributes, and then use that baseline to identify anomalies that deviate from the norm.
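As a rough illustration of that baseline-then-detect pattern, the sketch below uses a simple mean/standard-deviation model as a stand-in for the random-forest approach mentioned above; the metric and numbers are invented.

```typescript
// Learn what "typical" looks like from observed samples...
function baseline(samples: number[]): { mean: number; std: number } {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  return { mean, std: Math.sqrt(variance) };
}

// ...then flag values that stray too far from that baseline.
function isAnomaly(value: number, model: { mean: number; std: number }, k = 3): boolean {
  return Math.abs(value - model.mean) > k * model.std;
}

// e.g. nightly backup throughput in MB/s observed over two weeks (hypothetical)
const observed = [98, 102, 101, 99, 100, 97, 103, 100, 99, 101, 98, 102, 100, 99];
const model = baseline(observed);

console.log(isAnomaly(100, model)); // false: within the normal range
console.log(isAnomaly(250, model)); // true: deviation worth an administrator's attention
```

A production system would use richer features and a learned model, but the shape is the same: observe relevant attributes, establish a baseline, alert on deviations.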


How can businesses ensure ROI from 5G services?

The unprecedented speed and capacity of 5G will dramatically increase the productivity of a typical business, paying dividends in terms of increased efficiency and therefore tangible ROI. In the short term, 5G will enable agile, fast fixed-wireless connections that let organisations “cut the cord” while extending the reach and reliability of their WAN. While businesses today operate networks as many individual domains (branch, mobile and IoT), an advanced orchestration and automation system can make the entire network operate as a single unified network fabric. Looking further ahead, the power of edge computing will provide the processing muscle that moves artificial intelligence-powered solutions from the niche to the mainstream. From a cost-benefit perspective, AI automates and simplifies data analysis of any type, which can clearly offload work from human staff and increase productivity. While AI solutions are currently housed mainly in data centres, 5G will enable rapidly accelerated data processing at the network edge, providing the real-time and ubiquitous connectivity that AI requires to function.


Data-Driven Decision Making – Optimizing the Product Delivery Organization

With the Indicators Framework defined, it was clear to us that its introduction to the organization of 16 development teams could only be effective if sufficient support could be provided to the teams. We introduced Hypotheses first. Six months later we introduced SRE. And six months after that we introduced Continuous Delivery Indicators to the organization. We chose a staged approach to introducing these changes in order to have the organization focus on one change at a time. In terms of preparation for the introduction, Hypotheses were the easiest; it took an extension of our Business Feature Template and a workshop with each team.  To prepare for the SRE introduction, we implemented basic infrastructure for two fundamental SLIs - Availability and Latency. The infrastructure is able to generate SLI and Error Budget Dashboards for each service of each team. Most importantly, it is able to do alerting on Error Budget Consumption in all deployment environments.
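The error-budget arithmetic behind that kind of alerting can be sketched in a few lines; the SLO target and request counts below are made-up numbers, not the article's actual setup.

```typescript
// Fraction of the error budget consumed for an availability SLI:
// an SLO of, say, 99.9% over N requests allows N * 0.1% failures (the budget).
function errorBudgetConsumed(sloTarget: number, totalRequests: number, failedRequests: number): number {
  const allowedFailures = totalRequests * (1 - sloTarget); // the error budget
  return failedRequests / allowedFailures;                 // fraction of budget spent
}

// A 99.9% availability SLO over 1,000,000 requests allows 1,000 failures.
const consumed = errorBudgetConsumed(0.999, 1_000_000, 400);
console.log(consumed); // ≈ 0.4: 40% of the budget spent; alert well before it reaches 1.0
```

Alerting on the rate at which this fraction grows (burn rate), rather than on individual failures, is what lets teams react before the SLO itself is violated.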


Is a free VPN a good idea for your IoT devices?

While some of the free VPNs available are secure, a few others aren’t. Some free VPNs have been reported to sell users’ data to third parties, thereby undermining your privacy. There are also a few cases where VPNs have been used to facilitate malware attacks by housing the malware elements. Some may also try to access apps that they should not, such as Maps. For these reasons, it is recommended to use free VPNs from tried and tested reliable providers. Various VPN providers throw different features into their free products. Generally, most include basic functionality, i.e. privacy and encryption. The rest of the advanced features are reserved for the premium plans. Truth be told, you can hardly find a free VPN that has all the features you need; you might be forced to forego some. It goes without saying, then, that the best free VPN is the one that offers the most of the features you need. The Commonwealth Scientific and Industrial Research Organisation conducted a study of over 280 Android VPN apps. The study revealed that 67% of the apps had trackers embedded in their code.


The Way Forward: Digital Resiliency Wins

McKinsey & Company advised CIOs to keep their focus on stabilizing emergency measures by strengthening remote working capabilities, improving cybersecurity, adjusting ways of working with agile teams and preparing for a breakdown of parts of the vendor ecosystem (supply chain). In the interim, CIOs need to address immediate IT cost pressures and creatively redeploy the IT workforce, while also pivoting to new areas of focus for the future. According to McKinsey & Company, many organizations are successfully engaging digitally with customers; the firm cited a government in Western Europe that embarked on an “express digitization” of quarantine-compensation claims to deal with a more than 100-fold increase in volume. “Sometimes this effort is about taking loads from call centers, but more often it addresses real new business opportunities. To engage with consumers, for example, retailers in China increasingly gave products at-home themes in WeChat,” McKinsey & Company wrote.


Windows 10 turns five: Don't get too comfortable, the rules will change again

Despite the occasional twists and turns that Windows 10 has taken in the past five years, it has accomplished its two overarching goals. First, it erased the memory of Windows 8 and its confusing interface. For the overwhelming majority of Microsoft's customers who decided to skip Windows 8 and stick with Windows 7, the transition was reasonably smooth. Even the naming decision, to skip Windows 9 and go straight to 10, was, in hindsight, pretty smart. Second, it offered an upgrade path to customers who were still deploying Windows 7 in businesses. That alternative became extremely important when we zoomed past the official end-of-support date for Windows 7 in January 2020. In mid-2019, when I checked usage data from the U.S. Government's Data Analytics Program, the migration to Windows 10 appeared to be stalled. As of July 31, 2019, Windows 7 still accounted for 26% of all visits to U.S. government websites from Windows PCs. Nine months later, that number has been cut in half. For the six weeks ending April 15, that same metric shows the number of visits from Windows 7 PCs is down to 12.7% and continuing to slide.


What is TypeScript? Strongly typed JavaScript

TypeScript is a superset of JavaScript. While any correct JavaScript code is also correct TypeScript code, TypeScript also has language features that aren’t part of JavaScript. The most prominent feature unique to TypeScript—the one that gave TypeScript its name—is, as noted, strong typing: a TypeScript variable is associated with a type, like a string, number, or boolean, that tells the compiler what kind of data it can hold. In addition, TypeScript does support type inference, and includes a catch-all any type, which means that variables don’t have to have their types assigned explicitly by the programmer; more on that in a moment. TypeScript is also designed for object-oriented programming—JavaScript, not so much. Concepts like inheritance and access control that are not intuitive in JavaScript are simple to implement in TypeScript. In addition, TypeScript allows you to implement interfaces, a largely meaningless concept in the JavaScript world. That said, there’s no functionality you can code in TypeScript that you can’t also code in JavaScript. That’s because TypeScript isn’t compiled in a conventional sense—the way, for instance, C++ is compiled into a binary executable that can run on specified hardware.
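The features described above can be seen in a few lines of TypeScript (the names are illustrative):

```typescript
// Explicit typing: the compiler rejects assigning the wrong kind of data here.
let port: number = 8080;

// Type inference: no annotation needed; `greeting` is inferred as string.
const greeting = "hello";

// Interfaces, a concept with no direct JavaScript equivalent:
interface User {
  name: string;
  admin: boolean;
}

// Access control and inheritance, straightforward where plain JavaScript is not:
class Account {
  constructor(private owner: User) {} // `private`: not accessible outside the class
  describe(): string {
    return `${this.owner.name} (admin: ${this.owner.admin})`;
  }
}

class AuditedAccount extends Account {} // inheritance

const acct = new AuditedAccount({ name: "ada", admin: true });
console.log(acct.describe()); // "ada (admin: true)"
```

When compiled, all of the type annotations, the interface and the access modifier disappear: the emitted JavaScript is plain classes and functions, which is why everything expressible in TypeScript is also expressible in JavaScript.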


Fostering Smart Cities based on an Enterprise Architecture Approach

Figure 1: Proposed EAF for the +CityxChange project.
The research aims to develop an overall ICT architecture and service-based ecosystem to ensure that service providers of the +CityxChange project can develop, deploy and test their services through integrated and interconnected approaches. For the purpose of this research, a city can be seen as a big enterprise with different departments. With its ability to model the complexities of the real world in a practical way and to help users plan, design, document, and communicate IT and business-oriented issues, the Enterprise Architecture (EA) method has become a popular approach to business and IT system management. The decision support that it offers makes EA an ideal approach for sustainable smart cities, and it is being increasingly used in smart city projects. This approach allows functional components to be shared and reused and infrastructure and technologies to be standardised. EA can enhance the quality and performance of city processes and improve productivity across a city by integrating and unifying data linkages. 



Quote for the day:


"Don't believe what your eyes are telling you. All they show is limitation. Look with your understanding." -- Richard B