Daily Tech Digest - April 21, 2020

Stay Ahead of the 5G and DevOps Race with Continuous Network Monitoring

Automobiles aside, another industry that benefits from being proactive rather than reactive is telecommunications. Not only does the telecoms world require routine checks and maintenance, it also needs to identify problems before they cause larger issues or disruptions. Networks are evolving rapidly and will continue to do so as 5G deployments expand; so will the need for regularly scheduled maintenance and examinations. DevOps – a set of practices that automates the work between software development (Dev) and IT operations (Ops) – combined with continuous delivery (CD) allows for a level of agility that enables new features and services to be deployed within weeks or days. There are four stages in establishing these services – design, deploy, test and operate – all of which demand a constant pace and continuous network monitoring. To get the most out of DevOps and CD, including the speed benefits that come with both, predictive network monitoring (PNM) is vital.


Deploying Edge Cloud Solutions Without Sacrificing Security  

First, let's think about the structure of edge cloud systems. In most implementations, edges sit within organizations' computing boundaries, and so they will be protected by a wide variety of tools that focus on perimeter scanning and intrusion detection. However, that's not quite the whole story: in most systems, there will also be a tunnel running straight from the edge to cloud storage. Sending data from the edge to the cloud in a secure way is fairly straightforward, because organizations control the infrastructure that is used to encrypt and verify it. The problem arises when the cloud needs to send data back to the edge for processing. The challenge here is to ensure that this data is authenticated and verified, and is therefore safe to enter an organization's internal systems. Most obviously, edge cloud systems fragment data. Having each device connected directly to cloud services might incur a performance loss, but at least the data is centralized and can be covered by a single cloud security policy. Because edge cloud servers – almost by definition – need to be connected to many different devices, they are a nightmare when it comes to securing all of those connections.
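One common way to satisfy that "authenticated and verified" requirement is for the cloud side to attach a message authentication code to everything it sends down, and for the edge to check it before the data touches internal systems. Below is a minimal Node.js/TypeScript sketch of the idea; the key value, message shape, and function names are invented for illustration, and a real deployment would provision keys via a KMS and typically add mutual TLS on the tunnel itself.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Shared secret provisioned to both the cloud service and the edge node
// (in practice this would come from a KMS or hardware security module).
const SHARED_KEY = "edge-node-7f3a-demo-key";

// The cloud signs a payload before sending it down to the edge.
function sign(payload: string): string {
  return createHmac("sha256", SHARED_KEY).update(payload).digest("hex");
}

// The edge verifies the tag before letting the data into internal systems.
function verify(payload: string, tag: string): boolean {
  const expected = sign(payload);
  if (expected.length !== tag.length) return false;
  // Constant-time comparison to avoid leaking tag prefixes via timing.
  return timingSafeEqual(Buffer.from(expected), Buffer.from(tag));
}

const msg = JSON.stringify({ device: "cam-12", action: "update-model" });
const tag = sign(msg);
console.log(verify(msg, tag));       // true: accepted
console.log(verify(msg + "x", tag)); // false: rejected, payload was altered
```

Any tampering in transit changes the payload, so the HMAC check fails and the edge can drop the message instead of admitting it.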


DDoS in the Time of COVID-19: Attacks and Raids


Unfortunately, or fortunately, cyber security is an essential business. As a result, those working in the field are not getting to experience any downtime during quarantine. Many of us have been working around the clock, fighting off waves of attacks and helping other essential businesses adjust to a remote workforce as the global environment changes. Along the way we have learned a few things about how a modern society deals with a pandemic. Obviously, a global shelter-in-place resulted in an unanticipated surge in traffic. As lockdowns began in China and worked their way west, we began to see massive spikes in streaming and gaming services. These unanticipated surges in traffic required digital content providers to throttle or downgrade streaming services across Europe to prevent networks from overloading. The COVID-19 pandemic also highlights the importance of service availability during a global crisis. Due to the forced digitalization of the workforce and a global shelter-in-place, the world became heavily dependent on a number of digital services during isolation. Degradation or an outage impacting these services during the pandemic could quickly spark speculation and/or panic.



Governing by data: Limits and opportunities

Healthcare is perhaps the most obvious area of public service for the adoption of data analysis, given that medical science is largely built on this. The UK government has been led by data and science in reacting to the coronavirus epidemic over recent weeks, making a celebrity out of the UK’s chief medical officer Chris Whitty. But politics can trump data analysis. David Nutt, professor of neuropsychopharmacology at Imperial College London, was sacked as the government’s chief advisor on drugs in 2009 after saying policy in this area was not based on evidence. Nutt’s research found that legal alcohol was more harmful to society than illegal drugs, although heroin was rated as having the greatest damage on individuals. “The logical conclusion is, if government drugs policy is about harms, alcohol should be the primary focus,” Nutt writes in his new book Drink? The new science of alcohol and your health. “But for political reasons, this evidence has been ignored.”


IT directors plan to protect cloud budgets and consolidate vendors during downturn


According to the survey, agile delivery and cloud cost optimization are the most important priorities for tech leaders at the moment. IT managers will be using these tools to respond more quickly to customer demands and increase fiscal discipline. Agile and DevOps practices will drive faster software releases with lower failure rates and quicker recovery from incidents. IT leaders also need to pay attention to internal customers. The report recommends that teams move from reactive infrastructure management to proactive support of digital transformation efforts by working closely with business owners, developers, product managers, and tech partners. The financial crunch due to the coronavirus will motivate financial teams to track down redundant, unused, and underused cloud services and turn them off. IT managers also reported that they will analyze workloads and identify the right pricing models—on-demand, spot, or reserved—to maximize savings. The survey also found that the gap between public cloud platform providers is closing, with Google Cloud, Amazon Web Services, and Microsoft Azure each getting an equal share of votes as a preferred cloud provider. Tech leaders are looking for providers that can deliver on business needs.
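As a toy illustration of the pricing-model comparison the survey mentions, the selection rule and the savings arithmetic look roughly like this. All rates below are placeholders, not any provider's actual prices:

```typescript
// Hypothetical hourly rates for a single instance type; real prices vary
// by provider, region, and commitment term.
const ON_DEMAND = 0.10; // $/hour, pay-as-you-go
const RESERVED = 0.062; // $/hour effective, 1-year commitment
const SPOT = 0.031;     // $/hour, interruptible capacity

// Pick the cheapest model a workload is eligible for: spot only if the
// workload tolerates interruption, reserved only if it runs steadily.
function cheapestModel(steadyState: boolean, interruptible: boolean): string {
  if (interruptible) return "spot";
  if (steadyState) return "reserved";
  return "on-demand";
}

// Monthly savings from moving a workload between two hourly rates.
function monthlySavings(hours: number, from: number, to: number): number {
  return (from - to) * hours;
}

console.log(cheapestModel(true, false));                // "reserved"
console.log(monthlySavings(730, ON_DEMAND, RESERVED));  // ~ $27.74/month
```

The point of the exercise is simply that matching each workload's tolerance for interruption and its utilization pattern to a pricing model is mechanical once those two facts are known.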


The Bootstrap 4 Grid Deconstructed

While upgrading my skillset and implementing an Angular-based website, I again looked at the Bootstrap grid system and decided to deep-dive into it to see what makes it work. I'll be using my original article as a kind of template for the structure of this article and will sometimes reference it for things explained there. I will also assume a basic knowledge of HTML and CSS: that you know what a <div>, <span>, etc. are, and that you know about CSS inheritance rules. I also assume you have read the article about the Bootstrap 3 grid system, so you are familiar with responsive breakpoints and the like. ... The Grid: It's Still All About Rows and Columns. Nothing has changed here: we still need to define a container with rows, which in turn contain columns. However, where in the Bootstrap 3 grid you always had to specify the width of your columns and make them add up to a total of 12, this is no longer true for the Bootstrap 4 grid. The Bootstrap 4 grid defines a simple col class which lets columns share the available width evenly, without you having to specify sizes that add up to 12.


USB-C power for laptops is still complicated - and here's why

The problem is that while USB-C can support any and all of those, what actually works is down to the capabilities of the port and of the cable itself (more specifically, the control chips at either end of the cable). Some laptops have one USB-C port that supports the PD (Power Delivery) standard and one that doesn't, because that way you can use a cheaper controller chip and only have to route the power down one path on the motherboard. Different protocols have different licencing requirements, so not every cable supports Thunderbolt. And you need specific controller chips in the cable to support PD. That's why the UNO interchangeable cable we looked at recently didn't support PD, making it an almost, but not quite, universal cable. The £46/$55 Infinity Cable (also from Chargeasap) has some nice tweaks: a cord wrap; a smaller, less bright LED on the cable so you know when power is flowing but you don't get dazzled by your phone cable at night; and the 15-year warranty that presumably inspired the name. But the big change is that it supports PD up to 100W. The Infinity cable has USB-C on one end, with an optional ($5) USB-A adapter for when you need to use an older port; the other end is a magnet with interchangeable connectors for USB-C, Micro-USB and Lightning. The magnets are strong -- get the tip close to the cable and it snaps on securely, but if you yank on the cable the tip will come off before you pull your device off the table.


The Internet Only Works During A Pandemic Because We Killed Net Neutrality

In fact, networks in China and Italy, like here in the States, have (with a few exceptions) held up reasonably well under the massive load of telecommuting and home learning. Not because of net neutrality policy, but because network engineers are generally good at their jobs. While there have been some network problems, they're usually of the "last mile" variety in both the EU and US. As in, your ISP never upgraded that "last mile" to your house, so you're still stuck on a DSL line from around 2007 that struggles to handle Zoom teleconferencing. But most core networks around the world have held up rather admirably. The claim that the EU was suffering some kind of exceptional congestion problems appears to have originated among some EU regulators who simply urged Netflix to reduce bandwidth consumption by 25% to pre-emptively help lighten the load. There was no supporting public evidence provided of actual harm. The move was precautionary.


How to overcome application modernisation barriers


“We’re talking about IT estates that have grown up over the past 30 to 40 years, and you find that many of these organisations have not invested in technology over time,” he says, adding that a lack of integration between these applications is a major barrier to building agile, modern application portfolios. Like Mendix’s Ford, Fairclough recommends that modernisation projects be divided into “prioritised chunks”, which he says enables IT teams to tackle the most important things first. “Maybe there are some things that you don't even need to tackle, so actually you segment and decide that we can run those IT systems over there for another few years and then just retire them,” he says. Describing a challenging modernisation project he worked on, Fairclough says the amount of work required to complete it had been “totally underestimated”. The project involved an IT estate of more than 500 applications, and the customer did not understand how everything was connected. As a consequence, project costs were pushed up “exponentially”.


Failover Conf Q&A on Building Reliable Systems: People, Process, and Practice

The biggest challenge associated with the topic of reliability is knowing where to invest your time and energies. We’re never ‘done’ making a system reliable, so how do we know what components are most critical? Where will we get the highest ROI? Furthermore, how do we decide that a system is reliable enough? To answer that last question, set recovery time and recovery point objectives (RTOs and RPOs) and let yourself be guided by them. Based on those metrics, decide where you should be investing your time. To decide where to start improving the overall reliability of your system, you need to know how all of the components interact, and identify the most critical components and bottlenecks. You can spend all of your time making a database reliable, but that won’t matter if it sits behind a heavily used but unreliable caching layer. Dependency graphs are great for visualising how the components of your service fit together and will allow you to identify the places where you will reap the biggest reliability rewards. The challenge here is that dependency graphs get stale ridiculously quickly unless they are automated.
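The "identify the most critical components" step can be automated directly from a dependency graph. A small sketch, with invented service names: it counts how many services transitively depend on each component, and the highest counts are the bottlenecks worth hardening first.

```typescript
// Toy service dependency graph: each service lists what it calls directly.
// Names are illustrative, not from any real system.
const deps: Record<string, string[]> = {
  web: ["api"],
  api: ["cache", "auth"],
  auth: ["db"],
  cache: ["db"],
  reports: ["db"],
};

// For each component, count how many services transitively depend on it.
function dependents(graph: Record<string, string[]>): Map<string, number> {
  const counts = new Map<string, number>();
  const reach = (svc: string, seen: Set<string>): void => {
    for (const d of graph[svc] ?? []) {
      if (seen.has(d)) continue; // already counted on this walk
      seen.add(d);
      counts.set(d, (counts.get(d) ?? 0) + 1);
      reach(d, seen);
    }
  };
  for (const svc of Object.keys(graph)) reach(svc, new Set());
  return counts;
}

const ranked = [...dependents(deps)].sort((a, b) => b[1] - a[1]);
console.log(ranked[0]); // [ 'db', 5 ] -- every service ultimately needs db
```

In this toy graph the database sits under everything, so it is the highest-leverage place to invest in reliability; note also that the caching layer shields the database from two services, which is exactly the "reliable database behind an unreliable cache" trap described above. As the article warns, a graph like `deps` goes stale quickly unless it is generated automatically from live telemetry.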



Quote for the day:


"When you can't make them see the light, make them feel the heat." - Ronald Reagan


Daily Tech Digest - April 20, 2020

The SingularityNET Foundation continues to provide and maintain tools, such as a command-line interface (CLI), to help AI developers create and publish services on the platform directly, irrespective of whether these services appear on the Marketplace. This is key to the decentralized methodology, vision and ethos which has guided SingularityNET since its founding. However, AI services that appear on the platform via routes other than the Publisher Portal will not be listed on the Marketplace UI and cannot make use of the Marketplace’s tools for easy deployment, monitoring, maintenance, fiat/crypto conversion and so forth. The AI Publisher Portal enables developers to register themselves and submit their services for curation, seamlessly validates developer identities, and provides a guided and intuitive way to create and manage services on the Marketplace. Only services curated and published via the Publisher portal, and in this way approved by the Foundation, will appear on the Marketplace. 


COVID-19 Has United Cybersecurity Experts, But Will That Unity Survive the Pandemic?


“A nurse or doctor can’t do what we do, and we can’t do what they do,” Espinosa said. “We’ve seen a massive rise in threats and attacks against healthcare systems, but it’s worse if someone dies due to a malicious cyberattack when we have the ability to prevent that. A lot of people are involved because they’re emotionally attached to the idea of helping this critical infrastructure stay safe and online.” Using threat intelligence feeds donated by dozens of cybersecurity companies, the CTC is poring over more than 100 million pieces of data about potential threats each day, running those indicators through security products from roughly 70 different vendors. If at least 10 of those flag a specific data point — such as a domain name — as malicious or bad, it gets added to the CTC’s blocklist, which is designed to be used by organizations worldwide for blocking malicious traffic. “For possible threats, meaning between five and nine vendors detect an indicator as bad, our volunteers manually verify that the indicator is malicious before including it in our blocklist,” Espinosa said. ... Mark Rogers, one of several people helping to manage the CTI League’s efforts, told Reuters the top priority of the group is working to combat hacks against medical facilities and other frontline responders to the pandemic, as well as helping defend communication networks and services that have become essential as more people work from home.
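The thresholds Espinosa describes (ten or more vendor detections for automatic blocklisting, five to nine for manual review) amount to a simple triage rule, sketched here with invented names:

```typescript
// Triage an indicator (e.g. a domain name) by how many of the ~70 vendor
// security products flag it as malicious, mirroring the CTC's thresholds.
type Verdict = "blocklist" | "manual-review" | "ignore";

function triage(vendorFlags: number): Verdict {
  if (vendorFlags >= 10) return "blocklist"; // added automatically
  if (vendorFlags >= 5) return "manual-review"; // volunteers verify first
  return "ignore";
}

console.log(triage(12)); // "blocklist"
console.log(triage(7));  // "manual-review"
console.log(triage(2));  // "ignore"
```

The two-tier design trades a little volunteer labour in the 5–9 band for a much lower false-positive rate on a blocklist that organizations worldwide apply automatically.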


Machine Learning Playing An Important Role In Data Management


With advances in machine learning, cloud computing and storage, enterprises are finally breaking the data-management logjam. At stake are breakthrough gains in business efficiency, revenue realization, product innovation and competitive differentiation. The outcomes could be transformational. For CIOs and CISOs stressed over security, compliance and scheduling SLAs, it's essential to understand that with ever-expanding volumes and varieties of data, it's not humanly possible for an administrator, or even a team of administrators and data scientists, to tackle these challenges. Luckily, machine learning can help. A variety of machine learning and deep learning strategies may be used to achieve this. Broadly, machine/deep learning methods can be classified as unsupervised learning, supervised learning, or reinforcement learning. The choice of strategy is driven by the problem being solved. For instance, supervised learning mechanisms such as random forests might be used to establish a baseline, or what constitutes "typical" behavior for a system, by observing relevant attributes, and then use that baseline to identify anomalies that stray from the norm.
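To make the baseline-then-detect idea concrete, here is a deliberately simplified sketch: instead of a trained random forest, it establishes a mean/standard-deviation baseline from historical samples and flags values more than three standard deviations out. The metric, data, and threshold are invented for illustration.

```typescript
// Establish a baseline of "typical" behaviour from historical samples.
function baseline(samples: number[]): { mean: number; std: number } {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  return { mean, std: Math.sqrt(variance) };
}

// Flag values that stray more than 3 standard deviations from the baseline.
function isAnomaly(value: number, b: { mean: number; std: number }): boolean {
  return Math.abs(value - b.mean) > 3 * b.std;
}

const iops = [120, 118, 125, 122, 119, 121, 117, 124]; // historical metric
const b = baseline(iops);
console.log(isAnomaly(123, b)); // false: within normal variation
console.log(isAnomaly(480, b)); // true: flagged for investigation
```

A production system would replace the three-sigma rule with a learned model over many attributes at once, but the workflow is the same: learn "normal" from history, then alert on deviations.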


How can businesses ensure ROI from 5G services?

The unprecedented speed and capacity of 5G will dramatically increase the productivity of a typical business, paying dividends in terms of increased efficiency and therefore tangible ROI. In the short-term, 5G will enable agile and fast fixed wireless connections that will enable organisations to “cut-the-cord” while extending the reach and reliability of their WAN. While businesses today operate networks as many individual domains (branch, mobile and IoT), an advanced orchestration and automation system can make the entire network operate as a single unified network fabric. Looking further ahead, the power of edge computing will provide the processing power that will move artificial intelligence-powered solutions from the niche to the mainstream. From a cost-benefit perspective, AI automates and simplifies data analysis of any type, which can clearly offload work from human staff and increase productivity. While AI solutions are currently housed mainly in data centres, 5G will enable rapidly accelerated data processing at the network edge, providing the real-time and ubiquitous connectivity that AI requires to function.


Data-Driven Decision Making – Optimizing the Product Delivery Organization

With the Indicators Framework defined, it was clear to us that its introduction to the organization of 16 development teams could only be effective if sufficient support could be provided to the teams. We introduced Hypotheses first. Six months later we introduced SRE. And six months after that we introduced Continuous Delivery Indicators to the organization. We chose a staged approach to introducing these changes in order to have the organization focus on one change at a time. In terms of preparation for the introduction, Hypotheses were the easiest; it took an extension of our Business Feature Template and a workshop with each team.  To prepare for the SRE introduction, we implemented basic infrastructure for two fundamental SLIs - Availability and Latency. The infrastructure is able to generate SLI and Error Budget Dashboards for each service of each team. Most importantly, it is able to do alerting on Error Budget Consumption in all deployment environments.
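The Availability SLI and error-budget arithmetic behind dashboards like these can be sketched in a few lines. The request counts and the 99.9% SLO below are illustrative, not figures from the article:

```typescript
// Availability SLI: fraction of requests that were served successfully.
function availability(good: number, total: number): number {
  return good / total;
}

// Fraction of the period's error budget already consumed.
// With a 99.9% SLO, 0.1% of requests are "allowed" to fail.
function budgetConsumed(slo: number, sli: number): number {
  const allowed = 1 - slo;
  const failed = 1 - sli;
  return failed / allowed;
}

const sli = availability(999_400, 1_000_000);
const used = budgetConsumed(0.999, sli);
console.log(sli);  // ~0.9994
console.log(used); // ~0.6 -- 60% of the budget is gone, worth an alert
```

Alerting on error-budget consumption, as the article describes, means paging when `used` approaches 1.0 (or when its burn rate is high), rather than on every individual failure.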


Is a free VPN a good idea for your IoT devices?

While some of the free VPNs available are secure, others aren't. Some free VPNs have been reported to sell users' data to third parties, thereby undermining your privacy. There are also a few cases where VPNs have been used to facilitate malware attacks by housing the malware elements. Some may also try to access apps that they should not, such as Maps. For these reasons, it is recommended to use free VPNs from tried and tested, reliable providers. Various VPN providers throw in different features in their free products. Generally, most include basic functionality, i.e. privacy and encryption. The rest of the advanced features are reserved for the premium plans. Truth be told, you can hardly find a free VPN that has all the features you need; you might be forced to forgo some. It goes without saying, then, that the best free VPN is one that offers most of the features you need. The Commonwealth Scientific and Industrial Research Organisation conducted a study on over 280 Android VPN apps. The study revealed that 67% of the apps had trackers embedded in their code.


The Way Forward: Digital Resiliency Wins

McKinsey & Company advised CIOs to keep their focus on stabilizing emergency measures: strengthening remote working capabilities, improving cybersecurity, adjusting ways of working with agile teams and preparing for a breakdown of parts of the vendor ecosystem (supply chain). In the interim, CIOs need to address immediate IT cost pressures to reduce costs and creatively redeploy the IT workforce, while also pivoting to new areas of focus for the future. According to McKinsey & Company, many organizations are successfully engaging digitally with customers; it cited a government in Western Europe that embarked on an "express digitization" of quarantine-compensation claims to deal with a more than 100-fold increase in volume. "Sometimes this effort is about taking loads from call centers, but more often it addresses real new business opportunities. To engage with consumers, for example, retailers in China increasingly gave products at-home themes in WeChat," McKinsey & Company wrote.


Windows 10 turns five: Don't get too comfortable, the rules will change again

Despite the occasional twists and turns that Windows 10 has taken in the past five years, it has accomplished its two overarching goals. First, it erased the memory of Windows 8 and its confusing interface. For the overwhelming majority of Microsoft's customers who decided to skip Windows 8 and stick with Windows 7, the transition was reasonably smooth. Even the naming decision, to skip Windows 9 and go straight to 10 was, in hindsight, pretty smart. Second, it offered an upgrade path to customers who were still deploying Windows 7 in businesses. That alternative became extremely important when we zoomed past the official end-of-support date for Windows 7 in January 2020. In mid-2019, when I checked usage data from the U.S. Government's Data Analytics Program, the migration to Windows 10 appeared to be stalled. As of July 31, 2019, Windows 7 still accounted for 26% of all visits to U.S. government websites from Windows PCs. Nine months later, that number has been cut in half. For the six weeks ending April 15, that same metric shows the number of visits from Windows 7 PCs is down to 12.7% and continuing to slide.


What is TypeScript? Strongly typed JavaScript

TypeScript is a superset of JavaScript. While any correct JavaScript code is also correct TypeScript code, TypeScript also has language features that aren’t part of JavaScript. The most prominent feature unique to TypeScript—the one that gave TypeScript its name—is, as noted, strong typing: a TypeScript variable is associated with a type, like a string, number, or boolean, that tells the compiler what kind of data it can hold. In addition, TypeScript does support type inference, and includes a catch-all any type, which means that variables don’t have to have their types assigned explicitly by the programmer; more on that in a moment. TypeScript is also designed for object-oriented programming—JavaScript, not so much. Concepts like inheritance and access control that are not intuitive in JavaScript are simple to implement in TypeScript. In addition, TypeScript allows you to implement interfaces, a largely meaningless concept in the JavaScript world. That said, there’s no functionality you can code in TypeScript that you can’t also code in JavaScript. That’s because TypeScript isn’t compiled in a conventional sense—the way, for instance, C++ is compiled into a binary executable that can run on specified hardware.
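A few lines of TypeScript show the features described above in action: explicit type annotations, type inference, the catch-all any, and interfaces.

```typescript
// Explicit annotation: the compiler rejects values of the wrong type.
let port: number = 8080;
// port = "8080";      // compile error: string is not assignable to number

// Type inference: 'greeting' is inferred as string, no annotation needed.
let greeting = "hello";
// greeting = 42;      // compile error, even without an annotation

// The catch-all 'any' opts a variable out of type checking entirely.
let anything: any = "hello";
anything = 42; // allowed: 'any' accepts every type

// Interfaces describe object shapes; plain JavaScript has no equivalent.
interface User {
  name: string;
  admin: boolean;
}

function describe(u: User): string {
  return `${u.name}${u.admin ? " (admin)" : ""}`;
}

console.log(describe({ name: "Ada", admin: true })); // "Ada (admin)"
```

Because the checks happen at compile time, the JavaScript that this transpiles to contains no trace of the annotations or the interface, which is exactly why TypeScript adds no runtime capability beyond JavaScript.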


Fostering Smart Cities based on an Enterprise Architecture Approach

Figure 1: Proposed EAF for the +CityxChange project.
The research aims to develop an overall ICT architecture and service-based ecosystem to ensure that service providers of the +CityxChange project can develop, deploy and test their services through integrated and interconnected approaches. For the purpose of this research, a city can be seen as a big enterprise with different departments. With its ability to model the complexities of the real world in a practical way and to help users plan, design, document, and communicate IT and business-oriented issues, the Enterprise Architecture (EA) method has become a popular approach for business and IT system management. The decision support that it offers makes EA an ideal approach for sustainable smart cities, and it is being increasingly used in smart city projects. This approach allows functional components to be shared and reused, and infrastructure and technologies to be standardised. EA can enhance the quality and performance of city processes and improve productivity across a city by integrating and unifying data linkages.



Quote for the day:


"Don't believe what your eyes are telling you. All they show is limitation. Look with your understanding." -- Richard B


Daily Tech Digest - April 19, 2020

Robotic Process Automation: The Ultimate Way Forward for Smart Data Centers

As we enter this new shift in how companies work, every bit of data must be treated and properly used to maximize its value. This would not be possible without cost-effective storage and increasingly capable hardware, digital transformation, and the associated new business models. For quite a while, experts have anticipated that the automation developments introduced in manufacturing plants worldwide would later be extended to data centers. In reality, with the use of Robotic Process Automation (RPA) and machine learning in the data center setting, we are fast approaching this possibility. Human error is by a wide margin the leading cause of network failure, followed by software defects and breakdowns. Often a fix is attempted with almost zero knowledge of how the equipment operates, and only after the downtime has already occurred. The cost impact is much higher as attention is diverted from other issues to deal with the cause of the problem, on top of the impact of the actual network downtime. For an increasingly efficient data center, dependability, cost, and management have to be balanced, and that can be supported by automation.


How The Remote Workforce Impacts GDPR & CCPA Compliance

So to achieve GDPR and CCPA compliance, organizations must ensure not only that explicit policies and procedures are in place for handling personal information, but also the ability to prove that those policies and procedures are being followed and operationally enforced. The new normal of remote workforces is a critical challenge that must be addressed. What has always been needed is gaining immediate visibility into unstructured distributed data across the enterprise, including on laptops and other unstructured data maintained by remote workforces, through the ability to search and report across several thousand endpoints and other unstructured data sources, and return results within minutes instead of days or weeks. The need for such an operational capability provided by best practices technology is further heightened by the urgency of CCPA and GDPR compliance. Solving this collection challenge is X1 Distributed Discovery, which is specially designed to address the challenges presented by remote and distributed workforces. 


Thinking about Microservices

Monolithic Architecture
As the name implies, this architecture is based on services, but it goes further than SOA. Services are typically separated by business capability or sub-domain. Once modules/components are defined, they can be implemented by different teams, each free to use the same or a different technology stack. In this way, individual components can be scaled up when needed and quickly scaled down once the need is over. ... Now we have talked about the benefits of microservices, but that does not mean every application architecture should be built as microservices. Before adopting a microservice architecture, ask yourself: “Do you really need a microservices-based application?” Judge your decision by asking a simple set of questions before moving ahead with microservices. ... Now you have a good overview of microservice architecture, but having said that, a practical implementation still differs in many ways from a monolith; it is really not the same as traditional monolithic architecture.


3 Keys to Efficient Enterprise Microservices Governance

An enterprise normally has a few thousand microservices, with each team having the autonomy to select its own technology stack. Therefore, it's inevitable that an enterprise needs a microservices governance mechanism to avoid building an unmanageable and unstable architecture. Any centralized governance goes against the core principle of microservices architectures, i.e. "provide autonomy and agility to the team." But that also doesn't mean we should not have centralized policies, standards, and best practices that each team should follow. With enterprise-scale integrations with multiple systems and complex operations, the question is: "How do we effectively provide decentralized governance?" We need a paradigm shift in our thinking while implementing a microservices governance strategy. The governance strategy should align with core microservices principles – independent and self-contained services, single responsibility, and cross-functional teams aligned with the business – as well as with policies and best practices.


Artificial intelligence is evolving all by itself


Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI. “While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.” Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.
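A miniature "survival of the fittest" loop conveys the flavor of the approach, though it is strictly a toy: the research described above evolves whole AI programs, not a vector of numbers. All names and values below are invented for illustration.

```typescript
// Evolve a vector of numbers toward a target by keeping the fittest
// candidate each generation and mutating it -- a toy (1+1) evolutionary
// strategy, not the paper's actual program-search method.
const TARGET = [3, 1, 4, 1, 5];

// Fitness: negative squared error, so higher is fitter.
function fitness(candidate: number[]): number {
  return -candidate.reduce((s, v, i) => s + (v - TARGET[i]) ** 2, 0);
}

// Mutation: nudge one randomly chosen coordinate.
function mutate(parent: number[]): number[] {
  const child = [...parent];
  const i = Math.floor(Math.random() * child.length);
  child[i] += Math.random() - 0.5;
  return child;
}

function evolve(generations: number): number[] {
  let best = TARGET.map(() => 0); // start far from the target
  for (let g = 0; g < generations; g++) {
    const child = mutate(best);
    if (fitness(child) > fitness(best)) best = child; // fittest survives
  }
  return best;
}

const result = evolve(5000);
console.log(fitness(result) > -0.5); // true: very close after evolution
```

The AutoML-Zero-style systems in the article replace this numeric genome with mutable sequences of program instructions and evaluate fitness on real learning tasks, but the generate-select-mutate cycle is the same.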


HowTo Secure Distributed Cloud Systems in Enterprise Environments


Rapidly increasing workloads call for improved IT infrastructure scaling in businesses. Cloud resources are designed to be scalable by changing several lines of code and increasing spending. This ease of scaling, however, can lull organizations into scaling too much without considering the side effects. Scaling cloud resources would require an equal expansion in security systems. If an enterprise’s security measures cannot keep up with the rate at which its cloud environment is growing, it’s only going to increase the attack surface for costly breaches. To avoid this problem, enterprises should consider the scalability of their security systems first before expanding cloud environments. Security applications should also be integrated into the environment, not as a separate or external resource, to maintain business continuity. Automation is a must for good distributed cloud management. Again, an increased number of applications and dependencies make it almost impossible for it to be done efficiently by hand. The time saved from automation can then be funneled towards higher-level, strategic work.


3 Steps for Deploying Robotic Process Automation

The first step to adopting RPA is discerning which processes in your organization can, and should, be automated. Look at which tasks require critical thinking and emotional intelligence and add the most value for your customer. Then, automate tasks that are manual, repetitive and prone to error. For example, you could automate processes like collecting data, monitoring and prioritizing emails, and filling out forms -- tedious tasks that would otherwise take hours of your employees' time. We thought critically about how to use RPA to better support our people -- allowing them to dedicate more time to advising customers while bots pulled the information needed to assist in that counsel. ... To note, deploying RPA is not a one-and-done initiative. Adopting RPA is a dynamic process that you need to continually update to support your company's unique and growing business needs. We deployed a timeboxed approach over the course of 20 weeks. Rather than attempt to deploy as many bots as possible, we first established a sound foundation for RPA within our operations from which we could scale in automated measures.


OpenTelemetry Steps up to Manage the Mayhem of Microservices


The goal with OpenTelemetry is not to provide a platform for observability, but rather a standard substrate to collect and convey operational data so it can be used in monitoring and observability platforms, whether open source or commercial. Historically, when an enterprise purchased a systems-monitoring package, all the agents attached to its resources were specific to that provider’s implementation. If a customer wanted to switch providers, the applications and infrastructure would have to be entirely re-instrumented, Sigelman explained. By using OpenTelemetry, users can instrument their systems once, pick the best visualization and analysis products for their workloads, and not worry about lock-in. In addition to Honeycomb and Lightstep, some of the largest vendors in the monitoring field, as well as some of the largest end users, are participating, including Google, Microsoft, Splunk, Postmates, and Uber. The new collector is crucial, explained Honeycomb’s Fong-Jones, in that it narrows the minimum scope of what vendors must support in order to ingest telemetry.
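The decoupling OpenTelemetry standardizes can be shown with a toy sketch. Note this is not the real OpenTelemetry API — just the "instrument once, swap the backend" idea in miniature, with made-up class names:

```python
# Toy sketch of the decoupling OpenTelemetry standardizes (NOT the real
# OpenTelemetry API): instrument once against a neutral interface, then
# swap the exporter/backend without touching application code.

class Span:
    def __init__(self, name):
        self.name = name

class ConsoleExporter:
    def export(self, span):
        return f"console:{span.name}"

class VendorExporter:  # stand-in for any commercial monitoring backend
    def export(self, span):
        return f"vendor:{span.name}"

class Tracer:
    def __init__(self, exporter):
        self.exporter = exporter  # the only vendor-specific piece

    def trace(self, name):
        return self.exporter.export(Span(name))

# The instrumentation call site is identical for both backends:
print(Tracer(ConsoleExporter()).trace("checkout"))  # console:checkout
print(Tracer(VendorExporter()).trace("checkout"))   # vendor:checkout
```

The application code only ever sees the neutral `Tracer`; the re-instrumentation cost Sigelman describes disappears because only the exporter changes.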


Steps to Implementing Voice Authentication and Securing Biometric Data


Fraud prevention is a key driver for implementation, and companies are looking both internally and externally: insider threats can be reduced as staff access privileges are tightened up alongside the introduction of voice biometrics. What are the steps to implementing a voice verification system, and how should the voiceprint data be secured while ensuring compliance? Before implementing, the current system of authentication needs to be analyzed and compared with the desired process. Companies need to answer a number of questions. What is the current authentication process (for example, passwords, PINs, set questions)? How will this process change with voice biometrics? Will voice biometrics replace or extend current authentication steps? That depends on the geography. EU regulations such as PSD2 require strong authentication combining factors such as a biometric and something in your possession, such as an app. It also depends on the motivation. Some banks want voice biometrics to help with compliance; some want it to slash verification time -- for example, if a bank currently asks five questions, it can safely cut that down to only one.


Working With Data in Microservices

A computer program is a set of instructions for manipulating data. Data is stored (and transferred) in a machine-readable, structured way that is easy for programs to process. Every year, new programming languages, frameworks, and technologies emerge to optimize data processing in computer programs. Without proper support from languages or frameworks, developers cannot write programs that easily process data and extract meaningful information from it. Languages such as Python and R have adapted to specialize in data processing jobs, while MATLAB and Octave specialize in numerical computing. However, for microservice development, where programs are network distributed, traditional languages have yet to specialize for those unique needs. Ballerina is a new open-source programming language that provides a unique developer experience for working with data in network-distributed programs.



Quote for the day:


"Leadership is getting someone to do what they don't want to do, to achieve what they want to achieve." -- Tom Landry


Daily Tech Digest - April 18, 2020

10 developer skills on the rise — and five on the decline
Understanding which disciplines and skills are up-and-coming and which are fading can help both companies and developers ensure they have the right skills and knowledge to succeed. And what better way to find that out than to mine developer job postings? Indeed.com analyzed job postings using a list of 500 key technology skill terms to see which ones employers are looking for more these days and which are falling out of favor. Such research has helped identify cutting-edge skills over the past five years, with some previous years’ risers now well established, thanks to explosive growth. Docker, for one, has risen more than 4,000 percent in the past five years and was listed in more than 5 percent of all U.S. tech jobs in 2019. IoT as well has shot up nearly 2,000 percent in the past half-decade, with Ansible — an IT automation, configuration management, and deployment tool — and Kafka — a tool for building real-time data pipelines and streaming apps — showing similarly strong growth. And, of course, the rise of data science has cemented high demand for a range of skills, including artificial intelligence, machine learning, and data analysis.



Simply put, an anytime algorithm is just an algorithm that gradually improves a solution over time and can be interrupted at any time for that solution. For example, if we’re trying to come up with a route from the grocery store to the hospital, an anytime algorithm would continually produce routes that get better and better with more and more time. Basically, when we say a robot is thinking, what we really mean is that the robot is executing an anytime algorithm that produces solutions that improve over time. An anytime algorithm usually has a couple of nice properties. First, an anytime algorithm exhibits monotonicity: it guarantees that solution quality increases or stays the same but never gets worse over time. Next, an anytime algorithm exhibits diminishing returns: the improvement in solution quality is high at the early stages of the computation and low at later stages. To illustrate the behavior of an anytime algorithm, take a look at this photo. In this photo, as computation time increases, solution quality increases as well. It turns out that this behavior is pretty typical of an anytime algorithm.
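A minimal anytime algorithm can be sketched as a route-improvement search (an illustration, not from the article): by keeping the best route found so far and only ever accepting improvements, solution quality is monotone non-decreasing, and the loop can be stopped after any iteration:

```python
import random

# Minimal anytime-algorithm sketch: interruptible route improvement.
# Quality never gets worse because we keep the best route found so far.

def route_length(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def anytime_route_search(dist, start_route, iterations):
    best = list(start_route)
    best_len = route_length(best, dist)
    history = [best_len]                 # quality after each step
    rng = random.Random(0)
    for _ in range(iterations):
        cand = best[:]                   # try swapping two intermediate stops
        i, j = rng.sample(range(1, len(cand) - 1), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if route_length(cand, dist) < best_len:   # accept improvements only
            best = cand
            best_len = route_length(best, dist)
        history.append(best_len)         # monotone: never gets worse
    return best, history

# Toy symmetric distance matrix between 5 locations:
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
route, history = anytime_route_search(dist, [0, 2, 3, 1, 4], 200)
print(history[0], "->", history[-1])  # quality improves, never regresses
```

Most improvements in `history` cluster in the early iterations, which is the diminishing-returns behavior the paragraph describes.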



The pros and cons of AI and ML in DevOps

A criticism made of AI in DevOps is that it can distract engineering teams from the end goal, and from the more human elements of processes that are just as vital to success. “When it comes to tech and DevOps, we’re not talking about ‘strong AI’ or ‘Artificial General Intelligence’ that mimics the breadth of human understanding, but ‘soft’, or ‘weak’ AI, and specifically narrow, task-specific ‘intelligence’,” said Nigel Kersten, field CTO of Puppet. “We’re not talking about systems that think, but really just referring to statistics paired with computational power being applied to a specific task. “Now that sounds practical and useful, but is it useful to DevOps? Sure, but I strongly believe that focusing on this is dangerous and distracts from the actual benefits of a DevOps approach, which should always keep humans front and centre. “I see far too many enterprise leaders looking to ‘AI’ and Robotic Process Automation as a way of dealing with the complexity and fragility of their IT environments instead of doing the work of applying systems thinking, streamlining processes, creating autonomous teams, adopting agile and lean methodologies, and creating an environment of incremental progress and continuous improvement.”


Raspberry Pi sales are rocketing in the middle of the coronavirus outbreak

When the Raspberry Pi Foundation asked for stories about how people are using their Raspberry Pi devices to address COVID-19, one of the most common uses it saw was people showing their 3D-printed face shields, with the printing driven by a Raspberry Pi. "And that's just been individuals; that's what's inspiring - making face shields seems to be a community effort. You have people with a home printer, printing these things once a week and then going to a post office and sending them," he said. "Then you'll have some people sitting in a hack space receiving the parcels, cutting the acetate and the elastic, assembling them into face shields, then sending them to the hospital. It's amazing." Upton suggested this effort could eventually be ramped up to a "massively distributed scale", with the benefit of open source being that, once you have a good design that works, it can be rapidly iterated. In the long term, this could even include ventilators themselves, he said. "One thing we're seeing with this is people finding a niche within which open hardware really works," he said.


Embracing the Journey to Public Cloud

Whatever the sector, digital disruptors have one characteristic in common. They leverage the capabilities of the cloud to the maximum extent possible. A shift in power toward digital-first companies is underway, and the cloud plays a major role in helping these companies establish themselves. Many incumbents are just now taking their first steps on the journey to the cloud, and still working on the challenges involved in migrating their existing legacy applications to more scalable and agile public cloud environments. While they’re engaged in this process, digital insurgents are already exploiting the full potential of the cloud to deliver solutions that meet emerging customer needs associated with today’s evolving lifestyles. In highly regulated industries, such as financial services and healthcare, major brands will not necessarily see an immediate decline in their customer base, nor profits. Nonetheless, digital-first companies are capturing millions of customers annually, and this growth poses a considerable challenge over time. Embracing the journey to public cloud is crucial for legacy enterprises. 


jQuery 3.5 Released, Fixes XSS Vulnerability

Timmy Willison recently released a new version of jQuery. jQuery 3.5 fixes a cross-site scripting (XSS) vulnerability found in jQuery’s HTML parser. It also adds missing methods for the positional selectors :even and :odd in preparation for the complete removal of positional selectors in the next major release (jQuery 4). Masato Kinugawa found the XSS vulnerability in jQuery’s htmlPrefilter method and published an example showing a popup alert window in the form of a challenge. Kinugawa explains that jQuery’s html() function calls the htmlPrefilter() method, which uses a regexp to replace XHTML-like tags with versions that work in HTML ... While jQuery is a mature library, its presence is also very pervasive in websites. The Snyk open source security platform estimated in its State of JavaScript frameworks security report 2019 that 84% of all websites may be impacted by jQuery XSS vulnerabilities. jQuery can be found in 79% of the top 5,000 URLs from Alexa.
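The class of bug can be reproduced with a simplified reconstruction of the rewrite pre-3.5 jQuery applied. The pattern below is abridged from jQuery's `rxhtmlTag`, re-expressed in Python for illustration — it is not jQuery's exact code:

```python
import re

# Simplified sketch of the pre-3.5 htmlPrefilter rewrite (abridged pattern,
# shown only to illustrate the class of bug, not jQuery's exact code).
rxhtml_tag = re.compile(r"<(([a-z][^/\0>\s]*)[^>]*)/>", re.IGNORECASE)

def html_prefilter(html):
    # Expand self-closing XHTML-style tags into open/close pairs.
    return rxhtml_tag.sub(r"<\1></\2>", html)

payload = "<style><style/><img src=x onerror=alert(1)>"
print(html_prefilter(payload))
# The inner <style/> becomes <style></style>, closing the outer <style>
# block early - so the <img> payload escapes the style context and is
# parsed as live HTML. jQuery 3.5 removed this expansion.
```

The attack only needs a sink that passes attacker-influenced markup through `.html()`; the rewrite itself manufactures the executable context.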


What Chrome OS needs to conquer next

When you look at what types of tablets people are actually buying these days, what do you see? Specific data can be somewhat tough to come by, but we can pretty easily assemble a broad overview of what's happening. The big trend, not surprisingly, is that Apple tends to be the most prominent player in tablet sales — with around 36% of the worldwide market, according to IDC's latest stats. But it's what comes next that's particularly interesting for our current purposes. The second-place tablet-seller, again in no huge surprise, is almost always Samsung. But despite all the breathless coverage given to the company's high-priced tablets, IDC's past data indicates the "majority of its shipments" have been "comprised of the lower-end E and A series" devices. Hmmm. The next especially-significant-to-the-U.S. player in the list is Amazon, which uses Android as the base for its own custom tablet operating system. And guess what? ... So why are traditional Android tablets still hanging on and Chromebooks as tablets failing to catch, erm, fire? The answer is right in front of our eyes: When it comes to the non-Apple-associated tablet experience, people seem to be looking for cheap and often thus small-sized options.


Security Lapse Exposed Clearview Source Code

The repository contained Clearview’s source code, which could be used to compile and run the apps from scratch. The repository also stored some of the company’s secret keys and credentials, which granted access to Clearview’s cloud storage buckets. Inside those buckets, Clearview stored copies of its finished Windows, Mac and Android apps, as well as its iOS app, which Apple recently blocked for violating its rules. The storage buckets also contained early, pre-release developer app versions that are typically only for testing, Hussein said. The repository also exposed Clearview’s Slack tokens, according to Hussein, which, if used, could have allowed password-less access to the company’s private messages and communications. Clearview has been dogged by privacy concerns since it was forced out of stealth following a profile in The New York Times, but its technology has gone largely untested and the accuracy of its facial recognition tech unproven. Clearview claims it only allows law enforcement to use its technology, but reports show that the startup courted users from private businesses like Macy’s, Walmart and the NBA.


Zoom in crisis: How to respond and manage product security incidents

Cybersecurity is a discipline of managing risks to security, privacy, and safety. It does not eliminate them, but rather seeks an optimal balance between risk, cost, and usability. That means there will always be a chance of undesired impacts. If managed properly from the outset, the residual risks can be minimized in ways that reduce the negative effects. Crisis response is a specialty that benefits from forethought, experience, leadership, and skill. I have led crisis response teams over the years and been fortunate to be part of strong teams that handled events with speed, efficiency, and professionalism. I have also witnessed complete train wrecks, where the wrong people were attempting to lead, focus was misplaced, valuable time and resources were squandered, legal instruments were applied to hide the truth, communication was confusing, and feeble attempts at leveraging marketing to “spin messages” were preferred over actually addressing issues head-on. Poor leadership is caustic and can result in more problems and a prolonged recovery. Crisis response is a complex dance. It requires a clearly defined objective and an understanding of the opposition, obstacles, and resources.



Blockchain Is a Key Technology for the Development of Internet of Things (IoT) Solutions

The main problem with the IoT is the danger it poses to users’ internet safety. Any device connected to the IoT is open to exploitation by hackers, and there have been multiple news reports in recent years, ranging from hackers exploiting baby monitors to an IoT botnet taking down portions of the internet. By opening ourselves up even more to the digital world, we are putting ourselves more and more at risk of cyberattack and hacking. And there’s no chance we’re stepping back anytime soon - modern internet uptake statistics say it all. We aren’t moving away from technology as the years go by; we are simply surrounding ourselves with more and more of it, putting us at ever greater risk of being compromised or hacked. But rest easy, because blockchain could help make the IoT a whole lot safer in the coming years. Given that IoT applications are, by their very definition, ‘distributed’, it makes sense that distributed ledger technology like blockchain could play a vital role in allowing devices to communicate with each other. Blockchain is, at its core, a cryptographically secured, distributed ledger that allows for the secure transfer of data between parties.
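The "cryptographically secured, distributed ledger" idea reduces, at its smallest, to hash-linked records: each block commits to its predecessor's hash, so tampering with any earlier record breaks every later link. A deliberately minimal sketch (real platforms add consensus, signatures, and replication on top):

```python
import hashlib
import json

# Minimal hash-chain sketch - illustrative only, not a real ledger.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})
    return chain

def verify(chain):
    # Every block must commit to the hash of the block before it.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"sensor": "thermostat", "reading": 21.5})
append_block(chain, {"sensor": "lock", "state": "closed"})
print(verify(chain))               # True: the chain is intact
chain[0]["data"]["reading"] = 99   # tamper with an earlier IoT record
print(verify(chain))               # False: the broken link is detected
```

For IoT, the appeal is exactly this tamper evidence: no single compromised device can silently rewrite the shared history.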



Quote for the day:


"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell


Daily Tech Digest - April 17, 2020

Work from home, phase 2: What comes next for security?

From a security perspective, forward-thinking CISOs are now on to phase 2 focused on situational awareness and risk assessment. This is directly related to the fact that a lot of LAN traffic has been rerouted to WANs and internet connections. The goal? Scope out the new realities of usage patterns and the attack surface. To gain this level of visibility, organizations are deploying endpoint security agents to assess device posture and system-level activities. Think Tanium agents and EDR software from vendors like Carbon Black, CrowdStrike, and Cybereason. Security pros also recognize that employee home networks may be populated with insecure IoT devices, out-of-date family PCs, etc., so I’ve heard of instances where security teams are doing home network scans as well. Finally, there is an increased focus on network traffic monitoring travelling back-and-forth on VPNs or directly out to SaaS providers and the public cloud.  Leading organizations are also ramping up monitoring of cyber-adversaries and threat intelligence, looking for targeted attacks, COVID-19 tactics, techniques, and procedures (TTPs), IoCs, etc. 


.NET for Apache Spark brings enterprise coders and big data pros to the same table


Microsoft has worked hard to make the Spark.NET barrier to entry quite low. Case in point: the .NET for Apache Spark web site provides a big white "Get Started" button that guides developers through the process of installing the framework, creating a sample wordcount application, and running it. It takes the developer through the installation of all required dependencies, the configuration steps, the installation of .NET for Apache Spark itself, and the creation and execution of the sample application. The entire guided procedure is designed to take 10 minutes and assumes little more than a clean machine as the starting environment. In large part it succeeds (with the caveat that I had to research and manually set the SPARK_LOCAL_IP environment variable to localhost to get the sample to run on my Windows machine), and I have to say it's quite a rush to get it working. ... You can deploy your .NET assembly to your Spark cluster and run it on a batch basis from the command line if you wish. But for C# developers, Microsoft has also enabled the very common scenario of working interactively in a Jupyter notebook. That support includes a Jupyter notebook kernel that leverages the C# REPL (read-eval-print loop) technology, which is highly innovative in and of itself. Microsoft provides an F# kernel as well.



CI/CD Pipeline: Demystifying The Complexities

CD can help you discover and address bugs early in the delivery process before they grow into larger problems later. Your team can easily perform additional types of code tests because the entire process has been automated. With the discipline of more testing more frequently, teams can iterate faster with immediate feedback on the impact of changes. This enables teams to drive quality code with a high assurance of stability and security. Developers will know through immediate feedback whether the new code works and whether any breaking changes or bugs were introduced. Mistakes caught early on in the development process are the easiest to fix. CD helps your team deliver updates to customers quickly and frequently. When CI/CD is implemented, the velocity of the entire team, including the release of features and bug fixes, is increased. Enterprises can respond faster to market changes, security challenges, customer needs, and cost pressures. For example, if a new security feature is required, your team can implement CI/CD with automated testing to introduce the fix quickly and reliably to production systems with high confidence.
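A pipeline with the gates described above might look something like this in a GitHub Actions-style workflow file. The job names and `make` targets are hypothetical placeholders, not from the article:

```yaml
# Hypothetical CI/CD pipeline sketch (GitHub Actions-style syntax;
# job names and make targets are illustrative assumptions).
name: ci
on: [push]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run unit tests          # fast, automated feedback on every push
        run: make test
      - name: Run security checks     # automated gate, run on every change
        run: make audit
      - name: Deploy                  # CD: ships only when all gates pass
        if: github.ref == 'refs/heads/main'
        run: make deploy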


Run .Net Core on the Raspberry Pi


The .NET Core framework also runs on ARM systems and can be installed on the Raspberry Pi. I’ve successfully installed the .NET Core framework on the Raspberry Pi 3 and 4. Unfortunately, it isn’t supported on the Raspberry Pi Zero: ARMv7 is the minimum ARM version supported, and the Pi Zero uses an ARMv6 processor. Provided you have a supported system, you can install the framework in a moderate amount of time. The instructions I use here assume that the Pi is accessed over SSH. To begin, you must find the URL to the version of the framework that works on your device. ... The current version of the .NET Core framework is 3.1. The 3.1 downloads can be found here. For running and compiling applications, the installation to use is the .NET Core SDK. (I’ll visit the ASP.NET Core framework in the future.) For the ARM processors there is a 32-bit and a 64-bit download. If you are running Raspbian, use the 32-bit version even if the target is the 64-bit Raspberry Pi: Raspbian is a 32-bit operating system, and since there is no 64-bit version of it yet, the 32-bit .NET Core SDK installation is necessary. Clicking on the link will take you to a page where it will automatically download.


Remote working, now and forevermore?


Companies that realize the benefits of remote work during the current crisis will be more likely to continue it long term, said Zapier’s Foster. These organizations are more likely to have a good remote working strategy in place already, he said, as well as the right tools and processes to make the transition easier.  “In terms of [the Covid-19 crisis] accelerating the movement, I’m fairly optimistic, but I think it will go one of two ways,” he said. “Companies with good communication systems in place that are already used to using things like chat, documents, and videoconferencing systems will see the benefits right away and will perhaps do more remote working in the future.”  The opposite is also true, said Foster. “Companies who don’t have effective systems in place are winging it in a lot of areas right now. They’re going to have a hard time with this sudden transition. They are being thrust into an environment where they have no structure.” In these cases, he said, the “wrong type of management, misaligned culture, and lack of essential tools” could contribute to negative remote work experiences.


Is Edge Computing a Thing?


The term “edge computing” implies a generic capability that is different from cloud computing. While there are often requirements such as data volume reduction, latency or security/compliance concerns that dictate an on-prem component of an application, other than these, does edge computing have unique requirements? It does: Real-time analysis of streaming data demands that we kick the REST + database habit. But there is nothing that is unique to the physical edge. This is great news because it means that “edge applications” can run on cloud infrastructure, or on prem. “Edge computing” is definitely a thing, but it’s about processing streaming data from the edge, as opposed to running the application at the physical edge. ... Real-world relationships between data sources are fluid, and based on computed relationships such as bad braking behavior, the application should respond differently. Finally, effects of changes are immediate, local and contextual (the inspector is notified to stop the truck). The dynamic nature of relationships suggests a graph database – and indeed a graph of related “things” is what is needed.
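The fluid, computed relationships described above can be modeled as a small adjacency structure. This plain-Python sketch (with made-up entity names) stands in for what a graph database would manage at scale:

```python
from collections import defaultdict

# Illustrative sketch: relationships between data sources come and go as
# they are computed from streaming data. A graph database would do this
# at scale; a defaultdict of edge sets shows the shape of the model.

class RelationshipGraph:
    def __init__(self):
        self.edges = defaultdict(set)

    def relate(self, a, rel, b):
        self.edges[a].add((rel, b))

    def unrelate(self, a, rel, b):
        self.edges[a].discard((rel, b))

    def related(self, a, rel):
        return {b for r, b in self.edges[a] if r == rel}

g = RelationshipGraph()
# A computed relationship appears when telemetry flags bad braking...
g.relate("truck-42", "flagged_for", "inspector-7")
print(g.related("truck-42", "flagged_for"))   # the inspector to notify
# ...and dissolves again once the inspection clears.
g.unrelate("truck-42", "flagged_for", "inspector-7")
print(g.related("truck-42", "flagged_for"))   # empty again
```

The key property is that edges are created and destroyed by analysis of the stream, not declared up front in a schema — which is why the article points toward graph storage.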


JSON Is Case Sensitive but You Don't Have to Be

You must have learned capitalization rules in grammar school, but real-world search is not so sensitive to capitalization. Charles de Gaulle uses lower case for the middle "de," Tony La Russa uses upper case for "La," and while there may be etymological reasons for this, it's unlikely your customer service agent will remember them. Databases have a variety of sensitivities. SQL, by default, is case insensitive for identifiers and keywords, but case sensitive for data. JSON is case sensitive for both field names and data. So is N1QL. JSON can have the following. N1QL will select-join-project each field and value as a distinct field and value. ... In this article, we'll discuss dealing with data case sensitivity. Your field references are still case sensitive: if you use the wrong case for a field name, N1QL assumes it is a missing field and assigns the MISSING value to that field.
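JSON's field-name case sensitivity is easy to demonstrate, along with the key-normalization workaround. This is a generic Python sketch of the behavior, not N1QL itself:

```python
import json

# JSON field names are case sensitive: "Name" and "name" are different
# fields, and a wrong-case lookup behaves like a missing field.

doc = json.loads('{"Name": "Charles de Gaulle", "Team": "La Russa"}')
print(doc["Name"])        # found: exact case matches
print(doc.get("name"))    # None - wrong case is treated as missing

def lower_keys(obj):
    """Normalize all field names on ingest so lookups are case-stable."""
    if isinstance(obj, dict):
        return {k.lower(): lower_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [lower_keys(v) for v in obj]
    return obj

normalized = lower_keys(doc)
print(normalized["name"])  # now found, regardless of the source casing
```

Normalizing keys once at ingest is usually cheaper than remembering to match case at every query site — the same trade-off the article explores for N1QL.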


Check Point sounds alarm over double extortion ransomware threat


“Double extortion is a clear and growing ransomware attack trend,” said Check Point threat intelligence manager Lotem Finkelsteen. “We saw a lot of this during Q1 2020. With this tactic, threat actors corner their victims even further by dripping sensitive information into the darkest places in the web to add weight to their ransom demands. “We are especially worried about hospitals having to face this threat. With their focus on coronavirus patients, addressing a double extortion ransomware attack would be very difficult. We are issuing a caution to hospitals and large organisations, urging them to back up their data and educate their staff about the risks of malware-spiked emails.” The first known case of such an attack was in November 2019 on the systems of Allied Universal, a US-based supplier of security and janitorial services to large enterprises, and involved Maze ransomware.


COVID-19 Pandemic Puts Privacy at Crossroads

It's possible to develop a contact tracing system that protects the personal data of those infected with the virus and those who have been around them, says Vanessa Teague, a Melbourne-based cryptologist and CEO of Thinking Cybersecurity. Proximity-based technology, such as Bluetooth, can ensure that precise location data isn't revealed while still enabling an effective warning tool by detecting when people have been in close contact, Teague says. It would also be possible to do this in a way that prevents a government from identifying people, she says. But maximizing privacy could also deprive government of a means to reach out to people and deliver important messages if someone has been exposed, she says. There's also the question of whether such a system should be mandated by a government or voluntary. "You could imagine some kind of a hybrid system, for example, where you might volunteer to notify an epidemiologist or the Department of Health if you found that you had been exposed," Teague says.
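A proximity scheme along the lines Teague describes can be sketched with rotating, key-derived tokens. This is a simplified illustration in the spirit of decentralized designs such as DP-3T — not any official protocol, and all names are assumptions:

```python
import hashlib
import hmac

# Simplified sketch of privacy-preserving proximity tokens. Devices
# broadcast short-lived IDs derived from a secret daily key, so an
# observer can't link broadcasts to a person or a precise location.

def ephemeral_id(daily_key: bytes, interval: int) -> str:
    mac = hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256)
    return mac.hexdigest()[:16]  # short rotating token

# If a user tests positive, they publish only their daily key; other
# phones locally recompute the tokens and match against what they heard.
def heard_infected(daily_key: bytes, heard_tokens: set, intervals) -> bool:
    return any(ephemeral_id(daily_key, i) in heard_tokens for i in intervals)

key = b"secret-daily-key"
heard = {ephemeral_id(key, 7)}  # a token overheard via Bluetooth
print(heard_infected(key, heard, range(96)))           # contact detected
print(heard_infected(b"other-key", heard, range(96)))  # no match
```

Because the matching happens on the user's own device, the health authority never learns who was near whom — which is precisely the privacy/outreach trade-off the article raises.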


Government investigates perceptions about data sharing of health and social care


According to the National Data Guardian for health and adult social care, Fiona Caldicott, planning for this had started “long before the outbreak of the Coronavirus Covid-19 pandemic, so it’s not a reaction to it”. “However, we are already thinking about how the knowledge and attitudes of our public participants may have been affected,” she said in an article published on 14 April 2020. Caldicott went on to explain that the NHS and social care services hold a lot of information about individuals that can be used for a number of purposes, including identifying patterns and developing new ways to predict, diagnose or treat illness. However, she noted, these organisations don’t always have the expertise to do so, and collaboration can be enabled by sharing data, citing the government’s efforts to encourage data-driven research and innovation. “Organisations which hold health and care data already assess public benefit or interest when deciding whether to allow it to be used to develop new medicines and technologies,” she said.



Quote for the day:


"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg


Daily Tech Digest - April 16, 2020

How is the role of the CTO evolving?
Besides looking for the next technologies for a company to use, a CTO that succeeds in the modern landscape must have money management skills. Possessing them is particularly important when leading a startup due to the typical financial constraints associated with a newer company. However, keeping a careful watch on tech-related spending is a crucial part of a CTO’s role, regardless of the age of the company. Gartner predicted that worldwide IT spending will grow by 3.7% in 2020. Nevertheless, the potential to invest in new technology doesn’t exist if the company has a perpetually maxed-out or mismanaged budget. Whether being wise about expenditures means investigating new cloud providers to find more reasonable rates, or switching to a yearly billed plan for a team file storing tool to save money compared to the monthly subscription, CTOs should remain alert for practical ways to slash spending. Staying involved in keeping costs down gives the chief technology officer more freedom to invest in new technologies at the right time.


Fueling your company’s urge to surge


Most companies we look at that don’t surge are obsessed with their competitors. They compare pricing, products, marketing initiatives, and, if they can, costs and operating models. They seek to stay one step ahead (or at least not more than a step or two behind) their competitors. They are completely focused on market share. But companies that surge don’t think like that. They don’t look at their competitors, at least not quite so obsessively. They look at their customers, or potential future customers, and focus on how they can provide better value for the customer — profitably. ... Many established companies are risk averse. They adopt risk management processes, report quarterly to shareholders, and are careful not to disturb their market positioning in the eyes of investors. Having the courage to bet the company on a new product or service that fundamentally transforms a business and enables it to grow multiple times larger is rare. Even if the courage can be found, the bet is hard to pull off given the questions that boards and lenders will ask. However, we have found that companies in ASEAN markets can often make bet-the-company decisions quickly because family control means that a close-knit group drives the key decisions.


4 innovations we owe to open source

Open source (and its kissing cousin, free software) guarantees three legal rights: "Free-of-charge use of the software, access to and modification of the source code, and [the ability] to pass on the source code and a binary copy." In turn, the license specifies the obligations the downstream recipient of the software must perform if she modifies the software and then distributes it. ... Today we're running into trouble because that "specificity" Riehle highlights is becoming, well, too specific, with developers blocking certain classes of organizations from using their software. In our fractious and fraught world, this is understandable. Unfortunately, it's not open source, given that non-discrimination is a cardinal virtue of open source licensing. Even so, this debate is far from over, which proves to be one of the great things about open source: Community. We don't always get along, but we're usually willing to talk about it. If legal innovation is the "brain" of open source, community is the "heart." While collaborative development didn't start with open source, open source has done more to codify the practice than anyone or anything else.


What Is an API Gateway?

As with any addition to your stack, API gateways introduce another piece to manage. They need to be hosted, scaled, and managed just like the rest of your software. Since all requests and responses must pass through the gateway, it adds an additional point of failure and increases the latency of each call by adding a few extra "hops" across the network. Due to its centralized location, it is easy to gradually increase the complexity inside the gateway until it becomes a "black box" of code, which makes maintenance harder. ... Gateways let clients access services, but what happens when services need to talk to one another? That's where service mesh comes in. A service mesh is a layer focused on service-to-service communication. You'll see gateway communication described as North-South (from clients to the gateway) and service mesh communication described as East-West (between services). Traditionally it has made sense to use a service mesh and an API gateway together. The gateway is the entry point for your clients' requests, and the service mesh allows your services to rely on one another before passing responses back through the gateway.
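The gateway's core routing job — one entry point dispatching client requests to internal services — can be sketched in a few lines. The path prefixes and service names are made up, and real gateways add auth, rate limiting, and TLS termination on top:

```python
# Toy sketch of a gateway's core job: route North-South traffic to
# internal services by path prefix. Illustrative names only.

def users_service(path):
    return {"service": "users", "path": path}

def orders_service(path):
    return {"service": "orders", "path": path}

ROUTES = {
    "/users": users_service,
    "/orders": orders_service,
}

def gateway(request_path):
    for prefix, service in ROUTES.items():
        if request_path.startswith(prefix):
            # One North-South hop: client -> gateway -> backing service
            return service(request_path[len(prefix):] or "/")
    return {"error": 404}

print(gateway("/orders/17"))   # routed to the orders service
print(gateway("/unknown"))     # no route: the gateway answers 404 itself
```

Even this toy version shows why the gateway is both useful and risky: every route, policy, and fallback accumulates in this one dispatch function, which is exactly how the "black box" problem starts.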


Defending aviation from cyber attack

Defending aviation from cyber attack
In developing their cyber security strategy, aviation businesses need to understand their supply chain and ensure their own cyber security is robust and reliable. They need to know who has access to which systems, and make sure that vendors have the right practices and procedures in place to deal with the cyber threat. There are several steps the industry can take to secure infrastructure, mitigate risk and ensure resilience in the face of the growing cyber threat.  To support businesses in this endeavour, the Civil Aviation Authority in the UK recently launched the ASSURE framework. Developed in collaboration with the Council for Registered Ethical Security Testers (CREST), the ASSURE scheme is designed to enable the aviation industry, including airlines, airports and air navigation service providers, to manage cyber security risk without compromising aviation security or resilience. Everything must be done to limit the threat and make it as difficult as possible for attackers to breach the organisation’s security systems. Achieving this cannot be done without IT teams and OT teams working together on cyber security.


HSBC survey indicates less than ten percent of Hong Kong residents are cyber smart

The survey shows that higher-scoring respondents tend to be more affluent and show greater engagement with a variety of digital activities. Despite their higher degree of risk exposure, they also exhibit better awareness and increased caution on cyber risks. As a whole, respondents showed a high degree of concern about data privacy, although half of them are willing to connect through smart devices for better convenience. With regard to the use of financial services, 72 per cent of respondents felt uncomfortable linking their bank account with a third-party app. When it comes to cross-generational analysis, Gen Z received the highest scores in knowledge and attitude, but the lowest in behavior. For Gen X, support is needed to help them build tech-related knowledge, such as how to handle privacy settings, two-factor authentication (2FA) and biometric authentication (BA). Among Gen Y respondents, slightly more of them pay attention to suspicious activity alerts, but they have to address some knowledge and behavioral gaps.


Smart and edge data centers for e-governance services

e-governance
A smart data center can make an e-governance system agile and responsive, while fostering a learning environment and combining best practices, predictive analytics and IT automation. It taps into the power of artificial intelligence (AI) and analytics to achieve positive operational outcomes, optimize cooling and overall data center performance, maximize customer experience, and lower risk and IT costs. By identifying the root cause of issues and their business impact in minutes, a smart data center can lower the Total Cost of Ownership (TCO) by up to 20% and decrease IT response time by up to 30%, while providing fast, accurate, contextual and actionable insights on a proactive basis. Moreover, as smart cities unleash the full power of Big Data, IoT, Cloud and streaming services, there is a need for real-time collection and analysis of data on utilities, traffic, security and infrastructure to enable city officials to respond to problems faster than ever before. Hence, there is no room for latency in e-governance services. End users and devices demand anywhere, anytime access to applications and services, and this creates the need for setting up edge data centers for efficient delivery of e-governance services.


Comparing 4 ML Classification Techniques


In machine learning and artificial intelligence, an important type of problem is called classification. This article describes and compares four of the most commonly used classification techniques: logistic regression, perceptron, support vector machine (SVM), and single-hidden-layer neural networks. The goal of a classification problem is to predict the value of a variable that can take on discrete values. In binary classification the goal is to predict a variable that can be one of just two possible values, for example predicting the gender of a person (male or female). In multi-class classification the goal is to predict a variable that can take three or more possible values, for example, predicting a person's state of residence (Alabama, Alaska, . . . Wyoming). Note that a regression problem is one where the goal is to predict a numeric value, for example the annual income of a person. There are dozens of ML classification techniques, and most of these techniques have several variations. One way to mentally organize ML classification techniques is to place each into one of three categories: math equation classification techniques, distance and probability classification techniques, and tree classification techniques. This article explains four of the most common math equation classification techniques. A future PureAI article will explain and compare common distance and probability techniques.


"Instinctively, we feel that greater accuracy is better and all else should be subjected to this overriding goal," said Patrick Bangert, CEO of Algorithmic Technologies. "This is not so. While there are a few tasks for which a change in the second decimal place in accuracy might actually matter, for most tasks this improvement will be irrelevant--especially given that this improvement usually comes at a heavy cost."  I get that, but I must confess that I wasn't getting it very well a few years ago, when I was in charge of a financial institution's credit card operation and one of our board members was denied credit at the checkout in a home improvement store because an analytics system issued a false positive. Data science, IT, and business leaders responsible for analytics face the same quandary: To what degree of accuracy must the algorithm operating on the data perform for an analytics program to be declared "ready" for production? The answer depends on the nature of the problem that you're trying to solve. If you're formulating a vaccine, you want to achieve results that exceed 95%. If you're predicting a general trend, the low 90s or even the 80s might suffice.
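The credit-card anecdote hints at why a single accuracy number can mislead. A quick sketch, with invented counts, of how a model can score 99% accuracy while still wrongly declining real customers:

```python
# Invented counts for illustration only. Treat "decline" as the
# positive class: tp = correct declines, fp = good customers wrongly
# declined, tn = correct approvals, fn = bad applications approved.

def confusion_metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    false_positive_rate = fp / (fp + tn)
    decline_precision = tp / (tp + fp)   # share of declines that were right
    return accuracy, false_positive_rate, decline_precision

# 10,000 applications: 900 correct declines, 9,000 correct approvals,
# 10 missed bad applications -- and 90 good customers wrongly declined.
acc, fpr, prec = confusion_metrics(tp=900, fp=90, tn=9000, fn=10)
```

Here accuracy is 99%, yet nearly one in ten declines hits a legitimate customer, which is exactly the kind of gap between "accurate" and "ready for production" the passage describes.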


10 Ways AI Can Improve Digital Transformation's Success Rate

10 Ways AI Can Improve Digital Transformation's Success Rate
AI is revolutionizing how organizations digitally transform their security strategies as threats to customers' identities and personal data continue to proliferate. It's rare to hear any digital transformation strategy prioritize security. BMC's ADE framework is an exception, as it recognizes that securing customers' identities is a core part of delivering a positive customer experience. Organizations are turning to the Zero Trust Security (ZTS) framework to secure every network, cloud, and on-premise platform, operating system, and application across their supply chain and production networks. Chase Cunningham, Principal Analyst at Forrester, is the leading authority on Zero Trust Security, and his recent video, Zero Trust In Action, is worth watching to learn more about how manufacturers can secure their IT infrastructures. You can find his blog here. There are several fascinating companies to watch in this area, including MobileIron, which has created a mobile-centric, zero-trust enterprise security framework manufacturers are relying on today.



Quote for the day:


“Five years down the line, all of our devices will have an emotion chip. We won’t remember when we couldn't just frown at a device” -- Rana El Kaliouby