Daily Tech Digest - April 18, 2020

10 developer skills on the rise — and five on the decline
Understanding which disciplines and skills are up-and-coming and which are fading can help both companies and developers ensure they have the right skills and knowledge to succeed. And what better way to find that out than to mine developer job postings? Indeed.com analyzed job postings using a list of 500 key technology skill terms to see which ones employers are looking for more these days and which are falling out of favor. Such research has helped identify cutting-edge skills over the past five years, with some previous years’ risers now well established, thanks to explosive growth. Docker, for one, has risen more than 4,000 percent in the past five years and was listed in more than 5 percent of all U.S. tech jobs in 2019. IoT has likewise shot up nearly 2,000 percent in the past half-decade, with Ansible — an IT automation, configuration management, and deployment tool — and Kafka — a tool for building real-time data pipelines and streaming apps — showing similarly strong growth. And, of course, the rise of data science has since cemented high demand for a range of skills, including artificial intelligence, machine learning, and data analysis.



Simply put, an anytime algorithm is just an algorithm that gradually improves a solution over time and can be interrupted at any time for that solution. For example, if we’re trying to come up with a route from the grocery store to the hospital, the anytime algorithm would continually produce routes that get better and better with more and more time. Basically, when we say a robot is thinking, what we really mean is that the robot is executing an anytime algorithm that produces solutions that improve over time. An anytime algorithm usually has a couple of nice properties. First, an anytime algorithm exhibits monotonicity: it guarantees that solution quality increases or stays the same but never gets worse over time. Next, an anytime algorithm exhibits diminishing returns: the improvement in solution quality is high at the early stages of the computation and low at later stages. To illustrate this behavior, picture a plot of solution quality against computation time: as computation time increases, solution quality increases as well, steeply at first and then more gradually. It turns out that this behavior is pretty typical of an anytime algorithm.
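The two properties above are easy to see in code. Here is a minimal Python sketch of an anytime algorithm (a toy hill-climbing tour improver; the function names are hypothetical, not from the article): it keeps a best-so-far answer, can be stopped after any iteration and still return a valid solution, and its recorded solution quality never gets worse.

```python
import math
import random

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def anytime_tour(points, iterations, seed=0):
    """Toy anytime algorithm: repeatedly try reversing a random segment of
    the best tour found so far. Interrupt after any iteration and the
    current best is still a valid tour (interruptibility), and the history
    of best lengths never increases (monotonicity)."""
    rng = random.Random(seed)
    best = list(range(len(points)))
    best_len = tour_length(points, best)
    history = [best_len]
    for _ in range(iterations):
        i, j = sorted(rng.sample(range(len(points)), 2))
        candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        cand_len = tour_length(points, candidate)
        if cand_len < best_len:  # accept only improvements, never regress
            best, best_len = candidate, cand_len
        history.append(best_len)
    return best, history
```

Plotting `history` against iteration count typically shows exactly the curve the article describes: large improvements early, then diminishing returns.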



The pros and cons of AI and ML in DevOps

A criticism that has been made of AI in DevOps is that it can distract engineering teams from the end goal, and from the more human elements of processes that are just as vital to success. “When it comes to tech and DevOps, we’re not talking about ‘strong AI’ or ‘Artificial General Intelligence’ that mimics the breadth of human understanding, but ‘soft’, or ‘weak’ AI, and specifically narrow, task-specific ‘intelligence’,” said Nigel Kersten, field CTO of Puppet. “We’re not talking about systems that think, but really just referring to statistics paired with computational power being applied to a specific task. “Now that sounds practical and useful, but is it useful to DevOps? Sure, but I strongly believe that focusing on this is dangerous and distracts from the actual benefits of a DevOps approach, which should always keep humans front and centre. “I see far too many enterprise leaders looking to ‘AI’ and Robotic Process Automation as a way of dealing with the complexity and fragility of their IT environments instead of doing the work of applying systems thinking, streamlining processes, creating autonomous teams, adopting agile and lean methodologies, and creating an environment of incremental progress and continuous improvement.”


Raspberry Pi sales are rocketing in the middle of the coronavirus outbreak

When the Raspberry Pi Foundation asked for stories about how people are using their Raspberry Pi devices to address COVID-19, one of the most common uses it saw was people showing their 3D-printed face shields, driven by a Raspberry Pi. "And that's just been individuals, that's what's inspiring - making face shields seems to be a community effort. You have people with a home printer, printing these things once a week and then going to a post office and sending them," he said. "Then you'll have some people sat in a hack space receiving the parcels, cutting the acetate and the elastic, assembling them into face shields then sending them to the hospital. It's amazing." Upton suggested this effort could eventually be ramped up to a "massively distributed scale", with the benefit of open source being that, once you have a good design that works, it can be rapidly iterated. In the long term, this could even include the ventilators themselves, he said. "One thing we're seeing with this is people finding a niche within which open hardware really works," he said.


Embracing the Journey to Public Cloud

Whatever the sector, digital disruptors have one characteristic in common. They leverage the capabilities of the cloud to the maximum extent possible. A shift in power toward digital-first companies is underway, and the cloud plays a major role in helping these companies establish themselves. Many incumbents are just now taking their first steps on the journey to the cloud, and still working on the challenges involved in migrating their existing legacy applications to more scalable and agile public cloud environments. While they’re engaged in this process, digital insurgents are already exploiting the full potential of the cloud to deliver solutions that meet emerging customer needs associated with today’s evolving lifestyles. In highly regulated industries, such as financial services and healthcare, major brands will not necessarily see an immediate decline in their customer base or their profits. Nonetheless, digital-first companies are capturing millions of customers annually, and this growth poses a considerable challenge over time. Embracing the journey to public cloud is crucial for legacy enterprises.


jQuery 3.5 Released, Fixes XSS Vulnerability

Timmy Willison recently released a new version of jQuery. jQuery 3.5 fixes a cross-site scripting (XSS) vulnerability found in jQuery’s HTML parser. It also adds missing methods for the positional selectors :even and :odd in preparation for the complete removal of positional selectors in the next major jQuery release (jQuery 4). Masato Kinugawa found the cross-site scripting (XSS) vulnerability in the htmlPrefilter method of jQuery, and published an example showing a popup alert window in the form of a challenge. Kinugawa explains that jQuery’s html() function calls the htmlPrefilter() method, which uses a regexp replacing XHTML-like tags with versions that work in HTML ... While jQuery is a mature library, its presence is also very pervasive in websites. The Snyk open source security platform estimated in its State of JavaScript frameworks security report 2019 that 84% of all websites may be impacted by jQuery XSS vulnerabilities. jQuery can be found in 79% of the top 5,000 URLs from Alexa.


What Chrome OS needs to conquer next

When you look at what types of tablets people are actually buying these days, what do you see? Specific data can be somewhat tough to come by, but we can pretty easily assemble a broad overview of what's happening. The big trend, not surprisingly, is that Apple tends to be the most prominent player in tablet sales — with around 36% of the worldwide market, according to IDC's latest stats. But it's what comes next that's particularly interesting for our current purposes. The second-place tablet-seller, again in no huge surprise, is almost always Samsung. But despite all the breathless coverage given to the company's high-priced tablets, IDC's past data indicates the "majority of its shipments" have been "comprised of the lower-end E and A series" devices. Hmmm. The next especially-significant-to-the-U.S. player in the list is Amazon, which uses Android as the base for its own custom tablet operating system. And guess what? ... So why are traditional Android tablets still hanging on and Chromebooks as tablets failing to catch, erm, fire? The answer is right in front of our eyes: When it comes to the non-Apple-associated tablet experience, people seem to be looking for cheap and often thus small-sized options.


Security Lapse Exposed Clearview Source Code

The repository contained Clearview’s source code, which could be used to compile and run the apps from scratch. The repository also stored some of the company’s secret keys and credentials, which granted access to Clearview’s cloud storage buckets. Inside those buckets, Clearview stored copies of its finished Windows, Mac and Android apps, as well as its iOS app, which Apple recently blocked for violating its rules. The storage buckets also contained early, pre-release developer app versions that are typically only for testing, Hussein said. The repository also exposed Clearview’s Slack tokens, according to Hussein, which, if used, could have allowed password-less access to the company’s private messages and communications. Clearview has been dogged by privacy concerns since it was forced out of stealth following a profile in The New York Times, but its technology has gone largely untested and the accuracy of its facial recognition tech unproven. Clearview claims it only allows law enforcement to use its technology, but reports show that the startup courted users from private businesses like Macy’s, Walmart and the NBA.


Zoom in crisis: How to respond and manage product security incidents

Cybersecurity is a discipline in managing the risks to security, privacy, and safety. It does not eliminate them, but rather seeks to find an optimal balance between the risks, costs, and usability. That means there will always be a chance for undesired impacts. If managed properly from the outset, the minimization of those residual risks can also be handled in ways that reduce the negative effects. Crisis response is a specialty that benefits from forethought, experience, leadership, and skills. I have led crisis response teams over the years and been fortunate to be part of strong teams that handled events with speed, efficiency, and professionalism. I have also witnessed complete train-wrecks where the wrong people were attempting to lead, focus was misplaced, valuable time and resources were squandered, legal instruments were applied to hide the truth, communication was confusing, and feeble attempts leveraging marketing to “spin messages” were preferred over actually addressing issues head-on. Poor leadership is caustic and can result in more problems and a prolonged recovery. Crisis response is a complex dance. It requires a clearly defined objective to pursue and an understanding of the opposition, obstacles, and resources.



Blockchain Is a Key Technology for the Development of Internet of Things (IoT) Solutions

The main problem with the IoT is the danger it poses to users’ internet safety. Any device connected to the IoT is open to exploitation by hackers, and there have been multiple news reports in recent years ranging from hackers exploiting baby monitors to an IoT botnet taking down portions of the Internet. By opening ourselves up even more to the digital world, we are putting ourselves more and more at risk of cyberattack and hacking. And there’s no chance we’re taking a step back anytime soon - modern internet uptake statistics say it all. We aren’t moving away from technology as the years go by, we are simply surrounding ourselves with more and more of it, putting us at more and more risk of being compromised or hacked. But rest easy, because apparently blockchain could help make the IoT a whole lot safer for us in the coming years. Given that IoT applications are, by their very definition, ‘distributed’, it makes sense that distributed ledger technology like blockchain could play a vital role in allowing devices to communicate with each other. Blockchain is, at its core, a cryptographically secured, distributed ledger that allows for the secure transfer of data between parties.



Quote for the day:


"Increasingly, management_s role is not to organize work, but to direct passion and purpose." -- Greg Satell


Daily Tech Digest - April 17, 2020

Work from home, phase 2: What comes next for security?

From a security perspective, forward-thinking CISOs are now on to phase 2, focused on situational awareness and risk assessment. This is directly related to the fact that a lot of LAN traffic has been rerouted to WANs and internet connections. The goal? Scope out the new realities of usage patterns and the attack surface. To gain this level of visibility, organizations are deploying endpoint security agents to assess device posture and system-level activities. Think Tanium agents and EDR software from vendors like Carbon Black, CrowdStrike, and Cybereason. Security pros also recognize that employee home networks may be populated with insecure IoT devices, out-of-date family PCs, etc., so I’ve heard of instances where security teams are doing home network scans as well. Finally, there is an increased focus on monitoring network traffic travelling back and forth on VPNs or directly out to SaaS providers and the public cloud. Leading organizations are also ramping up monitoring of cyber-adversaries and threat intelligence, looking for targeted attacks, COVID-19 tactics, techniques, and procedures (TTPs), IoCs, etc.


.NET for Apache Spark brings enterprise coders and big data pros to the same table


Microsoft has worked hard to make the Spark.NET barrier-to-entry quite low. Case in point: The .NET for Apache Spark Web site provides a big white "Get Started" button that guides developers through the process of installing the framework, creating a sample wordcount application and running it. It takes the developer through the installation of all required dependencies, configuration steps, installation of .NET for Apache Spark itself, and the creation and execution of the sample application. The entire guided procedure is designed to take 10 minutes, and assumes little more than a clean machine as the starting environment. In large part it succeeds (with the caveat that I had to research and manually set the SPARK_LOCAL_IP environment variable to localhost to get the sample to run on my Windows machine), and I have to say it's quite a rush to get it working. ... You can deploy your .NET assembly to your Spark cluster and run it on a batch basis from the command line if you wish. But, for C# developers, Microsoft has also enabled the very common scenario of working interactively in a Jupyter notebook. That support includes a Jupyter notebook kernel that leverages the C# REPL (read-eval-print loop) technology, which is highly innovative in and of itself. Microsoft provides an F# kernel as well.



CI/CD Pipeline: Demystifying The Complexities

CD can help you discover and address bugs early in the delivery process before they grow into larger problems later. Your team can easily perform additional types of code tests because the entire process has been automated. With the discipline of more frequent testing, teams can iterate faster with immediate feedback on the impact of changes. This enables teams to drive quality code with a high assurance of stability and security. Developers will know through immediate feedback whether the new code works and whether any breaking changes or bugs were introduced. Mistakes caught early on in the development process are the easiest to fix. CD helps your team deliver updates to customers quickly and frequently. When CI/CD is implemented, the velocity of the entire team, including the release of features and bug fixes, is increased. Enterprises can respond faster to market changes, security challenges, customer needs, and cost pressures. For example, if a new security feature is required, your team can implement CI/CD with automated testing to introduce the fix quickly and reliably to production systems with high confidence.
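The gating behavior described above, where later stages run only when earlier ones pass, can be sketched in a few lines of Python. This is a toy model of a pipeline, and the stage names are hypothetical:

```python
def run_pipeline(stages):
    """Run named pipeline stages in order, stopping at the first failure
    so a broken change never reaches later stages such as deployment."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # fail fast: remaining stages are skipped
    return results

# Hypothetical stages: a failing test gate blocks the deploy step.
pipeline = [
    ("build", lambda: True),
    ("unit-tests", lambda: False),  # a bug is caught here
    ("deploy", lambda: True),
]
```

Running `run_pipeline(pipeline)` stops after the failed unit-tests stage, so deploy never executes; that early, automated feedback is exactly the benefit the paragraph above describes.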


Run .NET Core on the Raspberry Pi


The .NET Core framework also runs on ARM systems. It can be installed on the Raspberry Pi. I’ve successfully installed the .NET Core framework on the Raspberry Pi 3 and 4. Unfortunately it isn’t supported on the Raspberry Pi Zero; ARMv7 is the minimum ARM version supported, and the Pi Zero uses an ARMv6 processor. Provided you have a supported system you can install the framework in a moderate amount of time. The instructions I use here assume that the Pi is accessed over SSH. To begin you must find the URL to the version of the framework that works on your device. ... The current version of the .NET Core framework is 3.1. The 3.1 downloads can be found here. For running and compiling applications the installation to use is the .NET Core SDK. (I’ll visit the ASP.NET Core framework in the future.) For the ARM processors there is a 32-bit and a 64-bit download. If you are running Raspbian, use the 32-bit version even if the target is the 64-bit Raspberry Pi; Raspbian is a 32-bit operating system. Since there is no 64-bit version of Raspbian yet, the 32-bit .NET Core SDK installation is necessary. Clicking on the link will take you to a page where it will automatically download.


Remote working, now and forevermore?


Companies that realize the benefits of remote work during the current crisis will be more likely to continue it long term, said Zapier’s Foster. These organizations are more likely to have a good remote working strategy in place already, he said, as well as the right tools and processes to make the transition easier.  “In terms of [the Covid-19 crisis] accelerating the movement, I’m fairly optimistic, but I think it will go one of two ways,” he said. “Companies with good communication systems in place that are already used to using things like chat, documents, and videoconferencing systems will see the benefits right away and will perhaps do more remote working in the future.”  The opposite is also true, said Foster. “Companies who don’t have effective systems in place are winging it in a lot of areas right now. They’re going to have a hard time with this sudden transition. They are being thrust into an environment where they have no structure.” In these cases, he said, the “wrong type of management, misaligned culture, and lack of essential tools” could contribute to negative remote work experiences.


Is Edge Computing a Thing?


The term “edge computing” implies a generic capability that is different from cloud computing. While there are often requirements such as data volume reduction, latency or security/compliance concerns that dictate an on-prem component of an application, other than these, does edge computing have unique requirements? It does: Real-time analysis of streaming data demands that we kick the REST + database habit. But there is nothing that is unique to the physical edge. This is great news because it means that “edge applications” can run on cloud infrastructure, or on prem. “Edge computing” is definitely a thing, but it’s about processing streaming data from the edge, as opposed to running the application at the physical edge. ... Real-world relationships between data sources are fluid, and based on computed relationships such as bad braking behavior, the application should respond differently. Finally, effects of changes are immediate, local and contextual (the inspector is notified to stop the truck). The dynamic nature of relationships suggests a graph database – and indeed a graph of related “things” is what is needed.


JSON Is Case Sensitive but You Don't Have to Be

You must have learned capitalization rules in grammar school, but real-world search is not so sensitive to capitalization. Charles de Gaulle uses lower case for the middle "de," Tony La Russa uses upper case for "La," and there may be etymological reasons for it, but it's unlikely your customer service agent will remember. Databases have a variety of sensitivities. SQL, by default, is case insensitive for identifiers and keywords, but case sensitive for data. JSON is case sensitive in both field names and data. So is N1QL. JSON can have the following. N1QL will select-join-project each field and value as a distinct field and value. ... In this article, we'll discuss dealing with data case sensitivity. Your field references are still case sensitive. If you use the wrong case for the field name, N1QL assumes this is a missing field and assigns MISSING value to that field.
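The point about case-sensitive field names is easy to demonstrate in Python, which, like N1QL, treats differently cased JSON keys as distinct fields. The `ci_get` helper below is a hypothetical illustration of case-insensitive lookup, not part of N1QL:

```python
import json

doc = json.loads('{"Name": "Charles de Gaulle", "name": "charles de gaulle"}')
# JSON field names are case sensitive: "Name" and "name" are distinct fields.
assert doc["Name"] != doc["name"]

def ci_get(obj, field):
    """Case-insensitive lookup: return the value of the first key that
    matches `field` regardless of case, or None for a missing field
    (much as N1QL assigns MISSING when a field name's case is wrong)."""
    for key, value in obj.items():
        if key.lower() == field.lower():
            return value
    return None
```

`ci_get(doc, "NAME")` finds "Charles de Gaulle" even though no key is spelled that way, while a plain `doc["NAME"]` lookup would raise a KeyError.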


Check Point sounds alarm over double extortion ransomware threat


“Double extortion is a clear and growing ransomware attack trend,” said Check Point threat intelligence manager Lotem Finkelsteen. “We saw a lot of this during Q1 2020. With this tactic, threat actors corner their victims even further by dripping sensitive information into the darkest places in the web to add weight to their ransom demands. “We are especially worried about hospitals having to face this threat. With their focus on coronavirus patients, addressing a double extortion ransomware attack would be very difficult. We are issuing a caution to hospitals and large organisations, urging them to back up their data and educate their staff about the risks of malware-spiked emails.” The first known case of such an attack was in November 2019 on the systems of Allied Universal, a US-based supplier of security and janitorial services to large enterprises, and involved Maze ransomware.


COVID-19 Pandemic Puts Privacy at Crossroads

It's possible to develop a contact tracing system that protects the personal data of those infected with the virus and those who have been around them, says Vanessa Teague, a Melbourne-based cryptologist and CEO of Thinking Cybersecurity. Proximity-based location technology, such as Bluetooth, can ensure that precise location data isn't revealed while enabling an effective warning tool by knowing where people have been in close contact, Teague says. Also, it would be possible to do that so that a government couldn't identify people, either, she says. But maximizing privacy could also deprive government of a means to reach out to people and deliver important messages if someone has been exposed, she says. There's also a question of whether such a system should be mandated by a government or voluntary. "You could imagine some kind of a hybrid system, for example, where you might volunteer to notify an epidemiologist or the Department of Health if you found that you had been exposed," Teague says.


Government investigates perceptions about data sharing of health and social care


According to the National Data Guardian for health and adult social care, Fiona Caldicott, planning for this had started “long before the outbreak of the Coronavirus Covid-19 pandemic, so it’s not a reaction to it”. “However, we are already thinking about how the knowledge and attitudes of our public participants may have been affected,” she said in an article published on 14 April 2020. Caldicott then went on to explain that the NHS and social care services hold a lot of information about individuals that can be used for a number of purposes, including identifying patterns and developing new ways to predict, diagnose or treat illness. However, she noted these organisations don’t always have the expertise to do so and that collaboration can be enabled by sharing data, citing the government’s efforts to encourage data-driven research and innovation. “Organisations which hold health and care data already assess public benefit or interest when deciding whether to allow it to be used to develop new medicines and technologies,” she said.



Quote for the day:


"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg


Daily Tech Digest - April 16, 2020

How is the role of the CTO evolving?
Besides looking for the next technologies for a company to use, a CTO that succeeds in the modern landscape must have money management skills. Possessing them is particularly important when leading a startup due to the typical financial constraints associated with a newer company. However, keeping a careful watch on tech-related spending is a crucial part of a CTO’s role, regardless of the age of the company. Gartner predicted that worldwide IT spending will grow by 3.7% in 2020. Nevertheless, the potential to invest in new technology doesn’t exist if the company has a perpetually maxed-out or mismanaged budget. Whether being wise about expenditures means investigating new cloud providers to find more reasonable rates, or switching to a yearly billed plan for a team file storing tool to save money compared to the monthly subscription, CTOs should remain alert for practical ways to slash spending. Staying involved in keeping costs down gives the chief technology officer more freedom to invest in new technologies at the right time.


Fueling your company’s urge to surge


Most companies we look at that don’t surge are obsessed with their competitors. They compare pricing, products, marketing initiatives, and, if they can, costs and operating models. They seek to stay one step ahead (or at least not more than a step or two behind) their competitors. They are completely focused on market share. But companies that surge don’t think like that. They don’t look at their competitors, at least not quite so obsessively. They look at their customers, or potential future customers, and focus on how they can provide better value for the customer — profitably. ... Many established companies are risk averse. They adopt risk management processes, report quarterly to shareholders, and are careful not to disturb their market positioning in the eyes of investors. Having the courage to bet the company on a new product or service that fundamentally transforms a business and enables it to grow multiple times larger is rare. Even if the courage can be found, the bet is hard to pull off given the questions that boards and lenders will ask. However, we have found that companies in ASEAN markets can often make bet-the-company decisions quickly because family control means that a close-knit group drives the key decisions.


4 innovations we owe to open source

Open source (and its kissing cousin, free software) guarantees three legal rights: "Free-of-charge use of the software, access to and modification of the source code, and [the ability] to pass on the source code and a binary copy." In turn, the license specifies the obligations the downstream recipient of the software must perform if she modifies the software and then distributes it. ... Today we're running into trouble because that "specificity" Riehle highlights is becoming, well, too specific, with developers blocking certain classes of organizations from using their software. In our fractious and fraught world, this is understandable. Unfortunately, it's not open source, given that non-discrimination is a cardinal virtue of open source licensing. Even so, this debate is far from over, which proves to be one of the great things about open source: Community. We don't always get along, but we're usually willing to talk about it. If legal innovation is the "brain" of open source, community is the "heart." While collaborative development didn't start with open source, open source has done more to codify the practice than anyone or anything else.


What Is an API Gateway?

As with any addition to your stack, API gateways introduce another piece to manage. They need to be hosted, scaled, and managed just like the rest of your software. Since all requests and responses must pass through the gateway, they add an additional point of failure and increase the latency of each call by adding a few extra "hops" across the network. Due to their centralized location, it becomes easy to gradually increase the complexity inside the gateway until it becomes a "black box" of code. This makes maintaining the code harder. ... Gateways let clients access services, but what happens when services need to talk to one another? That's where service mesh comes in. A service mesh is a layer focused on service-to-service communication. You'll see gateway communication described as North-South (from clients to the gateway) and service mesh communication described as East-West (between services). Traditionally it made sense to use a service mesh and API gateway together. The gateway would be the entry point for your clients' requests, and then the service mesh would allow your services to rely on one another before passing responses back through the gateway.
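A gateway's core job, a single entry point that routes each request to a backend service, can be sketched in a few lines of Python. This is a toy illustration; the class and route names are hypothetical:

```python
class ApiGateway:
    """Toy API gateway: one entry point that maps request paths to backend
    services, giving a single place to add cross-cutting concerns
    (auth, logging, rate limiting) for every call."""

    def __init__(self):
        self.routes = {}

    def register(self, prefix, handler):
        """Attach a backend service handler to a path prefix."""
        self.routes[prefix] = handler

    def handle(self, path, request):
        # Every request takes this extra "hop"; the trade-off described
        # above is added latency and a single point of failure.
        for prefix, handler in self.routes.items():
            if path.startswith(prefix):
                return handler(request)
        return {"status": 404}

gateway = ApiGateway()
gateway.register("/users", lambda req: {"status": 200, "service": "users"})
gateway.register("/orders", lambda req: {"status": 200, "service": "orders"})
```

A request to "/users/42" is forwarded to the users service, while an unregistered path gets a 404 from the gateway itself, without any backend ever being called.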


Defending aviation from cyber attack

In developing their cyber security strategy, aviation businesses need to understand their supply chain and ensure their own cyber security is robust and reliable. They need to know who has access to which systems, and make sure that vendors have the right practices and procedures in place to deal with the cyber threat. There are several steps the industry can take to secure infrastructure, mitigate risk and ensure resilience in the face of the growing cyber threat.  To support businesses in this endeavour, the Civil Aviation Authority in the UK recently launched the ASSURE framework. Developed in collaboration with the Council for Registered Ethical Security Testers (CREST), the ASSURE scheme is designed to enable the aviation industry, including airlines, airports and air navigation service providers, to manage cyber security risk without compromising aviation security or resilience. Everything must be done to limit the threat and make it as difficult as possible for attackers to breach the organisation’s security systems. Achieving this cannot be done without IT teams and OT teams working together on cyber security.


HSBC survey indicates less than ten percent of Hong Kong residents are cyber smart

The survey shows that higher scoring respondents tend to be more affluent, and show greater engagement with a variety of digital activities. Despite their higher degree of risk exposure, they also exhibit better awareness and increased caution on cyber risks. As a whole, respondents showed a high degree of concerns about data privacy, although half of them are willing to connect through smart devices for better convenience. With regard to the use of financial services, 72 per cent of respondents felt uncomfortable in linking their bank account with a third party app. When it comes to cross-generational analysis, Gen Z received the highest scores in knowledge and attitude, but the lowest in behavior. For Gen X, support is needed to help them build tech-related knowledge, such as how to handle privacy settings, two-factor authentication (2FA) and biometric authentication (BA). Among Gen Y respondents, slightly more of them pay attention to suspicious activity alerts, but they have to address some knowledge and behavioral gaps.


Smart and edge data centers for e-governance services

A smart data center can make an e-governance system agile and responsive, while fostering a learning environment and combining best practices, predictive analytics and IT automation. It taps into the power of artificial intelligence (AI) and analytics to achieve positive operational outcomes, optimize cooling and overall data center performance, maximize customer experience, and lower risk and IT costs. While identifying the root cause of issues and their impact on business in minutes, a smart data center can lower the Total Cost of Ownership (TCO) by up to 20% and decrease IT response time by up to 30%, besides providing fast, accurate, contextual, actionable insights on a proactive basis. Moreover, as smart cities unleash the full power of Big Data, IoT, Cloud and streaming services, there is a need for real-time collection and analysis of data on utilities, traffic, security and infrastructure to enable city officials to respond to problems faster than ever before. Hence, there is no room for latency in e-governance services. End users and devices demand anywhere, anytime access to applications and services, and this creates the need for setting up edge data centers for efficient delivery of e-governance services.


Comparing 4 ML Classification Techniques


In machine learning and artificial intelligence, an important type of problem is called classification. This article describes and compares four of the most commonly used classification techniques: logistic regression, perceptron, support vector machine (SVM), and single hidden layer neural networks. The goal of a classification problem is to predict the value of a variable that can take on discrete values. In binary classification the goal is to predict a variable that can be one of just two possible values, for example predicting the gender of a person (male or female). In multi-class classification the goal is to predict a variable that can take on one of three or more possible values, for example predicting a person's state of residence (Alabama, Alaska, . . . Wyoming). Note that a regression problem is one where the goal is to predict a numeric value, for example the annual income of a person. There are dozens of ML classification techniques, and most of them have several variations. One way to mentally organize ML classification techniques is to place each into one of three categories: math equation techniques, distance and probability techniques, and tree techniques. This article explains four of the most common math equation classification techniques; a future PureAI article will compare common distance and probability techniques.
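To make the binary classification setup concrete, here is a minimal from-scratch sketch of one of the four techniques named above, the perceptron. The data, function names, and hyperparameters are invented for illustration; real work would use a library such as scikit-learn.

```python
# Minimal perceptron for binary classification (labels -1 / +1).
# Illustrative sketch only, not a production classifier.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) predicts the label."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:            # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy linearly separable data: two features, two classes.
X = [(1.0, 1.0), (2.0, 1.5), (-1.0, -1.0), (-2.0, -0.5)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
```

The other three techniques share this "math equation" shape: each learns a set of weights, and prediction is a function of a weighted sum of the inputs.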


"Instinctively, we feel that greater accuracy is better and all else should be subjected to this overriding goal," said Patrick Bangert, CEO of Algorithmic Technologies. "This is not so. While there are a few tasks for which a change in the second decimal place in accuracy might actually matter, for most tasks this improvement will be irrelevant--especially given that this improvement usually comes at a heavy cost." I get that, but I must confess I didn't get it very well a few years ago, when I was in charge of a financial institution's credit card operation and one of our board members was denied credit at the checkout in a home improvement store because an analytics system issued a false positive. Data science, IT, and business leaders responsible for analytics face the same quandary: To what degree of accuracy must the algorithm operating on the data perform for an analytics program to be declared "ready" for production? The answer depends on the nature of the problem that you're trying to solve. If you're formulating a vaccine, you want to achieve results that exceed 95%. If you're predicting a general trend, the low 90s or even the 80s might suffice.


10 Ways AI Can Improve Digital Transformation's Success Rate

AI is revolutionizing how organizations digitally transform their security strategies as threats to customers' identities and personal data continue to proliferate. It's rare to hear any digital transformation strategy prioritize security. BMC's ADE framework is an exception, as it recognizes that securing customers' identities is a core part of delivering a positive customer experience. Organizations are turning to the Zero Trust Security (ZTS) framework to secure every network, cloud, and on-premises platform, operating system, and application across their supply chain and production networks. Chase Cunningham, Principal Analyst at Forrester, is a leading authority on Zero Trust Security, and his recent video, Zero Trust In Action, is worth watching to learn how manufacturers can secure their IT infrastructures. You can find his blog here. There are several fascinating companies to watch in this area, including MobileIron, which has created a mobile-centric, zero-trust enterprise security framework that manufacturers are relying on today.



Quote for the day:


“Five years down the line, all of our devices will have an emotion chip. We won’t remember when we couldn't just frown at a device” -- Rana El Kaliouby


Daily Tech Digest - April 15, 2020

Weekly health check of ISPs, cloud providers and conferencing services

Outages for ISPs globally were down 9.13% during the week of March 30 from the week before, whereas U.S. outages were down 16.7%, dropping from 120 to 100. Worldwide, ISP outages were also down, from 252 to 229. Public cloud outages rose worldwide from 22 to 25, and the U.S. saw one outage, up from zero the previous week. Outages for collaboration apps rose dramatically, increasing more than 260% globally and more than 500% in the U.S. over the week before; the actual numbers rose from eight to 29 worldwide, and from four to 25 in the U.S. ... During the week of April 6–April 12, service outages for ISPs, cloud providers, and conferencing services dropped overall. They went from 298 down to 177 globally (40%, a six-week low), and in the U.S. dropped from 129 to 72 (44%). Globally, ISP outages were down from 229 to 141 (38%), and in the U.S. were down from 100 to 56 (44%). Cloud provider outages were also down overall, from 25 to 19 (24%), ThousandEyes says, but jumped from one to six (500%) in the U.S., the highest rate of increase in seven weeks. Even so, the U.S. total was relatively low. “Again, cloud providers are doing quite well,” ThousandEyes says.
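The percentages quoted above can be reproduced from the raw outage counts; a quick sketch (the function name is my own):

```python
# Reproduce the week-over-week changes from the ThousandEyes counts quoted above.

def pct_change(before, after):
    """Signed percentage change from one week to the next."""
    return (after - before) / before * 100

# Total outages: 298 -> 177 globally, 129 -> 72 in the U.S.
global_drop = pct_change(298, 177)   # roughly -40.6%, the "40%" in the text
us_drop = pct_change(129, 72)        # roughly -44.2%, the "44%" in the text
```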


A Smattering of Thoughts About Applying Site Reliability Engineering principles

Google goes into much more detail on the principles of “on-call” rotation work compared to project-oriented work in Life of An On-Call Engineer. Of particular relevance is the mention of capping the time Site Reliability Engineers spend on purely operational work at 50%, to ensure the remaining time is spent building solutions that improve automation and service reliability proactively rather than reactively. In addition, the challenge of juggling reactive operational work with getting in the zone on project work with code can limit the ability to address the toil of continual fixes. Google's SRE Handbook also addresses this, noting that you should definitely not mix operational work and project work on the same person at the same time. Instead, whoever is on call for that reactive work should focus fully on it, and not try to do project work at the same time; trying to do both results in frustration and fragmented effort. This is refreshing, as I know I've felt the pressure of needing to deliver a project while also feeling the pressure of reactive operational issues taking precedence.


Coronavirus: Zoom user credentials for sale on dark web


Analysis of the database found that alongside personal accounts belonging to consumers, there were also corporate accounts registered to banks, consultancies, schools and colleges, hospitals, and software companies, among many others. IntSights said that whilst some of these accounts only included an email and password, others included Zoom meeting IDs, names and host keys. “The more specific and targeted the databases, the more it's going to cost you. A database of random usernames and passwords is probably going to go pretty cheap because it's harder to utilise,” Maor told Computer Weekly. “But if somebody says they have a database of Zoom users in the UK the price is going to get much higher because it's much more specific and much easier to use.” Whilst it is not uncommon at all for usernames and passwords to be shared or sold, Maor said that some of the discussions that followed had been intriguing, with the sale spawning a number of different posts and threads discussing different approaches to targeting Zoom users, many of them focused on credential stuffing attacks.


Remote work will be forever changed post-COVID-19

The problem with these two competing visions is that they assume we'll return to an extreme version of a pre-COVID-19 scenario, either doubling down on traditional remote working arrangements, or spending even more time traveling and sitting in offices, working the way we always did before the virus. I believe that the key lessons many of us will take from this period of enforced remote work are less about location, and more about time and work management. One thing I noticed, and confirmed with several colleagues early in my COVID-19 experience, was that productive video conferences were mentally more exhausting than an equivalent in-person meeting. A two-hour workshop over videoconference had the same mental drain as an all-day in-person affair, especially for the presenters and facilitators. The medium seems to force more intense interactions and to require more planning to orchestrate successfully. Collaborating in the same physical space was the pre-COVID-19 norm because it was easy.


Comparing Three Approaches to Multi-Cloud Security Management


IaC is a second approach to multi-cloud management. It arose in response to utility computing and second-generation web frameworks, which created widespread scaling problems for small businesses. Administrators took a pragmatic approach: they modeled their multi-cloud infrastructures with code, and were therefore able to write management tools that operated much like standard software. IaC sits between the other approaches on this list and represents a compromise solution. It gives more fine-grained control over cloud management and security processes than a CMP, especially when used in conjunction with SaaS security vendors whose software can apply a consistent security layer to a software model of your cloud infrastructure. This is important because SaaS is growing rapidly in popularity, with 86% of organizations expected to have SaaS meeting the vast majority of their software needs within two years. On the other hand, IaC requires a greater level of knowledge and vigilance than either CMPs or cloud-native approaches.
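The core IaC idea, modeling infrastructure as data so that policy can be applied programmatically, can be sketched in a few lines. This is a toy illustration, not any particular IaC tool; the resource names and fields are invented.

```python
# Toy IaC sketch: infrastructure described as data, so a uniform security
# policy can be checked and applied across clouds in one place.

infrastructure = [
    {"name": "web-bucket",  "cloud": "aws",   "encrypted": False},
    {"name": "logs-bucket", "cloud": "azure", "encrypted": True},
    {"name": "db-volume",   "cloud": "gcp",   "encrypted": False},
]

def enforce_encryption(resources):
    """Return a copy of the model with encryption switched on everywhere."""
    return [{**r, "encrypted": True} for r in resources]

# Audit first, then apply the policy to the model.
noncompliant = [r["name"] for r in infrastructure if not r["encrypted"]]
secured = enforce_encryption(infrastructure)
```

In a real IaC workflow the secured model would then be reconciled against the live environments, which is exactly the "consistent security layer" advantage described above.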


DevOps implementation is often unsuccessful. Here's why

The primary feature of DevOps is, to a certain extent, the automation of the software development process. Continuous integration and continuous delivery (CI/CD) principles are the cornerstones of this concept, and as you likely know, are very reliant on tools. Tools are awesome, they really are. They can bring unprecedented speed to the software delivery process, managing the code repository, testing, maintenance, and storage elements with relatively seamless ease. And if you’re managing a team of developers in a DevOps process, these tools and ​the people who use them are a vital piece of the puzzle​ in shipping quality software. However, while robots might take all our jobs and imprison us someday, they are definitely not there yet. Heavy reliance on tools and automation leaves a window wide open for errors. Scans and tests may not pick up everything, code may go unchecked, and that presents enormous quality (not to mention, security) issues down the track. An attacker only needs one back door to exploit to steal data, and forgoing the human element in quality and security control can have disastrous consequences.


Videoconferencing quick fixes need a rethink when the pandemic abates

A tier down from its immersive telepresence big brother is the multipurpose conference room. Inside offices, companies have designated multipurpose rooms, equipped more minimally with videoconferencing equipment. Instead of spending big bucks on devoting an entire room, with all of the bells and whistles, to an immersive telepresence system, why not outfit a conference room with enough cameras, screens and microphones to offer a good virtual meeting experience, while still leaving the room to be used for general meetings? These multipurpose rooms generally cost a few thousand dollars to outfit with a camera, a microphone array and maybe some integrated digital whiteboards, and a PC or iPad as a control mechanism, Kerravala says. It's a lot more affordable, but a multipurpose conference room still is bandwidth intensive. And it's likely to be tapping bandwidth on the shared network – instead of having its own pipe, as an immersive room would – and that needs to be taken into consideration in network capacity planning.



Information Age roundtable: Harnessing the power of data in the utilities sector

When it comes to data usage across the company, a major aspect to be considered is the trust that is placed in employees. For Graeme Wright, chief digital officer, manufacturing, utilities and services at Fujitsu UK, “data is only trusted with certain people. Sometimes, it goes across organisational boundaries, because of the third-party suppliers that people are using, and I don’t know if people have really been incentivised to exploit the value of that data.” Wright went on to explain that the field force “need a different method of interacting to make sure that the data flows freely from them into the actual centre so we can actually analyse it and understand what’s going on”. Steven Steer, head of data at Ofgem, also weighed in on this issue: “This is really central to the energy sector’s agenda over the last year or so. The Energy Data Task Force, an independent task force, published its findings in June, and one of the main findings was the presumption that data is open to all, not just within your own organisation.”



At first glance, low-code and cloud-native don’t seem to have much to do with each other — but many of the low-code vendors are still making the connection. After all, microservices are chunks of software code, right? So why hand-code them if you could take a low-code approach to craft your microservices? Not so fast. Microservices generally focus on back-end functionality that simply doesn’t lend itself to the visual modeling context that low-code provides. Furthermore, today’s low-code tools tend to center on front-end application creation (often for mobile apps), as well as business process workflow design and automation. Bespoke microservices are unlikely to be on this list of low-code sweet spots. It's clear from the definition of microservices above that they are code-centric and thus might not lend themselves to low-code development. However, how organizations assemble microservices into applications is a different story. Some low-code vendors would have you believe that you can think of microservices as LEGO blocks that you can assemble into applications. Superficially, this LEGO metaphor is on the right track – but the devil is in the details.


Graph Knowledge Base for Stateful Cloud-Native Applications

As a rule, stateless applications do not persist any client application state between requests or events. “Statelessness” decouples cloud-native services from client applications to achieve the desired isolation. The tenets of microservice and serverless architecture expressly prohibit retention of session state or global context. However, while the state doesn’t reside in the container, it still has to live somewhere. After all, a stateless function takes state as input. Application state didn’t go away; it moved. The trade-off is that state, and with it any global context, must be re-loaded with every execution. The practical consequence of statelessness is a spike in network usage, which results in chatty, bandwidth- and I/O-intensive inter-process communications. This comes at a price, in terms of both increased cloud service expenditures and latency and performance impacts on client applications. Distributed computing had already weakened the bonds of data gravity as a long-standing design principle, forcing applications to integrate with an ever-increasing number of external data sources. Cloud-native architecture flips the script completely: data ships to functions.
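The "state re-loaded with every execution" trade-off can be sketched as a pure, stateless handler plus an external store. The store dict stands in for a real database or cache; all names are illustrative.

```python
# Sketch of a stateless handler: (old state, event) -> new state.
# Nothing persists in the process; state lives in an external store and is
# shipped in and out on every request, which is the network cost described above.

def add_to_cart(state, item):
    """Pure, stateless request handler."""
    return {**state, "cart": state.get("cart", []) + [item]}

store = {}  # stands in for an external store (e.g. a database or cache)

def handle_request(session_id, item):
    state = store.get(session_id, {})        # state re-loaded on every request
    new_state = add_to_cart(state, item)
    store[session_id] = new_state            # state shipped back out
    return new_state

handle_request("s1", "milk")
result = handle_request("s1", "bread")
```

Every call pays the round trip to the store, which is exactly why statelessness shows up as chatty, I/O-intensive inter-process traffic.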



Quote for the day:


"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis


Daily Tech Digest - April 14, 2020

Microsoft and Google delay online authentication change


Companies are gradually replacing this method with more modern protocols. Microsoft and Google are both shifting to OAuth 2.0, which uses tokens to authenticate applications with online services, and gives them an expiry date. That way, an application stays authorised for a predefined period, minimising the need to exchange credentials. This also makes it easier to implement multi-factor authentication (MFA). Microsoft announced that it would switch off Basic Authentication in its Exchange Web Services (EWS) API for Office 365 back in July 2018. It planned to turn off support for the feature entirely on 13 October 2021. At the same time, it also advised developers to begin moving away from this API, instead using Microsoft Graph, which is its newer API for accessing back-end cloud services such as Exchange Online. It also expanded those plans in September 2019, announcing that it would turn off Basic Authentication in Exchange Online for Exchange ActiveSync (EAS), POP, IMAP and Remote PowerShell.
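The key difference from Basic Authentication is that a token carries its own expiry, so the application stays authorised for a predefined period without re-sending credentials. A highly simplified sketch of that idea (this is not a real OAuth 2.0 flow; the field names are invented):

```python
# Simplified illustration of the token-with-expiry idea behind OAuth 2.0.
import time

def issue_token(lifetime_seconds, now=None):
    """Mint a token that is valid for a fixed lifetime."""
    now = time.time() if now is None else now
    return {"access_token": "opaque-token-value", "expires_at": now + lifetime_seconds}

def is_valid(token, now=None):
    """The app stays authorised until expiry; no credential exchange needed."""
    now = time.time() if now is None else now
    return now < token["expires_at"]

tok = issue_token(3600, now=0)   # valid for one hour from t=0
```

In a real deployment the token is issued by the identity provider after the user authenticates (possibly with MFA), which is why this model makes MFA easier to layer in than per-request passwords.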



How to achieve agile DevOps: a disruptive necessity for transformation

Organisations have to accept that the transition to agile DevOps is going to be disruptive, but it is entirely necessary for effective and sustainable transformation. According to Erica Langhi, EMEA senior solutions architect at Red Hat, “the best way to mitigate this disruption is through transparency and openness — businesses need to make the benefits of this transition clear to their teams. After that, they should encourage their developers and operations teams to look at how other parts of the business are working.” After this, leaders will need to look at the company’s culture “and start making the tweaks necessary to promote collaboration and communication between teams; this isn’t optional, as nine out of ten organisations that try to make the change to DevOps without changing their culture and structure will fail,” she advised. Overall, to create maximally agile DevOps, organisations “should also invest in a few other technologies and cultural changes. DevOps in fact brings together people, processes, and technology for better efficiency. ...” Langhi continued.


Defining the Database Requirements of Dynamic JAMstack Applications


To understand why multi-region distribution is desirable, let’s revisit why static websites on CDNs are incredibly fast. A CDN is fast to deliver your content because it contains copies of your content at different locations. When content is requested from the CDN from a specific location, it will attempt to deliver that content from the closest location to the requestor. In order to get an idea of how much that matters, take a glance at the Zeit CDN status page which shows you the difference in latency between your current location and other locations. By deploying our applications to a CDN, our pages automatically load from the closest location to the user, which results in low loading latencies. And low latencies result in a great user experience. In order to keep this user experience, the dynamic data that will be loaded from our APIs has to exhibit low latencies as well, and the best way to achieve this is to use a distributed database.
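The CDN behaviour described above, serving each request from the closest copy, reduces to picking the region with the lowest measured latency. A trivial sketch (the latency figures are made up):

```python
# Pick the serving region the way a CDN routes a request: lowest latency wins.
# Latencies in milliseconds; values are invented for the example.

latency_ms = {
    "us-east": 12,
    "eu-west": 85,
    "ap-south": 210,
}

def closest_region(latencies):
    """Return the region with minimum round-trip latency to the requester."""
    return min(latencies, key=latencies.get)

best = closest_region(latency_ms)
```

A distributed database applies the same principle to the dynamic data: replicate it to many regions so that the API read, like the static page, comes from a nearby copy.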


Talking Digital Future: Blockchain Technology

Indeed, the United Nations World Food Program, for example, is serving an incredibly large number of people. And we want the highest amount of good resources to go to those people — so they are. The U.N. did a first round of experimentation on blockchain so it could track the flow of aid from source to destination, and it was very successful. Now, it’s in the second or third round of expanding it. I think I like this technology because it directly and positively impacts human beings. This is probably one of my favorite cases at the moment. Another one is the real estate registries. Very often these are paper-based. I think about New Orleans when Hurricane Katrina came a few years ago. The city was flooded, and it was a complete disaster. It was a terrible tragedy. When the water subsided and the city was getting back on its feet, lots of houses were destroyed and the city had to find the titles for the homes. Well, they were destroyed because they were in boxes and the papers were in the basement of a building that was flooded. So, they had a lot of difficulty for a very long time identifying which properties belonged to who, and then how they could sell the properties.


Edge computing vs. cloud computing: What's the difference?

Real-time performance is one of the main reasons for using an edge computing architecture, but not the only one. Edge computing can also help prevent overloading network backbones by processing more data locally and sending to the cloud only data that needs to go to the cloud. There could also be security, privacy, and data sovereignty advantages to keeping more data close to the source rather than shipping it to a centralized location. There are plenty of challenges ahead for edge computing, however. A recent Gartner report, How to Overcome Four Major Challenges in Edge Computing, suggests “through 2022, 50 percent of edge computing solutions that worked as proofs of concept (POCs) will fail to scale for production use.” Those who pursue the promise of edge computing need to be prepared to tackle all the usual issues associated with technologies that still need to prove themselves – best practices for edge system management, governance, integration, and so on have yet to be defined.


Enterprises regard the cloud as critical for innovation, but struggle with security


Only a little over half (58%) said their organization has clear guidelines and policies in place for developers building applications and operating in the public cloud. And of those, 25% said these policies are not enforced, while 17% confirmed their organization lacks clear guidelines entirely. “Enterprises believe they must choose between innovation and security—a false choice we see manifested in the results of this report, as well as in conversations with our customers and prospects,” said Brian Johnson, CEO at DivvyCloud. “Only 35% of respondents do not believe security impedes developers’ self-service access to best-in-class cloud services to drive innovation—meaning 65% believe they must choose between giving developers self-service access to tools that fuel innovation and remaining secure. “The truth is, security issues in the cloud can be avoided. By employing the necessary people, processes, and systems at the same time as cloud adoption, enterprises can reap the benefits of the cloud while ensuring continuous security and compliance.”


Developers: Getting ahead is about more than programming languages


From a career perspective, IT professionals will often reach a point where they have to choose between becoming a technical specialist or moving down the management path. But even for those on the management path it is incredibly important that they stay up to date with what is new in tech as it becomes all too easy to fall out of step, he said. Gill says another trend within the IT industry is for companies to become more customer-focused in how they develop their products and services. In light of this, ambitious IT professionals must develop an understanding of the clients' needs as well as the intricacies of the code. "They should discuss requirements directly with them where possible or else with their points of contact within their own organisation, such as sales or business development. Having direct feedback and input from clients means the IT professionals will have a far greater chance of delivering something that will meet their needs," says Gill. Malcolm Lowe, head of IT at Transport for Greater Manchester (TfGM), is another tech chief who believes focusing on the needs of the user is the key to career-development success. He advises other IT professionals to couch everything they do in business outcomes and user needs – because, at the end of the day, that's what you're providing.


How to build a DevSecOps strategy

Almost every DevOps guide talks about implementing the practice at a cultural level, and the same is true with DevSecOps. Developers tend to be incredibly creative and talented people who take a lot of pride in what they do. Get out of their way and allow them to grow. Think of it as future-proofing your security design through a more holistic approach. That’s precisely why the first step on this list is training and educating team members. When given a chance, they will work to further their skills and experience. They will also take everything they learn and incorporate it into the code and content they’re creating. It’s all about giving them the tools they need to succeed, which will only further improve the end product. ... Most likely, there are projects and segments already in place, and your teams created existing code with a different method. Don’t look at this as a negative or obstacle. It provides an excellent opportunity to revisit the foundations of a system to implement the protective armour we’re discussing.


As cybersecurity concerns grow, so does need for security professionals


For people who already work in IT but choose to refocus their energies in the area of cybersecurity, the switch can be lucrative. Job-market analytics company Burning Glass Technologies has been tracking the cybersecurity job market since 2013. In its June 2019 report, it states that the number of cybersecurity job postings has grown 94% since 2013, compared to only 30% for IT positions overall. This growth is three times faster than the overall IT market. Burning Glass’s research shows that cybersecurity jobs account for 13% of all IT jobs. On average, however, cybersecurity jobs take 20% longer to fill than other IT jobs and pay 16% more. This works out to an average of $12,700 more per year. According to the U.S. Bureau of Labor Statistics, the average salary for an information security analyst is $98,350. Analysts plan and carry out security measures to protect an organization’s computer networks and systems. “Their responsibilities continually expand as the number of cyberattacks increases,” Li says.


What Is A Data Passport: Building Trust, Data Privacy And Security In The Cloud


Data passport technology is based on classic mainframe technology, which today can include full encryption of your data, ensuring that every piece of data is encrypted. When each piece of data is encrypted, even if it is stolen, it can’t be used. Data passports allow you to extend encryption technology that used to be available only on a physical mainframe to cloud computing. Each piece of data in the cloud has a passport assigned to it, and with the passport you can verify whether the data is being misused, whether the passport is still valid, and so on. These data passports also give companies the ability to protect data and revoke access to it at any time, across a multi-cloud environment. Because the data carries its passport — and its encryption — with it, it helps enterprises secure their data wherever it travels. And that's the most significant development that makes data passports so unique and important: the protection and enforcement of data privacy and security are available on and off any given platform, because they travel with the data.
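As a conceptual sketch of data travelling with verifiable metadata: the data owner seals each record together with a validity period, and any holder of the key can later check integrity, expiry, and revocation. Real data passports use full encryption; here an HMAC seal stands in purely for illustration, and every name below is invented.

```python
# Conceptual "data passport" sketch: data + metadata sealed by the owner.
# Not a real product's scheme; HMAC integrity stands in for full encryption.
import hashlib
import hmac
import time

SECRET = b"issuer-signing-key"   # held by the data owner, not the cloud provider

def issue_passport(data: bytes, valid_until: float):
    """Seal the data together with its validity window."""
    payload = data + str(valid_until).encode()
    seal = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"valid_until": valid_until, "seal": seal, "revoked": False}

def verify(data: bytes, passport, now=None):
    """Check tamper-evidence, expiry, and revocation wherever the data travels."""
    now = time.time() if now is None else now
    payload = data + str(passport["valid_until"]).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(passport["seal"], expected)
            and not passport["revoked"]
            and now < passport["valid_until"])

record = b"customer-record"
pp = issue_passport(record, valid_until=1000.0)
```

Flipping the `revoked` flag is the sketch's stand-in for the revoke-access-at-any-time capability described above.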



Quote for the day:



"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner