Daily Tech Digest - October 24, 2020

How will self-driving cars affect public health?

The researchers created a conceptual model to systematically identify the pathways through which AVs can affect public health. The proposed model summarizes the potential changes in transportation after AV implementation into seven points of impact: transportation infrastructure; land use and the built environment; traffic flow; transportation mode choice; transportation equity; jobs related to transportation; and traffic safety. These changes in transportation are then mapped to potential health impacts. In the most optimistic view, AVs are expected to prevent 94% of traffic crashes by eliminating driver error. But AV operation introduces new safety issues, such as sensors that fail to detect objects, misinterpretation of data, and poorly executed responses, which can jeopardize the reliability of AVs and cause serious safety consequences in an automated environment. Another safety consideration is riskier behavior by users who over-rely on AVs, for example, neglecting seatbelts because of a false sense of safety. AVs also have the potential to shift people in urban areas from public transportation and active transportation, such as walking and biking, to private vehicles, which can result in more air pollution and greenhouse gas emissions, and they could cause the loss of driving jobs in the public transit and freight transport industries.


Now’s The Time For Long-Term Thinking

For most financial institutions, the strategic planning process for 2021 is far different from any in the past. Rather than an iterative adjustment to the previous year's plans, this year's planning must take into account a level of change in technology, competition, consumer behavior, society and many other areas that is far less defined than before. The uncertainty about the future requires combining a solid strategic foundation with sensing capabilities and the ability to respond to threats and opportunities as quickly as possible. For many banks and credit unions, this will require organizational restructuring, reallocating resources, revamping processes, finding new outside partners and building a culture that supports flexibility in planning in a way never required before. There is also the need to build a marketplace-sensing capability across the entire organization and from a broader array of sources, including customers, internal staff (especially customer-facing employees), suppliers, strategic partners, research organizations, boards of directors and even competitors. Gathering the insights is only half the battle; there must also be a centralized place to consolidate and analyze them.


Rapid Threat Evolution Spurs Crucial Healthcare Cybersecurity Needs

Cybercriminals have been actively taking advantage of the global pandemic, with an increase in cyberattacks, phishing, spear-phishing, and business email compromise (BEC) attempts. And on the healthcare side of things, NCSA Executive Director Kelvin Coleman said it’s not a huge surprise. Even in the early 1900s during the Spanish flu pandemic, folks would put articles in newspapers to take advantage of the crisis with hoaxes and scams, Coleman explained. “Bad actors take advantage of crises,” he said. “Hackers are being aggressive, leveraging targeted emails and phishing attempts.” Josh Corman, cofounder of IAmTheCavalry.org and DHS CISA visiting researcher, stressed that when a provider is forced into EHR downtime and has to divert patient care, it’s even more nightmarish during a pandemic. In Germany, a patient died earlier this month after a ransomware attack shut down operations at a hospital and she had to be diverted to another one. These are criminals without scruples, Corman explained. The attacks were happening before the pandemic, and there has been no ceasefire amid the crisis. In healthcare, hackers continue to rely on previously successful attack methods, especially phishing, which remains as effective as ever.


FBI, CISA: Russian hackers breached US government networks, exfiltrated data

US officials identified the Russian hacker group as Energetic Bear, a codename used by the cybersecurity industry. Other names for the same group include TEMP.Isotope, Berserk Bear, TeamSpy, Dragonfly, Havex, Crouching Yeti, and Koala. Officials said the group has been targeting dozens of US state, local, territorial, and tribal (SLTT) government networks since at least February 2020. Companies in the aviation industry were also targeted, CISA and FBI said. The two agencies said Energetic Bear "successfully compromised network infrastructure, and as of October 1, 2020, exfiltrated data from at least two victim servers." The intrusions detailed in today's CISA and FBI advisory are a continuation of attacks detailed in a previous CISA and FBI joint alert, dated October 9. The previous advisory described how hackers had breached US government networks by chaining vulnerabilities in VPN appliances and Windows. Today's advisory attributes those intrusions to the Russian hacker group but also provides additional details about Energetic Bear's tactics. According to the technical advisory, Russian hackers used publicly known vulnerabilities to breach networking gear, pivot to internal networks, elevate privileges, and steal sensitive data.


Secure NTP with NTS

NTP can be secured well with symmetric keys. Unfortunately, the server has to have a different key for each client and the keys have to be securely distributed. That might be practical with a private server on a local network, but it does not scale to a public server with millions of clients. NTS includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. It uses Transport Layer Security (TLS) on TCP port 4460. It is designed to scale to very large numbers of clients with a minimal impact on accuracy. The server does not need to keep any client-specific state. It provides clients with cookies, which are encrypted and contain the keys needed to authenticate the NTP packets. Privacy is one of the goals of NTS. The client gets a new cookie with each server response, so it doesn’t have to reuse cookies. This prevents passive observers from tracking clients migrating between networks. The default NTP client in Fedora is chrony. Chrony added NTS support in version 4.0. The default configuration hasn’t changed. Chrony still uses public servers from the pool.ntp.org project and NTS is not enabled by default. Currently, there are very few public NTP servers that support NTS. The two major providers are Cloudflare and Netnod.
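
To make the setup concrete, here is a minimal sketch of enabling NTS in chrony (assumes chrony 4.0 or later; the Netnod server name is an assumption and should be checked against the provider's current documentation):

```
# /etc/chrony.conf -- add NTS-secured time sources
server time.cloudflare.com iburst nts
server nts.netnod.se iburst nts
```

After restarting chronyd, `chronyc -N authdata` shows whether NTS key establishment succeeded and cookies are present for each source.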


Non-Intimidating Ways To Introduce AI/ML To Children

The brainchild of IBM, Machine Learning for Kids is a free, web-based tool to introduce children to machine learning systems and applications of AI in the real world. Machine Learning for Kids was built by Dale Lane using APIs from IBM Watson. It provides hands-on experiments to train ML systems that recognise texts, images, sounds, and numbers. It leverages platforms such as Scratch and App Inventor to create interesting projects and games. It is also being used in schools as a significant resource to teach AI and ML to students. Teachers can also set up their own admin page to manage their students' access. A product from the MIT Media Lab, Cognimates is an open-source AI learning platform for young children starting from age 7. Children can learn how to build games, program robots, and train their own AI models. Like Machine Learning for Kids, Cognimates is also based on the Scratch programming language. It provides a library of tools and activities for learning AI. This platform even allows children to program intelligent devices such as Alexa. Another offering from Google to make learning AI fun and engaging is AIY. The name is a play on AI and do-it-yourself (DIY).


How RPA differs from conversational AI, and the benefits of both

Enterprises are working to digitally transform core business processes to enable greater automation of backend processes and to encourage more seamless customer experiences and self-service at the frontend. We are seeing banks, insurers, retailers, energy providers and telcos working to develop their own digital assistants with a growing number of skills, while still providing a consistent brand experience. Developing bots doesn’t have to be complex. It is more important to carefully identify the right use cases where these technologies will deliver clear ROI with the least amount of effort. Whether an enterprise is applying RPA or conversational AI, or both, it’s important to first understand the business problem that needs to be solved, and then identify where bots will make an immediate difference. Then consider the investment required, barriers to successful implementation, and the expected business outcomes. It’s better to start small with a narrowly focused use case and achievable KPIs, rather than trying to do too much at once. Conversational AI and RPA are very powerful automation technologies. When designed well, a chatbot can automate up to 80% of routine queries that come into a customer service centre or IT helpdesk, saving an organisation time and money and enabling it to scale its operations.


Things to consider when running visual tests in CI/CD pipelines: Getting Started

Testing – it’s an important part of a developer’s day-to-day, but it’s also crucial to the operations engineer. In a world where DevOps is more than just a buzzword, where it’s become accepted as a mindset shift and culture change, we all need to consider running quality tests. Traditional testing may include UI testing, integration testing, code coverage checks, and so forth, but at some point, we still need eyeballs on a physical page. How many times have we seen a funny-looking page because of CSS errors? Or worse yet, an important button like, say, “Buy now” go “missing” because someone changed the CSS and now the button blends in with the background? Logically, the page still works; even from a traditional test perspective, the button can be clicked, and the DOM (used in UI test verification) is perfect. Visually, however, the page is broken; this is where visual testing comes into play. Visual testing allows us to combine automated UI testing with the power of AI to help us determine whether a page “looks right” in addition to whether it “functions right.” Earlier this year, I partnered with Angie Jones from Applitools in a joint webinar where we talked about best practices for both visual testing and CI/CD. This blog post is a summary of that webinar and how to handle visual testing in CI/CD.
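
As a sketch of what such a check looks like in practice, here is a minimal visual test using the Applitools Eyes Python SDK (assuming the eyes-selenium package, an APPLITOOLS_API_KEY environment variable, and a placeholder page URL):

```python
# Minimal visual-check sketch: fails if the page no longer "looks right",
# even when the DOM and click handlers are still functionally intact.
from selenium import webdriver
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()  # reads APPLITOOLS_API_KEY from the environment

try:
    eyes.open(driver, "Shop", "Buy button renders", {"width": 1024, "height": 768})
    driver.get("https://example.com/product")  # placeholder URL
    # AI-backed comparison of the window against the approved baseline;
    # a CSS change that blends the "Buy now" button into the background
    # fails this check even though the button is still clickable.
    eyes.check("product page", Target.window())
    eyes.close()   # raises if visual differences were found
finally:
    eyes.abort()   # no-op if close() already succeeded
    driver.quit()
```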


Design patterns – for faster, more reliable programming

Every design has a pattern and everything has a template, whether it be a cup, house, or dress. No one would consider attaching a cup’s handle to the inside – apart from novelty item manufacturers. It has simply been proven that these components should be attached to the outside for practical purposes. If you are taking a pottery class and want to make a pot with handles, you already know what the basic shape should be. It is stored in your head as a design pattern, in a manner of speaking. The same general idea applies to computer programming. Certain procedures are repeated frequently, so it was no great leap to think of creating something like pattern templates. In our guide, we will show you how these design patterns can simplify programming. The term “design pattern” was originally coined by the American architect Christopher Alexander who created a collection of reusable patterns. His plan was to involve future users of the structures in the design process. This idea was then adopted by a number of computer scientists. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (sometimes referred to as the Gang of Four or GoF) helped software patterns break through and gain acceptance with their book “Design Patterns – Elements of Reusable Object-Oriented Software” in 1994.
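
As a small taste of how a pattern template translates into code, here is a minimal Observer pattern sketch in Python (illustrative only, not taken from the guide):

```python
# Observer pattern: a subject notifies registered observers of changes,
# so the subject never needs to know who is listening.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        """Register a callable to be invoked on every change."""
        self._observers.append(observer)

    def notify(self, event):
        """Broadcast an event to all registered observers."""
        for observer in self._observers:
            observer(event)

subject = Subject()
subject.attach(lambda event: print(f"observer saw: {event}"))
subject.notify("state changed")  # prints: observer saw: state changed
```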


Public and Private Blockchain: How to Differentiate Them and Their Use Cases

Public blockchain is the model of Bitcoin, Ethereum, and Litecoin and is essentially considered to be the original distributed ledger structure. This type of blockchain is completely open, and anyone can join and participate in the network. It can receive and send transactions from anybody in the world, and it can also be audited by anyone who is in the system. Each node (a computer connected to the network) has as much transmission power as any other, making public blockchains not only decentralized, but fully distributed as well. ... Private blockchains, on the other hand, are essentially forks of the originator but are deployed in what is called a permissioned manner. In order to gain access to a private blockchain network, one must be invited and then validated by either the network starter or by specific rules that were put in place by the network starter. Once the invitation is accepted, the new entity can contribute to the maintenance of the blockchain in the customary manner. Because the blockchain is on a closed network, it offers the benefits of the technology but not necessarily the distributed characteristics of the public blockchain.



Quote for the day:

"Every moment is a golden one for those who have the vision to recognize it as such." -- Henry Miller

Daily Tech Digest - October 23, 2020

Enterprise Architecture and Tech Debt

Architects must assess the changed needs of the business – customers, staff, supply chain – and identify efficient technology to support those new requirements. There is an opportunity to walk away from legacy technology containing Unplanned Tech Debt that has never been corrected, the result of poor practices or poorly communicated requirements. The move to remote workspaces may present the option to discontinue the use of equipment or applications that have become instances of Creeping Tech Debt, where features become obsolete, replaced by better, faster, more capable upgrades, or where the applications and operating systems are no longer supported, causing security vulnerabilities. Changes in market dynamics, as the customer base struggles to understand its new needs, constraints and opportunities, invite architects and product developers to consider incurring Intentional Tech Debt. By releasing prototypes and minimum viable products (MVPs), customers become partners in product development, helping to build the plane even as it reaches cruising altitude. Architects know this will entail false starts, as perceived requirements morph or fade away and require rework as the product matures.


Understanding GraphQL engine implementations

Generic and flexible are the key words here, and it’s important to realize that it’s hard to keep generic APIs performant. Performance is the number one reason that someone would write a highly customized endpoint in REST (e.g. to join specific data together), and that is exactly what GraphQL tries to eliminate. In other words, it’s a tradeoff, which typically means we can’t have our cake and eat it too. However, is that true? Can’t we get both the generality of GraphQL and the performance of custom endpoints? It depends! Let me first explain what GraphQL is, and what it does really well. Then I’ll discuss how this awesomeness moves problems toward the back-end implementation. Finally, we’ll zoom into different solutions that boost the performance while keeping the generality, and how that compares to what we at Fauna call “native” GraphQL, a solution that offers an out-of-the-box GraphQL layer on top of a database while keeping the performance and advantages of the underlying database. Before we can explain what makes a GraphQL API “native,” we need to explain GraphQL. After all, GraphQL is a multi-headed beast in the sense that it can be used for many different things. First things first: GraphQL is, in essence, a specification that defines three things: schema syntax, query syntax, and a query execution reference.
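
The first two of those three things fit in a few lines. The snippet below sketches a tiny schema and a query against it (all type and field names are invented for illustration):

```graphql
# Schema syntax: the server declares the shape of its data graph.
type Author {
  name: String!
  posts: [Post!]!
}

type Post {
  title: String!
  author: Author!
}

type Query {
  posts: [Post!]!
}

# Query syntax: the client asks for exactly the fields it needs,
# including the kind of join that REST would need a custom endpoint for.
query {
  posts {
    title
    author {
      name
    }
  }
}
```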


Digital transformation starts with software development

Software development is another key requirement for businesses that are pursuing digital transformation quests. Leveraging technology and ensuring it is able to offer reliable and high-quality results is a key focus for the majority of companies. At this stage, it is important for businesses to acknowledge what their strategic goals are and implement software that is going to help them reach those ambitions and achieve tangible results. Businesses should also ensure the technology they select is equipped with sustainable software that will withstand time and inevitable digital advances, and deliver on the requirements of the new normal. In addition, today’s current climate has emphasised the importance of providing teams with reliable software that enables them to work remotely and complete projects without any constraints. In the midst of the pandemic, 60% of the UK’s adult population were working remotely. Unfortunately, many businesses did not have the technology in place to cope with this immediate change. Therefore, IT decision makers and leaders had to undergo a rapid shift to remain agile and maintain continuity during this unprecedented time. By keeping software up to date and regularly enhancing tools, employees can remain productive and maintain a high level of communication with colleagues.


We need to be more imaginative about cybersecurity than we are right now

“Trying to achieve security is something of a design attitude—where at every level in your system design, you are thinking about the possible things that can go wrong, the ways the system can be influenced, and what circuit-breakers you might have in place in case something unforeseen happens,” said Mickens. “That seems like a vague answer because it is: There isn’t a magic way to do it.” Designers, Mickens continued, might even need to consider the political or ethical mindset of the people using their system. “There’s no simple way to figure out if our system is going to be used ethically or not, because ethics itself is very poorly defined. And when we think about security, we need to have a similarly broad attitude, saying that there are fundamental questions which are ambiguous, and which have no clean answer—‘What is security and how do I make my product secure?’ As a result, we need to be more imaginative than we are right now.” Thus, suggested Zittrain, the question has moved to the supply side: Consumers want safe products, and the onus is on designers to provide them. This, he said, opens an even thornier question: Does there need to be a regulatory board for people producing code, and if not, “What would incent the suppliers to worry about systematic risks that might not even be traced back to them?”


How to Make DevOps Work with SAFe and On-Premise Software

The main issues we dealt with in speeding up our delivery from a DevOps perspective were: testing (unit and integration), pipeline security checks, licensing (open source and other), builds, static code analysis, and deployment of the current release version. For some of these problems we had the tools; for some we didn’t, and we had to integrate new tools. Another issue was the lack of general visibility into the pipeline. We were unable to get a glimpse of what our DevOps status was at any given moment. This was because we were using many tools for different purposes, and there was no consolidated place where someone could take a look and see the complete status for a particular component or the broader project. With distributed teams, it is always challenging to bring everyone to the same understanding of, and visibility into, the development status. We implemented a tool to enable standard visibility into how each team was doing and how the SAFe train(s) were doing in general. This tool provided us with a good overview of the pipeline health. The QA department has been working as the key-holder of the releases. Its responsibility is to check the releases for bugs and not allow a version to be released if there are critical ones.


The Two Sides of AI in the Modern Digital Age

We will now discuss some of its more sinister aspects. As we’ve already mentioned, as the digital landscape welcomes an increasing number of technological advancements, so does the threat landscape. With rapid progress in the cybersecurity arena, cybercriminals have turned to AI to amp up their sophistication. One way hackers leverage the potential of artificial intelligence is by using AI to hide malicious code in otherwise trustworthy applications. The hackers program the code in such a way that it executes after a certain period has elapsed, which makes detection even more difficult. In some cases, cybercriminals have programmed the code to activate after a particular number of individuals have downloaded the application, which maximizes the attack’s impact. Furthermore, hackers can manipulate the power offered by artificial intelligence and use AI’s ability to adapt to changes in the environment for their own gain. Typically, hackers employ AI-powered systems’ adaptability to execute stealth attacks and formulate intelligent malware programs. These malware programs can collect information during attacks on why previous attempts weren’t successful and act accordingly.


A Pause to Address 'Ethical Debt' of Facial Recognition

This pause is needed. All too often, ethics lags technology. With all apologies to Jeff Goldblum, there's no need to be hunted by intelligent dinosaurs to realize that we often do things because "we can" rather than because "we should." The ACM's call for restraint is appropriate, although a few issues remain. What about the facial data that already exists from currently deployed systems? That issue is not unique to facial recognition; it is well known from GDPR compliance and other use cases. The stoppage is intended for private and public entities, but personal cameras — and with them an opening for facial recognition — are rapidly becoming ubiquitous. Log in to your neighborhood watch program for a close-to-home example. (What street doesn't have a doorbell camera?) Public life is being monitored, and passive data on our habits and lives is continually collected; any place there is a camera, facial recognition technology is in play. The call by the ACM could be stronger. It urges the immediate suspension of use of facial recognition technology anywhere it is "known or reasonably foreseeable to be prejudicial to established human and legal rights." What is considered reasonable here? Is good intent enough to absolve misuse of these systems from blame, for instance?


DevOps best practices Q&A: Automated deployments at GitHub

Ultimately, we push code to production on our own GitHub cloud platform, in our data centers, utilizing features provided by the GitHub UI and API along the way. The deployment process can be initiated with ChatOps, a series of Hubot commands. They enable us to automate all sorts of workflows and have a pretty simple interface for people to engage with in order to roll out their changes. When folks have a change that they’d like to ship or deploy to github.com, they just need to run .deploy with a link to their pull request, and the system will automatically deconstruct what’s within that link, using GitHub’s API to gather important details such as the required CI checks, authorization, and authentication. Once the deployment has progressed through a series of stages—which we will talk about in more detail later—you’re able to merge your pull request in GitHub, and from there you can continue on with your day, continue making improvements, and keep shipping features. The system will know exactly how to deploy it, which servers are involved, and what systems to run. The person running the command has no need to know how it all happens. Before any changes are made, we run a series of authentication processes to ensure a user even has the right access to run these commands.
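
For illustration, kicking off a deployment from chat looks roughly like this (organization, repository, and pull request number are hypothetical):

```
.deploy https://github.com/example-org/example-app/pull/1234
```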


Exploring the prolific threats influencing the cyber landscape

Ransomware has quickly become a more lucrative business model in the past year, with cybercriminals taking online extortion to a new level by threatening to publicly release stolen data or sell it and name and shame victims on dedicated websites. The criminals behind the Maze, Sodinokibi (also known as REvil) and DoppelPaymer ransomware strains are the pioneers of this growing tactic, which is delivering bigger profits and resulting in a wave of copycat actors and new ransomware peddlers. Additionally, the infamous LockBit ransomware emerged earlier this year, which — in addition to copying the extortion tactic — has gained attention due to its self-spreading feature that quickly infects other computers on a corporate network. The motivations behind LockBit appear to be financial, too. CTI analysts have tracked cybercriminals behind it on Dark Web forums, where they are found to advertise regular updates and improvements to the ransomware, and actively recruit new members promising a portion of the ransom money. The success of these hack-and-leak extortion methods, especially against larger organizations, means they will likely proliferate for the remainder of 2020 and could foreshadow future hacking trends in 2021.


Unsecured Voice Transcripts Expose Health Data - Again

In a report issued Tuesday, security researchers at vpnMentor write that they discovered the exposed voice transcript records in early July and contacted Pfizer about the problem three times before the pharmaceutical company finally responded on Sept. 22 and fixed the issue on Sept. 23. Contained in the exposed records was personally identifiable information, including customers' full names, home addresses, email addresses, phone numbers and partial details of health and medical status, the report says. ... "However, upon further investigation, we found files and entries connected to various brands owned by Pfizer," including Lyrica, Chantix, Viagra and the cancer treatments Ibrance and Aromasin, the report says. Eventually, the vpnMentor team concluded the exposed bucket most likely belonged to the company's U.S. Drug Safety Unit. "Once we had concluded our investigation, we reached out to Pfizer to present our findings. It took two months, but eventually, we received a reply from the company." In a statement provided to Information Security Media Group, the pharmaceutical company says: "Pfizer is aware that a small number of non-HIPAA data records on a vendor-operated system used for feedback on existing medicines were inadvertently publicly available. ..."



Quote for the day:

"A leader or a man of action in a crisis almost always acts subconsciously and then thinks of the reasons for his action." -- Jawaharlal Nehru

Daily Tech Digest - October 22, 2020

Cisco reports highlight widespread desire for data privacy and fears over remote work security

Cisco has released two studies examining how workers feel about the current state of play when it comes to remote work security and data privacy, finding that thousands around the world are increasingly concerned about how their employers are handling the massive societal changes that have occurred over the last six months. The "Consumer Privacy" report includes findings from a study of responses from more than 2,600 adults in 12 countries across Europe, Asia, and the Americas. The "Global Future of Secure Remote Work" report has insights gleaned from over 3,000 IT decision makers in the Americas, Japan, China, and Europe. Both reports indicate that remote work is now a permanent part of the new normal, with 62% of respondents telling researchers that more than half of their workforce has been working remotely since the onset of the coronavirus pandemic. Despite the massive shift to telecommuting, the vast majority of people who responded to the survey said they did not trust the digital tools they used for work. Workers and consumers are particularly concerned about the privacy protections built into the tools they use for work, and nearly half of all respondents said they do not feel that most businesses can effectively protect their data today.


How To Protect Yourself From Unexpectedly High AWS Bills

Set up billing alerts. If you are using AWS, even for a small task, please please please set up billing alerts. They are not required during setup, but if you are a non-enterprise user, I would consider this step mandatory, as AWS will not alert you to dramatic increases in charges unless they surpass $15K, which is already an incredible amount of money. Read the pricing table…carefully. If you are installing a new service, make sure to carefully read the pricing table. Amazon will sometimes set ridiculous defaults for container size which you might not see until the bill comes in. Do understand, however, that this might not be good enough, as bugs, loose API keys, and improper installations can do crazy things. Consider using another service. If you are a non-business individual user or small-business user, you might want to consider using another service. AWS is built for enterprise customers, and as such, an enterprise wallet. Yes, it can be very cheap, but consider this: after my little mix-up, I could have paid $150 a month over all of the years I used AWS and still come out ahead. Yes, AWS might be cheap at first, but one mistake can make it very expensive.
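
For readers who want to automate the first tip, here is a hedged sketch of creating a billing alarm with boto3 (the SNS topic ARN and the $50 threshold are placeholders; the account must have billing alerts enabled, and AWS publishes billing metrics only in us-east-1):

```python
# Sketch: alarm when estimated monthly charges exceed a chosen threshold.
import boto3

# Billing metrics live in us-east-1 regardless of where workloads run.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-50-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,               # evaluate the metric every 6 hours
    EvaluationPeriods=1,
    Threshold=50.0,             # placeholder: alert past $50
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```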


Learn from the hype surrounding kale – don’t rush Kubernetes

It requires more than just Kubernetes to achieve business outcomes, and hype surrounds the technology and term Kubernetes. A lot of false expectations exist too. Some companies may have heard on the IT grapevine that Google, AWS, Netflix, and Microsoft bet on Docker as a container format and Kubernetes as the orchestration engine – that the technology can scale and provide infrastructure at the same level as the big players. Simultaneously they may not be aware that the whole business model of such companies focuses on making infrastructure fluid and immediately available. Regular customers have a very different business model, with solutions based on trusted platforms by trusted partners that have solved virtualisation in the past, and those partners now have solutions to achieve the same outcomes with containerisation. Of course, Kubernetes technology also has its benefits. Businesses can become more efficient in their use of IT and achieve better results, faster, from development life cycles. They’ll produce better software via more automation and standardisation. Organisations can then use software to explore new business opportunities, experiment with the best ways to profit from ideas, and evolve accordingly.


Ubiq Rolls Out Encryption-as-a-Service Platform Aimed at Developers

Encryption has always been a fundamental part of computing — many of the early uses of computers were for cracking codes — and the technology has always been difficult to implement correctly. Despite the fact that there are many open source encryption efforts, adoption remained low until data-security capabilities could be integrated into technology. Even companies immersed in security and technology have had poor adoption rates. Google, for example, had encryption implemented in only half of its products in 2014, although the company claims that share is 95% today. On the development side, encryption errors continue to be prevalent among applications, irrespective of the programming language. Cryptographic errors are the second most common software vulnerability, occurring in 62% of applications, just behind information leakage, which occurs in 64%, according to application security firm Veracode. Encryption failures are also a significant factor in the severity of many data breaches. From the theft of unencrypted e-mails from Stratfor in 2012 to the failure to encrypt data in publicly accessible databases and Amazon S3 buckets, the failure of developers and operations workers to lock down every step in the data life cycle has led to recurring breaches.


Researchers open the door to new distribution methods for secret cryptographic keys

The researchers suggest a simple do-it-yourself lesson to help us better understand framed knots, those three-dimensional objects that can also be described as a surface. “Take a narrow strip of paper and try to make a knot,” said first author Hugo Larocque, uOttawa alumnus and current PhD student at MIT. “The resulting object is referred to as a framed knot and has very interesting and important mathematical features.” The group tried to achieve the same result but within an optical beam, which presents a higher level of difficulty. After a few tries (and knots that looked more like knotted strings), the group came up with what they were looking for: a knotted ribbon structure that is quintessential to framed knots. “In order to add this ribbon, our group relied on beam-shaping techniques manipulating the vectorial nature of light,” explained Hugo Larocque. “By modifying the oscillation direction of the light field along an “unframed” optical knot, we were able to assign a frame to the latter by “gluing” together the lines traced out by these oscillating fields.” According to the researchers, structured light beams are being widely exploited for encoding and distributing information.


Learn what to test in a mobile application

Mobile devices present different issues than desktop computers and laptops. For example, tilting a mobile device could cause the app to render in landscape form and look odd -- this won't happen on a laptop. A user can lose network connection briefly, which causes state problems. And, in some cases, notifications from other applications can interrupt the system. Anyone on a mobile device could experience these issues during everyday use. These problems might be impossible to simulate with a test automation tool. Automated mobile test scripts don't offer enough value to justify the time necessary to write them for every possible condition. Testers can be more successful if they follow the 80/20 rule: Assume 80% of failed tests stem from 20% of test cases. When these test scripts break, something is likely broken with the application. Check for these kinds of issues when the team rewrites the UI, or brings in a new GUI library or component. Test the software as a system when it first comes together, and before major releases under challenging conditions. The first few times QA professionals field test an app -- i.e., take a mobile device on a long car ride, or swap between cellular data and Wi-Fi -- it might take a few days.


Translating lost languages using machine learning

Spearheaded by MIT Professor Regina Barzilay, the system relies on several principles grounded in insights from historical linguistics, such as the fact that languages generally only evolve in certain predictable ways. For instance, while a given language rarely adds or deletes an entire sound, certain sound substitutions are likely to occur. A word with a “p” in the parent language may change into a “b” in the descendant language, but changing to a “k” is less likely due to the significant pronunciation gap. By incorporating these and other linguistic constraints, Barzilay and MIT PhD student Jiaming Luo developed a decipherment algorithm that can handle the vast space of possible transformations and the scarcity of a guiding signal in the input. The algorithm learns to embed language sounds into a multidimensional space where differences in pronunciation are reflected in the distance between corresponding vectors. This design enables them to capture pertinent patterns of language change and express them as computational constraints. The resulting model can segment words in an ancient language and map them to counterparts in a related language. 
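
A toy sketch makes the geometry tangible (the vectors are invented; the actual MIT model learns its embeddings from data):

```python
# Sounds that are close in pronunciation should sit close together in
# vector space, so likely substitutions (p -> b) cost less than
# unlikely ones (p -> k).
import numpy as np

emb = {
    "p": np.array([0.9, 0.1, 0.0]),
    "b": np.array([0.8, 0.2, 0.1]),  # near "p": a plausible sound change
    "k": np.array([0.0, 0.9, 0.5]),  # far from "p": an implausible one
}

def distance(a: str, b: str) -> float:
    """Euclidean distance between two sound embeddings."""
    return float(np.linalg.norm(emb[a] - emb[b]))

print(distance("p", "b"))  # small value: substitution is cheap
print(distance("p", "k"))  # large value: substitution is penalized
```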


On the trail of the XMRig miner

Alongside well-known groups that make money from data theft and ransomware (for example, Maze, which is suspected of the recent attacks on SK Hynix and LG Electronics), many would-be attackers are attracted by the high-profile successes of cybercrime. In terms of technical capabilities, such amateurs lag far behind organized groups and therefore use publicly available ransomware, targeting ordinary users instead of the corporate sector. The outlays on such attacks are often quite small, so the miscreants have to resort to various stratagems to maximize the payout from each infected machine. For example, in August of this year, we noticed a rather curious infection method: on the victim’s machine, a Trojan (a common one detected by our solutions as Trojan.Win32.Generic) was run, which installed administration programs, added a new user, and opened RDP access to the computer. Next, the ransomware Trojan-Ransom.Win32.Crusis started on the same machine, followed by the loader of the XMRig miner, which then set about mining Monero cryptocurrency. As a result, the computer would already start earning money for the cybercriminals just as the user saw the ransom note.


5 steps to learn any programming language

Some people love learning new programming languages. Other people can't imagine having to learn even one. In this article, I'm going to show you how to think like a coder so that you can confidently learn any programming language you want. The truth is, once you've learned how to program, the language you use becomes less of a hurdle and more of a formality. In fact, that's just one of the many reasons educators say to teach kids to code early. Regardless of how simple their introductory language may be, the logic remains the same across everything else children (or adult learners) are likely to encounter later. With just a little programming experience, which you can gain from any one of several introductory articles here on Opensource.com, you can go on to learn any programming language in just a few days (sometimes less). Now, this isn't magic, and you do have to put some effort into it. And admittedly, it takes a lot longer than just a few days to learn every library available to a language or to learn the nuances of packaging your code for delivery. But getting started is easier than you might think, and the rest comes naturally with practice.


Articulating Leadership through Nemawashi and Collaborative Boards

Many meetings are just conversations with no conclusion and it seems that we cannot get over that. The point is that we need both: meetings and conversations, but we shouldn’t mix them. Nemawashi puts order here, separating conversations and meetings, similar to what Scrum does with the different events, where each one has a clear purpose. Meetings are formal, concrete, to the point; and there should be no surprises. It is the official acknowledgement of everything previously discussed and we just get together to have everyone on the same page. It is the formal moment when decisions are communicated and officially agreed on. Conversations instead take place ad-hoc, as often and as long as needed, involving only the necessary (and engaged) participants. This is where focused discussions take place. ... People are deciding on things anyway all the time, but on the wrong things. One clear symptom is too much effort on details and important points being missed or late, while everyone is "busy". Collaborative Boards is where teams and leaders meet. They articulate top-down challenges through bottom-up proposals, keeping them aligned towards the vision and focusing on what really matters.



Quote for the day:

"Failures only triumph if we don't have the courage to try again. -- Gordon Tredgold

Daily Tech Digest - October 21, 2020

6 tips for CIOs managing technical debt

Many applications are created to solve a specific business problem that exists in the here-and-now, without thought about how that problem will evolve or what other adjacencies it pertains to. For example, a development team might jump into solving the problem of creating a database to manage customer accounts without taking into consideration how that database is integrated with the sales/prospecting database. This can lead to thousands of staff-hours downstream spent transforming contacts and importing them from the sales to the customer database. ... One of the best-known problems in large organizations is the disconnect between development and operations where engineers design a product without first considering how their peers in operations will support it, thus resulting in support processes that are cumbersome, error-prone and inefficient. The entire programming discipline of DevOps exists in large part to resolve this problem by including representatives from the operations team on the development team -- but the DevOps split exists outside programming. Infrastructure engineers may roll out routers, edge computers or SD-WAN devices without knowing how the devices will be patched or upgraded.


The Third Wave of Open Source Migration

The first and second open-source migration waves were periods of rapid expansion for companies that rose up to provide commercial assurances for Linux and the open-source databases, like Red Hat, MongoDB, and Cloudera. Or platforms that made it easier to host open source workloads in a reliable, consistent, and flexible manner via the cloud, like Amazon Web Services, Google Cloud, and Microsoft Azure. This trend will continue in the third wave of open source migration, as organizations interested in reducing cost without sacrificing development speed will look to migrate more of their applications to open source. They’ll need a new breed of vendor—akin to Red Hat or AWS—to provide the commercial assurances they need to do it safely.  It’s been hard to be optimistic over the last few months. But as I look for a silver lining in the current crisis, I believe there is an enormous opportunity for organizations to get even more nimble in their use of open source. The last 20+ years of technology history have shown that open source is a powerful weapon organizations can use to navigate a global downturn.


It’s Time to Implement Fair and Ethical AI

Companies have gotten the message that artificial intelligence should be implemented in a manner that is fair and ethical. In fact, a recent study from Deloitte indicates that a majority of companies have actually slowed down their AI implementations to make sure these requirements are met. But the next step is the most difficult one: actually implementing AI in a fair and ethical way. A Deloitte study from late 2019 and early 2020 found that 95% of executives surveyed said they were concerned about ethical risk in AI adoption. While machine learning brings the possibility to improve the quantity and quality of decision-making based on data, it also brings the potential for companies to damage their brand and reduce the trust that customers have placed in it if AI is implemented poorly. In fact, these risks were so palpable to executives that 56% of them say they have slowed down their AI adoptions, according to Deloitte’s study. While progress has been made in getting the message out about fair and ethical AI, there is still a lot of work to be done, says Beena Ammanath, the executive director of the Deloitte AI Institute. “The first step is well underway, raising awareness. Now I think most companies are aware of the risk associated” with AI deployments, Ammanath says.


C# designer Torgersen: Why the programming language is still so popular and where it's going next

Like all modern programming languages, C# continues to evolve. With C# 9.0 on course to arrive in November, the next update will focus on supporting "terse and immutable" (i.e. unchangeable) representation of data shapes. "C# 9.0 is trying to take some next steps for C# in making it easier to deal with data that comes over the wire, and to express the right semantics for data, if you will, that comes out of what we call an object-oriented paradigm originally," says Torgersen. C# 9.0 takes the next step in that direction with a feature called Records, says Torgersen. Records are a reference type that allows a whole object to be immutable and makes it act like a value. "We've found ourselves, for a long time now, borrowing ideas from functional programming to supplement the object-oriented programming in a way that really helps with, for instance, cloud-oriented programming, and helps with data manipulation," Torgersen explains. "Records is a key feature of C# 9.0 that will help with that." Beyond C# 9.0 is where things get more theoretical, though. Torgersen insists that there's no concrete 'endgame' for the programming language – or at least, not until it finally reaches some as-yet unknown expiration date.
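
Since the feature is easiest to see in code, here is a short C# 9.0 sketch of a positional record (illustrative, not from the interview):

```csharp
using System;

// Top-level statements (also new in C# 9.0) exercising the record below.
var ada = new Person("Ada", "Lovelace");
var married = ada with { LastName = "King" };  // non-destructive mutation

Console.WriteLine(ada == new Person("Ada", "Lovelace"));  // True: value equality
Console.WriteLine(married);  // Person { FirstName = Ada, LastName = King }

// One line declares a terse, immutable data shape with value-based
// equality, a readable ToString, and support for `with` expressions.
public record Person(string FirstName, string LastName);
```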


DOJ's antitrust fight with Google: how we got here

The DOJ said in its filing that this case is "just beginning." The government also says it's seeking to change Google's practices and that "nothing is off the table" when it comes to undoing the "harm" caused by more than a decade of anticompetitive business. Is it hard to compete with Google? The numbers speak for themselves. But that's because the company is darn good at what it does. Does Google use your data to help it improve search and advertising? Yes, it does. But this suit is not about privacy. It's about Google's lucrative advertising business. Just two years ago, the European Commission (EC) fined Google over €8 billion for various advertising violations. Though the DOJ is taking a similar tack, Google has done away with its most egregious requirements. These included exclusivity clauses, which stopped companies from placing competitors' search advertisements on their results pages, and Premium Placement, which reserved the most valuable page real estate for Google AdSense ads. It's also true that Google has gotten much more aggressive about using its own search pages to hawk its own preferred partners. As The Washington Post's Geoffrey A. Fowler recently pointed out: if you search for "T Shirts" on Google, the first real search result appears not on row one, two, or three — those are reserved for advertising — or even rows four through eight.


7 Hard-Earned Lessons Learned Migrating a Monolith to Microservices

It’s tempting to go from legacy right to the bleeding edge. And it’s an understandable urge. You’re seeking to future-proof this time around so that you won’t face another refactor anytime soon. But I’d urge caution in this regard, and suggest considering an established route. Otherwise, you may find yourself wrangling two problems at once and getting caught in a fresh new rabbit hole. Most companies can’t afford to pioneer new technology, and the ones that can tend to do it outside of any critical path for the business. ... For all its limitations, a monolithic architecture does have several intrinsic benefits. One of these is that it’s generally simple. You have a single pipeline and a single set of development tools. Venturing into a distributed architecture involves a lot of additional complexity, and there are lots of moving parts to consider, particularly if this is your first time doing it. You’ll need to compose a set of tools to make the developer experience palatable, possibly write some of your own (although I’d caution against this if you can avoid it), and factor in the discovery and learning process for all of that as well.


What is confidential computing? How can you use it?

To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors. To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology. Networking and storage drivers are also critical to the deployment of secure workloads and we had to ensure we were capable of handling confidential computing traffic. ... With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving privacy of confidential datasets. This can lead to the development of transformation technologies – imagine, for example, being able to more quickly build vaccines and cure diseases as a result of this secure collaboration.


What A CIO Wants You to Know About IT Decision Making

CIOs know the organization needs new ideas, new products, new services, etc. as well as changes to current rules, regulations, and business processes to grow markets and stay ahead of the competition. CIOs also know that the rules, regulations, and processes are the foundations of trust. Those things that seem to inhibit new ideas are the things that open customers’ minds to the next new thing an organization might offer. Without the trust established by following the rules, adhering to regulations, and, at the far extreme, simply obeying the law, customers would not stick around to try the next new thing. For proof, look at the stock price of organizations that publicly announce IT hacks, data loss, or other trust-breaking events. Customers leave when trust is broken, and part of the CIO’s role is to maintain that trust. While CIOs know the standards that must be upheld, they also know how to navigate those standards to support new ideas and change requests. Supporting new ideas and adapting to change requires input from you as the user, the employee or another member of the IT department, beyond just submitting the IT change form or following another automated process.


The Biggest Reason Not to Go All In on Kubernetes

Here’s the big thing that gets missed when a huge company open-sources its internal tooling – you’re most likely not on their scale. You don’t have the same resources, or the same problems as that huge company. Sure, you are working your hardest to make your company so big that you have the same scaling problems as Google, but you’re probably not there yet. Don’t get me wrong: I love when large enterprises open-source some of their internal tooling, as it’s beneficial to the open-source community and it’s a great learning opportunity, but I have to remind myself that they are solving a fundamentally different problem than I am. While I’m not suggesting that you avoid planning ahead for scalability, getting something like Kubernetes set up and configured instead of developing your main business application can waste valuable time and funds. There is a considerable time and overhead investment for getting your operations team up to speed on Kubernetes that may not pay off. Google can afford to have its teams learning, deploying, and managing new technology. But especially for smaller organizations, premature scaling or premature optimization are legitimate concerns. You may be attracted to the scalability, and it’s exciting. But, if you implement too early, you will only get the complexity without any of the benefit.


Did Domain Driven Design help me to ease out the approach towards Event Driven Architecture?

The most important aspect of Domain-Driven Design is setting the context of a domain/sub-domain, where a domain is a very high-level segregation of different areas of the business, and a sub-domain is a particular part of the domain representing a structure in which users use a specific ubiquitous language with the domain model. Without going into much detail on DDD, another paradigm one should be aware of is context mapping, which consists of identifying and classifying the relationships between bounded contexts within the domain. Two or more contexts can be related to each other in terms of goals, reused components (code), or as consumer and producer. ... The principles guiding the conglomeration of DDD and events help us shift the focus from the nouns (the domain objects) to the verbs (the events) in the domain. Focusing on the flow of events helps us understand how change propagates in the system — things like communication patterns, workflow, figuring out who is talking to whom, who is responsible for what data, and so on. Events represent facts about the domain and should be part of the Ubiquitous Language of the domain.
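
To ground the idea, here is a minimal Python sketch (all names invented) of a domain event expressed in the ubiquitous language of an "Orders" bounded context and consumed by another context:

```python
# A domain event is an immutable fact, named with a past-tense verb.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    customer_id: str
    occurred_at: datetime

# Another bounded context (e.g. Shipping) reacts to the published fact
# instead of reaching into the Orders context's data.
def on_order_placed(event: OrderPlaced) -> None:
    print(f"schedule shipment for order {event.order_id}")

on_order_placed(OrderPlaced("o-1", "c-42", datetime.now(timezone.utc)))
```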



Quote for the day:

“The only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle.” -- Steve Jobs

Daily Tech Digest - October 20, 2020

Five ways the pandemic has changed compliance—perhaps permanently

There is a strong acknowledgment that compliance will be forced to rely heavily on technology to ensure an adequate level of visibility to emerging issues. We need to strategically leverage technology and efficient systems to monitor risk. This is causing some speculation that a greater skills overlap will be required of CCO and CISO roles. This, however, also raises privacy concerns. Taylor believes the remote environment will lead to “exponential growth” in employee surveillance and that compliance officers will need to tread carefully given that this can undermine ethical culture: “Just because the tools exist, doesn’t mean you have to use them,” she says. Compliance veteran and advisor Keith Darcy predicts dynamic and continuous risk assessment—one that considers “the rapidly deteriorating and changing business conditions. ‘One-and-done’ assessments are completely inadequate.” Some predict that investigation interviews conducted on video conference and remote auditing will become the norm. Others are concerned that policies cannot be monitored or enforced without being in the office together; that compliance will be “out of sight, out of mind” to some degree. Communication must be a top priority for compliance, as the reduction of informal contacts with stakeholders and employees makes effectiveness more challenging.


In the Search of Code Quality

In general, functional and statically typed languages were less error-prone than dynamically typed, scripting, or procedural languages. Interestingly, defect types correlated more strongly with language than the number of defects did. In general, the results were not surprising, confirming what the majority of the community believed to be true. The study gained popularity and was extensively cited. There is one caveat: the results were statistical, and one must be careful when interpreting statistical results. Statistical significance does not always entail practical significance and, as the authors rightfully warn, correlation is not causation. The results of the study do not imply (although many readers have interpreted it in such a way) that if you change C to Haskell you will have fewer bugs in the code. Anyway, the paper at least provided data-backed arguments. But that’s not the end of the story. As one of the cornerstones of the scientific method is replication, a team of researchers tried to replicate the study from 2016. The result, after correcting some methodological shortcomings found in the original paper, was published in 2019 in the paper On the Impact of Programming Languages on Code Quality: A Reproduction Study.


3 unexpected predictions for cloud computing next year

With more than 90 percent of enterprises using multicloud, there is a need for intercloud orchestration. The capability to bind resources together in a larger process that spans public cloud providers is vital. Invoking application and database APIs that span clouds in sequence can solve a specific business problem; for example, inventory reorder points based on a common process between two systems that exist in different clouds. Emerging technology has attempted to fill this gap, such as cloud management platforms and cloud service brokers. However, they have fallen short. They only provide resource management between cloud brands, typically not addressing the larger intercloud resource and process binding. This is a gap that innovative startups are moving to fill. Moreover, if the public cloud providers want to truly protect their market share, they may want to address this problem as well. Second: cloudops automation with prebuilt corrective behaviors. Self-healing is a feature where a tool can take automated corrective action to restore systems to operation. However, you have to build these behaviors yourself, including automations, or wait as the tool learns over time. We’ve all seen the growth of AIops, and the future is that these behaviors will come prebuilt with pre-existing knowledge and will be able to operate in a distributed or centralized fashion.


How Organizations Can Build Analytics Agility

Data and analytics leaders must frame investments in the current context and prioritize data investments wisely by taking a complete view of what is happening to the business across a number of functions. For example, customers bank very differently in a time of crisis, and this requires banks to change how they operate in order to accommodate them. The COVID-19 pandemic forced banks to take another look at the multiple channels their customers traverse — branches, mobile, online banking, ATMs — and at how customers’ comfort levels with each have shifted. How customers bank, and what journeys they engage in, at what times and in what sequence, are all highly relevant to helping them achieve their financial goals. The rapid collection and analysis of data from across channels, paired with key economic factors, provided context that allowed banks to better serve customers in the moment. New and different sources of information — be it transaction-level data, payment behaviors, or real-time credit bureau information — can help ensure that customer credit is protected and that fraudulent activity is kept at bay. The business case for data investments suddenly makes itself as business leaders live through the implications of data gaps in real time.


Cisco targets WAN edge with new router family

The platform makes it possible to create a fully software-defined branch, including connectivity, edge compute, and storage. Compute and switching capabilities can be added via UCS-E Series blades and UADP-powered switch modules. Application hosting is supported using containers running on the Catalyst 8300’s multi-core, high-performance x86 processor, according to JL Valente, vice president of product management for Cisco’s Intent-Based Networking Group, in a blog about the new gear. Cisco said the Catalyst 8000V Edge Software is a virtual routing platform that can run on any x86 platform, on Cisco’s Enterprise Network Compute System appliance, or in a private or public cloud. Depending on what features customers need, the new family supports Cisco SD-WAN software, including Umbrella security software and Cisco Cloud On-Ramp, which lets customers tie distributed cloud applications from AWS, Microsoft and Google back to a branch office or private data center. The platforms produce telemetry that can be used in Cisco vAnalytics to provide insights into device and fabric performance, as well as to spot anomalies in the network and perform capacity planning.


2021 Will Be the Year of Catch-Up

With renewed focus on technology to bring about the changes needed, it’s crucial that organizations recognize that infrastructure must be secure. Our new office environment is anywhere we can find a Wi-Fi connection, and that opens many more doors to cyber-attacks. The rapid shift in business operations significantly impacted the cyberthreat landscape – as companies fast-tracked the migration of digital assets to the cloud, they also inadvertently increased the attack surfaces through which hackers can try to gain access to their data and applications. C-suite executives are moving quickly with network plans to support exploding customer and supplier demand for contactless interactions and the unplanned need to connect a remote workforce, yet they are also aware that they are not fully prepared to protect their organizations from unknown threats. The situation is further compounded by the cloud shared-responsibility model, under which cloud service providers are responsible for the security of the cloud itself while customers are responsible for securing the data they put into it. Many organizations rely on their third-party providers to certify security management services, but the decentralized nature of this model can add complexity to how applications and computing resources are secured.


BA breach penalty sets new GDPR precedents

The reduction in the fine also adds fuel to the ongoing class action lawsuit against BA, said Long at Lewis Silkin. “Completely separate from the £20m fine by the ICO, British Airways customers, and indeed any staff impacted, are likely to be entitled to compensation for any loss they have suffered, any distress and inconvenience they have suffered, and indeed possibly any loss of control over their data they have suffered,” she said. “This might only be £500 a pop but if only 20,000 people claim that is another potential £10m hit, and if 100,000 then £50m. So whilst a win today, this is very much only round one for BA.” Darren Wray, co-founder and CTO of privacy specialist Guardum, said it was easy to imagine many of the breach’s actual victims would be put out by the ICO’s decision. “Many will feel their data and their fight to recover any financial losses resulting from the airline’s inability to keep their data safe has been somewhat marginalised,” he said. “This can only strengthen the case of the group pursuing a class action case against BA. The GDPR and the UK DPA 2018 do after all allow for such action and if the regulator isn’t seen as enforcing the rules strongly enough, it leaves those whose data was lost few alternative options,” said Wray.


Is Artificial Intelligence Closer to Common Sense?

COMET relies on surface patterns in its training data rather than on an understanding of concepts. The key idea is to supplement those surface patterns with information from outside language, such as visual perceptions or embodied sensations. First-person representations, not language, would be the basis for common sense. Ellie Pavlick is attempting to teach intelligent agents common sense by having them interact with virtual reality. Pavlick notes that common sense would still exist even without the ability to talk to other people; presumably, humans were using common sense to understand the world before they could communicate. The idea is to teach intelligent agents to interact with the world the way a child does. Instead of associating the idea of eating with a textual description, an intelligent agent would be told, “We are now going to eat,” and would then observe the associated actions: gathering food from the refrigerator, preparing the meal, and consuming it. Concept and action would become associated with each other, and the agent could then generate the relevant words when seeing similar actions. Nazneen Rajani is investigating whether language models can reason using basic physics. For example, if a ball is inside a jar and the jar is tipped over, the ball will fall out.
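
A crude way to probe that kind of physical reasoning (a toy illustration, not Rajani’s actual methodology) is to ask a pretrained masked language model to complete a physics statement, for example with the Hugging Face transformers library:

    from transformers import pipeline

    # Ask a pretrained masked language model to complete a basic-physics
    # statement; roberta-base uses "<mask>" as its mask token.
    fill = pipeline("fill-mask", model="roberta-base")
    prompt = "A ball is inside a jar. When the jar is tipped over, the ball will <mask> out."
    for candidate in fill(prompt):
        print(candidate["token_str"], round(candidate["score"], 3))

Whether a high score for “fall” reflects physical understanding or merely surface co-occurrence statistics is precisely the question at issue.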


Russia planned cyber-attack on Tokyo Olympics, says UK

The UK is the first government to confirm details of the breadth of a previously reported Russian attempt to disrupt the 2018 winter Olympics and Paralympics in Pyeongchang, South Korea. It declared with what it described as 95% confidence that the disruption of both the winter and summer Olympics was carried out remotely by the GRU unit 74455. In Pyeongchang, according to the UK, the GRU’s cyber-unit attempted to disguise itself as North Korean and Chinese hackers when it targeted the opening ceremony of the 2018 winter Games, crashing the website so spectators could not print out tickets and crashing the wifi in the stadium. The key targets also included broadcasters, a ski resort, Olympic officials, service providers and sponsors of the games in 2018, meaning the objects of the attacks were not just in Korea. The GRU also deployed data-deletion malware against the winter Games IT systems and targeted devices across South Korea using VPNFilter malware. The UK assumes that the reconnaissance work for the summer Olympics – including spearphishing to gather key account details, setting up fake websites and researching individual account security – was designed to mount the same form of disruption, making the Games a logistical nightmare for business, spectators and athletes.


What intelligent workload balancing means for RPA

“To be truly effective, a bot must be able to work across a wide set of parameters. Let’s say, for example, a rule involves a bot to complete work for goods returned that are less than $100 in value, but during peak times when returns are high, the rules may dynamically change the threshold to a higher number. The bot should still be able to perform all the necessary steps for that amount of approval without having to be reconfigured every time.” Gopal Ramasubramanian, senior director, intelligent automation & technology at Cognizant, added: “If there are 100,000 transactions that need to be performed and instead of manually assigning transactions to different robots, the intelligent workload balancing feature of the RPA platform will automatically distribute the 100,000 transactions across different robots and ensure transactions are completed as soon as possible. “If a service level agreement (SLA) is tied to the completion of these transactions and the robots will not be able to meet the SLA, intelligent workload balancing can also commission additional robots on demand to distribute the workload and ensure any given task is completed on time.”
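
As a back-of-the-envelope sketch of the SLA logic Ramasubramanian describes (the numbers, names, and round-robin policy are all invented for illustration), a balancer needs only the queue size, an average per-transaction cost, and the deadline to decide when to commission more robots:

    import math

    SECONDS_PER_TRANSACTION = 2.0  # assumed average handling time per item

    def robots_needed(transactions: int, sla_seconds: float) -> int:
        # Minimum robot count whose combined throughput meets the deadline.
        return math.ceil(transactions * SECONDS_PER_TRANSACTION / sla_seconds)

    def dispatch(transactions: list[str], sla_seconds: float, active_robots: int) -> None:
        required = robots_needed(len(transactions), sla_seconds)
        if required > active_robots:
            # Commission extra robots on demand rather than miss the SLA.
            print(f"commissioning {required - active_robots} additional robots")
            active_robots = required
        # Simple round-robin assignment; a real platform would also rebalance
        # when a robot stalls or a transaction fails.
        for i, txn in enumerate(transactions):
            print(f"robot {i % active_robots} <- {txn}")

    dispatch([f"txn-{n}" for n in range(10)], sla_seconds=8.0, active_robots=2)

With ten transactions at two seconds each against an eight-second SLA, the sketch computes that three robots are required and commissions one beyond the two already active.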



Quote for the day:

"You can build a throne with bayonets, but you can_t sit on it for long." -- Boris Yeltsin