Daily Tech Digest - October 26, 2020

How to hold Three Amigos meetings in Agile development

Three Amigos meetings remove uncertainty from development projects, as they provide a specified time for everyone to get on the same page about what to -- or not to -- build. "The meeting exposes any potential assumptions and forces explicit answers," said Jeff Sing, lead software QA engineer at Optimizely, a digital experience optimization platform. "Everyone walks away with crystal-clear guidelines on what will be delivered and gets ahead of any potential scope creep." For example, a new feature entails new business requirements, engineering changes, UX flow and design. Each team faces its own challenges and requirements. The business requirements focus on a broad problem space, and how to monetize the product. The engineering requirements center on the technical solution and hurdles. The UX requirements define product usability. The design requirements ensure the product looks finished. All of these requirements might align -- or they might not. "This is why a formalized meeting needs to occur to hash out how to achieve everyone's goals, or which requirements will not be met and need to be dropped in order to build the right product on the right time schedule," Sing said.


Key success factors behind intelligent automation

For an intelligent automation programme to really deliver, a strategy and purpose are needed. This could be improving data quality, operational efficiency, process quality and employee empowerment, or enhancing stakeholder experiences by providing quicker, more accurate responses. Whatever the rationale, an intelligent automation strategy must be aligned to the wider needs of the business. Ideally, key stakeholders should be involved in creating the vision; if they haven’t been, engage them now. If they see intelligent automation as a strategic business project, they’ll support it and provide the necessary financial and human resources too. Although intelligent automation is usually managed by a business team, it will still be governed by the IT team using existing practices, so they must also be involved at the beginning. IT will support intelligent automation on many critical fronts, such as compliance with IT security, auditability, the supporting infrastructure, its configuration and scalability. So that intelligent automation can scale as demand increases, plan where it sits within the business. A centralised approach encompasses the entire organisation, so it may be beneficial to embed this into a ‘centre of excellence’ (CoE) or start moving towards creating this operating environment.


Why Most Organizations’ Investments in AI Fall Flat

A common mistake companies make is creating and deploying AI models using Agile approaches fit for software development, like Scrum or DevOps. These frameworks traditionally require breaking down a large project into small components so that they can be tackled quickly and independently, culminating in iterative yet stable releases, like constructing a building floor by floor. However, AI is more like a science experiment than a building. It is experiment-driven: the whole model development life cycle needs to be iterated—from data processing to model development and eventually monitoring—not just built from independent components. These processes feed back into one another; therefore, a model is never quite “done.” ... We know AI requires specialized skill sets—data scientists remain highly sought-after hires in any enterprise. But the data scientists who build the models and the product owners who manage the functional requirements are not the only roles necessary for AI to work. The emerging role of machine-learning engineer is required to help scale AI into reusable and stable processes that your business can depend on. Professionals in model operations (model ops) are specialized technicians who manage post-deployment model performance and are ultimately responsible for ongoing stability and continuity of operations.
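As a minimal sketch of that feedback loop (every helper here is a toy stand-in of ours, not any vendor's API), the life cycle reads less like a pipeline with an end state and more like a loop that keeps re-entering itself:

```python
import random

# Toy stand-ins for real data prep, training, and monitoring components;
# all names are illustrative.
def load_data():
    return [(random.random(), random.random() > 0.5) for _ in range(100)]

def train(data):
    return lambda x: x > 0.5  # a trivially simple "model"

def evaluate(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def drift_detected(train_score, live_score):
    return live_score < train_score - 0.1  # live performance degraded

# The life cycle is a loop: data processing -> development -> monitoring,
# with monitoring feeding back into the next round of data work.
for iteration in range(3):
    data = load_data()                         # data processing
    model = train(data)                        # model development
    train_score = evaluate(model, data)
    live_score = evaluate(model, load_data())  # monitoring on fresh data
    if drift_detected(train_score, live_score):
        continue                               # re-enter the cycle: never "done"
```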


Cybersecurity as a public good

The necessity to privately provision cyber security has resulted in a significant gap between the demand for cyber security professionals and the supply of professionals with appropriate skills. Multiple studies have identified cyber security as the domain with one of the highest skills gaps. When a significant skills gap occurs in the market, it results in two things. The remuneration demanded by the professionals will skyrocket, since there are many chasing the scarce resources. Professionals who are not so skilled will also survive — rather, thrive — since the lack of alternatives means they will continue to be in demand. ...  Security as a public good involves trade-offs with privacy. Whether it is police patrols or CCTV cameras — a trade-off with privacy is imperative to make security a public good. The privacy trade-off risks will be higher in the cyber world because technology provides the capability to conduct surveillance at larger scale and greater depth. It is crucial, delicate — and hence difficult — to strike the right balance between security and privacy such that the extent of privacy sacrificed meets the test of proportionality. However, neither the complexity of the task nor the risks associated with it should deter us from attempting to strike that balance.


The Art and Science of Architecting Continuous Intelligence

Loosely defined, machine data is generated by computers rather than individuals. IoT equipment sensors, cloud infrastructure, security firewalls and websites all throw off a blizzard of machine data that measures machine status, performance and usage. In many cases the same math can analyze machine data for distinct domains, identifying patterns, outliers, etc. Enterprises have well-established processes, such as security information and event management (SIEM) and IT operations (ITOps), that process machine data. Security administrators, IT managers and other functional specialists use mature SIEM and ITOps processes on a daily basis. Generally, these architectures perform similar functions as in the first approach, although streaming is a more recent addition. Another difference is that many machine-data architectures have more mature search and index capabilities, as well as tighter integration with business tasks and workflow. Data teams typically need to add the same two functions to complete the CI picture. First, they need to integrate doses of contextual data to achieve advantages similar to those outlined above. Second, they need to trigger business processes, which in this case might mean hooking into robotic process automation tools.
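A minimal sketch of those two additions, with hypothetical names throughout (the asset table, event shape, and webhook URL are assumptions of ours, not from the article): enrich a machine-data event with business context, then trigger a downstream process.

```python
import json
import urllib.request

# Hypothetical contextual data keyed by device ID.
ASSET_CONTEXT = {"fw-eu-01": {"site": "Frankfurt", "owner": "NetOps"}}

def enrich(event: dict) -> dict:
    # First addition: join the raw machine event with contextual data.
    event["context"] = ASSET_CONTEXT.get(event["device"], {})
    return event

def trigger_rpa(event: dict, url: str = "https://rpa.example.internal/hooks/incident"):
    # Second addition: hand the enriched event to an RPA tool's webhook.
    req = urllib.request.Request(url, data=json.dumps(event).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

alert = {"device": "fw-eu-01", "metric": "cpu_load", "value": 0.98}
trigger_rpa(enrich(alert))
```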


Fintech Startups Broke Apart Financial Services. Now The Sector Is Rebundling

When fintech companies began unbundling, the tools got better but consumers ended up with 15 personal finance apps on their phones. Now, a lot of new fintechs are looking at their offerings and figuring out how to manage all of a person’s personal finances so that other products can be enhanced, said Barnes. “We are not trying to be a bunch of products, but more about how each product helps the other,” Barnes said. “If we offer a checking account, we can see income coming in and be able to give you better access to borrowing. That is the rebuild—how does fintech serve all of the needs, and how do we leverage it for others?” Traditional banking revolves around relationships for which banks can sell many products to maximize lifetime value, said Chris Rothstein, co-founder and CEO of San Francisco-based sales engagement platform Groove, in an interview. Rebundling will become a core part of workflow and a way for fintechs to leverage those relationships to then be able to refer customers to other products, he said. “It makes sense long-term,” Rothstein said. “In financial services, many people don’t want all of these organizations to have their sensitive data. Rebundling will also force incumbents to get better.”


Microsoft Glazes 5G Operator Strategy

Microsoft’s 5G strategy links the private Azure Edge Zones service it announced earlier this year, Azure IoT Central, virtualized evolved packet core (vEPC) software it gained by acquiring Affirmed Networks, and cloud-native network functions it brought onboard when it acquired Metaswitch Networks. Combining those services under a broader portfolio allows Microsoft to “deliver virtualized and/or containerized network functions as a service on top of a cloud platform that meets the operators where they are, in a model that is accretive to their business,” Hakl said.  “We want to harness the power of the Azure ecosystem, which means the developer ecosystem, to help [operators] monetize network slicing, IoT, network APIs … [and] use the power of the cloud” to create the same type of elastic and scalable architecture that many enterprises rely on today, he explained. That vision is split into two parts: the Azure Edge Zones, which effectively extends the cloud to a private edge environment, and the various pieces of software that Microsoft has assembled for network operators. On the latter, Hakl said Microsoft “could have gone out and had our customers teach us that over time. Instead, we acquired two companies that brought in hundreds of engineers that have telco DNA and understand the space.”


Artificial intelligence for brain diseases: A systematic review

Among the various ML solutions, Deep Neural Networks (DNNs) are nowadays considered the state-of-the-art solution for many problems, including tasks on brain images. Such human brain-inspired algorithms have been proven capable of extracting highly meaningful statistical patterns from large-scale and high-dimensional datasets. A DNN is a DL algorithm aiming to approximate some function f*. For example, a classifier can be seen as a mapping y = f(x; θ) that assigns a given input x to a category labeled y, where θ is the vector of parameters that the model learns in order to make the best approximation of f*. Artificial Neural Networks (ANNs) are built out of a densely interconnected set of simple units, where each unit takes a number of real-valued inputs (possibly the outputs of other units) and produces a single real-valued output (which may become the input to many other units). DNNs are called networks because they are typically represented by composing together many functions. The overall length of the chain gives the depth of the model; from this terminology, the name “deep learning” arises.
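To make the "composition of functions" point concrete, here is a minimal sketch of ours (random weights, purely for illustration) of a depth-3 network f(x) = f3(f2(f1(x))):

```python
import numpy as np

# A "network" is literally a composition of functions; its depth is the
# length of the chain. Weights here are random, not trained.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(4, 4))
W3 = rng.normal(size=(2, 4))

def layer(W, x):
    return np.maximum(W @ x, 0.0)   # one layer: linear map plus ReLU nonlinearity

def f(x):
    return W3 @ layer(W2, layer(W1, x))   # f(x) = f3(f2(f1(x))): depth 3

y = f(np.array([0.2, -1.0, 0.5]))   # maps a 3-dim input x to 2 class scores
```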


Things to Consider about Brain-Computer Interface Tech

A BCI is a system that provides a direct connection between your brain and an electronic device. Since your brain runs on electrical signals like a computer, it could control electronics if you could connect the two. BCIs attempt to give you that connection. There are two main types of BCI — invasive and non-invasive. Invasive devices, like the Neuralink chip, require surgery to implant them into your brain. Non-invasive BCIs, as you might’ve guessed, use external gear you wear on your head instead. ... A recent study suggested that brain-computer interface technology, and neurotech in general, could measure worker comfort levels in response to their environment. They could then automatically adjust the lights and temperature to make workers more comfortable and minimize distractions. Since distractions take up an average of 2.1 hours a day, these BCIs could mean considerable productivity boosts. The Department of Defense is developing BCIs for soldiers in the field. They hope these devices could let troops communicate silently or control drones with their minds. As promising as BCIs may be, there are still some lingering concerns with the technology. While the Neuralink chip may be physically safe, it raised a lot of questions about digital security.


Microsoft did some research. Now it's angry about what it found

A fundamental problem, said Brill, is the lack of trust in society today. In bold letters, she declared: "The United States has fallen far behind the rest of the world in privacy protection." I can't imagine it's fallen behind Russia, but how poetic if that were true. Still, Brill really isn't happy with our government: "In total, over 130 countries and jurisdictions have enacted privacy laws. Yet, one country has not done so yet: the United States." Brill worries our isolation isn't too splendid. She mused: "In contrast to the role our country has traditionally played on global issues, the US is not leading, or even participating in, the discussion over common privacy norms." That's like Microsoft not participating in the creation of excellent smartphones. It's not too smart. Brill fears other parts of the world will continue to lead in privacy, while the US continues to lead in inaction and chaos. It sounds like the whole company is mad as hell and isn't going to take it anymore. Yet it's not as if Microsoft has truly spent the last 20 years championing privacy much more than most other big tech companies. In common with its west coast brethren, it's been too busy making money.



Quote for the day:

"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold

Daily Tech Digest - October 25, 2020

Meet modern compliance: Using AI and data to manage business risk better

Strong, tech-enabled, third-party risk management capabilities can strengthen corporate governance, which will in turn enhance reputation and build trust. In essence, compliance should no longer be seen simply as a backroom cost center. Rather, it is a means of strengthening the business brand, increasing productivity, and driving growth of market share, with relevance in the C-suite and at the board level. ... “By engaging early in the sales contract life cycle and providing compliance oversight and ongoing risk education, we [at Microsoft] have been able to realize better, more compliant deal construction. This is critical at quarter-end when deal volumes spike. Sellers internalize the risk guidance and proactively ensure their contract meets the company’s compliance standards — often reducing monetary concessions, which improves margin and profitability.” Four years ago, PwC and Microsoft worked closely together to further develop a tech-enabled compliance analytics suite of tools called Risk Command. “We started the journey to respond to internal and external pressures to embrace a ‘data-driven’ approach,” Gibson recalled. “But it appears to be what regulators are now expecting and serves as a benchmark for what others may want to do.”


Is The Cybersecurity Industry Selling Lemons? Apparently Lots Of Important CISOs Think it Is

If it’s true that poor products have contributed to the success of cyberattacks, then something must be wrong, but what? The report’s thesis – which borrows its title from economist George Akerlof’s Nobel Prize-winning 1970 paper on the same topic – doesn’t sugar-coat it: cybersecurity has become an industry that keeps churning out lemons that not enough people complain about. Searing tech skepticism is nothing new, of course – Clifford Stoll’s Silicon Snake Oil or Michael Lewis’s satirical The New New Thing come to mind – but those were about issues (the Internet will go wrong, dotcom excess) that people have already processed. Cybersecurity, by contrast, is all that stands between us and a world where criminality contorts the system in ways that cost livelihoods and whole economies. Bad cybersecurity isn’t just inconvenient, it’s dangerous, and somebody needs to say this now. The authors believe the underlying problem is economic rather than technical. Technology doesn’t work as claimed because the market relationship between customers and vendors has broken down. This manifests as an ‘information asymmetry’, where vendors know how good their product is, but their customers not only don’t know but don’t have time to find out.


How advanced AI language tools could change the workplace

Within the last decade, some of the most notable breakthroughs in artificial intelligence (AI) have come in the form of computer vision. Essentially giving robotics systems ‘eyesight’, that is, the ability to identify and classify objects using image or video recognition, the technology has been put to use in everything from facial recognition systems and quality control in manufacturing to spotting anomalies in MRI scans and self-driving vehicle systems. And while computer vision applications are still comparatively nascent, the ‘breakthrough’ AI applications of the decade ahead might well come in the form of advances in language-based applications. AI research and deployment company OpenAI developed the largest language model ever created this year, GPT-3. The software can generate human-like text on demand and is set to be turned into a commercial product later this year, as a paid-for subscription via the cloud for businesses. It represents a leap forward from previous approaches to language processing, which used hand-coded rules, statistical techniques and, increasingly, artificial neural networks that can learn from raw data with less reliance on data labelling.
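For a flavor of what "a paid-for subscription via the cloud" could look like in practice, here is a sketch modeled on the early GPT-3 beta Python client; the API key, engine name, and prompt are placeholders, and the exact interface is an assumption of ours, not something described in the article.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# Ask the hosted model to complete a prompt; no rules or labels are hand-coded.
response = openai.Completion.create(
    engine="davinci",            # hosted model variant (example name)
    prompt="Summarize this support ticket in one sentence:\n...",
    max_tokens=60,               # cap the length of the generated text
    temperature=0.7,             # higher values give more varied output
)
print(response.choices[0].text)
```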


Open-source software detects potential collisions in radiotherapy plans

The RadCollision software needs to be embedded into each TPS database, and a folder with the 3D models of the machines (STL files) must be prepared. RadCollision is currently limited to use with the RayStation TPS, but versions for use with other commercial TPSs are planned, says first author Fernando Hueso-González. The researchers quantitatively evaluated their software using the RayStation TPS with four patient treatment plans that had been found infeasible during previous collision checks by therapists. The software reported collisions with the couch at angles similar to those reported experimentally. The team also tested the software with a model of a proton treatment room and a robotic patient positioning system. “In one case, we tested in the RadCollision software a beam where the dosimetrist doubted that there was enough clearance with the toe of a patient’s foot. RadCollision predicted that clearance would be very tight, but that the plan was feasible,” comments Remillard. “When we performed a dry run, there was no collision.” The team note that the reliability of the collision assessment depends upon the accuracy of the input data.
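As an illustration of the kind of geometric check involved, here is a sketch of ours using the open-source trimesh library, not RadCollision's actual code; the STL file names and the rotation axis are placeholders.

```python
import math
import trimesh
from trimesh.collision import CollisionManager  # requires python-fcl
from trimesh.transformations import rotation_matrix

# Load STL models of the moving machine part and the static couch/patient.
gantry = trimesh.load("gantry.stl")
couch = trimesh.load("couch_with_patient.stl")

manager = CollisionManager()
manager.add_object("couch", couch)

# Sweep the gantry through its arc and flag angles where the meshes intersect.
for angle_deg in range(0, 360, 5):
    T = rotation_matrix(math.radians(angle_deg), [0, 1, 0])  # placeholder axis
    if manager.in_collision_single(gantry, transform=T):
        print(f"potential collision at gantry angle {angle_deg} degrees")
```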


Technology is about to destroy millions of jobs. But, if we're lucky, it will create even more

CIOs are effectively banking on AI systems and machines to carry out tasks that would have previously been taken on manually. For example, the WEF predicts that in 2025, machines will be performing up to 65% of information and data processing and retrieval, leaving only 35% of the job to humans. This means that some roles are set to become increasingly redundant in the next few years. Data entry clerks, accountants and auditors, and factory workers are among the jobs that the WEF expects to be particularly displaced by automation. At the same time, growth in so-called "jobs of tomorrow" will offset the lack of demand for workers in jobs that can be filled by machines. Leading the polls for positions in growing demand are roles linked to the green economy, data, AI, and cloud computing. Think data analysts, machine-learning specialists, robotics engineers or software developers. Jumping from a redundant job to one in high demand is no easy challenge. The "jobs of tomorrow" will require new skills; in fact, the vast majority of employers (94%) surveyed by the WEF said that they expected employees to pick up new skills on the job. The past few months have seen employees and employers alike getting started with tackling the issue. 


How Artificial Intelligence is Transforming the Insurance Space

Although insurance CEOs are conscious of the digital disruption breaking through the industry, it will be a whole new challenge to keep up with these revolutionary changes and to see beyond the plain integration of modern technology. Intelligent solutions must be innovative enough to foster better customer relationships and deliver customer experience in a way that strikes the much-needed balance between incipient market expectations and cost optimization. Apart from these, another pressure point is coming from emerging InsurTech entrants, who are creating tough competition by building affordable solutions to reach and serve customers. What is reassuring is that, to surpass this challenge, industry leaders are prepared to embrace new innovative possibilities and appreciate the role of creativity in evolving their processes and becoming a beloved brand in the financial marketplace. Over the last two years, we have seen the widespread advent and adoption of AI across multiple industries (be it hospitality or healthcare). The idea of digital technologies ruling the financial market isn’t exactly new, since Nasdaq in its early days established a secure connected network of trading desks for integrated customer data records.


Voice Payment in Banking: The New Revolution in Fintech

Voice recognition methods use biometric data to identify who’s speaking with virtual assistants. Robotic assistants have gone through so many changes and updates that you won’t be able to tell whether it’s a human or an AI talking to you. However, there are a lot of privacy concerns around smart speakers: 33% of surveyed U.S. adults said they had security concerns that restrained them from purchasing the devices. In January 2019, an estimated 26% of people showed strong concern about speakers’ privacy risks; the number jumped to 30% in January 2020. The reason is the exposure of recorded conversations. All of the world’s biggest voice assistant providers (Amazon, Google, Apple, Microsoft, Facebook) listen to some utterances, because machine learning won’t be efficient without improvements to the conversations between humans and devices. However, in some situations there were real leaks of consumers’ secret information, which caused many doubts and marked privacy as the key risk in voice assistant technology. AI updates will facilitate the ability to understand accents, dialects, intonations and more. Fingerprints are unique biometric data and an important security measure; 48% of people have used biometrics to make a payment.


How blockchain is used to transform the lives of people in marginalised communities

A key aim of the Building Blocks project is to provide people in refugee camps with the means of buying food and necessities quickly and securely using direct cash transfers. Another objective is to ensure they no longer have to worry about food vouchers being lost or stolen or about third party organisations, such as banks, having access to their personal data. Direct cash transfers, according to WFP research, are often the most effective and efficient way to distribute humanitarian assistance as well as support local economies. But being able to distribute it relies on the support of local financial institutions, which are not always in a position to do so, not least because many refugees face restrictions in opening bank accounts. To try and address the situation, in early 2017 the WFP introduced a proof-of-concept blockchain-based system to register and authenticate transactions in Sindh province, Pakistan, which did not require a bank to act as an intermediary to connect both parties. The system is now being used to support 106,000 Syrian refugees in the Azraq and Zaatari camps in Jordan and 500,000 Rohingyas in the Cox's Bazar camp in Bangladesh.


Chip industry is going to need a lot more software to catch Nvidia’s lead in AI

"Software is the hardest word," quipped Gwennap, referring to the struggles of competitors. He noted how companies either don't support some aspects of popular AI frameworks, such as TensorFlow, or how some AI applications for competing chips may not even compile properly. "To compete against deep software stacks from companies such as Nvidia and Intel, these vendors must support a broad range of frameworks and development environments with drivers, compilers, and debug tools that deliver full acceleration and optimal performance for a variety of customer workloads." ... The use of AI is spreading from cloud computing data centers where it has traditionally been developed to embedded devices in automobiles and infrastructure. Vendors such as the UK's Imagination and Think Silicon, a division of chip equipment giant Applied Materials, are pushing the boundaries in low-power designs that can go into power-constrained devices, such as battery-powered, microcontroller gadgets.  The stakes seem suddenly higher since Nvidia announced last month that it intends to buy Arm Plc for $40 billion. Arm makes the intellectual property at the heart of all the chips made by all the challengers in the chip industry. Hence, Nvidia's software is poised to gain even greater sway.


JP Morgan Veteran Daniel Masters Explains How Blockchain Will End Commercial Banks

The most interesting aspect of CBDCs is the impact they will have on commercial banks and the financial system as a whole. Today, central banks issue currency to a slew of commercial banks like Chase and Bank of America. These banks do two things—create products and services such as mortgages, and deal with the end users. I think we are going into a new paradigm where central banks issue CBDCs, commercial banks cease to exist and the service layer is filled by crazy new emerging companies like Compound Finance, Uniswap, SushiSwap, and people that are really getting distributed, decentralized finance done today. Then the final interesting layer is who actually faces the consumer. You can already see that there are multiple choices. Coinbase would like to get to all the users, as would Binance though probably not in America. You’ve got wallet infrastructures like Blockchain.com that already have 50 million outstanding wallets. That said, you could get incumbents as well. Samsung is putting chips into phones now, making them essentially hardware wallets. Amazon could come out with a digital wallet. Whoever owns that level at the bottom is critical.



Quote for the day:

"We are drowning in information, but starved for knowledge." -- John Naisbitt

Daily Tech Digest - October 24, 2020

How will self-driving cars affect public health?

The researchers created a conceptual model to systematically identify the pathways through which AVs can affect public health. The proposed model summarizes the potential changes in transportation after AV implementation into seven points of impact: transportation infrastructure; land use and the built environment; traffic flow; transportation mode choice; transportation equity; jobs related to transportation; and traffic safety. The changes in transportation are then mapped to potential health impacts. In optimistic views, AVs are expected to prevent 94% of traffic crashes by eliminating driver error, but AVs’ operation introduces new safety issues, such as the potential for malfunctioning sensors when detecting objects, misinterpretation of data, and poorly executed responses, which can jeopardize the reliability of AVs and cause serious safety consequences in an automated environment. Another possible safety consideration is the riskier behavior of users because of their overreliance on AVs—for example, neglecting the use of seatbelts due to an increased false sense of safety. AVs have the potential to shift people from public transportation and active transportation, such as walking and biking, to private vehicles in urban areas, which can result in more air pollution and greenhouse gas emissions and create the potential loss of driving jobs for those in the public transit or freight transport industries.


Now’s The Time For Long-Term Thinking

For most financial institutions, the strategic planning process for 2021 is far different than any in the past. As opposed to an iterative adjustment to the previous year’s plans, this year’s planning must take into account a level of change in technology, competition, consumer behaviors, society and many other areas that is far less defined than before. The uncertainty about the future requires a combination of a solid strategic foundation with sensing capabilities and the ability to respond to threats and opportunities as quickly as possible. For many banks and credit unions, this will require organizational restructuring, the reallocation of resources, revamped processes, new outside partners and a culture that supports a flexibility in plans that was never required before. There is also the need to build a marketplace-sensing capability across the entire organization, drawing from a broader array of sources. These include customers, internal staff (especially customer-facing employees), suppliers, strategic partners, research organizations, boards of directors and even the competition. Gathering the insights is only half the battle. There must also be a centralized location to gather and analyze the insights collected.


Rapid Threat Evolution Spurs Crucial Healthcare Cybersecurity Needs

Cybercriminals have been actively taking advantage of the global pandemic, with an increase in cyberattacks, phishing, spear-phishing, and business email compromise (BEC) attempts. And on the healthcare side of things, NCSA Executive Director Kelvin Coleman said it’s not a huge surprise. Even in the early 1900s, during the Spanish flu pandemic, people would place articles in newspapers to take advantage of the crisis with hoaxes and scams, Coleman explained. “Bad actors take advantage of crises,” he said. “Hackers are being aggressive, leveraging targeted emails and phishing attempts.” Josh Corman, cofounder of IAmTheCavalry.org and DHS CISA visiting researcher, stressed that when a provider is forced into EHR downtime and has to divert patient care, it’s even more nightmarish during a pandemic. In Germany, a patient died earlier this month after a ransomware attack shut down operations at a hospital and she was diverted to another hospital. These are criminals without scruples, Corman explained. The attacks were happening before the pandemic, but there’s been no ceasefire amid the crisis. In healthcare, hackers continue to rely on previously successful attack methods, especially phishing, which remains effective.


FBI, CISA: Russian hackers breached US government networks, exfiltrated data

US officials identified the Russian hacker group as Energetic Bear, a codename used by the cybersecurity industry. Other names for the same group also include TEMP.Isotope, Berserk Bear, TeamSpy, Dragonfly, Havex, Crouching Yeti, and Koala. Officials said the group has been targeting dozens of US state, local, territorial, and tribal (SLTT) government networks since at least February 2020. Companies in the aviation industry were also targeted, CISA and FBI said. The two agencies said Energetic Bear "successfully compromised network infrastructure, and as of October 1, 2020, exfiltrated data from at least two victim servers." The intrusions detailed in today's CISA and FBI advisory are a continuation of attacks detailed in a previous CISA and FBI joint alert, dated October 9. The previous advisory described how hackers had breached US government networks by combining VPN appliances and Windows bugs. Today's advisory attributes those intrusions to the Russian hacker group but also provides additional details about Energetic Bear's tactics. According to the technical advisory, Russian hackers used publicly known vulnerabilities to breach networking gear, pivot to internal networks, elevate privileges, and steal sensitive data.


Secure NTP with NTS

NTP can be secured well with symmetric keys. Unfortunately, the server has to have a different key for each client and the keys have to be securely distributed. That might be practical with a private server on a local network, but it does not scale to a public server with millions of clients. NTS includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. It uses Transport Layer Security (TLS) on TCP port 4460. It is designed to scale to very large numbers of clients with a minimal impact on accuracy. The server does not need to keep any client-specific state. It provides clients with cookies, which are encrypted and contain the keys needed to authenticate the NTP packets. Privacy is one of the goals of NTS. The client gets a new cookie with each server response, so it doesn’t have to reuse cookies. This prevents passive observers from tracking clients migrating between networks. The default NTP client in Fedora is chrony. Chrony added NTS support in version 4.0. The default configuration hasn’t changed. Chrony still uses public servers from the pool.ntp.org project and NTS is not enabled by default. Currently, there are very few public NTP servers that support NTS. The two major providers are Cloudflare and Netnod.
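As a minimal sketch, enabling NTS in chrony 4.0 comes down to adding the nts option to server lines in /etc/chrony.conf; the hostnames below are examples of public NTS servers from the two providers mentioned:

```
# /etc/chrony.conf – use NTS-authenticated time sources (example hostnames)
server time.cloudflare.com iburst nts
server nts.netnod.se iburst nts

# keep NTS keys and cookies across restarts
ntsdumpdir /var/lib/chrony
```

After restarting chronyd, running chronyc -N authdata shows whether NTS authentication is active for each source.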


Non-Intimidating Ways To Introduce AI/ML To Children

The brainchild of IBM, Machine Learning for Kids is a free, web-based tool to introduce children to machine learning systems and applications of AI in the real world. Machine Learning for Kids is built by Dale Lane using APIs from IBM Watson. It provides hands-on experiments to train ML systems that recognise texts, images, sounds, and numbers. It leverages platforms such as Scratch and App Inventor to create interesting projects and games. It is also being used in schools as a significant resource to teach AI and ML to students. Teachers can also form their own admin page to manage their access to students. A product from the MIT Media Lab, Cognimates is an open-source AI learning platform for young children starting from age 7. Children can learn how to build games, robots, and train their own AI modes. Like Machine Learning for Kids, Cognimates is also based on Scratch programming language. It provides a library of tools and activities for learning AI. This platform even allows children to program intelligent devices such as Alexa. Another offering from Google in order to make learning AI fun and engaging is AIY. The name is an intelligent wordplay with AI and do-it-yourself (DIY).


How RPA differs from conversational AI, and the benefits of both

Enterprises are working to digitally transform core business processes to enable greater automation of backend processes and to encourage more seamless customer experiences and self-service at the frontend. We are seeing banks, insurers, retailers, energy providers and telcos working to develop their own digital assistants with a growing number of skills, while still providing a consistent brand experience. Developing bots doesn’t have to be complex. It is more important to carefully identify the right use cases where these technologies will deliver clear ROI with the least amount of effort. Whether an enterprise is applying RPA or conversational AI, or both, it’s important to first understand the business problem that needs to be solved, and then identify where bots will make an immediate difference. Then consider the investment required, barriers to successful implementation, and the expected business outcomes. It’s better to start small with a narrowly focused use case and achievable KPIs, rather than trying to do too much at once. Conversational AI and RPA are very powerful automation technologies. When designed well, a chatbot can automate up to 80% of routine queries that come into a customer service centre or IT helpdesk, saving an organisation time and money and enabling it to scale its operations.


Things to consider when running visual tests in CI/CD pipelines: Getting Started

Testing – it’s an important part of a developer’s day-to-day, but it’s also crucial to the operations engineer. In a world where DevOps is more than just a buzzword, where it’s become accepted as a mindset shift and culture change, we all need to consider running quality tests. Traditional testing may include UI testing, integration testing, code coverage checks, and so forth, but at some point, we still need eyeballs on a physical page. How many times have we seen a funny-looking page because of CSS errors? Or worse yet, an important button like, say, “Buy now” missing because someone changed the CSS and now the button blends in with the background? Logically, the page still works, and even from a traditional test perspective, the button can be clicked, and the DOM (used in UI test verification) is perfect. Visually, however, the page is broken; this is where visual testing comes into play. Visual testing allows us to use automated UI testing with the power of AI to help us determine if a page “looks right” aside from just “functions right.” Earlier this year, I partnered with Angie Jones from Applitools in a joint webinar where we talked about best practices as they pertain to both visual testing and CI/CD. This blog post is a summary of that webinar and how to handle visual testing in CI/CD.
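For a flavor of what such a check looks like in code, here is a minimal sketch using the Applitools Eyes Selenium SDK; the API key, app and test names, and URL are placeholders, and the exact calls may differ between SDK versions.

```python
from selenium import webdriver
from applitools.selenium import Eyes  # pip install eyes-selenium

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"  # placeholder credential

try:
    eyes.open(driver, app_name="Storefront", test_name="Home page renders")
    driver.get("https://example.com")
    eyes.check_window("Home")  # AI compares this screenshot against the baseline
    eyes.close()               # fails the test if visual differences are found
finally:
    eyes.abort()               # cleans up if close() was never reached
    driver.quit()
```

A check like this would catch the "Buy now" button blending into the background even though the DOM and click-based assertions still pass.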


Design patterns – for faster, more reliable programming

Every design has a pattern and everything has a template, whether it be a cup, house, or dress. No one would consider attaching a cup’s handle to the inside – apart from novelty item manufacturers. It has simply been proven that these components should be attached to the outside for practical purposes. If you are taking a pottery class and want to make a pot with handles, you already know what the basic shape should be. It is stored in your head as a design pattern, in a manner of speaking. The same general idea applies to computer programming. Certain procedures are repeated frequently, so it was no great leap to think of creating something like pattern templates. In our guide, we will show you how these design patterns can simplify programming. The term “design pattern” was originally coined by the American architect Christopher Alexander who created a collection of reusable patterns. His plan was to involve future users of the structures in the design process. This idea was then adopted by a number of computer scientists. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (sometimes referred to as the Gang of Four or GoF) helped software patterns break through and gain acceptance with their book “Design Patterns – Elements of Reusable Object-Oriented Software” in 1994.
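As an illustration (our example, not from the guide), here is one classic GoF pattern, Observer, reduced to a minimal Python sketch:

```python
# Observer: a subject notifies registered observers of events without
# knowing anything about their concrete types.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer.update(event)

class Logger:
    def update(self, event):
        print(f"logged: {event}")

feed = Subject()
feed.attach(Logger())
feed.notify("new article published")  # every attached observer reacts in its own way
```

The template is reusable: swap Logger for any object with an update method and the Subject code never changes, which is exactly the kind of proven, repeatable shape a design pattern captures.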


Public and Private Blockchain: How to Differentiate Them and Their Use Cases

Public blockchain is the model of Bitcoin, Ethereum, and Litecoin and is essentially considered to be the original distributed ledger structure. This type of blockchain is completely open and anyone can join and participate in the network. It can receive and send transactions from anybody in the world, and can also be audited by anyone who is in the system. Each node (a computer connected to the network) has as much transmission and power as any other, making public blockchains not only decentralized, but fully distributed, as well. ... Private blockchains, on the other hand, are essentially forks of the originator but are deployed in what is called a permissioned manner. In order to gain access to a private blockchain network, one must be invited and then validated by either the network starter or by specific rules that were put into place by the network starter. Once the invitation is accepted, the new entity can contribute to the maintenance of the blockchain in the customary manner. Due to the fact that the blockchain is on a closed network, it offers the benefits of the technology but not necessarily the distributed characteristics of the public blockchain.
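The structural difference boils down to who may join and write to the network. A toy sketch of ours, not from the article:

```python
# Toy contrast between the two access models described above.
class PublicChain:
    def can_join(self, node_id: str) -> bool:
        return True  # open: anyone may participate, transact, and audit

class PrivateChain:
    def __init__(self, invited: set):
        self.invited = invited  # invitations controlled by the network starter

    def can_join(self, node_id: str) -> bool:
        return node_id in self.invited  # permissioned: invite plus validation

print(PublicChain().can_join("anyone"))                # True
print(PrivateChain({"partner-a"}).can_join("anyone"))  # False
```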



Quote for the day:

"Every moment is a golden one for those who have the vision to recognize it as such." -- Henry Miller