Daily Tech Digest - July 13, 2020

How to choose a robot for your company

There are lots of reasons a company might entertain automating processes with robots. According to Kern, the main reason is a labor shortage. Prior to COVID-19-related slowdowns, a competitive labor landscape and rising costs of living in many countries around the globe made hiring tough for skilled and unskilled positions alike. Automation, which often promises ROI efficiencies over time, particularly when it comes to repeatable tasks, is an attractive solution. "Robots can save money over time, not just by directly eliminating human labor, but by cutting out worker training and turnover," according to the Lux report for which Kern served as lead. "Most companies turn to automation and robotic solutions to deal with labor shortages, which is common in industries with repetitive tasks that have a high employee turnover rate. Companies also frequently use robots to automate dangerous tasks, keeping their employees out of harm's way." Post-COVID-19, there are also considerations like sanitation and worker volatility. As I've written, the perception of automation is changing almost overnight. Where robots were once, very recently, associated primarily with lost jobs, there's been a new spin in the industry to tout automation solutions as commonsense in a world where workers are risking infection when they show up at physical locations.

How the cloud fractures application delivery infrastructure ops

The traditional infrastructure team still operates ADCs and load balancers in the data center, while preferring the vendors they have worked with in the past. DevOps and CloudOps have taken control in the public cloud, choosing to use software and cloud provider services that are more integrated with their DevOps toolchains. This fractured operations model is problematic. Companies with divided Layer 4-7 operations are less likely to be successful with this infrastructure. EMA research participants also revealed why they feel a need to close this operational gap. First, 43% of enterprises said this situation has introduced security risks. In most enterprises, application delivery infrastructure is an important component of overall security architecture. Companies need to take a unified approach to network security. Research participants identified compliance problems (36%) and operational efficiency (36%) as the top secondary challenges associated with fractured operations. And 30% said platform problems -- such as issues with scale, performance, functionality or stability -- are a major challenge.

The enormous opportunity in fintech

Technology providers to specific areas of finance have created significant businesses. Across the insurance ecosystem, Guidewire, Applied Systems, and Vertafore capture $10 billion of value. Black Knight, the leading analytics provider to the mortgage industry, is an $11 billion business. Are you thinking about managing financial documents for your public company? You may turn to Broadridge, which makes a pretty penny in this business, boasting a $13 billion market cap. While these are massive markets, it is not easy to disrupt incumbents. A combination of regulatory hurdles, entrenched behavior, low risk-tolerance, and the benefits of larger balance sheets have kept upstarts at bay for decades. However, as venture capital supports the ecosystem, modern technology creeps into the sector (cloud, APIs), connectivity and data exchanges improve, and consumers grow tired of incumbents, the tide continues to shift. This shift and the challenge to the status quo by fintech upstarts will have lasting effects. Even when incumbents acquire their biggest disruptors, such as Visa’s acquisition of Plaid, innovations pioneered by those startups become integrated into the system and help move the industry forward.

Somehow, Microsoft is the best thing to happen to Chrome

What strange times we live in. Who’d have thought that I’d be writing an article on how Microsoft is the best thing to happen to Google Chrome? A few years ago the idea of Microsoft getting involved in an open source project would have caused a mixture of laughter and dread. You know… Microsoft, the foe of open source whose CEO once said that Linux was “a cancer that attaches itself in an intellectual property sense to everything it touches.” The company that couldn’t make a decent web browser to save its life. But, believe it or not, I really do think that Microsoft’s involvement has made Chrome a much better browser. ... Basically, since dropping its opposition to open source, and not only embracing it, but putting its money where its mouth is, the thought of Microsoft being involved with an open source project is no longer the stuff of nightmares. It’s proved to be a valuable contributor to the open source community already. But how does this affect Google’s Chrome browser? Well, ever since Microsoft stopped using its own web engine, EdgeHTML, for its Edge web browser, and instead built a brand-new version that’s based on Chromium, it’s been contributing a steady stream of fixes and new features to Chromium – and those have not just been benefitting Edge, but Chrome as well.

IBM just changed the automation game. Hello Extreme Automation

The technology provides a low-code, cloud-based authoring experience for the business user to create bot scripts with a desktop recorder, without the need for IT. These scripts are executed by digital robots to complete tasks. Digital robots can be run on demand by the end user or by an automated scheduler. Arguably, WDG is on a par with Softomotive – acquired by Microsoft for considerably more money. What is clear is that these RPA firms offer pretty much the same functionality for basic scripting and recording. WDG is focused heavily on quality customer service ops and is great at integrating with chatbots, digital associates and other AI tools. Pre-COVID, most RPA was focused on low-risk back-office processes, especially in finance. Now customers are desperate to automate customer-facing and revenue-generating processes and need tools proven to work in these environments. No one has a huge advantage in the CX automation space, so this provides a greenfield opportunity for IBM. The WDG automation software sits under IBM Cognitive and Cloud, giving it a broader playing field to compete with the likes of MSFT, Pega, Appian, and even ServiceNow. Arguably, this is the real play that excites IBM’s top brass.

The Importance of Domain Experience in Data Science

Restated — domain knowledge is the learned skill to communicate fluently in a group’s data dialect. Its component parts are: general business acumen + vertical knowledge + data lineage understanding. For example, a data scientist in people analytics requires a foundational knowledge of the business + human resources + the inner-workings of their company’s HR tools and processes which create the data they work with. Those processes and other inputs to the dataset are crucial. A data scientist can’t create meaningful insights before they understand what the data is saying today. Is it telling a story? Is it, or subsets of it, too polluted to use today? Are some data points proxies for or inputs to others? The more complex your business processes and associated data lineage, the longer your data dialect will take to learn. For digital native companies whose data collection is automated with intuitive dialects (i.e. a “click” is a “click”), domain knowledge can be developed much more quickly than for large, longstanding companies which have undergone transformations, acquisitions and/or divestitures. If you hire a data scientist, how long will it take them to learn your data dialect? And can you provide air cover for them to do so before applying pressure to produce “insights?”

Hiring developers: While coding is important, there are other things to consider

A recruiter can learn a lot about the candidate in that half hour, including any side projects they might be involved in or games they've written. These "are often a window into a developer's willingness to take initiative," Volodarsky said. Learning what a developer does in their spare time can also provide great insight into their personality, he said. "Hiring great coders is important, but you also want to collaborate with interesting people, too." When it comes to hiring freelance developers it's important that they understand both the code and the nuances of the business they're contracting for, and this will come through in that conversation over a falafel, or the like, he said. In terms of motivating factors, not surprisingly, an overwhelming 70% said they were looking for better compensation, while 58.5% said they want to work with new technologies, and 57% said they were curious about other opportunities. Close to 70% of respondents said they learn about a company during a job hunt by turning to reviews on third-party sites such as Glassdoor and Blind. However, a large number also said they learned from viewing company-sponsored media, such as blogs and company culture videos.

Is Singapore ready to govern a digital population?

Singapore over the past several years has invested significant resources towards becoming a digital economy, rolling out an ambitious smart nation roadmap, driving the adoption of emerging technologies, and overhauling its own ICT infrastructure. With the global pandemic now adding new impetus to digital transformation, the government has made a concerted effort to drive digital adoption deeper into the business community and local population. It established a new office to work alongside the business community and local population to push the "national digitalisation movement". Initiatives would include the deployment of 1,000 "digital ambassadors" to help stallholders and seniors go digital and setting up of 50 digital community hubs across the island to offer one-to-one assistance on digital skills. A new ministerial committee will also coordinate the country's digitalisation efforts and focus on priorities such as assisting people in learning new skills and galvanising small businesses to go digital. More funds and resources have been further directed to facilitate digital transformation initiatives.

AIOps tools expand as users warm slowly to autoremediation

AIOps has generated industry hype since 2017, as advances in machine learning algorithms prompted IT monitoring vendors to envision a new method of automation for their products. At the same time, complex microservices infrastructures became impossible to manage entirely by human hands alone. Since then, AIOps tools have grown more sophisticated, adding automated remediation features to event correlation and automated root cause analysis, and AIOps vendors that began in specialized areas have also broadened the workloads their tools can support. Most recently, those vendors include Epsagon, which emerged in 2018 with AI-supported distributed tracing for serverless environments and expanded in 2019 to include container and cloud workloads. It now offers AIOps features it calls Applied Observability, which automate menial incident resolution tasks in response to metrics and logs in addition to traces. Last month, Epsagon launched a partnership with Microsoft centered on Kubernetes environments after previously inking a deal with AWS focused on its Lambda serverless compute service.

How Microfrontends Can Help to Focus on Business Needs

The concept of building sites from small web applications integrated via hyperlinks is (still) very common. There have also been a lot of concepts of rendering pages from smaller, independent building blocks in the past, such as Java Portlets. Even if the term microfrontend nowadays is used to refer to modern JavaScript apps, there are multiple possible approaches. So, when I use it in this article I refer to an application that:

- is basically a JavaScript rich client (for example a SPA or a Web Component) that runs isolated within an arbitrary DOM node and is as small and performant as possible;
- does not install global libraries, fonts, or styles;
- does not assume anything about the site it is embedded in; especially, it does not assume any existing paths, so all the base paths to assets and APIs must be configurable;
- has a well-defined interface consisting of the startup configuration and some runtime messages (events);
- should be instantiable;
- ideally inherits the shared styles from the site and ships only the styles absolutely necessary to define its layout.

Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - July 12, 2020

Study Reveals a ‘Skills Gap’ That Jeopardizes Future of Banking Workforce

Over a period of only a couple of months, entire workforces were required to familiarize themselves with digital tools that were never needed in a traditional work environment. At the same time, financial institutions were required to connect with customers using mobile apps, online tools and digital engagement capabilities that were foreign to many. The impact of these changes was felt most by the employees who had been with their financial institution the longest or were in areas of an organization that had not adjusted to recent marketplace realities. Many financial institutions responded to internal and external digital needs with mid-term solutions, understanding that significantly more is needed. The impact of COVID-19 has forced banks and credit unions to quickly assess the digital competency of their teams, while looking to internal training and the marketplace to provide longer-term solutions. This comes at a time when every industry is looking to address a massive digital and technology skills gap. The research from the Digital Banking Report found that 72% of financial services executives believed there was either a moderate (37%) or significant (35%) skills gap. Fewer than three in ten thought the gap was minor or nonexistent.

Deployment and Productionization of Machine Learning Models

A machine learning infrastructure encompasses almost every stage of the machine learning workflow. To train, test, and deploy machine learning models you need data scientists, data engineers, software engineers, and DevOps engineers. The infrastructure allows people from all these domains to collaborate on an end-to-end execution of the project. Some examples of tools and platforms are AWS (Amazon Web Services), Google Cloud, Microsoft Azure Machine Learning Studio, and Kubeflow, the machine-learning toolkit for Kubernetes. Architecture deals with the arrangement of these components (the things discussed above) and also takes care of how they interact with one another. Think of it as building a machine learning home, where bricks, concrete, and iron are integral to the infrastructure, applications, etc., and the architecture shapes the home by using these materials. Similarly, the architecture here provides the interaction among these components. ... In machine learning, different models are built for the given data, and we keep track of them through version control tools like DVC and Git. Version control keeps track of changes made to the model at each stage and maintains a repository.
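As a rough illustration of the versioning idea (this is not DVC's or Git's actual API; the registry class, `fingerprint` helper, and parameter names below are hypothetical), each recorded model state can be tied to a content hash of its parameters and training data, so any version can later be matched back to the exact inputs that produced it:

```python
import hashlib
import json

def fingerprint(params: dict, data_rows: list) -> str:
    """Content hash over model parameters and training data."""
    payload = json.dumps({"params": params, "data": data_rows}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

class ModelRegistry:
    """Toy registry: one entry per tracked model state."""
    def __init__(self):
        self.history = []  # list of (version, fingerprint, note)

    def track(self, params: dict, data_rows: list, note: str = ""):
        version = len(self.history) + 1
        fp = fingerprint(params, data_rows)
        self.history.append((version, fp, note))
        return version, fp

registry = ModelRegistry()
v1, fp1 = registry.track({"lr": 0.1, "depth": 3}, [[1, 2], [3, 4]], "baseline")
v2, fp2 = registry.track({"lr": 0.05, "depth": 3}, [[1, 2], [3, 4]], "lower lr")
# The fingerprint changes whenever the params or the data change, so every
# model state in the history can be reproduced from its recorded inputs.
```

Tools like DVC apply the same principle at file scale, hashing datasets and model artifacts alongside the Git history of the code.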

Seamlessly Scaling AI for Distributed Big Data

Conventional approaches usually set up two separate clusters, one dedicated to Big Data processing, and the other dedicated to deep learning (e.g., a GPU cluster), with “connector” (or glue code) deployed in between. Unfortunately, this “connector approach” not only introduces a lot of overheads (e.g., data copy, extra cluster maintenance, fragmented workflow, etc.), but also suffers from impedance mismatches that arise from crossing boundaries between heterogeneous components (more on this in the next section). To address these challenges, we have developed open source technologies that directly support new AI algorithms on Big Data platforms. ... Before diving into the technical details of BigDL and Analytics Zoo, I shared a motivating example in the tutorial. JD is one of the largest online shopping websites in China; they have stored hundreds of millions of merchandise pictures in HBase, and built an end-to-end object feature extraction application to process these pictures (for image-similarity search, picture deduplication, etc.). While object detection and feature extraction are standard computer vision algorithms, this turns out to be a fairly complex data analysis pipeline when scaling to hundreds of millions of pictures in production, as shown in the slide below.

‘Undeletable’ Malware Shows Up in Yet Another Android Device

While it was not immediately obvious that the trojan was present on the device, researchers were able to detect it given its similarity to another malware downloader. “Proof of infection is based on several similarities to other variants of Downloader Wotby,” Collier explained. “Although the infected Settings app is heavily obfuscated, we were able to find identical malicious code. Additionally, it shares the same receiver name: com.sek.y.ac; service name: com.sek.y.as; and activity names: com.sek.y.st, com.sek.y.st2, and com.sek.y.st3.” The app did not trigger any malicious activity when researchers analyzed the device, which they expected; however, the smartphone they examined also did not have a SIM card installed, which also could affect how the malware behaves, he said. “Nevertheless, there is enough evidence that this Settings app has the ability to download apps from a third-party app store,” he wrote. “This is not okay.” The other malware variant came preinstalled in the UL40’s Wireless Update app, which functions as the device’s main way of updating security patches, the operating system and other apps.

6 Coding Books Every Programmer and Software Developer Should Read

Refactoring: Improving the Design of Existing Code: This book uses Java as its principal language, but the concepts and ideas apply to any object-oriented language, like C++ or C#. It will teach you how to convert mediocre code into great code that can stand up to production load and the real-world software development nightmare: change. The great part is that Fowler literally walks you through the steps, taking code you often see and converting it, step by step, into more flexible, more usable code. You will learn the true definition of clean code by going through his examples. ... The Art of Unit Testing: If there is one thing I would like to improve in projects, as well as in programmers, it is the ability to unit test. After so many years of recognition that unit testing is a must-have practice for a professional developer, you will still rarely find developers who are well versed in unit testing and follow TDD. Though I am not strict about following TDD, at a bare minimum you must write unit tests for the code you write and for the code you maintain. Projects are no different: apart from open source projects, many commercial in-house enterprise projects suffer from a lack of unit tests.
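The unit-testing discipline the book advocates carries over to any stack; as a minimal sketch in Python's built-in unittest (the `apply_discount` function is a hypothetical example, not taken from the book), each test verifies exactly one behavior and is named for it:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # One behavior per test, named for what it verifies.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` from the project directory; the edge-case tests are what catch regressions when the function inevitably changes.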

How to become an effective software development manager and team leader

I learn by doing, and I learn from others. So first of all, I don't think anyone is born with these skills. I mean, some people are better communicators than other people, but a lot of the things that you actually have to learn like how to manage somebody, how to... the good news is it can be learned and the way I learned it is by doing and getting better every time I did it. But I was also fortunate that I was able to surround myself with really great people every step along the way, both in Drupal and at Acquia frankly. So surrounding yourself with experienced managers, or experienced leaders is very helpful and fast tracks that learning, right? ... I think about it almost everyday actually. But I prioritize it lower than a lot of other things that I do. Literally, when I wake up I try to think, "What should I do today that has the biggest impact on Drupal and Acquia?" It's almost never coding for me, unfortunately. I secretly hope it would be one day it's like, "Wow, go code. Go write this piece of code." But it usually involves unblocking other people or teams, or helping to fundraise for the Drupal Association right now. So the coding is often reserved for evenings and weekends. I like to dabble with code still.

Whiteapp ASP.NET Core using Onion Architecture 

Onion Architecture is an architectural pattern introduced by Jeffrey Palermo in 2008 to solve problems in maintaining applications. Traditional implementations tend to be database-centric; Onion Architecture is instead based on the inversion of control principle. It is a domain-centric architecture in which layers interface with each other toward the Domain (entities/classes). The main benefits of Onion Architecture are higher flexibility and decoupling. In this approach, all the layers depend only on the Domain layer (sometimes called the Core layer). ... Testability: as all layers are decoupled, it is easy to write test cases for each component. Adaptability/enhancement: adding a new way to interact with the application is very easy. Sustainability: we can keep all third-party libraries in the Infrastructure layer, so maintenance is easy. Database independence: since the database is separated from data access, it is quite easy to switch database providers. Clean code: as business logic is kept away from the presentation layer, it is easy to implement the UI.
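A minimal sketch of the layering, using a hypothetical order-management domain (shown in Python rather than ASP.NET Core purely for brevity): the service depends only on a repository abstraction in the Domain/Core layer, while the concrete store lives in Infrastructure and can be swapped without touching business logic.

```python
from abc import ABC, abstractmethod

# Domain layer (core): entities and abstractions only, no outer dependencies.
class Order:
    def __init__(self, order_id: int, total: float):
        self.order_id = order_id
        self.total = total

class OrderRepository(ABC):
    @abstractmethod
    def save(self, order: Order) -> None: ...
    @abstractmethod
    def get(self, order_id: int) -> Order: ...

# Application/service layer: depends only on the domain abstraction.
class OrderService:
    def __init__(self, repo: OrderRepository):
        self.repo = repo

    def place_order(self, order_id: int, total: float) -> Order:
        order = Order(order_id, total)
        self.repo.save(order)
        return order

# Infrastructure layer (outermost): a concrete detail, swappable per provider.
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self._store = {}

    def save(self, order: Order) -> None:
        self._store[order.order_id] = order

    def get(self, order_id: int) -> Order:
        return self._store[order_id]

service = OrderService(InMemoryOrderRepository())
service.place_order(1, 42.0)
```

Swapping `InMemoryOrderRepository` for a SQL-backed implementation changes nothing in `OrderService`, which is the database independence the pattern promises.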

In the age of disruption, comprehensive network visibility is key

In an age of dynamic disruption, IT is increasingly challenged to maintain optimal service delivery, while implementing remote working at an unprecedented scale. It’s not surprising, then, that nearly 60 percent of study respondents cite the need for greater visibility into remote user experiences. The top challenge for troubleshooting applications is the ability to understand end-user experience (nearly 47 percent). “As remote working becomes the new norm, IT teams are challenged to find and adapt technologies, such as flow-based reporting to manage bandwidth consumption, VPN oversubscription and troubleshooting applications. To guarantee the best performance and reduce cybersecurity threats, increasing network visibility is now a must for all businesses,” said Charles Thompson, Senior Director, Enterprise and Cloud, VIAVI. “By empowering NetOps, as well as application and security teams with network visibility, IT can mitigate the impact of disruptive migrations, incidents and new technologies like SD-WAN to achieve consistent operational excellence.”

Prepare for Artificial Intelligence to Produce Less Wizardry

“Deep neural networks are very computationally expensive,” says Song Han, an assistant professor at MIT who specializes in developing more efficient forms of deep learning and is not an author on Thompson’s paper. “This is a critical issue.” Han’s group has created more efficient versions of popular AI algorithms using novel neural network architectures and specialized chip architectures, among other things. But he says there is “still a long way to go” to make deep learning less compute-hungry. Other researchers have noted the soaring computational demands. The head of Facebook’s AI research lab, Jerome Pesenti, told WIRED last year that AI researchers were starting to feel the effects of this computation crunch. Thompson believes that, without clever new algorithms, the limits of deep learning could slow advances in multiple fields, affecting the rate at which computers replace human tasks. “The automation of jobs will probably happen more gradually than expected, since getting to human-level performance will be much more expensive than anticipated,” he says.

Ransomware Characteristics and Attack Chains – What you Need to Know about Recent Campaigns

Ransomware is a type of malware that prevents users from accessing their system or personal files and demands a “ransom payment” in order to regain access. There are two types of ransomware campaigns, “human-operated” and “auto-spreading”; this article focuses on the human-operated campaigns. Human-operated campaigns tend to follow common attack patterns: gaining initial access, credential theft, lateral movement and persistence. For many of the human-operated campaigns, typical access comes from RDP brute force, a vulnerable internet-facing system, or weak application settings. Once attackers have gained access they can deploy a plethora of tools to get user credentials. After gaining credentials, lateral movement takes place, either by deploying Cobalt Strike, a widely known commercial penetration testing suite, by changing settings of WMI (Windows Management Instrumentation), or by abusing management tools with low-level privileges. Finally, attackers want to keep a connection and make it persistent; this is done by creating new accounts, making GPO (Group Policy Object) changes, creating scheduled tasks, manipulating service registration, or by deploying shadow tools.
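One common defensive response to the persistence techniques above is baselining. As a toy illustration (all account and task names below are hypothetical), newly created accounts or scheduled tasks can be surfaced by diffing a current snapshot against a known-good baseline:

```python
# Known-good state captured before the incident window (hypothetical data).
BASELINE = {
    "accounts": {"svc_backup", "admin"},
    "scheduled_tasks": {"nightly-backup", "defender-scan"},
}

def persistence_indicators(current: dict) -> dict:
    """Return anything present now that was not in the baseline."""
    return {
        key: sorted(current.get(key, set()) - BASELINE[key])
        for key in BASELINE
    }

# A later snapshot: one unexpected account and one unexpected task.
snapshot = {
    "accounts": {"svc_backup", "admin", "support_tmp"},
    "scheduled_tasks": {"nightly-backup", "defender-scan", "updater2"},
}
alerts = persistence_indicators(snapshot)
# alerts == {"accounts": ["support_tmp"], "scheduled_tasks": ["updater2"]}
```

Real tooling gathers these snapshots from endpoint telemetry, but the core logic is the same set difference.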

Quote for the day:

"Nobody in your organization will be able to sustain a level of motivation higher than you have as their leader." -- Danny Cox

Daily Tech Digest - July 11, 2020

Software as a Service (SaaS): A cheat sheet

Beyond reliability, and depending on the nature of your business applications, it is also vitally important to evaluate the capacity provided by your chosen ISP. Querying large databases or moving large media files will require more bandwidth than is typical for less-intense applications like email; however, even extremely large bandwidth may not be enough if there are also latency issues. There are similar reliability concerns when choosing the service provider for the SaaS applications themselves. Business organizations have to think about the longevity of their provider, their commitment to security, their willingness to customize applications, and their plans for feature upgrades. SaaS requires a business to relinquish some control in order to reap the benefits of the distribution system. Relinquishing control may also cause problems when the SaaS provider updates certain application features that the business does not want changed. Some feature upgrades will break existing use cases, especially if the business is using a customized version of the software. Some SaaS vendors have been known to eliminate features that are under-used in aggregate from their software, which causes problems for businesses that choose to adopt those features.

APT Group Targets Fintech Companies

Once the targeted victim clicks on the LNK file to view one of the documents, the malware begins to load in the background and infect their device, according to the report. Once the attackers successfully infect devices and a network, the malware steals sensitive corporate data, such as customer lists, credit card information and other personally identifiable data, along with the firm's investments and trading operations data, the ESET researchers report. In the next phase of the attack, the JavaScript components deploy other malware the Evilnum operators purchased from other hackers, including code written in C# from the malware-as-a-service provider Golden Chickens, the report notes. The attackers also use Python-based tools in their toolkits, the researchers add. While the JavaScript component acts as a backdoor and handles communications with the command-and-control server, the C# code takes on other tasks, including grabbing a screenshot whenever the mouse is moved over a certain length of time, sending system information back to the operators as well as stealing cookies and credentials. Eventually, this process will kill the malware when the campaign is complete, according to the report.

Why Segmentation is More Effective Than Firewalls For Securing Industrial IoT

As we’re so accustomed to using firewalls in our everyday lives (particularly on our own private computers, tablets, and smartphones) it might seem intuitive to use a firewall as a safeguard for IIoT-connected devices as well. However, the choice isn’t quite so straightforward as it might at first seem. Internal firewalls are expensive and complex to implement. It could be that for genuinely reliable protection, you need to install a firewall at every IIoT connection point. This could mean that hundreds (perhaps even thousands) of firewalls are required. We’ve already discussed how businesses’ technology security budgets are often overstretched. Taking this into account, security spend needs to be very carefully calculated and targeted. Segmentation, on the other hand, makes it possible to keep particular types of devices siloed off in a certain segment, thereby enhancing security. It also helps to enhance visibility and simplify classification of different device types. Organisations can then create risk profiles and relevant security policies for device groups.
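The segment-and-whitelist model can be sketched in a few lines; the device types, segment names, and allowed flows below are hypothetical examples, not a production policy:

```python
# Hypothetical policy: each device type lives in one segment, and
# cross-segment traffic is only allowed along whitelisted segment pairs.
SEGMENT_OF = {
    "plc": "ot-control",
    "camera": "ot-sensors",
    "hmi": "ot-control",
    "laptop": "corporate-it",
}

ALLOWED_FLOWS = {
    ("ot-sensors", "ot-control"),    # sensors may report to controllers
    ("corporate-it", "ot-control"),  # IT may push configs to controllers
}

def flow_permitted(src_device: str, dst_device: str) -> bool:
    src = SEGMENT_OF[src_device]
    dst = SEGMENT_OF[dst_device]
    # Traffic inside one segment is allowed; cross-segment traffic
    # must be explicitly whitelisted, so everything else is denied.
    return src == dst or (src, dst) in ALLOWED_FLOWS

assert flow_permitted("camera", "plc")         # whitelisted cross-segment flow
assert not flow_permitted("camera", "laptop")  # blocked by default
```

The deny-by-default check is what keeps a compromised camera from reaching corporate laptops, without placing a firewall at every connection point.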

How data and AI will shape the post-pandemic future

The general public are particularly becoming used to AI playing a huge role. The mystery around it is beginning to fade, and it is becoming far more accepted that AI is something that can be trusted. It does have its limitations. It's not going to turn into the Terminator and take over the world. The fact that we are seeing AI more in our day-to-day lives means people are beginning to depend on the results of AI, at least from the understanding of the pandemic, and that drives acceptance. When you start looking at how it will enable people to get back to somewhat of a normal existence―to go to the store more often, to be able to start traveling again, and to be able to return to the office―there is that dependency that Arti mentioned around video analytics to ensure social distancing or temperatures of people using thermal detection. All of that will allow people to move on with their lives and so AI will become more accepted. I think AI softens the blow of what some people might see as a civil liberty being eroded. It softens the blow of that in ways and says, "This is the benefit already and this is as far as it goes." So it at least informs discussions in ways it didn't before.

IoT: device management and security are crucial

Operational challenges abound from the beginning of the IoT journey to its end. For example, how do you efficiently roll out hundreds of thousands or even a million devices in a timely manner? Once up and running, device firmware and IoT application software will need to be updated – possibly multiple times – during the course of the device’s life. Additionally, the device should be monitored against established baselines. This creates the environment for an early warning system that can highlight possible software bugs or security exploits. Devices also may experience an “upgrade” during their life cycles, as new capabilities may be activated and enabled over-the-air, based on needs and business cases. Ownership changes require re-assignment of control, and at the end, devices need to be decommissioned and brought to end-of-life in an efficient manner. These development and deployment challenges are prompting companies to re-examine how they allocate resources more efficiently. For example, only 15% of overall IoT systems development time is IoT application development. But a full 30% is device-management issues (provisioning, onboarding, and updating devices and systems), while 40% is taken up by developing the device stacks.

More pre-installed malware has been found in budget US smartphones

While the app does function as an over-the-air updater for security fixes and as an updater to the operating system itself, the software also installs four variants of HiddenAds, a Trojan family found on Android handsets. HiddenAds is a strain of adware that bombards users with adverts. In order to verify where the malware originated from, Malwarebytes disabled WirelessUpdate and then re-enabled the app. Within 24 hours, four adware strains were covertly installed. As the malware on the UMX and ANS differ, the team wanted to see if there were any ties linking the brands. A common thread was the use of a digital certificate used to sign the ANS Settings app under the name teleepoch. Upon further investigation, the certificate was traced back to TeleEpoch Ltd, which is registered as UMX in the United States. "We have a Settings app found on an ANS UL40 with a digital certificate signed by a company that is a registered brand of UMX," Collier says. "That's two different Settings apps with two different malware variants on two different phone manufacturers & models that appear to all tie back to TeleEpoch Ltd. ..."

Increasing demand for RegTech to Meet Regulatory Burden

Demand has grown sharply since the Global Financial Crisis of 2008, as businesses must comply with regulatory reforms related to Anti-Money Laundering (AML) and due diligence (KYC) requirements. The cost of complying with regulations is staggering, but non-compliance costs even more in hefty fines. Digitizing regulatory compliance helps businesses meet regulatory requirements while cutting costs. According to the study, the cost of compliance across all banks from 2014 to 2016 averaged approximately 7.0% of their noninterest expenses. RegTech startups are experiencing growth and investment as firms realize the need to capitalize on compliance efficiency, and businesses can use it for a competitive edge in the industry. There is great potential for powering the future of financial regulation by integrating RegTech, which promises reduced regulatory costs and improved operational efficiency. RegTech's main target has been the finance industry.
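The kind of compliance work RegTech automates can be illustrated with a minimal rule-based transaction screen. This is a sketch only: the field names and watchlist are hypothetical, and the $10,000 threshold merely mirrors common currency-transaction-reporting rules; real AML systems use far richer rules and models.

```python
# Hypothetical watchlist for illustration only
WATCHLIST = {"acme-shell-co"}

def screen_transaction(tx, threshold=10_000):
    """Return a list of compliance flags for a single transaction.
    Flags a large-transaction report above the threshold and any
    counterparty that matches the watchlist."""
    flags = []
    if tx["amount"] >= threshold:
        flags.append("large-transaction-report")
    if tx["counterparty"] in WATCHLIST:
        flags.append("watchlist-match")
    return flags

print(screen_transaction({"amount": 12_500, "counterparty": "acme-shell-co"}))
# → ['large-transaction-report', 'watchlist-match']
```

The cost savings come from running checks like these automatically across every transaction, instead of relying on manual review.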

Why businesses are adopting AI to improve operations

AI has improved productivity in an array of sectors. AI-powered contact center software has allowed companies to become incredibly efficient. In a shop, a digital SKU system is far more efficient at keeping tabs on stock levels than a manual one: it records and analyzes demand for particular articles, and more are automatically reordered. A fashion store can see when a garment is selling like hot cakes and restock before the trend runs its course, maximizing profit on the item. For teleconferencing solutions and other software providers, one of the biggest problems is customer churn. Retention schemes try to contact as many customers as possible whose contracts are due to run out, offering discounts and other enticements to stay. But some of those customers would have stayed anyway, while others who were more likely to leave may never be contacted; customer services can't reach every single person whose contract is due to be up. What the firm needs to understand are the factors influencing people to stay or go. An AI program can analyze the data from thousands of customers, work out the risk factors, and pull out a list of the people most likely to leave.
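The retention-targeting idea above can be sketched as a simple risk-scoring pass. In practice the weights would be learned by a trained model (e.g., logistic regression or gradient boosting) from historical churn data; here the factors, field names, and weights are purely illustrative assumptions.

```python
def churn_risk(customer):
    """Score a customer's churn risk from simple factors.
    The weights are illustrative; a real system would learn them
    from historical data."""
    score = (0.15 * customer["support_tickets"]            # friction signal
             + 0.4 / max(customer["months_to_expiry"], 1)  # contract nearing expiry
             + (0.3 if customer["monthly_logins"] < 5 else 0.0))  # low engagement
    return round(score, 2)

def retention_targets(customers, top_n=2):
    """Return the top-N customers most likely to leave."""
    return sorted(customers, key=churn_risk, reverse=True)[:top_n]

customers = [
    {"id": 1, "support_tickets": 0, "months_to_expiry": 6, "monthly_logins": 20},
    {"id": 2, "support_tickets": 4, "months_to_expiry": 1, "monthly_logins": 2},
    {"id": 3, "support_tickets": 1, "months_to_expiry": 2, "monthly_logins": 10},
]
print([c["id"] for c in retention_targets(customers)])  # → [2, 3]
```

The payoff is targeting: the retention team spends its limited outreach budget on the customers at the top of this list rather than contacting everyone.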

10 Ways AI Is Improving New Product Development

From startups to enterprises racing to get new products launched, AI and machine learning (ML) are making solid contributions to accelerating new product development. There are 15,400 job positions for DevOps and product development engineers with AI and machine learning today on Indeed, LinkedIn and Monster combined. Capgemini predicts the size of the connected products market will range between $519B and $685B this year, with AI- and ML-enabled services revenue models becoming commonplace. Rapid advances in AI-based apps, products and services will also force the consolidation of the IoT platform market. The IoT platform providers concentrating on business challenges in vertical markets stand the best chance of surviving the coming IoT platform shakeout. As AI and ML become more ingrained in new product development, the IoT platforms and ecosystems supporting smarter, more connected products need to plan now for how they're going to keep up. Relying on technology alone, as many IoT platforms do today, won't be enough to match the coming pace of change.

CDO Leadership Skills That Matter

Persistence is a key trait of successful leaders—they don’t get demotivated too easily. Whereas some people retreat to their caves after failed attempts to collaborate with the organization, choosing to focus only on internal marketing or just a few pilots, I find that leaders who are persistent have a seat at the strategic table with their peers, have a strategy, and have a roadmap. They’re constantly thinking through how their capabilities could be used across the organization. They’re not easily defeated when something doesn’t go right. Persistence is important because the failure rate of data strategies and data governance teams is high; you’re building a function that isn’t consolidated under one person or one business unit. You’re often using a distributed leadership and organization model, which takes hard work to set the right expectations and maintain ongoing communications. On a regular basis, you have to give different people the WIIFM (“what’s in it for me”), the goals and objectives, that apply to their particular situation, and try to drive adoption and change in a way that fits with how each team works.

Quote for the day:

"Humility is a great quality of leadership which derives respect and not just fear or hatred." -- Yousef Munayyer