Daily Tech Digest - December 04, 2021

Universal Stablecoins, the End of Cash and CBDCs: 5 Predictions for the Future of Money

Many of the features that decentralized finance, or DeFi, brings to the table will be copied by regular finance in the future. For instance, there’s no reason that regular finance can’t copy the automaticity and programmability that DeFi offers, without bothering with the blockchain part. Even as regular finance copies the useful bits from DeFi, DeFi will emulate regular finance by pulling itself into the same regulatory framework. That is, DeFi tools will become compliant with anti-money laundering/know-your-customer (AML/KYC) rules, register with the Securities and Exchange Commission, or get licensed with the Office of the Comptroller of the Currency (OCC). And not necessarily because they are forced to do so. (It’s hard to force a truly decentralized protocol to do anything.) Tools will comply voluntarily. Most of the world’s capital is licit capital. Licit capital wants to be on regulated venues, not illegal ones. To capture this capital, DeFi has no choice but to get compliant. The upshot is that over time DeFi and traditional finance (TradFi) will blur together.

10 Rules for Better Cloud Security

Security in the cloud follows the shared responsibility model, under which the provider is responsible only for security ‘of’ the cloud, while customers are responsible for security ‘in’ the cloud. This essentially means that to operate in the cloud, you still need to do your share of the work on secure configuration and management. The scope of your commitment can vary widely because it depends on the services you are using: if you’ve subscribed to an Infrastructure as a Service (IaaS) product, you are responsible for OS patches and updates; if you only require object storage, your responsibility is limited to data loss prevention. Despite this great diversity, some guidelines apply no matter what your situation is, for the simple reason that nearly all cloud vulnerabilities reduce to one thing: misconfiguration. Cloud providers put powerful security tools at your disposal, yet we know that something will fail at some point. People make mistakes, and misconfigurations are easy.
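The claim that cloud vulnerabilities essentially reduce to misconfiguration can be made concrete with a small audit check. Below is a minimal sketch in Python; the configuration keys (`public_read`, `encryption`, `versioning`) are invented for illustration and do not correspond to any real provider's API.

```python
# Minimal misconfiguration audit over a hypothetical bucket config.
# The keys ("public_read", "encryption", "versioning") are illustrative,
# not any real cloud provider's API.

def audit_bucket(config: dict) -> list[str]:
    """Return a list of misconfiguration findings for one storage bucket."""
    findings = []
    if config.get("public_read", False):
        findings.append("bucket is publicly readable")
    if not config.get("encryption", False):
        findings.append("encryption at rest is disabled")
    if not config.get("versioning", False):
        findings.append("versioning is off; accidental deletes are unrecoverable")
    return findings

# Example: a bucket left open to the world with encryption disabled.
risky = {"public_read": True, "encryption": False, "versioning": True}
print(audit_bucket(risky))
```

Real scanners (and the providers' own tools) do essentially this at scale: enumerate resources, compare each against a policy baseline, and report the drift.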

Unit testing vs integration testing

Tests need to run to be effective. One of the great advantages of automated tests is that they can run unattended. Automating tests in CI/CD pipelines is considered a best practice, if not mandatory under most DevOps principles. There are multiple stages at which the system can and should trigger tests. First, tests should run when someone pushes code to one of the main branches; this push may be part of a pull request. In any case, you need to protect the actual merging of code into main branches to make sure that all tests pass before code is merged. Set up CD tooling so code changes deploy only when all tests have passed. This setup can apply to every environment or just to production. This failsafe is crucial to avoid shipping quick fixes for issues without properly checking for side effects. While the additional check may slow you down a bit, it is usually worth the extra time. You may also want to run tests periodically against resources in production, or some other environment. This practice lets you know that everything is still up and running. Service monitoring is even more important for guarding your production environment against unwanted disruptions.
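The gating logic described above, where code merges or deploys only when every test has passed, fits in a few lines. Here is a minimal sketch, assuming hypothetical test-result records rather than any particular CI tool's API:

```python
# Sketch of the deploy gate described above: code ships only when
# every test in the suite has passed. TestResult is illustrative,
# not part of any real CI system's API.

from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool

def may_deploy(results: list[TestResult]) -> bool:
    """The failsafe: deployment proceeds only if all tests passed."""
    return bool(results) and all(r.passed for r in results)

suite = [
    TestResult("unit: parser", True),
    TestResult("integration: checkout flow", False),  # one failure blocks the release
]
print(may_deploy(suite))  # a single failing test vetoes the deploy
```

Note that an empty suite also blocks the deploy here: a pipeline that ran zero tests proves nothing, so treating it as a failure is the safer default.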

Vulnerability Management | A Complete Guide and Best Practices

Managing vulnerabilities helps organizations avoid unauthorized access, illicit credential usage, and data breaches. This ongoing process starts with a vulnerability assessment. A vulnerability assessment identifies, classifies, and prioritizes flaws in an organization's digital assets, network infrastructure, and technology systems. Assessments are typically recurring and rely on scanners to identify vulnerabilities. Vulnerability scanners look for security weaknesses in an organization's network and systems. Vulnerability scanning can also identify issues such as system misconfigurations, improper file sharing, and outdated software. Most organizations first use vulnerability scanners to capture known flaws. Then, for more comprehensive vulnerability discovery, they use ethical hackers to find new, often high-risk or critical vulnerabilities. Organizations have access to several vulnerability management tools to help look for security gaps in their systems and networks.
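For the "outdated software" case mentioned above, the core of what a scanner does is compare installed versions against an advisory feed. A simplified sketch follows; the package names and advisory data are invented for illustration, and real scanners use richer version semantics than plain dotted integers.

```python
# Core of an "outdated software" vulnerability scan: compare installed
# versions against an advisory feed. The advisory data is invented.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn '2.4.1' into (2, 4, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisories: package -> first fixed version.
ADVISORIES = {"examplelib": "2.4.1", "demotool": "1.0.3"}

def scan(installed: dict[str, str]) -> list[str]:
    """Flag installed packages older than the first fixed version."""
    flagged = []
    for pkg, version in installed.items():
        fixed = ADVISORIES.get(pkg)
        if fixed and parse_version(version) < parse_version(fixed):
            flagged.append(f"{pkg} {version} is vulnerable; upgrade to {fixed}+")
    return flagged

print(scan({"examplelib": "2.3.0", "demotool": "1.0.3"}))
```

This kind of check only catches known flaws that have published advisories, which is exactly why the excerpt recommends ethical hackers for discovering new, unpublished vulnerabilities.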

How Web 3.0 is Going to Impact the Digital World?

The concept of a trustless network is not new. The exclusion of so-called “trusted” third parties from virtual transactions and interactions has long been an in-demand ideology. Considering how prominent a concern data theft is among internet users worldwide, trusting third parties with our data doesn’t seem right. Trustless networks ensure that no intermediaries interfere in online transactions or interactions. A close example of trustlessness is the uber-popular blockchain technology. Blockchain is mostly used in transactions involving cryptocurrencies. It defines a protocol under which only the individuals participating in a transaction are connected, in a peer-to-peer manner; no intermediary is involved. Social media enjoys immense popularity today, and understandably so, for it allows us to connect and interact with acquaintances and strangers alike without any geographical limits. But the firms that own social media platforms are few, and these few firms hold the information of millions of people. Sounds scary, right?

Is TypeScript the New JavaScript?

As a statically typed language, TypeScript performs type checks at compile time, flagging type errors and helping developers spot mistakes early in development. Reducing errors when working with large codebases can save hours of development time. Clear and readable code is easy to maintain, even for newly onboarded developers. Because TypeScript calls for assigning types, the code instantly becomes easier to work with and understand. In essence, TypeScript code is self-documenting, allowing distributed teams to work much more efficiently. Teams don’t have to spend inordinate amounts of time familiarizing themselves with a project. TypeScript’s integration with editors also makes it much easier to validate the code thanks to context-aware suggestions. TypeScript can determine what methods and properties can be assigned to specific objects, and these suggestions tend to increase developer productivity. TypeScript is widely used to automate the deployment of infrastructure and CI/CD pipelines for backend and web applications. Moreover, the client part and the backend can be written in the same language—TypeScript.

4 signs you’re experiencing burnout, according to a cognitive scientist

One key sign of burnout is that you don’t have motivation to get any work done. You might not even have the motivation to come to work at all. Instead, you dread the thought of the work you have to do. You find yourself hating both the specific tasks you have to do at work and the mission of the organization you’re working for. You just can’t generate enthusiasm about work at all. A second symptom is a lack of resilience. Resilience is your ability to get over a setback and get yourself back on course. It’s natural for a failure, bad news, or criticism to make you feel down temporarily. But if you find yourself sad or angry for days because of something that happened at work, your level of resilience is low. When you’re feeling burned out, you also tend to have bad interactions with your colleagues. You find it hard to resist saying something negative or mean. You can’t hide your negative feelings about things or people, and that can upset others. In this way, your negative feelings about work become self-fulfilling, because they actually create more unpleasant situations.

Spotting a Modern Business Crisis — Before It Strikes

Modern technologies such as more-efficient supply chain operations, the internet, and social media have not only increased the pace of change in business but have also drawn more attention to its impact on society. Fifty years ago, oversight of companies was largely the domain of regulatory agencies and specialized consumer groups. What the public knew was largely defined by what businesses were required to disclose. Today, however, public perception of businesses is affected by a diverse range of stakeholders — consumers, activists, local or national governments, nongovernmental organizations, international agencies, and religious, cultural, or scientific groups, among others. ... There are a few ways businesses can identify risks. One, externalize expertise through insurance and consulting companies that identify sociopolitical or climate risks. Two, hire the right talent for risk assessment. Three, rely on government agencies, media, industry-specific institutions, or business leaders’ own experience of risk perception. A fail-safe approach is to use all three mechanisms in tandem, if possible.

Today’s Most Vital Question: What is the Value of Your Data?

Data has latent value; that is, data has potential value that has not yet been realized. The possession of data in and of itself provides zero economic value; in fact, possessing data carries storage, management, security, and backup costs and potential regulatory and compliance liabilities. ... Data must be “activated,” or put into use, in order to convert that latent (potential) value into kinetic (realized) value. The key is getting the key business stakeholders to envision where and how to apply data (and analytics) to create new sources of customer, product, service, and operational value. The good news is that most organizations are very clear as to where and how they create value. ... The value of an organization’s data is tied directly to its ability to support quantifiable business outcomes, or use cases. ... Many data management and data governance projects stall because organizations lack a business-centric methodology for determining which of their data sources are the most valuable.

Federal watchdog warns security of US infrastructure 'in jeopardy' without action

The report was released in conjunction with a hearing on securing the nation’s infrastructure held by the House Transportation and Infrastructure Committee on Thursday. Nick Marinos, the director of Information Technology and Cybersecurity at GAO, raised concerns in his testimony that the U.S. is “constantly operating behind the eight ball” on addressing cyber threats. “The reality is that it just takes one successful cyberattack to take down an organization, and each federal agency, as well as owners and operators of critical infrastructure, have to protect themselves against countless numbers of attacks, and so in order to do that, we need our federal government to be operating in the most strategic way possible,” Marinos testified to the committee. According to the report, GAO has made over 3,700 recommendations related to cybersecurity at the federal level since 2010, and around 900 of those recommendations have not been addressed. Marinos noted that 50 of the unaddressed concerns are related to critical infrastructure cybersecurity.

Quote for the day:

"Self-control is a critical leadership skill. Leaders generally are able to plan and work at a task over a longer time span than those they lead." -- Gerald Faust

Daily Tech Digest - December 03, 2021

IT threat evolution Q3 2021

Earlier this year, while investigating the rise of attacks against Exchange servers, we noticed a recurring cluster of activity that appeared in several distinct compromised networks. We attribute the activity to a previously unknown threat actor that we have called GhostEmperor. This cluster stood out because it used a formerly unknown Windows kernel mode rootkit that we dubbed Demodex; and a sophisticated multi-stage malware framework aimed at providing remote control over the attacked servers. The rootkit is used to hide the user mode malware’s artefacts from investigators and security solutions, while demonstrating an interesting loading scheme involving the kernel mode component of an open-source project named Cheat Engine to bypass the Windows Driver Signature Enforcement mechanism. ... The majority of GhostEmperor infections were deployed on public-facing servers, as many of the malicious artefacts were installed by the httpd.exe Apache server process, the w3wp.exe IIS Windows server process, or the oc4j.jar Oracle server process.

USB Devices the Common Denominator in All Attacks on Air-Gapped Systems

There have been numerous instances over the past several years where threat actors managed to bridge the air gap and access mission-critical systems and infrastructure. The Stuxnet attack on Iran — believed to have been led by US and Israeli cybersecurity teams — remains one of the most notable examples. In that campaign, operatives managed to insert a USB device containing the Stuxnet worm into a target Windows system, where it exploited a vulnerability (CVE-2010-2568) that triggered a chain of events that eventually resulted in numerous centrifuges at Iran's Natanz uranium enrichment facility being destroyed. Other frameworks that have been developed and used in attacks on air-gapped systems over the years include South Korean hacking group DarkHotel's Ramsay, China-based Mustang Panda's PlugX, the likely NSA-affiliated Equation Group's Fanny, and China-based Goblin Panda's USBCulprit. ESET analyzed these malware frameworks, along with others, such as ProjectSauron and agent.btz, that have not been specifically attributed to any group.

How to do data science without big data

When you have visibility on the organizational strategy and the business problems to be solved, the next step is to finalize your analytics approach. Find out whether you need descriptive, diagnostic, or predictive analytics and how the insights will be used. This will clarify the data you should collect. If sourcing data is a challenge, roll out the collection process in phases to allow for iterative progress with the analytics solution. For example, executives at a large computer manufacturer we worked with wanted to understand what drove customer satisfaction, so they set up a customer experience analytics program that started with direct feedback from the customer through voice-of-customer surveys. Descriptive insights presented as data stories helped improve the net promoter scores during the next survey. Over the next few quarters, they expanded their analytics to include social media feedback and competitor performance using sources such as Twitter, discussion forums, and double-blind market surveys. To analyze this data, they used advanced machine learning techniques.
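The net promoter score mentioned above is a simple descriptive metric: the percentage of promoters (ratings of 9 or 10) minus the percentage of detractors (ratings of 0 through 6). A quick sketch, using made-up survey responses:

```python
# Net promoter score (NPS) from raw 0-10 survey ratings:
# % promoters (9-10) minus % detractors (0-6). Ratings below are invented.

def nps(ratings: list[int]) -> float:
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

survey = [10, 9, 9, 8, 7, 6, 3, 10, 8, 9]  # ten hypothetical responses
print(nps(survey))  # 5 promoters, 2 detractors -> (5 - 2) / 10 -> +30.0
```

Scores of 7 and 8 count as passives: they dilute the denominator but move neither term in the numerator, which is why the metric rewards converting passives into promoters.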

Applying Social Leadership to Enhance Collaboration and Nurture Communities

Social leadership seems to differ in that it is not a form of leadership that is granted, as is often the case in formal hierarchical environments. Organisations with more “traditional management” structures and approaches tend to grant managers authority, accountabilities and power. Also, as I imagine you have seen, there has been much commentary over the years on the fact that management and leadership are not the same thing. Some years ago, when I was undertaking the Chartered Manager program with the Chartered Management Institute (CMI), I came across the definition that management is “doing things right,” whereas leadership is “doing the right thing”. I find this succinct explanation of the difference refreshing and have continued to use it within my own coaching and mentoring work since. It feels to me that “doing the right thing” is the modus operandi of the social leader. Also, we talk a lot about the problems with accidental managers: those who have been promoted into managerial roles, often by having been successful in their technical domains.

Report: APTs Adopting New Phishing Methods to Drop Payload

"When an RTF Remote Template Injection file is opened using Microsoft Word, the application will retrieve the resource from the specified URL before proceeding to display the lure content of the file. This technique is successful despite the inserted URL not being a valid document template file," Raggi says. Researchers demonstrated a process in which the RTF file was weaponized to retrieve the documentation page for RTF version from a URL at the time the file is opened. "The technique is also valid in the .rtf file extension format, however a message is displayed when opened in Word which indicates that the content of the specified URL is being downloaded and in some instances an error message is displayed in which the application specifies that an invalid document template was utilized prior to then displaying the lure content within the file," Raggi says. The weaponization part of the RTF file is made possible by creating or altering an existing RTF file’s document property bytes using a hex editor, which is a computer program that allows for manipulation of the fundamental binary data.

A blockchain connected motorbike: what Web 3.0 means for mobility and why you should care

We’ve been hearing about the potential of Web 3.0 for years: a decentralized web where information is distributed across nodes, making it more resistant to shutdowns and censorship. Specifically, its foundation lies in edge computing, artificial intelligence, and decentralized data networks. But what we haven’t talked about enough is the massive impact Web 3.0 will have on mobility. Web 3.0 aims to build a new scalable economy where transactions are powered by blockchain technology, eschewing the need for a central intermediary or platform. And in the mobility space, there are lots of things happening. ... Pave Bikes connect to a private blockchain network. When you get your bike, you receive a non-fungible token (NFT). This is effectively a private key, a token based on ERC721. It is used to unlock the ebike via the Pave+ App. To be exact, the Pave mobile app is technically a dApp, a decentralized application connected to the blockchain. It enables riders to securely authenticate their proof of purchase and access their bike using Bluetooth, even without an internet connection.

Open banking will continue its exponential rise in the UK in 2022

Over the next year and beyond, it will be interesting to see how Variable Recurring Payments (VRPs) continue to develop, allowing businesses to connect to authorised payment providers that make payments on the customer’s behalf. Direct debits, the main mechanism in use today, are expensive, slow, and saddled with a painful, mainly paper-based process; they are long overdue for digital transformation. I anticipate 2022 will be the year we begin to see VRPs in full effect. This will provide countless opportunities for consumers to find new ways to manage their finances. As VRPs progress, we will discover that they can do far more than simply pay bills: they will unlock aspects of smart saving, one-click payments, and control over subscriptions. It will also be important to address issues that work against the great benefits of open banking in the near future. The 90-day reauthorisation rule, which requires open banking providers to re-confirm consent with the customer every 90 days, must be addressed. This rule currently undermines the principles of convenience and ease that open banking has been working to showcase.

Major trends in online identity verification for 2022

As both consumer and investor demand for fintech startups continues to heat up, we expect to see even more neobanks and cryptocurrency investment platforms launching in the coming year. Unfortunately, bad actors are ready and they often target these nascent platforms, with the expectation that fraud prevention may be an afterthought at launch. But we expect that, as these startups go to market, these companies will shift their initial focus from purely optimizing for new user sign-ups to preventing fraud on their platforms, shifting from the required risk and compliance checks to more comprehensive anti-fraud solutions. Fortunately, there are ID verification solutions that can help with both, preventing fraud while still optimizing for sign-up conversions. Likewise, the tight hiring market for software developers will lead these new fintech firms to look for no-code or low-code ID verification and compliance solutions, rather than attempting to build them in-house.

AI-Based Software Testing: The Future of Test Automation

The success of digital technologies, and by extension businesses, is underpinned by the optimal performance of the software systems that form the core of operations in these enterprises. Many times, such enterprises make a trade-off between delivering a superior user experience and a faster time to market. As a consequence, the quality of the software often suffers from inadequacies, and enterprises cannot make much of their early ingress into the market. This results in the loss of revenue and brand value for such enterprises. The alternative is comprehensive and rigorous software testing to find and fix bugs before the actual deployment. In fact, methodologies such as Agile and DevOps have given enterprises the means to achieve both: a superior user experience and a faster time to market. This is where AI-based automation comes into play and makes testing accurate, comprehensive, predictive, cost-effective, and quick. Artificial Intelligence, or AI, has become the buzzword for anything state-of-the-art or futuristic and is poised to make our lives more convenient.

Will Automation Fill Gaps Left by the ‘Great Resignation’?

From Lane’s perspective, the main areas DevOps teams should be looking to automate are continuous integration and continuous delivery (CI/CD), IaC and AIOps-enabled incident management platforms. “By taking the manual nature of day-to-day work off of DevOps engineers’ plates, they are freed to focus on digital transformation,” he said. “The number-one stumbling block is not starting with process.” Lane noted that unless you understand all the steps in a procedure you’re trying to automate, it is very difficult to maximize the power of automation tools. “Much of the process that is still adhered to today is outdated for the digital age,” he said. “Spend the time up front to map out what you hope to achieve with an automation project, what all the touchpoints are and how one can measure the quality of automation when it’s implemented.” Michaels added that while the internet is flooded with companies shouting that they have the “best” tools, that proclamation of “best” will be determined by budget and known languages.

Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - December 02, 2021

Web 3.0: The New Internet Is About to Arrive

Some experts believe this decentralized Web, which is also referred to as Web 3.0, will bring more transparency and democratization to the digital world. Web 3.0 may establish a decentralized digital ecosystem where users will be able to own and control every aspect of their digital presence. Some hope that it will put an end to the existing centralized systems that encourage data exploitation and privacy violation. ... As a user, you will have a unique identity on Web 3.0 that will enable you to access and control all your assets, data, and services without logging in on a platform or seeking permission from a particular service provider. You will be able to access the internet from anywhere for free, and you will be the only owner of your digital assets. Apart from experiencing the internet on a screen in 2D, users will also get to participate in a larger variety of 3D environments. From anywhere, you could visit a 3D VR version of any historical place you search for, play games as a 3D character inside the game, or try clothing on your virtual self before you buy.

Report: Aberebot-2.0 Hits Banking Apps and Crypto Wallets

Based on the Aberebot-2 creator's claim and Cyble's findings, the banking malware's new variant appears to have multiple capabilities. It can steal information such as SMS messages, contact lists and device IPs, and it also can perform keylogging and detection evasion by disabling Play Protect - Google's safety check that is designed to detect spurious apps, according to the researchers. Cyble says the "new and improved" version of the banking Trojan can steal messages from messaging apps and Gmail, inject values into financial applications, collect files on the victim's device and inject URLs to steal cookies. Medhe says that Aberebot-2.0 has 18 different permissions, including internet permission, and 11 of the permissions are dangerous. One key difference between the earlier and the latest version of the Aberebot malware, he says, is the use of the Telegram API. "In the newer version, the malware author has included features such as the ability to inject or modify values in application forms, such as receiver details or the amount during financial transactions."

New Ransomware Variant Could Become Next Big Threat

Symantec's investigation of Yanluowang activity showed the former Thieflock affiliate is using a variety of legitimate and open source tools in its campaign to distribute the ransomware. This has included the use of PowerShell to download a backdoor called BazarLoader for assisting with initial reconnaissance and the subsequent delivery of a legitimate remote access tool called ConnectWise. To move laterally and identify high-value targets, such as an organization's Active Directory server, the threat actor has used tools such as SoftPerfect Network Scanner and Adfind, a free tool for querying AD. "The tool is frequently abused by threat actors to find critical servers within organizations," Neville says. "The tool can be used to extract information pertaining to machines on the network, user account information, and more." Other tools the attacker is using in Yanluowang attacks include several for credential theft, such as GrabFF for dumping passwords from Firefox, a similar tool for Chrome called GrabChrome, and one for Internet Explorer and other browsers called BrowserPassView.

Cloud computing is evolving: Here's where it's going next

"The era of multi-cloud is here, driven by digital transformation, cost concerns and organizations wanting to avoid vendor lock-in. Incredibly, more than half of the respondents of our survey have already experienced business value from a multi-cloud strategy," said Armon Dadgar, co-founder and CTO, HashiCorp in a statement. "However, not all organizations have been able to operationalize multi-cloud, as a result of skills shortages, inconsistent workflows across cloud environments, and teams working in silos." ... The focus is now on overcoming the various barriers to successful multi-cloud deployment, which include skills shortages and workflow differences between cloud environments. Cloud spend management is a continuing issue, while infrastructure automation tools are becoming increasingly important, particularly when it comes to provisioning and application deployment. In five years' time, we won't be talking about the pros and cons of hybrid/multi-cloud architecture. Instead, the discussion will be all about enterprises as efficient developers of industry-specific cloud-native apps, and automatic, optimised and AI-driven workload deployment.

Recovering from ransomware: One organisation’s inside story

As far as the ransom demand itself was concerned, the service provider warned that it was important Manutan not respond, even more so that it not pay. In the case of this particular gang, as soon as the victim shows up to negotiate, the criminals activate a three-week timer at the end of which – if there is no resolution – they make good on a series of threats, disclosing the victim’s sensitive information and irreparably destroying the data. Therefore, to pretend that Manutan had not yet realised it had been attacked – in effect, to play dead – would serve to buy it valuable time. In terms of actually paying, this could make the gang ask for more and would not provide any guarantee that the data would be recovered. “We spent time determining what data they had recovered and the risk it posed. We concluded that it was not critical – for example, they did not access our contracts with suppliers. Then we evaluated our ability to put a functioning IT system back together, which we could do, and we decided that we would not pay,” says Marchandiau.

How Decryption of Network Traffic Can Improve Security

Today, it’s nearly impossible to tell the good from the bad without the ability to decrypt traffic securely. The ability to remain invisible has given cyberattackers the upper hand. Encrypted traffic has been exploited in some of the biggest cyberattacks and exploit techniques of the past year, from Sunburst and Kaseya to PrintNightmare and ProxyLogon. Attack techniques such as living-off-the-land and Active Directory Golden Ticket are only successful because attackers can exploit organizations’ encrypted traffic. Ransomware is also top of mind for enterprises right now, yet many are crippled by the fact that they cannot see what is happening laterally within the east-west traffic corridor. Organizations have been wary to embrace decryption due to concerns around compliance, privacy and security, as well as performance impacts and high compute costs. But there are ways to decrypt traffic without compromising compliance, security, privacy or performance. Let’s debunk some of the common myths and misconceptions.

5 (more) Common Misconceptions about Scrum

Many people think that Scrum Team members shouldn’t be assigned to a team part-time. However, there is nothing in the Scrum Guide prohibiting it. There are, of course, trade-offs for part-time Scrum Team members. If too many individuals are part-time, the team may not accomplish as much meaningful work during a Sprint. Additionally, with part-time members it can be more difficult for the team to learn how much work they can achieve during a Sprint, particularly if a member’s part-time status fluctuates. Moreover, if the part-time members support multiple Scrum Teams, they can feel exhausted attending numerous Daily Scrum meetings and splitting their focus. The Scrum Team should consider these trade-offs when self-organizing into teams that include part-time members. ... Timeboxes are an essential part of all Scrum events because they help limit waste and support empiricism, making decisions based on what is known. For example, the result of the Sprint Planning event should be enough of a plan for the team to get started. 

What Will AI Bring to the Cybersecurity Space in 2022

When you deploy AI to monitor your company network, for example, it creates an activity profile for every user in that network: what files they access, what apps they use, when, and where. If that behavior suddenly changes, the user is flagged for a deep scan. This is a vast improvement in threat detection. Currently, a lot of time is lost before an attack is even noticed. According to IBM’s 2020 Data Breach Report, businesses take 280 days on average to detect and contain a breach. That’s plenty of time for hackers to cause massive damage. AI cuts that time short. It instantly spotlights irregularities, allowing businesses to contain breaches fast. One major caveat, however, is the risk of false positives: benign behavior can be flagged as problematic when it is not. Current-generation ML-based threat detection algorithms rely almost exclusively on neural networks that more or less replicate the perceived functioning of human thought patterns. These systems use validation subroutines that cross-check behavior patterns against previous behaviors.
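The per-user activity profile described above can be reduced, in toy form, to a baseline-and-deviation check: learn what "normal" looks like from past activity, then flag observations that fall far outside it. The sketch below uses a simple z-score on daily activity counts; the data and threshold are illustrative only, and production systems use far richer models.

```python
# Toy version of a per-user activity profile: build a baseline from past
# daily activity counts and flag days that deviate sharply from it.
# The threshold and data are illustrative only.

from statistics import mean, pstdev

def is_anomalous(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    """Flag activity more than z_threshold standard deviations from baseline."""
    mu = mean(history)
    sigma = pstdev(history)
    if sigma == 0:
        return today != mu  # perfectly stable history: any change is notable
    return abs(today - mu) / sigma > z_threshold

# A user who normally touches ~45-60 files a day suddenly touches 400.
baseline = [52, 48, 55, 60, 45, 50, 47]
print(is_anomalous(baseline, 400))  # True
print(is_anomalous(baseline, 58))   # False
```

The false-positive problem mentioned in the excerpt shows up directly here: a legitimate but unusual workday (say, a migration that touches thousands of files) trips the same threshold as an attacker, which is why flagged users are scanned rather than blocked outright.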

So far, only nine countries have commercialized 5G mmWave. This is not surprising, however, given that the main restriction of mmWave transmissions is their low propagation range. Telecom companies would not employ the mmWave frequency band for national coverage. Looking at telecom operators’ deployment strategies, we can see that low-frequency bands (for example, 700 MHz) are used for national coverage, whereas sub-6 GHz bands are utilized for city coverage, and mmWave is used for megacity hotspots. ... One crucial part of deploying a large-scale 5G network employing massive MIMO gear is that the radio must be lightweight and have a compact footprint, as these characteristics will help operators save significant money on overall deployment. This is where silicon comes in. Silicon’s performance will have a huge influence on a radio’s essential aspects, such as connection, capacity, power consumption, product size and weight, and, ultimately, cost. In the 5G system sector, all of these are critical.

7 ways to balance agility and planning

By building learning and development (L&D) into planning, your organization can enhance employee engagement and investment in strategic goals. A Quantum Workplace trend report found employee engagement was at its peak in 2020 (up 3 percent from 2019), with 77 percent of employees reporting high engagement. Spring and fall of 2020 showed the greatest engagement levels at 80 percent, with a 7 percent drop by the summer of 2021. Leadership communication has also tapered off since the emergence of COVID, creating a downward trend in employees’ perceptions of transparency, communication, and leadership trust. Consequently, many employees felt their career paths were stunted or unclear. These findings underscore the importance of L&D in keeping employees engaged and motivated and in fostering more consistent communication between managers and their teams. From the organization’s perspective, employees are encouraged to flex their adaptability muscles as they learn, galvanizing them to become more agile and enabling the organization to pivot efficiently.

Quote for the day:

"It is, after all, the responsibility of the expert to operate the familiar and that of the leader to transcend it." -- Henry A. Kissinger