Daily Tech Digest - October 20, 2021

The challenges of cloud data management

IT departments are facing a growing challenge to stay abreast of advancements in cloud technologies, provide day-to-day support for increasingly complex systems, and adhere to ever-changing regulatory requirements. In addition, they must ensure the systems they support are able to scale to meet performance objectives and are secured against unauthorized access. ... Much like data security, adhering to regulatory compliance frameworks is a shared responsibility between the customer and the cloud provider. Larger cloud vendors will provide third-party auditor compliance reports and attestations for the regulatory frameworks they support. It is up to each organization to read the documentation and ensure the contents meet its specific compliance needs. Most leading platforms will also provide tools to help clients configure identity and access management, secure and monitor their data, and implement audit trails. But the responsibility for ensuring the tools' configuration and usage meet the framework's control objectives rests solely with the customer. ... We know one of IT's core responsibilities is to transform raw data into actionable insights.
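The audit-trail requirement in particular lends itself to a concrete illustration. As a minimal sketch (the names `audited` and `read_customer_record` are invented for illustration, not any platform's API), an audit trail can be as simple as recording who performed which action before the action runs:

```python
import datetime
import functools

audit_log = []  # in-memory stand-in for a real, tamper-evident audit store

def audited(action):
    """Record who did what, and when, before running the wrapped call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            audit_log.append({
                "user": user,
                "action": action,
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@audited("read_customer_record")
def read_customer_record(user, record_id):
    # Hypothetical data access; a real system would check permissions here.
    return {"id": record_id, "owner": user}

read_customer_record("alice", 42)
```

A real deployment would ship these entries to the provider's logging service rather than a list, but the control objective is the same: every sensitive action leaves a who/what/when record.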


Learning to learn: will machines acquire knowledge as naturally as children do?

We create new-to-the-world machines, with sophisticated specifications, that are hugely capable. But to reach their potential, we have to expose them to hundreds of thousands of training examples for every single task. They just don’t ‘get’ things like humans do. One way to get machines to learn more naturally is to help them to learn from limited data. We can use generative adversarial networks (GANs) to create new examples from a small core of training data rather than having to capture every situation in the real world. It is ‘adversarial’ because one neural network is pitted against another to generate new synthetic data. Then there’s synthetic data rendering – using gaming engines or computer graphics to render new scenarios. Finally, there are algorithmic techniques such as domain adaptation, which involves transferable knowledge (using data in summer that you have collected in winter, for example), or few-shot learning, which makes predictions from a limited number of samples. Taking a different limited-data route is multi-task learning, where commonalities and differences are exploited to solve multiple tasks simultaneously.
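Few-shot learning in particular can be illustrated compactly. As a hedged sketch (toy data and a nearest-centroid rule stand in for the learned embedding models used in practice), a query can be classified from just a handful of labelled support examples per class:

```python
# Few-shot classification by nearest class centroid ("prototype"):
# each class is represented by the mean of its few support examples,
# and a query point takes the label of the closest prototype.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(query, support):
    """support maps label -> a handful of feature tuples per class."""
    prototypes = {label: centroid(pts) for label, pts in support.items()}
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist2(query, prototypes[label]))

# Toy example: two classes, only three support samples each.
support = {
    "cat": [(0.9, 1.1), (1.0, 0.9), (1.1, 1.0)],
    "dog": [(3.0, 3.1), (2.9, 3.0), (3.1, 2.9)],
}
label = classify((1.05, 0.95), support)
```

In real few-shot systems the raw features would be embeddings from a pretrained network, but the decision rule is often this simple: a few examples per class are enough to define a prototype.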


IT hiring: 5 signs of a continuous learner

Whatever you call it, it’s an important attribute to consider when hiring or grooming the most capable IT professionals today. A continuous learner can offer more bang for the buck in one of the strongest job markets in recent years. “We have found that many companies, while their job descriptions state they are looking for a certain number of years of experience in a laundry list of technologies, are being more flexible and hiring candidates that may be more junior, or those who lack a few main technologies,” Spathis says, noting that many organizations are willing to take the risk on more junior or less specifically experienced candidates who are eager, trainable, and able to learn new skills. There’s definite agreement on the demand for continuous learners in the IT function today. “To thrive during these changing times, it’s imperative that IT organizations continuously grow and change with changing needs,” says Dr. Sunni Lampasso, executive coach and founder of Shaping Success. “As a result, IT organizations that employ continuous learners are better equipped to navigate the changing work world and meet changing demands.”


Ethical and Productivity Implications of Intelligent Code Creation

AI technology is changing the working process of software engineers and test engineers. It is promoting productivity, quality, and speed. Businesses use AI algorithms to improve everything from project planning and estimation to quality testing and the user experience. Application development continues to evolve in its sophistication, while the business increasingly expects solutions to be delivered faster than ever. Most of the time, organizations have to deal with challenging problems like errors, defects, and other complexities while developing complex software. Development and testing teams no longer have the luxury of time they once enjoyed, when monthly product launches were the gold standard. Instead, today’s enterprises demand weekly releases and updates that trickle in even more frequently. This is where self-coding applications come into play. Applications that generate code themselves help programmers accomplish a task in less time and extend their programming ability. Artificial intelligence was once the result of coding; now coding is increasingly the result of artificial intelligence. It is now helping almost every sector of business, and its coders, to enhance the software development process.


How To Transition From Data Analyst To Data Scientist

Before even thinking about making the transition, one has to be very clear about what a data scientist does, and honestly assess the gap between the skills they have now and what the transition requires. A data scientist not only handles data but provides much deeper insights from it. Other than gaining the right mathematical and statistical know-how, training yourself to look at business problems with the mindset of a data scientist, and not just that of a data analyst, will be of great help. This means that while looking into a problem, developing your critical thinking and analytical skills, getting deep into the problem at hand, and coming up with the right way to approach the solution will train you for the future. A data analyst might not have great coding skills, but a data scientist surely has to know coding well. Data scientists use tools like R and Python to derive interpretations from the massive data sets they handle. As a data analyst, if you are not great at coding or don’t know the common tools, it would be wise to start taking basic courses on them and then use them in real-world applications.
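To make the Python point concrete, here is a minimal sketch of the kind of insight-extraction work described, using only the standard library (the sales figures are invented; real work would typically reach for pandas or scikit-learn):

```python
# Fit a least-squares trend line to toy monthly sales data and use it
# to forecast the next month -- moving from "what happened" (analyst)
# toward "what happens next" (scientist).
from statistics import mean

months = [1, 2, 3, 4, 5, 6]
sales  = [10.0, 12.1, 13.9, 16.2, 18.0, 19.8]

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = fit_line(months, sales)
forecast_month_7 = slope * 7 + intercept  # extrapolate the trend
```

The mechanics are deliberately trivial; the career shift the passage describes is in the habit of turning a table of numbers into a defensible prediction.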


Application Security Manager: Developer or Security Officer?

First, an ASM has to understand what a supervised project is about. This is especially important for agile development, where, unlike the waterfall model, you don’t have two months to perform a pre-release review. An ASM’s job is to make sure that the requirements set at the design stage are correctly interpreted by the team, properly adopted in the architecture, are generally feasible, and will not cause serious technical problems in the future. Typically, the ASM is the main person who reads, interprets, and assesses automated reports and third-party audits. ... Second, an ASM should know about various domains, including development processes and information security principles. Hard skills are also important because it’s very difficult to assess the results provided by narrow specialists and automated tools if you can’t read the code and don’t understand how vulnerabilities can be exploited. When a code analysis or penetration test reveals a critical vulnerability, it’s quite common for developers (who are also committed to creating a secure system) not to accept the results and to claim that the auditors failed to exploit the vulnerability.
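The point about reading code is easy to illustrate. A minimal, self-contained sketch (using an in-memory SQLite database with invented data) shows how a classic SQL injection works and why a parameterized query stops it; an ASM who can read this difference can also adjudicate a disputed pentest finding:

```python
# The difference between an injectable query and a safe one is one line.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # VULNERABLE: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # SAFE: parameterized query; input is treated as data, not as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"            # classic injection string
leaked = lookup_unsafe(payload)    # matches every row in the table
safe   = lookup_safe(payload)      # matches no user literally named that
```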


Top Open Source Security Tools

WhiteSource detects all vulnerable open source components, including transitive dependencies, in more than 200 programming languages. It matches reported vulnerabilities to the open source libraries in code, reducing the number of alerts. With more than 270 million open source components and 13 billion files, its vulnerability database continuously monitors multiple resources and a wide range of security advisories and issue trackers. WhiteSource is also a CVE Numbering Authority, which allows it to responsibly disclose new security vulnerabilities found through its own research. ... Black Duck software composition analysis (SCA) by Synopsys helps teams manage the security, quality, and license compliance risks that come from the use of open source and third-party code in applications and containers. It integrates with build tools like Maven and Gradle to track declared and transitive open source dependencies in applications built in languages like Java and C#. It maps string, file, and directory information to the Black Duck KnowledgeBase to identify open source and third-party components in applications built using languages like C and C++.
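At its core, software composition analysis of the kind these tools perform boils down to flattening a dependency tree, transitive dependencies included, and matching every resolved component against a vulnerability database. A toy sketch, with invented package names and CVE identifiers:

```python
# Minimal SCA sketch: resolve direct + transitive dependencies, then
# look each (name, version) pair up in an advisory database.

advisories = {
    ("liba", "1.2.0"): ["CVE-2021-0001"],
    ("libc", "0.9.1"): ["CVE-2021-0042"],
}

def resolve(dep_tree):
    """Flatten a nested dependency tree into (name, version) pairs."""
    flat = set()
    def walk(node):
        for (name, version), children in node.items():
            flat.add((name, version))
            walk(children)
    walk(dep_tree)
    return flat

def scan(dep_tree):
    return {dep: advisories[dep] for dep in resolve(dep_tree)
            if dep in advisories}

project = {
    ("app", "1.0.0"): {
        ("liba", "1.2.0"): {},
        ("libb", "2.0.0"): {("libc", "0.9.1"): {}},  # transitive hit
    },
}
findings = scan(project)
```

Note that `libc` is flagged even though the project never declared it directly; catching exactly these transitive hits is why the commercial tools track the full resolved tree rather than the manifest alone.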


Why You Don't Need to Be a Business Insider in Order to Succeed

No matter what anyone tells you, it’s not a zero-sum game. There is abundance out there for everyone. Of course, money becomes concentrated with various people, but wealth mobility is very real and happening all the time. We hear people talk about the 1% constantly (often in an effort to paint them as a monolithic, evil, controlling class). What they fail to recognize is that people are continually moving in and out of the 1%. Some of this is down to inherited wealth, and some is down to hard work — but it’s happening all the time. What really lies at the heart of this is fear. We abdicate our power to an imagined ruling class because we’re afraid of the unknown. And before you think this is about blaming you: it is our subconscious being unwilling to take the risk that stops us. You have a built-in stowaway in your mind who wants to maintain the status quo. Therefore, any new growth opportunities — while intellectually exciting and appealing — will be met with emotional resistance at some point. I’m sure you’ve had this happen to you before: you get a new career-changing offer, you do a little dance and head off to celebrate.


Why a new approach to eDiscovery is needed to decrease corporate risk

For businesses, the combination of these factors has led to a big increase in corporate risk, putting significant pressure on any corporate investigations that need to be conducted and making the eDiscovery process much more difficult. Not only are employees and their devices a lot less accessible than they used to be, but the growing use of personal devices, many of which lack proper security protocols or use unsecured networks, leaves company data much more vulnerable to theft or loss. If that wasn’t enough, heightened privacy concerns and the likelihood that personal data will be unintentionally swept up in any eDiscovery processes can make employees even more reluctant to hand over their devices to investigators if/when needed (if investigators can even get hold of them). As a result, many companies are suddenly finding themselves between a rock and a hard place. How can they operate a more employee-friendly hybrid working model while still maintaining the ability to carry out corporate investigations and eDiscovery in the event it’s required?


Three key areas CIOs should focus on to generate value

CIOs and IT executives should focus on three types of partner connections: one-to-one, one-to-many and many-to-many. A one-to-one connection can be taken to the next level and become a generative partnership where the enterprise and technology partner work together to create and build a solution that doesn’t currently exist. The resulting assets are co-owned and produce benefits and revenue for both partners. Generative partnerships are becoming more common. In fact, Gartner forecasts that generative-based IT spending will grow at 31% over the next five years. Beyond one-to-one connections is the formation of ecosystems of multiple partners. One-to-many partnerships work best when a single enterprise needs to focus many players on jointly solving a single problem – such as a city bringing together public and private entities to serve its citizens. Many-to-many partnerships are created when a platform brings many different enterprises’ products and services together, to be offered to many different customers. Often called platform business models, these marketplaces and app/API stores enable the many to help the many at ecosystem scale.



Quote for the day:

"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis

Daily Tech Digest - October 19, 2021

Micro Frontend Architecture

The idea behind Micro Frontends is to think about a web app as a composition of features that are owned by independent teams. Each team has a distinct area of business it cares about and specializes in. A team is cross-functional and develops its features end-to-end, from database to user interface. ... But why do we need micro frontends? Let’s find out. In the modern era of web apps, the front end is becoming bigger and bigger, and the back end is getting less important. Most of the code now lives in the front end, and the monolith approach doesn’t work for a larger web application. There needs to be a tool for breaking it up into smaller modules that act independently. The solution to the problem is the micro frontend. ... It heavily depends on your business case whether you should or should not use micro frontends. If you have a small project and team, micro frontend architecture is not really required. At the same time, large projects with distributed teams and a large number of requests benefit a lot from building micro frontend applications. That is why today, micro frontend architecture is widely used by many large companies, and that is why you should consider it too.


CodeSee Helps Developers ‘Understand the Codebase’

As a developer, you’ve likely faced one problem again and again throughout your career: struggling to understand a new codebase. Whether it’s a lack of documentation, or simply poorly written and confusing code, working to understand a codebase can take a lot of time and effort, but CodeSee aims to help developers not only gain an initial understanding, but to continually understand large codebases as they evolve over time. “We really are trying to help developers to master the understanding of codebases. We do that by visualizing their code, because we think that a picture is really worth a thousand words, a thousand lines of code,” said CodeSee CEO and co-founder Shanea Leven. “What we’re trying to do is really ensure that developers, with all of the code that we have to manage out there — and our codebases have grown exponentially over the past decade — that we can deeply understand how our code works in an instant.” Earlier this month, CodeSee, which is still in beta, launched OSS Port to bring its code visibility and “continuous understanding” product to open source projects, as well as give potential contributors and maintainers a way to find their next project.


Non-Coder to Data Scientist! 5 Inspiring Stories and Valuable Lessons

While looking for inspiring journeys, I focus on people coming from a non-traditional background. People coming from non-technology backgrounds. People having zero coding experience. I guess this makes their story inspiring. All those who found their success in data science were willing to learn to code. They were not intimidated by the Kaggle notebooks that they were not able to understand initially. They all understood that it takes time to gain knowledge, and they persevered until they acquired all the required knowledge. Programming is one of the biggest showstoppers. It is this particular skill that makes many frustrated. It even makes them give up their passion for a career in data science. Programming is not exactly a hard thing to learn. ... Having a growth mindset plays a major role in data science. There are many topics to learn and it can be overwhelming. Instead of saying “I can’t learn math,” “I can’t be a good programmer,” or “I can never understand statistics,” people with a growth mindset tend to stay positive and keep trying.


How To Stay Ahead of the Competition as an Average Programmer

Apart from getting the satisfaction of being helpful, it has multiple career benefits too. One, I get to learn a lot more by helping others. Two, continuously helping others builds trusted relationships within the organization. In the software industry, your allies come to your help more than you realize. They can return the favor during application integration, defect resolution, challenging meetings, or even in promotion discussions. If you know people and have helped them before, they will be happy to bail you out of difficult situations. Hence, never hesitate to help others at your workplace. ... At the same time, it might not be possible for you to help everyone. But you can explain why you are unable to help. Being arrogant or repeatedly rejecting requests as not your responsibility makes others think you are not a team player. ... While working in a team environment, you are bound to face challenges. You need to follow company policies and processes that you might find hinder your productivity. You will have to work with people who slow down the team’s progress due to their poor contribution.


A real-world introduction to event-driven architecture

An event-driven architecture eliminates the need for a consumer to poll for updates; it instead receives notifications whenever an event of interest occurs. Decoupling the event producer and consumer components is also advantageous to scalability because it separates the communication logic and business logic. A publisher can avoid bottlenecks and remain unaffected if its subscribers go offline, or if their consumption slows down. If any subscriber has trouble keeping up with the rate of events, the event stream records them for future retrieval. The publisher can continue to pump out notifications without throughput limitations and with high resilience to failure. Using a broker means that a publisher does not know its subscribers and is unaffected if the number of interested parties scales up. Publishing to the broker offers the event producer the opportunity to deliver notifications to a range of consumers across different devices and platforms. Estimates suggest that 30% of all global data consumed by 2025 will result from information exchange in real time.
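The decoupling described above can be sketched in a few lines. This minimal in-process broker (a toy stand-in for real systems such as Kafka or a cloud pub/sub service) shows a publisher that never references its subscribers, plus an event stream retained so a slow or recovering consumer can catch up:

```python
# Minimal publish/subscribe broker: topics, callbacks, and a retained
# event stream for late retrieval.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> callbacks
        self.stream = defaultdict(list)       # topic -> event history

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        self.stream[topic].append(event)      # retained for future retrieval
        for callback in self.subscribers[topic]:
            callback(event)                   # push, no polling needed

    def replay(self, topic):
        """Let a late or lagging consumer re-read the recorded stream."""
        return list(self.stream[topic])

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"id": 1, "status": "created"})
broker.publish("orders", {"id": 1, "status": "shipped"})
```

The publisher side (`publish`) has no idea how many subscribers exist, which is exactly the property that lets consumers scale up or go offline without affecting it.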


Pros and cons of cloud infrastructure types and strategies

A multi-cloud strategy simply means that an organisation has chosen to use multiple public cloud providers to host their environments. A hybrid cloud approach means that a company is using a combination of on-premises infrastructure, private cloud and public cloud — and possibly more than one of the latter, meaning that company would be implementing a multi-cloud strategy with a hybrid approach. At times, these terms are used interchangeably. Companies choose a multi-cloud strategy for a multitude of reasons, not least of which is avoiding vendor lock-in. Spreading workloads across multiple cloud providers increases reliability, as a company is able to fail over to a secondary provider if another provider experiences an outage. Optionality is a huge benefit to companies who want to be able to pick and choose which services will most seamlessly integrate into their environments, as each major public cloud provider provides some unique services for different types of workloads. Furthermore, when a company uses multiple public cloud providers, it retains flexibility and can transfer workloads from one provider to another.
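The failover benefit can be sketched simply. In this hypothetical example (the provider functions are invented stand-ins for real cloud SDK calls), a workload falls through an ordered list of providers until one accepts it:

```python
# Multi-cloud failover sketch: try the primary provider, fall back to
# secondaries on outage, and report all failures if none succeed.

def deploy_with_failover(workload, providers):
    """providers is an ordered list of (name, deploy_fn) pairs."""
    errors = {}
    for name, deploy in providers:
        try:
            return name, deploy(workload)
        except ConnectionError as exc:   # treat as a provider outage
            errors[name] = str(exc)
    raise RuntimeError(f"all providers failed: {errors}")

def cloud_a(workload):                   # simulated outage at the primary
    raise ConnectionError("cloud A unreachable")

def cloud_b(workload):                   # healthy secondary
    return f"{workload} running on cloud B"

provider_used, result = deploy_with_failover(
    "billing-service", [("cloud-a", cloud_a), ("cloud-b", cloud_b)])
```

Real failover also has to move data and DNS, not just the deploy call, which is why the passage's point about avoiding provider-specific lock-in matters: the less provider-unique the workload, the cheaper this fallback path is.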


Gartner: Top strategic technology trends for 2022

The first of those trends is the growth of the distributed enterprise. Driven by the massive growth in remote and hybrid working patterns, traditional office-centric organizations are evolving into geographically distributed enterprises. “For every organization, from retail to education, their delivery model has to be reconfigured to embrace distributed services,” Groombridge said. Such operations will stress the network that supports users and consumers alike, and businesses will need to rearchitect and redesign to handle it. ... “Data is widely scattered in many organizations, and some of that valuable data can be trapped in silos,” Groombridge said. “Data fabrics can provide integration and interconnectivity between multiple silos to unlock those resources.” Groombridge added that data-fabric deployments will also force significant network-topology readjustments and, in some cases, to work effectively, could require their own edge-networking capabilities. The result is that the fabric will unlock data that can be used by AI and analytics platforms to support new applications and bring about business innovation more quickly, Groombridge said.


BlackMatter Ransomware Defense: Just-In-Time Admin Access

To be fair, the BlackMatter alert, beyond including intrusion system rules, also details the group's known tactics, techniques and procedures, and includes additional recommended defenses, such as implementing "time-based access for accounts set at the admin-level and higher," due to ransomware-wielding attackers' propensity to attack organizations after hours, over weekends, on Christmas Eve or any other inconvenient time. What does time-based access look like? One approach is just-in-time access, which enforces least-privileged access except for temporarily granting higher access levels via Active Directory. "This is a process where a network-wide policy is set in place to automatically disable admin accounts at the AD level when the account is not in direct need," according to the advisory. "When the account is needed, individual users submit their requests through an automated process that enables access to a system, but only for a set timeframe to support task completion."
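The just-in-time model quoted from the advisory can be sketched as follows (a simplified illustration with invented names; a real implementation would hook into Active Directory and an approval workflow rather than a Python class):

```python
# Just-in-time admin access: accounts are disabled by default and
# enabled only for a bounded window when a request is granted.
import time

class AdminAccount:
    def __init__(self, user):
        self.user = user
        self.enabled_until = 0.0          # disabled by default

    def request_access(self, duration_seconds, now=None):
        """Grant a temporary elevation window.

        A real system would route this through an automated approval
        process before setting the window."""
        now = time.time() if now is None else now
        self.enabled_until = now + duration_seconds

    def is_enabled(self, now=None):
        now = time.time() if now is None else now
        return now < self.enabled_until   # expired windows auto-disable

acct = AdminAccount("alice")
before = acct.is_enabled(now=100.0)       # False: no window granted yet
acct.request_access(3600, now=100.0)      # one-hour window starting at t=100
during = acct.is_enabled(now=2000.0)      # True: inside the window
after  = acct.is_enabled(now=4000.0)      # False: window has expired
```

The security property is in the default: an attacker who compromises the account outside an approved window, such as over a weekend or on Christmas Eve, finds it disabled.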


Why is collaboration between the CISO and the C-suite so hard to achieve?

Poor communication between the CISO and business unit heads is a major barrier to safe and successful business transformation. To properly educate people within the organisation about the realities of a cyber attack, the CISO must move beyond data, buzzwords and technical jargon and tell a story that brings the threat to life for those without subject-matter insight. If the CISO can intelligibly and clearly articulate the threats and the steps necessary to mitigate them, they are much more likely to capture executives’ attention and help ensure that all key stakeholders understand the trade-offs between new technology and added risk. If they’re able to adapt their language to specific individuals and business functions, they’ll have even greater success. For instance, a chief marketing officer is most likely interested in the risks to customer data, while chief financial officers will want to better understand how to secure banking information. ... “CISOs still have more work to do in breaking down the communication barriers by talking in less technical language for boards to better understand potential business risks.”


Developer Learning isn’t just Important, it’s Imperative

Every company that isn’t consistently upgrading its codebase or shifting to new frameworks is facing a serious business problem. If your codebase is getting older and older, you face the risk of massive future migrations. And if you’re not moving to new framework versions, you’re missing important benefits that your team could otherwise leverage. Technical debt naturally increases over time. The longer it goes unaddressed, the sooner you’ll get stuck paying high costs in migration, hiring, or massive upskilling efforts that take weeks or months. Like saving for retirement, incremental upskilling pays dividends in the long run. Every industry leader I’ve talked to worries about the scarcity of high-quality software engineers. That means companies feel serious pressure to constantly hire new, better developers. But rather than looking externally for a solution, what if companies looked internally? Here’s the reality: meaningful developer learning helps companies convert silver medalists into gold medalists.



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - October 18, 2021

Magnanimous machines: Why AI work should work for people and not the other way around

The consolidation of power amongst a Big Tech elite fused with state intelligence grows ever stronger. These entities can know everything about us, yet carefully hide their own clandestine, obfuscated activities. The best defence against this asymmetry is radical, mandated, and cryptographically secure transparency. To illustrate: commercial aircraft carry two essential data recorders (‘black boxes’). One monitors the aircraft itself, and the other monitors cockpit chatter. Both recorders are necessary to understand why an incident has occurred. For the same reasons, we need a similar approach to humane technology. Transparency is the foundation upon which every other aspect of ethical technology rests. It is essential to understand a system and its functions, as well as attributes of the organisation and, not to forget, those who steer it. Through transparency, we can verify that the incentive structures within organisations are aligned towards producing honest, good-faith outcomes. We can understand what may have gone wrong and how to fix it in the future.


8 Keys to Failproof App Modernization

Typically, modernization initiatives are strategized or rolled out before major events or milestones: data center and vendor contracts coming up for renewal; software and hardware platforms going end of service and support life; government-imposed deadlines to implement regulatory and compliance requirements; an ageing workforce and the risk of a skills shortage. In all such scenarios, since the accumulated technical debt is so high, these become multi-year, multi-million modernization programs. Risks are equally high in such large programs. And to optimize costs and minimize risks, the temptation sometimes is to somehow get these workloads to the target platform [containerize or rehost without really changing the underlying architecture]. This will result in more technical debt and will necessitate another modernization initiative in a few years, and so it goes. The chances of success are much higher if the initiatives are incremental in nature and time-bound, say 3–6 months. In fact, it is a recommended practice in agile development to pay down technical debt regularly, every single sprint.


Three key issues to tackle before smart cities become a reality

Many smart systems require data to be validated and assimilated in real-time for it to be relevant. This poses a problem, in that it requires every citizen to agree to their data being collected and shared, which in turn requires trust. That means the collection of data and its use to influence critical decisions in smart cities, needs careful consideration. However, citizens often worry about being ‘tracked’ – a difficult perception to eradicate in a world where privacy and security are among the biggest challenges each of us faces. Overcoming it requires us to build a comprehensive data privacy and security strategy into any smart city development, with local governments then responsible for educating individuals and society on how their data will be stored, who has access, and how it can be used. Such strategies require careful consideration, as any mistakes that harm public trust could impact the success of smart cities. The NHS COVID-19 app is a good example of this – once people lacked trust in the application, it took only a matter of days for thousands of people to delete it. 


Engineering Digital Transformation for Continuous Improvement

Getting organizations to invest in improvements and embrace new ways of working is a challenge. They don’t just need the right technical solutions, they also need to address the organizational change management challenges that are creating resistance to new ways of working. Organizations frequently have champions that have ideas for improvement and are trying to influence change without a lot of success. These champions find that the harder they push for change, the more people resist. We can have all the best approaches in the world, but if we can’t figure out how to overcome this resistance, organizations will never adopt them and realize the benefits. While pushing for change is the natural approach, research by organizational change management experts, like Professor Jonah Berger in his book “The Catalyst,” suggests this is the wrong approach. His research shows that the harder you push for change, the more people resist. Whenever they feel like they are trying to be influenced, their anti-persuasion radar kicks in and instead they start shooting down ideas and resisting the change being offered.


A transactional approach to power

As Battilana and Casciaro tell it, it’s not your personal or positional power that determines your effectiveness in any given situation. It is your ability to understand what resources the involved parties want and how the resources are distributed—that is, the balance of power. “We find this extremely compelling,” explains Casciaro, “because it brings power relationships—whether they are interpersonal, intergroup, interorganizational, or international—down to four simple factors.” Taking this a step further, the ability to shift the balance of power within a situation determines your success at exercising power. Battilana and Casciaro find there are several key strategies that support this ability to rebalance power. If you have resources the other party values, attraction is a key strategy. You try to increase the value of those resources for the other party. Personal and corporate brand-building are organized around this strategy. If the other party has too many paths to access your resources, consolidation is a key strategy. You try to eliminate or otherwise lessen the alternatives. Employees join unions to limit the alternatives of employers and increase their power.


Treasury Dept. to Crypto Companies: Comply with Sanctions

The announcement is the latest in a series of moves from the Biden administration to combat ransomware, following high-profile attacks this year that disrupted the East Coast's fuel supply in the Colonial Pipeline incident; jeopardized the nation's meat supply by hitting JBS USA; and knocked some 1,500 downstream organizations offline by zeroing in on managed service provider Kaseya over the July Fourth holiday. Last month, the Treasury Department blacklisted Russia-based cryptocurrency exchange, Suex, for allegedly laundering tens of millions of dollars for ransomware operators, scammers and darknet markets. In its latest issuance, the department alleges that over 40% of Suex’s transaction history had been associated with illicit actors, involving the proceeds from at least eight ransomware variants. Similarly, this week, the White House National Security Council facilitated a 30-nation, two-day "counter-ransomware" event, which found senior officials strategizing on ways to improve network resiliency, addressing illicit cryptocurrency usage, and ways to heighten law enforcement collaboration and diplomacy.


DevSecOps: 11 questions to ask about your security strategy now

Where does friction exist between security and business goals? The question is relatively self-explanatory: DevSecOps exists in part to remove friction and bottlenecks that have historically introduced risks rather than reduce them. The question also has a subtext: What are we doing about it? This friction often goes unaddressed because, well, it’s unaddressed – as in, people avoid pointing it out or talking about it, whether because of poor relationships, fear factors, cultural acceptance, or other reasons. Leaders need to take an active role here by showing their willingness to talk about it, without finger-pointing or other toxic behaviors. “Leaders should constantly be probing and trying to understand the friction points between the business and DevSecOps,” says Jerry Gamblin, director of security research at Kenna Security, now part of Cisco. “These often uncomfortable conversations will help you refocus your team’s goals on the company’s goals.” A willingness to have those uncomfortable conversations as a pathway to positive long-term change is a key characteristic of a healthy culture.


The importance of crisis management in the age of ransomware

How to prepare for ransomware attacks is an often-asked question. From my point of view, the best action is to go through a checklist of the security controls that prevent hackers from taking control of your network. Organizations like Servadus offer a Ransomware Readiness Assessment, which helps organizational leadership identify current risks to the corporation. Of course, having up-to-date incident response and business continuity plans is part of that assessment. Ultimately, though, the real value comes from remediating weak cybersecurity controls. Additionally, organizations should implement a framework to shore up security control implementation and sustainability. Many organizations try to maintain compliance and security controls but become vulnerable to attacks three to six months after validating that those controls are in place. The long-term strategy is about validating sustainable security controls. The service framework also allows organizations to evaluate threats to the organization and vulnerabilities in the system software in use.


The importance of staff diversity when it comes to information security

A diverse information security team diversifies its decision-making, which contributes to the organisation's overall strength. Because each employee brings a unique perspective to the problem, it is simpler to recognise and address hidden vulnerabilities in security operations, and to identify and correct the deficiencies of other employees, helping them grow in their own areas of expertise. Consider the likelihood of a breach, and the "red team" that will be assigned to deal with it: the SOC analyst reviews the logs, the security engineer hunts for the vulnerability, and other team members work out a defensive approach to guard against it. As a result, maintaining a strong information security team makes a diverse workforce critical. It facilitates task efficiency while encouraging alternative viewpoints. ... An organisation's security cannot be handled by a single product, and in order to maintain and handle those security products, companies require many employees. So, to work efficiently and effectively without being reliant on a single person or product, this should be common knowledge.


Is it right or productive to watch workers?

The recent rise in employee surveillance accelerated during the pandemic, largely because it had to, but the bottom line is that we are now more than ever accustomed to being watched. We accept the intrusion of cameras in novel spaces under the promise of increased safety; doorbell cameras spring to mind, but so too do webcams and smartphones; we accept data tracking to prove we’re “not a robot” on websites; we accept that our information, our clicks, and our preferences are observed and noted. We seem to be primed now to accept that companies have a reasonable expectation to protect their own safety, so to speak, by monitoring us. One recent survey by media researcher Clutch of 400 US workers found that only 22% of 18- to 34-year-old employees were concerned about their employers having access to their personal information and activity from their work computers. Meanwhile, in a pre-pandemic survey of US workers by US media group Axios from August 2019, 62% of respondents agreed that employers should be able to use technology to monitor employees.



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kanter