Daily Tech Digest - December 26, 2021

All You Need to Know About Unsupervised Reinforcement Learning

Unsupervised learning can be considered the approach of learning from huge amounts of unannotated data, while reinforcement learning can be considered the approach of learning from very small amounts of data. A combination of these methods, unsupervised reinforcement learning, is essentially an improvement on reinforcement learning. In this article, we are going to discuss unsupervised reinforcement learning in detail, along with its special features and application areas. ... When we talk about the basic process followed by unsupervised learning, we define objective functions on it such that the process is capable of categorizing unannotated, or unlabeled, data. Unsupervised learning can address several problems, including the following: label creation, annotation, and maintenance form a challenging discipline that demands considerable time and effort; many domains, such as law, medicine, and ethics, require expert annotators; and in reinforcement learning, reward annotation is similarly difficult.
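
As a concrete example of defining an objective function over unlabeled data, the sketch below clusters unannotated points with k-means, whose objective (within-cluster variance) drives the categorization; the synthetic blobs and the cluster count are illustrative assumptions, not from the article.

```python
# A minimal sketch of unsupervised categorization: k-means minimizes an
# objective (within-cluster variance) to group unlabeled data, no annotation needed.
# Assumes scikit-learn is installed; the data here is synthetic for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabeled data: two blobs with no annotations attached.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5])   # categories discovered without any labels
print(kmeans.inertia_)      # the objective value (within-cluster variance)
```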


Explainable AI (XAI) Methods Part 1 — Partial Dependence Plot (PDP)

Partial Dependence (PD) is a global and model-agnostic XAI method. Global methods give a comprehensive explanation of the entire data set, describing the impact of feature(s) on the target variable in the context of the overall data. Local methods, on the other hand, describe the impact of feature(s) at the level of an individual observation. Model-agnostic means that the method can be applied to any algorithm or model. Simply put, PDP shows the marginal effect or contribution of individual feature(s) to the predicted value of your black box model ... Unfortunately, PDP is not some magic wand that you can wave on any occasion. It rests on one major assumption. The so-called assumption of independence is the biggest issue with PD plots. ... “If the feature for which you computed the PDP is not correlated with the other features, then the PDPs perfectly represent how the feature influences the prediction on average. In the uncorrelated case, the interpretation is clear: The partial dependence plot shows how the average prediction in your dataset changes when the j-th feature is changed.”
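
To make the "average prediction as feature j changes" idea concrete, here is a minimal sketch of how a one-dimensional PD curve can be computed; `model`, `X`, `j`, and the grid are placeholders, and scikit-learn's `sklearn.inspection.partial_dependence` offers a ready-made equivalent.

```python
# A minimal sketch of a partial dependence curve for feature j: fix feature j
# at each grid value across ALL rows, predict, and average the predictions.
# `model` is any fitted estimator exposing .predict(); names are illustrative.
import numpy as np

def partial_dependence_1d(model, X, j, grid):
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, j] = v  # force feature j to the grid value everywhere
        pd_values.append(model.predict(X_mod).mean())  # average prediction
    return np.array(pd_values)

# Usage, assuming a fitted model and a 2-D numpy array X:
# grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
# curve = partial_dependence_1d(model, X, j=0, grid=grid)
```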


Worried about super-intelligent machines? They are already here

For anyone who thinks that living in a world dominated by super-intelligent machines is a “not in my lifetime” prospect, here’s a salutary thought: we already live in such a world! The AIs in question are called corporations. They are definitely super-intelligent, in that the collective IQ of the humans they employ dwarfs that of ordinary people and, indeed, often of governments. They have immense wealth and resources. Their lifespans greatly exceed those of mere humans. And they exist to achieve one overriding objective: to increase and thereby maximise shareholder value. In order to achieve that they will relentlessly do whatever it takes, regardless of ethical considerations, collateral damage to society, democracy or the planet. One such super-intelligent machine is called Facebook. ... “We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we have to do to bring more communication in. The work we will likely have to do in China some day. All of it.”


Supervised vs. Unsupervised vs. Reinforcement Learning: What’s the Difference?

Reinforcement learning is a technique that provides training feedback using a reward mechanism. The learning process occurs as a machine, or Agent, interacts with an environment and tries a variety of methods to reach an outcome. The Agent is rewarded or punished when it reaches a desirable or undesirable State. The Agent learns which states lead to good outcomes and which are disastrous and must be avoided. Success is measured with a score; in the widely used Q-learning algorithm, this score (denoted Q) estimates the value of taking a given action in a given state, and the Agent iteratively learns to achieve a higher score. Reinforcement learning can be applied to the control of a simple machine like a car driving down a winding road. The Agent would observe its current State by taking measurements such as current speed, direction relative to the road, and distances to the sides of the road. The Agent can take actions that change its state, like turning the wheel or applying the gas or brakes.
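
As an illustration of that score-learning loop, here is a minimal tabular Q-learning update in Python; the state/action sizes and the hyperparameters are arbitrary choices for the sketch, and the environment itself is assumed away.

```python
# A minimal sketch of the tabular Q-learning update described above.
# States and actions are small integers; hyperparameters are illustrative.
import numpy as np

n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))  # the Agent's score table
alpha, gamma = 0.1, 0.99             # learning rate, discount factor

def q_update(s, a, reward, s_next):
    # Move Q(s, a) toward the reward plus the best value reachable from s_next.
    best_next = Q[s_next].max()
    Q[s, a] += alpha * (reward + gamma * best_next - Q[s, a])

# After many interactions, the greedy policy is simply:
# action = Q[state].argmax()
```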


Quantum computers: Eight ways quantum computing is going to change the world

Discovering new drugs takes a long time: scientists mostly adopt a trial-and-error approach, in which they test thousands of molecules against a target disease in the hope that a successful match will eventually be found. Quantum computers, however, have the potential to one day resolve the molecular simulation problem in minutes. The systems are designed to be able to carry out many calculations at the same time, meaning that they could seamlessly simulate all of the most complex interactions between the particles that make up molecules, enabling scientists to rapidly identify candidates for successful drugs. This would mean that life-saving drugs, which currently take an average of 10 years to reach the market, could be designed faster -- and much more cost-efficiently. Pharmaceutical companies are paying attention: earlier this year, healthcare giant Roche announced a partnership with Cambridge Quantum Computing (CQC) to support research efforts tackling Alzheimer's disease.


What is a honeypot crypto scam and how to spot it?

Even though a honeypot looks like a part of the network, it is isolated and monitored. Because legitimate users have no motive to access a honeypot, all attempts to communicate with it are regarded as hostile. Honeypots are frequently deployed in a network's demilitarized zone (DMZ). This strategy separates the honeypot from the main production network while keeping it connected. A honeypot in the DMZ may be monitored from afar while attackers access it, reducing the danger of a compromised main network. To detect attempts to infiltrate the internal network, honeypots can also be placed outside the external firewall, facing the internet. The actual location of the honeypot depends on how intricate it is, the type of traffic it is meant to attract and how close it is to critical business resources. It will always be isolated from the production environment, regardless of where it is placed. Logging and viewing honeypot activity provides insight into the degree and sorts of threats that a network infrastructure confronts, while diverting attackers' attention away from real-world assets.
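
As a rough illustration of the "any contact is hostile" principle, the sketch below listens on an otherwise unused port and logs every connection attempt; the port number and the logging destination are arbitrary assumptions, and a production honeypot would be far more elaborate.

```python
# A minimal sketch of a low-interaction honeypot: listen on an unused port and
# log every connection attempt, since legitimate users have no reason to connect.
# The port and log destination are illustrative choices, not recommendations.
import socket
import datetime

HOST, PORT = "0.0.0.0", 2222  # e.g. an SSH-like decoy port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        # Any contact is treated as hostile and recorded for later analysis.
        print(f"{datetime.datetime.utcnow().isoformat()} probe from {addr[0]}:{addr[1]}")
        conn.close()
```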


From DeFi to NFTs to metaverse, digital assets revolution is remaking the world

This decentralised concept offers both opportunities and challenges. How could a system work among a group of participants—there could be bad apples—if they were given the option of pseudonymity? Who will update the ledger? How will we reach a uniform version of truth? Bitcoin solved a lot of the long-standing issues with cryptographic consensus methods with a combination of private and public keys, and carefully aligned economic incentives. Suppose User A wants to transfer 1 bitcoin to User B. The transaction data would be authenticated, verified, and moved to the ‘mempool’ (the memory pool, a holding room for all unconfirmed transactions), where it would be collected with other transactions into groups, or ‘blocks’. One block becomes one entry in the Bitcoin ledger, and around 3,000 transactions would appear in one block. The ledger would be updated every 10 minutes, and the system would converge on the latest single version of truth. The next big question is, who in the system gets to write the next entry in the ledger? That is where the consensus protocol comes into play.
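
To make the ledger mechanics concrete, here is a minimal Python sketch of how each block commits to its transactions and to the previous block's hash, so altering history invalidates every later entry; the field names and toy transactions are illustrative, not Bitcoin's actual block format.

```python
# A minimal sketch of tamper-evident ledger entries: each block's hash covers
# its transactions and the previous block's hash, chaining the entries together.
import hashlib
import json

def block_hash(prev_hash, transactions, timestamp):
    payload = json.dumps(
        {"prev": prev_hash, "txs": transactions, "ts": timestamp},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = block_hash("0" * 64, ["A pays B 1 BTC"], 1)
block_1 = block_hash(genesis, ["B pays C 0.5 BTC"], 2)
print(genesis, block_1, sep="\n")  # changing the genesis txs changes both hashes
```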


The privacy dangers of web3 and DeFi – and the projects trying to fix them

Less discussed is the impact of web3 and DeFi on user privacy. Proponents argue that web3 will improve user privacy by putting individuals in control of their data, via distributed personal data stores. But critics say that the transparent nature of public distributed ledgers, which make transactions visible to all participants, is antithetical to privacy. “Right now, web3 requires you to give up privacy entirely,” Tor Bair, co-founder of privacy-focused blockchain Secret Network, tweeted earlier this year. “NFTs and blockchains are all public-by-default and terrible for ownership and security.” Participants in public blockchains don’t typically need to make their identities known, but researchers have demonstrated how transactions recorded on a blockchain can be linked to individuals. A recent paper by researchers at browser maker Brave and Imperial College London found that many DeFi apps incorporate third-party web services that can access users’ Ethereum addresses. “We find that several DeFi sites rely on third parties and occasionally even leak your Ethereum address to those third parties – mostly to API and analytics providers,” the researchers wrote.


The Importance of People in Enterprise Architecture

All the employees in the organization should have a shared understanding of the overarching future state and be empowered to update the future state for their part of the whole. The communication and democratization discussed in the AS-IS section are also necessary for TO-BE. People need regular, self-service access to a continuously evolving future-state architecture description. Each person should have access that provides views specific to that person and their role, and links to other people who will collaborate to promote that understanding and evolve the design. Progressive companies are moving away from the plan-build-run mentality, and this is changing the role of the architecture review board (ARB) operated by a central EA team. These boards traditionally act as a bureaucratic toll-gate, performing their role after the design is finished to ensure all system qualities are accounted for and the design is aligned with the future-state approach. However, democratizing the enterprise architecture role and sharing design autonomy now requires collaboration on the initial phase of design at the start of an increment. This collaboration ensures the reasoning behind the enterprise-wide future state is understood, and the desired system qualities are carefully evaluated.


The Best Way to Manage Unstructured Data Efficiently

A lot of people place heavy focus on data analysis techniques and machine learning models when building a high-quality ML production pipeline. However, what many miss is that storage is one of the most important aspects of the pipeline, because the pipeline has three main components: collecting data, storing it, and consuming it. Effective storage methods not only boost storage capabilities but also make collection and consumption more efficient. Object storage offers easy searching through customizable metadata, which helps with both. Not only do you want to choose the correct storage tech, but you also want to choose the correct provider. AWS comes to mind as one of the best object storage providers, mainly because its infrastructure provides smooth service and ease of scaling. Furthermore, for effective consumption of data, there must be a software layer that runs on top of this storage for data aggregation and collection purposes. This is also an important choice and deserves another article dedicated to the topic.
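
As a small illustration of the customizable metadata mentioned above, the sketch below attaches searchable metadata to an object using boto3, AWS's Python SDK; the bucket name, key, and metadata fields are placeholder assumptions, and valid AWS credentials are assumed.

```python
# A minimal sketch of attaching custom metadata to an S3 object, which is what
# makes unstructured data findable for later collection and consumption.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-ml-datalake",               # placeholder bucket
    Key="images/scan_0001.png",            # placeholder key
    Body=b"<binary image bytes>",          # stand-in for real file contents
    Metadata={"cohort": "trial-a", "modality": "mri", "labelled": "false"},
)

# The metadata travels with the object and can drive downstream pipelines:
head = s3.head_object(Bucket="my-ml-datalake", Key="images/scan_0001.png")
print(head["Metadata"])
```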



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - December 25, 2021

10 data-driven strategies to spark conversions in 2022

Conversion begins with a click. And clicks come after you have successfully grabbed your user’s attention. A headline is often the first thing your users come across, and hence an excellent tool for grabbing their attention. Therefore, using attention-grabbing headlines (paired with other factors) can lead to better conversions. This is not your pass to create controversial and low-value titles. Grab attention while delivering value and maintaining class. Again, tap into website analytics to find out which headlines have worked the best for you. If you are entirely new to the website world, know that headlines with numbers have been shown to achieve 30% higher conversions than those without numbers. Additionally, short and concise headlines that use a negative superlative (like x number of things you have never seen before or x killer Instagram profiles you need to follow) tend to earn more clicks. A/B testing or split testing reveals incredibly insightful data that can work wonders on your bottom line.
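
As a sketch of how split-test results can be checked before declaring a winning headline, the snippet below runs a two-proportion z-test with statsmodels; the click and impression counts are invented for illustration.

```python
# A minimal sketch of evaluating an A/B headline test with a two-proportion
# z-test. Assumes the statsmodels package; the counts are made up.
from statsmodels.stats.proportion import proportions_ztest

clicks = [130, 100]          # conversions for headline A and headline B
impressions = [2000, 2000]   # visitors shown each headline

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference in conversion rates is unlikely
# to be chance, so the winning headline can be rolled out.
```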


TechScape: can AI really predict crime?

The LAPD is working with a company called Voyager Analytics on a trial basis. Documents the Guardian reviewed and wrote about in November show that Voyager Analytics claimed it could use AI to analyse social media profiles to detect emerging threats based on a person’s friends, groups, posts and more. It was essentially Operation Laser for the digital world. Instead of focusing on physical places or people, Voyager looked at the digital worlds of people of interest to determine whether they were involved in crime rings or planned to commit future crimes, based on who they interacted with, things they’ve posted, and even their friends of friends. “It’s a ‘guilt by association’ system,” said Meredith Broussard, a New York University data journalism professor. Voyager claims all of this information on individuals, groups and pages allows its software to conduct real-time “sentiment analysis” and find new leads when investigating “ideological solidarity”. “We don’t just connect existing dots,” a Voyager promotional document read. “We create new dots. What seem like random and inconsequential interactions, behaviours or interests, suddenly become clear and comprehensible.”


Privacy and Confidentiality in Security Testing

Now that we understand the difference between privacy and confidentiality and how each can affect a person, we can talk about keeping both safe during testing. The increasing number of malware bots makes business owners concerned about keeping data confidential. It also makes implementing security testing vital for any software development effort, and especially for web applications. Knowing how to test software so that no personal data on a site can be compromised is essential. For this, let’s go through the steps QA testers can take to implement security testing. To illustrate our suggestions we'll use the interface of aqua ALM, which is popular among QA teams for test management in security testing. ... The main goal of security testing is to protect applications from malware penetration and unauthorized access, and to protect the confidentiality and privacy of individuals.


An introduction to the magic of machine learning

We hear about machine learning a lot these days, and in fact it’s all around us. It can sound kind of mysterious, or even scary, but it turns out that machine learning is just math. And to prove that it’s just math, I will write this article the old-school way, with hand-written equations instead of code. If you prefer to learn by… To explain what machine learning is and how math makes it work, we will do a full walk-through of logistic regression, a fairly simple but fundamental model that is in some sense the building block of more complex models like neural networks. If I had to pick one machine learning model to understand really well, this would be it. Most often, we use logistic regression for a task called binary classification. In binary classification, we want to learn how to predict whether a data point belongs to one of two groups or classes, labeled 0 and 1. ... These training data allow us to learn the optimal theta parameters. What does optimal mean? Well, one reasonable and quite common definition is to say that the optimal theta is the set of parameters that maximizes the probability of obtaining our training data.
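
Read as math, the model predicts p = sigmoid(theta·x) and training adjusts theta to maximize the log-likelihood of the observed labels. Below is a minimal numpy sketch of that idea; the synthetic data, learning rate, and iteration count are illustrative choices, not from the article.

```python
# A minimal sketch of logistic regression trained by maximizing the
# log-likelihood of the training data via gradient ascent.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)  # synthetic 0/1 labels

theta = np.zeros(2)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ theta)       # predicted probability of class 1
    grad = X.T @ (y - p)         # gradient of the log-likelihood
    theta += lr * grad / len(y)  # ascend: make the data more probable

print(theta)  # should point roughly in the direction of [2.0, -1.0]
```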


Alternative Feature Selection Methods in Machine Learning

The "Wrapper Methods" category includes greedy algorithms that will try every possible feature combination based on a step forward, step backward, or exhaustive search. For each feature combination, these methods will train a machine learning model, usually with cross-validation, and determine its performance. Thus, wrapper methods are very computationally expensive, and often, impossible to carry out. The "Embedded Methods," on the other hand, train a single machine learning model and select features based on the feature importance returned by that model. They tend to work very well in practice and are faster to compute. On the downside, we can’t derive feature importance values from all machine learning models. For example, we can’t derive importance values from nearest neighbours. In addition, co-linearity will affect the coefficient values returned by linear models, or the importance values returned by decision tree based algorithms, which may mask their real importance. Finally, decision tree based algorithms may not perform well in very big feature spaces, and thus, the importance values might be unreliable.


Diversity in cybersecurity: Barriers and opportunities for women and minorities

Our world is getting increasingly digitized, and cybercrime continues to break new records. As cyber risks intensify, organizations are beefing up defenses and adding more outside consultants and resources to their teams. But they are running into a major roadblock—a long-standing shortage of qualified cybersecurity talent. A closer look at the numbers reveals an even more startling statistic: women comprise only 25% of the cybersecurity workforce, according to research from ISC2, despite outpacing men in overall college enrollment. There are a number of reasons why women and minorities pursuing cybersecurity careers can be significantly beneficial to the overall industry. Here are two: People from different genders, ethnicities and backgrounds can provide a fresh perspective on solving highly complex security problems. And then there’s the simple fact that leaving cybersecurity jobs unfilled puts businesses at risk. As the cybersecurity skills gap continues to grow, that risk only increases.


Half-Billion Compromised Credentials Lurking on Open Cloud Server

“Through analysis, it became clear that these credentials were an accumulation of breached datasets known and unknown,” the NCA said in a statement provided to Hunt. “The fact that they had been placed on a U.K. business’s cloud storage facility by unknown criminal actors meant the credentials now existed in the public domain, and could be accessed by other third parties to commit further fraud or cyber-offenses.” The passwords have been added to HIBP, which means they’re searchable by individuals and companies worldwide seeking to verify the security risk of a password before usage. Previously unseen passwords include flamingo228, Alexei2005, 91177700, 123Tests and aganesq, Hunt said in a blog posting Monday. “It is both unfortunate and mind-boggling that over 200 million of the passwords that were shared by the U.K. NCA were brand new to the HIBP service,” Baber Amin, COO at Veridium, said via email. “It points to the sheer size of the problem, the problem being passwords, an archaic method of proving one’s bona fides. If there was ever a call to action to work towards eliminating passwords and finding alternates, then this has to be it.”
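
For anyone wanting to verify a password against HIBP as described, the service exposes a k-anonymity range API: only the first five characters of the password's SHA-1 hash ever leave your machine. A minimal sketch, assuming the `requests` package:

```python
# Check a password against the HIBP Pwned Passwords range API.
# Only a 5-character hash prefix is sent; matching is done locally.
import hashlib
import requests

def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # times this password appears in known breaches
    return 0

print(pwned_count("flamingo228"))  # one of the newly added passwords above
```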


A cybersecurity expert explains Log4Shell – the new vulnerability that affects computers worldwide

Log4Shell works by abusing a feature in Log4j that allows users to specify custom code for formatting a log message. This feature allows Log4j to, for example, log not only the username associated with each attempt to log in to the server but also the person’s real name, if a separate server holds a directory linking user names and real names. To do so, the Log4j server has to communicate with the server holding the real names. Unfortunately, this kind of code can be used for more than just formatting log messages. Log4j allows third-party servers to submit software code that can perform all kinds of actions on the targeted computer. This opens the door for nefarious activities such as stealing sensitive information, taking control of the targeted system and slipping malicious content to other users communicating with the affected server. It is relatively simple to exploit Log4Shell. I was able to reproduce the problem in my copy of Ghidra, a reverse-engineering framework for security researchers, in just a couple of minutes. 


The Metaverse is Overhyped; But by 2050, AI Will Make It Real

The metaverse today is not a place to go so much as a collection of technologies surrounding tools like NVIDIA’s Omniverse that can create simulations used to train robots and autonomous cars. It is an easier-to-use and more comprehensive tool set, like what architects have used to create virtual buildings, but with far more realistic results, including lighting effects, reflections, and a limited application of physics. For point simulation, the metaverse concept is workable, but it really is just a better simulation platform for point projects today, and nowhere near the full virtual world we expect. By the end of the decade, NVIDIA’s Earth-2 project should be viable. This is currently the most aggressive public project in process, and Earth-2 could well become the foundation of a far broader use of the concept. Initially, Earth-2 will be limited by the technology available at the time, but once it is workable, it will be able to predict weather events more accurately and model potential climate change remedies better than the simulations we currently have.


Eliminating artificial intelligence bias is everyone's job

As new tools are provided around the auditability of AI, we'll see a lot more companies regularly reviewing their AI results. Today, many companies either buy a product that has an AI feature or capability embedded, or the AI is part of a proprietary feature of that product that doesn't expose auditability. Companies may also stand up basic AI capabilities for a specific use case, usually at that AI "discover" level of usage. However, in each of these cases the auditing is usually limited. Where auditing really becomes important is in the "recommend" and "action" levels of AI. In these two phases, it's important to use an auditing tool so as not to introduce bias and skew the results. One of the best ways to help with auditing AI is to use one of the bigger cloud service providers' AI and ML services. Many of those vendors have tools and tech stacks that allow you to track this information. It is also key that identifying bias or bias-like behavior be part of the training for data scientists and AI and ML developers. The more people are educated on what to look out for, the more prepared companies will be to identify and mitigate AI bias.



Quote for the day:

“Hard times are sometimes blessings in disguise. We do have to suffer but in the end it makes us strong, better and wise.” -- Anurag Prakash Ray

Daily Tech Digest - December 24, 2021

A CIO’s Guide To Hybrid Work

CIOs reimagining an organization’s digital strategy need to ensure that their employees can communicate effectively and have complete access to resources needed to perform their jobs. This means that employees do not receive just their laptops and an email account but have full access to a complete tech stack and set of solutions that empower them to interact with their peers and customers. AI- and ML-powered solutions help enhance the employee experience by saving time for people to connect with their teams and helping infuse mental well-being along with a company’s values and purpose. The best way to understand whether your employees are well supported to carry on their job is by gathering feedback from them. Send out a simple form with both open and closed questions on the potential communication gaps, remote work support and access to available resources. Once you have all the information, analyze the gaps and improvement opportunities to pick the right tools. Make sure that the tools you choose integrate with your organization’s tech ecosystem while delivering value.


Whatever Happened to Business Supercomputers?

Supercomputers are primarily used in areas in which sizeable models are developed to make predictions involving a vast number of measurements, notes Francisco Webber, CEO at Cortical.io, a firm that specializes in extracting value from unstructured documents. “The same algorithm is applied over and over on many observational instances that can be computed in parallel,” says Webber, “hence the acceleration potential when run on large numbers of CPUs.” Supercomputer applications, he explains, can range from experiments in the Large Hadron Collider, which can generate up to a petabyte of data per day, to meteorology, where complex weather phenomena are broken down to the behavior of myriads of particles. There's also a growing interest in graphics processing unit (GPU)- and tensor processing unit (TPU)-based supercomputers. “These machines may be well suited to certain artificial intelligence and machine learning problems, such as training algorithms [and] analyzing large volumes of image data,” Buchholz says.


The State of Hybrid Workforce Security 2021

The time is right for IT leaders to turn to their teams and gain a clear understanding of what they actually have in place. While the initial response to the pandemic was reactionary, now is a moment to assess an organization’s app and security landscape and what is actually providing access to users no matter where they are, whether they’re at home, in the branch, or anywhere in between. Rationalizing the purpose and usage of solutions that are in place today provides a real opportunity for consolidation—one that did not seriously exist previously. Many organizations will be able to drive better outcomes around security posture, reducing risk, and improving total cost of ownership. Consolidating the number of disparate tools in use to provide secure user access improves security posture consistency and reduces the number of policies that have to be administered. Besides reducing needed multi-product training and management effort, a platform approach drives better economies of scale, resulting in a lower total cost of ownership. Net-net, consolidation delivers a far more effective approach for security.


What is Web3, is it the new phase of the Internet and why are Elon Musk and Jack Dorsey against it?

In the Web3 world, search engines, marketplaces and social networks will have no overriding overlord. So you can control your own data and have a single personalised account where you could flit from your emails to online shopping and social media, creating a public record of your activity on the blockchain system in the process. A blockchain is a secure database that is operated by users collectively and can be searched by anyone. People are also rewarded with tokens for participating. It comes in the form of a shared ledger that uses cryptography to secure information. This ledger takes the form of a series of records or “blocks” that are each added onto the previous block in the chain, hence the name. Each block contains a timestamp, data, and a hash. This is a unique identifier for all the contents of the block, sort of like a digital fingerprint. ... The idea of a decentralised internet may sound far-fetched but big tech companies are already betting big on it and even assembling Web3 teams.


Will A.I. Guarantee Our Humane Futures?

Both private firms and governments adopting A.I.-driven technologies could be attracted to the opportunity of violating individuals' privacy and data security for their own selfish reasons. Large private corporations, especially the big four of big tech (Google, Amazon, Apple, and Facebook), are already sitting on massive quantities of user data, which they are looking to monetize, and such monetization of data in the name of customized services and targeted advertisements could have a disastrous impact on users' privacy and data security. The bigger threat will emerge when such sensitive user data is misused for social engineering to alter customers' behavior and choices. ... Today, algorithms are so sophisticated that they can predict the user's next action based on analysis of their private data. It's very much possible to use such data to nudge an individual discreetly into altering his behavior and choices, and this has far-reaching implications for the economy, for society, and for the security of a democratic nation.


Protection against the worst consequences of a cyberattack

Businesses need an incident response plan that will clearly outline the steps to be followed when a data breach occurs. By neglecting to do so, the organization will become the low hanging fruit that attackers go after. Even a rudimentary plan is better than no plan at all, and those without one will suffer a much higher impact. The incident response plan needs to outline the steps to be followed when a data breach occurs. Teams need to identify and classify data to understand what levels of protection are needed, a step that is regrettably missed all the time. For instance, personal identifiable customer information needs a different level of protection to the photos from the last Christmas party. Teams also need to maintain cyber hygiene through regular patching, and since 90% of breaches start with an email, it is very important to have email protection, multi-factor authentication and end-point protection to prevent any lateral movements by cybercriminals. Perhaps my biggest piece of advice is to have experienced personnel monitoring your environment 24/7, 365 days a year (including Christmas). 


Initial access brokers: How are IABs related to the rise in ransomware attacks?

Initial access brokers sell access to corporate networks to any person wanting to buy it. Initially, IABs were selling company access to cybercriminals with various interests: getting a foothold in a company to steal its intellectual property or corporate secrets (cyberespionage), finding accounting data allowing financial fraud or even just credit card numbers, adding corporate machines to some botnets, using the access to send spam, destroying data, etc. There are many cases for which buying access to a company can be interesting for a fraudster, but that was before the ransomware era. ... Ransomware groups saw an opportunity here to suddenly stop spending time on the initial compromise of companies and to focus on the internal deployment of their ransomware and sometimes the complete erasing of the companies' backup data. The cost for access is negligible compared with the ransom that is demanded of the victims. IAB activities became increasingly popular in the cybercriminal underground forums and marketplaces. 


8 Real Ways CIOs Can Drive Sustainability, Fight Climate Change

The concept of the circular economy has been around for a while, but it’s now taking off in a big way. NTT’s Lombard says that it’s a key to getting to net zero. This means establishing business and IT supply chains that focus on optimizing the lifespan of equipment, moving toward zero-emission closed loop recycling and curtailing e-waste. For example, there’s a growing second-hand market for high-end gear, including hyperscale infrastructure. Companies like IT Renew recertify these systems and place them under warranty. “Everyone wins,” says Lucas Beran, principal analyst at consulting firm Dell’Oro Group. “The original user gets two or three years of use; the buyer gets another three or four years -- all while TCO and the carbon footprint drop.” ... Data centers are expected to consume about 8% of the world's electricity by 2030. While refreshing legacy servers, optimizing data, virtualizing workloads, consolidating virtual machines and green hosting all deliver benefits, these strategies aren’t enough to tackle climate change. Organizations must fundamentally rethink data center design and function.


How Safety Became One of The Most Critical Smart City Applications

For cities, it can be challenging to ensure citizen and worker safety when natural disasters occur. Incidents such as hurricanes, floods, fires and gas leaks are unpredictable and often impossible to prevent. To put it in perspective, most people have lived through some disaster, with 87% of consumers saying they’ve been impacted by one in the last five years (not counting the COVID pandemic). Safety will only become more critical over the next few decades as natural disasters become more frequent, intense and costly. Since 1970, the number of disasters worldwide has more than quadrupled to around 400 a year. Since 1998, natural disasters worldwide have killed more than 1.3 million people and left another 4.4 billion injured, homeless, displaced, or in need of emergency assistance. Smart sensors and advanced analytics can help communities better predict, prepare for and respond to these emergency situations. For example, IoT sensors, such as pole tilt, electric distribution line, leak detection and air quality sensors, can be leveraged to mitigate risk and minimize damage.


Avoiding Technical Bankruptcy: a Whole-Organization Perspective on Technical Debt

It is regrettable that the meaning of the technical debt metaphor has been diluted in this way, but in language as in life in general, pragmatics trump intentions. This is where we are: what counts as "technical debt" is largely just the by-product of normal software development. Of course, no-one wants code problems to accumulate in this way, so the question becomes: why do we seem to incur so much inadvertent technical debt? What is it about the way we do software development that leads to this unwanted result? These questions are important, since if we can go into technical debt, then it follows that we can become technically insolvent and go technically bankrupt. In fact, this is exactly what seems to be happening to many software development efforts. Ward Cunningham notes that "entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation". That stand-still is technical bankruptcy.



Quote for the day:

“When you take risks you learn that there will be times when you succeed and there will be times when you fail, and both are equally important.” -- Ellen DeGeneres

Daily Tech Digest - December 23, 2021

Top 6 trends in data and analytics for 2022

A data fabric is an architecture that provides visibility of data and the ability to move, replicate and access data across hybrid storage and cloud resources. Through near real-time analytics, it puts data owners in control of where their data lives across clouds and storage so that data can reside in the right place at the right time. IT and storage managers will choose data fabric architectures to unlock data from storage and enable data-centric vs. storage-centric management. For example, instead of storing all medical images on the same NAS, storage pros can use analytics and user feedback to segment these files, such as by copying medical images for access by machine learning in a clinical study or moving critical data to immutable cloud storage to defend against ransomware. Many organizations today have a hybrid cloud environment in which the bulk of data is stored and backed up in private datacenters across multiple vendor systems. As unstructured (file) data has grown exponentially, the cloud is being used as a secondary or tertiary storage tier. It can be difficult to see across the silos to manage costs, ensure performance and manage risk. 


2022 technology trend review, part one: Open source, cloud, blockchain

Blockchain platforms are by and large open source too, but although data-related, theirs is a different story. Let's get that out of the way: was 2021 a breakout year for blockchain? No, not really. Will 2022 be a breakout year for blockchain? Probably not. But that's not the point. Blockchain's sudden rise to stardom in 2017 was rather abrupt and premature. The concepts and the technology are still under development, while mainstream adoption is still tentative. To speak in hype cycle terms, blockchain is going through the Trough of Disillusionment. But that does not mean it's without significance. To reiterate: the transformational potential is there, but there's still a long way to go, both on the technical and on the organizational and operational side of things. In 2020, blockchain-powered DeFi rose to prominence. In 2021, DeFi hit the reality wall. DeFi stands for Decentralized Finance. In short, DeFi's promise is to be able to cut out middlemen from all kinds of transactions. In 2020, DeFi saw lots of growth, some of it warranted, we noted last year.


Best of 2021 – 7 Popular Open Source CI/CD Tools

Argo CD is a CI/CD tool for Kubernetes development. It is an open source project which is currently in the incubation status at the Cloud Native Computing Foundation (CNCF). It uses Git repositories to store the state of Kubernetes applications, monitors applications and can resync clusters to the desired state, as represented in the git configuration. This innovative approach also allows you to store multiple desired states of a Kubernetes application, using branches, tags, or by pinning manifest versions using a Git commit. This provides a flexible environment for managing Kubernetes configurations during the development process. ... CircleCI is an open source CI/CD tool. It includes features for job orchestration, resource configuration, caching, debugging, security and dashboard reports. CircleCI integrates with a variety of tools, including GitHub, Heroku, Slack and Docker. CircleCI is available in three tiers, one of which is free. You can use it in the cloud or on-premises with Linux, Mac or Windows machines. 


Managing state with Elf, a new reactive framework

Elf is a reactive and immutable state management library built on top of RxJS. Elf provides us with a wide array of tools to manage our state. Because of this, there is some terminology we should know: observables, observers, and subscriptions. Observables are objects that can emit data over a period of time. They function as wrappers around data sources or streams of values. Observers are consumers of the data that observables store. They execute a piece of code if the data being observed is mutated or if an error occurs, and react to state changes. They also implement up to three methods: next, error, and complete. We will not look at these in detail because they are specific to RxJS and therefore beyond the scope of this article. Subscriptions are how we connect observers to observables. Observers subscribe to observables, watch for any changes in the data, and react to those changes. ... Elf entities are unique types of Elf stores. Entities act in the same manner as tables in a database, and we can store large collections of similar data in entities.
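
Elf itself is a JavaScript library, but the observable/observer/subscription vocabulary carries over directly to RxPY, the Python port of Rx. A minimal sketch of the three concepts, assuming RxPY 4.x (package name `reactivex`); this is an analogy, not Elf's own API.

```python
# Observable, observer, and subscription -- the Rx vocabulary used by Elf,
# illustrated with RxPY. The state dictionaries are illustrative.
import reactivex
from reactivex import operators as ops

# Observable: a wrapper around a stream of state values.
state = reactivex.of({"count": 1}, {"count": 2}, {"count": 3})

# Observer: reacts to each emission, to errors, and to completion
# (the next / error / complete trio mentioned above).
subscription = state.pipe(
    ops.map(lambda s: s["count"]),
).subscribe(
    on_next=lambda count: print("state changed:", count),
    on_error=lambda err: print("error:", err),
    on_completed=lambda: print("done"),
)

subscription.dispose()  # tear down the subscription when no longer needed
```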


FBI: Another Zoho ManageEngine Zero-Day Under Active Attack

The bug is the third zero-day under active attack that researchers have discovered in the cloud platform company’s ManageEngine suite since September, spurring dire warnings from the FBI and researchers alike. Though no one has yet conclusively identified the APT responsible, it’s likely the attacks are linked and those responsible are from China, previous evidence has shown. Earlier this month, researchers at Palo Alto Networks Unit 42 revealed that state-backed adversaries were using vulnerable versions of ManageEngine ServiceDesk Plus to target a number of U.S. organizations between late October and November. The attacks were related to a bug revealed in a Nov. 22 security advisory by Zoho alerting customers of active exploitation against the newly registered CVE-2021-44077, found in ManageEngine ServiceDesk Plus. The vulnerability, which allows for unauthenticated remote code execution, impacts ServiceDesk Plus versions 11305 and below.


Vulnerabilities to fraud are increasing across the board

In a phenomenon McKinsey referred to as The Quickening, e-commerce saw more than a decade’s worth of growth in the first quarter of 2020, as more consumers than ever before turned to digital solutions. According to media regulator Ofcom, UK adults spent an average of three hours and 47 minutes online every day during the pandemic, prompting an increase in the number of personal accounts for banking, financial services, e-commerce shopping and media streaming. As logins soared, so did the opportunities for fraud. While new account opening fraud remains the most popular form of automated attack across the customer journey, with one in 11 transactions in the Digital Identity Network estimated to be an attempt, overall this attack vector fell 10% YoY. A corresponding growth of 52% in login attacks and an 18% growth in payment attacks – testing stolen card credentials – reinforces the hypothesis that fraudsters are automating attacks to test the validity of stolen credentials on an industrial scale.


3 Meaningful KPIs to Focus Agile Development, DevOps, and IT Ops to Deliver Business Outcomes

Speed without guard rails and safety can lead to disastrous crashes – but stagnation and bureaucracy-driven change processes that slow the delivery of innovation, new capabilities, and improvements can lead to disruption. Whether you are agile, DevOps, or IT Ops-centric, we’re all trying to deliver positive business outcomes through transformation management. And change failure rate is the first indicative KPI of how well IT performs in delivering business outcomes. When change failure rates are high, IT has to slow down and fix things, while business stakeholders lose trust in IT. And that’s just the start of the impacts, because change failures can lead to outages, security issues, and other major incidents. A measurement is only as good as its ability to lead to action. Using an AIOps platform to improve root cause analysis by correlating incidents to the changes that caused them is a best practice for identifying systemic causes and helping reduce change failure rates.
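
As a sketch of the KPI itself, change failure rate is simply the share of production changes that caused an incident or required remediation; the record structure below is an illustrative assumption, not a standard schema.

```python
# A minimal sketch of computing the change failure rate KPI described above.
changes = [
    {"id": "CHG-101", "failed": False},
    {"id": "CHG-102", "failed": True},   # rolled back after a bad deploy
    {"id": "CHG-103", "failed": False},
    {"id": "CHG-104", "failed": False},
]

failure_rate = sum(c["failed"] for c in changes) / len(changes)
print(f"change failure rate: {failure_rate:.0%}")  # 25%
```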


The Future of Banking When a ‘New Normal’ Has Yet to be Defined

Everything that we thought was going to be the future in 2030 ended up just being how we get through the next 12 months. This means that we now need to reset our expectations about what innovation really looks like, because no one’s impressed anymore by your having a mobile app, a digital channel, some level of automation, or support for digital signatures. Let’s face it, if you hadn’t figured out how to do these things in the most recent period, you’re probably no longer in business. ... The ‘Great Resignation’ is actually accelerating automation. Since no one can find people to work, they’re doubling down on automation, artificial intelligence, and machine learning. When more people reenter the workforce, we need to start to define what a human’s good at and what a machine’s good at. Part of being an effective human leader or worker in the future will be the ability to constantly reinvent yourself. Likewise, the key attribute to look for in someone in the future is the ability to destroy their own job.


Transforming government software development and digital services

There are a plethora of country-specific laws and digital government initiatives that aim to rethink public sector IT. One example of the collaborative approach mentioned earlier is Germany’s Online Access Act which aims to bring together the country’s 16 federal states and 11,000 local governments under one digital banner. This means that all services offered at federal, state and local level are to be accessible online via their own portals, with these portals linked within a network. With a digital account, citizens can reach all federal, state and local services from this network in just three clicks. To enable this, uniform IT standards and interfaces are necessary across the board. Another interesting development is the public sector taking cues from Silicon Valley to become more efficient, moving from a bureaucratic culture to a generative one. One example of this is Kessel Run, which aims to revolutionise the software acquisition process for the United States Air Force (USAF). 


Combating Synthetic ID Fraud in 2022

Technologies such as machine learning are also being used by security vendors to fight against SIF. "SIF’s use of machine learning is largely what makes it effective at bypassing legacy fraud detection systems. Needless to say, banks can use the same technology to identify these attacks. However, despite having multiple vendors out there claiming to leverage machine learning techniques, financial institutions have so far failed to combat SIF," says People’s United Bank's Boyer. Boyer says financial institutions are not using these technologies in the right manner. "Financial institutions need to start using machine learning techniques correctly. Many businesses have a 'set it once and forget it' approach. There has to be some kind of human interaction to differentiate between fraud and legitimate transactions." And vendors must change their approach too, she says. "Vendors are checking personally identifiable information that has been used previously to verify its legitimacy. 



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - December 22, 2021

Cybersecurity spending trends for 2022: Investing in the future

Despite the steady state of funding, CISOs aren’t going to be flush with cash. Security leaders and executive advisors say security departments must continue to show that they’re delivering value for the dollars spent, maturing their operations, and, ultimately, improving their organization’s security posture. “Organizations know that risks are increasing every day, and as such, investments continue to pour into cybersecurity,” says Joe Nocera, leader of PwC’s Cyber & Privacy Innovation Institute. “We’re hearing from business leaders that they’d be willing to spend anything to not end up on the front page of a newspaper for a hack, but they don’t want to spend a penny more than is necessary and they want to make sure they’re spending their money in the right areas. That’s going to require the CEO and CISOs to work together. CISOs need to know what the right level of protection is.” Nocera adds: “Cyber investments are becoming less about having the latest products from tech vendors and more about first understanding where the business is most vulnerable, then prioritizing investments by how likely an attack will occur and how substantial that loss could be to the business.”


Why CISOs Shouldn’t Report to CIOs in the C-Suite

A very common complaint I hear from CISOs is that they do not receive the resources they need to secure their enterprises. While some companies understand how and where the CISO fits into the leadership structure, the majority do not. One individual that works for a local government told me he took a position as a CIO rather than a CISO because he “knew the CISO role was that of a fall guy.” He believes he was only offered the CISO position because the CIO wanted someone to blame if things went badly. This example clearly shows the conflict of interest that exists when a CISO reports to a CIO. One CISO working in the industrial market told me that there’s an “inherent tension between me and others that report to the CIO.” This frequently occurs due to the trade-off between security and efficiency, which impacts business units throughout an enterprise. When manufacturing wants to continue running a legacy system with outdated software and the CISO says no, this impacts revenue. 


Why Do We Need An Agile Finance Transformation

Embracing agility strategically and tactically while encouraging a fail-fast environment ensures teams have adaptable processes, collaborative mindsets, and a bias for continuous improvement. An agile finance function is prepared to provide assurance for financial results and contribute to strategic decisions in the face of evolving market conditions, the accelerated pace of change, and the introduction of unforeseeable circumstances. CFOs, controllers, finance and accounting professionals, and students alike are, therefore, encouraged to develop agile and scrum expertise to elevate individual, functional, and organizational performance, further strengthening the finance function’s value proposition for decades to come. Utilizing agile and scrum to redefine approaches to core activities like financial planning and analysis, internal audit, and financial close can position management accountants to better support the unprecedented number of transformation initiatives organizations embark upon today. Further, the agile finance function can realize elevated outcomes, maximized value, and expedited delivery, enabling their organizations to adapt to changing priorities with agility and data-backed insights.


Mozilla patches critical “BigSig” cryptographic bug: Here’s how to track it down and fix it

Many software vendors rely on third-party open source cryptographic tools, such as OpenSSL, or simply hook up with the cryptographic libraries built into the operating system itself, such as Microsoft’s Secure Channel on Windows or Apple’s Secure Transport on macOS and iOS. But Mozilla has always used its own cryptographic library, known as NSS, short for Network Security Services, instead of relying on third-party or system-level code. Ironically, this bug is exposed when affected applications set out to test the cryptographic veracity of digital signatures provided by the senders of content such as emails, PDF documents or web pages. In other words, the very act of protecting you, by checking up front whether a user or website you’re dealing with is an imposter …could, in theory, lead to you getting hacked by said user or website. As Ormandy shows in his bug report, it’s trivial to crash an application outright by exploiting this bug, and not significantly more difficult to perform what you might call a “controlled crash”, which can typically be wrangled into an RCE, short for remote code execution.


Zero Trust Shouldn’t Mean Zero Trust in Employees

An effective zero trust experience works for and empowers the employee. To them, everything feels the same — whether they're accessing their email, a billing platform, or the HR app. In the background, they don't have broad access to apps and data that they don't need. This comes down to building a well-defined and measurable "circle of trust" that is granted to an employee based on their role and team. With these guardrails in place, you're removing the friction and providing a good user experience while establishing more effective security. Security teams must be able to clearly and reliably enforce a trust boundary that's extended to employees based on what they need to get their jobs done. From there, zero trust is about building out those guardrails so that the trust boundary is maintained. No more, no less. Zero trust should be implemented across the entire HR life cycle, especially when staffing shortages and the Great Resignation have caused hiring and turnover fluctuations.


Understanding Black Box Testing - Types, Techniques, and Examples

To ensure that software quality is maintained and you do not lose customers because of a bad user experience, your application should go through rigorous scrutiny using suitable testing techniques. Black box testing is the easiest and fastest way to investigate software functionality without any coding knowledge. White box versus black box testing is an ever-present debate, and each approach stands out as a winner in its own right. Whether you want white box testing or black box testing depends upon how deep you want to get into the software structure under test. If you want to test the functionalities from an end-user perspective, black box testing fits the bill. And if you wish to direct your testing efforts toward how the software is built, its coding structure, and design, then white box testing works well. However, both aim to improve software quality in their own different ways. Many black box testing techniques are discussed above.


CIO priorities: 10 challenges to tackle in 2022

From robotic process automation to low-code technologies, there's a whole suite of tools that claim to make the application development process easier. However, automation should come with a warning: while these tools can lighten the day-to-day load for IT teams, someone somewhere must ensure that new applications meet stringent reliability and security standards. Increased automation will mean IT professionals spend more time engaging and overseeing, so focus on training and development to ensure your staff is ready for a shift in responsibility. With all the talk of automation and low-code development, it would be easy to assume that the traditional work of the IT department is done. Nothing could be further from the truth. Yes, the tech team is set to change, but talented developers – who work alongside their business peers – remain a valuable and highly prized commodity. To attract and retain IT staff, CIOs will need to think very hard about the opportunities they offer. Rather than being a place to go, work is going to become an activity you do in a collaborative manner, regardless of location. 


Cloud numbers don’t add up

The problem is aligning ambition with reality. It’s perhaps also a weirdness in the definition of “cloud native.” The Cloud Native Computing Foundation defines “cloud native” as enabling enterprises to “build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” There’s nothing particularly modern about a private cloud/data center. Scott Carey has described it thus: “Cloud native encompasses the various tools and techniques used by software developers today to build applications for the public cloud, as opposed to traditional architectures suited to an on-premises data center” (emphasis mine). If going cloud native simply means “doing what we’ve always done, but sprinkled with containers,” that’s not a very useful data point. “Cloud first,” however, arguably is. If we’re already at 47% of respondents saying they default to cloud (again, my assumption is that people weren’t thinking “my private data center” when answering a question about “cloud first”), then we have a real problem with measured spend on cloud computing from IDC, Gartner, and even the most wide-eyed of would-be analyst firms.


The Dark Web: a cyber crime bazaar where data is a hot commodity

Everyone is aware of the Dark Web’s reputation as a playground for cyber criminals who anonymously trade stolen data and partake in illegal activities. While in the past it required a degree of technical knowledge to transact on the Dark Web, in recent years the trading of malware and stolen data has become increasingly commoditised. As a result, marketplaces, hacker forums and ransomware group sites are proliferating. Bitglass recently conducted some research that shines some light on exactly how Dark Web activity, the value of stolen data, and cyber criminal behaviours have rapidly evolved in recent years. What we found should trigger alarm bells for enterprises that want to prevent their sensitive data from ending up on the Dark Web. Back in 2015, Bitglass conducted the world’s first data tracking experiment to identify exactly how data is viewed and accessed on the Dark Web. This year we re-ran the experiment and embellished it, posting fake account usernames, emails and passwords that would supposedly give access to high-profile social media, retail, gaming, crypto and pirated content networks acquired through well-known breaches.


Disaster preparedness: 3 key tactics for IT leaders

Once risks are identified and impacts are evaluated and scored, implement an appropriate risk response. This includes risk treatment options to accept the risk, mitigate the risk with new or existing controls, transfer the risk to third parties – often with insurance or risk sharing, or avoid the risk by ceasing the business activity related to it. A risk assessment can be coupled with a business impact analysis (BIA) that provides input into business continuity and disaster planning. A BIA identifies recovery time objectives (RTOs), recovery point objectives (RPOs), critical processes, dependence on critical systems, and many other areas. It gets to the 80/20 rule where rather than create costly recovery strategies for 100 percent of all critical business functions, you want to focus on the 20 percent of the business processes that are the most critical and need to be recovered quickly in a disaster event. Once a BIA is completed, organizations can determine their recovery strategies to maintain continuity of operations during a disaster. Business continuity plans should be based on the BIA and updated at least every year.



Quote for the day:

"Tact is the ability to make a person see lightning without letting him feel the bolt." -- Orlando A. Battista

Daily Tech Digest - December 21, 2021

Everyone likes to talk sustainability, but who takes responsibility?

It’s no longer enough to rely on a small pool of employees to drive, inform and implement widespread change. Meeting these ambitious targets will only be possible with a top-down commitment to change alongside a groundswell of employee support. Ultimately, responsibility for change needs to fall under the remit of the entire workforce, not just one individual. At the heart of this is ensuring that the sustainability function is not siloed from the rest of the business, acting as its own separate entity with different KPIs or activations. To be successful, it needs to permeate the wider business and encourage others to embrace a ‘sustainability by design’ mindset through new policy, direction and solutions. At first, this might mean giving meetings a dedicated sustainability champion, whether the CEO, the Chief Sustainability Officer (as outlined in the research above), or another individual, who knowledge-shares, coaches other employees and ensures the business is on track against its targets.


Log4j: Belgian Defense Ministry Reports it Was 'Paralyzed'

The ministry told the Belgian newspaper that the cyberattack stemmed from a vulnerability in Apache's Log4j, which provides logging capabilities for Java applications and is embedded in a vast range of software, including many Apache projects. Belgian Commander Olivier Séverin also told the outlet, "All weekend our teams have been mobilized to control the problem, continue our activities and warn our partners." Taking to Facebook in the wake of this recent attack, the Ministry of Defense writes, "Due to technical issues, we are unable to process your requests via mil.be or answer your queries via Facebook. We are working on a resolution and we thank you for your understanding." Representatives for both the ministry and Defense Minister Ludivine Dedonder did not respond to Information Security Media Group's request for comment. Belgian officials also did not elaborate on the attack's specifics with De Standaard. The Belgian incident is one of the first high-profile attacks stemming from the Log4j vulnerability, although cybersecurity experts have warned of active scanning and exploitation of the remote code execution flaw.
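
For context, the canonical exploit attempt embeds a ${jndi:...} lookup string in any field an application might log. The naive scan below illustrates what defenders are searching their logs for; real attackers obfuscate the string heavily, so this catches only the crudest probes, and the log file name is a placeholder, not a recommendation.

# Naive scan of a web server log for the classic Log4Shell probe string.
# Attackers obfuscate it (e.g. ${${lower:j}ndi:...}), so this only catches
# the most basic scans; it is no substitute for patching Log4j itself.
import re

JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def suspicious_lines(log_path: str):
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if JNDI_PATTERN.search(line):
                yield lineno, line.strip()

for lineno, line in suspicious_lines("access.log"):  # hypothetical log file
    print(f"possible Log4Shell probe at line {lineno}: {line[:120]}")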


Use of blockchain technology could increase human trust in AI

With advancements in technology, trust has become a vital factor in human-technology interactions. In the past, people trusted technology mainly because it worked as expected. With the emergence of Artificial Intelligence solutions, however, that is no longer enough, due to the following challenges: Openness: AI-based applications are built to be adaptive and reactive, to have an intelligence of their own to respond to situations. Anyone can put this to good use or apply it for nefarious purposes, so people have some reservations about trusting AI-based solutions. Transparency: One of the significant issues impacting human trust in AI applications is the lack of transparency. AI developers need to clarify the extent of personal data utilized, and the benefits and risks of using the application, to increase trust. Privacy: AI has made data collection and analysis much easier; however, end users bear the brunt, as the collection of humongous amounts of data by companies worldwide can jeopardize the privacy of the users whose data is collected.
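
To make the blockchain idea concrete, here is a toy hash chain (not a real blockchain network; all identifiers are invented) showing how an append-only, tamper-evident log of model decisions could support auditability: altering any past record invalidates every subsequent hash.

# Toy hash chain illustrating tamper-evident AI decision logging. Each
# record is chained to the previous one, so editing any past entry breaks
# verification of everything after it.
import hashlib
import json
import time

chain = []

def record_decision(model_id: str, input_summary: str, output: str) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "model_id": model_id,
        "input_summary": input_summary,   # e.g. a hash of the raw input
        "output": output,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify_chain() -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

record_decision("credit-model-v2", "applicant-7f3a", "approved")
record_decision("credit-model-v2", "applicant-91bc", "declined")
print("audit trail intact:", verify_chain())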


Shifting security further left: DevSecOps becoming SecDevOps

With the rising cost and complexity of modern software development practices, businesses will increasingly require a comprehensive, fully integrated security platform with fewer disparate tools. Such a platform supports pervasive, or continuous, security because it: Starts in the design phase with threat modeling, ensuring that only secure components are incorporated into the design. This shifts security even further left, so DevSecOps becomes SecDevOps and software is ‘secure by design’. Is fully integrated, but also open to new technology plugins, to provide comprehensive coverage analyzing every possible dimension of the code. This ‘single pane of glass’ approach empowers security professionals and developers to understand risk, prioritize remediation efforts, and define and monitor progress objectives across multiple dimensions. Delivers a frictionless developer experience that enables security analysis to meet developers where they work: within the IDE, CI/CD pipelines, code and container repositories, and defect tracking systems.
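
As a rough illustration of the ‘single pane of glass’ idea at the CI/CD stage, the sketch below merges findings from several hypothetical scanner reports and fails the pipeline above a severity threshold. The report format, file names, and blocking policy are all assumptions, not any particular vendor’s API.

# Hypothetical CI gate: aggregate findings from multiple scanner reports and
# fail the pipeline if anything at or above the chosen severity remains.
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_AT = "high"   # pipeline policy: block on high or critical findings

def load_findings(report_paths: list[str]) -> list[dict]:
    findings = []
    for path in report_paths:
        with open(path, encoding="utf-8") as f:
            # assumed report format: a list of {"id", "severity", "component"}
            findings.extend(json.load(f))
    return findings

def gate(findings: list[dict]) -> int:
    threshold = SEVERITY_RANK[FAIL_AT]
    blockers = [f for f in findings if SEVERITY_RANK.get(f["severity"], 0) >= threshold]
    for f in sorted(blockers, key=lambda f: -SEVERITY_RANK[f["severity"]]):
        print(f"[{f['severity'].upper()}] {f['id']} in {f['component']}")
    return 1 if blockers else 0   # nonzero exit code fails the build

if __name__ == "__main__":
    reports = ["sast-report.json", "sca-report.json", "container-scan.json"]  # placeholders
    sys.exit(gate(load_findings(reports)))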


DeepMind’s New AI With a Memory Outperforms Algorithms 25 Times Its Size

Bigger is better—or at least that’s been the attitude of those designing AI language models in recent years. But now DeepMind is questioning this rationale, and says giving an AI a memory can help it compete with models 25 times its size. When OpenAI released its GPT-3 model in June 2020, it rewrote the rulebook for language AIs. The lab’s researchers showed that simply scaling up the size of a neural network and the data it was trained on could significantly boost performance on a wide variety of language tasks. Since then, a host of other tech companies have jumped on the bandwagon, developing their own large language models and achieving similar boosts in performance. But despite the successes, concerns have been raised about the approach, most notably by former Google researcher Timnit Gebru. In the paper that led to her being forced out of the company, Gebru and colleagues highlighted that the sheer size of these models and their datasets makes them even more inscrutable than the average neural network, which is already notorious for being a black box.
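
Mechanically, giving a language model a “memory” of this kind means retrieval augmentation: at inference time the model looks up the most relevant chunks of text in a large database and conditions on them alongside the prompt. The toy sketch below substitutes bag-of-words cosine similarity for the learned embeddings and massive database a production system would use; the documents and query are invented.

# Toy retrieval-augmentation sketch: find the nearest stored text chunk for
# a query and prepend it to the prompt as extra context for the model.
import math
from collections import Counter

database = [
    "The Eiffel Tower is in Paris and opened in 1889.",
    "Photosynthesis converts sunlight into chemical energy.",
    "GPT-3 is a 175-billion-parameter language model.",
]

def embed(text: str) -> Counter:
    # stand-in for a learned embedding: simple bag-of-words counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(database, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

query = "How many parameters does GPT-3 have?"
context = retrieve(query)[0]
prompt = f"Context: {context}\nQuestion: {query}"
print(prompt)  # the augmented prompt the language model would see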


5 rules for getting data architecture right

A number of cloud experts suggest that centralizing your application data is the right model for managing a large dataset for a large application. Centralizing your data, they argue, makes it easier to apply machine learning and other advanced analytics to get more useful information out of your data. But this strategy is faulty. Centralized data is data that can’t scale easily. The most effective way to scale your data is to decentralize it and store it within the individual service that owns the data. Your application, if composed of dozens or hundreds of distributed services, will store your data in dozens or hundreds of distributed locations. This model enables easier scaling and supports a full service ownership model. Service ownership enables development teams to work more independently, and encourages more robust SLAs between services. This fosters higher-quality services and makes data changes safer and more efficient through localization.
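
As a minimal sketch of what service-owned data looks like in practice (service names and APIs are invented): each service encapsulates its own store, and other services reach that data only through its API, never through a shared database.

# Service-ownership sketch: each service holds a private store; cross-service
# access goes through the owning service's API, not its database.

class OrderService:
    def __init__(self):
        self._orders = {}          # private store; could be its own database

    def place_order(self, order_id: str, customer_id: str, total: float):
        self._orders[order_id] = {"customer_id": customer_id, "total": total}

    def get_order(self, order_id: str) -> dict:
        return dict(self._orders[order_id])   # hands out a copy, never the store

class BillingService:
    def __init__(self, orders: OrderService):
        self._invoices = {}        # separate private store, scaled independently
        self._orders = orders      # depends on the API, not the database

    def invoice(self, order_id: str):
        order = self._orders.get_order(order_id)   # cross-service call
        self._invoices[order_id] = order["total"]

orders = OrderService()
orders.place_order("o-1", "c-42", 99.95)
BillingService(orders).invoice("o-1")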


CISA Compliance for 2022

The fact that the Federal Government is suddenly placing such a high priority on cybersecurity is telling, and the directive is worth paying attention to, even for private sector organizations. If federal agencies shore up their cyber defenses in accordance with the new directive, then at least some cybercriminals will likely turn their attention toward attacking private sector targets. After all, it is likely that some of the known vulnerabilities will continue to exist in private companies, even after those vulnerabilities have been addressed on systems belonging to the federal government. With the end of the year rapidly approaching, IT professionals should put cybersecurity at the top of their New Year's resolutions. But what specifically should IT pros be doing to prepare for 2022? CISA differentiates between known vulnerabilities and vulnerabilities that are known to have been exploited. Likewise, IT pros in the private sector should focus their efforts and their security resources on addressing vulnerabilities that have been exploited in the real world.
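
One practical way to act on that distinction is to triage scanner findings against CISA’s Known Exploited Vulnerabilities (KEV) catalog, which is published as a JSON feed. The sketch below assumes the feed URL and field names shown still match the published catalog (verify both before use); the scanner output is invented.

# Triage against CISA's KEV catalog: patch the overlap between your own
# CVE findings and vulnerabilities known to be exploited in the wild first.
import json
import urllib.request

# Feed location at the time of writing; confirm against cisa.gov before use.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def known_exploited() -> set[str]:
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {v["cveID"] for v in catalog["vulnerabilities"]}

# Hypothetical output of your vulnerability scanner:
our_findings = {"CVE-2021-44228", "CVE-2021-34473", "CVE-2020-0601", "CVE-2019-19781"}

urgent = our_findings & known_exploited()
print("Patch these first (exploited in the wild):", sorted(urgent))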


Major Algorithmic Breakthroughs Of 2021

In a major breakthrough, scientists have discovered an entirely different form of biological reproduction and applied it to create the first-ever self-replicating living robots. This research was conducted by scientists at the University of Vermont, the Wyss Institute for Biologically Inspired Engineering at Harvard University, and Tufts University. The team had created “Xenobots” last year and discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather them together and assemble “baby” Xenobots in their mouths. After a few days, these become new Xenobots that look and move just like themselves. ... 2021 has been a transformative year for large language models, with all the major names in tech bringing in path-breaking new systems. Just days back, DeepMind introduced a 280-billion-parameter transformer language model called Gopher. DeepMind’s research went on to say that Gopher almost halves the accuracy gap from GPT-3 to human expert performance and exceeds forecaster expectations.


Hybrid work model: 4 tips for teams in 2022

Use milestones and deadlines to gauge your team’s progress instead of tracking time. One challenge of remote work is “appearing” to be productive and present to the management team. However, measurement should not be seen as a punitive exercise to catch people out – it should guide employees toward completing their goals. Most workers don’t work the entire eight hours they’re in the office either, as they’re often engaging in spontaneous meetings and meaningful moments of connection with colleagues. Managers should disregard time as a measure of productivity and trust their employees to do their jobs to the best of their ability. If goals are being met but employees feel distant because they don’t need to collaborate as much, or feel they need to “appear busy,” then the goals are too easy and need to be readjusted. Be careful to keep engagement and communication high – otherwise, you can end up with the “watermelon effect”: good “green” performance on the surface, but underneath a big chunk of red representing a poor employee experience.


The CEO’s Playbook for a Successful Digital Transformation

A crucial characteristic of successful digital CEOs is that they can step back far enough from their current business to reimagine where transformative — not incremental — value is possible. We find that these CEOs spend a lot of time visiting companies and staying abreast of how new trends are generating value. That helps them to look at their own assets with fresh eyes and see where there’s new value. Steve Timm, the president of Collins Aerospace, finds transformative value in being able to thoughtfully reimagine the business model. “Many CEOs have domain experience and they don’t want to get outside of that,” he told us during an interview. “They’re not thinking about redefining the broader architecture or ecosystem. We need to redefine the boundaries where value can come from.” With clarity on the business model established, targeting a domain — for example, a complete core process or user journey — has emerged as a critical element for focusing energies in a digital transformation. 



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray