Daily Tech Digest - May 31, 2023

5 best practices for software development partnerships

“The key to successful co-creation is ensuring your partner is not just doing their job, but acting as a true strategic asset and advisor in support of your company’s bottom line,” says Mark Bishopp, head of embedded payments/finance and partnerships at Fortis. “This begins with asking probing questions during the prospective stage to ensure they truly understand, through years of experience on both sides of the table, the unique nuances of the industries you’re working in.” Beyond asking questions about skills and capabilities, evaluate the partner’s mindset, risk tolerance, approach to quality, and other areas that require alignment with your organization’s business practices and culture. ... To eradicate the us-versus-them mentality, consider shifting to more open, feedback-driven, and transparent practices wherever feasible and compliant. Share information on performance issues and outages, have everyone participate in retrospectives, review customer complaints openly, and disclose the most challenging data quality issues.


Revolutionizing Algorithmic Trading: The Power of Reinforcement Learning

The fundamental components of a reinforcement learning system are the agent, the environment, states, actions, and rewards. The agent is the decision-maker, the environment is what the agent interacts with, states are the situations the agent finds itself in, actions are what the agent can do, and rewards are the feedback the agent gets after taking an action. One key concept in reinforcement learning is the idea of exploration vs exploitation. The agent needs to balance between exploring the environment to find out new information and exploiting the knowledge it already has to maximize the rewards. This is known as the exploration-exploitation tradeoff. Another important aspect of reinforcement learning is the concept of a policy. A policy is a strategy that the agent follows while deciding on an action from a particular state. The goal of reinforcement learning is to find the optimal policy, which maximizes the expected cumulative reward over time. Reinforcement learning has been successfully applied in various fields, from game playing (like the famous AlphaGo) to robotics (for teaching robots new tasks).
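
As a concrete illustration of these components, here is a minimal epsilon-greedy sketch on a toy three-armed bandit; the reward probabilities, epsilon value, and step count are illustrative assumptions, not anything from the article.

    import random

    # Toy "environment": three actions with hidden reward probabilities.
    REWARD_PROBS = [0.2, 0.5, 0.8]  # assumed values for illustration

    def pull(action):
        """Reward of 1 with the action's hidden probability, else 0."""
        return 1.0 if random.random() < REWARD_PROBS[action] else 0.0

    q = [0.0, 0.0, 0.0]   # the agent's value estimate for each action
    counts = [0, 0, 0]
    epsilon = 0.1          # fraction of steps spent exploring

    for step in range(10_000):
        if random.random() < epsilon:
            action = random.randrange(3)   # explore: gather new information
        else:
            action = q.index(max(q))       # exploit: use current knowledge
        reward = pull(action)
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        q[action] += (reward - q[action]) / counts[action]

    print(q)  # estimates approach [0.2, 0.5, 0.8]; the greedy policy picks action 2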


Data Governance Roles and Responsibilities

Executive-level roles comprise the C-suite leadership at the top of the organization. According to Seiner, people at the executive level support, sponsor, and understand Data Governance and determine its overall success and traction. Typically, these managers meet periodically as part of a steering committee to cover broadly what is happening in the organization, so they would add Data Governance as a line item, suggested Seiner. These senior managers take responsibility for understanding and supporting Data Governance. They keep up to date on Data Governance progress through direct reports and communications from those at the strategic level. ... According to Seiner, strategic members take responsibility for learning about Data Governance, reporting to the executive level about the program, being aware of Data Governance activities and initiatives, and attending meetings or sending alternates. Moreover, this group has the power to make timely decisions about Data Governance policies and how to enact them.


Effective Test Automation Approaches for Modern CI/CD Pipelines

Design is not just about unit tests though. One of the biggest barriers to test automation executing directly in the pipeline is that the team that deals with the larger integrated system only starts much of its testing and automation effort once the code has been deployed into a bigger environment. This wastes critical time in the development process, as certain issues will only be discovered later. In most cases there should be enough detail available to allow testers to at least start writing the majority of their automated tests while the developers are coding on their side. This doesn’t mean that manual verification, exploratory testing, and actually using the software shouldn’t take place. Those are critical parts of any testing process and are important steps to ensuring software behaves as desired. These approaches are also effective at finding faults with the proposed design. However, automating the integration tests allows the process to be streamlined.
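
As a sketch of what writing automated tests while developers are still coding can look like, here is a hypothetical pytest integration test written from an agreed API contract; the /orders endpoint, its fields, and the STAGING_URL variable are invented for illustration, not taken from the article.

    import os
    import pytest

    requests = pytest.importorskip("requests")  # run only where requests is available

    # The pipeline sets this once a build has been deployed to staging.
    BASE_URL = os.environ.get("STAGING_URL")

    @pytest.mark.skipif(not BASE_URL, reason="no staging environment deployed yet")
    def test_create_order_returns_id():
        # Written from the agreed contract before the feature ships; it starts
        # passing in the pipeline as soon as the endpoint goes live.
        resp = requests.post(f"{BASE_URL}/orders",
                             json={"item": "widget", "qty": 2}, timeout=10)
        assert resp.status_code == 201
        assert "order_id" in resp.json()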


What Does Being a Cross-Functional Team in Scrum Mean?

By bringing together individuals with different skills and perspectives, these teams promote innovation, problem-solving, and a holistic approach to project delivery. They reduce handoffs, bottlenecks, and communication barriers often plaguing traditional development models. Moreover, cross-functional teams enable faster feedback cycles and facilitate continuous improvement. With all the necessary skills in one team, there's no need to wait for handoffs or external dependencies. This enables quicker decision-making, faster iterations, and the ability to respond to customer feedback promptly. In short, being a cross-functional Scrum Team means having a group of individuals with diverse skills, a shared sense of responsibility, and a collaborative mindset. They work together autonomously, leveraging their varied expertise to deliver high-quality software increments. ... Building genuinely cross-functional Scrum Teams starts with product definition. This means identifying and understanding the scope, requirements, and goals of the product the team will work on. 


The strategic importance of digital trust for modern businesses

Modern software development processes, like DevOps, are highly automated. An engineer clicks a button that triggers a sequence of complicated, but automated, steps. If a part of this sequence (e.g., code signing) is manual, there is a real risk that the step will be missed, precisely because everything else is automated. Mistakes like using the wrong certificate or the wrong command line options can happen. However, the biggest danger is often that the developer will store private code signing keys in a convenient location (like their laptop or build server) instead of a secure location. Key theft, misused keys, server breaches, and other insecure processes can permit code with malware to be signed and distributed as trusted software. Companies need a secure, enterprise-level code signing solution that integrates with the CI/CD pipeline and automated DevOps workflows but also provides key protection and code signing policy enforcement.
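
A minimal sketch of what an automated, pipeline-friendly signing step might look like, assuming an RSA key fetched from a managed secret store rather than a developer laptop; the environment variable and file paths are illustrative assumptions, and a real deployment would ideally sign inside an HSM so the key never leaves the device.

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def fetch_signing_key_pem() -> bytes:
        # Placeholder: a real pipeline would call its vault/KMS API here
        # instead of reading any local file checked into a laptop or repo.
        with open(os.environ["SIGNING_KEY_PEM_PATH"], "rb") as f:
            return f.read()

    def sign_artifact(artifact_path: str, sig_path: str) -> None:
        key = serialization.load_pem_private_key(fetch_signing_key_pem(), password=None)
        with open(artifact_path, "rb") as f:
            data = f.read()
        # Assumes an RSA key; other key types take different sign() arguments.
        signature = key.sign(data, padding.PKCS1v15(), hashes.SHA256())
        with open(sig_path, "wb") as f:
            f.write(signature)

    # Pipeline step, e.g.: sign_artifact("dist/app.tar.gz", "dist/app.tar.gz.sig")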


Managing IT right starts with rightsizing IT for value

IT financial management — sometimes called FinOps — is overlooked in many organizations. A surprising number of organizations do not have a very good handle on the IT resources being used. Another way of saying this is: Executives do not know what IT they are spending money on. CIOs need to make IT spend totally transparent. Executives need to know what the labor costs are, what the application costs are, and what the hardware and software costs are that support those applications. The organization needs to know everything that runs — every day, every month, every year. IT resources need to be matched to business units. IT and the business unit need to have frank discussions about how important that IT resource really is to them — is it Tier One? Tier Two? Tier Thirty? In the data management space — same story. Organizations have too much data. Stop paying to store data you don’t need and don’t use. Atle Skjekkeland, CEO at Norway-based Infotechtion, and John Chickering, former C-level executive at Fidelity, both insist that organizations “define their priority data, figure out what it is, protect it, and get rid of the rest.”


Implementing Risk-Based Vulnerability Discovery and Remediation

A risk-based vulnerability management program is a complex preventative approach used for swiftly detecting and ranking vulnerabilities based on their potential threat to a business. By implementing a risk-based vulnerability management approach, organizations can improve their security posture and reduce the likelihood of data breaches and other security events. ... Organizations should still have a methodology for testing and validating that patches and upgrades have been appropriately implemented and would not cause unanticipated flaws or compatibility concerns that might harm their operations. Also, remember that there is no "silver bullet": automated vulnerability management can help identify and prioritize vulnerabilities, making it easier to direct resources where they are most needed. ... Streamlining your patching management is another crucial part of your security posture: an automated patch management system is a powerful tool that may assist businesses in swiftly and effectively applying essential security fixes to their systems and software.
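
One way to picture risk-based prioritization is a scoring pass over findings that blends severity, observed exploitation, and asset criticality; the weighting formula and sample data below are invented for illustration, not a standard.

    # Hypothetical findings: severity (CVSS), whether exploitation has been
    # observed in the wild, and how critical the affected asset is (1-3).
    findings = [
        {"id": "finding-1", "cvss": 9.8, "exploited_in_wild": True,  "asset_criticality": 3},
        {"id": "finding-2", "cvss": 7.5, "exploited_in_wild": False, "asset_criticality": 1},
        {"id": "finding-3", "cvss": 5.3, "exploited_in_wild": True,  "asset_criticality": 2},
    ]

    def risk_score(f):
        # Severity scaled by asset criticality, boosted when exploitation is observed.
        return f["cvss"] * f["asset_criticality"] * (2.0 if f["exploited_in_wild"] else 1.0)

    # Remediate in descending risk order, directing resources where most needed.
    for f in sorted(findings, key=risk_score, reverse=True):
        print(f["id"], round(risk_score(f), 1))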


Upskilling the non-technical: finding cyber certification and training for internal hires

“If you are moving people into technical security from other parts of the organization, look at the delta between the employee's transferable skills and the job they’d be moving into. For example, if you need a product security person, you could upskill a product engineer or product manager because they know how the product works but may be missing the security mindset,” she says. “It’s important to identify those who are ready for a new challenge, identify their transferable skills, and create career paths to retain and advance your best people instead of hiring from outside.” ... While upskilling and certifying existing employees would help the organization retain talented people who already know the company, Deidre Diamond, founding CEO of cyber talent search company CyberSN, cautions against moving skilled workers to entry-level roles in security that don’t pay what the employees are used to earning. Upskilling financial analysts into compliance, either as cyber risk analysts or GRC analysts, will require higher-level certifications, but the pay for those upskilled positions may be more equitable for those higher-paid employees, she adds.


Data Engineering in Microsoft Fabric: An Overview

Fabric makes it quick and easy to connect to Azure Data Services, as well as other cloud-based platforms and on-premises data sources, for streamlined data ingestion. You can quickly build insights for your organization using more than 200 native connectors. These connectors are integrated into the Fabric pipeline and utilize the user-friendly drag-and-drop data transformation with dataflow. Fabric standardizes on the Delta Lake format, which means all the Fabric engines can access and manipulate the same dataset stored in OneLake without duplicating data. This storage system provides the flexibility to build lakehouses using a medallion architecture or a data mesh, depending on your organizational requirements. For data transformation, you can choose a low-code or no-code experience with pipelines/dataflows, or a code-first experience with notebooks/Spark. Power BI can consume data from the Lakehouse for reporting and visualization. Each Lakehouse has a built-in TDS/SQL endpoint for easy connectivity and querying of data in the Lakehouse tables from other reporting tools.
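
For the code-first path, here is a sketch of what a notebook cell might look like; Fabric notebooks preconfigure a spark session, and the table and column names are illustrative assumptions.

    # Runs in a Fabric notebook, where `spark` is preconfigured.
    df = spark.createDataFrame(
        [(1, "sensor-a", 21.5), (2, "sensor-b", 19.8)],
        ["id", "device", "temp_c"],
    )

    # Saving as a Lakehouse table stores it in OneLake in Delta format, so SQL
    # queries, pipelines, and Power BI can all read the same single copy.
    df.write.format("delta").mode("overwrite").saveAsTable("readings")

    spark.sql("SELECT device, AVG(temp_c) FROM readings GROUP BY device").show()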



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - May 18, 2023

Security breaches push digital trust to the fore

Digital trust needs to be integrated within the organization and isn’t necessarily owned by a single department or job title. Even so, cybersecurity, and the CISO, have an important role to play, according to the World Economic Forum’s 2022 Earning Digital Trust report, in protecting the interconnectivity that supports businesses, people’s livelihoods, and society in general as reliance on digital interactions grows. As governments and regulators implement stricter requirements for ensuring data privacy and security, CISOs face a renewed need to prioritize digital trust or risk fines, lawsuits, significant brand damage, and revenue loss to the organization. Thomas suggests that for CISOs, digital trust could become the measurable metric and outcome of security initiatives. “Organizations are not only secure to be compliant and protect information. The outcome of this is the trust that customers have, and that is what's going to change the way we measure how well security is being implemented,” he says. “If you want to ensure your customers trust you, you need to look at it as an organizational goal, or have it as a part of the strategy. ...”


Preparing the Mindset for Change: Five Roadblocks That Lead Digital Transformation to Failure

The absence of effective advocacy may have significantly contributed to the failure of many digital transformation efforts. However, it is the responsibility of the stakeholders to be the advocates of the change. The goal to change cannot be just a business decision; it needs to be believed in. In generational businesses, the founders are often married to legacy processes and find it difficult to break the norm and adopt automation, even as disparate systems restrict growth and scale. ... A lack of strategic planning before and after implementation can lead to severe consequences for an organization. Conflicting priorities can arise, and critical objectives may not be effectively communicated or achieved due to a disconnect between business and technology plans.
Unfortunately, many organizations fail to recognize the importance of pre- and post-implementation planning and instead focus solely on the implementation process. This shortsighted approach can lead to poor customer and stakeholder engagement, as well as employee dissatisfaction.


Don't overlook attack surface management

Let’s look at three aspects of ASM that you should consider today: ... Visibility and discovery. Attack surface management should provide a comprehensive view of the cloud environment, allowing organizations to identify potential security weaknesses and blind spots. It helps uncover unknown assets, unauthorized services, and overlooked configurations, offering a clearer picture of potential entry points for attackers. ... Risk assessment and prioritization. By understanding the scope and impact of vulnerabilities, organizations can assess the associated risks and prioritize them. Attack surface management empowers businesses to allocate resources efficiently, focusing on high-risk areas that could have severe consequences if compromised. ... Remediation and incident response. When vulnerabilities are detected, ASM provides the necessary insights to remediate them promptly. It facilitates incident response by helping organizations take immediate action, such as applying patches, updating configurations, or isolating compromised resources.


One on One with Automated Software Testing Expert Phil Japikse

A common misconception is that creating automated testing increases the delivery time. There was a study done at Microsoft some years ago that looked at different teams. Some were using a test-first strategy, some were using a test-eventual strategy, and some groups were using traditional QA departments for their testing. Although the cycle time was slightly higher for those doing automated testing, the throughput was much higher. This was because the quality of their work was much higher, and they had much less rework. We all know it’s more interesting to work on new features and tedious and boring to fix bugs. If you aren’t including at least some automated testing in your development process, you are going to spend more time fixing bugs and less time building new features. ... The more complex or important the system is, the more testing it needs. Software that controls airplanes, for example, must be extremely well tested. One could argue that game software doesn’t need as much testing. It all depends on the business requirements for the application.


The Work Habits That Are Blocking Your Ideas, Dreams and Breakthrough Success

A reactive mind prevents us from responding productively to the moment. Any time we are reactive, because we are not effectively relating to ourselves in the moment, we cannot be present with others. Those who have been tasked with carrying out our objectives can sense our lack of clarity and misalignment. They may perceive us as "confused," for instance, and then our reactivity triggers their self-protective belief structures. Miscommunication becomes the norm when a reactive individual is leading a team. ... Your colleague's negativity is not only self-destructive; it is also destructive to the organization and the morale of their co-workers. But your own disconnection from the truth of the moment is also destructive. By prejudging a colleague, you are missing out on the opportunity to positively interact with them or influence their behavior, and both of these things matter. A healthy yet skeptical outlook is helpful. Would you want a contract written by your lawyer that only foresaw favorable outcomes? The invitation is to transform negativity into a healthy dynamic so that co-creativity and joy are both possible. You need to be open to the possibilities that each of us possesses.


Dialectic Thinking: The Secret to Exceptional Mindful Leadership

The paradox of acceptance and change may very well be the toughest one we grapple with. Whether in our own meditation practice and self-development or in leading an organization, it’s vital to take a dialectic approach. For genuine change to occur, there must first be acceptance of the current state. This acceptance forms the bedrock of reality, a foundation that is crucial for creating meaningful change. It's a truth that can't be obscured or sugarcoated. With acceptance, there's an opportunity to see things as they are and then to envisage something different. However, we can often misconstrue acceptance as passivity or complacency. It can be seen as an excuse to “do nothing”, to shy away from bold action, or to remain comfortably entrenched in the status quo. On the flip side, a relentless push for change can create a sense of perpetual dissatisfaction, hindering our ability to appreciate what already is. This can also foster a short-term, transactional mindset, particularly in relationships.


How to explain data meshes, fabrics, and clouds

“A data mesh is a decentralized approach to managing data, where multiple teams within a company are responsible for their own data, promoting collaboration and flexibility,” he said. There are no complex words in this definition, and it introduces the problems data meshes aim to solve, the type of solution, and why it’s important. Expect to be asked for more technical details, though, especially if the executive has prior knowledge of other data management technologies. For example, “Weren't data warehouses and data lakes supposed to solve the data management issue?” This question can be a trap if you answer it with the technical differences between data warehouses, lakes, and meshes. Instead, focus your response on the business objective. Satish Jayanthi, co-founder and CTO of Coalesce, offers this suggestion: “Data quality often affects the accuracy of business analytics and decision-making. By implementing data mesh paradigms, the quality and accuracy of data can be enhanced, resulting in increased trust among businesses to utilize data more extensively for informed decision-making.”


Has the Cloud Forever Changed Disaster Recovery?

For today’s organisations, resilience is paramount to a successful data protection plan, mentioned Lawrence Yeo, Enterprise Solutions Director, ASEAN, Hitachi Vantara. Being resilient entails having the flexibility to quickly restore data and applications to both existing and new cloud accounts. We believe that traditional backup and disaster recovery systems focused on data centres are becoming outdated. Instead, we need a data protection strategy that prioritises IT resilience and can protect data anywhere, including public clouds and SaaS applications. Resilience is the key to a robust data protection strategy as a slow disaster recovery or data restoration can negatively impact business processes. To be resilient, you need a data protection solution that encompasses backup and disaster recovery across on-premises and public clouds, allowing you to restore data and applications quickly, either to existing or new cloud accounts.


IoT Sensors - Sensing the danger

How can an operator establish integrity and accuracy within a sensor and mitigate potential vulnerabilities? This is where Root of Trust (RoT) hardware plays a crucial role. Hardware such as a Device Identifier Composition Engine (DICE) can supply a unique security key to each firmware layer found in a sensor or connected device. ... Should an attack on your systems be successful, and a layer become exposed, the unique key accessed by a hacker cannot be used to breach further elements. This can help reduce the risk of a significant data breach and enables operators to trust the devices they utilise in a network. A device can also easily be re-keyed should any unauthorised amendments be discovered within the sensor’s firmware, enabling users to quickly identify vulnerabilities throughout the system’s update process. For organisations with smaller devices and an even smaller budget, specifications such as the Measurement and Attestation Roots (MARS) can be deployed to instil the necessary capabilities of identity, measurement storage, and reporting in a more cost-effective manner.
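
A simplified sketch of the layered key-derivation idea behind DICE follows (the real mechanism is defined by the TCG specification and runs in hardware); the secrets and firmware images below are stand-in byte strings for illustration.

    import hashlib
    import hmac

    def derive_layer_secret(parent_secret: bytes, firmware_image: bytes) -> bytes:
        # Each layer's secret mixes the parent secret with a measurement (hash)
        # of the next layer's firmware, so exposing one layer's key does not
        # reveal its parents' keys.
        measurement = hashlib.sha256(firmware_image).digest()
        return hmac.new(parent_secret, measurement, hashlib.sha256).digest()

    uds = b"unique-device-secret"  # provisioned into the device at manufacture
    l0 = derive_layer_secret(uds, b"boot loader image")
    l1 = derive_layer_secret(l0, b"application firmware image")

    # Re-keying after an update happens naturally: a changed image changes the
    # measurement, so every key derived below it changes too.
    l1_patched = derive_layer_secret(l0, b"application firmware image v2")
    assert l1 != l1_patched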


Data hoarding is bad for business and the environment

The findings suggest young consumers are unaware of the impact of their own carbon footprint. From the report, 44% said it’s wrong for businesses to waste energy and cause pollution by storing unneeded information online. ... The fallout? The Veritas study found that 47% of consumers would stop buying from a company if they knew it was willfully causing environmental damage by failing to control how much unnecessary data it was storing. Meanwhile, 49% of consumers think it’s the responsibility of the organizations that store their information to delete it when it’s no longer needed, the report said. ... It is incumbent upon leaders to pay attention to this issue. Srinivasan cautioned that organizations should not underestimate the environmental impact of poor data management practices – even if they are outsourcing their storage to public cloud providers. Some good data management practices would be to make consumers aware of the costs of all this data, especially the negative externalities on our overheating planet.



Quote for the day:

"Management is about arranging and telling. Leadership is about nurturing and enhancing." -- Tom Peters

Daily Tech Digest - May 16, 2023

Law enforcement crackdowns and new techniques are forcing cybercriminals to pivot

Because of stepped-up law enforcement efforts, cybercriminals are also facing a crisis in cashing out their cryptocurrencies, with only a handful of laundering vehicles in place due to actions against crypto-mixers who help obfuscate the money trail. "Eventually, they'll have to cash out to pay for their office space in St. Petersburg and their Lambos. So, they're going to need to find an exchange," Burns Coven said. Cybercriminals are just sitting on their money, like stuffing money under the mattress. "It's been a tumultuous two years for the threat actors," she said. "A lot of law enforcement takedowns, challenging operational environments, and harder to get funds. And we're seeing this sophisticated laundering technique called absolutely nothing doing, just sitting on it." Despite the rising number of challenges, "I don't think there's a mass exodus of threat actors from ransomware," Burns Coven tells CSO, saying they are shifting tactics rather than exiting the business altogether.


5 IT management practices certain to kill IT productivity

Holding people accountable is root cause analysis predicated on the assumption that if something goes wrong it must be someone’s fault. It’s a flawed assumption because most often, when something goes wrong, it’s the result of bad systems and processes, not someone screwing up. When a manager holds someone accountable they’re really just blame-shifting. Managers are, after all, accountable for their organization’s systems and processes, aren’t they? Second problem: If you hold people accountable when something goes wrong, they’ll do their best to conceal the problem from you. And the longer nobody deals with a problem, the worse it gets. One more: If you hold people accountable whenever something doesn’t work, they’re unlikely to take any risks, because why would they? Why it’s a temptation: Finding someone to blame is, compared to serious root cause analysis, easy, and fixing the “problem” is, compared to improving systems and practices, child’s play. As someone once said, hard work pays off sometime in the indefinite future, but laziness pays off right now.


How AI ethics is coming to the fore with generative AI

The discussion of AI ethics often starts with a set of principles guiding the moral use of AI, which is then applied in responsible AI practices. The most common ethical principles include being human-centric and socially beneficial, being fair, offering explainability and transparency, being secure and safe, and showing accountability. ... “But it’s still about saving lives and while the model may not detect everything, especially the early stages of breast cancer, it’s a very important question,” Sicular says. “And because of its predictive nature, you will not have everyone answering the question in the same fashion. That makes it challenging because there’s no right or wrong answer.” ... “With generative AI, you will never be able to explain 10 trillion parameters, even if you have a perfectly transparent model,” Sicular says. “It’s a matter of AI governance and policy to decide what should be explainable or interpretable in critical paths. It’s not about generative AI per se; it's always been a question for the AI world and a long-standing problem.”


Design Patterns Are A Better Way To Collaborate On Your Design System

You probably don’t think of your own design activities as a “pattern-making” practice, but the idea has a lot of very useful overlap with the practice of making a design system. The trick is to collaborate with your team to find the design patterns in your own product design, the parts that repeat in different variations that you can reuse. Once you find them, they are a powerful tool for making design systems work with a team. ... All designers and developers can make their design system better and more effective by focusing on patterns first (instead of the elements), making sure that each is completely reusable and polished for any context in their product. “Pattern work can be a fully integrated part of both getting some immediate work done and maintaining a design system. ... This kind of design pattern activity can be a direct path for designers and developers to collaborate, to align the way things are designed with the way they are built, and vice-versa. For that purpose, a pattern does not have to be a polished design. It can be a rough outline or wireframe that designers and developers make together. It needs no special skills and can be started and iterated on by all. 


Digital Twin Technology: Revolutionizing Product Development

Digital twin technology accelerates product development while reducing time to market and improving product performance, Norton says. The ability to design and develop products using computer-aided design and advanced simulation techniques can also facilitate collaboration, enable data driven decision making, engineer a market advantage, and reduce design churn. “Furthermore, developing an integrated digital thread can enable digital twins across the product lifecycle, further improving product design and performance by utilizing feedback from manufacturing and the field.” Using digital twins and generative design upfront allows better informed product design, enabling teams to generate a variety of possible designs based on ranked requirements and then run simulations on their proposed design, Marshall says. “Leveraging digital twins during the product use-cycle allows them to get data from users in the field in order to get feedback for better development,” she adds. Digital twin investments should always be aimed at driving business value. 


DevEx, a New Metrics Framework From the Authors of SPACE

Organizations can improve developer experience by identifying the top points of friction that developers encounter, and then investing in improving areas that will increase the capacity or satisfaction of developers. For example, an organization can focus on reducing friction in development tools in order to allow developers to complete tasks more seamlessly. Even a small reduction in wasted time, when multiplied across an engineering organization, can have a greater impact on productivity than hiring additional engineers. ... The first task for organizations looking to improve their developer experience is to measure where friction exists across the three previously described dimensions. The authors recommend selecting topics within each dimension to measure, capturing both perceptual and workflow metrics for each topic, and also capturing KPIs to stay aligned with the intended higher-level outcomes. ... The DevEx framework provides a practical framework for understanding developer experience, while the accompanying measurement approaches systematically help guide improvement.


While there are many companies with altruistic intentions, the reality is that most organizations are beholden to stakeholders whose chief interests are profit and growth. If AI tools help achieve those objectives, some companies will undoubtedly be indifferent to their downstream consequences, negative or otherwise. Therefore, addressing corporate accountability around AI will likely start outside the industry in the form of regulation. Currently, corporate regulation is pretty straightforward. Discrimination, for instance, is unlawful and definable. We can make clean judgments about matters of discrimination because we understand the difference between male and female, or a person’s origin or disability. But AI presents a new wrinkle. How do you define these things in a world of virtual knowledge? How can you control it? Additionally, a serious evaluation of what a company is deploying is necessary. What kind of technology is being used? Is it critical to the public? How might it affect others? Consider airport security. 


Prepare for generative AI with experimentation and clear guidelines

Your first step should be deciding where to put generative AI to work in your company, both short-term and into the future. Boston Consulting Group (BCG) calls these your “golden” use cases — “things that bring true competitive advantage and create the largest impact” compared to using today’s tools — in a recent report. Gather your corporate brain trust to start exploring these scenarios. Look to your strategic vendor partners to see what they’re doing; many are planning to incorporate generative AI into software ranging from customer service to freight management. Some of these tools already exist, at least in beta form. Offer to help test these apps; it will help teach your teams about generative AI technology in a context they’re already familiar with. ... To help discern the applications that will benefit the most from generative AI in the next year or so, get the technology into the hands of key user departments, whether it’s marketing, customer support, sales, or engineering, and crowdsource some ideas. Give people time and the tools to start trying it out, to learn what it can do and what its limitations are. 


Cyberdefense will need AI capabilities to safeguard digital borders

Speaking at CSIT's twentieth anniversary celebrations, where he announced the launch of the training scheme, Teo said: "Malign actors are exploiting technology for their nefarious goals. The security picture has, therefore, evolved. Malicious actors are using very sophisticated technologies and tactics, whether to steal sensitive information or to take down critical infrastructure for political reasons or for profit. "Ransomware attacks globally are bringing down digital government services for extended periods of time. Corporations are not spared. Hackers continue to breach sophisticated systems and put up stolen personal data for sale, and classified information." Teo also said that deepfakes and bot farms are generating fake news to manipulate public opinion, with increasingly sophisticated content that blur the line between fact and fiction likely to emerge as generative AI tools, such as ChatGPT, mature and become widely available. "Threats like these reinforce our need to develop strong capabilities that will support our security agencies and keep Singapore safe," the minister said. 


Five key signs of a bad MSP relationship – and what to do about them

Red flags to look out for here include overly long and unnecessarily complicated contracts. These are often signs of MSPs making lofty promises, trying to tie you into a longer project, and pre-emptively trying to raise bureaucratic walls to make accessing the services you are entitled to more complex. The advice here is simple – don’t rush the contract signing. Instead, ensure that the draft contract is passed through the necessary channels, so that all stakeholders have complete oversight. Also, do not be tempted by outlandish promises; think pragmatically about what you want to achieve with your MSP relationship, and make sure the contract reflects your goals. If you’re already locked into a contract, consider renegotiating specific terms. ... If projects are moving behind schedule and issues are coming up regularly, this is a sign that your project lacks true project management leadership. Of course, both parties will need some time when the project starts to get processes running smoothly, but if you’re deep into a contract and still experiencing delays and setbacks, this is a sign that all is not well at your MSP. 



Quote for the day:

"The greatest thing is, at any moment, to be willing to give up who we are in order to become all that we can be." -- Max de Pree

Daily Tech Digest - May 14, 2023

How to Balance Data Governance with Data Democracy

Data democratization is important to an organization because it ensures an effective and efficient method of providing all users, regardless of technical expertise, the ability to analyze readily accessible and reliable data to influence data-driven decisions and drive real-time insights. This eliminates the frustration of requesting access, sorting information, or reaching out to IT for help. ... The solution to this problem lies in data federation, which makes data from multiple sources accessible under a uniform data model. This model acts as a "single point of access": organizations create a virtual database through which data can be accessed where it already lives. This makes it easier for organizations to query data from different sources in one place. With a single point of access, users can go to one location for searching, finding, and accessing every piece of data your organization has. This will make it easier to democratize data access because you won’t need to facilitate access across many different sources.
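
As one illustration of a single point of access, here is a sketch using DuckDB to query two sources in place with a single SQL statement; the file names and columns are assumptions for the example.

    import duckdb

    con = duckdb.connect()

    # One query joining a Parquet export from the warehouse with a CSV extract
    # from an operational system, without copying either into a new database.
    result = con.execute("""
        SELECT c.region, SUM(o.amount) AS total
        FROM read_parquet('warehouse/orders.parquet') AS o
        JOIN read_csv_auto('crm/customers.csv') AS c USING (customer_id)
        GROUP BY c.region
    """).fetchall()
    print(result)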


Will ChatGPT and Generative AI “Replace” Testing?

It stands to reason, then, that ChatGPT and generative AI will not "replace" testing or remove the need to invest in QA. Instead, like test execution automation before it, generative AI will provide a useful tool for moving faster. Yet, there will always be a need for more work, and at least a constant (if not greater) need for human input. Testers' time might be applied less to repetitive tasks like scripting, but new processes will fill the void. Meanwhile, the creativity and critical thinking offered by testers will not diminish in value as these repetitive processes are automated; such creativity should be given greater freedom. At the same time, your testers will have vital insight into how generative AI should be used in your organization. Nothing is adopted overnight, and identifying the optimal applications of tools like ChatGPT will be an ongoing conversation, just as the testing community has continually explored and improved practices for getting the most out of test automation frameworks. Lastly, as the volume of possible test scenarios grows, automation and AI will need a human steer in knowing where to target its efforts, even as we can increasingly use data to target test generation.


How agtech is poised to transform India into a farming powerhouse

Collaboration will be crucial. While agtechs might facilitate better decision making and replace manual farming practices like spraying, reducing dependence on retailers and mandis, incumbents remain important in the new ecosystem for R&D and the supply of chemicals and fertilizers. There are successful platforms already emerging that offer farmers an umbrella of products and services to address multiple, critical pain points. These one-stop shop agri-ecosystems are also creating a physical backbone/supply chain—which makes it easier for incumbents and start-ups to access the fragmented farmer base. Agtechs have a unique opportunity to become ideal partners for companies seeking market access. In this scenario, existing agriculture companies are creating value for the farmer by having more efficient and cost-effective access to the farmer versus traditional manpower-intensive setups. It’s a system that builds: the more agtechs know the farmer, the better products they can develop. India’s farms have been putting food on the table for India and the world for decades. 


How A Non Data Science Person Can Work Effectively With A Data Scientist

Effective communication is essential for a successful partnership. The data scientist should communicate technical procedures and conclusions in a clear and concise manner. In turn, the non-data science person should communicate business requirements and limitations. Both sides can collaborate successfully by developing a clear understanding of the project objectives and the data science methodologies. Setting expectations and establishing the project’s scope from the beginning is equally critical. The non-data scientist should specify what they expect from the data scientist, including the results they intend to achieve and the project’s schedule. In return, the data scientist should describe their areas of strength and the achievable goals that fall within the project’s parameters. It is crucial to keep the lines of communication open and transparent throughout the process. Regular meetings and status reports should be organized to keep everyone informed of the project’s progress and to identify any potential issues.


Why Metadata Is a Critical Asset for Storage and IT Managers

Advanced metadata is handled differently by file storage and object storage environments. File storage organizes data in directory hierarchies, which means you can’t easily add custom metadata attributes. ... Metadata is massive because the volume and variety of unstructured data – files and objects – are massive and difficult to wrangle. Data is spread across on-premises and edge data centers and clouds and stored in potentially many different systems. To leverage metadata, you first need a process and tools for managing data. Managing metadata requires both strategy and automation; choosing the best path forward can be difficult when business needs are constantly changing and data types may also be morphing from the collection of new data types such as IoT data, surveillance data, geospatial data, and instrument data. Managing metadata as it grows can also be problematic. Can you have too much? One risk is a decrease in file storage performance. Organizations must consider how to mitigate this; one large enterprise we know switched from tagging metadata at the file level to the directory level.
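
On the object-storage side, custom metadata rides along with each object; here is a hedged boto3 sketch, with the bucket, key, and tag names invented for illustration.

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="research-data",
        Key="surveys/2023/site-42.csv",
        Body=open("site-42.csv", "rb"),
        # Custom attributes that a file system's directory hierarchy
        # cannot easily express.
        Metadata={"project": "coastal-survey", "retention": "7y", "pii": "false"},
    )

    # The attributes come back with the object, so downstream tools and
    # policy engines can act on them.
    head = s3.head_object(Bucket="research-data", Key="surveys/2023/site-42.csv")
    print(head["Metadata"])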


Understand the 3 major approaches to data migration

Application data migration—sometimes called logical data migration or transaction-level migration—is a migration approach that utilizes the data mobility capabilities built natively into the application workload itself. ... Technique: Some applications offer proprietary data mobility features. These capabilities usually facilitate or assist with configuring backups or secondary storage. These applications then synchronously or asynchronously ensure that the secondary storage is valid and, when necessary, can be used without the primary copy. ... Block-level data migration is performed at the storage volume level. Block-level migrations are not strictly concerned with the actual data stored within the storage volume. Rather, they include file system data of any kind, partitions of any kind, raw block storage, and data from any applications. Technique: Block-level migration tools synchronize one storage volume to another storage volume from the beginning of the volume (byte 0) to the end of the entire volume (byte N) without processing any data content.
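
A deliberately simplified sketch of the block-level technique follows: read the source volume from byte 0 to byte N in fixed-size chunks, indifferent to file systems or partitions. The device paths are illustrative, and a real tool would also track writes landing on the source during the copy (e.g., via snapshots or a changed-block bitmap).

    CHUNK = 4 * 1024 * 1024  # 4 MiB per read

    def copy_volume(src_path: str, dst_path: str) -> None:
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while True:
                block = src.read(CHUNK)
                if not block:  # reached byte N, the end of the volume
                    break
                dst.write(block)  # raw bytes: files, partitions, everything

    # Example: copy_volume("/dev/source_volume", "/dev/target_volume")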


Open Source MongoDB Alternative FerretDB Now Generally Available

FerretDB works as a proxy that translates MongoDB wire protocol queries to SQL, with PostgreSQL as the database backend. Started as an open-source alternative to MongoDB, FerretDB provides the same MongoDB APIs without developers needing to learn a new language or command. Peter Farkas, co-founder and CEO of FerretDB, explains: We are creating a new standard for document databases with MongoDB compatibility. FerretDB is a drop-in replacement for MongoDB, but it also aims to set a new standard that not only brings easy-to-use document databases back to its open-source roots but also enables different database engines to run document database workloads using a standardized interface. While FerretDB is built on PostgreSQL, the database is designed with a pluggable architecture to support other backends, with projects for Tigris, SAP HANA, and SQLite currently in the works. Written in Go, the project was originally started because the Server Side Public License (SSPL) that MongoDB adopted in 2018 does not meet all criteria for open-source software set by the Open Source Initiative.
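
Because FerretDB speaks the MongoDB wire protocol, a standard MongoDB driver should work unchanged, with only the connection string pointing at FerretDB; the host, port, and credentials in this sketch are illustrative assumptions.

    from pymongo import MongoClient

    # Same driver and commands as against MongoDB itself.
    client = MongoClient("mongodb://user:pass@localhost:27017/mydb")
    db = client.mydb

    db.inventory.insert_one({"item": "notebook", "qty": 25})
    print(db.inventory.find_one({"item": "notebook"}))
    # Behind the scenes, FerretDB translates these commands into SQL
    # against its PostgreSQL backend.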


Wardley Mapping and Strategy for Software Developers

This is a more engineering-focused way to look at a business and isn’t dependent on stories, aphorisms or strange MBA terms. A few people have asked me personally whether this method really works. But it isn’t a “method” as such; just a way to agree on the environment that may otherwise be left unchallenged. Jennifer Riggins has already covered the background to Wardley mapping in detail, so I only need to summarize what we need to become aware of. ... So how do you map your own projects? One good start is simply to get your team together and see if they can map just the build process — with a build as the final product (the cup of tea). For example: starting from an agreed story, through to a change in the code in the repository, to a checkout into a staging build, to deployment. See if everyone even agrees what this looks like. The result should eventually be a common understanding. There are plenty of introductions to mapping, but the important thing is to recognize that you can represent a business in a fairly straightforward way. 


The Leader's Role in Building Independent Thinkers: How to Equip Your Team for Success

Striving for perfection can often lead to "analysis paralysis," hindering progress and preventing team members from taking action. To encourage independent thinking, leaders must prioritize action over perfection. By creating a culture of experimentation and iteration, employees learn from their mistakes, build confidence, and become less afraid of failure. ... Standing firmly behind your values and vision is a powerful way for leaders to generate independent thinking in their teams. When team members see their leader living by strong values and embodying a clear vision, they feel empowered to follow their example. This approach cultivates an environment of trust and confidence, enabling your employees to think critically and independently. ... It is essential for leaders to avoid merely delegating tasks and stepping back. Instead, actively participate in the work alongside your team, providing guidance and offering support when needed. This approach instills a sense of collaboration and helps your team feel part of something bigger. 


The Great Resignation takes a dark turn: Is it now the Great Layoff? Expert weighs in

The main challenges that Gen-Z employees face in the event of a layoff are a lack of savings, a lack of job experience, and a lack of job security. Many Generation Z workers are just starting out in their careers and haven't had time to save. Many people may have little or no savings in case of a financial emergency, such as job loss. Because Generation Z is so young, they have yet to have the opportunity to gain the experience that their elders have. If they are laid off, they are concerned that they will not have the necessary experience to re-enter the workforce. Finally, even if Gen Z workers are employed, they may believe their job is in jeopardy due to the pandemic's impact on their industry. They may be concerned that their employer will lay off employees or that their position will become obsolete as the company adapts to the changing business environment. Because of these challenges and ongoing economic uncertainty, Generation Z remains concerned about the possibility of layoffs. 



Quote for the day:

"Innovation distinguishes between a leader and a follower." -- Steve Jobs

Daily Tech Digest - May 13, 2023

How to build employee careers through an internal talent marketplace

One of the biggest hurdles to the success of an internal talent marketplace is the reluctance that people managers show when it comes to letting talent go. This is especially true for top talent and individuals they believe to be critical to their success. To overcome this challenge, managers need to be coached to recognize how employing this concept is, in fact, beneficial for the organisation on the whole. Before implementing any such initiative, it is necessary for managers to understand the long-term purpose that an internal marketplace will help serve and how retaining top talent in a different role within the organisation is far more favourable than having them leave the organisation. ... It is also the organisation's responsibility to ensure that its employees are provided relevant learning opportunities. The keyword is relevant. Using the information gleaned from regular discussions and performance assessments, managers will be in a strong position to create learning/training initiatives that are aligned with individual and organisational goals. This will provide employees with the necessary impetus to upskill themselves before they apply for any other internal opportunities.


What it Really Takes to Transition from Entrepreneur to CEO

Once you realize you need others to succeed, there's a key step to take next: disconnect emotionally from the business. Of course, you still must care deeply about the business; you just need to realize you and the business are no longer one. This whole idea might sound counterintuitive, but it's important. After all, with most entrepreneurs, your business is an enormous part of your identity. But as you begin to embrace the CEO role, you have to start sharing the business with others for it to grow. Sometimes this is literal — in terms of equity that gets distributed — while other times, it's sharing things like responsibility and key decisions. ... But while the CEO sets the vision, yours is no longer the only one, as it likely was when you were a solitary business owner. As you build a team of strong leaders around you, each of those individuals will have their own opinions about where the company should be headed. The CEO's role is to align your team around a shared purpose, values and mission, but all of you must create this together.


The Business Case For Federated Data Governance & Access Control

Recent MIT-CISR research from Stephanie Woerner and others shows that 51% of enterprises are to this day, locked in silos, and 21% have a morass of tech debt stitching their companies together. Ross and her co-authors describe a situation where “80% of the company’s programming code (was) dedicated to linking disparate systems as opposed to creating new capabilities.” Scenarios like this are unfortunately common and lead to business architectures that aren’t agile, nor do they have the resources or capabilities that enable digital transformation. ... So, is there a better approach? Simply put, yes, but first I want to suggest that we need to consider data governance and access control as a system of systems. This means moving to what Gartner calls ‘Federated Data Governance’ – universal controls are applied to data by establishing a system of policies and controls. For example, in the case of the finance department, when controlling data around the end of the quarter or specific timeframe is important, localized controls should and can be put in place. 
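
One way to picture federated governance is universal controls defined once and merged with domain-level overrides; the policy fields in this toy sketch are invented for illustration.

    # Universal controls apply to all data everywhere.
    GLOBAL_POLICY = {"mask_pii": True, "max_retention_days": 365}

    # Localized controls: e.g., finance tightens access around the quarter close.
    LOCAL_POLICIES = {
        "finance": {"freeze_during_close": True, "max_retention_days": 2555},
    }

    def effective_policy(domain: str) -> dict:
        policy = dict(GLOBAL_POLICY)                    # start from universal controls
        policy.update(LOCAL_POLICIES.get(domain, {}))   # local rules override or extend
        return policy

    print(effective_policy("finance"))
    print(effective_policy("marketing"))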


Credential Hacking: The Dark Side of Artificial Intelligence

If we take a step back to design a layered defense approach, strong authentication is just one part of a holistic cybersecurity approach. For an entire security architecture to work effectively, zero trust must be integrated into the whole equation. To that end, there are two additional aspects—attestation and assumed breach—beyond simply authentication. AI helps in both these areas. In this new cybersecurity normal, breaches are inevitable. This widely accepted truth also means that it is not so much a matter of getting breached as it is a matter of rapid detection, containment, and recovery, so that significant business impact is not felt and cyber resilience is sustained after a breach. Assumed breach requires the continual upkeep and ingestion of cyber threat intelligence so that new IoCs (Indicators of Compromise) and TTPs (Tactics, Techniques and Procedures) can be utilized to update the protective and detective measures to limit the blast radius of any successful attacks and to detect early for prompt containment.


How the IoT Is Integral to Automated Material Handling

IoT data often goes into cloud-based systems for easier access later. A leader might use such an interface to determine how many more parcels their company shipped after implementing automated material handling. They could also use IoT information to determine whether automation reduced injuries, product damage or other undesirable outcomes. Sometimes, IoT data can automatically correct a system’s processes for better results. Such was the case with one that used a predictive process adjustment module for automated storage and retrieval. ... If the IoT sensors picked up on something abnormal, the machine would make the necessary changes without human input. This technology is especially convenient for facilities that must meet high output goals and may not have large numbers of on-site team members to correct problems. Any automated material handling strategy should ideally include metrics people choose and follow before, during and after implementation. The IoT can aid people in selecting and monitoring appropriate statistics, thus providing insights into whether things are going well or if people should make adjustments to get optimal results.


Creating A Cybersecurity Disaster Recovery Plan

A chain is only as strong as its weakest link, and human error is still one of the leading causes of security incidents. According to the latest research, 82% of cybersecurity breaches are caused by human error, meaning cybersecurity education can eliminate all but the most complex threats. The overwhelming majority of people have good intentions, and so do most employees. However, some still don’t understand that “1234” isn’t a good password or that a Nigerian prince promising them a large sum of money is suspicious. To stay ahead of sloppy password use, organizations should mandate and enforce the use of strong passwords. Typically, a strong password is at least 8-12 characters long and includes a mix of uppercase and lowercase letters, numbers, and special characters. Employees must also regularly update passwords and refrain from using them across different accounts or services. Passwords must also avoid using common words, phrases, or personal information. Additionally, train employees to identify and report suspicious activities.
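
A minimal sketch of enforcing the password rules described above follows; the common-word list is a tiny illustrative sample, not a production denylist.

    import re

    COMMON = {"password", "1234", "qwerty", "letmein"}  # illustrative sample only

    def is_strong(password: str) -> bool:
        # 12+ characters, no common words, and a mix of lowercase,
        # uppercase, digits, and special characters.
        if len(password) < 12:
            return False
        if any(word in password.lower() for word in COMMON):
            return False
        checks = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
        return all(re.search(p, password) for p in checks)

    print(is_strong("1234"))               # False: short and a common sequence
    print(is_strong("V3ry-L0ng&Rand0m!"))  # True: long with all character classes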


Business automation intensifies as data governance returns

The research indicates 2023 to be the year of automation, from the use of super-hyped generative artificial intelligence (AI) technologies such as ChatGPT to more traditional business and IT automation. Organisations in the EMEA region are planning to increase their use of automation more than North America and Asia-Pacific, according to the research. But in terms of a specific area of business applications, customer experience is well to the fore of investment projects. Some 43% said they will invest in customer experience software spanning marketing, sales and contact centre management. Stephanie Corby, practice director at TechTarget’s analyst division, Enterprise Strategy Group, says: “CX is a top business driver for enterprise organisations ... but the reality is that most organisations are still in the early stages of CX maturity and strategy. The complexity of CX technology stacks has created integration and adoption challenges that will inevitably drive conversion to platforms.”


Modern Data Management Platforms Are Vital For Solving Modern Data Management Problems

With the growing importance of data, it has also become essential to integrate data security and data protection into the broader security ecosystem for increased insight and responsiveness. The evolving nature of cyber threats makes a proactive approach essential, and Security Information and Event Management (SIEM) systems must be connected to easily feed alerts, events, and audit data to other platforms. This gives security teams greater visibility into anomalies and threats, improving responsiveness and mitigating risk. Ongoing global economic instability means that across industries, businesses need to improve cost efficiency and optimise budgets. Data can easily become a major cost centre for businesses, and yet there needs to be increased spend around security, especially for mission-critical areas. Intelligent technologies can help businesses reduce the time it takes to protect applications by improving efficiency of backups and scans, which is a quick and easy way of reducing costs.


Is the Big Data Lake Era Fading?

Data lakes undoubtedly offer benefits over the previous, more traditional approach of handling data, like ERP and CRM software. While the previous approach is more like small, self-owned, self-operated stores, data lakes can be compared to Walmart, where all the data can be found in a single place. However, as the technology matures, enterprises are finding that this approach also comes with its set of drawbacks. Without proper management, large data lakes can quickly become data swamps — unmanageable pools of dirty data. In fact, there are three ways in which data lakes can fall apart: complexity, data quality, and security. Flexibility is one of the biggest pros of maintaining a data lake, as data lakes are large dumps of raw data in their native format. This data is also not stored in a hierarchical structure, instead using a flat architecture. However, this flexibility comes with added complexity, meaning that talented data scientists and engineers need to trawl through the data to derive value from it. This cannot be done without specialised talent to maintain and operate it.


How to Navigate Structured and Unstructured Data as a Healthcare Organization

Unstructured data is immensely valuable to healthcare. “If you approach it from a high level, clinical notes are a glimpse into the physician’s brain,” says Brian Laberge, solution engineer at software and solutions provider Wolters Kluwer. In addition, written notes often capture the severity of a patient’s health condition or nuanced nonclinical social needs far better than highly structured diagnostic codes, he adds. Clinical and administrative staff can easily parse free text for relevant information, such as a diagnosis or a treatment recommendation. The difficulty stems from what comes next. ... Patient-generated health data comes with its own set of concerns. While it may be available in real time from sources such as monitoring devices or digital therapeutics applications — and it may be structured in its own right — most of it is only transferrable into EHRs as unstructured summary reports, notes Natalie Schibell, vice president and principal analyst at Forrester. (The same is true of visit summaries that come from urgent care, retail health or telehealth providers not affiliated with a health system.)



Quote for the day:

"Making good decisions is a crucial skill at every level." -- Peter Drucker

Daily Tech Digest - May 12, 2023

The Industrywide Consequences of Making Security Products Inaccessible

Restricting access to security products creates situations where people from underrepresented groups are not able to easily catch up with their more fortunate peers who are already employed by enterprises with access to the latest tooling. In other words, companies publicly championing their efforts to increase diversity and get more people from underrepresented groups in the industry are actually making it harder for the same people to get into cybersecurity. It's not uncommon to see motivated and driven people from underrepresented backgrounds spend their free time studying and trying to level up their skills so they can move up the career ladder. While scholarships and grants are certainly helpful, what can be even more impactful is giving them access to tools they need to learn to develop new skills, build résumés, and get hired or promoted. ... It seems like most security vendors today create thought leadership content about how bad the talent shortage is for the industry, yet few are making it easy for people to become job ready by learning how to use their tools.


Open-Source Leadership to the European Commission: CRA Rules Pose Tech and Economic Risks to EU

As currently written, the CRA would impose a number of new requirements on hardware manufacturers, software developers, distributors, and importers who place digital products or services on the EU market. The list of proposed requirements includes an "appropriate" level of cybersecurity, a prohibition on selling products with any known vulnerability, security-by-default configuration, protection from unauthorized access, limitation of attack surfaces, and minimization of incident impact. The list of proposed rules also includes a requirement for self-certification by suppliers of software to attest conformity with the requirements of the CRA, including security, privacy, and the absence of known vulnerabilities (CVEs). The problem with these rules, explained Mike Milinkovich, executive director of the Eclipse Foundation, in a blog post, is that they break the "fundamental social contract" that underpins open source, which is, simply stated, that producers of that software provide it freely, but accept no liability for its use and provide no warranties.


White House addresses AI’s risks and rewards

While Schiappa agreed that AI can exploit vulnerabilities with malicious code, he argued that the quality of the output generated by LLMs is still hit-and-miss. “There is a lot of hype around ChatGPT but the code it generates is frankly not great,” he said. Generative AI models can, however, accelerate processes significantly, Schiappa said, adding that the “invisible” parts of such tools — those aspects of the model not involved in the natural language interface with a user — are actually more risky from an adversarial perspective and more powerful from a defense perspective. Meta’s report said industry defensive efforts are forcing threat actors to find new ways to evade detection, including spreading across as many platforms as they can to protect against enforcement by any one service. “For example, we’ve seen malware families leveraging services like ours and LinkedIn, browsers like Chrome, Edge, Brave and Firefox, link shorteners, file-hosting services like Dropbox and Mega, and more. When they get caught, they mix in more services including smaller ones that help them disguise the ultimate destination of links,” the report said.


Start Your Architecture Modernization with Domain-Driven Discovery

Architecture modernization projects are complex, expensive, and full of risks. Starting with a Domain-Driven Discovery (DDD) focuses your team and improves your chances of success. ... There was a time when we started new Agile projects with a two-week Sprint 0 and then launched right into coding the solution. Unfortunately, teams often found out later they wasted time and money on "building the wrong thing righter." The influences of Design Thinking and Dual-Track Agile and frameworks like Mobius have opened our collective eyes to the importance of a brief discovery for product work. ... We suggest using event storming workshops to clarify the business processes related to the in-scope systems. Start by choosing a primary process or experience to focus on, such as a new customer registration. Next, collaboratively identify every event in this end-to-end process. It’s important to focus on how it works today, not how it should work in the future. Then identify the subset of events that are essential to the process and label these Pivotal Events.
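To make the workshop output tangible, here is a minimal sketch of how the discovered events for a hypothetical new-customer-registration flow might be recorded, with Pivotal Events flagged; the event names and flags are invented for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DomainEvent:
        name: str
        pivotal: bool = False  # Pivotal Events mark major transitions in the flow

    # Events captured in the workshop, in the order they occur today
    registration_events = [
        DomainEvent("RegistrationFormSubmitted"),
        DomainEvent("EmailVerificationSent"),
        DomainEvent("EmailVerified", pivotal=True),     # account may now be activated
        DomainEvent("ProfileCompleted"),
        DomainEvent("AccountActivated", pivotal=True),  # customer is fully onboarded
    ]

    print([e.name for e in registration_events if e.pivotal])
    # ['EmailVerified', 'AccountActivated']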


Career Reinvention: Considering a Switch to Cybersecurity

A significant skills gap in the cybersecurity industry has created a unique opportunity for individuals from various backgrounds to enter the field. Employers are seeking new people who weren’t necessarily trained to be cyber defenders but who have fresh perspectives and the potential to learn. This situation creates a tremendous opportunity for career reinvention. In response to this talent gap, the industry has committed to providing new hires the resources and support they need to reach their fullest potential and succeed in a new career space. ... Of course, candidates should work to understand all they can about the types of cyber roles they would be most qualified for and interested in taking on – the good and the bad. This ensures nothing catches them off guard later down the line and leaves them regretting the career switch. It is also essential that any decisions are based on desire and genuine interest. If one enjoys the work they do and has the opportunity to work with good people, the rest will follow. Making a career change can be stressful, so taking it one step at a time is the best way to approach a drastic reinvention.


Data Waste Is Putting Retail Loyalty at Risk — Here’s Why

According to Wenthe, data wastage — the inefficient or ineffective use of data — has become a common blight among brands in all industries, and especially those in the retail, automotive, CPG, and entertainment spaces. “Data wastage comes in a variety of forms from data sources such as customer service, sales, or operations departments,” Wenthe says. “[It] is usually the result of a collection of unnecessary data, withholding relevant data from the right team, or failure to analyze or act on the data that has been collected.” While it’s not always easy to identify data wastage, Wenthe says time spent managing customer data collections is often the main culprit. According to Gartner, data inefficiencies can end up costing organizations an average of $12.9 million per year — a huge chunk of change for just about any company to lose. “This issue is important for any brand where their data remains disjointed and unable to interact with one another,” Wenthe says. Given the influx of data coming through new channels and departments, including customer service, sales, and operations, it’s no surprise to hear that retail brands are struggling right now.


Israeli threat group uses fake company acquisitions in CEO fraud schemes

The targeted organizations had headquarters in 15 countries, but because they are multinational corporations, employees in offices across 61 different countries were targeted. The reason the group focuses on large enterprises lies in the lure it chose to justify the very large transfers it is after: company acquisitions. It's not unusual for such multinational companies to acquire smaller companies in various local markets. ... "First, members of the executive team are likely to send and receive legitimate communications with the CEO on a regular basis, which means an email from the head of the organization may not seem abnormal," the researchers said. "Second, based on the stated importance of the supposed acquisition project, it’s reasonable for a senior leader at the company to be entrusted to help. And finally, because of their seniority within the organization, there is presumably less red tape that would need to be cut through in order for them to authorize a large financial transaction."


Poison Control: Report Says Tech Workplace Toxicity Rising

Joel Davies, senior people scientist at Culture Amp, says senior leadership holds the keys to creating a better work culture. “There is a common belief that ‘people leave managers, not companies,’ but we have found perceptions of senior leadership tend to be more important for employee engagement and commitment than perceptions of one’s direct manager. Senior leaders are role models, whether they like it or not. The way they behave at work creates powerful social norms that can impact how the rest of the organization behaves.” In a tough economic environment, Tsingos says transparency goes a long way toward building a positive workplace perception. “We’re living in an era of uncertainty in the financial markets,” he says. “This pressure creates toxicity. How do you deal with that? You deal with it with transparency. You deal with it with openness, and you deal with it by investing in your people. You might have a big company laying off thousands of people -- but there are some people who may come back and who are thankful for the transparency. Because that employer was investing in them and treating them nicely.”


The art of leading in the AI age

In the digital era, the leader as a subject matter expert is typically a senior programmer who takes on the role and responsibilities of someone who helps everyone else understand the opportunities and risks of developing something that makes life easier in the short term, but more complex and difficult in the longer term. ... we look for leaders who mediate between different reasons to use (or not to use) technology, because the best facilitator is the one who is most likely to make room for different needs and thus help her fellow human beings design their own lives. This means that leaders primarily act as organizational midwives who use their own experience and expertise to help others trust themselves—and one another—to do a job none of them could do alone. In the digital era, the leader as an organizational midwife is typically a chief experience officer or a people leader who takes on the role and responsibilities of someone who nurtures a culture in which decisions on how something should and should not be used are made deliberately and intentionally by everyone.


The Building Blocks of Success: Is Data Mesh Right for My Organization?

In many ways, data mesh is a lot like Legos. It’s possible to make over 915 million different combinations from just six different Lego bricks. A data mesh can similarly be built in any way that works best for your organization: choose each component carefully and build the solution that best fits your needs. ... The traditional operating model of centralized data engineering requires fewer skilled technical resources, as the business teams all share those resources. Decentralization can lead to each business team hiring and supporting its own technical staff, which requires more resources. On the one hand, this is one reason agility and speed-to-delivery improve: there are more people delivering, perhaps with fewer competing demands on their time. ... The strongest candidate for a data mesh has a compelling business case, strong buy-in and sufficient resources, and an organizational culture that supports it. If you have an approach that’s working for you — say, your organization is not domain-oriented and has centralized IT with fungible resources deployed across various projects — then data mesh likely isn’t the right investment at this time.



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley