Daily Tech Digest - May 31, 2023

5 best practices for software development partnerships

“The key to successful co-creation is ensuring your partner is not just doing their job, but acting as a true strategic asset and advisor in support of your company’s bottom line,” says Mark Bishopp, head of embedded payments/finance and partnerships at Fortis. “This begins with asking probing questions during the prospective stage to ensure they truly understand, through years of experience on both sides of the table, the unique nuances of the industries you’re working in.” Beyond asking questions about skills and capabilities, evaluate the partner’s mindset, risk tolerance, approach to quality, and other areas that require alignment with your organization’s business practices and culture. ... To eradicate the us-versus-them mentality, consider shifting to more open, feedback-driven, and transparent practices wherever feasible and compliant. Share information on performance issues and outages, have everyone participate in retrospectives, review customer complaints openly, and disclose the most challenging data quality issues.


Revolutionizing Algorithmic Trading: The Power of Reinforcement Learning

The fundamental components of a reinforcement learning system are the agent, the environment, states, actions, and rewards. The agent is the decision-maker, the environment is what the agent interacts with, states are the situations the agent finds itself in, actions are what the agent can do, and rewards are the feedback the agent gets after taking an action. One key concept in reinforcement learning is the idea of exploration vs exploitation. The agent needs to balance between exploring the environment to find out new information and exploiting the knowledge it already has to maximize the rewards. This is known as the exploration-exploitation tradeoff. Another important aspect of reinforcement learning is the concept of a policy. A policy is a strategy that the agent follows while deciding on an action from a particular state. The goal of reinforcement learning is to find the optimal policy, which maximizes the expected cumulative reward over time. Reinforcement learning has been successfully applied in various fields, from game playing (like the famous AlphaGo) to robotics (for teaching robots new tasks).
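A minimal sketch of these components in code, assuming an invented two-state "market" environment and tabular Q-learning with an epsilon-greedy policy (all state names, actions, and reward values are illustrative, not a trading strategy):

```python
import random

# Toy environment: states, actions, and rewards are invented for illustration.
STATES = ["flat", "uptrend"]
ACTIONS = ["hold", "buy"]
REWARDS = {("flat", "hold"): 0.0, ("flat", "buy"): -0.1,
           ("uptrend", "hold"): 0.0, ("uptrend", "buy"): 1.0}

def step(state, action):
    """Environment: return (reward, next_state) for the agent's action."""
    next_state = random.choice(STATES)          # transitions are random here
    return REWARDS[(state, action)], next_state

# Tabular Q-learning with an epsilon-greedy policy.
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2           # learning rate, discount, exploration rate

state = "flat"
for _ in range(10_000):
    # Exploration vs. exploitation: random action with probability epsilon...
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:  # ...otherwise exploit current knowledge (the greedy choice).
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward, next_state = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    # Reward feedback updates the value estimate for (state, action).
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

# The learned policy: the best-known action in each state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # expected: buy in an uptrend, hold otherwise
```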


Data Governance Roles and Responsibilities

Executive-level roles comprise C-suite leadership at the top of the organization. According to Seiner, people at the executive level support, sponsor, and understand Data Governance and determine its overall success and traction. Typically, these managers meet periodically as part of a steering committee to cover broadly what is happening in the organization, so they would add Data Governance as a line item, suggested Seiner. These senior managers take responsibility for understanding and supporting Data Governance. They keep up to date on Data Governance progress through direct reports and communications from those at the strategic level. ... According to Seiner, strategic members take responsibility for learning about Data Governance, reporting to the executive level about the program, being aware of Data Governance activities and initiatives, and attending meetings or sending alternates. Moreover, this group has the power to make timely decisions about Data Governance policies and how to enact them.


Effective Test Automation Approaches for Modern CI/CD Pipelines

Design is not just about unit tests, though. One of the biggest barriers to test automation executing directly in the pipeline is that the team that deals with the larger integrated system only starts much of its testing and automation effort once the code has been deployed into a bigger environment. This wastes critical time in the development process: certain issues will only be discovered later, even though there should be enough detail for testers to at least start writing the majority of their automated tests while the developers are still coding on their side. This doesn’t mean that manual verification, exploratory testing, and actually using the software shouldn’t take place. Those are critical parts of any testing process and are important steps to ensuring software behaves as desired. These approaches are also effective at finding faults with the proposed design. However, automating the integration tests allows the process to be streamlined.
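As a toy illustration of starting automation before deployment, testers can write tests against the agreed interface and run them with a stub standing in for the not-yet-deployed system; a minimal pytest sketch (the create_order contract, StubGateway, and all names are hypothetical):

```python
import pytest

# Hypothetical contract agreed between teams: create_order(gateway, item, qty,
# unit_price) must charge the gateway exactly once and return the order details.
# Testers can automate against this contract while integration work continues.

class StubGateway:
    """Stands in for the real payment service until it is deployed."""
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return "ok"

def create_order(gateway, item, qty, unit_price):
    # Simplified implementation of the agreed behaviour.
    if qty <= 0:
        raise ValueError("qty must be positive")
    gateway.charge(qty * unit_price)
    return {"item": item, "qty": qty}

def test_order_charges_gateway_once():
    gateway = StubGateway()
    order = create_order(gateway, "widget", 2, unit_price=5.0)
    assert order == {"item": "widget", "qty": 2}
    assert gateway.charges == [10.0]

def test_rejects_non_positive_quantity():
    with pytest.raises(ValueError):
        create_order(StubGateway(), "widget", 0, unit_price=5.0)
```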


What Does Being a Cross-Functional Team in Scrum Mean?

By bringing together individuals with different skills and perspectives, these teams promote innovation, problem-solving, and a holistic approach to project delivery. They reduce handoffs, bottlenecks, and communication barriers often plaguing traditional development models. Moreover, cross-functional teams enable faster feedback cycles and facilitate continuous improvement. With all the necessary skills in one team, there's no need to wait for handoffs or external dependencies. This enables quicker decision-making, faster iterations, and the ability to respond to customer feedback promptly. In short, being a cross-functional Scrum Team means having a group of individuals with diverse skills, a shared sense of responsibility, and a collaborative mindset. They work together autonomously, leveraging their varied expertise to deliver high-quality software increments. ... Building genuinely cross-functional Scrum Teams starts with product definition. This means identifying and understanding the scope, requirements, and goals of the product the team will work on. 


The strategic importance of digital trust for modern businesses

Modern software development processes, like DevOps, are highly automated. An engineer clicks a button that triggers a sequence of complicated, but automated, steps. If a part of this sequence (e.g., code signing) is manual, there is a real likelihood that the step will be missed, precisely because everything else is automated. Mistakes like using the wrong certificate or the wrong command-line options can happen. However, the biggest danger is often that the developer will store private code signing keys in a convenient location (like their laptop or build server) instead of a secure location. Key theft, misused keys, server breaches, and other insecure processes can permit code with malware to be signed and distributed as trusted software. Companies need a secure, enterprise-level code signing solution that integrates with the CI/CD pipeline and automated DevOps workflows but also provides key protection and code signing policy enforcement.
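A minimal sketch of the sign-then-verify step such a solution automates, using the Python cryptography package with an Ed25519 key; the key is generated in memory purely for illustration, whereas a real enterprise setup would keep it in an HSM or KMS:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# For illustration only: a real pipeline would fetch a key handle from an
# HSM/KMS, never generate or store private keys on a laptop or build server.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"contents of the compiled release artifact"
signature = private_key.sign(artifact)  # the automated signing step

# Consumers verify the signature before trusting the artifact.
try:
    public_key.verify(signature, artifact)
    print("signature valid: artifact is trusted")
except InvalidSignature:
    print("signature invalid: do not install")
```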


Managing IT right starts with rightsizing IT for value

IT financial management — sometimes called FinOps — is overlooked in many organizations. A surprising number of organizations do not have a very good handle on the IT resources being used. Another way of saying this is: Executives do not know what IT they are spending money on. CIOs need to make IT spend totally transparent. Executives need to know what the labor costs are, what the application costs are, and what the hardware and software costs are that support those applications. The organization needs to know everything that runs — every day, every month, every year. IT resources need to be matched to business units. IT and the business unit need to have frank discussions about how important that IT resource really is to them — is it Tier One? Tier Two? Tier Thirty? In the data management space — same story. Organizations have too much data. Stop paying to store data you don’t need and don’t use. Atle Skjekkeland, CEO at Norway-based Infotechtion, and John Chickering, former C-level executive at Fidelity, both insist that organizations “define their priority data, figure out what it is, protect it, and get rid of the rest.”
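As a toy illustration of that transparency, IT spend can be rolled up by business unit and tier so the "how important is this, really?" conversation starts from numbers (all resource names, tiers, and costs below are invented):

```python
# Invented inventory mapping IT resources to owning units, tiers, and cost.
resources = [
    {"name": "crm-app",       "unit": "Sales",   "tier": 1, "annual_cost": 420_000},
    {"name": "legacy-report", "unit": "Finance", "tier": 3, "annual_cost": 95_000},
    {"name": "hr-portal",     "unit": "HR",      "tier": 2, "annual_cost": 150_000},
    {"name": "old-archive",   "unit": "Finance", "tier": 3, "annual_cost": 60_000},
]

# Roll annual cost up by (business unit, tier).
totals = {}
for r in resources:
    key = (r["unit"], r["tier"])
    totals[key] = totals.get(key, 0) + r["annual_cost"]

for (unit, tier), cost in sorted(totals.items()):
    print(f"{unit:<8} tier {tier}: ${cost:,}")
# Low-tier spend per unit is the natural starting point for "do we still need this?"
```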


Implementing Risk-Based Vulnerability Discovery and Remediation

A risk-based vulnerability management program is a complex preventative approach used for swiftly detecting and ranking vulnerabilities based on their potential threat to a business. By implementing a risk-based vulnerability management approach, organizations can improve their security posture and reduce the likelihood of data breaches and other security events. ... Organizations should still have a methodology for testing and validating that patches and upgrades have been appropriately implemented and would not cause unanticipated flaws or compatibility concerns that might harm their operations. Also, remember that there is no "silver bullet": automated vulnerability management can help identify and prioritize vulnerabilities, making it easier to direct resources where they are most needed. ... Streamlining your patching management is another crucial part of your security posture: an automated patch management system is a powerful tool that may assist businesses in swiftly and effectively applying essential security fixes to their systems and software.
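A minimal sketch of risk-based prioritization, ranking findings by severity weighted by asset criticality and exposure rather than by CVSS score alone (the CVE IDs, weights, and scoring formula are all invented for illustration):

```python
# Invented findings: raw severity plus business context for each asset.
findings = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "asset_criticality": 1, "internet_facing": False},
    {"cve": "CVE-2023-0002", "cvss": 7.5, "asset_criticality": 5, "internet_facing": True},
    {"cve": "CVE-2023-0003", "cvss": 5.3, "asset_criticality": 4, "internet_facing": True},
]

def risk_score(f):
    """Weight raw severity by asset criticality and external exposure."""
    exposure = 2.0 if f["internet_facing"] else 1.0
    return f["cvss"] * f["asset_criticality"] * exposure

# Remediate in descending risk order, not descending CVSS order.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f["cve"], round(risk_score(f), 1))
# A 7.5 CVSS on a critical, internet-facing asset outranks a 9.8 on a low-value host.
```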


Upskilling the non-technical: finding cyber certification and training for internal hires

“If you are moving people into technical security from other parts of the organization, look at the delta between the employee's transferable skills and the job they’d be moving into. For example, if you need a product security person, you could upskill a product engineer or product manager because they know how the product works but may be missing the security mindset,” she says. “It’s important to identify those who are ready for a new challenge, identify their transferable skills, and create career paths to retain and advance your best people instead of hiring from outside.” ... While upskilling and certifying existing employees would help the organization retain talented people who already know the company, Deidre Diamond, founding CEO of cyber talent search company CyberSN, cautions against moving skilled workers to entry-level roles in security that don’t pay what the employees are used to earning. Upskilling financial analysts into compliance roles, whether as cyber risk analysts or GRC analysts, will require higher-level certifications, but the pay for those upskilled positions may be more equitable for those higher-paid employees, she adds.


Data Engineering in Microsoft Fabric: An Overview

Fabric makes it quick and easy to connect to Azure Data Services, as well as other cloud-based platforms and on-premises data sources, for streamlined data ingestion. You can quickly build insights for your organization using more than 200 native connectors. These connectors are integrated into the Fabric pipeline and offer user-friendly drag-and-drop data transformation with dataflows. Fabric standardizes on the Delta Lake format, which means all the Fabric engines can access and manipulate the same dataset stored in OneLake without duplicating data. This storage system provides the flexibility to build lakehouses using a medallion architecture or a data mesh, depending on your organizational requirements. For data transformation, you can choose a low-code or no-code experience using pipelines and dataflows, or a code-first experience using notebooks and Spark. Power BI can consume data from the Lakehouse for reporting and visualization. Each Lakehouse has a built-in TDS/SQL endpoint for easy connectivity and querying of data in the Lakehouse tables from other reporting tools.
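A minimal sketch of the code-first path, assuming a Fabric notebook where a Spark session (spark) is already provided; the table and column names are hypothetical:

```python
from pyspark.sql import Row

# `spark` is the session a Fabric notebook provides; invented sample rows.
orders = spark.createDataFrame([
    Row(order_id=1, region="EMEA", amount=120.0),
    Row(order_id=2, region="APAC", amount=80.0),
])

# Write once in Delta format to the Lakehouse; every Fabric engine
# (Spark, the SQL endpoint, Power BI) can then read the same table in OneLake.
orders.write.format("delta").mode("overwrite").saveAsTable("orders")

# Query it back through Spark SQL; the same table is also reachable from
# reporting tools through the Lakehouse's TDS/SQL endpoint.
spark.sql("SELECT region, SUM(amount) AS total FROM orders GROUP BY region").show()
```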



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - May 18, 2023

Security breaches push digital trust to the fore

Digital trust needs to be integrated within the organization and isn’t necessarily owned by a single department or job title. Even so, cybersecurity, and the CISO, have an important role to play, according to the World Economic Forum’s 2022 Earning Digital Trust report, in protecting the interconnectivity that supports businesses, people’s livelihoods, and society in general as reliance on digital interactions grows. As governments and regulators implement stricter requirements for ensuring data privacy and security, CISOs face a renewed need to prioritize digital trust or risk fines, lawsuits, significant brand damage, and revenue loss to the organization. Thomas suggests that for CISOs, digital trust could become the measurable metric and outcome of security initiatives. “Organizations are not only secure to be compliant and protect information. The outcome of this is the trust that customers have, and that is what's going to change the way we measure how well security is being implemented,” he says. “If you want to ensure your customers trust you, you need to look at it as an organizational goal, or have it as a part of the strategy. ...”


Preparing the Mindset for Change: Five Roadblocks That Lead Digital Transformation to Failure

The absence of effective advocacy may have significantly contributed to the failure of many digital transformation programs. However, it is the responsibility of the stakeholders to be the advocates of the change. The goal to change cannot be just a business decision; it needs to be believed in. A generational business often sees its founders married to legacy processes; they find it difficult to break the norm and adopt automation, even as disparate systems restrict growth and scale. ... A lack of strategic planning before and after implementation can lead to severe consequences for an organization. Conflicting priorities can arise, and critical objectives may not be effectively communicated or achieved due to a disconnect between business and technology plans.
Unfortunately, many organizations fail to recognize the importance of pre- and post-implementation planning and instead focus solely on the implementation process. This shortsighted approach can lead to poor customer and stakeholder engagement, as well as employee dissatisfaction.


Don't overlook attack surface management

Let’s look at three aspects of ASM that you should consider today: ... Visibility and discovery. Attack surface management should provide a comprehensive view of the cloud environment, allowing organizations to identify potential security weaknesses and blind spots. It helps uncover unknown assets, unauthorized services, and overlooked configurations, offering a clearer picture of potential entry points for attackers. ... Risk assessment and prioritization. By understanding the scope and impact of vulnerabilities, organizations can assess the associated risks and prioritize them. Attack surface management empowers businesses to allocate resources efficiently, focusing on high-risk areas that could have severe consequences if compromised. ... Remediation and incident response. When vulnerabilities are detected, ASM provides the necessary insights to remediate them promptly. It facilitates incident response by helping organizations take immediate action, such as applying patches, updating configurations, or isolating compromised resources.


One on One with Automated Software Testing Expert Phil Japikse

A common misconception is that creating automated testing increases the delivery time. There was a study done at Microsoft some years ago that looked at different teams. Some were using a test-first strategy, some were using a test-eventual strategy, and some groups were using traditional QA departments for their testing. Although the cycle time was slightly higher for those doing automated testing, the throughput was much higher. This was because the quality of their work was much higher, and they had much less rework. We all know it’s more interesting to work on new features and tedious and boring to fix bugs. If you aren’t including at least some automated testing in your development process, you are going to spend more time fixing bugs and less time building new features. ... The more complex or important the system is, the more testing it needs. Software that controls airplanes, for example, must be extremely well tested. One could argue that game software doesn’t need as much testing. It all depends on the business requirements for the application.


The Work Habits That Are Blocking Your Ideas, Dreams and Breakthrough Success

A reactive mind prevents us from responding productively to the moment. Any time we are reactive, because we are not effectively relating to ourselves in the moment, we cannot be present with others. Those who have been tasked with carrying out our objectives can sense our lack of clarity and misalignment. They may perceive us as "confused," for instance, and then our reactivity triggers their self-protective belief structures. Miscommunication becomes the norm when a reactive individual is leading a team. ... Your colleague's negativity is not only self-destructive; it is also destructive to the organization and the morale of their co-workers. But your own disconnection from the truth of the moment is also destructive. By prejudging a colleague, you are missing out on the opportunity to positively interact with them or influence their behavior, and both of these things matter. A healthy yet skeptical outlook is helpful. Would you want a contract written by your lawyer that only foresaw favorable outcomes? The invitation is to transform negativity into a healthy dynamic so that co-creativity and joy are both possible. You need to be open to the possibilities that each of us possesses.


Dialectic Thinking: The Secret to Exceptional Mindful Leadership

The paradox of acceptance and change may very well be the toughest one we grapple with. Whether in our own meditation practice and self-development or in leading an organization, it’s vital to take a dialectic approach. For genuine change to occur, there must first be acceptance of the current state. This acceptance forms the bedrock of reality, a foundation that is crucial for creating meaningful change. It's a truth that can't be obscured or sugarcoated. With acceptance, there's an opportunity to see things as they are and then to envisage something different. However, we can often misconstrue acceptance as passivity or complacency. It can be seen as an excuse to “do nothing”, to shy away from bold action, or to remain comfortably entrenched in the status quo. On the flip side, a relentless push for change can create a sense of perpetual dissatisfaction, hindering our ability to appreciate what already is. This can also foster a short-term, transactional mindset, particularly in relationships.


How to explain data meshes, fabrics, and clouds

“A data mesh is a decentralized approach to managing data, where multiple teams within a company are responsible for their own data, promoting collaboration and flexibility,” he said. There are no complex words in this definition, and it introduces the problems data meshes aim to solve, the type of solution, and why it’s important. Expect to be asked for more technical details, though, especially if the executive has prior knowledge of other data management technologies. For example, “Weren't data warehouses and data lakes supposed to solve the data management issue?” This question can be a trap if you answer it with the technical differences between data warehouses, lakes, and meshes. Instead, focus your response on the business objective. Satish Jayanthi, co-founder and CTO of Coalesce, offers this suggestion: “Data quality often affects the accuracy of business analytics and decision-making. By implementing data mesh paradigms, the quality and accuracy of data can be enhanced, resulting in increased trust among businesses to utilize data more extensively for informed decision-making.”


Has the Cloud Forever Changed Disaster Recovery?

For today’s organisations, resilience is paramount to a successful data protection plan, noted Lawrence Yeo, Enterprise Solutions Director, ASEAN, Hitachi Vantara. Being resilient entails having the flexibility to quickly restore data and applications to both existing and new cloud accounts. We believe that traditional backup and disaster recovery systems focused on data centres are becoming outdated; instead, we need a data protection strategy that prioritises IT resilience and can protect data anywhere, including public clouds and SaaS applications. Resilience is the key to a robust data protection strategy, as slow disaster recovery or data restoration can negatively impact business processes. Achieving it requires a data protection solution that encompasses backup and disaster recovery across on-premises and public clouds.


IoT Sensors - Sensing the danger

How can an operator establish integrity and accuracy within a sensor and mitigate potential vulnerabilities? This is where Root of Trust (RoT) hardware plays a crucial role. Hardware such as a Device Identifier Composition Engine (DICE) can supply a unique security key to each firmware layer found in a sensor or connected device. ... Should an attack on your systems be successful, and a layer become exposed, the unique key accessed by a hacker cannot be used to breach further elements. This can help reduce the risk of a significant data breach and enables operators to trust the devices they utilise in a network. A device can also easily be re-keyed should any unauthorised amendments be discovered within the sensor’s firmware, enabling users to quickly identify vulnerabilities throughout the system’s update process. For organisations with smaller devices and an even smaller budget, specifications such as the Measurement and Attestation Roots (MARS) can be deployed to instil the necessary capabilities of identity, measurement storage, and reporting in a more cost-effective manner.
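The layered-key idea can be sketched as a one-way derivation chain: each layer's secret is an HMAC of the parent secret and a hash measurement of the next firmware layer, so a leaked key reveals nothing about earlier layers, and changed firmware automatically yields a new key. This is a simplified illustration of the DICE concept, not the actual specification:

```python
import hashlib
import hmac

# Simplified illustration of DICE-style layered key derivation; the real
# specification differs in detail. UDS = Unique Device Secret burned into
# hardware (value here is invented).
UDS = b"\x00" * 32

def derive_layer_secret(parent_secret: bytes, firmware_image: bytes) -> bytes:
    """Child secret = HMAC(parent secret, measurement of the next layer)."""
    measurement = hashlib.sha256(firmware_image).digest()
    return hmac.new(parent_secret, measurement, hashlib.sha256).digest()

boot_fw = b"bootloader v1"
app_fw = b"sensor application v1"

l0 = derive_layer_secret(UDS, boot_fw)   # key for the boot layer
l1 = derive_layer_secret(l0, app_fw)     # key for the application layer

# Tampered (or legitimately updated) firmware yields a different key, so the
# device is effectively re-keyed and the old key stops working.
l1_tampered = derive_layer_secret(l0, b"sensor application v1 + malware")
assert l1 != l1_tampered
# Knowing l1 does not reveal l0 or the UDS, because HMAC is one-way.
```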


Data hoarding is bad for business and the environment

The findings suggest young consumers are unaware of the impact of their own carbon footprint. According to the report, 44% said it’s wrong for businesses to waste energy and cause pollution by storing unneeded information online. ... The fallout? The Veritas study found that 47% of consumers would stop buying from a company if they knew it was willfully causing environmental damage by failing to control how much unnecessary data it was storing. Meanwhile, 49% of consumers think it’s the responsibility of the organizations that store their information to delete it when it’s no longer needed, the report said. ... It is incumbent upon leaders to pay attention to this issue. Srinivasan cautioned that organizations should not underestimate the environmental impact of poor data management practices – even if they are outsourcing their storage to public cloud providers. One good data management practice would be to make consumers aware of the costs of all this data, especially the negative externalities on our overheating planet.



Quote for the day:

"Management is about arranging and telling. Leadership is about nurturing and enhancing." -- Tom Peters

Daily Tech Digest - May 16, 2023

Law enforcement crackdowns and new techniques are forcing cybercriminals to pivot

Because of stepped-up law enforcement efforts, cybercriminals are also facing a crisis in cashing out their cryptocurrencies, with only a handful of laundering vehicles in place due to actions against crypto-mixers who help obfuscate the money trail. "Eventually, they'll have to cash out to pay for their office space in St. Petersburg, to pay for their Lambos. So, they're going to need to find an exchange," Burns Coven said. Cybercriminals are just sitting on their money, as if stuffing it under the mattress. "It's been a tumultuous two years for the threat actors," she said. "A lot of law enforcement takedowns, challenging operational environments, and harder to get funds. And we're seeing this sophisticated laundering technique called absolutely nothing: doing nothing, just sitting on it." Despite the rising number of challenges, "I don't think there's a mass exodus of threat actors from ransomware," Burns Coven tells CSO, saying they are shifting tactics rather than exiting the business altogether.


5 IT management practices certain to kill IT productivity

Holding people accountable is root cause analysis predicated on the assumption that if something goes wrong, it must be someone’s fault. It’s a flawed assumption because most often, when something goes wrong, it’s the result of bad systems and processes, not someone screwing up. When a manager holds someone accountable, they’re really just blame-shifting. Managers are, after all, accountable for their organization’s systems and processes, aren’t they? Second problem: If you hold people accountable when something goes wrong, they’ll do their best to conceal the problem from you. And the longer nobody deals with a problem, the worse it gets. One more: If you hold people accountable whenever something doesn’t work, they’re unlikely to take any risks, because why would they? Why it’s a temptation: Finding someone to blame is, compared to serious root cause analysis, easy, and fixing the “problem” is, compared to improving systems and practices, child’s play. As someone once said, hard work pays off sometime in the indefinite future, but laziness pays off right now.


How AI ethics is coming to the fore with generative AI

The discussion of AI ethics often starts with a set of principles guiding the moral use of AI, which is then applied in responsible AI practices. The most common ethical principles include being human-centric and socially beneficial, being fair, offering explainability and transparency, being secure and safe, and showing accountability. ... “But it’s still about saving lives and while the model may not detect everything, especially the early stages of breast cancer, it’s a very important question,” Sicular says. “And because of its predictive nature, you will not have everyone answering the question in the same fashion. That makes it challenging because there’s no right or wrong answer.” ... “With generative AI, you will never be able to explain 10 trillion parameters, even if you have a perfectly transparent model,” Sicular says. “It’s a matter of AI governance and policy to decide what should be explainable or interpretable in critical paths. It’s not about generative AI per se; it's always been a question for the AI world and a long-standing problem.”


Design Patterns Are A Better Way To Collaborate On Your Design System

You probably don’t think of your own design activities as a “pattern-making” practice, but the idea has a lot of very useful overlap with the practice of making a design system. The trick is to collaborate with your team to find the design patterns in your own product design, the parts that repeat in different variations that you can reuse. Once you find them, they are a powerful tool for making design systems work with a team. ... All designers and developers can make their design system better and more effective by focusing on patterns first (instead of the elements), making sure that each is completely reusable and polished for any context in their product. “Pattern work can be a fully integrated part of both getting some immediate work done and maintaining a design system.” ... This kind of design pattern activity can be a direct path for designers and developers to collaborate, to align the way things are designed with the way they are built, and vice-versa. For that purpose, a pattern does not have to be a polished design. It can be a rough outline or wireframe that designers and developers make together. It needs no special skills and can be started and iterated on by all.


Digital Twin Technology: Revolutionizing Product Development

Digital twin technology accelerates product development while reducing time to market and improving product performance, Norton says. The ability to design and develop products using computer-aided design and advanced simulation techniques can also facilitate collaboration, enable data-driven decision-making, engineer a market advantage, and reduce design churn. “Furthermore, developing an integrated digital thread can enable digital twins across the product lifecycle, further improving product design and performance by utilizing feedback from manufacturing and the field.” Using digital twins and generative design upfront allows better-informed product design, enabling teams to generate a variety of possible designs based on ranked requirements and then run simulations on their proposed design, Marshall says. “Leveraging digital twins during the product use-cycle allows them to get data from users in the field in order to get feedback for better development,” she adds. Digital twin investments should always be aimed at driving business value.


DevEx, a New Metrics Framework From the Authors of SPACE

Organizations can improve developer experience by identifying the top points of friction that developers encounter, and then investing in improving areas that will increase the capacity or satisfaction of developers. For example, an organization can focus on reducing friction in development tools in order to allow developers to complete tasks more seamlessly. Even a small reduction in wasted time, when multiplied across an engineering organization, can have a greater impact on productivity than hiring additional engineers. ... The first task for organizations looking to improve their developer experience is to measure where friction exists across the three previously described dimensions. The authors recommend selecting topics within each dimension to measure, capturing both perceptual and workflow metrics for each topic, and also capturing KPIs to stay aligned with the intended higher-level outcomes. ... The DevEx framework provides a practical framework for understanding developer experience, while the accompanying measurement approaches systematically help guide improvement.
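As a toy illustration of that measurement step, each topic can carry both a perceptual metric (say, a satisfaction score) and a workflow metric (say, median wait time), and the pair can be used to surface the top friction points (the topics, scores, and figures below are invented):

```python
# Invented survey + telemetry data: one perceptual and one workflow metric
# per DevEx topic, as the framework recommends capturing.
topics = [
    {"topic": "build speed",     "satisfaction": 2.1, "p50_wait_min": 18},
    {"topic": "code review",     "satisfaction": 3.4, "p50_wait_min": 240},
    {"topic": "local dev setup", "satisfaction": 2.8, "p50_wait_min": 45},
]

# Rank friction: lowest satisfaction first, longest waits breaking ties.
for t in sorted(topics, key=lambda t: (t["satisfaction"], -t["p50_wait_min"])):
    print(f'{t["topic"]}: satisfaction {t["satisfaction"]}, median wait {t["p50_wait_min"]} min')
```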


While there are many companies with altruistic intentions, the reality is that most organizations are beholden to stakeholders whose chief interests are profit and growth. If AI tools help achieve those objectives, some companies will undoubtedly be indifferent to their downstream consequences, negative or otherwise. Therefore, addressing corporate accountability around AI will likely start outside the industry in the form of regulation. Currently, corporate regulation is pretty straightforward. Discrimination, for instance, is unlawful and definable. We can make clean judgments about matters of discrimination because we understand the difference between male and female, or a person’s origin or disability. But AI presents a new wrinkle. How do you define these things in a world of virtual knowledge? How can you control it? Additionally, a serious evaluation of what a company is deploying is necessary. What kind of technology is being used? Is it critical to the public? How might it affect others? Consider airport security.


Prepare for generative AI with experimentation and clear guidelines

Your first step should be deciding where to put generative AI to work in your company, both short-term and into the future. Boston Consulting Group (BCG) calls these your “golden” use cases — “things that bring true competitive advantage and create the largest impact” compared to using today’s tools — in a recent report. Gather your corporate brain trust to start exploring these scenarios. Look to your strategic vendor partners to see what they’re doing; many are planning to incorporate generative AI into software ranging from customer service to freight management. Some of these tools already exist, at least in beta form. Offer to help test these apps; it will help teach your teams about generative AI technology in a context they’re already familiar with. ... To help discern the applications that will benefit the most from generative AI in the next year or so, get the technology into the hands of key user departments, whether it’s marketing, customer support, sales, or engineering, and crowdsource some ideas. Give people time and the tools to start trying it out, to learn what it can do and what its limitations are. 


Cyberdefense will need AI capabilities to safeguard digital borders

Speaking at CSIT's twentieth anniversary celebrations, where he announced the launch of the training scheme, Teo said: "Malign actors are exploiting technology for their nefarious goals. The security picture has, therefore, evolved. Malicious actors are using very sophisticated technologies and tactics, whether to steal sensitive information or to take down critical infrastructure for political reasons or for profit. "Ransomware attacks globally are bringing down digital government services for extended periods of time. Corporations are not spared. Hackers continue to breach sophisticated systems and put up stolen personal data for sale, and classified information." Teo also said that deepfakes and bot farms are generating fake news to manipulate public opinion, with increasingly sophisticated content that blur the line between fact and fiction likely to emerge as generative AI tools, such as ChatGPT, mature and become widely available. "Threats like these reinforce our need to develop strong capabilities that will support our security agencies and keep Singapore safe," the minister said. 


Five key signs of a bad MSP relationship – and what to do about them

Red flags to look out for here include overly long and unnecessarily complicated contracts. These are often signs of MSPs making lofty promises, trying to tie you into a longer project, and pre-emptively trying to raise bureaucratic walls to make accessing the services you are entitled to more complex. The advice here is simple – don’t rush the contract signing. Instead, ensure that the draft contract is passed through the necessary channels, so that all stakeholders have complete oversight. Also, do not be tempted by outlandish promises; think pragmatically about what you want to achieve with your MSP relationship, and make sure the contract reflects your goals. If you’re already locked into a contract, consider renegotiating specific terms. ... If projects are moving behind schedule and issues are coming up regularly, this is a sign that your project lacks true project management leadership. Of course, both parties will need some time when the project starts to get processes running smoothly, but if you’re deep into a contract and still experiencing delays and setbacks, this is a sign that all is not well at your MSP. 



Quote for the day:

"The greatest thing is, at any moment, to be willing to give up who we are in order to become all that we can be." -- Max de Pree