Daily Tech Digest - May 22, 2022

6 business risks of shortchanging AI ethics and governance

When enterprises build AI systems that violate users’ privacy, that are biased, or that do harm to society, it changes how their own employees see them. Employees want to work at companies that share their values, says Steve Mills, chief AI ethics officer at Boston Consulting Group. “A high number of employees leave their jobs over ethical concerns,” he says. “If you want to attract technical talent, you have to worry about how you’re going to address these issues.” According to a survey released by Gartner earlier this year, employee attitudes toward work have changed since the start of the pandemic. Nearly two-thirds have rethought the place that work should have in their life, and more than half said that the pandemic has made them question the purpose of their day job and made them want to contribute more to society. And, last fall, a study by Blue Beyond Consulting and Future Workplace demonstrated the importance of values. According to the survey, 52% of workers would quit their job — and only 1 in 4 would accept one — if company values were not consistent with their values. 


The Never-Ending To-Do List of the DBA

Dealing with performance problems is usually the biggest post-implementation nightmare faced by DBAs. As such, the DBA must be able to proactively monitor the database environment and to make changes to data structures, SQL, application logic, and the DBMS subsystem itself in order to optimize performance. ... Applications and data are more and more required to be up and available 24 hours a day, seven days a week. Globalization and e-business are driving many organizations to implement no-downtime, around-the-clock systems. To manage in such an environment, the DBA must ensure data availability using non-disruptive administration tactics. ... Data, once stored in a database, is not static. The data may need to move from one database to another, from the DBMS into an external data set, or from the transaction processing system into the data warehouse. The DBA is responsible for efficiently and accurately moving data from place to place as dictated by organizational needs. ... The DBA must implement an appropriate database backup and recovery strategy for each database file based on data volatility and application availability requirements. 
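
The backup-and-recovery point lends itself to a small concrete illustration. Below is a minimal sketch of one way a DBA might automate nightly backups with a retention window, assuming a PostgreSQL database and the standard pg_dump utility; the database name, paths, and retention period are hypothetical placeholders, not a prescription from the article.

```python
"""Minimal nightly-backup sketch for a PostgreSQL database (illustrative only)."""
import subprocess
from datetime import datetime, timedelta
from pathlib import Path

BACKUP_DIR = Path("/var/backups/appdb")   # hypothetical backup location
RETENTION_DAYS = 14                        # driven by volatility and availability needs

def take_backup() -> Path:
    """Run pg_dump in custom format so pg_restore can do selective recovery later."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    target = BACKUP_DIR / f"appdb-{datetime.now():%Y%m%d-%H%M%S}.dump"
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", str(target), "appdb"],
        check=True,  # fail loudly so monitoring can alert the DBA
    )
    return target

def prune_old_backups() -> None:
    """Delete dumps older than the retention window."""
    cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)
    for dump in BACKUP_DIR.glob("*.dump"):
        if datetime.fromtimestamp(dump.stat().st_mtime) < cutoff:
            dump.unlink()

if __name__ == "__main__":
    take_backup()
    prune_old_backups()
```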


The brave, new world of work

The recent disruptions to the physical workplace have highlighted the importance of the human connections that people make on the job. In an excerpt from her new book, Redesigning Work, Lynda Gratton of the London Business School plays off an insight made nearly 50 years ago by sociologist Mark Granovetter. Granovetter famously discussed the difference between “weak” and “strong” social ties and showed that, when it came to finding jobs, weak ties (the loose acquaintances with whom you might occasionally exchange an email but don’t know well) could actually be quite powerful. Gratton applies this thinking to the way that networks are formed on the job, and to how people organize to get their work done, get new information, and innovate. She concludes that, especially in an age of remote and hybrid work, companies have to redouble their efforts to ensure that employees are able to establish and mine the power of weak ties. For Gratton, the ability to create such connections is a must-have. ... Now more than ever, people have to engage in the often challenging task of drawing boundaries. 


Most-wanted soft skills for IT pros: CIOs share their recruiting tips

Today’s IT organizations are called upon to drive and deliver significant transformation as technology seeps into all corners of a company and its products and services. With that, new and refined skills are necessary for successful technology leaders to influence business outcomes, innovation, and product development. Empathy, managing ambiguity, and collaborative influence drive innovation and are attributes we look for at MetaBank as we hire and develop top talent. Empathy lies at the core of successful problem-solving – viewing a problem from various angles leads to better solutions. ... Leaders often face challenging circumstances where they must quickly make a tough call with insufficient information. Making good choices in these situations can be critical for an organization’s success. It isn’t always easy to assess this in an interview, but behavioral interview questions and careful follow-up can help elicit specific examples from a candidate’s past work experience that may shed light on their judgment.


6 key steps to develop a data governance strategy

Much of the daily work of data governance occurs close to the data itself. The tasks that emerge from the governance strategy will often be in the hands of engineers, developers and administrators. But in too many organizations, these roles operate in silos separated by departmental or technical boundaries. To develop and apply a governance strategy that can consistently work across boundaries, some top-down influence is required. ... Horror stories of fines for breaching the EU's GDPR law on data privacy and protection might keep business leaders awake at night. This drastic approach may generate some interoffice memos or even unlock some budgetary constraints, but that would be a defensive reaction and possibly create resentment among stakeholders, which is no way to secure long-term good data governance. Instead, try this incremental approach, which should be much more attractive to executives: "Data governance is something we already do, but it's largely informal and we need to put some process around it. In doing so, we will meet regulatory demands, but we will also be a more functional, resilient organization."


8 Master Data Management Best Practices

When software development began embracing agile methodologies, its value to the business skyrocketed. That’s why we believe an MDM best practice is to embrace DataOps. DataOps acknowledges the interconnected nature of data engineering, data integration, data quality, and data security/privacy. It aims to help organizations rapidly deliver data that not only accelerates analytics but also enables analytics that were previously deemed impossible. DataOps provides a myriad of benefits, ranging from “faster cycle times” to “fewer defects and errors” to “happier customers.” By adopting DataOps, your organization will have in place the practices, processes, and technologies needed to accelerate the delivery of analytics. You’ll bring rigor to the development and management of data pipelines. And you’ll enable CI/CD across your data ecosystem.
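
As a small illustration of what that rigor can look like in practice, here is a hedged sketch of an automated data quality gate that could run in a CI pipeline before a master data load; the file path, columns, and allowed values are hypothetical, and pytest plus pandas is just one of many ways to implement such a check.

```python
"""Illustrative DataOps-style quality gate for a master data pipeline (pytest)."""
import pandas as pd
import pytest

# Hypothetical extract of the customer master data staged by the pipeline.
CUSTOMERS_CSV = "staging/customers.csv"

@pytest.fixture(scope="module")
def customers() -> pd.DataFrame:
    return pd.read_csv(CUSTOMERS_CSV)

def test_primary_key_is_unique(customers):
    # Duplicate master records are a classic MDM defect; fail the build early.
    assert customers["customer_id"].is_unique

def test_no_missing_critical_fields(customers):
    # Fields that downstream analytics depend on must be populated.
    for column in ("customer_id", "country_code", "created_at"):
        assert customers[column].notna().all(), f"nulls found in {column}"

def test_country_codes_are_valid(customers):
    # Simple referential check against an allowed list (a lookup table in practice).
    allowed = {"US", "CA", "GB", "DE", "FR"}
    assert set(customers["country_code"].unique()) <= allowed
```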


5 tips for building your innovation ecosystem

A common mistake when looking for innovative technology vendors is to look at companies touted as the most innovative or to go with best-of-breed, on the assumption that innovation is baked into their roadmap. It’s likely that neither approach will net you the innovation you’re looking for. Best-of-breed works well for internal IT such as your ERP or CRM, or anything under the covers in terms of client-facing solutions, but when it comes to your value proposition and differentiation you need to look elsewhere. In this case, the best-of-breed tools become the table stakes that you utilize as the foundation for your ecosystem or industry-cloud and your core IP comprises your own IP plus that of those innovative players that you’ve developed unique relationships with. The “most innovative” lists you find on the internet are often based on public or editor opinion and end up surfacing the usual suspects with strong brand awareness. While they may be leading players in the market, this does not guarantee continued innovation. If you do look at the “most innovative” lists, be sure to check the methodology involved and see how it fits your own definition and expectations for what constitutes innovation.


Zen and the Art of Data Maintenance: All Data is Suffering (Part 1)

Data can be used for many types of nefarious activities. For instance, an article in Wired described how a website stored video data regarding child sex abuse acts and how its operators used this data in threatening, destructive ways, leading to all sorts of suffering, including suicide attempts.[i] We are often bombarded with social media data (both factual information and misinformation) designed to hold our attention through emotional disturbances such as fear. These are generally intended to elicit reactions or control behavior regarding purchasing, voting, mindshare, or almost any other matter. Have you suffered with data? How? Data is the plural form of the Latin word ‘datum’, which Merriam-Webster defines as ‘something given or admitted as a basis for reasoning or inference’. Thus, everything we receive through our senses could be considered data. It could be numbers, text, things we see, hear, or feel. But how could all data be suffering? What about positive data that communicates increased sales, better health, positive comments, data showing helpful contributions, and so on? 


The Metamorphosis of Data Governance: What it Means Today

There’s nothing more galvanizing to an organization’s board of directors—or the C-Level executives who directly answer to it—than stiff monetary penalties for noncompliance with regulations. Zoom reached a settlement of almost $100 million for such issues. Even before this particular example, data governance was inexorably advancing to its current conception as a means of facilitating access control, data privacy, and security. “These are big ticket fines that are coming up,” Ganesan remarked. “Boards are saying we need to have guardrails around our data. Now, what has changed in the last few years is that part of governance, which is security and privacy, is going from being passive to more active.” Such activation affects not only what data governance focuses on, but also what the specific policies it comprises focus on. The regulatory, risk mitigation side of data governance is currently being emphasized. It’s no longer adequate to have guidelines or even rules on paper about how data are accessed—top solutions in this space can propel those policies into source systems to ensure adherence when properly implemented. 


Five Steps Every Enterprise Architect Should Take for Better Presentation

Architects invariably care about the material they’re discussing. The mistake is believing or assuming that the audience cares as intently. They may. They may already be familiar with the content. This may simply be a status update on the latest digital transformation project and everyone is knowledgeable about the subject matter. ... Generally speaking, the audience isn’t going to automatically care as much about the material as does the Architect presenting. The key to this step is usually the hardest of all the points made in this article: empathy. Thinking about what you would do or what you would be interested in if you were the listener is not empathy. That’s simply you projecting your own headspace onto the audience. Trying to understand how that person is receiving your information is the key. Why do they care? What aspects will they be interested in? To do this requires knowing in advance who you will be speaking to and knowing their background, their education, their professional position, their issues or problems with the subject at hand… knowing, in effect, through what lens they will be viewing your content.



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson

Daily Tech Digest - May 21, 2022

How to make the consultant’s edge your own

What actually works, should the organization be led by a braver sort of leadership team, is a change in the culture of management at all levels. The change is that when something bad happens, everyone in the organization, from the board of directors on down, assumes the root cause is systemic, not a person who has screwed up. In the case of my client’s balance sheet fiasco, the root cause turned out to be everyone doing exactly what the situation they faced Right Now required. What had happened was that a badly delayed system implementation, coupled with the strategic decision to freeze the legacy system being replaced, led to a cascade of PTFs (Permanent Temporary Fixes to the uninitiated) to get through month-end closes. The PTFs, being temporary, weren’t tested as thoroughly as production code. But being permanent, they accumulated and sometimes conflicted with one another, requiring more PTFs each month to get everything to process. The result: Month ends did close, nobody had to tell the new system implementation’s executive sponsor about the PTFs and the risks they entailed, and nobody had to acknowledge that freezing the legacy system had turned out to be a bad call.


SBOM Everywhere: The OpenSSF Plan for SBOMs

The SBOM Everywhere working group will focus on ensuring that existing SBOM formats match documented use cases and developing high-quality open source tools to create SBOM documents. Although some of this tooling exists today, more tooling will need to be built. The working group has also been tasked with developing awareness and education campaigns to drive SBOM adoption across open source, government and commercial industry ecosystems. Notably, the U.S. federal government has taken a proactive stance on requiring the use of SBOMs for all software consumed and produced by government agencies. The Executive Order on Improving the Nation’s Cybersecurity cites the increased frequency and sophistication of cyberattacks as a catalyst for the public and private sectors to join forces to better secure software supply chains. Among the mandates is the requirement to use SBOMs to enhance software supply chain security. For government agencies and the commercial software vendors who partner and sell to them, the SBOM-fueled future is already here.


Cybersecurity pros spend hours on issues that should have been prevented

“Security is everyone’s job now, and so disconnects between security and development often cause unnecessary delays and manual work,” said Invicti chief product officer Sonali Shah. “Organizations can ease stressful overwork and related problems for security and DevOps teams by ensuring that security is built into the software development lifecycle, or SDLC, and is not an afterthought,” Shah added. “Application security scanning should be automated both while the software is being developed and once it is in production. By using tools that offer short scan times, accurate findings prioritized by contextualized risk and integrations into development workflows, organizations can shift security left and right while efficiently delivering secure code.” When it comes to software development, innovation and security don’t need to compete, according to Shah. Rather, they’re inherently linked. “When you have a proper security strategy in place, DevOps teams are empowered to build security into the very architecture of application design,” Shah said.


SmartNICs power the cloud, are enterprise datacenters next?

For all the potential SmartNICs have to offer, there remain substantial barriers to overcome. The high price of SmartNICs relative to standard NICs is one of many. Networking vendors have been chasing this kind of I/O offload functionality for years, with things like TCP offload engines, Kerravala said. "That never really caught on and cost was the primary factor there." Another challenge for SmartNIC vendors is the operational complexity associated with managing a fleet of SmartNICs distributed across a datacenter or the edge. "There is a risk here of complexity getting to the point where none of this stuff is really usable," he said, comparing the SmartNIC market to the early days of virtualization. "People were starting to deploy virtual machines like crazy, but then they had so many virtual machines they couldn't manage them," he said. "It wasn't until VMware built vCenter, that companies had one unified control plane for all their virtual machines. We don't really have that on the SmartNIC side." That lack of centralized management could make widespread deployment in environments that don't have the resources commanded by the major hyperscalers a tough sell.


Fantastic Open Source Cybersecurity Tools and Where to Find Them

Organizations benefit greatly when threat intelligence is crowdsourced and shared across the community, said Sanjay Raja, VP of product at Gurucul. "This can provide immediate protection or detection capabilities," he said, "while reducing the dependency on vendors, who often do not provide updates to systems for weeks or even months." For example, CISA has an Automated Indicator Sharing platform. Meanwhile in Canada, there's the Canadian Cyber Threat Exchange. "These platforms allow for the real-time exchange and consumption of automated, machine-readable feeds," explained Isabelle Hertanto, principal research director in the security and privacy practice at Info-Tech Research Group. This steady stream of indicators of compromise can help security teams respond to network security threats, she told Data Center Knowledge. In fact, the problem isn't the lack of open source threat intelligence data, but an overabundance, she said. To help data center security teams cope, commercial vendors are developing AI-powered solutions to aggregate and process all this information. "We see this capability built into next generation commercial firewalls and new SIEM and SOAR platforms," Hertanto said.
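
To make the "machine-readable feeds" point concrete, here is a minimal sketch of consuming a JSON indicator feed and turning it into a blocklist. The feed URL and field names are hypothetical; real platforms such as CISA AIS typically expose indicators through standardized formats (for example, STIX over TAXII) rather than this simplified shape.

```python
"""Illustrative consumption of a machine-readable threat intelligence feed."""
import json
from urllib.request import urlopen

FEED_URL = "https://intel.example.org/feeds/indicators.json"  # hypothetical feed

def fetch_indicators(url: str) -> list[dict]:
    """Download the feed and return its list of indicator records."""
    with urlopen(url, timeout=30) as response:
        return json.load(response)["indicators"]

def build_ip_blocklist(indicators: list[dict]) -> set[str]:
    """Keep only high-confidence IP indicators for an automated block rule."""
    return {
        item["value"]
        for item in indicators
        if item.get("type") == "ipv4" and item.get("confidence", 0) >= 80
    }

if __name__ == "__main__":
    blocklist = build_ip_blocklist(fetch_indicators(FEED_URL))
    print(f"{len(blocklist)} IPs ready to push to the firewall or SIEM")
```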


Living better with algorithms

Together with Shah and other collaborators, Cen has worked on a wide range of projects during her time at LIDS, many of which tie directly to her interest in the interactions between humans and computational systems. In one such project, Cen studies options for regulating social media. Her recent work provides a method for translating human-readable regulations into implementable audits. To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see? Designing an auditing procedure is difficult in large part because there are so many stakeholders when it comes to social media. Auditors have to inspect the algorithm without accessing sensitive user data. They also have to work around tricky trade secrets, which can prevent them from getting a close look at the very algorithm that they are auditing because these algorithms are legally protected.


CFO perspectives on leading agile change

In an agile organization, leadership-level priorities cascade down to inform every part of the business. For this reason, CFOs talked extensively about the importance of setting up a prioritization framework that is as objective as possible. Many participants mentioned that it can be challenging to work out priorities through the QBR process, because different teams lack an institutional mechanism through which to weigh different work segments against one another and prioritize between them. Most CFOs agreed that some degree of direction from the top is required in this area. One CFO said he thinks of his organization as a “prioritization jar”: leadership puts big stones in the jar first and then fills in the spaces with sand. These prioritization “stones” might be six key projects identified by management, or they might be 20 key initiatives chosen through a mixture of leadership direction and feedback from tribes. A second challenge emerged regarding shifting resources among teams or clusters responsible for individual initiatives. When asked what they would do if they had a magic wand, several CFOs said they need better ways to reallocate resources at short notice. 


Friend Or Foe: Delving Into Edge Computing & Cloud Computing

One of the most significant features of edge computing is decentralization. Edge computing allows resources and communication technologies to be used via a single computing infrastructure and transmission channel. Edge computing is a technology that optimizes computational needs by utilizing the cloud at its edge. When data is gathered or a user takes a particular action, real-time execution is possible wherever it is needed. The two most significant advantages of edge computing are increased performance and lower operational expenses. ... The first thing to realize is that cloud computing and edge computing are not rival technologies. They aren’t different solutions to the same problem; rather, they’re two distinct ways of addressing particular problems. Cloud computing is ideal for scalable applications that must be ramped up or down depending on demand. Web servers, for example, can request extra resources during periods of heavy usage to ensure smooth service without incurring any long-term hardware expenses. 


Why AI and autonomous response are crucial for cybersecurity

Remote work has become the norm, and outside the office walls, employees are letting down their personal security defenses. Cyber risks introduced by the supply chain via third parties are still a major vulnerability, so organizations need to think about not only their defenses but those of their suppliers to protect their priority assets and information from infiltration and exploitation. And that’s not all. The ongoing Russia-Ukraine conflict has provided more opportunities for attackers, and social engineering attacks have ramped up tenfold and become increasingly sophisticated and targeted. Both play into the fears and uncertainties of the general population. Many security industry experts have warned about future threat actors leveraging AI to launch cyber-attacks, using intelligence to optimize routes and hasten their attacks throughout an organization’s digital infrastructure. “In the modern security climate, organizations must accept that it is highly likely that attackers could breach their perimeter defenses,” says Steve Lorimer, group privacy and information security officer at Hexagon.


Service Meshes Are on the Rise – But Greater Understanding and Experience Are Required

We explored the factors influencing people’s choices by asking which features and capabilities drive their organization’s adoption of service mesh. Security is a top concern, with 79% putting their faith in techniques such as mTLS authentication of servers and clients during transactions to help reduce the risk of a successful attack. Observability came a close second behind security, at 78%. As cloud infrastructure has grown in importance and complexity, we’ve seen a growing interest in observability to understand the health of systems. Observability entails collecting logs, metrics, and traces for analysis. Traffic management came third (62%). This is a key consideration given the complexity of cloud native that a service mesh is expected to help mitigate. ... Potential issues here include latency, lack of bandwidth, security incidents, the heterogeneous composition of the cloud environment, and changes in architecture or topology. Respondents want a service mesh to overcome these networking and in-service communications challenges.



Quote for the day:

"To command is to serve : nothing more and nothing less." -- Andre Marlaux

Daily Tech Digest - May 19, 2022

Five areas where EA matters more than ever

While resiliency has always been a focus of EA, “the focus now is on proactive resiliency” to better anticipate future risks, says Barnett. He recommends expanding EA to map not only a business’ technology assets but all its processes that rely on vendors as well as part-time and contract workers who may become unavailable due to pandemics, sanctions, natural disasters, or other disruptions. Businesses are also looking to use EA to anticipate problems and plan for capabilities such as workload balancing or on-demand computing to respond to surges in demand or system outages, Barnett says. That requires enterprise architects to work more closely with risk management and security staff to understand dependencies among the components in the architecture to better understand the likelihood and severity of disruptions and formulate plans to cope with them. EA can help, for example, by describing which cloud providers share the same network connections, or which shippers rely on the same ports to ensure that a “backup” provider won’t suffer the same outage as a primary provider, he says.


Build or Buy? Developer Productivity vs. Flexibility

To make things a bit more concrete, let’s look at a very simple example that shows the positives of both sides. Developers are the primary audience for InfluxData’s InfluxDB, a time series database. It provides both client libraries and direct access to the database via API to give developers an option that works best for their use case. The client libraries provide best practices out of the box so developers can get started reading and writing data quickly. Things like batching requests, retrying failed requests and handling asynchronous requests are taken care of so the developer doesn’t have to think about them. Using the client libraries makes sense for developers looking to test InfluxDB or to quickly integrate it with their application for storing time series data. On the other hand, developers who need more flexibility and control can choose to interact directly with InfluxDB’s API. Some companies have lengthy processes for adding external dependencies or already have existing internal libraries for handling communication between services, so the client libraries aren’t an option.
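
For a sense of what the productivity side looks like, here is a minimal sketch using the official InfluxDB Python client library (the influxdb-client package for InfluxDB 2.x) to write and read a point. The URL, token, org, and bucket values are placeholders to substitute with your own; the same write could be done by POSTing line protocol directly to the HTTP API when more control is needed.

```python
"""Illustrative write/read round trip with the InfluxDB Python client library."""
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details for a hypothetical InfluxDB 2.x instance.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# The client library handles encoding, batching options, and retries for us.
write_api = client.write_api(write_options=SYNCHRONOUS)
point = Point("cpu").tag("host", "server01").field("usage", 63.2)
write_api.write(bucket="metrics", record=point)

# Query the last hour of data back with Flux.
tables = client.query_api().query('from(bucket:"metrics") |> range(start: -1h)')
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_value())

client.close()
```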


Enterprises shore up supply chain resilience with data

“Digital dialogue between trading partners is crucial, not just for those two [direct trading partners], but also for the downstream effects,” he says, adding that when it comes to supply chains and procurement, SAP’s focus is on helping its customers ensure that the data “flows to the right trading partners so that they can make proactive decisions in moving assets, logistics and doing the right purchasing”. He further adds that where supply chain considerations have traditionally been built around “cost, control and compliance”, companies are now looking to incorporate “connectivity, conscience and convenience” alongside those other factors. On the last point regarding convenience, Henrik says this refers to having “information at my fingertips when I need it”, meaning it is important for companies to not only collect data on their operations, but to structure it in a way that drives actionable insights. “Once you have actionable insights from the data, then real change happens, and that’s really what companies are looking for,” he says.


Ransomware is already out of control. AI-powered ransomware could be 'terrifying.'

If attackers were able to automate ransomware using AI and machine learning, that would allow them to go after an even wider range of targets, according to Driver. That could include smaller organizations, or even individuals. "It's not worth their effort if it takes them hours and hours to do it manually. But if they can automate it, absolutely," Driver said. Ultimately, “it's terrifying.” The prediction that AI is coming to cybercrime in a big way is not brand new, but it still has yet to manifest, Hyppönen said. Most likely, that's because the ability to compete with deep-pocketed enterprise tech vendors to bring in the necessary talent has always been a constraint in the past. The huge success of the ransomware gangs in 2021, predominantly Russia-affiliated groups, would appear to have changed that, according to Hyppönen. Chainalysis reports it tracked ransomware payments totaling $602 million in 2021, led by Conti's $182 million. The ransomware group that struck the Colonial Pipeline, DarkSide, earned $82 million last year, and three other groups brought in more than $30 million in that single year, according to Chainalysis.


Will quantum computing ever be available off-the-shelf?

Quantum computing will never exist in a vacuum, and to add value, quantum computing components need to be seamlessly integrated with the rest of the enterprise technology stack. This includes HPC clusters, ETL processes, data warehouses, S3 buckets, security policies, etc. Data will need to be processed by classical computers both before and after it runs through the quantum algorithms. This infrastructure is important: any speedup from quantum computing can easily be offset by mundane problems like disorganized data warehousing and sub-optimal ETL processes. Expecting a quantum algorithm to deliver an advantage with a shoddy classical infrastructure around it is like expecting a flight to save you time when you don’t have a car to take you to and from the airport. These same infrastructure issues often arise in many present-day machine learning (ML) use cases. There may be many off-the-shelf tools available, but any useful ML application will ultimately be unique to the model’s objective and the data used to train it. 


Addressing the skills shortage with an assertive approach to cybersecurity

All too often, businesses do not see investing in security strategy and technologies as a priority – until an attack occurs. It might be the assumption that only the wealthiest industries or those with highly classified information would require the most up-to-date cybersecurity tactics and technology, but this is simply not the case. All organizations need to adopt a proactive approach to security, rather than having to deal with the aftermath of an incident. By doing so, companies and organizations can significantly mitigate any potential damage. Traditionally, security awareness may have been restricted to specific roles, meaning only a select few people having the training and understanding required to deal with cyber-attacks. Nowadays every role, at every level, in all industries must have some knowledge to secure themselves and their work against breaches. Training should be made available for all employees to increase their awareness, and organizations need to prioritize investment in secure, up-to-date technologies to ensure their protection. 


Easily Optimize Deep Learning with 8-Bit Quantization

There are two challenges with quantization: how to do it easily (in the past, it has been a time-consuming process) and how to maintain accuracy. Both of these challenges are addressed by the Neural Network Compression Framework (NNCF). NNCF is a suite of advanced algorithms for optimizing machine learning and deep learning models for inference in the Intel® Distribution of OpenVINO™ toolkit. NNCF works with models from PyTorch and TensorFlow. One of the main features of NNCF is 8-bit uniform quantization, using recent academic research to create accurate and fast models. The technique we will be covering in this article is called quantization-aware training (QAT). This method simulates the quantization of weights and activations while the model is being trained, so that operations in the model can be treated as 8-bit operations at inference time. Fine-tuning is used to restore the accuracy drop from quantization. QAT has better accuracy and reliability than carrying out quantization after the model has been trained. Unlike other optimization tools, NNCF does not require users to change the model manually or learn how the quantization works.
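
As a rough sketch of how QAT with NNCF can look in PyTorch, the flow is: describe the quantization algorithm in a config, wrap the FP32 model so 8-bit weights and activations are simulated, then fine-tune as usual. The exact API names have shifted between NNCF releases, so treat the calls below as indicative rather than definitive, and the model and dataset here are stand-ins for your own.

```python
"""Illustrative NNCF quantization-aware training setup for a PyTorch model."""
import torch
import torchvision
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

# Stand-in data loader; NNCF uses it to initialize quantizer ranges.
train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.FakeData(size=256, transform=torchvision.transforms.ToTensor()),
    batch_size=32,
)

fp32_model = torchvision.models.resnet18()  # your trained FP32 model in practice

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},  # 8-bit uniform quantization
})
nncf_config = register_default_init_args(nncf_config, train_loader)

# The wrapped model simulates INT8 weights/activations during training (QAT).
compression_ctrl, model = create_compressed_model(fp32_model, nncf_config)

# ...run the usual fine-tuning loop on `model` to recover any accuracy drop...

# Export to ONNX for inference with the OpenVINO toolkit.
compression_ctrl.export_model("resnet18_int8.onnx")
```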


Apache Druid: A Real-Time Database for Modern Analytics

With its distributed and elastic architecture, Apache Druid prefetches data from a shared data layer into an infinite cluster of data servers. Because there’s no need to move data and you’re providing more flexibility to scale, this kind of architecture performs quicker as opposed to a decoupled query engine such as a cloud data warehouse. Additionally, Apache Druid can process more queries per core by leveraging automatic, multilevel indexing that is built into its data format. This includes a global index, data dictionary and bitmap index, which goes beyond a standard OLAP columnar format and provides faster data crunching by maximizing CPU cycles. ... Apache Druid provides a smarter and more economical choice because of its optimized storage and query engine that decreases CPU usage. “Optimized” is the keyword here; you want your infrastructure to serve more queries in the same amount of time rather than having your database read data it doesn’t need to.
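
To ground this, here is a hedged sketch of querying Druid from Python through its SQL HTTP endpoint; the datasource name and columns are hypothetical, and only the /druid/v2/sql endpoint itself is standard Druid. The filtered, time-bucketed aggregation is the kind of query the bitmap and dictionary indexes described above are built to serve cheaply.

```python
"""Illustrative time-series aggregation against Apache Druid's SQL endpoint."""
import requests

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"  # router/broker SQL endpoint

# Hypothetical 'web_events' datasource with country and user_id dimensions.
QUERY = """
SELECT
  TIME_FLOOR(__time, 'PT1M') AS minute,
  country,
  COUNT(*) AS events,
  APPROX_COUNT_DISTINCT(user_id) AS unique_users
FROM web_events
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
  AND country IN ('US', 'DE')
GROUP BY 1, 2
ORDER BY minute
"""

response = requests.post(DRUID_SQL_URL, json={"query": QUERY}, timeout=30)
response.raise_for_status()
for row in response.json():
    print(row["minute"], row["country"], row["events"], row["unique_users"])
```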


Compete to Communicate on Cybersecurity

At its core, cybersecurity depends on communication. Outdated security policies that are poorly communicated are equally as dangerous as substandard software code and other flawed technical features. Changing human behavior in digital security falls on the technology companies themselves, which need to improve explaining digital security issues to their employees and customers. In turn, tech companies can help employees and customers understand what they can do to make things better and why they need to be active participants in helping to defend themselves, our shared data and digital infrastructure. Instead of competing on the lowest price or claims of best service, how do we incentivize service vendors, cloud providers, device manufacturers and other relevant technology firms to pay more attention to how they communicate with users around security? Rules and regulations? Possibly. Improving how companies communicate and train on security? Absolutely. Shaping a marketplace where tech companies compete more intensively for business on the technical and training elements of security? Definitely.


A philosopher's guide to messy transformations

In the domain of expertise, people base their understanding of transformation on practical insight into the history and culture of the company. A question from an attendee on the panel I conducted illustrated this nicely: “How do you get an organization with a legacy of being extremely risk averse to embrace agility, which can be perceived as a more risky, trial-and-error approach?” The question acknowledges and accepts that the company needs to embrace agility but demonstrates neither insight nor interest as to why it needs to do so. Whether the questioner trusts senior management’s decision to embrace agility, or she has other reasons for ignoring the “why,” it is obvious that she wants to know about the “how.” Too often leaders forget about the how. And that can be a costly mistake. ... “When you have an organization that has been organically growing over 90 years, then the culture is embedded in the language and the behaviors of the people working in the organization,” he said. The strength of legacy companies is that their culture is defined by conversations and behaviors that have been evolving for decades. 



Quote for the day:

"The great leaders are like best conductors. They reach beyond the notes to reach the magic in the players." -- Blaine Lee