Daily Tech Digest - October 15, 2021

9 common risk management failures and how to avoid them

Known for decades as the hub of technical innovation, Silicon Valley has evolved into a bastion of toxic "bro culture," according to Alla Valente, senior analyst at Forrester Research. She also cited other forms of toxic work culture that take hold when companies fail to mitigate risks that can alienate employees and customers. Facebook's lukewarm response to the Cambridge Analytica scandal, Valente argued, has significantly eroded its trustworthiness and market potential. Wells Fargo's executives turning a blind eye to the warning signs of the bank's predatory sales practices toward its customers "was a strategic decision," Valente said. "It could have been fixed, but fixing culture is never easy." ... Efficiency and resiliency sit at opposite ends of the spectrum, Matlock said. Greater efficiency can lead to greater profits when things go well. The auto industry realized significant savings by creating a supply chain of thousands of third-party suppliers spread across multiple tiers. But during the pandemic, there were massive disruptions in supply chains that lacked resiliency.


Mandating a Zero-Trust Approach for Software Supply Chains

SBOMs are a great first step towards supply-chain transparency, but there is more that needs to be done. As an analogy, many equate the SBOM to the ingredient labels on food. With that perspective, we can see parallels between our software supply chain and the food supply chain. Consequently, the need for end-to-end provenance and resistance against tampering should be clear. For this reason, I am encouraged by Google’s proposed Supply-Chain Levels for Software Artifacts (SLSA) framework, which moves us towards a common language that increases the transparency and integrity of our software supply chain. However, for some software that performs critical functions (e.g., security), food is an inadequate comparison. It may be more apt to compare this type of software to medicine. This analogy brings forth additional considerations. For example, the drug-facts label on medicines includes not just the ingredients, but also usage guidelines and contraindications (i.e., what to look for in case something goes wrong). Furthermore, as we’ve all seen with the COVID-19 vaccine, medicines must undergo intensive review and testing before they are approved for use.


Data Consistency Between Microservices

The root of the problem is querying data from other boundaries that will be immediately inconsistent the moment it’s returned, just as in my first example without a serializable transaction. If you’re making HTTP or gRPC calls to other services to retrieve data that you require to perform business logic, you’re dealing with inconsistent data. If you store a local cache copy that’s eventually consistent, you’re dealing with inconsistent data. Is having inconsistent data an issue? Go ask the business. If it is, then you need to bring all the relevant data within the same boundary. There were two pieces of data we ultimately needed. We required the Quantity on Hand from the warehouse. In reality, in the distribution/warehouse domain you don’t rely on the “Quantity on Hand”. When dealing with physical goods, the point of truth is what is actually in the warehouse, not what the software/database states. ... The system is eventually consistent with the real world. Because of this, Sales has the concept of Available to Promise (ATP), which is a business function for customer order promising based on what’s been ordered but not yet shipped, purchased but not yet received, etc.
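
To make Available to Promise concrete, here is a minimal sketch, with hypothetical field names rather than the author's actual model, of how a Sales boundary could compute ATP from data it owns instead of querying the warehouse synchronously:

```python
from dataclasses import dataclass

@dataclass
class ProductAvailability:
    """Data the Sales boundary owns locally (kept up to date via events)."""
    quantity_on_hand: int       # last known physical count from the warehouse
    on_order_not_received: int  # purchased from suppliers, not yet received
    ordered_not_shipped: int    # sold to customers, not yet shipped

    def available_to_promise(self) -> int:
        """ATP: what Sales can safely promise to new customer orders."""
        return (self.quantity_on_hand
                + self.on_order_not_received
                - self.ordered_not_shipped)

# Example: 100 on hand, 50 inbound from a supplier, 30 already promised.
availability = ProductAvailability(100, 50, 30)
print(availability.available_to_promise())  # 120
```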


5 ways CIOs are redefining teamwork for a hybrid world

Most CIOs face similar grand experiments as hybrid work environments are becoming permanent. They are evaluating which team structures have been successful remotely and are looking to replicate them, while balancing innovation, collaboration, mentorship, and culture transfer, which have traditionally been done in person. Some 30% of IT leaders surveyed by IDC say they prefer an “online-first” policy for collaboration, and practices that started during the pandemic will likely continue indefinitely. While many workers say they have been more productive working remotely, that doesn’t always equate to better teamwork. “We’ve squeezed a lot of innovation out of necessity, but some of that serendipitous innovation that occurs through creative collision has been less,” says Aaron De Smet, senior partner at McKinsey and Co., who spoke at the IDG Future of Work Summit in October. “Companies have started to get their heads around a hybrid workforce, but I don’t think they’ve cracked what hybrid interactions look like. More of the work people do is now part of a cross-functional team. It’s part of a collaborative effort … ” De Smet says. 


3 Signs You’re Ready For A Machine Learning Job When You’ve Come From Another Field

Your sense of direction is what lets you know where you are, or which way to go, even when you find yourself in unfamiliar territory. Religious people often argue that a lack of direction is the result of not having a purpose, and to some degree, I agree. You shouldn’t have to wait for a sense of purpose before you can be happy. Imagine how miserable life would be if that were the case! When a person begins to question their sense of direction in regards to machine learning, it usually has to do with a lack of appreciation for how far they’ve come. As you learn more, it’s harder to see the small increments by which you improve, and this may feel as though you are no longer learning — especially when you compare it to a time when you were learning something new every day. If you meet the generic requirements of the machine learning role you want, then it’s time to apply for a job that challenges you in ways that working alone cannot. Start applying.


Google Opens Up Spanner Database With PostgreSQL Interface

The integration of PostgreSQL into Cloud Spanner is deep; it is not just some conversion overlay. At the database schema level, the PostgreSQL interface for Cloud Spanner supports native PostgreSQL data types and its data definition language (DDL), which is the syntax for creating users, tables, and indexes for databases. The upshot is that a schema written for the PostgreSQL interface for Cloud Spanner will port to and run on any real PostgreSQL database, which means customers are not trapped on Google Cloud if they use this service in production and want to switch. But customers do have to be careful. Spanner features, like table interleaving, have been added to the PostgreSQL layer because they are important features in Spanner. You can get stuck because of these. ... The PostgreSQL interface for Cloud Spanner compiles PostgreSQL queries down to Spanner’s native distributed query processing and storage primitives and does not just support the PostgreSQL wire protocol, which allows clients and myriad third-party analytics tools to interact with the PostgreSQL database.
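
The portability caveat is easiest to see in DDL. In the sketch below, the first schema is standard PostgreSQL; the second adds an interleaving clause in the spirit of Spanner's table-interleaving feature (the exact syntax accepted by the PostgreSQL interface may differ from this illustration), which is the kind of extension that no longer ports to a vanilla PostgreSQL server:

```python
# Portable schema: standard PostgreSQL DDL, runs anywhere.
portable_ddl = """
CREATE TABLE users  (user_id bigint PRIMARY KEY, name text);
CREATE TABLE orders (user_id bigint, order_id bigint, total numeric,
                     PRIMARY KEY (user_id, order_id));
"""

# Spanner-flavored schema: interleaving co-locates each user's orders with
# the parent user row for performance. The clause is a Spanner feature, not
# standard PostgreSQL, so this version no longer ports cleanly.
spanner_flavored_ddl = """
CREATE TABLE orders (user_id bigint, order_id bigint, total numeric,
                     PRIMARY KEY (user_id, order_id))
INTERLEAVE IN PARENT users ON DELETE CASCADE;
"""

print(portable_ddl)
print(spanner_flavored_ddl)
```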


The pursuit of transformation: Opportunities and pitfalls

Some transformations fail when there is a lack of alignment between the company’s strategy and its employees, customers and partners. There is a famous fable of an ant trying its hardest to change its trajectory but not realising that it is sitting on an elephant that’s going in the opposite direction. No matter how hard the little ant tries, it will not reach its destination as long as the elephant is not in alignment. All organizations have a culture and an emotional ethos, which, if left unaddressed, can sabotage the move to change. When Satya Nadella took over Microsoft in 2014, he had to first restructure the company to eliminate destructive internal competition so that all departments could focus on a common services goal. The result is a two-and-a-half-fold growth in the stock price over five years. On the other hand, when GE decided to launch GE Digital as a transformation vehicle, it did not release the subsidiary from the obligation of quarterly revenue and profitability targets. In addition, the subsidiary had to continue to meet GE’s software needs across business units, leaving it without the bandwidth to focus on true innovation and transformation.


How Machine Learning can be used with Blockchain Technology?

Machine learning algorithms have powerful learning capabilities. These capabilities can be applied to the blockchain to make the chain smarter than before. This integration can help improve the security of the blockchain's distributed ledger. The computational power of ML can also be used to reduce the time taken to find the golden nonce, and ML can be used to make data-sharing routes better. Further, we can build better machine learning models by using the decentralized data architecture of blockchain technology. Machine learning models can use the data stored in the blockchain network for prediction or for data analysis. Let's take the example of a smart blockchain-based application in which data is collected from different sources such as sensors, smart devices and IoT devices; the blockchain works as an integral part of the application, and a machine learning model can be applied to that data for real-time analytics or predictions.
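
As a rough illustration of the "golden nonce" mentioned above, proof-of-work mining boils down to a brute-force search like the sketch below (greatly simplified; real chains hash full block headers against much harder difficulty targets):

```python
import hashlib

def find_golden_nonce(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose SHA-256 hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the "golden nonce" for this block data
        nonce += 1

print(find_golden_nonce("block: tx1,tx2,tx3"))
```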


The tech recruiter – an unsung hero

The idiom ‘your first impression is your last impression’ holds true for recruiters. They have one opportunity to deliver that perfect elevator pitch to the candidate – convince them why your company provides the best opportunity for them – in the time it takes to ride an elevator. Landing the right impression will determine the candidate’s unalterable opinion and employment decision. To understand this better, let’s take a quick look at the talent landscape today. With the digitalization mega trend sweeping across Tech Inc., organizations are scurrying to bolster their workforce across technology skill sets. Economic Times reported that Indian IT firms plan to hire over 150,000 freshers in FY22, and NASSCOM remarked that India’s five largest companies are likely to hire 96,000 employees this year. Although this will be a huge boost for the $150 billion industry, the demand-supply technology talent gap is only widening. Today, it is the candidates who hold the power and have the pleasure of the last word, as prolonged notice periods allow them time to hedge their bets with the four or five job offers they have on hand. And the more skilled they are, the more offers they juggle.


Better Scrum Through Essence

First, an anecdote from Jeff Sutherland – ‘The VP of one of the biggest banks in the country [USA] said recently: “I have 300 product owners and only three were delivering. The other 297 were not delivering.” And, he said, “I checked on the three that were delivering, on where they got the right way of working. They went to your class. So, you need to tell me what you are doing differently.” I said, “What we are doing differently is using Ivar’s work with Essence to really clarify to people what is working, what is not working, and what you need to do next to improve things.”’ By using Essence on many Scrum Master courses, we (Jeff, I and others) have also observed that of the 21 components of the original Scrum Essentials, the average team implements 1/3 of them well, 1/3 of them poorly and 1/3 of them not at all. With that level and quality of implementation, it is not surprising that we are not always seeing the full potential that Scrum offers. At the heart of getting better Scrum through Essence is the use of the Scrum Foundation, the Scrum Essentials and the Scrum Accelerator practices to play games, facilitate events and drive team improvements.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - October 14, 2021

You’ve migrated to the cloud, now what?

When thinking about cost governance, for example: in an on-premises infrastructure world, costs increase in increments when we purchase equipment, sign a vendor contract, or hire staff. These items are relatively easy to control because they require management approval and are usually subject to rigid oversight. In the cloud, however, an enterprise might have 500 virtual machines one minute and 5,000 a few minutes later when autoscaling functions engage to meet demand. Similar differences abound in security management and workload reliability. Technology leaders with legacy thinking are faced with harsh trade-offs between control and the benefits of cloud. These benefits can include agility, scalability, lower cost, and innovation, and they require heavy reliance on automation rather than on manual legacy processes. This means that the skillsets of an existing team may not be the same skillsets needed in the new cloud order. When writing a few lines of code supplants plugging in drives and running cable, team members often feel threatened. This can mean that success requires not only a different way of thinking but also a different style of leadership.


A new edge in global stability: What does space security entail for states?

Observers recently recentred the debate on a particular aspect of space security, namely anti-satellite (ASAT) technologies. The destruction of assets placed in outer space is high on the list of issues they identify as most pressing and requiring immediate action. As a result, some researchers and experts rolled out propositions to advance a transparent and cooperative approach, promoting the cessation of destructive operations both in outer space and launched from the ground. One approach was the development of ASAT Test Guidelines, first initiated in 2013 by a Group of Governmental Experts on Outer Space Transparency and Confidence-Building Measures. Another is through general calls to ban anti-satellite tests, not only to build a more comprehensive arms control regime for outer space and prevent the production of debris, but also to reduce threats to space security and regulate destabilising force. Many space community members threw their support behind a letter urging the United Nations (UN) General Assembly to take up for consideration a kinetic anti-satellite (ASAT) Test Ban Treaty for maintaining safe access to Earth orbit and decreasing concerns about collisions and the proliferation of space debris.


From data to knowledge and AI via graphs: Technology to support a knowledge-based economy

Leveraging connections in data is a prominent way of getting value out of data. Graph is the best way of leveraging connections, and graph databases excel at this. Graph databases make expressing and querying connections easy and powerful. This is why graph databases are a good match in use cases that require leveraging connections in data: anti-fraud, recommendations, customer 360 or master data management. From operational applications to analytics, and from data integration to machine learning, graph gives you an edge. There is a difference between graph analytics and graph databases. Graph analytics can be performed on any back end, as they only require reading graph-shaped data. Graph databases are databases with the ability to fully support both read and write, utilizing a graph data model, API and query language. Graph databases have been around for a long time, but the attention they have been getting since 2017 is off the charts. AWS and Microsoft moving into the domain, with Neptune and Cosmos DB respectively, exposed graph databases to a wider audience.
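
A toy sketch of "leveraging connections", using networkx in place of a real graph database and entirely invented data: accounts that transitively share a device or phone number surface as a possible fraud ring, something row-oriented queries would easily miss:

```python
import networkx as nx

# Accounts connected to the attributes they share (devices, phone numbers).
G = nx.Graph()
G.add_edges_from([
    ("acct_1", "device_A"), ("acct_2", "device_A"),   # two accounts, one device
    ("acct_2", "phone_X"),  ("acct_3", "phone_X"),    # chained through a phone
    ("acct_4", "device_B"),                           # unrelated account
])

# Traversing connections surfaces acct_1, acct_2 and acct_3 as transitively
# linked, a pattern typical of fraud-ring detection.
for component in nx.connected_components(G):
    accounts = {n for n in component if n.startswith("acct_")}
    if len(accounts) > 1:
        print("possible ring:", sorted(accounts))
```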


Observability Is the New Kubernetes

So where will observability head in the next two to five years? Fong-Jones said the next step is to support developers in adding instrumentation to code, expressing a need to strike a balance between easy, out-of-the-box instrumentation and annotations and customizations per use case. Suereth said that the OpenTelemetry project is heading in the next five years toward being useful to app developers, where instrumentation can be particularly expensive. “Target devs to provide observability for operations instead of the opposite. That’s done through stability and protocols.” He said that right now observability, like with Prometheus, is much more focused on operations than on developer languages. “I think we’re going to start to see applications providing observability as part of their own profile.” Suereth continued that the OpenTelemetry open source project has an objective of providing an API that returns all the traces, logs and metrics with a single pull, but it’s still to be determined how much data should be attached to it.
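
For a sense of what "adding instrumentation to code" means in practice, here is a minimal manual-instrumentation sketch with the OpenTelemetry Python SDK (the exporter choice, span name and attributes are illustrative; module layout can vary between SDK versions):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints finished spans to the console.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def handle_order(order_id: str) -> None:
    # Each unit of work becomes a span; attributes carry the context an
    # operator needs when debugging this code path in production.
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic goes here ...

handle_order("o-42")
```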


Data Exploration, Understanding, and Visualization

Many scaling methods require knowledge of critical values within the feature distribution and can cause data leakage. For example, a min-max scaler should be fit on training data only rather than the entire data set. When the minimum or maximum comes from the test set, you have introduced data leakage into the prediction process. ... The one-dimensional frequency plot shown below each distribution provides insight into the data. At first glance, this information looks redundant, but these plots directly address problems with representing data in histograms or as distributions. For example, when data is transformed into a histogram, the number of bins must be specified. It is difficult to decipher any pattern with too many bins, and with too few bins, the data distribution is lost. Moreover, representing data as a distribution assumes the data is continuous. When data is not continuous, this may indicate an error in the data or an important detail about the feature. The one-dimensional frequency plots fill in the gaps where histograms fail.
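
A minimal scikit-learn sketch of the leakage point: the scaler learns its minimum and maximum from the training split only and merely transforms the test split (the data here is random, purely for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(100, 3)
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # min/max learned from training data only
X_test_scaled = scaler.transform(X_test)        # reuse those values; never fit on test data

# Anti-pattern: calling scaler.fit(X) on the full data set leaks test-set
# minima/maxima into the preprocessing applied to the training data.
```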


DevSecOps: A Complete Guide

Both DevOps and DevSecOps use some degree of automation for simple tasks, freeing up time for developers to focus on more important aspects of the software. The concept of continuous processes applies to both practices, ensuring that the main objectives of development, operations, or security are met at each stage. This prevents bottlenecks in the pipeline and allows teams and technologies to work in unison. By working together, development, operations and security experts can write new applications and software updates in a timely fashion; monitor, log, and assess the codebase and security perimeter; and roll out new and improved code from a central repository. The main difference between DevOps and DevSecOps is quite clear. The latter incorporates a renewed focus on security that was previously overlooked by other methodologies and frameworks. In the past, the speed at which a new application could be created and released was emphasized, only for it to be stuck in a frustrating silo as cybersecurity experts reviewed the code and pointed out security vulnerabilities.


Skilling employees at scale: Changing the corporate learning paradigm

Corporate skilling programs have been founded on frameworks and models from the world of academia. Even when we have moved to digital learning platforms, the core tenets of these programs tend to remain the same. There is a standard course with finite learning material, a uniformly structured progression to navigate the learning, and the exact same assessment tool to measure progress. This uniformity and standardization have been the only approach for organizations to skill their employees at scale. As a result, organizations made a trade-off; content-heavy learning solutions which focus on knowledge dissemination but offer no way to measure the benefit and are limited to vanity metrics have become the norm for training the workforce at large. On the other hand, one-on-one coaching programs that promise results are exclusive only to the top one or two percent of the workforce, usually reserved for high-performing or high-potential employees. This is because such programs have a clear, measurable, and direct impact on behavioral change and job performance.


The Ultimate SaaS Security Posture Management (SSPM) Checklist

The capability of governance across the whole SaaS estate is both nuanced and complicated. While the native security controls of SaaS apps are often robust, it falls to the organization to ensure that all configurations are properly set — from global settings, to every user role and privilege. It only takes one unknowing SaaS admin changing a setting or sharing the wrong report for confidential company data to be exposed. The security team is burdened with knowing every app, user and configuration and ensuring they are all compliant with industry and company policy. Effective SSPM solutions address these pain points and provide full visibility into the company's SaaS security posture, checking for compliance with industry standards and company policy. Some solutions even offer the ability to remediate right from within the solution. As a result, an SSPM tool can significantly improve security-team efficiency and protect company data by automating the remediation of misconfigurations throughout the increasingly complex SaaS estate.


Why gamification is a great tool for employee engagement

Gamification is the beating heart of almost everything we touch in the digital world. With employees working remotely, this is the golden solution for employers. If applied in the right format, gaming can help create engagement in today's remote working environment, motivate personal growth, and encourage continuous improvement across an organization. ... In the connected workspace, gamification is essentially a method of providing simple goals and motivations that rely on digital rather than in-person engagement. At the same time, there is a tacit understanding between the game designer and the "player" that when these goals are aligned in a way that benefits the organization, the rewards often impact more than the bottom line. Engaged employees are a valuable part of defined business goals, and studies show that non-engagement impacts the bottom line. At the same time, motivated employees are more likely to want to make the customer experience as satisfying as possible, especially if there is internal recognition of a job well done.


10 Cloud Deficiencies You Should Know

What happens if your cloud environment goes down due to challenges outside your control? If your answer is “Eek, I don’t want to think about that!” you’re not prepared enough. Disaster preparedness plans can include running your workload across multiple availability zones or regions, or even in a multicloud environment. Make sure you have stakeholders (and back-up stakeholders) assigned to any manual tasks, such as switching to backup instances or relaunching from a system restore point. Remember, don’t wait until you’re faced with a worst-case scenario to test your response. Set up drills and trial runs to make sure your ducks are quacking in a row. One thing you might not imagine the cloud being is … boring. Without cloud automation, there are a lot of manual and tedious tasks to complete, and if you have 100 VMs, they’ll require constant monitoring, configuration and management 100 times over. You’ll need to think about configuring VMs according to your business requirements, setting up virtual networks, adjusting for scale and even managing availability and performance. 



Quote for the day:

"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward

Daily Tech Digest - October 13, 2021

Stop Using Microservices. Build Monoliths Instead.

Building out a microservices architecture takes longer than rolling the same features into a monolith. While an individual service is simple, a collection of services that interact is significantly more complex than a comparable monolith. Functions in a monolith can call any other public functions. But functions in a microservice are restricted to calling functions in the same microservice. This necessitates communication between services. Building APIs or a messaging system to facilitate this is non-trivial. Additionally, code duplication across microservices can’t be avoided. Where a monolith could define a module once and import it many times, a microservice is its own app — modules and libraries need to be defined in each. ... The luxury of assigning microservices to individual teams is reserved for large engineering departments. Although it’s one of the big touted benefits of the architecture, it’s only feasible when you have the engineering headcount to dedicate several engineers to each service. Reducing code scope for developers gives them the bandwidth to understand their code better and increases development speed.
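
The difference in call mechanics is easy to see in a short sketch (the service and module names are invented): what a monolith does with an in-process function call, a microservice does over the network, with the serialization, timeouts and error handling that implies:

```python
import requests  # third-party HTTP client

# --- Monolith: pricing is just another module in the same process. ---
def calculate_total(order: dict) -> float:  # in a monolith, imported from a sibling module
    return sum(item["price"] * item["qty"] for item in order["items"])

def price_order_monolith(order: dict) -> float:
    # Plain in-process function call: no serialization, no network failures.
    return calculate_total(order)

# --- Microservice: the same logic sits behind a hypothetical HTTP API. ---
def price_order_microservice(order: dict) -> float:
    # The caller now owns serialization, timeouts, retries and error handling.
    resp = requests.post("http://pricing-service/totals", json=order, timeout=2)
    resp.raise_for_status()
    return resp.json()["total"]

order = {"items": [{"price": 9.99, "qty": 2}]}
print(price_order_monolith(order))
```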


DevOps at the Crossroads: The Future of Software Delivery

Even though DevOps culture is becoming mainstream, organizations are struggling with the increasing tool sprawl, complexity and costs. These teams are also dealing with a staggering (and growing) number of tools to help them get their work done. This has caused toil, with no single workflow and lack of visibility. At Clear Ventures, the problems hit close to home as 17 of the 21 companies we had funded had software development workflows that needed to be managed efficiently. We found that some of the companies simply did not have the expertise to build out a DevOps workflow themselves. On the other hand, other companies added expertise over time as they scaled up but that required them to completely redo their workflows resulting in a lot of wasted code and effort. We also noticed that the engineering managers struggled with software quality and did not know how to measure productivity in the new remote/hybrid working environment. In addition, developers were getting frustrated with the lack of ability to customize without a significant burden on themselves. 
A stateful architecture was invented to solve these problems, where the database and cache are started in the same process as the applications. There are several databases in the Java world that we can run in embedded mode. One of them is Apache Ignite. Apache Ignite supports full in-memory mode (providing high-performance computing) as well as native persistence. This architecture requires an intelligent load balancer. It needs to know about the partition distribution to redirect each request to the node where the requested data is actually located. If a request is redirected to the wrong node, the data will come over the network from other nodes. Apache Ignite supports data collocation, which guarantees that information from different tables is stored on the same node if they share the same affinity key. The affinity key is set on table creation. For example, the Users table (cache in Ignite terms) has the primary key userId, and the Orders table may have an affinity key of userId.
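
A small sketch of the collocation idea, expressed through Ignite SQL via the pyignite thin client (the connection details and WITH options are illustrative and would be tuned for a real cluster):

```python
from pyignite import Client

client = Client()
client.connect("127.0.0.1", 10800)  # default Ignite thin-client port

# Users: partitioned by its primary key, user_id.
client.sql("""
    CREATE TABLE IF NOT EXISTS Users (
        user_id BIGINT PRIMARY KEY,
        name VARCHAR
    ) WITH "template=partitioned"
""")

# Orders: declaring user_id as the affinity key collocates each user's
# orders on the same node as the matching Users row, so lookups and joins
# by user stay local instead of crossing the network.
client.sql("""
    CREATE TABLE IF NOT EXISTS Orders (
        order_id BIGINT,
        user_id BIGINT,
        total DECIMAL,
        PRIMARY KEY (order_id, user_id)
    ) WITH "template=partitioned, affinity_key=user_id"
""")
```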


Here’s Why You Should Consider Becoming a Data Analyst

Data analysts specialize in gathering raw data and being able to derive insights from it. They have the patience and curiosity to poke around large amounts of data until they find meaningful information from it — after which they clean and present their findings to stakeholders. Data analysts use many different tools to come up with answers. They use SQL, Python, and sometimes even Excel to quickly solve problems. The end goal of an analyst is to solve a business problem with the help of data. This means that they either need to have necessary domain knowledge, or work closely with someone who already has the required industry expertise. Data analysts are curious people by nature. If they see a sudden change in data trends (like a small spike in sales at the end of the month), they would go out of their way to identify if the same patterns can be observed throughout the year. They then try to piece this together with industry knowledge and marketing efforts, and provide the company with advice on how to cater to their audience.
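
The end-of-month check described above is only a few lines of pandas; the data below is invented purely for illustration:

```python
import pandas as pd

# Hypothetical daily sales data with a toy spike in the last few days of each month.
sales = pd.DataFrame({"date": pd.date_range("2021-01-01", "2021-12-31", freq="D")})
sales["revenue"] = 1000 + (sales["date"].dt.day >= 28) * 250

# Does the end-of-month spike hold throughout the year?
sales["month"] = sales["date"].dt.to_period("M")
sales["month_end"] = sales["date"].dt.day >= 28
comparison = sales.groupby(["month", "month_end"])["revenue"].mean().unstack()
print(comparison)  # average revenue: rest of month vs. last few days, per month
```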


Siloscape: The Dark Side of Kubernetes

Good IT behavior starts with the user. As someone who has witnessed the impacts of ransomware firsthand, I can attest to the importance of having good password hygiene. I recommend using unique, differentiated passwords for each user account, ensuring correct password (and data) encryption at rest or in transit, and keeping vulnerable and valuable data out of plaintext whenever possible. In the case of Kubernetes, you must ensure that you understand how to secure it from top to bottom. Kubernetes offers some of the most well-written and understandable documentation out there and includes an entire section on how to configure, manage and secure your cluster properly. Kubernetes can be an awesome way to level-up applications and services. Still, the importance of proper configuration of each Kubernetes cluster cannot be overstated. In addition to good hygiene, having a trusted data management platform in place is essential for making protection and recovery from ransomware like Siloscape less burdensome.


An Introduction to Hybrid Microservices

Put simply, a hybrid microservices architecture comprises a mix of the two different architectural approaches. It comprises some components that adhere to the microservices architectural style and some other components that follow the monolithic architectural style. A hybrid microservices architecture is usually comprised of a collection of scalable, platform-agnostic components. It should take advantage of open-source tools, technologies, and resources and adopt a business-first approach with several reusable components. Hybrid microservices architectures are well-suited for cloud-native, containerized applications. A hybrid microservices-based application is a conglomeration of monolithic and microservices architectures – one in which some parts of the application are built as microservices and the remaining parts remain a monolith. ... When adopting a microservices architecture, the usual approach is to refactor the application and then implement the microservices architecture within it.


The Inevitability of Multi-Cloud-Native Apps

Consistently delivering rapid software iteration across a global footprint forces DevOps organizations to grapple with an entirely new set of technical challenges: leveraging containerized applications and microservices architectures in production across multiple Kubernetes clusters running in multiple geographies. Customers want an on-demand experience. This third phase is what we call multi-cloud-native, and it was pioneered by hyperscale IaaS players like Google, AWS, Azure and Tencent. The reality is, of course, that hyperscalers aren’t the only ones who have figured out how to deliver multi-cloud-native apps. Webscale innovators like Doordash, Uber, Twitter and Netflix have done it, too. To get there, they had to build and deliver their multi-cloud-native apps across every geography where their customers live. And, in turn, to make that happen they had to tackle a new set of challenges: developing new tools and techniques like geographically distributed, planet-scale databases and analytics engines, and application architectures that run apps on the backend close to the consumer in a multi-cloud-native way.


DeepMind is developing one algorithm to rule them all

The key thesis is that algorithms possess fundamentally different qualities when compared to deep learning methods — something Blundell and Veličković elaborated upon in their introduction of NAR. This suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning. Like all well-grounded research, NAR has a pedigree that goes back to the roots of the fields it touches upon, and branches out to collaborations with other researchers. Unlike much pie-in-the-sky research, NAR has some early results and applications to show. We recently sat down to discuss the first principles and foundations of NAR with Veličković and Blundell, joined by MILA researcher Andreea Deac, who expanded on specifics, applications, and future directions. Areas of interest include the processing of graph-shaped data and pathfinding.


Microservices Transformed DevOps — Why Security Is Next

Microservices break that same application into tens or hundreds of small individual pieces of software that address discrete functions and work together via separate APIs. A microservices-based approach enables teams to update those individual pieces of software separately, without having to touch each part of the application. Development teams can move much more quickly and software updates can happen much more frequently because releases are smaller. This shift in the way applications are built and updated has created a second movement/change: how software teams function and work. In this modern environment, software teams are responsible for smaller pieces of code that address a function within the app. For example, let’s say a pizza company has one team (Team 1) solely focused on the software around ordering and another (Team 2) on the tracking feature of a customer’s delivery. If there is an update to the ordering function, it shouldn’t affect the work that Team 2 is doing. A microservices-based architecture is not only changing how software is created


Transitioning from Monolith to Microservices

While there are many goals for a microservice architecture, the key wins are flexibility, delivery speed, and resiliency. After establishing your baseline for the delta between code commit and production deployment completion, measure the same process for a microservice. Similarly, establish a baseline for “business uptime” and compare it to that of your post-microservice implementation. “Business uptime” is the uptime required by necessary components in your architecture as it relates to your primary business goals. With a monolith, you deploy all of your components together, so a fault in one component could affect your entire monolithic application. As you transition to microservices, the pieces that remain in the monolith should be minimally affected, if at all, by the microservice components that you’re creating. ... Suppose you’ve abstracted your book ratings into a microservice. In that case, your business can still function—and would be minimally impacted if the book ratings service goes down—since what your customers primarily want to do is buy books.
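
A minimal sketch, with hypothetical service names and endpoints, of how the book page can keep selling when the extracted ratings microservice is down:

```python
import requests

RATINGS_URL = "http://ratings-service/books/{book_id}/rating"  # hypothetical endpoint

def get_book_page(book_id: str, catalog: dict) -> dict:
    """Build the book page; degrade gracefully if ratings are unavailable."""
    page = {"title": catalog[book_id]["title"], "price": catalog[book_id]["price"]}
    try:
        resp = requests.get(RATINGS_URL.format(book_id=book_id), timeout=0.5)
        resp.raise_for_status()
        page["rating"] = resp.json()["average"]
    except requests.RequestException:
        # Ratings are a nice-to-have: customers can still buy the book.
        page["rating"] = None
    return page

catalog = {"b1": {"title": "Example Book", "price": 12.50}}
print(get_book_page("b1", catalog))
```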



Quote for the day:

"The essence of leadership is not giving things or even providing visions. It is offering oneself and one's spirit." -- Lee Bolman & Terence Deal

Daily Tech Digest - October 12, 2021

Proving the value of analytics on the edge

Las Vegas began deploying edge computing technology in 2018 while working on smart traffic solutions. A key driver for analyzing data at the network edge came from working with autonomous vehicle companies that needed near real-time data, Sherwood says. “Edge computing allowed for data to be analyzed and provided to the recipient in a manner which provided the best in speed,” Sherwood says. Visualizing data in a real-time format “allows for decision-makers to make more informed decisions.” The addition of predictive analytics and artificial intelligence (AI) is helping with decisions that are improving traffic flows, “and in the near future will have dramatic impacts on reducing traffic congestion and improving transit times and outcomes,” Sherwood says. To help bolster its data analytics operations overall and at the edge, the city government is developing a data analytics group as an offshoot of the IT department. The Office of Data and Analytics will drive how data is governed and used within the organization, Sherwood says. “We see lots of opportunities with many new technologies coming onto the market,” he says.


The Fundamentals of Testing with Persistence Layers

In order to learn how to test with databases, one must first ‘unlearn’ a few things, starting with the concepts of unit tests and integration tests. To put it bluntly, the modern definitions of these terms are so far removed from their original meanings that they are no longer useful for conversation. So, for the remainder of this article, we aren’t going to use either of them. The fundamental goal of testing is to produce information. A test should tell you something about the thing being tested that you may not have known before. The more information you get the better. So, we are going to ignore anyone who says, “A test should only have one assertion” and replace it with, “A test should have as many assertions as needed to prove a fact”. The next problematic expression we need to deal with is, “All tests should be isolated”. This is often misunderstood to mean each test should be full of mocks so the function you’re testing is segregated from its dependencies. This is nonsense, as that function won’t be segregated from its dependencies in production.
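
In that spirit, here is a sketch of a test that talks to a real persistence layer (an in-memory SQLite database) instead of mocks, and makes as many assertions as it needs to prove one fact; the schema and function are invented for illustration:

```python
import sqlite3

def add_user(conn: sqlite3.Connection, name: str) -> int:
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def test_add_user_persists_row():
    # Real persistence layer, no mocks: the function is exercised against
    # the same kind of dependency it will have in production.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

    new_id = add_user(conn, "Ada")

    # As many assertions as needed to prove the fact: the row exists,
    # has the right content, and nothing else was written.
    row = conn.execute("SELECT id, name FROM users WHERE id = ?", (new_id,)).fetchone()
    assert row is not None
    assert row[1] == "Ada"
    assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

test_add_user_persists_row()
```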


Should We Resign Ourselves To The Great Resignation?

Is the Great Resignation a temporary trend or a long-term structural change? There’s no way to know but my money is on the latter. Life-changing events change lives, whether or not we realize it as it is occurring. An individual crisis changes individual behavior, worldwide crises cause lasting social and cultural consequences. The pandemic completely upended the employee experience, and while many employers continued to monitor productivity, most didn’t devote nearly the same amount of effort to soliciting real-time, real-world feedback from remote workers about the challenges, struggles and stresses they were facing. McKinsey identified “employees prioritize relational factors, whereas employers focus on transactional ones”. By neglecting to engage with remote employees, not listening to nor addressing their issues and concerns, employers missed a once-in-a-lifetime opportunity to build trust in within the organization and loyalty from workers. As the Great Resignation plays out and the workforce reshuffles, it will be interesting to see if employers and workers can engage, listen, and trust each other enough to find common ground.


How cyberattacks are changing according to new Microsoft Digital Defense Report

Ransomware offers a low-investment, high-profit business model that’s irresistible to criminals. What began with single-PC attacks now includes crippling network-wide attacks using multiple extortion methods to target both your data and reputation, all enabled by human intelligence. Through this combination of real-time intelligence and broader criminal tactics, ransomware operators have driven their profits to unprecedented levels. This human-operated ransomware, also known as “big game ransomware,” involves criminals hunting for large targets that will provide a substantial payday through syndicates and affiliates. Ransomware is becoming a modular system like any other big business, including ransomware as a service (RaaS). With RaaS there isn’t a single individual behind a ransomware attack; rather, there are multiple groups. For example, one threat actor may develop and deploy malware that gives one attacker access to a certain category of victims; whereas, a different actor may merely deploy malware.


Cybersecurity awareness month: Fight the phish!

Simply put, the phishing “game” only has two moves: the scammers always play first, trying to trick you, and you always get to play second, after they’ve sent out their fake message. There’s little or no time limit for your move; you can ask for as much help as you like; you’ve probably got years of experience playing this game already; the crooks often make really silly mistakes that are easy to spot … and if you aren’t sure, you can simply ignore the message that the crooks just sent, which means you win anyway! How hard can it be to beat the criminals every time? Of course, as with many things in life, the moment you take it for granted that you will win every time is often the very same moment that you stop being careful, and that’s when accidents happen. Don’t forget that phishing scammers get to try over and over again. They can use email attachments one day, dodgy web links the next, rogue SMSes the day after that, and if none of those work, they can send you fraudulent messages on a social network. The crooks can try threatening you with closing your account, warning you of an invoice you need to pay, flattering you with false praise, offering you a new job, or announcing that you’ve won a fake prize.


Edge computing: The architecture of the future

As technology extends deeper into every aspect of business, the tip of the spear is often some device at the outer edge of the network, whether a connected industrial controller, a soil moisture sensor, a smartphone, or a security cam. This ballooning internet of things is already collecting petabytes of data, some of it processed for analysis and some of it immediately actionable. So an architectural problem arises: You don’t want to connect all those devices and stream all that data directly to some centralized cloud or company data center. The latency and data transfer costs are too high. That’s where edge computing comes in. It provides the “intermediating infrastructure and critical services between core datacenters and intelligent endpoints,” as the research firm IDC puts it. In other words, edge computing provides a vital layer of compute and storage physically close to IoT endpoints, so that control devices can respond with low latency – and edge analytics processing can reduce the amount of data that needs to be transferred to the core.


Test Automation for Software Development

Automating software and security testing in software development is an ongoing process, yet truly reaching full automation may never happen. In SmartBear Software’s “2021 State of Software Quality | Testing” the percentage of organizations that conduct all tests manually rose from 5% in 2019 to 11% in 2021. This does not mean that automation is not happening. On the contrary, both manual and automated tests are being conducted. The biggest challenge to test automation is no longer dealing with changing functionality but instead not having enough time to create and conduct tests. Testers are not being challenged by demands to deploy more frequently but instead to test more frequently across more environments. Testing of the user interface layer is more common, and to address this 50% currently conduct some automated usability testing as compared to just 34% in 2019. The remainder of the article provides additional highlights on this and two other reports that highlight DevSecOps metrics and practices.


API Design Principles and Process at Slack

Slack’s list of design principles begins with each API doing one thing well, followed by the developer experience. The first is that APIs should focus on a specific use case, thus becoming more straightforward, safer and easier to scale. The authors believe that APIs should be so well designed and documented that developers should be able to build a simple use case in a matter of minutes and discover parts of the API intuitively. In case of errors, the API should return all the information necessary for developers to understand the cause of the error and take the first steps towards solving it. The fifth principle concerns scale and performance. The authors provide concrete advice, recommending pagination of big collections, avoiding nesting big collections inside other big collections, and implementing rate limiting on the API. The last principle enumerated by the authors is that breaking changes should be avoided.
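
A sketch of what those principles might look like in response payloads; the field names and shapes below are invented for illustration and are not Slack's actual API:

```python
# Paginated collection response: cursor-based, so clients never request
# an unbounded list in one call.
page_response = {
    "ok": True,
    "items": [{"id": "C1"}, {"id": "C2"}],
    "response_metadata": {"next_cursor": "dGVhbTpD"},
}

# Error response: enough information for a developer to understand the
# cause and take a first step toward fixing it.
error_response = {
    "ok": False,
    "error": "invalid_cursor",
    "detail": "The cursor has expired; re-request the first page.",
}

# Rate-limit response: tells the client when it is safe to retry.
rate_limited_response = {
    "ok": False,
    "error": "rate_limited",
    "retry_after_seconds": 30,
}

print(page_response, error_response, rate_limited_response, sep="\n")
```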


How to Build a Strong and Effective Data Retention Policy

The first step toward creating a comprehensive DRP strategy is to identify the specific business needs the retention policy must address. The next step should be reviewing the compliance regulations that are applicable to the entire organization. “Designate a team of individuals across various business practices to begin data inventorying and devising a plan to implement and maintain a data retention policy that meets your business requirements while adhering to compliance regulations,” Gandhi advises. The enterprise's chief data officer (CDO) should oversee the DRP's design and implementation, Ferreira recommends. “However, everyone who deals with the data must be aware of the mechanisms implemented ... so that they can behave in ways that facilitate the implementation of the DRP,” he adds. “Implementing a robust DRP may be a top-down decision, but it requires buy-in from all levels of the organization.” Stakeholders from records, legal, IT, security, privacy, and other relevant posts and departments all need a chance to weigh in on an enterprise's data retention policy, Read says.


FSU’s university-wide resiliency program focuses on doing the basics better

In addition to its far-reaching geographical footprint, FSU has a broad range of operational needs to support the diversity of work typical of a university. It also has distributed IT. All those factors make for additional levels of complexity within disaster recovery and business continuity plans. Furthermore, at the time of the audit, the university had 307 different units expected to devise their own disaster and recovery plans as well as complete an annual 140-question risk assessment. Hunkapiller sought to overcome those complexities by using a multipronged approach to first tackle the inadequacies in the university’s business continuity, disaster preparedness and response capabilities and then encourage continuous improvement. “The idea was to better identify risks, improve our vulnerability management and resiliency plans, ensure continuity of operations and bring risk down to a level that was tolerable,” says Hunkapiller, who worked with FSU’s Department of Emergency Management to devise Seminole Secure.



Quote for the day:

"So much of what we call management consists in making it difficult for people to work." -- Peter Drucker

Daily Tech Digest - October 11, 2021

How businesses can combat data security and GDPR issues when working remotely

Whether using a business or personal device, having robust Secure Device Management and effective Mobile Device Management (MDM) is key to implementing security measures to keep data on mobile devices secure from threats. Adopting data encryption across software and devices being used remotely also allows data to be kept safe and secure from unauthorised use, even in the event of a security breach. In addition, implementing a corporate Virtual Private Network (VPN) enables an encrypted connection from a device to a network that allows the safe transmission of data from the office to remote working environments. Employees should have access only to the data they require to complete their work, to mitigate the unnecessary risk of unauthorised access, with measures that restrict data on a ‘need-to-know’ basis implemented where possible. Crucially, companies should provide all employees working from home with a clear and documented remote working policy that outlines precisely how personal and company data should be handled to keep it secure.


Digital transformation: 4 excuses to leave behind

Outdated, manual, and siloed processes not only slow your business, but they boost costs because it is more expensive to maintain broken, outdated processes. As we emerge from the pandemic, most businesses are realizing that their existing business processes are not sustainable in the new normal. With remote and hybrid work becoming standard, organizations have had to think on their feet to maintain business as usual, and digital transformation makes this possible. COVID lockdowns made it urgent for enterprises to enable secure remote operations, which in turn made them realize the importance of migrating their operations to the cloud. There has been an exponential increase in the adoption of cloud technology post-pandemic. It has enabled businesses to operate in a remote environment without impacting the speed and quality of services. If you haven’t already done so, start by identifying the “low-hanging fruit” – i.e., processes that are best for your initial automation roadmaps. Then start scaling up. Transitioning to the cloud gives you countless possibilities, from reducing IT infrastructure costs to achieving scalability per business needs.


4 questions that get the answers you need from IT vendors

Enterprises don’t plan on how to adopt abstract technology concepts; they plan for product adoption and deployment. Network vendors who offer the products are the usual source of information, which can be delivered through news stories, vendor websites, or sales engagement. Enterprises expect the seller to explain why their product or service is the right idea, and sellers largely agree. It’s just a question of what specific sales process is supposed to provide that critical information. Technology salespeople, like all salespeople, make their money largely on commissions. They call on prospects, pitch their products/services, and hopefully get the order. Their goal is a fast conversion from prospect to customer, and nearly all salespeople will tell you that they dread above all the “educational sell”. That happens when a prospect knows so little about the product/service being sold that they can’t make a decision at all and have to be taught the basics. The salesperson who’s teaching isn’t making commissions, and their company isn’t hitting their revenue goals.


3 Things to Consider Before Investing in New Technology for Your Small Business

When you are searching for tech to suit your business's unique needs, it’s important to keep the happiness of your employees at the forefront. That’s what authentically attracts new talent to your company and entices people to stay. In many cases, happiness is derived from productivity. If workers know what they need to do but just don’t have the tools to do it quickly, they will get discouraged and customers will complain because they didn’t have a great experience. So, stop and assess why they’re experiencing each challenge as they move through tasks. Consider what you genuinely wish could be better or easier for you, your employees and everyone else involved. Then think about how technology may be able to solve each problem. If you equip a first-day employee with a mobile device that helps them get through a full inventory count comfortably and without making a single mistake, they are going to leave work feeling empowered. They’ll share their positive experience with friends, family and (if you’re lucky) social media. Word will spread about how great it is to work for your company.


Cloud Cost Optimization: A Pivotal Part of Cloud Strategy

To maintain an optimal state, you need to ensure that sound policies around budgeting are adhered to. In terms of governance, the framework should oversee resource creation permissions as well. ... Once you gain visibility into spending metrics, you must observe which unused resources can be disposed of and which resources could be optimized. The journey for any cloud cost optimization starts with initial analyses of the current cloud estate and identifying optimization opportunities across compute, network, storage, and other cloud-native features. Any cloud cost optimization framework needs to have a repository of cost levers with associated architecture and feature trade-offs. Businesses would need governance — the policies around budget adherence, resource creation permissions, etc. — to maintain an optimal state. A practical cost optimization framework requires all three of the above. Achieving initial savings would entail analyzing the estate and identifying optimization opportunities across compute, storage, and networking, focusing on the highest costs first and/or incremental costs month over month; cloud vendors provide access to the cost and utilization data needed for this.
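
As one concrete example of identifying unused resources, here is a boto3 sketch that lists unattached EBS volumes, a common source of silent spend (it assumes AWS credentials and region are configured; other clouds expose equivalent inventory APIs):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Volumes in the 'available' state are not attached to any instance,
# but they are still billed every month.
paginator = ec2.get_paginator("describe_volumes")
unattached = []
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    unattached.extend(page["Volumes"])

for vol in unattached:
    print(vol["VolumeId"], vol["Size"], "GiB, created", vol["CreateTime"])
```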


Applying Behavioral Psychology to Strengthen Your Incident Response Team

Orlando says it's natural for relationships to form, and for trust to form, in an incident response team and within a larger organization. In his experience, he often encounters what he calls the "rock star problem." "You've got one or a few people [who are] very, very capable, very knowledgeable, and the team sort of coalesces around those individuals," he says. "Which is not necessarily a bad thing, but it can create issues when those individuals inevitably move on, or maybe they [have] less than optimal work habits, or behaviors, or things we want to try to account for." Compounding CSIRTs' collaboration issues is a prominent focus on technical tools and skills, Orlando adds. Incident response teams are "often inundated" with tools to address technical problems in security and incident response; however, there is a "definite lack" of tools to address some of the social and collaboration challenges CSIRTs face in operating within the context of a multigroup, multiteam system as they need to do.


Netherlands Says Armed Forces May Combat Ransomware Attacks

Countries are being held accountable for their actions and inaction via diplomatic responses such as actions against cross-border criminal cyber operations and measures such as sanctions, which are more powerful if they are designed in a broad coalition context, Knapen says. "Within the EU, the Netherlands has therefore been a driving force behind the EU Cyber Diplomacy Toolbox and the adoption of the ninth EU cyber sanctions regime in May 2019, and the Netherlands is committed to further developing these instruments. This provides the EU with good tools to respond faster and more vigorously to cyber incidents. Recent EU statements and sanctions show that these instruments are delivering concrete results," he notes. Knapen is also pushing for diplomatic channels for bilateral cooperation between countries in judicial investigations against ransomware, which he says can be useful if cooperation through international judicial channels is insufficient. "The Netherlands can then emphasize the importance attached to cooperation through diplomatic channels," he says.


Can India Address the Growing Cybersecurity Challenges in the Nuclear Domain?

India has established several key agencies to counter the growing challenges on cybersecurity. However, the effectiveness of its cybersecurity policies in the nuclear domain lies with the ability to effectively incorporate cybersecurity, cyber infrastructure, and its operating agencies into the larger nuclear security framework. Efficient and effective cybersecurity mechanisms require cohesive inter-agency coordination to strengthen said mechanisms. It is also essential for government authorities to acknowledge, interact with, and evolve cybersecurity protocols and procedures regularly to reflect a rapidly changing security environment. An effective cybersecurity policy also requires clear demarcation of roles, responsibilities, and contingency plans for short and long-term implementation and altering based on circumstances and technological advancements. Additionally, and most importantly, a renewed emphasis on understanding cyber risks and acknowledging the importance of cyber-nuclear security is essential in the Indian context.


How technology can drive positive change in insurance post-COVID

From forced closures to operational transformation, the COVID-19 pandemic has impacted businesses both UK and worldwide. The world of insurance is no exception to this rule – but the nature of the industry and its interests have led to a layered set of challenges and opportunities beyond the obvious disruptions to working practices. These challenges have been laid out in a recent report from EY, which lists a number of early pandemic issues for the industry including the tricky transition to remote working, a “strong push toward digitisation”, and the embrace of virtual interactions for clients and distribution partners. While these concerns may feel familiar, EY’s report goes on to draw out the specific difficulties faced by insurers, where COVID-19 has occasioned “mounting consumer, political, and legislative pressure to cover pandemic-related business interruption claims”. Not only has the industry needed to embrace new technologies and practices to adapt to the pandemic, but it has also needed to address some of the COVID-driven burdens faced by clients. 


Safe and secure disposal of end-of-life IT hardware

First, your business needs to develop a plan of action that brings together your IT, information security and office management staff, with oversight from senior executives. To be fully effective, it should establish a decommissioning strategy that covers the compliant disposal of retired hardware and the destruction of data. Next, you need to ensure that all the data on your old hardware has been permanently eradicated and is non-recoverable. Given the importance of this step, it is likely that you’ll need assistance from a third-party disposition expert. Third, you need to know the whereabouts of your assets throughout the disposition process. A secure chain of custody is vital to prove compliance and so, once again, it is advisable to employ the services of an outside expert – a company that offers rigorous security practices, such as asset itemisation, GPS tracking and protected transportation, all backed up with supporting documentation. Having a secure chain of custody is critical because it ensures that the IT assets are tracked during each step of the process from pick-up to final disposition.
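As a small illustration of the documentation side, a chain-of-custody trail can be modelled as an append-only log of timestamped events per asset, from pick-up to final disposition. The sketch below is hypothetical (the field names, step labels, and vendor names are assumptions, not a prescribed format or any vendor's system):

```python
# Hypothetical sketch of an append-only chain-of-custody log: every
# disposition step is recorded per asset so the trail from pick-up to
# final destruction can be produced as supporting documentation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyEvent:
    asset_tag: str
    step: str            # e.g. "picked-up", "in-transit", "data-wiped", "destroyed"
    handler: str
    location: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

custody_log: list[CustodyEvent] = []

def record(asset_tag, step, handler, location):
    custody_log.append(CustodyEvent(asset_tag, step, handler, location))

record("LT-0042", "picked-up", "disposition vendor", "HQ loading dock")
record("LT-0042", "data-wiped", "disposition vendor", "processing facility")
record("LT-0042", "destroyed", "disposition vendor", "processing facility")

for event in custody_log:
    print(event.asset_tag, event.step, event.timestamp.isoformat())
```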



Quote for the day:

"The final test of a leader is that he leaves behind him in other men, the conviction and the will to carry on." -- Walter Lippmann

Daily Tech Digest - October 10, 2021

Data Science Process Lifecycle

When you’re so focused on tech and coding, it can be easy to lose sight of the actual business goal and vision. You might start spinning your wheels, going off on tangents, and overall contributing to business inefficiencies - often without noticing. Not to mention, having to execute projects without a firm understanding of your place in the company’s vision and without a strategy for forward momentum can be downright frustrating and inefficient. ... How are data pros supposed to excel without strong leadership and frameworks to guide them in their execution? We need to make sure that as data implementation folks, we keep our eyes on the prize. And as leaders, we need to make sure data implementation workers are included in the overarching strategy from the get-go. If you’re ready to make sure the data projects you work on always stay on track and profitable, let’s dive into the data science process lifecycle framework. ... Essentially, the data science process lifecycle is a structure through which you can manage the implementation of your data initiatives. It allows those who work in data implementation to see where their role fits into the bigger picture of the project, and ensures there’s a cohesive management structure.


Distributed transaction patterns for microservices compared

Having a monolithic architecture does not imply that the system is poorly designed or bad. It does not say anything about quality. As the name suggests, it is a system designed in a modular way with exactly one deployment unit. Note that this is a purposefully designed and implemented modular monolith, which is different from an accidentally created monolith that grows over time. In a purposeful modular monolith architecture, every module follows the microservices principles. Each module encapsulates all the access to its data, but the operations are exposed and consumed as in-memory method calls. With this approach, you have to convert both microservices (Service A and Service B) into library modules that can be deployed into a shared runtime. You then make both microservices share the same database instance. Because the services are written and deployed as libraries in a common runtime, they can participate in the same transactions. Because the modules share a database instance, you can use a local transaction to commit or roll back all changes at once.
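To make the shared-transaction idea concrete, here is a minimal sketch (my own illustration, not code from the article) of two former services deployed as library modules in one runtime. Because they share a single database connection, one local transaction commits or rolls back both modules' writes; the SQLite tables and function names are assumptions chosen purely for demonstration:

```python
# Minimal sketch: two modules in one runtime share a database connection,
# so their writes commit or roll back together in a single local transaction.
import sqlite3

def module_a_record_order(cur, order_id, item):
    # Module A owns the "orders" table but is invoked as an in-memory call.
    cur.execute("INSERT INTO orders (id, item) VALUES (?, ?)", (order_id, item))

def module_b_reserve_stock(cur, item, qty):
    # Module B owns the "stock" table; same connection, same transaction.
    cur.execute("UPDATE stock SET on_hand = on_hand - ? WHERE item = ?", (qty, item))

def place_order(conn, order_id, item, qty):
    cur = conn.cursor()
    try:
        module_a_record_order(cur, order_id, item)
        module_b_reserve_stock(cur, item, qty)
        conn.commit()      # both modules' changes become visible atomically
    except Exception:
        conn.rollback()    # or neither change is applied
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, on_hand INTEGER)")
conn.execute("INSERT INTO stock VALUES ('widget', 10)")
place_order(conn, 1, "widget", 2)
print(conn.execute("SELECT on_hand FROM stock WHERE item = 'widget'").fetchone())
```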


How disagreement creates unity in open source

For something to be learned in a disagreement, both sides must be open to different perspectives. I once coached an engineer who had strong opinions and constantly found himself in decision gridlock. Team meetings became so tense that we couldn't get past even the first agenda item before the hour was up. This engineer was frustrated and wanted to know why he couldn't convince people of his ideas. My advice surprised him: he should allow himself to be convinced as much as he tried to convince others. When he applied this advice, it became noticeably easier to make progress in meetings. Because other team members felt respected, we were arguing less and focusing more on how to reach our goals as a team. When you focus solely on advocating for your own ideas, you are more likely to miss the critical points seen by others, however unintentionally. Having a collaborative mindset keeps disagreement healthy. A collaborative mindset means prioritizing the needs of the team or community rather than the individual. When these needs fall out of balance, having a shared purpose can recenter a team. It's not about being right; it's about doing right by the group.


Microservices Adoption and the Software Supply Chain

What we continue to call technical debt is really the activities that are related to tending to and upgrading our software when third-party components are evolving or have common vulnerabilities and exposures (CVEs) and need to be upgraded. These are tedious, repetitive tasks that usually fall to the most experienced engineers, as they require technical expertise to do correctly. Such activities can paralyze engineering organizations and place a tremendous burden on engineers, which often leads to burnout. Up to 30% of engineering time is spent on technical debt. The perception that developers are somehow responsible for accruing this technical debt and are doing something wrong that prevents them from keeping up is hugely demoralizing and demotivating. However, if we reframe technical debt as software supply chain management and stop blaming engineering for it, we can make maintenance more predictable and consistent. By taking steps like inventorying third-party components and determining how pervasive they are in the application, an organization can arrive at a maintenance estimate.
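As a rough sketch of that inventory step, the snippet below walks a Python codebase (an assumed src/ directory and an assumed list of third-party packages; neither comes from the article) and counts how many in-house modules import each dependency, giving a first-pass measure of pervasiveness from which a maintenance estimate could be derived:

```python
# Rough sketch: count how many in-house modules import each third-party
# package as a first-pass measure of how pervasive the dependency is.
import ast
from collections import Counter
from pathlib import Path

THIRD_PARTY = {"requests", "flask", "sqlalchemy"}    # assumed dependency list

def top_level_imports(path):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                yield alias.name.split(".")[0]
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module.split(".")[0]

usage = Counter()
for source_file in Path("src").rglob("*.py"):        # assumed source root
    usage.update(set(top_level_imports(source_file)) & THIRD_PARTY)

for package, module_count in usage.most_common():
    print(f"{package}: imported by {module_count} modules")
```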


When it Comes to Ransomware, Should Your Company Pay?

Theoretically, if organizations pay the ransom, the attackers will provide a decryption tool and withdraw the threat to publish stolen data. However, payment doesn’t guarantee all data will be restored. Executives need to carefully consider the realities of ransomware, including: On average, only 65% of the data is recovered, and only 8% of organizations manage to recover all data; Encrypted files are often unrecoverable. Attacker-provided decrypters may crash or fail. You may need to build a new decryption tool by extracting keys from the tool the attacker provides; Recovering data can take several weeks, particularly if a large amount of it has been encrypted; There is no guarantee that the hackers will delete the stolen data. They could sell or disclose the information later if it has value. Ransomware is a sustainable and lucrative business model for cybercriminals, and it puts every organization that uses technology at risk. In many cases, it is easier and cheaper to pay the ransom than to recover from backup. But supporting the attackers’ business model will only lead to more ransomware.


Open source for good

In the pandemic, open source has been critical. It has touched billions of lives, and it has saved lives. I saw this unfold daily at SUSE, which specialises in bringing open source software to business. I marvelled at the importance of universal access to critical code to design contact-tracing technology, helping unravel the complexities of the virus’s path across the planet. When Singapore led the world in implementing contact tracing, open source made it possible. When large-scale Covid-19 testing and analysis became available, open source made it possible (and we are proud to have empowered our customer, Ruvos, to achieve this). When healthcare organisations needed a cost-effective way to analyse torrents of data at a moment’s notice, open source made it possible. Open source pervades our lives. It is a remarkable, often unsung, force for good. Open source software is embedded in mammogram machines; it powers autonomous driving to make people safer on the road, air traffic control systems at airports, and weather forecasting technology that warns of storms and even earthquakes.


5 principles for your cloud-oriented open-source strategy

Practitioners on your team should investigate projects that have the potential to solve a “job to be done” for your business. What they turn up may need more time to bake before it can be used in a meaningful way at your company, but if a project isn’t immediately useful, star the repo and keep tabs on the project. More importantly, make sure your engineers have time to learn and try new things every week and even to contribute to open-source projects. It can do wonders for morale, retention, and recruitment, and if the open-source projects are ones that your business depends on, the benefits multiply. ... It’s easy to have more open-source tech in your IT organization than you realize. Using open-source software is often the easiest way for an engineer to add a feature to in-house software or fix a bug in third-party software. While open-source proliferation means your team is finding creative ways to solve business problems, you need to understand what technology is being used and how it affects your organization.


Enterprise architecture and the sustainability puzzle

When it comes to revolutionizing digital infrastructure, the opportunity to increase sustainability and lower emissions lies directly with Enterprise Architecture (EA) teams and related disciplines. In short, the purpose of these teams is to create sustainable organizations delivering business objectives supported by modern digital platforms. This integrated perspective enables business and IT executives to quickly develop an understanding of where change is required and the impact this will have. So as we look to achieve the United Nations’ goals for sustainable development, EA’s overall targets should include sustainable IT practices. One particular goal EA should help organizations achieve is Goal 9, which highlights the need to ‘build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation.’ Striving to achieve this goal won’t be a simple fix, but it should certainly be seen as an opportunity for a profound, systemic shift to a more sustainable economy that works for both people and the planet.


Cybersecurity Risk Management: Are Your Enterprise Architecture and Security Teams Lacking Engagement?

The Digital Twin of an Organization (DTO) gives you a virtual representation of your organization, showing how the company performs as a system. It’s also a highly effective communication tool. With it, you’re able to visualize ongoing projects and see where they overlap. In addition, it tracks processes, systems, and information. The DTO modeling can be expanded with Scenarios. Using Scenarios, you can focus on various points to map out potential futures, including risk scenarios. From a risk management and security perspective, you can see what would happen if a critical system went down, which departments would be paralyzed, and how they could continue to function. For example, when mapping out the cybersecurity risk of a ransomware hit, the Digital Twin of an Organization could give you a clear overview of which parts of the organization are most exposed and show how the attack could develop. Let's say there's a new virus affecting laptops that aren't fully patched. You can easily identify which parts of the organization have the most unpatched laptops.
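As an illustrative sketch only (a hypothetical in-memory asset model, not the API of any digital-twin product), the snippet below shows the kind of query such a model makes possible: which departments hold the most laptops below an assumed patch baseline.

```python
# Hypothetical asset model: find the departments with the most laptops
# below an assumed patch baseline (version strings are zero-padded so
# lexicographic comparison works for this sketch).
from collections import Counter
from dataclasses import dataclass

@dataclass
class Asset:
    hostname: str
    department: str
    kind: str
    patch_level: str

MIN_PATCH = "2021.09"    # assumed baseline for the scenario

assets = [
    Asset("lt-001", "Sales",   "laptop", "2021.10"),
    Asset("lt-002", "Sales",   "laptop", "2021.06"),
    Asset("lt-003", "Finance", "laptop", "2021.05"),
    Asset("sv-001", "Finance", "server", "2021.10"),
]

exposed = Counter(
    a.department for a in assets
    if a.kind == "laptop" and a.patch_level < MIN_PATCH
)
for department, count in exposed.most_common():
    print(f"{department}: {count} unpatched laptop(s)")
```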


How To Leverage Enterprise Architects As CXO Advisors

Enterprise architects create holistic transparency and expertise across all layers, from business to IT and infrastructure landscapes. They also proactively identify the potential for optimization when it comes to business models and processes. The most important value that enterprise architects bring to the table is overseeing technology selection and the design of solution architecture. This showcases the increasing importance of enterprise architecture in shaping the agenda of CXOs (C-level executives). Enterprise architects take a holistic approach in design, planning, implementation and KPI measurement. They are able to fully understand the business strategy and identify needed changes, as well as additional technological capabilities that are required. They not only identify the requirements but can also plan and implement them. This is done by being cognizant of the organizational strategy, business environment, stakeholder interests, constraints and risks. Finally, they ensure that the relevant outcomes are achieved and identify when course corrections are needed.



Quote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward