Daily Tech Digest - September 27, 2021

How to Get Started With Zero Trust in a SaaS Environment

While opinions vary on what zero trust is and is not, this security model generally considers the user's identity as the root of decision-making when determining whether to allow access to an information resource. This contrasts with earlier approaches that made decisions based on the network from which the person was connecting. For example, we often presumed that workers in the office were connecting directly to the organization's network and, therefore, could be trusted to access the company's data. Today, however, organizations can no longer grant special privileges based on the assumption that the request is coming from a trusted network. With the high number of remote and geographically dispersed employees, there is a good chance the connections originate from a network the company doesn't control. This trend will continue. IT and security decision-makers expect remote end users to account for 40% of their workforce after the COVID-19 outbreak is controlled, an increase of 74% relative to pre-pandemic levels, according to "The Current State of the IT Asset Visibility Gap and Post-Pandemic Preparedness," with research conducted by the Enterprise Strategy Group for Axonius.

Tons Of Data At The Company Store

Confidentially, many chief data officers will admit that their companies suffer from what might euphemistically be called “data dyspepsia”: they produce and ingest so much data that they cannot properly digest it. Like it or not, there is such a thing as too much data – especially in an era of all-you-can-ingest data comestibles. “Our belief is that more young companies die of indigestion than starvation,” said Adam Wilson, CEO of data engineering specialist Trifacta, during a recent episode of Inside Analysis, a weekly data- and analytics-focused program hosted by Eric Kavanagh. So what if Wilson was referring specifically to Trifacta’s decision to stay focused on its core competency, data engineering, instead of diversifying into adjacent markets. So what if he was not, in fact, alluding to a status quo in which the average business feels overwhelmed by data. Wilson’s metaphor is no less apt if applied to data dyspepsia. It also fits with Trifacta’s own pitch, which involves simplifying data engineering – and automating it, insofar as is practicable – in order to accelerate the rate at which useful data can be made available to more and different kinds of consumers.

Hyperconverged analytics continues to guide Tibco strategy

One of the trends we're seeing is that people know how to build models, but there are two challenges. One is on the input side and one is on the output side. On the input side, you can build the greatest models in the world, but if you feed them bad data that's not going to help. So there's a renewed interest around things like data governance, data quality and data security. AI and ML are still very important, but there's more to it than just building the models. The quality of the data, and the governance and processes around the data, are also very important. That way you feed your model better data, which makes it more accurate, and from there you're going to get better outcomes. On the output side, since there are so many models being built, organizations are having trouble operationalizing them all. How do you deploy them into production, how do you monitor them, how do you know when it's time to go back and rework a model, how do you deploy them at the edge, how do you deploy them in the cloud and how do you deploy them in an application?

Gamification: A Strategy for Enterprises to Enable Digital Product Practices

As digital products take precedence, the software ecosystem brings new possibilities to products. With the rise of digital products, cross-functional boundaries are blurring. Acquiring new skills and unlearning old ways are both critical. Gamification can support a ladder approach to acquiring and applying new skills for continuous software delivery, testing and security. However, harnessing collective wisdom through gamification needs a systematic framework that integrates game ideation, design, validation and incentives with different persona types. Applying gamification systematically to solve serious problems, ideate, and create new knowledge together in a fun way is challenging. To successfully apply gamification for upskilling and boosting productivity, it must be accompanied by an understanding of its purpose through the following two critical perspectives: Benefits of embracing gamification for people – removing fear, having fun, and making the desirable shift towards new knowledge; creating an environment that is inclusive and can provide a learning ecosystem for all.

Artificial Intelligence: The Future Of Cybersecurity?

Cybersecurity in Industry 4.0 can't be tackled in the same way as that of traditional computing environments. The number of devices and associated challenges are far too many. Imagine monitoring security alerts for millions of connected devices globally. IIoT devices possess limited computing power and, therefore, lack the ability to run security solutions. This is where AI and machine learning come into play. ML can make up for the lack of security teams. AI can help discover devices and hidden patterns while processing large amounts of data. ML can help monitor incoming and outgoing traffic for any deviations in behavior in the IoT ecosystem. If a threat or anomaly is detected, alarms can be sent to security admins warning them about the suspicious traffic. AI and ML can be used to build lightweight endpoint detection technologies. This can be an indispensable solution, especially in situations where IoT devices lack the processing power and need behavior-based detection capabilities that aren't as resource intensive. AI and ML technologies are a double-edged sword. 
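The behavior-based monitoring the excerpt describes can be sketched in miniature. Below is a minimal z-score anomaly detector over traffic volumes; the per-minute byte counts and the threshold are hypothetical, chosen only to illustrate the idea of flagging deviations from baseline behavior:

```python
from statistics import mean, stdev

def detect_anomalies(traffic, threshold=3.0):
    """Flag traffic samples whose z-score exceeds the threshold."""
    if len(traffic) < 2:
        return []
    mu, sigma = mean(traffic), stdev(traffic)
    if sigma == 0:
        return []
    return [x for x in traffic if abs(x - mu) / sigma > threshold]

# Hypothetical per-minute byte counts from an IoT device; the spike stands out.
baseline = [100, 102, 98, 101, 99, 103, 97, 100, 5000]
print(detect_anomalies(baseline, threshold=2.0))  # [5000]
```

A real system would learn the baseline from historical traffic and raise an alert to security admins instead of printing, but the core statistical check is this lightweight.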

3 ways any company can guard against insider threats this October

Companies don’t become cyber smart by accident. In fact, cybersecurity is rarely top-of-mind for the average employee as they go about their day and pursue their professional responsibilities. Therefore, businesses are responsible for educating their workforce, training their teams to identify and defend against the latest threat patterns. For instance, phishing scams have increased significantly since the pandemic’s onset, and each malicious message threatens to undermine data integrity. Meanwhile, many employees can’t identify these threats, and they wouldn’t know how to respond if they did. Of course, education isn’t limited to phishing scams. One survey found that 61 percent of employees failed a basic quiz on cybersecurity fundamentals. With the average company spending only 5 percent of its IT budget on employee training, it’s clear that education is an untapped opportunity for many organizations to #BeCyberSmart. When coupled with intentional accountability measures that ensure training is implemented, companies can transform their unaware employees into incredible defensive assets.

VMware gears up for a challenging future

“What we are doing is pivoting our portfolio or positioning our portfolio to become the multi-cloud platform for our customers in three ways,” Raghuram said. “One is enabling them to execute their application transformation on the cloud of their choice using our Tanzu portfolio. And Tanzu is getting increased momentum, especially in the public cloud to help them master the complexities of doing application modernization in the cloud. And of course, by putting our cloud infrastructure across all clouds, and we are the only one with the cloud infrastructure across all clouds and forming the strategic partnerships with all of the cloud vendors, we are helping them take their enterprise applications to the right cloud.” Building useful modern enterprise applications is a core customer concern, experts say. “Most new apps are built on containers for speed and scalability. The clear winner of the container wars was Kubernetes,” said Scott Miller, senior director of strategic partnerships for World Wide Technology (WWT), a technology and supply-chain service provider and a VMware partner.

Software cybersecurity labels face practical, cost challenges

Cost and feasibility are among the top challenges of creating consumer labels for software. Adding to these challenges is the fact that software is continually updated. Moreover, software comes in both open-source and proprietary formats and is created by a global ecosystem of firms that range from mom-and-pop shops all the way up to Silicon Valley software giants. "It's way too easy to create requirements that cannot be met in the real world," David Wheeler, director of open source supply chain security at the Linux Foundation and leader of the Core Infrastructure Initiative Best Practices Badge program, said at the workshop. "A lot of open-source projects allow people to use them at no cost. There's often no revenue stream. [If] you have to spend a million dollars at an independent lab for an audit, [that] ignores the reality that for many projects, that's an impractical burden." ... Another critical aspect of creating software labels is to ensure that they don't reflect static points in time but are instead dynamic, taking into account the fluid nature of software.

Work’s not getting any easier for parents

Part of many managers’ discomfort with remote work is that they are unsure how to gauge their off-site employees’ performance and productivity. Some business leaders equate face time with productivity. I’ll never forget a visit I had to a Silicon Valley startup in which the manager showing me around described a colleague this way: “He’s such a great worker. He’s here every night until 10, and back in early every morning!” In my work helping businesses update their policies and cultures to accommodate caregivers, I often have to rid managers of this old notion. There’s nothing impressive, or even good, about being in the office so much. To help change the paradigm, I work with managers to find new ways of measuring an individual’s performance and productivity. Instead of focusing on hours worked per day, we look at an employee’s achievements across a broader time metric, such as a month or quarter. We ask, what did the employee do for the company during that time? It’s often then that businesses realize how little overlap there is between those who are seen working the most and those who have the greatest impact on the company. 

How to use feedback loops to improve your team's performance

In systems, feedback is a fundamental force behind their workings. When we fly a plane, we get feedback from our instruments and our co-pilot. When we develop software, we get feedback from our compiler, our tests, our peers, our monitoring, and our users. Dissent works because it’s a form of feedback, and clear, rapid feedback is essential for a well-functioning system. “Accelerate”, a four-year study of thousands of technology organizations, found that fostering a culture that openly shares information is a sure way to improve software delivery performance. It even predicts the ability to meet non-technical goals. These cultures, known as “generative” in Ron Westrum’s model of organizational culture, are performance- and learning-oriented. They understand that information, especially if it’s difficult to receive, only helps to achieve their mission, and so, without fear of retaliation, associates speak up more frequently than in rule-oriented (“bureaucratic”) or power-oriented (“pathological”) cultures. Messengers are praised, not shot.

Quote for the day:

"A pat on the back is only a few vertebrae removed from a kick in the pants, but is miles ahead in results." -- W. Wilcox

Daily Tech Digest - September 26, 2021

You don't really own your phone

When you purchase a phone, you own the physical parts you can hold in your hand. The display is yours. The chip inside is yours. The camera lenses and sensors are yours to keep forever and ever. But none of this, not a single piece, is worth more than its value in scrap without the parts you don't own but are graciously allowed to use — the copyrighted software and firmware that powers it all. The companies that hold these copyrights may not care how you use the product you paid a license for, and you don't hear a lot about them outside of the right to repair movement. Xiaomi, like Google and all the other copyright holders who provide the things which make a smartphone smart, really only wants you to enjoy the product enough to buy from them the next time you purchase a smart device. Xiaomi pissing off people who buy its smartphones isn't a good way to get those same people to buy another or buy a fitness band or robot vacuum cleaner. When you set up a new phone, you agree with these copyright holders that you'll use the software on their terms.

Edge computing has a bright future, even if nobody's sure quite what that looks like

Edge computing needs scalable, flexible networking. Even if a particular deployment is stable in size and resource requirements over a long period, to be economic it must be built from general-purpose tools and techniques that can cope with a wide variety of demands. To that end, software defined networking (SDN) has become a focus for future edge developments, although a range of recent research has identified areas where it doesn't yet quite match up to the job. SDN's characteristic approach is to divide the task of networking into two tasks of control and data transfer. It has a control plane and a data plane, with the former managing the latter by dynamic reconfiguration based on a combination of rules and monitoring. This looks like a good match for edge computing, but SDN typically has a centralised control plane that expects a global view of all network activity. ... Various approaches – multiple control planes, increased intelligence in edge switch hardware, dynamic network partitioning on demand, geography and flow control – are under investigation, as are the interactions between security and SDN in edge management.
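The control-plane/data-plane split that defines SDN can be sketched as a toy model. This is an illustration only, not any real SDN controller's API: the class names, addresses, and port labels are hypothetical, and a real data plane forwards at wire speed in hardware while the controller pushes rules over a protocol such as OpenFlow:

```python
class DataPlane:
    """A switch's forwarding element: applies rules in its flow table."""
    def __init__(self):
        self.flow_table = {}

    def forward(self, dst):
        # Destinations with no installed rule are dropped (or punted to
        # the controller, in a fuller model).
        return self.flow_table.get(dst, "drop")


class ControlPlane:
    """Centralised controller: holds the global view and pushes rules down."""
    def __init__(self, switches):
        self.switches = switches

    def install_rule(self, dst, port):
        # Dynamic reconfiguration: rewrite the data plane's behavior at runtime.
        for sw in self.switches:
            sw.flow_table[dst] = port


edge_switch = DataPlane()
controller = ControlPlane([edge_switch])
controller.install_rule("10.0.0.5", "port-2")

print(edge_switch.forward("10.0.0.5"))  # port-2
print(edge_switch.forward("10.0.0.9"))  # drop (no rule installed yet)
```

The edge-computing tension the article describes lives in the `ControlPlane` class: it assumes one controller with a global view, which is exactly what distributed edge deployments strain.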

TangleBot Malware Reaches Deep into Android Device Functions

In propagation and theme, TangleBot resembles other mobile malware, such as the FluBot SMS malware that targets the U.K. and Europe or the CovidLock Android ransomware, which is an Android app that pretends to give users a way to find nearby COVID-19 patients. But its wide-ranging access to mobile device functions is what sets it apart, Cloudmark researchers said. “The malware has been given the moniker TangleBot because of its many levels of obfuscation and control over a myriad of entangled device functions, including contacts, SMS and phone capabilities, call logs, internet access, [GPS], and camera and microphone,” they noted in a Thursday writeup. To reach such a long arm into Android’s internal business, TangleBot grants itself privileges to access and control all of the above, researchers said, meaning that the cyberattackers would now have carte blanche to mount attacks with a staggering array of goals. For instance, attackers can manipulate the incoming voice call function to block calls and can also silently make calls in the background, with users none the wiser. 

Why CEOs Should Absolutely Concern Themselves With Cloud Security

Probably the biggest reason cybersecurity needs to be elevated to one of your top responsibilities is simply that, as the CEO, you call most of the shots surrounding how the business is going to operate. To lead anyone else, you have to have a crystal-clear big picture of how everything interconnects and what ramifications threats in one area have for other areas. Additionally, it’s up to you to hire and oversee people who truly understand servers and cloud security and who can build a secure infrastructure and applications. That said, virtually all businesses today are “digital” businesses in some sense, whether that means having a website, an app, processing credit cards with point-of-sale readers or using the ‘net for your social media marketing. All of these things can be potential points of entry for hackers, who happily take advantage of any vulnerability they can find. And with more people working remotely and generally enjoying a more mobile lifestyle, the risks of cloud computing are here to stay.

Better Incident Management Requires More than Just Data

To the uninitiated, all complexity looks like chaos. Real order requires understanding. Real understanding requires context. I’ve seen teams all over the tech world abuse data and metrics because they don’t relate them to their larger context: what are we trying to solve, and how might we be fooling ourselves to reinforce our own biases? Nowhere is this more true than in the world of incident management. Things go wrong in businesses, large and small, every single day. Those failures often go unreported, as most people see failure through the lens of blame, and no one wants to admit they made a mistake. Because of that, site reliability engineering (SRE) teams establishing their own incident management process often invest in the wrong initial metrics. Many teams are overly concerned with reducing MTTR: mean time to resolution. Like the British government, those teams rely too heavily on their metrics without considering the larger context. Incidents are almost always going to be underreported initially: people don’t want to admit things are going wrong.

Three Skills You’ll Need as a Senior Data Scientist

In the context of data science, I would say critical thinking means answering the “why”s in your data science project. Before elaborating on what I mean, the most important prerequisite is to know the general flow of a data science project. The diagram below shows that flow. This is a slightly different view from the cyclic series of steps you might see elsewhere, and I think it is more realistic than seeing the process as a cycle. Now, to elaborate. In a data science project, there are countless decisions you have to make: supervised vs. unsupervised learning, selecting raw fields of data, feature engineering techniques, selecting the model, evaluation metrics, etc. Some of these decisions are obvious: if you have a set of features and a label associated with them, you’d go with supervised learning instead of unsupervised learning. But a seemingly tiny checkpoint you overlooked might be enough to derail the project. It can cost the company money and put your reputation on the line. When you can answer not just “what you’re doing” but also “why you’re doing it”, you close down most of the cracks where problems like these can seep in.
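One of those “why” decisions, choosing an evaluation metric, can be made concrete with a toy example. On imbalanced data (the fraud labels below are hypothetical), accuracy rewards a model that never flags fraud, while recall exposes it:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Fraction of actual positives the model caught."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    actual = sum(t == positive for t in y_true)
    return tp / actual if actual else 0.0

# Hypothetical imbalanced labels: 1 fraud case out of 10 transactions.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
always_zero = [0] * 10  # a "model" that never flags fraud

print(accuracy(y_true, always_zero))  # 0.9 -- looks great on paper
print(recall(y_true, always_zero))    # 0.0 -- misses every fraud case
```

Asking *why* accuracy was chosen, rather than just *what* score it produced, is exactly the kind of checkpoint the author warns against overlooking.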

The Benefits and Challenges of Passwordless Authentication

Passwordless authentication is a process that verifies a user's identity with something other than a password. It strengthens security by eliminating password management practices and the risk of threat vectors. It is an emerging subfield of identity and access management and will revolutionize the way employees work. ... Passwordless authentication uses modern authentication methods that reduce the risk of being targeted via phishing attacks. With this approach, employees who receive a phishing email won't have any sensitive information to hand over that could give threat actors access to their accounts or other confidential data. ... Passwordless authentication appears to be a secure and easy-to-use approach, but there are challenges in its deployment. The most significant issue is the budget and migration complexity. While setting up a budget for passwordless authentication, enterprises should include costs for buying hardware and its setup and configuration. Another challenge is dealing with old-school mentalities. Most IT leaders and employees are reluctant to move away from traditional security methods and try new ones.

Using CodeQL to detect client-side vulnerabilities in web applications

The idea of CodeQL is to treat source code as a database which can be queried using SQL-like statements. Many languages are supported, among them JavaScript. For JavaScript, both server-side and client-side flavours are supported. JS CodeQL understands modern editions such as ES6 as well as frameworks like React (with JSX) and Angular. CodeQL is not just grep: it supports taint tracking, which allows you to test whether a given user input (a source) can reach a vulnerable function (a sink). This is especially useful when dealing with DOM-based Cross-Site Scripting vulnerabilities. By tainting a user-supplied DOM property such as location.hash, one can test whether this value actually reaches one of the XSS sinks, e.g. element.innerHTML or document.write(). The common use-case for CodeQL is to run a query suite against open-source code repositories. To do so you may install CodeQL locally or use https://lgtm.com/. For the latter, you specify a GitHub repository URL and add it as your project.
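CodeQL queries are written in its own QL language and analyze code statically, but the source-to-sink idea behind taint tracking can be illustrated with a toy runtime sketch. The `Tainted` class and the `source`/`sanitize`/`sink` functions below are hypothetical illustrations of the concept, not CodeQL constructs:

```python
# Toy taint tracking: values originating from a "source" carry a taint
# flag, and we check whether a tainted value ever reaches a "sink".
# CodeQL performs this check statically over real code paths.

class Tainted(str):
    """A string derived from untrusted input (e.g. location.hash)."""

def source():
    # Models a user-controlled value arriving from the browser.
    return Tainted("<img src=x onerror=alert(1)>")

def sanitize(value):
    # HTML-escaping breaks the taint: str methods return a plain str.
    return value.replace("<", "&lt;").replace(">", "&gt;")

def sink(value):
    # Models a DOM XSS sink such as element.innerHTML.
    if isinstance(value, Tainted):
        raise ValueError("tainted value reached a sink")
    return value

payload = source()
try:
    sink(payload)               # unsanitized source-to-sink flow: flagged
except ValueError as e:
    print("finding:", e)

print(sink(sanitize(payload)))  # sanitized value passes through
```

A CodeQL taint-tracking configuration encodes the same three pieces, sources, sanitizers, and sinks, and reports any path from the first to the third that misses the second.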

Moving beyond agile to become a software innovator

Experience design is a specific capability focused on understanding user preferences and usage patterns and creating experiences that delight them. The value of experience design is well established, with organizations that have invested in design exceeding industry peers by as much as 5 percent per year in growth of shareholder return. What differentiates best-in-class organizations is that they embed design in every aspect of the product or service development. As a core part of the agile team, experience designers participate in development processes by, for example, driving dedicated design sprints and ensuring that core product artifacts, such as personas and customer journeys, are created and used throughout product development. This commitment leads to greater adoption of the products or services created, simpler applications and experiences, and a substantial reduction of low-value features. ... Rather than approaching it as a technical issue, the team focused on addressing the full onboarding journey, including workflow, connectivity, and user communications. The results were impressive. The team created a market-leading experience that enabled their first multimillion-dollar sale only four months after it was launched and continued to accelerate sales and increase customer satisfaction.

The relationship between data SLAs & data products

The data-as-a-product model intends to mend the gap that the data lake left open. In this philosophy, company data is viewed as a product that will be consumed by internal and external stakeholders. The data team’s role is to provide that data to the company in ways that promote efficiency, good user experience, and good decision making. As such, the data providers and data consumers need to work together to answer the questions put forward above. Coming to an agreement on those terms and spelling them out is called a data SLA. SLA stands for service-level agreement: a contract between two parties that defines and measures the level of service a given vendor or product will deliver, as well as remedies if they fail to deliver. SLAs are an attempt to define expectations of the level of service and quality between providers and consumers. They’re very common when an organization is offering a product or service to an external customer or stakeholder, but they can also be used between internal teams within an organization.
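Once agreed, a data SLA's terms can be checked mechanically. A minimal sketch, assuming hypothetical terms (a one-hour freshness bound and a minimum row count) rather than any particular team's contract:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical SLA terms agreed between data providers and consumers.
SLA = {"max_staleness": timedelta(hours=1), "min_row_count": 1000}

def check_sla(last_updated, row_count, now=None):
    """Return the list of SLA violations for a data product snapshot."""
    now = now or datetime.now(timezone.utc)
    violations = []
    if now - last_updated > SLA["max_staleness"]:
        violations.append("data is stale")
    if row_count < SLA["min_row_count"]:
        violations.append("row count below minimum")
    return violations

now = datetime(2021, 9, 26, 12, 0, tzinfo=timezone.utc)
fresh = datetime(2021, 9, 26, 11, 30, tzinfo=timezone.utc)
stale = datetime(2021, 9, 26, 9, 0, tzinfo=timezone.utc)

print(check_sla(fresh, 5000, now))  # [] -- within the agreed terms
print(check_sla(stale, 200, now))   # both terms violated
```

In practice such checks run on a schedule, and the "remedies if they fail to deliver" clause of the SLA determines what happens when the violation list is non-empty.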

Quote for the day:

"If you can't handle others' disapproval, then leadership isn't for you." -- Miles Anthony Smith

Daily Tech Digest - September 25, 2021

Top 5 Objections to Scrum (and Why Those Objections are Wrong)

Many software development teams are under pressure to deliver work quickly because other teams have deadlines they need to meet. A common objection to Agile is that teams feel that when they have a schedule to meet, a traditional waterfall method is the only way to go. Nothing could be further from the truth. Not only can Scrum work in these situations, but in my experience, it increases the probability of meeting challenging deadlines. Scrum works well with deadlines because it’s based on empiricism, lean thinking, and an iterative approach to product delivery. In a nutshell, empiricism is making decisions based on what is known. In practice, this means that rather than making all of the critical decisions about an initiative upfront, when the least is known, Agile initiatives practice just-in-time decision-making by planning smaller batches of work more often. Lean thinking means eliminating waste to focus only on the essentials, and iterative delivery involves delivering a usable product frequently.

The Future Is Data Center as a Service

The fact is that whether we realize it or not, we’ve gotten used to thinking of the data center as a fluid thing, particularly if we use cluster paradigms such as Kubernetes. We think of pods like tiny individual computers running individual applications, and we start them up and tear them down at will. We create applications using multicloud and hybrid cloud architectures to take advantage of the best situation for each workload. Edge computing has pushed this analogy even further, as we literally spin up additional nodes on demand, with the network adjusting to the new topology. Rightfully so; with the speed of innovation, we need to be able to tear down a data center that is compromised or bring up a new one to replace it, or to enhance it, at a moment’s notice. In a way, that’s what we’ve been doing with public cloud providers: instantiating “hardware” when we need it and tearing it down when we don’t. We’ve been doing this on the cloud providers’ terms, with each public cloud racing to lock in as many companies and workloads as possible with a race to the bottom on cost so they can control the conversation.

DevSecOps: 5 ways to learn more

There’s a clear connection between DevSecOps culture and practices and the open source community, a relationship that Anchore technical marketing manager Will Kelly recently explored in an opensource.com article, “DevSecOps: An open source story.” As you build your knowledge, getting involved in a DevSecOps-relevant project is another opportunity to expand and extend your experience. That could range from something as simple as joining a project’s community group or Slack to ask questions about a particular tool, to taking on a larger role as a contributor at some point. The threat modeling tool OWASP Threat Dragon, for example, welcomes new contributors via its GitHub repository and website, including testers and coders. ... The value of various technical certifications is a subject of ongoing – or at least on-again, off-again – debate in the InfoSec community. But IT certifications, in general, remain a solid complementary career development component. Considering a DevSecOps-focused certification track is in itself a learning opportunity, since any credential worth more than a passing glance should require some homework to attain.

How Medical Companies are Innovating Through Agile Practices

Within regulatory constraints, there is plenty of room for successful use of Agile and Lean principles, despite the lingering doubts of some in quality assurance or regulatory affairs. Agile teams in other industries have demonstrated that they can develop without any compromise to quality. Additional documentation is necessary in regulated work, but most of it can be automated and generated incrementally, which is a well-established Agile practice. Medical product companies are choosing multiple practices, from both Agile and Lean. Change leaders within the companies are combining those ideas with their own deep knowledge of their organization’s patterns and people. They’re finding creative ways to achieve business goals previously out of reach with traditional “big design up front” practices. ... Our goal here is to show how the same core principles in Agile and Lean played out in very different day-to-day actions at the companies we profiled, and how they drove significant business goals for each company.

The Importance of Developer Velocity and Engineering Processes

At its core, an organization is nothing more than a collection of moving parts. A combination of people and resources moving towards a common goal. Delivering on your objectives requires alignment at the highest levels - something that becomes increasingly difficult as companies scale. Growth increases team sizes creating more dependencies and communication channels within an organization. Collaboration and productivity issues can quickly arise in a fast-scaling environment. It has been observed that adding members to a team drives inefficiency with negligible benefits to team efficacy. This may sound counterintuitive but is a result of the creation of additional communication lines, which increases the chance of organizational misalignment. The addition of communication lines brought on by organization growth also increases the risk of issues related to transparency as teams can be unintentionally left “in the dark.” This effect is compounded if decision making is done on the fly, especially if multiple people are making decisions independent of each other.
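The growth in communication lines described above is quadratic: a fully connected team of n people has n(n-1)/2 pairwise channels. A quick illustration:

```python
def communication_lines(team_size: int) -> int:
    """Pairwise communication channels in a team of n people: n*(n-1)/2."""
    return team_size * (team_size - 1) // 2

for n in (3, 5, 10, 20):
    print(n, communication_lines(n))

# Doubling a 10-person team to 20 roughly quadruples the channels
# (45 -> 190), which is why added members yield diminishing returns.
```

This is the arithmetic behind the observation that adding members drives inefficiency: headcount grows linearly, but the coordination surface grows quadratically.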

Tired of AI? Let’s talk about CI.

Architectures become increasingly complex with each neuron. I suggest looking into how many parameters GPT-4 has ;). Now, you can imagine how many different architectures you can have with the infinite number of configurations. Of course, hardware limits our architecture size, but NVIDIA (and others) are scaling the hardware at an impressive pace. So far, we’ve only examined the computations that occur inside the network with established weights. Finding suitable weights is a difficult task, but luckily math tricks exist to optimize them. If you’re interested in the details, I encourage you to look up backpropagation. Backpropagation exploits the chain rule (from calculus) to optimize the weights. For the sake of this post, it’s not essential to understand how the learning of the weights happens, but it’s necessary to know that backpropagation does it very well. Still, it’s not without caveats. As NNs learn, they optimize all of the weights relative to the data. However, the weights must first be defined — they must have some value. This raises the question: where do we start?
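The chain rule that backpropagation exploits can be shown on the smallest possible "network": one weight, one input, a squared-error loss. This is a hand-rolled sketch of the idea, not any framework's API; the numbers are arbitrary. The analytic gradient matches a numerical approximation, and one gradient step reduces the loss:

```python
def forward(w, x):
    """A one-weight 'network': y = w * x."""
    return w * x

def loss(y, target):
    return (y - target) ** 2

def grad_w(w, x, target):
    # Chain rule: dL/dw = dL/dy * dy/dw = 2*(y - target) * x
    y = forward(w, x)
    return 2 * (y - target) * x

w, x, target, eps = 0.5, 3.0, 2.0, 1e-6

# Sanity-check the analytic gradient against a numerical approximation.
numeric = (loss(forward(w + eps, x), target)
           - loss(forward(w - eps, x), target)) / (2 * eps)
print(grad_w(w, x, target), round(numeric, 4))  # both ~ -3.0

# One gradient-descent step moves the loss down:
w_new = w - 0.01 * grad_w(w, x, target)
print(loss(forward(w, x), target) > loss(forward(w_new, x), target))  # True
```

Backpropagation is this same chain-rule bookkeeping applied layer by layer across millions of weights; and the closing question in the excerpt, where the initial values of `w` come from, is the weight-initialization problem.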

How do databases support AI algorithms?

Oracle has integrated AI routines into their databases in a number of ways, and the company offers a broad set of options in almost every corner of its stack. At the lowest levels, some developers, for instance, are running machine learning algorithms in the Python interpreter that’s built into Oracle’s database. There are also more integrated options like Oracle’s Machine Learning for R, a version that uses R to analyze data stored in Oracle’s databases. Many of the services are incorporated at higher levels — for example, as features for analysis in the data science tools or analytics. IBM also has a number of AI tools that are integrated with their various databases, and the company sometimes calls Db2 “the AI database.” At the lowest level, the database includes functions in its version of SQL to tackle common parts of building AI models, like linear regression. These can be threaded together into customized stored procedures for training. Many IBM AI tools, such as Watson Studio, are designed to connect directly to the database to speed model construction.
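Linear regression, the example given above of a routine built into a database's SQL dialect, reduces to a closed-form computation that such a function might perform. A sketch in plain Python on hypothetical data (the specific in-database implementations will differ):

```python
def fit_line(xs, ys):
    """Fit y ~ a + b*x by ordinary least squares (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y over variance of x.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # intercept ~ 0.09, slope ~ 1.99
```

Pushing this computation into the database, as Db2's SQL functions do, avoids moving the rows out to an external tool just to produce two coefficients.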

A Comprehensive Guide to Maximum Likelihood Estimation and Bayesian Estimation

An estimation function is a function that helps in estimating the parameters of a statistical model from data that has random values. Estimation is the process of extracting parameters from randomly distributed observations. In this article, we are going to have an overview of two estimation functions – Maximum Likelihood Estimation and Bayesian Estimation. Before covering these two, we will try to understand the probability distributions on which both of these estimation functions depend. The major points to be discussed in this article are listed below. ... As the name suggests, in statistics it is a method for estimating the parameters of an assumed probability distribution, where the likelihood function measures the goodness of fit of a statistical model to data for given values of the parameters. The parameters are estimated by maximizing the likelihood function, so that the observed data is most probable under the model.
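For a concrete case: under a Gaussian model with fixed variance, maximizing the likelihood of the observed data yields the sample mean as the estimate of the mean. A sketch on hypothetical observations:

```python
from math import log, pi

def gaussian_log_likelihood(data, mu, sigma=1.0):
    """Log-likelihood of the data under a Normal(mu, sigma) model."""
    n = len(data)
    sse = sum((x - mu) ** 2 for x in data)
    return -n / 2 * log(2 * pi * sigma ** 2) - sse / (2 * sigma ** 2)

data = [4.8, 5.1, 5.3, 4.9, 5.4]  # hypothetical observations
mle = sum(data) / len(data)       # for a Gaussian, the MLE of mu is the sample mean

# The log-likelihood peaks at the MLE; any other candidate scores lower.
for candidate in (4.0, 4.5, mle, 5.5, 6.0):
    print(round(candidate, 2), round(gaussian_log_likelihood(data, candidate), 3))
```

Sweeping candidates like this makes the definition tangible: "maximizing the likelihood function" simply means picking the parameter value at which the observed data is most probable.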

DORA explorers see pandemic boost in numbers of 'elite' DevOps performers

DORA has now added a fifth metric, reliability, defined as the degree to which one "can keep promises and assertions about the software they operate." This is harder to measure, but nevertheless the research on which the report is based asked tech workers to self-assess their reliability. There was a correlation between reliability and the other performance metrics. According to the report, 26 per cent of those polled put themselves into the elite category, compared to 20 per cent in 2019, and seven per cent in 2018. Are higher performing techies more likely to respond to the survey? That seems likely, and self-assessment is also a flawed approach; but nevertheless it is an encouraging trend, presuming agreement that these metrics and survey methodology are reasonable. Much of the report reiterates conventional DevOps wisdom. NIST's characteristics of cloud computing [PDF] are found to be important. "What really matters is how teams implement their cloud services, not just that they are using cloud technologies," the researchers said, including things like on-demand self service for cloud resources.

Why Our Agile Journey Led Us to Ditch the Relational Database

Despite our developers having zero experience with MongoDB prior to our first release, they still were able to ship to production in eight weeks while eliminating more than 600 lines of code, coming in under time and budget. Pretty good, right? Additionally, the feedback was that the document data model helped eliminate the tedious data mapping and modeling work they were used to from a relational database. This freed up time that our developers could allocate to high-priority projects. When we first began using MongoDB in summer 2017, we had two collections in production. A year later, that had grown into 120 collections deployed into production, writing 10 million documents daily. Now, each team was able to own its own dependency and have its own dedicated microservice and database, leading to a single pipeline for application and database changes. These changes, along with the hours saved not spent refactoring our data model, allowed us to cut our deployment time to minutes, down from hours or even days.

Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik