Developer Adoption
Developer Adoption captures an organization's ability to reduce waste by adopting common patterns, platforms, and tools.
Why Measure?
Lean Thinking, the grandparent of Agile, DevOps, and everything else we talk about in the digital transformation realm, is centered around the idea of eliminating muda: waste in the form of human effort that produces no value.
Further Reading
Read more about muda and the 5 principles of Lean Thinking that help to banish waste in the book Lean Thinking by James P. Womack and Daniel T. Jones.
A common form of waste in an IT organization is when multiple people or teams with similar needs work separately to solve the same problem. This can happen for a number of reasons, but the culprit is always some form of communication gap. The recent trends toward decentralization of IT, agile, cloud adoption, microservices architectures, and full-stack development have made it increasingly difficult for independent teams to share a common set of tools, platforms, and solutions that help to reduce the development team's cognitive load while still allowing the autonomy that teams need to do great work.
This struggle has given birth to new practices like Platform Engineering, Inner Sourcing, and Recommoning. The goal is to provide teams with a set of common components (platforms, tools, and solutions) that they can consume on demand as a way of reducing cognitive load and rework by individual teams.
Successful adoption of the right combination of components should lead to an improved developer experience, allowing developers to focus more on value-related work, thereby reducing waste and increasing the value potential to the organization. To achieve this, it is important to track the value that each component available to developers provides by measuring whether it is being well adopted. This is how we get the outcome of Developer Adoption.
Parameters of success
Before we get into the measures that make up the Developer Adoption outcome, we have a few decisions to make and some shared understanding to build. Across each of the measures in Developer Adoption, we have three parameters of success that need to be defined and agreed upon: target user, adoption event, and active user. A good way to figure out what success looks like for a component is to think about who has to take which observable actions in that component to translate most directly to value.
We need to define an entity that we will call our target user. Typically we would think of a user as an individual person. However, depending on the platform or tool we are trying to measure, we may want to define a user differently. A user might be a team, an application, a product, or a service.
Getting more specific about the type of user can help us create a stronger product and will help us define the adoption event. For instance, if you're measuring adoption of a learning platform, then your adoption event might be an employee completing a course. In this case the target user would be an individual employee. However, if you're measuring adoption of an application hosting platform or deployment tool, then your adoption event would likely be an application getting deployed into a certain environment, or passing a particular stage of a pipeline. In this case it makes more sense to track either the application itself or the team that owns the application.
Different components will likely have different definitions of target users and adoption events, and that's okay. What's important is to make sure that within a component and its measures, we're consistent in our definitions for target user and adoption event.
Once we have identified who our target user is and the adoption event we want to track, we can then use those to define what we consider to be an active user of a given component. An active user would be a user who has achieved an adoption event within a certain period of time, let's say the past week.
| Parameter | Description | Examples |
| --- | --- | --- |
| Target user | The type of user that the component provides value to (individual, team, application, product, or service) | Frontend developers, business-to-business APIs, customer-facing apps |
| Adoption event | The observable action in the component that translates most directly to value | Deploys an application to a specific environment, completes a story, commits some code to a specific branch, logs a certain number of hours of active use |
| Active user | Combines a target user and an adoption event with a time scope and frequency to determine what it means to be "active" in the component | A developer who has committed code from their cloud workspace within the past week |
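As a sketch, the three parameters of success for a single component can be written down explicitly before any measurement starts. The component and values below are hypothetical examples, not prescribed definitions:

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical sketch: recording the three parameters of success for one
# component (an application hosting platform, in this example).
@dataclass
class AdoptionCriteria:
    target_user: str          # the kind of entity we count (e.g. "application")
    adoption_event: str       # the observable action that translates to value
    active_window: timedelta  # how recently the event must have occurred

hosting_platform = AdoptionCriteria(
    target_user="application",
    adoption_event="deployed to the production environment",
    active_window=timedelta(weeks=1),
)

print(hosting_platform.target_user)  # application
```

Writing the parameters down like this makes it easy to check that every measure for a component uses the same definitions.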
Measures
Here we break down the six measures of the Developer Adoption outcome in detail. We'll cover the raw data points that we'll need to collect from our various components and then present the formulas used to calculate each measure.
Common knowledge
Data points and formulas used by more than one measure.
Data Points
Adoption events

Adoption events \((E)\): A set of valuable target-user interactions with the component
Expressed as a set of tuples (string, timestamp), where
 the first element is the target user's username
 the second element is the timestamp when the interaction occurred
Formulas
Active users at a given time interval

Active users at a given time interval \((A(t,\Delta t))\): Given a timestamp \(t\) and a time interval \(\Delta t\), the unique target users who have at least one adoption event between \(t-\Delta t\) and \(t\) form the set:
\[ A(t,\Delta t) = \{\text{username} : \exists\ x,\ t-\Delta t \leq x \leq t,\ (\text{username},x) \in E\} \]
Number of active users at a given time interval

Number of active users at a given time interval \((U(t,\Delta t))\): Given a timestamp \(t\) and a time interval \(\Delta t\), the number of unique target users who have at least one adoption event between \(t-\Delta t\) and \(t\) is calculated as follows:
\[ U(t,\Delta t) = \text{count}(A(t,\Delta t)) \]
Number of adoption events at a given time interval

Number of adoption events at a given time interval \((E(t,\Delta t))\): Given a timestamp \(t\) and a time interval \(\Delta t\), the number of adoption events between \(t-\Delta t\) and \(t\) is calculated as follows:
\[ E(t,\Delta t) = \text{count}(\{(\text{username},x) : t-\Delta t \leq x \leq t,\ (\text{username},x) \in E\}) \]
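These common data points and formulas can be computed directly from the set of (username, timestamp) tuples. In this sketch the usernames, timestamps, and window are illustrative:

```python
from datetime import datetime, timedelta

# E: adoption events as (username, timestamp) tuples, as defined above.
# The usernames and timestamps are made up for illustration.
E = {
    ("alice", datetime(2024, 1, 8)),
    ("alice", datetime(2024, 1, 9)),
    ("bob",   datetime(2024, 1, 9)),
    ("carol", datetime(2024, 1, 1)),  # falls outside the window below
}

def active_users(t, dt):
    """A(t, dt): unique users with at least one event between t - dt and t."""
    return {user for (user, x) in E if t - dt <= x <= t}

def num_active_users(t, dt):
    """U(t, dt) = count(A(t, dt))."""
    return len(active_users(t, dt))

def num_adoption_events(t, dt):
    """E(t, dt): number of adoption events between t - dt and t."""
    return len([(u, x) for (u, x) in E if t - dt <= x <= t])

t, dt = datetime(2024, 1, 10), timedelta(weeks=1)
print(sorted(active_users(t, dt)))  # ['alice', 'bob']
print(num_active_users(t, dt))      # 2
print(num_adoption_events(t, dt))   # 3
```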
Adoption rate
Adoption Rate (\(AR\)) is the rate at which a component is acquiring new users. This serves as an indicator of the component's ability to scale to support multiple teams and products, as well as whether the component is compelling enough that developers want to try it out.
Data Points
Formulas
Number of active users at a given time interval \((U(t,\Delta t))\)
 Adoption Rate \((AR(t,\Delta t))\)

The rate of change of users adopting the component at timestamp \(t\) over time interval \(\Delta t\)
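The description above does not pin down a formula for \(AR\). One reasonable reading, sketched below as an assumption rather than the definitive definition, is the percentage change in active users between two consecutive windows of length \(\Delta t\):

```python
def adoption_rate(current_users, previous_users):
    """Assumed AR: percentage change in active users between two consecutive
    windows, i.e. U(t, dt) versus U(t - dt, dt). This formula is an
    interpretation, not one given explicitly in the text."""
    return (current_users - previous_users) / previous_users * 100

# E.g. growing from 10 to 12 active users between windows:
print(adoption_rate(12, 10))  # 20.0
```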
Retention rate
So we've got a platform up and running, onboarding is really fast and we've been able to attract a bunch of new users. Now we need to ask a new set of questions. Is the platform meeting the needs of its users? Are people continually using it? Measuring retention rate tells us whether users that have adopted our platform continue to use it over time to do valuable work. It is the yin to adoption rate's yang.
Data Points
Formulas
Active users at a given time interval \((A(t,\Delta t))\)
Number of active users at a given time interval \((U(t,\Delta t))\)
 Number of new users at a given time interval \((N(t,\Delta t))\)

Given a timestamp \(t\) and a time interval \(\Delta t\), the number of new target users between \(t-\Delta t\) and \(t\) is calculated as follows:
\[ N(t,\Delta t) = \text{count}(A(t,\Delta t) \setminus A(t-\Delta t,\Delta t)) \]
 Retention Rate \((RR(t,\Delta t))\)

The percentage of users that were active at timestamp \(t\) over time interval \(\Delta t\)
\[ RR(t,\Delta t) = \left(\frac{U(t,\Delta t) - N(t,\Delta t)}{U(t-\Delta t,\Delta t)}\right) \cdot 100 \]
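Both \(N\) and \(RR\) fall out of comparing the active-user sets of two consecutive windows. A small sketch with illustrative usernames:

```python
# Active-user sets for two consecutive windows of length dt (illustrative).
previous_window = {"alice", "bob", "carol", "dan"}  # A(t - dt, dt)
current_window  = {"alice", "bob", "erin"}          # A(t, dt)

# N(t, dt) = count(A(t, dt) \ A(t - dt, dt)): users new in this window.
new_users = len(current_window - previous_window)

# RR(t, dt) = (U(t, dt) - N(t, dt)) / U(t - dt, dt) * 100
retention_rate = (len(current_window) - new_users) / len(previous_window) * 100

print(new_users)       # 1 (erin)
print(retention_rate)  # 50.0 (2 of the 4 previously active users stayed)
```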
Adoption lead time
Adoption lead time measures how quickly a new user or team is able to onboard to a new component. By tracking lead time to adopt an internal product, we gain insight into the quality of the user experience for new users, as well as indications of constraints in the process.
It's important with this measurement to capture not only the time it takes to get access to a given product, but the total time from when a user shows intent to use the product (usually captured as a request being made) to the moment when that user has been able to use the product to do something valuable.
Data Points
Formulas
 Adoption lead time of a target user \((L(\text{username}))\)

Given a target user username, the adoption lead time for username is calculated as follows:
\[ L(\text{username}) = \min(\{x : (\text{username},x) \in E\}) - t_{\text{username}} \]
where \(t_{\text{username}}\) is the timestamp when username requested access to the component
 Average Adoption Lead Time \((\bar{L}(\text{username}_1,\ldots,\text{username}_N))\)

The average adoption lead time over a collection of \(N\) adoption lead times of different target users
\[ \bar{L}(\text{username}_1,\ldots,\text{username}_N) = \frac{\sum_{i=1}^{N}L(\text{username}_{i})}{N} \]
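A sketch of both lead-time formulas, pairing each user's access-request timestamp with their earliest adoption event. The request times and events below are illustrative:

```python
from datetime import datetime, timedelta

# t_username: when each user requested access (illustrative values).
request_time = {"alice": datetime(2024, 1, 1), "bob": datetime(2024, 1, 2)}

# E: adoption events as (username, timestamp) tuples (illustrative values).
E = {
    ("alice", datetime(2024, 1, 3)),
    ("alice", datetime(2024, 1, 9)),
    ("bob",   datetime(2024, 1, 6)),
}

def lead_time(username):
    """L(username): earliest adoption event minus the access request time."""
    first_event = min(x for (u, x) in E if u == username)
    return first_event - request_time[username]

# Average adoption lead time over a collection of users.
users = ["alice", "bob"]
average = sum((lead_time(u) for u in users), start=timedelta(0)) / len(users)

print(lead_time("alice").days)  # 2
print(average.days)             # 3 (mean of 2 and 4 days)
```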
Adoption density
So far we've focused most of our measurement on identifying active users: those who meet a minimum criterion to be considered active. This treats users who use a component once the same as those who use it 100 times. Since we've identified our adoption events as instances of a user doing something valuable, we should naturally want to see more and more of those events over time. This is what we want to capture with Adoption Density.
Data Points
Formulas
Number of adoption events at a given time interval \((E(t,\Delta t))\)
 Adoption Density \((AD(t,\Delta t))\)

The density growth of adoption events at timestamp \(t\) over time interval \(\Delta t\)
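As with adoption rate, the text describes \(AD\) without giving an explicit formula. The sketch below assumes it is the percentage growth in adoption events between two consecutive windows of length \(\Delta t\); treat the formula as an interpretation:

```python
def adoption_density_growth(current_events, previous_events):
    """Assumed AD: percentage change in adoption events between two
    consecutive windows, i.e. E(t, dt) versus E(t - dt, dt). This is an
    interpretation of the prose, not a formula given in the text."""
    return (current_events - previous_events) / previous_events * 100

# E.g. growing from 120 to 150 adoption events between windows:
print(adoption_density_growth(150, 120))  # 25.0
```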
Operational efficiency
As our reusable components get adopted and we start seeing success from a user perspective, it becomes important to monitor the financial sustainability of maintaining and evolving these components. This helps ensure that we are actually getting a return on investment as we scale. To track this, we measure operational efficiency: the ratio of the effort required to maintain and evolve the platform to the level of active adoption of the platform.
The simplest way to calculate operational efficiency is by comparing the number of people maintaining the component to the total number of adoption events the platform has.
Data Points
 Component Maintainers (\(M\))

A record of how many maintainers the component has had over time
Expressed as a set of tuples (integer, timestamp), where
 the first element is the number of maintainers
 the second element is the timestamp when the number changed
Formulas
Number of adoption events at a given time interval \((E(t,\Delta t))\)
 Number of maintainers at a given time interval \((M(t,\Delta t))\)

Given a timestamp \(t\) and a time interval \(\Delta t\), the number of maintainers between \(t-\Delta t\) and \(t\) is calculated as follows:
\[ M(t,\Delta t) = \max(\{\text{maintainers} : x = \max(\{y : (\text{maintainers},y) \in M,\ y \leq t-\Delta t\}) \lor t-\Delta t < x \leq t,\ (\text{maintainers},x) \in M\}) \]
 Operational Efficiency \((OE(t,\Delta t))\)

The operational efficiency at timestamp \(t\) over time interval \(\Delta t\)
\[ OE(t,\Delta t) = \frac{E(t,\Delta t)}{M(t,\Delta t)} \]
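A sketch of \(M(t,\Delta t)\) and \(OE(t,\Delta t)\). Since the maintainer formula in the text is terse, this assumes \(M(t,\Delta t)\) is the maximum of the headcount in effect at the start of the window and any headcounts recorded during it; the change log and event count are illustrative:

```python
from datetime import datetime, timedelta

# M: (maintainers, timestamp) tuples recording when the headcount changed
# (illustrative values).
M = [
    (2, datetime(2023, 11, 1)),
    (4, datetime(2024, 1, 5)),
    (3, datetime(2024, 1, 8)),
]

def maintainers_in_window(t, dt):
    """Assumed M(t, dt): max of the count in effect at the start of the
    window and any counts recorded during (t - dt, t]."""
    before = [(m, x) for (m, x) in M if x <= t - dt]
    in_effect = [max(before, key=lambda e: e[1])[0]] if before else []
    during = [m for (m, x) in M if t - dt < x <= t]
    return max(in_effect + during)

t, dt = datetime(2024, 1, 10), timedelta(weeks=1)
print(maintainers_in_window(t, dt))  # 4

# OE(t, dt) = E(t, dt) / M(t, dt); suppose 240 adoption events this window.
print(240 / maintainers_in_window(t, dt))  # 60.0 events per maintainer
```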
Developer Satisfaction
Measures the extent to which the platform is meeting the needs and wants of developers.
Data Points
 Net promoter score survey \((S)\)

A set of scores from a net promoter score survey collected over time
Expressed as a set of tuples (set of integers, timestamp), where
 the first element is the set of scores (between 0 and 10) that the target users gave the component
 the second element is the timestamp when the number of scores changed
Formulas
Number of active users at a given time interval \((U(t,\Delta t))\)
 Number of net promoter score survey responses at a given time interval \((S(t,\Delta t))\)

Given a timestamp \(t\) and a time interval \(\Delta t\), the number of net promoter score survey responses between \(t-\Delta t\) and \(t\) is calculated as follows:
\[ S(t,\Delta t) = \min(\{\text{count}(\text{scores}) : x = \max(\{y : (\text{scores},y) \in S,\ y \leq t-\Delta t\}) \lor t-\Delta t < x \leq t,\ (\text{scores},x) \in S\}) \]
 Survey response rate \((SR(t,\Delta t))\)

Given a timestamp \(t\) and a time interval \(\Delta t\), the percentage of active users who responded to the survey between \(t-\Delta t\) and \(t\) is calculated as follows:
\[ SR(t,\Delta t) = \frac{S(t,\Delta t)}{U(t,\Delta t)} \cdot 100 \]
 Net Promoter Score (\(NPS(t,\Delta t)\))

The Net Promoter Score at timestamp \(t\) over time interval \(\Delta t\)
\[ NPS(t,\Delta t) = \frac{S_P(t,\Delta t) - S_D(t,\Delta t)}{S(t,\Delta t)} \cdot 100 \]
where \(S_P\) is the number of responses with scores of 9 or 10 (promoters) and \(S_D\) is the number of responses with scores of 6 or below (detractors)
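The NPS calculation can be sketched directly from a flat list of 0-10 scores; the scores below are illustrative:

```python
# Illustrative survey scores (each between 0 and 10).
scores = [10, 9, 9, 8, 7, 6, 3]

# Promoters score 9-10; detractors score 6 or below; 7-8 are passives.
promoters = sum(1 for s in scores if s >= 9)
detractors = sum(1 for s in scores if s <= 6)

# NPS = (promoters - detractors) / total responses * 100
nps = (promoters - detractors) / len(scores) * 100

print(round(nps, 1))  # 14.3
```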