9 metrics that can matter to modern software development teams

This is a translation of an article, prepared ahead of the launch of the Team Lead 2.0 course.

As I noted in the article “Why metrics don’t matter in software development unless you pair them with business goals”, metrics must be chosen very carefully so that they answer the questions the business actually asks. This is the critical point: measurements must be designed to answer business questions. And those questions will never sound like “How many thousands of lines of code do we have in the project right now?”

This article continues the theme begun in the previous one. First, we’ll look at specific metrics that every team should use, or at least plan to use, to significantly improve performance. Note that the title begins with “9 metrics that CAN matter…”, because what matters is how these metrics add value to the business; how you use them is up to you. We conclude with how to combine these metrics meaningfully, and how to formulate and test a hypothesis about business value.

Start by measuring the right things.

Here are nine objective indicators (metrics) that should be monitored continuously to incrementally improve your processes and production environment. Improving these numbers does not guarantee that customer satisfaction will grow by leaps and bounds, but they are still worth tracking. In the “Putting It All Together” section below, you will find out why.

Agile process metrics

For agile and lean processes, the main metrics are lead time, cycle time, team velocity, and open/close rates. These indicators help with planning and with decisions about process improvement. Although they measure neither success nor added value, and have nothing to do with the objective quality of the software, you should still track them. I will explain why below.

  • Lead time – the time that passes from idea to delivered software. If you want to respond more quickly to the needs of your customers, work to reduce lead time, often by simplifying decision-making and shortening waiting times. Lead time includes cycle time.
  • Cycle time – the amount of time it takes to make a change to the software system and deliver that change to production. Teams using continuous delivery can measure cycle time in minutes or even seconds instead of months.
  • Team velocity – the number of units of work a team typically completes in one iteration (sprint). This number should be used only for iteration planning. Comparing teams by velocity is meaningless, since the metric is based on subjective estimates. Treating velocity as a measure of success is just as useless, and setting a goal like “hit a certain velocity” distorts the metric’s value for estimation and planning.
  • Open/close rates – the number of issues opened and closed per unit of time. The overall trend matters more than the specific numbers.

When any or all of these metrics fall outside the expected range or drift off course over time, do not agonize over the cause. Go talk to the team, get the whole story, and let the team decide whether there is cause for concern and, if so, how to fix the situation.

You cannot know for sure, or even reliably guess, where these numbers come from, but the metrics will tell you where and at which processes you should look. For example, a high open rate and a low close rate over several iterations may mean that production issues now have lower priority than new features; or that the team is focused on reducing technical debt and fixing whole classes of problems; or that the only person who knew how to fix them quit, or something happened to them. The numbers alone will not give you the root cause.
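To make the agile metrics above concrete, here is a minimal sketch of how lead time, cycle time, and open/close counts could be computed from issue-tracker records. The record layout and field names (`created`, `dev_started`, `deployed`) are invented for illustration and do not come from any particular tracker’s API:

```python
from datetime import date

# Hypothetical issue records; field names are illustrative, not from a real tracker.
issues = [
    {"created": date(2020, 3, 2), "dev_started": date(2020, 3, 9),
     "deployed": date(2020, 3, 12)},
    {"created": date(2020, 3, 3), "dev_started": date(2020, 3, 16),
     "deployed": date(2020, 3, 20)},
    {"created": date(2020, 3, 10), "dev_started": None,  # still open
     "deployed": None},
]

def days(a, b):
    return (b - a).days

done = [i for i in issues if i["deployed"]]

# Lead time: idea (issue created) -> delivered to production.
lead_times = [days(i["created"], i["deployed"]) for i in done]
# Cycle time: work started -> delivered; always contained within lead time.
cycle_times = [days(i["dev_started"], i["deployed"]) for i in done]

print("avg lead time (days):", sum(lead_times) / len(lead_times))
print("avg cycle time (days):", sum(cycle_times) / len(cycle_times))

# Open/close counts for the period: the trend matters more than the raw numbers.
opened, closed = len(issues), len(done)
print("opened:", opened, "closed:", closed)
```

In practice these dates would come from your tracker’s API rather than literals, but the arithmetic is the same.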

Production metrics

  • Mean time between failures (MTBF)
  • Mean time to recover/repair (MTTR)

Both of these metrics are common indicators of the performance of your software system in your current production environment.

Application crash rate – the number of times the application crashes divided by the number of times it was used. This indicator is directly related to MTBF and MTTR.

Note that none of these three metrics tells you anything about individual features or affected users. Still, the lower the values, the better. Modern application monitoring systems make it easy to collect these metrics per application and per transaction, but setting the right ranges for alerts or scaling triggers (in cloud systems) takes time and careful thought.
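As a rough illustration of how these three indicators relate, here is a sketch that derives MTBF, MTTR, and crash rate from a hypothetical incident log. The numbers and the `(uptime, recovery)` record shape are invented for the example:

```python
# Hypothetical log: (hours of operation before a crash, minutes to recover).
incidents = [(120.0, 4.0), (340.0, 12.0), (95.0, 3.0), (410.0, 6.0)]
uses = 40_000                 # sessions/transactions in the same period (assumed)
crashes = len(incidents)

mtbf_hours = sum(up for up, _ in incidents) / crashes      # mean time between failures
mttr_minutes = sum(rec for _, rec in incidents) / crashes  # mean time to recover
crash_rate = crashes / uses                                # crashes per use

print(f"MTBF: {mtbf_hours:.1f} h, MTTR: {mttr_minutes:.2f} min, "
      f"crash rate: {crash_rate:.5f}")
```

A real monitoring system would aggregate this per application and per transaction, as noted above, rather than from a flat list.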

Of course, we would all like our software never to fail, but that is statistically improbable. We would like no important data to be lost and the application to recover instantly when it crashes, but that can be extremely hard to achieve. However, if your software is your source of income, the effort is worth it.

Beyond MTBF and MTTR, more precise metrics are based on individual transactions, applications, and so on, and they reflect both the business value delivered and the cost of troubleshooting. If your transaction-processing application crashes once in a hundred runs but recovers within 1-2 seconds without critical loss of information, you can live with a 1% failure rate. But if an application that processes 100,000 transactions a day crashes at that same rate, loses $100 per failure, and costs $50 per recovery, then fixing that 1% becomes a priority. And that fix will, in the end, make a real difference to the bottom line.
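The arithmetic behind that example is worth spelling out, since it is what turns a “1% failure rate” into a business number:

```python
transactions_per_day = 100_000
loss_per_failure = 100        # dollars lost per failed transaction
recovery_cost = 50            # dollars spent recovering from each failure

failures_per_day = transactions_per_day // 100   # one crash per hundred transactions
daily_cost = failures_per_day * (loss_per_failure + recovery_cost)

print(f"{failures_per_day} failures/day cost ${daily_cost:,}/day")
# -> 1000 failures/day cost $150,000/day
```

At $150,000 a day, even an expensive fix for that 1% pays for itself almost immediately.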

Security metrics

Security is an important aspect of software quality that is often overlooked in the later stages of development (or in later stages of the life cycle). Security analysis tools can be run as part of the build pipeline, along with more specialized assessment methods and stress tests. Security requirements are often common sense, but the development team must keep them in mind, along with the metrics derived from them.

The full range of security methods and related metrics is beyond the scope of this article, but, as with agile process metrics and production metrics, there are several very specific metrics that will go a long way towards satisfying the needs of your customers.

  • Endpoint incidents – the number of endpoints (mobile devices, workstations, etc.) on which a virus was detected over a given period.
  • MTTR (mean time to recover) – in a security context, this is the time between the detection of a security breach and the deployment of a working fix. As with the MTTR we discussed under production metrics, security MTTR should be tracked over specific time intervals.

For both of these metrics, a decreasing value over time means you are moving in the right direction. Fewer endpoint incidents speak for themselves. As MTTR decreases, developers are working more effectively, understanding security problems better, and finding and fixing bugs faster.
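Since the trend over intervals is what matters, security MTTR could be bucketed per quarter and compared over time. A minimal sketch, assuming hypothetical incident records with `detected`/`fixed` timestamps (field names invented for illustration):

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical incidents: breach detected -> fix deployed.
incidents = [
    {"detected": datetime(2020, 1, 10), "fixed": datetime(2020, 1, 17)},
    {"detected": datetime(2020, 2, 3),  "fixed": datetime(2020, 2, 8)},
    {"detected": datetime(2020, 4, 20), "fixed": datetime(2020, 4, 23)},
    {"detected": datetime(2020, 5, 2),  "fixed": datetime(2020, 5, 4)},
]

# Bucket each incident's detection-to-fix duration by (year, quarter).
by_quarter = defaultdict(list)
for inc in incidents:
    q = (inc["detected"].year, (inc["detected"].month - 1) // 3 + 1)
    by_quarter[q].append((inc["fixed"] - inc["detected"]).days)

for quarter, durations in sorted(by_quarter.items()):
    print(quarter, "security MTTR (days):", sum(durations) / len(durations))
```

Here MTTR falls from one quarter to the next, which is the direction you want the trend to move.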

You will find more ways to apply security metrics in software development in the articles Application Security for Agile Projects and Security Threat Models: An Agile Introduction.

A note on source code metrics

These days it is quite easy to hook a source code scanner into the build pipeline and get a pile of objective metrics. There are empirical averages, suggested ranges, and logical arguments for the relative importance of these indicators. In practice, however, these tools are most useful for enforcing a particular coding style, flagging certain antipatterns, and spotting anomalies and patterns.

Do not get stuck on the numbers. Let me give an example of what I am driving at.

Suppose you find a method in a class with an absurd NPATH complexity of 52 million, meaning 52 million test cases would be needed to cover every execution path through it. You could refactor the code into a simpler structure, but before you do, think about how that would affect the business logic. Most likely, this old, scary code works well enough (even if it is not fully covered by tests). It is valuable to show the antipattern to your team so they do not repeat it, but fixing this method will, on the whole, not move any business metric worth caring about.
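To see why a number like 52 million is not exotic: NPATH multiplies the path counts of statements in sequence, so independent if-statements compound exponentially. A tiny illustration:

```python
def npath_sequential_ifs(n: int) -> int:
    """NPATH of a method that is just n independent if-statements in a row.

    Each if contributes a factor of 2 (taken / not taken), and sequential
    statements multiply, giving 2**n acyclic paths.
    """
    return 2 ** n

print(npath_sequential_ifs(10))  # 1024 paths
print(npath_sequential_ifs(26))  # 67108864 -- already past the 52 million above
```

Just 26 unguarded if-statements in one method is enough to exceed the figure in the example, which is why such values usually signal an antipattern rather than something to test exhaustively.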

It is best if the team agrees on the level of rigor and the rules its code must conform to, but keep in mind that investigating anomalies and worrying about every emerging pattern can eat up a lot of time.

Putting It All Together: Success is a Universal Metric

The main advantage of using automated tools to track and measure quality indicators and analyze user behavior is that they free up time to work on the indicators that really matter – the success metrics.

How to use metrics to succeed

A business has goals. Behind goals lurk questions, such as “What does success look like?” or “How will the product affect customer behavior?” Well-formulated questions, in turn, point to metrics.

In other words, metrics should be used only to answer the questions posed and to test hypotheses formulated against a specific business goal. And those questions are worth answering only as long as the answers lead to positive change.

Don’t all projects share a certain invariant set of goals, questions, and hypotheses, and therefore metrics?

Yes, but only at the business level. Business-level metrics such as user engagement, close rate, and revenue provide feedback on how the business is doing in the real world. Software changes that affect the business will also move these metrics.

At a finer level of resolution, every feature and user story can have its own success metric – preferably a single one, directly tied to the value delivered to the customer. Closing 9 out of 10 stories in a sprint for features that never ship is failure, not success. Delivering stories that customers do not need and do not use is not success but wasted time and effort. Creating a story that makes users even a little happier is success. And creating a story that turns out not to improve the user experience is also a success, because you now know that this business hypothesis does not work, and you can free up resources to pursue other ideas.

How to formulate a value hypothesis

A value hypothesis is a statement about what you believe will happen as a result of adding a certain feature. The relationship between the software, the desired outcome, and the metrics forms the value hypothesis for a feature (or system, story, update, etc.). The hypothesis should state the expected change in the target metric over a given period, along with some notion of cost-effectiveness. You will need to talk with the team and the product owner to figure out what, exactly, this feature or story creates or improves for the business before you can formulate a value hypothesis for it. You may have to ask “why” several times (like a child) to surface unspoken assumptions, so be patient. Once you understand what the business value should be, you can start asking the questions that will lead you to metrics that answer them.

For example, a “technical” story about speeding up checkout in an online store may rest on the idea that faster checkout will increase sales. But why do we think so? How many people abandon their shopping carts during checkout? If you reach consensus (backing your assumptions with data), the value hypothesis might sound like this: “We believe that speeding up the checkout process will reduce the cart abandonment rate. This, in turn, will increase sales and improve the user experience.”

You probably assume users will like the faster checkout, but it would not hurt to ask whether they even noticed it. Cart abandonment and sales can be measured for some period before the new process is introduced and after. If the cart abandonment rate falls and sales rise (allowing for statistical fluctuations), the evidence supports the hypothesis, and you can ask whether speeding up checkout even further is worthwhile. If not, let this metric fade into the background (or drop it entirely so it does not distract you) and turn to the next hypothesis. If cart abandonment falls but sales stay flat, measure for longer or rethink the assumed link between abandonment and sales. In other words, use metrics to drive meaningful improvement and to learn.
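The article does not prescribe a statistical method for “allowing for statistical fluctuations”; one common choice is a two-proportion z-test on the abandonment rates before and after the change. A sketch with invented funnel counts:

```python
from math import sqrt

# Hypothetical funnel counts before and after speeding up checkout.
before_started, before_completed = 10_000, 6_500   # 35% abandonment
after_started, after_completed = 10_000, 7_000     # 30% abandonment

p1 = 1 - before_completed / before_started   # abandonment rate before
p2 = 1 - after_completed / after_started     # abandonment rate after

# Two-proportion z-test: is the drop bigger than random noise would explain?
abandoned = (before_started - before_completed) + (after_started - after_completed)
p_pool = abandoned / (before_started + after_started)
se = sqrt(p_pool * (1 - p_pool) * (1 / before_started + 1 / after_started))
z = (p1 - p2) / se

print(f"abandonment {p1:.0%} -> {p2:.0%}, z = {z:.2f}")  # |z| > 1.96 ~ significant at 95%
```

With these (made-up) numbers the drop is far outside noise, so the evidence would support the hypothesis; whether sales actually rose still has to be checked separately, as described above.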

In some cases, the hypothesis is erroneous, and we discard the metrics (and roll back to the old version of the software) after a few days. In other cases, the hypothesis may be true, so we continue to improve performance in this area for many years.

Six heuristics for efficient use of metrics

We have seen how subjective metrics can contribute more to business success than the good old objective quality indicators. The knowledge gained and the opportunity to learn outweigh the effort required to find and measure relevant business indicators. Business conditions and opportunities change constantly, so instead of deriving a fragile formula to follow, I will offer six rules of thumb, or heuristics, that help maintain focus and flexibility. May they help you on your journey to quality software and success!

  1. Metrics cannot tell you the whole story; only the team can (hat tip to Todd DeCapua).
  2. Comparing snowflakes is a waste of time.
  3. You can measure almost everything, but you can’t pay attention to everything.
  4. Business success metrics help improve software, not the other way around.
  5. Each function adds value, whether you measure it or not.
  6. Measure only those indicators that matter at a given moment.


