How to properly measure code speed in .NET

Let's talk about code benchmarking: what it is and why you need it. We will also show how to evaluate the performance of code in a C# project based on benchmarking results.

So, you have a solution to a problem, and now you need to evaluate how optimal that solution is from a performance point of view. The most obvious option is to use Stopwatch, like this:
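
A minimal sketch of that approach (the operation being timed here is just a placeholder; any piece of code you want to evaluate would go in its place):

```csharp
using System;
using System.Diagnostics;

var stopwatch = Stopwatch.StartNew();

// The code whose speed we want to evaluate goes here
var year = DateTime.Parse("2024-05-01").Year;

stopwatch.Stop();
Console.WriteLine($"Elapsed: {stopwatch.Elapsed.TotalMilliseconds} ms (result: {year})");
```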

But there are several problems with this method:

  • It is quite imprecise: the code being evaluated is executed only once, and the measured time can be affected by various side effects (disk access, a cold cache, processor context switches, other running applications, and so on).

  • It doesn't force you to test the application in a Release build. Much of the code is optimized automatically during compilation without our participation, and this can seriously affect the final result.

  • Your algorithm may perform well on a small data set but poorly on a large one (or vice versa). To test performance in different situations with different data sets, you would have to write separate measurement code for each of them.

So what other options do we have? How do we properly evaluate the performance of our code? This is where BenchmarkDotNet comes to our aid.

Setting up a benchmark

BenchmarkDotNet is a NuGet package that you can install into any type of application and then use to measure the speed of code execution. To do this, we only need two things: a class that contains the code to benchmark, and a runner that executes it.

Here's what a benchmarking class would look like in its simplest form:
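
A minimal sketch of such a class, assuming we are benchmarking a method that extracts the year from a date string (the class name, the input string and the method body are illustrative; the attributes are explained below):

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;

[MemoryDiagnoser]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class DateParserBenchmarks
{
    private const string DateString = "2024-05-01T00:00:00";

    // The reference method against which all other variants will be compared
    [Benchmark(Baseline = true)]
    public int GetYearFromDateTime() => DateTime.Parse(DateString).Year;
}
```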

Let's figure out what we have in this class. Let's start with the class attributes:

MemoryDiagnoser collects information about the operation of the Garbage Collector and the allocated memory during the execution of your code.

Orderer determines the order in which the final results are displayed in the table. In our case it is FastestToSlowest. This means that the fastest code will be first in the results, and the slowest will be last.

RankColumn adds a column to the final report and numbers the results from 1 to X.

On the method itself we have added the Benchmark attribute. It marks the method as one of the cases to be measured. The Baseline = true parameter says that we will treat this method's performance as the reference point, and evaluate the other variants of the algorithm relative to it.

To run the benchmark we need the second piece of the puzzle – the runner. This part is simple: go to Program.cs (we're still talking about a console application) and add one line with BenchmarkRunner:
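
Assuming the benchmark class is called DateParserBenchmarks, as in the sketch above, the whole Program.cs can be as small as this:

```csharp
using BenchmarkDotNet.Running;

// Hands the benchmark class over to BenchmarkDotNet, which builds, warms up and measures it
BenchmarkRunner.Run<DateParserBenchmarks>();
```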

After this, we can build the application in the Release configuration and run it (for example, with dotnet run -c Release).

Analysis of results

If we did everything correctly, then after launch we will see BenchmarkRunner execute our code many times and, at the end, produce this report:

Important: any anomalous runs (those that were much faster or much slower than the average) are excluded from the final report. The removed outliers are listed alongside the resulting table.

The report contains quite a lot of data: the performance of the code, the version of the OS the test was run on, the processor used, and the .NET version. But the main information that interests us is the summary table at the end. In it we see:

  • Mean – the average time it takes to execute our code;

  • Error – the estimation error (half of the 99.9% confidence interval);

  • StdDev – the standard deviation of all measurements;

  • Ratio – the mean execution time relative to the Baseline, the base method that we consider as the starting point (remember Baseline=true above?);

  • Rank – the position of the method in the ranking, from fastest to slowest;

  • Allocated – the memory allocated during execution of our method.

Real test

To make the final results a little more interesting, let's add a few more variations of our algorithm and see how the results change.

Now the benchmark class will look like this:
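
A sketch of the expanded class: the method names match the variants discussed below, while the method bodies are illustrative assumptions about how each variant could extract the year from the same date string:

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;

[MemoryDiagnoser]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class DateParserBenchmarks
{
    private const string DateString = "2024-05-01T00:00:00";

    [Benchmark(Baseline = true)]
    public int GetYearFromDateTime() => DateTime.Parse(DateString).Year;

    [Benchmark]
    public int GetYearFromSplit() => int.Parse(DateString.Split('-')[0]);

    [Benchmark]
    public int GetYearFromSubstring() => int.Parse(DateString.Substring(0, 4));

    [Benchmark]
    public int GetYearFromSpanWithManualConversion()
    {
        // Read the first four characters without allocating a new string
        ReadOnlySpan<char> span = DateString.AsSpan(0, 4);
        var year = 0;
        foreach (var c in span)
        {
            year = year * 10 + (c - '0');
        }
        return year;
    }
}
```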

Our task in this article is to deal with benchmarking, so we will leave the algorithms themselves aside for now – they are a topic for the next article in the series.

And here is the result of running this benchmark:

Here we see that GetYearFromDateTime, which we took as the starting point, is the slowest and runs in about 218 nanoseconds, while the fastest option, GetYearFromSpanWithManualConversion, requires only 6.2 nanoseconds – 35 times faster than the original method.

We can also see how much memory was allocated by the two methods GetYearFromSplit and GetYearFromSubstring, and how much work the Garbage Collector had to do to reclaim it (which also reduces overall system performance).

Working with Various Inputs

Finally, I would like to talk about how you can evaluate the results of your algorithm on large and small data sets. For this, BenchmarkDotNet offers us two attributes – Params and GlobalSetup.

This is what the benchmark class would look like using these two attributes:
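
A sketch of such a class, where the measured method and the class name are illustrative (here we simply sort a copy of the generated array):

```csharp
using System;
using System.Linq;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
public class ArrayBenchmarks
{
    private int[] _data = Array.Empty<int>();

    // Each value produces a separate set of benchmark runs
    [Params(10, 1000, 10000)]
    public int Size;

    [GlobalSetup]
    public void Setup()
    {
        // Generate the input array once per Size value, before the measurements start
        var random = new Random(42);
        _data = Enumerable.Range(0, Size).Select(_ => random.Next()).ToArray();
    }

    [Benchmark]
    public int[] SortArray()
    {
        var copy = (int[])_data.Clone();
        Array.Sort(copy);
        return copy;
    }
}
```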

In our case, the Size field is parameterized and affects the code that runs in GlobalSetup.

In GlobalSetup we generate an initial array of 10, 1000 or 10000 elements, so that every test scenario is run against each size. As I said at the beginning of the article, some algorithms behave efficiently only on a large (or only on a small) number of elements.

Let's try to run this benchmark and look at the results:

Charts

The BenchmarkDotNet library allows you to analyze the collected data not only in text and tabular form, but also graphically – in the form of charts.

To demonstrate, we will create a benchmark class that measures the running time of different sorting algorithms on .NET 8 and configure it to run for three different numbers of sorted elements: 1000, 5000 and 10000 (a sketch of such a class is shown after the list). The sorting algorithms used are:

  • DefaultSort – the default sorting algorithm used in .NET 8

  • InsertionSort – insertion sort

  • MergeSort – merge sort

  • QuickSort – quick sort

  • SelectSort – selection sort
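
A sketch of such a benchmark class is shown below. Only DefaultSort and InsertionSort are written out; the remaining algorithms would be added as further [Benchmark] methods in the same way, and the RPlotExporter attribute used here is one possible way to make BenchmarkDotNet produce charts (it requires R to be installed on the machine):

```csharp
using System;
using System.Linq;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
[RPlotExporter] // emits *.png charts next to the text results
public class SortBenchmarks
{
    private int[] _source = Array.Empty<int>();

    [Params(1000, 5000, 10000)]
    public int Size;

    [GlobalSetup]
    public void Setup()
    {
        var random = new Random(42);
        _source = Enumerable.Range(0, Size).Select(_ => random.Next()).ToArray();
    }

    [Benchmark]
    public int[] DefaultSort()
    {
        var copy = (int[])_source.Clone();
        Array.Sort(copy); // the built-in sort
        return copy;
    }

    [Benchmark]
    public int[] InsertionSort()
    {
        var copy = (int[])_source.Clone();
        for (var i = 1; i < copy.Length; i++)
        {
            var key = copy[i];
            var j = i - 1;
            while (j >= 0 && copy[j] > key)
            {
                copy[j + 1] = copy[j];
                j--;
            }
            copy[j + 1] = key;
        }
        return copy;
    }

    // MergeSort, QuickSort and SelectSort would follow the same pattern.
}
```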

As a result of the benchmark, we received a summary in the form of a table and a chart:

BenchmarkDotNet also generated a separate chart for each benchmark (in our case, for each sorting algorithm), plotted against the number of sorted elements:

Conclusion

So, we have covered the basics of working with BenchmarkDotNet and seen how it helps us evaluate the results of our work and make informed decisions – which code to keep, and which to rewrite or even delete. This approach allows us to build more performant systems and, as a result, improve the experience of our users.
