How we reduced method call time in Java code by 16 times

In this article I will use Java, Python for plotting, and the JMH library suite – the same approach also works for Kotlin, Scala, and other JVM languages.

Why measure code performance at all…

… if everything seems to be working fine anyway? In fact, besides the obvious benefits of speed and stability, there is another reason: optimization can be seen as an important part of the CI/CD culture. In small projects these concerns may not be critical, but almost all Agile teams today follow DevOps practices and understand the value of continuous delivery.

While a project is “young”, deployment and testing can be relatively easy. But as soon as a module outgrows the build agent, or running the tests locally starts to take two hours, developers begin to wonder: “Maybe we should trim the code? Focus on performance? Have some methods ballooned to unprecedented proportions?” This is where code optimization comes in: it becomes part of continuous integration and the next step in the evolution of DevOps.

In our case, both aspects matter. Speed, because Platform V DataSpace ships as part of the Platform V cloud platform, which underlies most of Sber’s products. DevOps optimization, because the product is growing rapidly and continuous delivery of functionality to production must be guaranteed.

How to measure performance: methods and difficulties

Let’s go back to our example. In one of the Platform V DataSpace projects, a method call was taking a very long time, and the algorithm was built in such a way that there was no cheap way around calling the method.

On closer inspection, it turned out that the code called the same method several times, which stretched out the execution time. To optimize it, we first needed an accurate estimate of the method’s duration.
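A minimal sketch of the kind of fix involved (all names here are invented for illustration, not taken from the project): when the same expensive method is re-evaluated inside a loop, hoisting the call out and reusing the result removes the redundant work.

```java
// Hypothetical illustration of the repeated-call problem and its fix.
class RepeatedCallDemo {
    static int calls = 0;              // counts how often the "expensive" method runs

    static long expensive() {          // stand-in for the slow method
        calls++;
        return 21L;
    }

    // Before: expensive() is re-evaluated on every iteration
    static long sumBefore(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += expensive();
        return sum;
    }

    // After: the value is computed once and reused
    static long sumAfter(int n) {
        long value = expensive();
        long sum = 0;
        for (int i = 0; i < n; i++) sum += value;
        return sum;
    }
}
```

Both variants return the same result, but the second performs one call instead of n.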

The classic approach to reasoning about code efficiency is big-O notation, O(). But it has one limitation: it does not let you measure code “in combat conditions”. Even if you carefully evaluate every block with the notation and each looks optimal in isolation, the combination can still be suboptimal (recall greedy algorithms: performance does not compose). Many factors affect the result: programming style, the data types used, processor features. So we decided to turn to an alternative: benchmarks.

A benchmark is a test that estimates the duration of a method. It is useful because it measures the speed of the algorithm on real hardware, with all external factors included. The basis of any benchmark is the processor’s system clock and the computed duration of a block of code. The most commonly used method for this is System.nanoTime(), which, as it turned out, has quirks of its own. Reading the system time inevitably introduces an error, even if we write something as simple as:

long checkTime() {
    long oldTime = System.nanoTime();
    return System.nanoTime() - oldTime;
}

The error arises because the call itself is not “free”: its cost equals the machine resources spent on the measurement. This cannot be avoided, much like in quantum mechanics: an observer entering the quantum world introduces an error by the very act of observing.
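This non-zero cost is easy to demonstrate: two back-to-back System.nanoTime() calls usually differ, and that difference is roughly the overhead of one call. A small self-contained sketch:

```java
// Sketch: estimate the best-case overhead of System.nanoTime() itself
// by timing it with nothing in between.
class NanoTimeCost {
    static long measureOnce() {
        long before = System.nanoTime();
        long after = System.nanoTime();
        return after - before;            // cost of one nanoTime() call, roughly
    }

    static long minOverhead(int trials) {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < trials; i++) {
            min = Math.min(min, measureOnce());
        }
        return min;                       // best-case overhead in nanoseconds
    }
}
```

On a typical JVM the minimum over many trials comes out as a small positive number of nanoseconds, which is exactly the error the text describes.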

Questions arise:

  1. Does it follow that we cannot avoid the error when calculating the “cost” of the work() method?

  2. What should we consider the baseline, and how do we construct it?

  3. How do we account for the time “burned” by the measurement itself?

After a detailed study of the method it turned out that the errors can, in fact, be avoided, and there are several ways to do it. The simplest is to use the JMH libraries for Java.

JMH: what makes it so good and why it didn’t suit us

JMH is a suite of libraries for testing the performance of small functions. Using them, we can avoid the error because we:

  • find out the latency – the time it takes to call System.nanoTime();

  • measure the granularity of the method – its resolution, i.e. the minimum non-zero difference between successive System.nanoTime() calls.
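Both corrections can also be probed by hand. A sketch, using plain Java (the trial counts are arbitrary; JMH does this far more rigorously):

```java
// Hand-rolled estimates of the two clock properties described above:
//   latency     - average time for one System.nanoTime() call
//   granularity - smallest non-zero difference between successive calls
class ClockProbe {
    static double latencyNanos(int trials) {
        long start = System.nanoTime();
        for (int i = 0; i < trials; i++) System.nanoTime();
        long end = System.nanoTime();
        return (end - start) / (double) trials;   // average cost per call
    }

    static long granularityNanos(int trials) {
        long min = Long.MAX_VALUE;
        long prev = System.nanoTime();
        for (int i = 0; i < trials; i++) {
            long now = System.nanoTime();
            long diff = now - prev;
            if (diff > 0 && diff < min) min = diff; // smallest observed tick
            prev = now;
        }
        return min;
    }
}
```

Subtracting the latency and rounding to the granularity gives timings that correlate much better with the true duration of the method being measured.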

This gives us a value that correlates with the duration of the method being timed. It all seems simple: JMH calculates latency and granularity itself. But you still shouldn’t relax: latency and granularity measurements can differ across operating systems, so be careful when calling the method from a large number of threads.

In our case, we could not rely on JMH benchmarks alone because of internal limitations and process requirements. So we had to take a third path: measure the code’s performance and write the benchmarks ourselves.

Testing code optimality in “combat conditions”

Another way to avoid the error is to measure the “cost” of System.nanoTime() itself and subtract it as a baseline, which again yields a value that correlates with the duration of the method being timed.
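A self-written benchmark in this spirit can be sketched as follows. This is not the project’s actual harness, just a minimal illustration of the scheme: warm up, time an empty baseline, time the target code, and subtract the two (using the median to dampen outliers).

```java
// Minimal hand-rolled benchmark: warmup, baseline subtraction, median timing.
class SimpleBenchmark {
    interface Workload { long run(); }

    static long medianNanos(Workload w, int iterations) {
        long[] samples = new long[iterations];
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            w.run();
            samples[i] = System.nanoTime() - start;
        }
        java.util.Arrays.sort(samples);
        return samples[iterations / 2];           // median is robust to outliers
    }

    static long benchmark(Workload w, int warmup, int iterations) {
        medianNanos(w, warmup);                   // warm up the JIT
        long baseline = medianNanos(() -> 0L, iterations); // empty workload
        long measured = medianNanos(w, iterations);
        return Math.max(0, measured - baseline);  // subtract the baseline cost
    }
}
```

Running benchmark() on the code before and after an optimization, and comparing the two results, gives the relative gain discussed below.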

To solve the problem of the repeated method call, I wrote my own benchmark and subtracted the baseline before and after optimization, then compared how long the code took to execute before and after the improvements. To make the analysis easier, I built a small visualization tool in Python. Here’s what came out:

Data set before optimization.

Data set visualization.

Similar measurements after optimization gave a visible result. The graph below shows measurements before optimization (upper lines) and after (lower lines):

Visual comparison of two datasets.

Visual comparison of three datasets after averaging the values of the experiments.

The analytical conclusion: the performance gain after optimization.

As a result, the duration of the method call was reduced by a factor of 16. The graphs may show this value with some error, which is acceptable when visualizing measurements taken in “combat conditions”. But the gain itself is real: the measurement error cancels out when the two baselines are subtracted, and we need a relative value, not an absolute one.


It’s worth measuring code performance if only out of curiosity, but better still in order to speed up the product and simplify deployment. Benchmarks are a great tool for this. In our case, self-written benchmarks built on System.nanoTime() helped significantly reduce the duration of the method call. Now we are working on testing the performance of the entire system across most of our projects, to reduce the number of potential problems.

Writing your own benchmarks and eliminating the error by subtracting the method’s “cost” is not for everyone. Moreover, JMH solves all of these problems automatically by measuring latency and granularity. So it is quite reasonable to use such ready-made solutions: far less work, and the benefits are obvious.
