How to generate constant rate requests in k6 with the new scripting API?

Hello, Habr readers. Ahead of the start of the “Stress Testing” course, we have prepared a translation of another interesting article.


The v0.27.0 release brought us a new execution engine and many new executors to address your specific requirements. It also includes a new scenarios API with many different options for tuning and simulating load on the system under test (SUT). This is the result of a year and a half of work on the infamous pull request #1007.

To generate requests at a constant rate, we can use the constant-arrival-rate executor. This executor starts iterations at a fixed rate for a specified duration. It allows k6 to dynamically change the number of active virtual users (VUs) during the test run in order to achieve the specified number of iterations per unit of time. In this article, I am going to explain how to use this scenario to generate requests at a constant rate.

Basics of scenario configuration options

Let’s take a look at the key parameters used in k6 to describe a test configuration in a scenario that uses the constant-arrival-rate executor:

  • executor:
    Executors are the workhorses of the k6 execution engine. Each of them schedules VUs and iterations differently – you choose one based on the type of traffic you want to simulate against your services.
  • rate and timeUnit:
    k6 tries to run rate iterations every timeUnit period.

    For instance:

    • rate: 1, timeUnit: '1s' means “try to run one iteration every second”
    • rate: 1, timeUnit: '1m' means “try to run one iteration every minute”
    • rate: 90, timeUnit: '1m' means “try to run 90 iterations per minute”, that is, 1.5 iterations/s, or try to start a new iteration every ~667 ms
    • rate: 50, timeUnit: '1s' means “try to run 50 iterations every second”, that is, 50 requests per second (RPS) if our iteration contains one request, i.e. try to start a new iteration every 20 ms
  • duration:
    The total duration of the scenario, excluding gracefulStop.
  • preAllocatedVUs:
    The number of virtual users pre-allocated before the test starts.
  • maxVUs:
    The maximum number of virtual users allowed during the test run.
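The rate/timeUnit arithmetic from the examples above can be sketched in a few lines of plain JavaScript. This is purely illustrative (the helper function is not part of the k6 API; k6 does this scheduling internally):

```javascript
// How often a new iteration should start, given rate and timeUnit.
// timeUnitMs is the timeUnit expressed in milliseconds.
function iterationIntervalMs(rate, timeUnitMs) {
  return timeUnitMs / rate;
}

console.log(iterationIntervalMs(1, 1000));               // rate: 1,  timeUnit: '1s' -> 1000 ms
console.log(Math.round(iterationIntervalMs(90, 60000))); // rate: 90, timeUnit: '1m' -> 667 ms
console.log(iterationIntervalMs(50, 1000));              // rate: 50, timeUnit: '1s' -> 20 ms
```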

Together, these parameters form a scenario, which is part of the options object of the test configuration. The code snippet below is an example of a constant-arrival-rate scenario.

In this configuration, we have the constant_request_rate scenario, where the name is a unique identifier used as a label for the scenario. It uses the constant-arrival-rate executor and runs for 1 minute (duration). Every second (timeUnit), 1 iteration (rate) will be executed. The pool of pre-allocated virtual users contains 20 instances and can grow to 100, depending on the number of requests and iterations.

Be aware that initializing virtual users during a test can be CPU-intensive and thus skew test results. In general, it is better to make preAllocatedVUs large enough to run the whole load test. Therefore, do not forget to allocate more virtual users depending on the number of requests in your test and the rate at which you want to run it.

export let options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 1,
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 20,
      maxVUs: 100,
    },
  },
};

An example of generating requests at a constant rate with constant-arrival-rate

In a previous guide, we demonstrated how to calculate a constant request rate. Let’s take a look at it again, keeping in mind how scenarios work:

Let’s say you expect your system under test to handle 1000 requests per second at an endpoint. Pre-allocating 100 virtual users (200 maximum) means each virtual user sends approximately 5 to 10 requests per second (based on 100 to 200 virtual users). If each request takes more than 1 second to complete, you end up making fewer requests than expected (more dropped_iterations), which is a sign of performance issues or unrealistic expectations for your system under test. If this is the case, you should fix the performance issues and re-run the test, or moderate your expectations by adjusting rate and timeUnit.
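As a back-of-the-envelope check, the minimum number of VUs needed to sustain a target rate is roughly the rate multiplied by the expected iteration duration. The sketch below uses illustrative numbers and a hypothetical helper, not anything from the k6 API:

```javascript
// Minimum VUs needed to sustain a rate: each VU can only start a new
// iteration after the previous one finishes, so
// VUs >= iterations/second * seconds/iteration.
function minimumVUs(ratePerSecond, expectedIterationMs) {
  return Math.ceil((ratePerSecond * expectedIterationMs) / 1000);
}

// 1000 requests/s where each request takes ~110 ms -> at least 110 VUs
console.log(minimumVUs(1000, 110)); // -> 110
```

In practice you should pre-allocate more than this minimum, since response times fluctuate during the test.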

In this scenario, each pre-allocated virtual user will make 10 requests per second (rate divided by preAllocatedVUs). If a request does not complete within 1 second, for example because the response took more than 1 second to arrive or the system under test took more than 1 second to finish the task, k6 will increase the number of virtual users to compensate for the missed iterations. The following test generates 1000 requests per second and runs for 30 seconds, producing roughly 30,000 requests, as you can see below in the http_reqs and iterations output. In addition, k6 only used 148 out of 200 virtual users.

import http from 'k6/http';

export let options = {
    scenarios: {
        constant_request_rate: {
            executor: 'constant-arrival-rate',
            rate: 1000,
            timeUnit: '1s', // 1000 iterations per second, i.e. 1000 requests per second
            duration: '30s',
            preAllocatedVUs: 100, // how large the initial pool of virtual users is
            maxVUs: 200, // if preAllocatedVUs is not enough, we can initialize more, but no more than this number
        },
    },
};

export default function () {
    http.get('http://test.k6.io/'); // the endpoint is illustrative
}

The result of executing this script will be as follows:

$ k6 run test.js

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: test.js
     output: -

  scenarios: (100.00%) 1 scenario, 200 max VUs, 1m0s max duration (incl. graceful stop):
           * constant_request_rate: 1000.00 iterations/s for 30s (maxVUs: 100-200, gracefulStop: 30s)

running (0m30.2s), 000/148 VUs, 29111 complete and 0 interrupted iterations
constant_request_rate ✓ [======================================] 148/148 VUs  30s  1000 iters/s

    data_received..............: 21 MB  686 kB/s
    data_sent..................: 2.6 MB 85 kB/s
    *dropped_iterations.........: 889    29.454563/s
    http_req_blocked...........: avg=597.53µs min=1.64µs  med=7.28µs   max=152.48ms p(90)=9.42µs   p(95)=10.78µs
    http_req_connecting........: avg=561.67µs min=0s      med=0s       max=148.39ms p(90)=0s       p(95)=0s
    http_req_duration..........: avg=107.69ms min=98.75ms med=106.82ms max=156.54ms p(90)=111.73ms p(95)=116.78ms
    http_req_receiving.........: avg=155.12µs min=21.1µs  med=105.52µs max=34.21ms  p(90)=147.69µs p(95)=190.29µs
    http_req_sending...........: avg=46.98µs  min=9.81µs  med=41.19µs  max=5.85ms   p(90)=53.33µs  p(95)=67.3µs
    http_req_tls_handshaking...: avg=0s       min=0s      med=0s       max=0s       p(90)=0s       p(95)=0s
    http_req_waiting...........: avg=107.49ms min=98.62ms med=106.62ms max=156.39ms p(90)=111.52ms p(95)=116.51ms
    *http_reqs..................: 29111  964.512705/s
    iteration_duration.........: avg=108.54ms min=99.1ms  med=107.08ms max=268.68ms p(90)=112.09ms p(95)=118.96ms
    *iterations.................: 29111  964.512705/s
    vus........................: 148    min=108 max=148
    vus_max....................: 148    min=108 max=148

When writing a test script, consider the following points:

  1. Since k6 follows redirects, redirected requests count toward the total requests per second in the output. If you don’t need this, you can disable it globally by setting maxRedirects: 0 in your options. You can also set the maximum number of redirects for an individual HTTP request, which overrides the global maxRedirects.
  2. Complexity matters. Try to keep the function being executed simple, preferably making only a few requests and avoiding additional processing or sleep() calls wherever possible.
  3. You will need a fair number of virtual users to get the results you want, otherwise you will run into warnings like the one below. In this case, simply increase preAllocatedVUs and/or maxVUs, but keep in mind that sooner or later you will hit the limits of the machine the test is running on, at which point neither preAllocatedVUs nor maxVUs will matter anymore.
    WARN[0005] Insufficient VUs, reached 100 active VUs and cannot initialize more  executor=constant-arrival-rate scenario=constant_request_rate

  4. As you can see in the results above, there are dropped_iterations, and the iterations and http_reqs counts are lower than the specified rate. Many dropped_iterations mean there weren’t enough pre-allocated virtual users to perform some of the iterations. Typically, this problem can be solved by increasing preAllocatedVUs. The exact value takes a little trial and error, as it depends on various factors, including endpoint response time, network bandwidth, and other associated latency.
  5. During testing, you may come across warnings like the following, which means that you have reached the limits of your operating system. If so, consider fine-tuning your operating system:
    WARN[0008] Request Failed
  6. Remember that the scenarios API does not support the global duration, vus, and stages options. They can still be used on their own, but you cannot combine them with scenarios.
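To illustrate point 1, assuming the default redirect behavior, following redirects can be disabled globally with a single option. This is a k6 options fragment; per-request overrides go into the request’s params instead:

```javascript
export let options = {
  // Do not follow redirects, so redirected responses
  // no longer inflate the http_reqs count.
  maxRedirects: 0,
};
```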


Before the v0.27.0 release, k6 did not have built-in support for generating requests at a constant rate. Therefore, we implemented a JavaScript workaround by calculating the time it takes to execute the requests in each iteration of the script. With v0.27.0, this is no longer necessary.
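The essence of that workaround was pacing: after each request, the VU sleeps for whatever is left of its per-request time budget. The padding calculation can be sketched in plain JavaScript (the function and numbers are illustrative; in a real pre-v0.27.0 script the result would be passed to k6’s sleep()):

```javascript
// Seconds a VU should sleep after a request so that it ends up issuing
// perVuRate requests per second (pre-v0.27.0 pacing workaround).
function padSeconds(perVuRate, elapsedSec) {
  const budget = 1 / perVuRate; // time budget per request
  return Math.max(0, budget - elapsedSec);
}

// 10 requests/s per VU, the request itself took 25 ms:
console.log(padSeconds(10, 0.025)); // sleep for the remaining ~0.075 s
```

Note the Math.max: if a request overruns its budget, the VU simply doesn’t sleep, and the achieved rate silently drops, which is exactly the fragility the constant-arrival-rate executor removes.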

In this article, I discussed how k6 can achieve a constant request rate with the new scenarios API using the constant-arrival-rate executor. This executor simplifies the code and provides a way to achieve a fixed number of requests per second. This is in contrast to a previous version of this article, in which I described another method that achieved substantially the same results by calculating the number of virtual users, iterations, and duration using a formula and some boilerplate JavaScript code. Fortunately, the new approach works as intended and we no longer need any hacks.

I hope you enjoyed reading this article. I would love to hear your feedback.

