How to generate requests at a constant rate in k6 with the new scenarios API?

Hello, Khabrovites. Ahead of the start of the "Load Testing" course, we have prepared a translation of another interesting article for you.






Introduction



The v0.27.0 release brought us a new execution engine and many new executors to address your specific load-testing requirements. It also includes a new scenarios API with many options for modeling and tuning the load on the system under test (SUT). This is the result of a year and a half of work on the infamous #1007 pull request.



To generate requests at a constant rate, we can use the constant-arrival-rate executor. This executor starts iterations at a fixed rate for a specified duration. It allows k6 to dynamically change the number of active virtual users (VUs) during a test run in order to achieve the specified number of iterations per unit of time. In this article, I am going to explain how to use this executor to generate requests at a constant rate.



Basics of scenario configuration options



Let's take a look at the key parameters used in k6 to describe a test configuration in a script that uses the constant-arrival-rate executor:



  • executor (required):
    The name of the executor to use. Executors are the workhorses of the k6 execution engine: each one schedules VUs and iterations differently, and you pick one depending on the type of traffic you want to model.
  • rate and timeUnit:
    k6 tries to start rate new iterations every timeUnit period.

    For example:

    • rate: 1, timeUnit: '1s' means "try to start an iteration every second"
    • rate: 1, timeUnit: '1m' means "try to start an iteration every minute"
    • rate: 90, timeUnit: '1m' means "try to start 90 iterations per minute", i.e. 1.5 iterations per second, or one iteration roughly every 667 ms
    • rate: 50, timeUnit: '1s' means "try to start 50 iterations per second", i.e. 50 requests per second (RPS) if each iteration makes exactly one request, or one iteration every 20 ms
  • duration:
    The total duration of the scenario, excluding gracefulStop.
  • preAllocatedVUs:
    The number of virtual users to pre-allocate before the test starts.
  • maxVUs:
    The maximum number of virtual users allowed during the test run.
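To make the rate/timeUnit arithmetic concrete, here is a small stand-alone JavaScript sketch (plain Node.js for illustration, not a k6 script; the arrivalStats helper is my own, not part of the k6 API):

```javascript
// Convert a k6-style rate/timeUnit pair into iterations per second and
// the average gap between iteration starts. Illustrative helper only;
// this is not part of the k6 API.
function arrivalStats(rate, timeUnitSeconds) {
  const itersPerSecond = rate / timeUnitSeconds;
  return {
    itersPerSecond,
    msBetweenStarts: 1000 / itersPerSecond,
  };
}

console.log(arrivalStats(1, 1));   // 1 iteration/s, one start every 1000 ms
console.log(arrivalStats(90, 60)); // 1.5 iterations/s, one start roughly every 667 ms
console.log(arrivalStats(50, 1));  // 50 iterations/s, one start every 20 ms
```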


Together, these parameters form a scenario, which is part of the test configuration options. The code snippet below is a sample constant-arrival-rate scenario.



In this configuration, we have a constant_request_rate scenario, where the key is a unique identifier used as a label for the scenario. This scenario uses the constant-arrival-rate executor and runs for 1 minute. Every second (timeUnit), 1 iteration (rate) will be started. The pool of pre-allocated virtual users contains 20 instances and can grow to 100, depending on the number of requests and iterations.



Keep in mind that initializing virtual users during a test is CPU-intensive and can therefore distort test results. In general, it is better to have enough preAllocatedVUs to run the whole load test. So do not forget to allocate more virtual users depending on the number of requests in your test and the rate at which you want to run it.



export let options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 1,
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 20,
      maxVUs: 100,
    }
  }
};
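As a rough rule of thumb (my own back-of-the-envelope sketch, not an official k6 formula), the number of concurrently busy VUs follows Little's law: the arrival rate multiplied by the average iteration duration. The following plain JavaScript helper (illustrative, not part of the k6 API) turns that into an estimate for sizing preAllocatedVUs:

```javascript
// Estimate how many VUs a constant-arrival-rate scenario keeps busy:
// by Little's law, concurrency ≈ arrival rate × average iteration duration.
// Illustrative helper only; not part of the k6 API.
function estimateVUs(rate, timeUnitSeconds, avgIterationSeconds) {
  const itersPerSecond = rate / timeUnitSeconds;
  return Math.round(itersPerSecond * avgIterationSeconds);
}

// 1000 iterations/s with ~110 ms iterations keeps about 110 VUs busy, so
// preAllocatedVUs: 100 with maxVUs: 200 leaves comfortable headroom.
console.log(estimateVUs(1000, 1, 0.11)); // 110
```

Add a safety margin on top of the estimate: response times fluctuate, and any slowdown of the system under test immediately increases the number of VUs needed to sustain the rate.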


An example of generating requests at a constant rate with constant-arrival-rate



In the previous tutorial, we demonstrated how to calculate a constant request rate. Let's take a look at it again, keeping in mind how scenarios work:



Suppose you expect your system under test to handle 1000 requests per second on an endpoint. Pre-allocating 100 virtual users (with a maximum of 200) means each virtual user sends approximately 5-10 requests per second (depending on whether 100 or 200 virtual users are active). If each request takes more than 1 second to complete, you end up making fewer requests than expected (and more dropped_iterations), which is a sign of performance issues or unrealistic expectations for your system under test. If this is the case, you should either fix the performance problems and rerun the test, or moderate your expectations by adjusting timeUnit.



In this scenario, each pre-allocated virtual user makes 10 requests per second (rate divided by preAllocatedVUs). If an iteration cannot complete within 1 second, for example because it took more than 1 second to receive a response or for your system under test to finish the work, k6 will increase the number of virtual users to compensate for the missed iterations. The following test generates 1000 requests per second and runs for 30 seconds, which gives roughly 30,000 requests, as you can see in the http_reqs and iterations lines of the output below. In addition, k6 only used 148 out of the 200 possible virtual users.



import http from 'k6/http';

export let options = {
    scenarios: {
        constant_request_rate: {
            executor: 'constant-arrival-rate',
            rate: 1000,
            timeUnit: '1s', // 1000 iterations per second, i.e. 1000 RPS
            duration: '30s',
            preAllocatedVUs: 100, // the size of the VU (worker) pool for this scenario
            maxVUs: 200, // if preAllocatedVUs are not enough, k6 can initialize up to this many
        }
    }
};

export default function () {
    http.get('http://test.k6.io/contacts.php');
}


The result of executing this script will be as follows:



$ k6 run test.js


          /\      |‾‾|  /‾‾/  /‾/

     /\  /  \     |  |_/  /  / /

    /  \/    \    |      |  /  ‾‾\

   /          \   |  |‾\  \ | (_) |

  / __________ \  |__|  \__\ \___/ .io

  execution: local
     script: test.js
     output: -

  scenarios: (100.00%) 1 executors, 200 max VUs, 1m0s max duration (incl. graceful stop):
           * constant_request_rate: 1000.00 iterations/s for 30s (maxVUs: 100-200, gracefulStop: 30s)

running (0m30.2s), 000/148 VUs, 29111 complete and 0 interrupted iterations
constant_request_rate ✓ [======================================] 148/148 VUs  30s  1000 iters/s

    data_received..............: 21 MB  686 kB/s
    data_sent..................: 2.6 MB 85 kB/s
    *dropped_iterations.........: 889    29.454563/s
    http_req_blocked...........: avg=597.53µs min=1.64µs  med=7.28µs   max=152.48ms p(90)=9.42µs   p(95)=10.78µs
    http_req_connecting........: avg=561.67µs min=0s      med=0s       max=148.39ms p(90)=0s       p(95)=0s
    http_req_duration..........: avg=107.69ms min=98.75ms med=106.82ms max=156.54ms p(90)=111.73ms p(95)=116.78ms
    http_req_receiving.........: avg=155.12µs min=21.1µs  med=105.52µs max=34.21ms  p(90)=147.69µs p(95)=190.29µs
    http_req_sending...........: avg=46.98µs  min=9.81µs  med=41.19µs  max=5.85ms   p(90)=53.33µs  p(95)=67.3µs
    http_req_tls_handshaking...: avg=0s       min=0s      med=0s       max=0s       p(90)=0s       p(95)=0s
    http_req_waiting...........: avg=107.49ms min=98.62ms med=106.62ms max=156.39ms p(90)=111.52ms p(95)=116.51ms
    *http_reqs..................: 29111  964.512705/s
    iteration_duration.........: avg=108.54ms min=99.1ms  med=107.08ms max=268.68ms p(90)=112.09ms p(95)=118.96ms
    *iterations.................: 29111  964.512705/s
    vus........................: 148    min=108 max=148
    vus_max....................: 148    min=108 max=148


When writing a test script, consider the following points:



  1. By default, k6 follows HTTP redirects, and each redirect is counted as a separate request. If you do not want this behavior, set maxRedirects: 0 in your options. The http module also accepts maxRedirects as a per-request parameter.
  2. The number of requests per second depends on how quickly iterations complete: the slower they are, the more virtual users are busy at any given moment. In particular, avoid slowing iterations down artificially with sleep().
  3. If there are not enough virtual users to sustain the configured rate, k6 cannot start new iterations on time. In that case, increase preAllocatedVUs, maxVUs, or both; otherwise you will see a warning like the following:



    WARN[0005] Insufficient VUs, reached 100 active VUs and cannot initialize more  executor=constant-arrival-rate scenario=constant_request_rate


  4. If you see dropped_iterations, the final iterations and http_reqs counters will be lower than expected. A non-zero dropped_iterations metric means that some iterations could not be started because no free virtual user was available to run them. This is usually fixed by increasing preAllocatedVUs. If that does not help, your system under test is responding too slowly, and you should either fix its performance or lower your expectations.
  5. Requests that fail, for example because of a timeout, are still counted, but each failure produces a warning in the output:



    WARN[0008] Request Failed
  6. Remember that the scenarios API does not support the global duration, vus, and stages options, although those options can still be used on their own. This means that you cannot combine them with scenarios in the same script.
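Putting points 1 and 6 together, here is a sketch of a configuration that disables redirect-following globally and describes execution only through a scenario, without the global vus, duration, or stages options (based on the options used earlier in this article):

```javascript
export let options = {
    // Do not follow redirects, so they are not counted as extra requests (point 1).
    maxRedirects: 0,
    // Execution is described only via scenarios; the global vus, duration,
    // and stages options must not be combined with it (point 6).
    scenarios: {
        constant_request_rate: {
            executor: 'constant-arrival-rate',
            rate: 1000,
            timeUnit: '1s',
            duration: '30s',
            preAllocatedVUs: 100,
            maxVUs: 200,
        }
    }
};
```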


Conclusion



Prior to the v0.27.0 release, k6 did not have built-in support for generating requests at a constant rate. Therefore, we used a workaround in JavaScript, calculating the time it takes to complete the requests in each iteration of the script. With v0.27.0, this is no longer necessary.



In this article, I discussed how you can achieve a constant request rate in k6 with the new scenarios API, using the constant-arrival-rate executor. This executor simplifies the code and provides the means to achieve a fixed number of requests per second. This is in contrast to a previous version of this article, in which I described another method of achieving much the same result by calculating the number of virtual users, iterations, and duration with a formula and some boilerplate JavaScript code. Fortunately, this new approach works as intended, and we no longer need any hacks.



I hope you enjoyed reading this article. I would love to hear your feedback.






