Plugin idea: Kong load testing

Been thinking about whether it would be possible to write a plugin right into Kong that utilizes the lua-resty-http library for HTTP load testing on top of a dummy proxy endpoint like /loadtesting. I'd pass in a flag that lets the plugin know the request is for it, plus the target URL / headers / payload, the number of users (# of lua-resty-http clients), and either a total request count or a timer of sorts, as POST body or GET query parameters, and the plugin immediately goes to work. Do we think it would be possible to create multiple instances of the lua-resty-http library running at once within one Kong plugin? Does anyone know how to instantiate the library to make, say, 0-100 instances of itself (representing users) and start sending requests in a time- or request-count-driven loop? Seems like we would need to utilize the ngx.thread.spawn call (https://github.com/openresty/lua-nginx-module#ngxthreadspawn ?)
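For what it's worth, a minimal sketch of that core loop might look like the following. This is not a complete Kong plugin — the `conf` fields (`users`, `total_requests`, `target_url`) and the `worker`/`run_load_test` names are illustrative assumptions, and it assumes the request has already matched the dummy /loadtesting route:

```lua
local http = require "resty.http"

-- One "user": its own lua-resty-http client, looping over requests.
local function worker(target_url, requests_per_user)
  local httpc = http.new()
  local ok_count, err_count = 0, 0
  for _ = 1, requests_per_user do
    local res, err = httpc:request_uri(target_url, { method = "GET" })
    if res and res.status < 500 then
      ok_count = ok_count + 1
    else
      err_count = err_count + 1
    end
  end
  return ok_count, err_count
end

local function run_load_test(conf)
  local threads = {}
  -- ngx.thread.spawn creates cooperative "light threads" inside the
  -- current request's context, so all users share one nginx worker.
  for i = 1, conf.users do
    threads[i] = ngx.thread.spawn(worker, conf.target_url,
                                  math.floor(conf.total_requests / conf.users))
  end
  local total_ok, total_err = 0, 0
  for i = 1, conf.users do
    local ok, ok_count, err_count = ngx.thread.wait(threads[i])
    if ok then
      total_ok = total_ok + ok_count
      total_err = total_err + err_count
    end
  end
  return total_ok, total_err
end
```

Note the light threads are scheduled cooperatively, so each blocking `request_uri` call yields to the others — concurrency, but no parallelism within a single worker.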

Other features I was thinking about including:

  1. Email a report with response times, 95th-percentile latency, status-code counts, and such.
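The percentile part of that report is simple enough in plain Lua. A sketch using the nearest-rank method (the helper name and sample data are made up for illustration):

```lua
-- Nearest-rank percentile on an already-sorted list of latencies (ms).
local function percentile(sorted_latencies, p)
  local n = #sorted_latencies
  local rank = math.ceil(n * p / 100)
  if rank < 1 then rank = 1 end
  return sorted_latencies[rank]
end

local latencies = { 12, 15, 18, 20, 22, 25, 30, 45, 80, 200 }
table.sort(latencies)
print(percentile(latencies, 95))  -- rank ceil(10 * 0.95) = 10, prints 200
```

Status-code counts would just be a `counts[res.status] = (counts[res.status] or 0) + 1` table bumped per response.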

Do we think this is doable as a plugin, or am I better off using Gatling or some other tool externally? It would really be neat to leverage our Kong infra as a load-testing tool too (potentially using my Kong dev env cluster to stress-test my Kong stage env proxy services). It would give Kong another use case as a flexible tool around APIs.

I don’t see any reason this wouldn’t be possible, but in my humble opinion you’d be better off utilizing Kong for measuring / monitoring the performance and using a dedicated load-generating tool (Gatling, Locust, httperf, etc.).

My thinking is that the existing load-generating tools will be more actively developed, supported, and documented than something specific to Lua / Kong.


Thanks for the input! Yeah, I certainly agree other load-testing tools would be better maintained and provide better usability. I believe I can whip something up in a few hours as a POC directly with Kong, just for fun, to see some basic behavior anyway.


It is my impression that such a plugin would not get anywhere close to the native performance offered by tools such as wrk, http_load, and others — even more so given the cooperative scheduling paradigm followed by Nginx/OpenResty, in which the processing of other requests being proxied (and other plugins running) will take their share of CPU time. Another difficulty I envision is the IPC (or any other synchronization mechanism) between the nginx workers that you would need to implement in order to match the multi-threading capabilities offered by these tools (e.g. wrk -t 8 ...).
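On the cross-worker synchronization point: the usual OpenResty answer is a shared-memory dictionary rather than true IPC. A rough sketch, assuming a `lua_shared_dict load_test_stats 10m;` directive has been added to the nginx config (the dict and key names here are invented for illustration):

```lua
-- Each nginx worker bumps shared atomic counters; no explicit IPC needed.
local stats = ngx.shared.load_test_stats

local function record(status, latency_ms)
  -- incr with an init value of 0 creates the key atomically if absent
  stats:incr("count_" .. status, 1, 0)
  stats:incr("latency_sum_ms", latency_ms, 0)
  stats:incr("total", 1, 0)
end

local function summarize()
  local total = stats:get("total") or 0
  local sum   = stats:get("latency_sum_ms") or 0
  return total, (total > 0) and (sum / total) or 0  -- count, mean latency
end
```

Shared dicts only hold scalars, though, so per-request latency lists for percentiles would still have to be aggregated some other way (e.g. per-worker buckets merged at report time).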

Anyway, still curious to hear about the experiment if you happen to go through with it 🙂
