Rate limiting behaviour

I am a bit mixed up with the rate-limiting plugin. Is there any way to rate-limit a service based on the aggregated calls from all consumers, rather than per consumer?
Imagine my API has a load tolerance of 100 requests per second. I want to enforce that limit, but I do not know how many consumers will have access. Based on the docs, the plugin counts per consumer, identified either by authentication or by the IP of the consumer's machine.
This way, if I set 100 requests per second and I have 10 consumers, they can make 10 × 100 requests per second in total.
It gets even worse with 100 consumers, and so on.
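
For illustration, a minimal sketch of the setup in question, assuming Kong's Admin API on its default port 8001 and a hypothetical service named my-api. The plugin is attached to the service, but its counters are keyed per consumer (falling back to the client IP when there is no authentication), so each consumer gets its own 100 req/s budget:

# attach the community rate-limiting plugin to the service (per-consumer counters)
curl -X POST http://localhost:8001/services/my-api/plugins \
  --data "name=rate-limiting" \
  --data "config.second=100" \
  --data "config.limit_by=consumer"

With 10 authenticated consumers the backend can therefore still see up to 1000 requests per second in aggregate, which is exactly the problem described above.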

Hi,
I also have the same issue: I want to protect my backend overall.
I want to enforce a rate limit on each endpoint, tied to hardware/stack capacity (in addition to the per-consumer rate limit). However, the current rate-limiting plugin can only aggregate counts per IP, per consumer, or per credential.

Is this something only possible with the “pro” edition and the rate-limiting-advanced plugin?
Has anyone hacked the Lua code to implement this in the community edition?
Regards
Julien

I just submitted a pull request on GitHub implementing this.

You do not need to recompile anything to use it.
Overwrite the Lua scripts of your existing rate-limiting plugin and restart Kong.

Example:
git clone blabla
scp kong/plugins/rate-limiting/handler.lua kong/plugins/rate-limiting/schema.lua root@server-kong:/usr/local/share/lua/5.1/kong/plugins/rate-limiting/
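
Then, as a hedged follow-up on the Kong server (server-kong in the example above), assuming the Kong CLI and the Admin API are reachable there with default settings: restart Kong so the patched Lua files are loaded, and check that the rate-limiting plugin is still registered on the node.

# run on the Kong server after copying the files
kong restart
curl -s http://localhost:8001/plugins/enabled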

Note the related discussion in “Common rate limiting for all consumers”.

You are indeed correct that if you have fast-growing usage of your backend service, and you don’t have plans to scale your backend in response, you will have problems - but I suggest that the correct way to solve those problems is by scaling your backend service and/or implementing per-consumer (or per-IP) rate limits.
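
As a concrete sketch of that suggestion (assuming the default Admin API port 8001 and a hypothetical route named public-api), a per-IP limit caps each client address, which also covers unauthenticated traffic, instead of throttling the service as a whole:

# limit each client IP on a given route rather than the whole service
curl -X POST http://localhost:8001/routes/public-api/plugins \
  --data "name=rate-limiting" \
  --data "config.second=20" \
  --data "config.limit_by=ip"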

Global rate limiting has the potential to let increased usage by a single consumer effectively deny all other users access to your service - I doubt that is what you want.

Thanks Cooper for the link, I will switch over to that thread.