What happens when the rate limiting counters dict fills up?

I’m curious about the behaviour of the “local” rate limiting policy.

Every time a request is processed, a counter is either created or incremented in the `kong_rate_limiting_counters` shared-memory dictionary. See this line in the Kong source code.

This code uses OpenResty's `shm:incr` function, documented here. Note that the function accepts an optional TTL, but in this case no TTL is specified, so the value will never expire.
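For context, the call in question looks roughly like this (a paraphrase rather than the exact plugin source; `cache_key` stands in for the per-consumer/per-period key the plugin builds):

```lua
-- The zone itself is declared in the nginx config template, e.g.:
--   lua_shared_dict kong_rate_limiting_counters 12m;

local shm = ngx.shared.kong_rate_limiting_counters

-- incr(key, value, init): creates the key with `init` if absent, then
-- adds `value`. No fourth (init_ttl) argument is passed here, so the
-- entry never expires on its own.
local newval, err = shm:incr(cache_key, 1, 0)
if newval == nil then
  ngx.log(ngx.ERR, "could not increment rate limiting counter: ", err)
end
```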

Unless I’m missing something, this means the dictionary will grow in an unbounded manner. However, I notice here that the shared memory zone is created with a size of 12 MB. So my question is, what happens when the dictionary grows to fill that space? Is it smart enough to expire old entries using LRU or something similar? Or will it stop accepting new entries, meaning the rate limiting logic stops working? Or does nginx just crash? Or something else?

As an aside, I think it would make sense to set a TTL when calling shm:incr. The appropriate TTL can be trivially calculated based on the rate limiting period: if it’s a per-minute metric, set the TTL to one minute.
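A sketch of what that could look like (the `EXPIRATION` table and variable names here are hypothetical, chosen only to illustrate the per-period mapping):

```lua
-- Hypothetical period-to-TTL table (seconds); names are illustrative.
local EXPIRATION = {
  second = 1,
  minute = 60,
  hour   = 3600,
  day    = 86400,
  month  = 2592000,
  year   = 31536000,
}

local shm = ngx.shared.kong_rate_limiting_counters

-- The 4th argument (init_ttl, available since lua-nginx-module 0.10.12)
-- only applies when the key is first created via `init`, which is exactly
-- what we want: each window's counter expires once the window has passed.
local newval, err = shm:incr(cache_key, 1, 0, EXPIRATION[period])
```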

As another aside, the “cluster” policy exhibits a similar problem: it writes rate limiting metrics to Postgres but never deletes them, so we have to run a nightly job to clean them up and stop the DB from filling its disk. Correction: it looks like a periodic DB cleanup task was added in 0.15.x.

Yes, Lua shared memory zones (shm) evict old entries using an LRU queue when the zone is full.
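You can see this for yourself with the `resty` CLI (assuming OpenResty is installed; the zone name and sizes below are arbitrary):

```lua
-- Run with: resty --shdict 'demo 12k' demo.lua
local d = ngx.shared.demo

-- Overfill the 12 KB zone; set() evicts least-recently-used entries
-- when it cannot allocate room for a new item.
for i = 1, 500 do
  d:set("key" .. i, string.rep("x", 100))
end

-- The earliest keys get evicted, while the latest ones survive:
print("key1:   ", tostring(d:get("key1")))    -- likely nil (evicted)
print("key500: ", tostring(d:get("key500")))  -- the stored value
```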

The ttl argument was contributed by us to OpenResty after this plugin was written. A PR making use of this new argument would be most welcome here! :slight_smile:


Thanks for the reply.

I’ve just submitted a PR as you suggested. https://github.com/Kong/kong/pull/4422

@cb372 Great, thank you! We’re currently busy with the 1.1.0 release until next week, so we’ll take a look at it after that. Thanks again.