db_cache_ttl and mem_cache_size


We are using Kong 0.13 with Cassandra on Kubernetes.
We performed a load test on Kong with db_cache_ttl set to 1800 seconds (30 minutes) and saw that a few calls had high proxy latency.
After changing db_cache_ttl to 3600 seconds, the load test passed.

What is the purpose of db_cache_ttl and how does it work in Kong? After the cache expires, what does Kong load into memory (all of the Cassandra data, or only the entities for each API as requests come in)?

What is the purpose of mem_cache_size? What are the best values to set for both parameters in a production environment for good response times?



Have you given any thought to a db_cache_ttl of 0 (which allows the cache to persist indefinitely and trusts the invalidation logic in place)? Otherwise, I believe Kong rebuilds lots of internal routers/plugins and resources when cached entries expire, which is why you see those latency spikes at times. mem_cache_size is space for all things core Kong; I don't think it accounts for anything like allocated Lua VM memory, though, so it's just Kong entities.
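As a sketch of what that looks like in a Kubernetes deployment, assuming the standard `KONG_`-prefixed environment-variable mapping onto kong.conf directives (the values here are illustrative, not recommendations):

```shell
# Each KONG_<NAME> environment variable overrides the <name> directive
# in kong.conf. Illustrative values only:
export KONG_DB_CACHE_TTL=0        # 0 = cache entities indefinitely; rely on invalidation events
export KONG_MEM_CACHE_SIZE=128m   # size of the shared memory zone holding Kong's entity cache
```

With a TTL of 0, entities stay cached until a change in the datastore triggers an invalidation event, so you avoid the periodic mass-expiry (and rebuild) that can show up as latency spikes.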

In prod I run
mem_cache_size = 1024m (so 1 GB allocated, certainly likely overkill :smiley: ). I think you could get away with 100m pretty easily, or even less in 99% of cases, without sweating it.
And I run a cache TTL of 0 and have no issues with it.
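For reference, that production setup corresponds to the following kong.conf entries (comments are my gloss, not Kong documentation):

```
# kong.conf – the production values described above
mem_cache_size = 1024m   # memory reserved for Kong's entity cache (likely overkill)
db_cache_ttl = 0         # never expire cached entities; rely on invalidation events
```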

I recommend upgrading your 0.13 Kong to at least 0.14.x, though, until you're ready to make the jump to Kong 1.0.3+.