Kong caching in Redis (centralized caching)

Is it possible to switch Kong's caching to use Redis instead of in-process memory - i.e. have multiple instances of Kong (multiple nodes) share a single cache?

Our use case is that we have Kong deployed in Kubernetes on 6 pods, and we would like each resource to be cached once, not once per pod. One solution I see would be a single Redis in-memory DB, with failover to internal memory if Redis goes down. Does anyone have any experience with that?

The most problematic part for us is the rate-limiting plugin: we have it set to policy=cluster (the behaviour we want), but it causes higher latencies because it accesses the DB constantly. I know we could use Redis there, but as far as I know we would have to pass the Redis arguments every time we create a rate-limiting plugin - it seems we cannot point Kong at our own Redis deployment once and make redis the default policy instead of cluster.
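To illustrate, this is roughly what we would have to repeat for every rate-limiting plugin we create (a sketch against the Admin API; `example-service`, the limit, and the Redis host/port are placeholders for our setup):

```sh
# Hypothetical example: with policy=redis, every plugin creation has to
# carry the full Redis connection details, since they cannot be defaulted.
curl -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=100" \
  --data "config.policy=redis" \
  --data "config.redis_host=redis.internal" \
  --data "config.redis_port=6379" \
  --data "config.redis_timeout=2000"
```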

Check out https://docs.konghq.com/hub/kong-inc/proxy-cache/

Thanks @Cooper, but I don’t think that’s what I am looking for - that plugin caches whole proxied responses.

What I would like is to redirect Kong’s local caching of its own objects (consumers, credentials, services, rate limits, etc.) to a centralized store like Redis - one accessible by all pods in Kubernetes, so that multiple Kong nodes share a single cache store.

Hi @gasper

No, it is not possible to swap out Kong’s in-memory caching for an external datastore, and it is unlikely to ever be possible. If I understand you correctly, you wish to deploy Kong like so:

Kong <-> Redis <-> PostgreSQL/Cassandra

I have to say that I question the benefits of this pattern. It makes almost no sense not to cache configuration values in Kong, since:

  1. The value has to reach Kong’s memory at some point for Kong to act on it.
  2. If we did not cache it there, every subsequent request would have to hit Redis again, which seems suboptimal.
  3. Kong already handles cluster-wide cache invalidation appropriately, so there is little to worry about on that front.

Concerning the rate-limiting configuration question: yes, Redis is not the default rate-limiting policy. But standardizing your Admin API requests (by way of scripting, tooling, etc.) is an easy way to ensure that all of your configuration requests use Redis as the rate-limiting policy.
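For example, a small wrapper along these lines would guarantee that every rate-limiting plugin is created with your Redis details filled in (a sketch only; the Admin API address, Redis host, and limit are assumptions to adapt to your environment):

```sh
#!/bin/sh
# Hypothetical helper: attach a rate-limiting plugin to a service with the
# Redis policy and your Redis connection details applied by default.
ADMIN_API="${ADMIN_API:-http://localhost:8001}"
REDIS_HOST="${REDIS_HOST:-redis.internal}"

create_rate_limiting() {
  service="$1"  # name or id of the target service
  minute="$2"   # allowed requests per minute
  curl -sS -X POST "$ADMIN_API/services/$service/plugins" \
    --data "name=rate-limiting" \
    --data "config.minute=$minute" \
    --data "config.policy=redis" \
    --data "config.redis_host=$REDIS_HOST" \
    --data "config.redis_port=6379"
}

# Usage:
#   create_rate_limiting example-service 100
```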

higher latencies because it accesses the DB constantly

Indeed, which is why in-memory caching exists.

I then have to ask, what purpose would externalizing the cache serve?

Thanks @thibaultcha, and I concur. @gasper I suggest you’ll get good performance per dollar by allocating an adequately sized local cache (https://docs.konghq.com/0.14.x/configuration/#mem_cache_size) - yes, each node may cache the same entities, but this is fairly efficient, and there is no need (nor, as pointed out, any possibility) to also provision and manage Redis.
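For instance (illustrative only - 512m is an arbitrary value; size it to your number of entities and your pods’ memory limits):

```sh
# In kong.conf:
#   mem_cache_size = 512m
# Or, for containerized/Kubernetes deployments, the equivalent
# environment variable override:
export KONG_MEM_CACHE_SIZE=512m
```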

Thanks both @thibaultcha and @Cooper, really useful.

I guess we have a special case - we might be stretching Kong a bit. We have two deployments sharing a single PSQL database, because we need a centralized datastore for consumers and credentials, and one of the deployments has higher latency to the DB. This works fine for the most part; rate-limiting is the exception so far. That’s why I saw external centralized caching in Redis as one possible solution.

I guess we will have to start using the Redis policy for rate-limiting and think a bit more about how to bring down the latency of the second Kong deployment’s access to its PSQL DB (the DB is in a different region).

Are there any other ways to centralize the consumers data in the DB while keeping calls to it efficient and low-latency? I guess Cassandra would be a better option than PSQL for multi-region storage.