LRU_SIZE option

Does the LRU_SIZE option (in the kong/cache.lua file) define the size of the L1 cache, the L2 cache, or both?

L1 only. It is not configurable on purpose, because it would be very easy to shoot yourself in the foot with it and set a value so high that it would result in Lua VM out-of-memory crashes.
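
To illustrate what that per-worker L1 bound means, here is a minimal sketch using lua-resty-lrucache (a worker-local, Lua-land LRU cache); the size and keys are made up for the example:

```lua
-- Minimal sketch of what the L1 bound means, using lua-resty-lrucache;
-- LRU_SIZE and the keys are illustrative, not Kong's actual values.
local lrucache = require "resty.lrucache"

local LRU_SIZE = 2
local l1 = assert(lrucache.new(LRU_SIZE))  -- at most LRU_SIZE items per worker

l1:set("a", 1)
l1:set("b", 2)
l1:set("c", 3)           -- the least recently used key ("a") is evicted

ngx.say(l1:get("a"))     -- nil: gone from L1 (it may still live in L2, the shm)
ngx.say(l1:get("c"))     -- 3
```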

I see. What about L2? If it isn't limited by a number defining its size, what limits it? The MEM_CACHE_SIZE env variable? If L2 is only limited by MEM_CACHE_SIZE, how does it work in the following case:

Imagine that we have LRU_SIZE = 10 and we are caching 20 elements. As I understand it, at the end the last ten elements (11 to 20) are in L1, but what is in L2?

  1. All 20 elements are in L2 (as the memory limit has not been reached), OR
  2. Only the last ten elements (11 to 20) are in L2, because an LRU algorithm removes least recently used elements from L2 as well, OR
  3. Something different?

Could you give me answers to the questions above?

BTW, here is the background of the problem I am trying to solve: I set LRU_SIZE = 1 to check something and sent 3 requests.

  • Step 1: The first request invoked the cache to get a value for key = 1. The key was not in the cache, so the cache ran L3 to fetch and cache the value.
  • Step 2: The second request invoked the cache to get a value for key = 2. The key was not in the cache, so the cache ran L3 to fetch and cache the value.
  • Step 3: The third request invoked the cache to get a value for key = 1 again. The key was in the cache, so it returned the value from the cache.

I expected that it would go to L3 in step 3, as the LRU algorithm should have removed key = 1 from the cache in step 2 when key = 2 replaced it. Right now the only explanation that comes to my mind is that it fetched the value for key = 1 from L2 in step 3, but I'd like to know why that key was still in L2, if what I wrote is true.

The L2 cache as you refer to it is indeed defined by the mem_cache_size configuration value.
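
In other words, L2 is not bounded by a number of entries but by the byte size of the underlying lua_shared_dict, which Kong sizes from mem_cache_size. Here is a rough sketch of what that looks like from Lua; the dict name is just an example, not Kong's, and it would have to be declared in the nginx configuration (e.g. `lua_shared_dict my_cache_shm 128m;`):

```lua
-- Hedged sketch: L2 is a lua_shared_dict, shared by all workers of the node.
-- Its size is fixed in the nginx configuration (Kong derives it from
-- mem_cache_size); "my_cache_shm" is an illustrative name.
local shm = ngx.shared.my_cache_shm

-- When the dict runs out of memory, set() evicts least recently used entries
-- to make room, so L2 is also bounded -- by bytes rather than by item count.
local ok, err, forcible = shm:set("some_key", "some_value")
if forcible then
  ngx.log(ngx.WARN, "shm is full: older entries were evicted to store this one")
end
```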

The database cache is actually built on the lua-resty-mlcache library, whose documentation you should read if you want to get familiar with how it is built.

Considering your use case, the 3rd step does not trigger the callback (or L3) because the value is still cached in L2 (the lua_shared_dict).
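
For completeness, here is an illustrative sketch of that lookup order with mlcache itself; the cache name, shm name, and callback are assumptions for the example (the shm must be declared as a lua_shared_dict in the nginx config):

```lua
-- Illustrative sketch of the L1 -> L2 -> L3 lookup order; names are examples.
local mlcache = require "resty.mlcache"

-- "my_shared_dict" must exist as a `lua_shared_dict` in the nginx config.
local cache = assert(mlcache.new("my_cache", "my_shared_dict", {
  lru_size = 1,  -- mimics the LRU_SIZE = 1 experiment above
}))

local function fetch_from_db(key)  -- the "L3" callback
  ngx.log(ngx.NOTICE, "L3 called for key ", key)
  return "value-for-" .. key
end

cache:get("1", nil, fetch_from_db, "1")  -- miss: L3 runs, value stored in L2 and L1
cache:get("2", nil, fetch_from_db, "2")  -- miss: L3 runs, "1" is evicted from L1 only
cache:get("1", nil, fetch_from_db, "1")  -- hit: no L3, "1" is copied back from L2 to L1
```

That matches what you observed: with an L1 of size 1, the second get evicts key 1 from the worker LRU but not from the shared dict, so the third get is served from L2 without running the callback.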