Paging @thibaultcha, as I know mlcache is your baby. This may date back to your suggestion in:
Breaking out separate lua_shared_dicts to prevent the risk of rate-limiting entries filling up the standard kong_cache.
Now onto my findings:
So I use a custom OIDC plugin where I cache the userInfo returned from an OIDC lookup, taking a session cookie as the key and the userinfo JSON as the value. Originally I wanted to break out where that cache gets stored too, so I added:
/blob/master/templates/optum_nginx.template
lua_shared_dict kong 5m;
lua_shared_dict kong_cache ${{MEM_CACHE_SIZE}};
# Rate limit isolated shared dict
lua_shared_dict kong_ratelimit 10m;
# Optum OIDC limit isolated shared dict
lua_shared_dict kong_oidcusers 10m;
optum-kong-oidc-plugin/blob/master/access.lua -
local singletons = require "kong.singletons"
--Exclusive shm for oidc users
--local shm = ngx.shared.kong_oidcusers
And code like so (you can see the singletons use vs. my own shm):
--local userInfo, err = shm:get(eoauth_token, { ttl = 28800 }, getUserInfo, access_token, callback_url, conf)
local userInfo, err = singletons.cache:get(eoauth_token, { ttl = 28800 }, getUserInfo, access_token, callback_url, conf)
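To illustrate the difference in call patterns I'm comparing, here is a plain-Lua mock (the two tables below are hypothetical stand-ins for illustration only, not the real ngx.shared dict or Kong's cache object):

```lua
-- Raw shared-dict style: get() only looks up the key; any extra
-- arguments (opts table, callback, ...) are simply ignored.
local raw_shm = {
  store = {},
  get = function(self, key)
    return self.store[key]
  end,
  set = function(self, key, value)
    self.store[key] = value
  end,
}

-- mlcache-style: get() invokes the callback on a miss and caches
-- the result, so the callback only runs once per key.
local mlcache_like = {
  store = {},
  get = function(self, key, opts, cb, ...)
    local value = self.store[key]
    if value == nil then
      value = cb(...)
      self.store[key] = value
    end
    return value
  end,
}

local function getUserInfo(token)
  return "userinfo-for-" .. token
end

-- Raw-shm pattern: the callback never runs, a miss just returns nil.
local v1 = raw_shm:get("cookie123", { ttl = 28800 }, getUserInfo, "tok")

-- mlcache pattern: the miss triggers the callback and populates the value.
local v2 = mlcache_like:get("cookie123", { ttl = 28800 }, getUserInfo, "tok")
```

This mock reproduces the behavior I describe below: the raw-shm call returns nil and never enters the callback, while the mlcache-style call populates and reuses the value.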
The behavior is this: using Kong's singletons.cache:get, the callback only has to run once and the value is then cached, behaving as expected. Using the shm I declared, it seems to never enter the callback…
With singletons.cache:get:
2018/04/10 18:16:38 [error] 54#0: *9490 [lua] access.lua:42: getKongKey(): Can we get it from the cache?, client: 10.129.2.1, server: kong, request: "GET /oidc/test HTTP/1.1", host: "gateway.com"
2018/04/10 18:16:39 [error] 54#0: *9490 [lua] access.lua:27: In the Callback, not cached!, client: 10.129.2.1, server: kong, request: "GET /oidc/test HTTP/1.1", host: "gateway.com"
2018/04/10 18:16:39 [error] 54#0: *9490 [lua] access.lua:169: run(): Iterate on needed keys in header!, client: 10.129.2.1, server: kong, request: "GET /oidc/test HTTP/1.1", host: "gateway.com"
2018/04/10 18:17:03 [error] 55#0: *11643 [lua] access.lua:169: run(): Iterate on needed keys in header!, client: 10.130.4.1, server: kong, request: "GET /oidc/test HTTP/1.1", host: "gateway.com"
2018/04/10 18:17:06 [error] 55#0: *11643 [lua] access.lua:42: getKongKey(): Can we get it from the cache?, client: 10.130.4.1, server: kong, request: "GET /oidc/test HTTP/1.1", host: "gateway.com"
2018/04/10 18:17:06 [error] 55#0: *11643 [lua] access.lua:169: run(): Iterate on needed keys in header!, client: 10.130.4.1, server: kong, request: "GET /oidc/test HTTP/1.1", host: "gateway.com"
With shm:
Can we get it from the cache?, client: 10.129.2.1, server: kong, request: "GET /oidc/test HTTP/1.1", host: "gateway.com"
2018/04/10 21:53:23 [error] 55#0: *63306 lua coroutine: runtime error: ...e/lua/5.1/kong/plugins/optum-kong-oidc-plugin/access.lua:166: attempt to concatenate local 'userInfo' (a nil value)
^ That repeats on every request, as if it never makes it into the callback at all, and the object is nil when I print it outside the function as the local result of the getUserInfo() call through the cache.
Is this because my plugin declares the shm in its access.lua rather than in the plugin's init? Or is kong.singletons doing something special with its kong_cache that separately declared shms don't take advantage of?
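If the answer turns out to be that the raw shm simply has no callback support, then maybe what I actually want is to build my own mlcache instance on top of my shm, assuming Kong's cache is backed by lua-resty-mlcache. An untested sketch (the "oidc_users" name and lru_size are my own guesses):

```lua
-- Sketch: wrap my custom shm in an mlcache instance instead of
-- calling the raw shared dict directly. Requires OpenResty with
-- lua-resty-mlcache available.
local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("oidc_users", "kong_oidcusers", {
  lru_size = 1000,   -- per-worker LRU size (my guess at a sane value)
  ttl      = 28800,  -- default TTL, matching the opts I use today
})
if not cache then
  error("failed to create oidc mlcache: " .. err)
end

-- Then the callback pattern should behave like singletons.cache:get:
-- local userInfo, err = cache:get(eoauth_token, nil, getUserInfo,
--                                 access_token, callback_url, conf)
```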
EDIT - I just reread your topic because I see you have made more changes to your rate-limiting suggestion since I last read that git issue. Maybe I need to add these to constants too:
kong/constants.lua
I will try a few more changes to see if I can get it working the way the default shm does. If that fails, I may need a little more guidance. Will update later tonight.
Thanks,
Jeremy