I have been trying to implement /logout functionality for cached entities with the mlcache lib, one that cleans a cached entry out of every worker on a node (and ideally across multiple clustered nodes, if possible). I modified my code as shown below, adding an update() call so the delete gets propagated to all workers on a single node, but I am thinking Kong may already handle this so I don't need to?
First I looked at this readme:
Then I went back and started looking over here:
local singletons = require "kong.singletons"

if conf.user_info_cache_enabled then
  local ok, err = singletons.cache:delete(encrypted_token)
  if not ok then
    ngx.log(ngx.ERR, "failed to delete cache entry: ", err)
    return responses.HTTP_NOT_FOUND()
  end
end
Maybe that delete should actually be cache:invalidate(key) or cache:invalidate_local(key)? This cached value could technically reside in memory on just one node (it is not stored in the db as a value, though storing the token in the db and propagating it across nodes like oauth2 does may be a road-map feature), or in memory on multiple nodes. Not sure if this is a problem for Kong, so I figured I would ask here. I would prefer to use :invalidate if it is safe to do so even when the key does not exist on the other nodes (or does).
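For reference, here is my rough understanding of the two calls from reading kong/cache.lua (a sketch of the semantics as I read them, not verified against every Kong version):

```lua
-- invalidate_local(key): evicts the key on this node only; it deletes the
-- mlcache entry and posts a worker_events event so every nginx worker of
-- this node drops its L1 (LRU) copy as well.
singletons.cache:invalidate_local(encrypted_token)

-- invalidate(key): calls invalidate_local(key) first, then broadcasts an
-- "invalidations" cluster_events record through the datastore so the other
-- nodes of the cluster eventually evict the key too.
singletons.cache:invalidate(encrypted_token)
```

If that reading is right, :invalidate would be the safe choice for a /logout, and invalidating a key that other nodes never cached should simply be a no-op on them.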
The other snippet of my code was updated to this:
local function getKongKey(eoauth_token, access_token, callback_url, conf)
  -- make sure L1 cache is evicted of stale values before calling get()
  local ok, err1 = singletons.cache:update()
  if not ok then
    ngx.log(ngx.ERR, "failed to poll eviction events: ", err1)
  end

  local userInfo, err = singletons.cache:get(eoauth_token, { ttl = 28800 }, getUserInfo, access_token, callback_url, conf)
  if err then
    ngx.log(ngx.ERR, "Could not retrieve UserInfo: ", err)
    return
  end

  return userInfo
end
The :update call is currently throwing an "attempt to call method 'update' (a nil value)" error, so I am assuming it is not needed (and does not even exist) on the Kong mlcache singleton?
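For context, update() does exist on the raw lua-resty-mlcache library (its README recommends calling it before get() to poll inter-worker eviction events), but it only works on an instance created with an ipc_shm option, which Kong's wrapper wires up internally. A sketch of the raw library usage, with placeholder cache and shm names:

```lua
local mlcache = require "resty.mlcache"

-- "my_cache", "cache_shm" and "ipc_shm" are hypothetical names; the dicts
-- must be declared with lua_shared_dict in the nginx configuration.
local cache, err = mlcache.new("my_cache", "cache_shm", {
  lru_size = 1000,
  ttl      = 28800,
  ipc_shm  = "ipc_shm", -- required for update()/delete() propagation
})
if not cache then
  error("failed to create mlcache: " .. err)
end

-- poll pending inter-worker eviction events before reading
local ok, err2 = cache:update()
```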
EDIT:
So, just studying the code, it seems that yes, I can call :invalidate() and that will create a record in the db which propagates the eviction to the other nodes in the cluster (and it should not hurt if the entry does not exist on other nodes). And the :update call is not needed in the second snippet, since Kong handles that itself during its broadcast and worker_events execution?
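Assuming the above is correct, the /logout branch would shrink to something like this (same conf flag and token variable as in my first snippet; as far as I can tell :invalidate logs its own errors internally, so I am not checking a return value):

```lua
local singletons = require "kong.singletons"

if conf.user_info_cache_enabled then
  -- evicts the token from all workers on this node and broadcasts a
  -- cluster_events record so the other nodes evict it as well
  singletons.cache:invalidate(encrypted_token)
end
```

And getKongKey() would just call singletons.cache:get() directly, with no :update() beforehand.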