mlcache error: any cause for alarm? Is my callback impl incorrect?

I have seen this error logged in stage and prod lately from some of our plugin code that leverages the singletons cache. It specifically happens when our OIDC provider returns a 401 while validating an access_token:

2018/11/27 16:02:01 [warn] 36#0: *26768119 [lua] mlcache.lua:780: get(): callback returned an error (response indicates failure, status=401, body=<!DOCTYPE html>
<html lang="en" dir="ltr">
    ...
    <div class="ping-messages ping-nopad">
        Looks like something is not right. Please contact your administrator.
        <div class="ping-note-text">
            401 - Unauthorized
    ...
    <div class="ping-copyright">Copyright © 2003-2017. Ping Identity Corporation. All rights reserved.</div>
    ...
) but stale value found in shm will be resurrected for 30s (resurrect_ttl), client: **.***.*.***, server: kong, request: "POST /api/service/v1.0 HTTP/1.1", host: "", referrer: " "

(HTML error page body trimmed for readability; the provider returns its full branded 401 page as the response body.)

Now my code looks like this:

-- call the userInfo endpoint with access_token
local json, err = singletons.cache:get(access_token, { ttl = 1800 },
                                       openidc_call_userinfo_endpoint,
                                       opts, access_token)
return json, err

And callback logic like so:

local http = require("resty.http")

-- make a call to the userinfo endpoint
local function openidc_call_userinfo_endpoint(opts, access_token)
  if not opts.discovery.userinfo_endpoint then
    ngx.log(ngx.DEBUG, "no userinfo endpoint supplied")
    return nil, nil
  end

  local headers = {
    ["Authorization"] = "Bearer " .. access_token,
  }

  ngx.log(ngx.DEBUG, "authorization header '" .. headers.Authorization .. "'")

  local httpc = http.new()
  openidc_configure_timeouts(httpc, opts.timeout)
  openidc_configure_proxy(httpc, opts.proxy_opts)
  local res, err = httpc:request_uri(opts.discovery.userinfo_endpoint, {
    headers = headers,
    ssl_verify = (opts.ssl_verify ~= "no")
  })
  if not res then
    err = "accessing (" .. opts.discovery.userinfo_endpoint .. ") failed: " .. err
    return nil, err
  end

  ngx.log(ngx.DEBUG, "userinfo response: ", res.body)

  -- parse the response from the user info endpoint
  return openidc_parse_json_response(res)
end

And lastly the openidc_parse_json_response() method:

-- parse the JSON result from a call to the OP
local function openidc_parse_json_response(response)

  local err
  local res

  -- check the response from the OP
  if response.status ~= 200 then
    err = "response indicates failure, status=" .. response.status .. ", body=" .. response.body
  else
    -- decode the response and extract the JSON object
    res = cjson_s.decode(response.body)

    if not res then
      err = "JSON decoding failed"
    end
  end

  return res, err
end

So in my case the underlying method populates res with the HTML error blob from the user validation check, and err is populated as well when the OIDC validation endpoint returns 401 Unauthorized. Does that mean mlcache is storing this response and that I am leveraging it incorrectly? Ideally I would like these giant error logs to go away from my Kong stdout, and I certainly don't want the singletons cache holding this dumb body of HTML error text if that's what it's doing. I don't believe the error is anything critical, since our OIDC proxies work fine from the client's perspective, but this error log on the 401s is bugging me :laughing: . I made sure to avoid doing things like ngx.exit() or producing responses from the callback itself (ngx.say()), as I was instructed in the past :slight_smile: .

When resurrect_ttl is configured, mlcache logs the underlying callback errors itself (unlike the default behavior, in which the callback error is returned as part of val, err = cache:get(...)). See:

Feel free to discard the error when returning from your callback, or throw the error if you want to catch it from cache:get() but do something else with it. Also beware that mlcache maintains a negative cache, so returning nil, nil will cause the miss to be cached for negative_ttl (which, in Kong, is the same as db_cache_ttl).
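To illustrate those two options, here is a minimal sketch (not from the thread; userinfo_cb is a hypothetical wrapper around the callback shown earlier):

-- Hypothetical wrapper sketching both ways of handling the callback error.
local function userinfo_cb(opts, access_token)
  local json, err = openidc_call_userinfo_endpoint(opts, access_token)
  if err then
    -- Option 1: discard the error so mlcache neither logs it nor
    -- resurrects a stale value for it. Beware: returning nil caches
    -- the miss for neg_ttl (db_cache_ttl in Kong).
    return nil

    -- Option 2: throw instead, so cache:get() reports the failure to
    -- the caller and you can handle it there:
    -- error(err)
  end
  return json
end

-- usage:
-- local json, err = singletons.cache:get(access_token, { ttl = 1800 },
--                                        userinfo_cb, opts, access_token)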

You also may want to consider using your own instance of mlcache for this, since the one provided by Kong is internal to Kong itself: it sets its own options and can be quite full with DB values.


Ah, so it sounds like what I really need is to make my own separate mlcache instance and not set resurrect_ttl at all for my specific configuration (I would never want the stale cached value returned on those error calls, since it would be the user's info from when their access_token was still valid, and we proxy that along on successful transactions). Is there any challenge in declaring mlcache instances for a specific Kong plugin, or are there any examples that do this? I will just give it a go, but thanks for clarifying :slight_smile: !

To anyone who also wants to leverage their own awesome mlcache that thibaultcha has engineered, here is some sample code that I hope gets people off the ground running. Someone can correct me if I have questionable values, but I believe them to be sufficient:

In your Kong custom template you will need 3 new shm's to get the best performance:

# exclusive oidc shm caches
lua_shared_dict kong_oidc_cache      10m;
lua_shared_dict kong_oidc_cache_miss  6m;
lua_shared_dict kong_oidc_cache_locks 3m;

In your plugin's init_worker you need something like this:

local mlcache = require "resty.mlcache"

function OidcHandler:init_worker()
  local oidccache, err = mlcache.new("kong_oidc_cache", "kong_oidc_cache", {
    shm_miss        = "kong_oidc_cache_miss",
    shm_locks       = "kong_oidc_cache_locks",
    shm_set_retries = 3,     -- default standard
    lru_size        = 1000,  -- 1000 records in L1 cache
    ttl             = 43200, -- 12 hour ttl
    neg_ttl         = 30,    -- cache misses for 30s
    resty_lock_opts = { exptime = 10, timeout = 5 },
  })
  if not oidccache then
    ngx.log(ngx.ERR, "failed to instantiate oidc mlcache: ", err)
  end
  kong.oidc_cache = oidccache
end

Then to leverage it in an access phase:

-- call the userInfo endpoint with access_token
local json, err = kong.oidc_cache:get(access_token, { ttl = 1800 },
                                      openidc_call_userinfo_endpoint,
                                      opts, access_token)
return json, err

I left out the optional ipc table, since it seems unnecessary if you're leveraging just the get() functionality; you only need it if you use the other methods, which require cross-worker communication for invalidation.
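For completeness, here is a hedged sketch of what opting into IPC would look like if invalidation were ever needed; "kong_oidc_cache_ipc" is a hypothetical extra lua_shared_dict, not part of the config above:

-- Sketch only: enable mlcache's built-in IPC by pointing ipc_shm at a
-- dedicated shm, then poll for invalidation events in each worker.
local mlcache = require "resty.mlcache"

local oidccache, err = mlcache.new("kong_oidc_cache", "kong_oidc_cache", {
  ipc_shm = "kong_oidc_cache_ipc", -- hypothetical extra lua_shared_dict
})

-- In the access phase (or a recurring timer), process pending
-- cross-worker events so delete()/purge() take effect everywhere:
-- local ok, err = oidccache:update()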

I think it would also be nice if, in the future, the Kong PDK gave a clean way to declare these mlcache instances without the code bulk :slight_smile: .