Custom JWT plugin, refresh tokens, and some weirdness

Hi,
I am writing a custom JWT plugin that embeds the concept of refresh tokens, using the provided JWT parser and extending the existing JWT logic. I am able to call another service whenever my tokens need refreshing (say, every 15 minutes) by writing some custom code on top of it, something like:

local ok_refresh_time, errors = check_refresh_time(claims.refresh_time)
if not ok_refresh_time then
  -- the short-term token has expired; fetch a fresh one from the auth service
  local new_jwt_token, status_code = getTokenFromAuthService(claims)
end
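
For context, a minimal sketch of what a check like check_refresh_time could look like; the refresh_time claim holding a Unix timestamp and the 15-minute window are assumptions based on the description above:

-- hypothetical helper: returns true while the short-term token is still fresh
local REFRESH_WINDOW = 15 * 60  -- 15 minutes, per the description above

local function check_refresh_time(refresh_time)
  if type(refresh_time) ~= "number" then
    return false, "missing or invalid refresh_time claim"
  end
  if ngx.time() - refresh_time >= REFRESH_WINDOW then
    return false, "refresh window elapsed"
  end
  return true
end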

Everything seems to work fine until I observe the following:
I hit the API with auth headers and get the expected response: update_token = false, logged_in = true, and the JWT present in the headers.
I hit the same request again within the same second, and the response suddenly changes, which should not happen for another 15 minutes (the short-term token expiry). Instead, the same auth token is put into the headers of the new response, and update_token is true. The code that sets a new token should not execute until the 15 minutes are up, yet I get two different responses within the same second.

I am really confused by this behaviour.
Does the Lua plugin cache responses in any way?
I am using header_filter() to return the new JWT to clients after I have received the response from the backend server. Does that change anything?
I am setting update_token and the generated token as global variables in Lua: I set them in the access phase and read them in header_filter to update the response headers (since the backend server won’t be sending them).

Something like:

-- update_token and generated_jwt are set during the access phase
local function update_token_headers()
  if update_token == true then
    ngx.header["update_token"] = "true"  -- header values should be strings
    if generated_jwt ~= nil then
      ngx.header["Authorization"] = "Bearer " .. generated_jwt
    end
  else
    ngx.header["update_token"] = "false"
  end
end

function JwtHandler:header_filter(config)
  JwtHandler.super.header_filter(self)
  update_token_headers()
  build_logged_in_headers()
end
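
The header_filter side is shown above; for completeness, the access side that sets those variables would look roughly like this. Here, validate_and_parse_jwt is a hypothetical stand-in for the actual JWT parsing logic, which isn't shown:

function JwtHandler:access(config)
  JwtHandler.super.access(self)
  local claims = validate_and_parse_jwt()  -- hypothetical: the real JWT logic
  local ok_refresh_time = check_refresh_time(claims.refresh_time)
  if not ok_refresh_time then
    -- these assignments write to the module-level variables read in header_filter
    generated_jwt = getTokenFromAuthService(claims)
    update_token = true
  end
end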

I have added debug lines everywhere, and I don’t see the global variables being set anywhere in my logs, but it still shows this erroneous behaviour.
Any leads would be really helpful.

I think it is because I am defining the variable at the top of the module, outside the handler, something like:

local generated_jwt = nil

function Handler:new() ...

Maybe the variable is acting as static state, and I might have to reset it to nil every time.
So the next request returns the previous request’s values, since they are kept in memory.
Damn Lua!
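
A minimal sketch of that reset, assuming the variables are then re-populated by the JWT logic later in the same access call:

-- reset the module-level state at the start of every request
function JwtHandler:access(config)
  JwtHandler.super.access(self)
  update_token = false
  generated_jwt = nil
  -- ... the rest of the JWT/refresh logic runs afterwards and may set them again
end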

I’m glad that you found your problem. Sometimes verbalizing a problem helps in finding the solution.

Regarding globals in Lua: make sure to consider the scenario in which your plugin is executed inside a Kong instance with multiple workers, and in a cluster of Kong instances (each of them possibly having multiple workers). OpenResty does not automatically replicate changes to global variables to other workers or other Kong instances. In Kong we deal with this via the resty.worker.events and cluster_events modules.

Yeah, setting global variables for use in response transformation is a bad idea, I think; it maintains state at the plugin level and is not thread safe, as I found. Even on a single instance, concurrent requests will change them, resulting in erroneous behaviour. I will have to make it stateless and pass the values on to the upstream microservices, which will have to return them back to the clients. That’s the only way I think I could use this.
Thanks!

Hey, I was thinking: Kong internally maintains some kind of index/map to know the latency (and other metrics) of every request. But this plugin is becoming stateful (and not thread safe). Is there a way I could make it stateless but retain the same functionality that I am trying to achieve (storing a variable for every request to be used in header_filter for response header modification)?

Kong internally maintains some kind of index/map to know the latency (and other metrics) of every request.

Kong does as little as possible by default, in order to have as small a footprint as possible. There are several plugins that deal with analytics and monitoring.

Is there a way I could make it stateless but retain the same functionality that I am trying to achieve (storing a variable for every request to be used in header_filter for response header modification)?

You have several options. The first is using nginx’s shared dictionaries: a simple in-memory key-value store that persists between requests. They are shared across workers, but not across Kong nodes. You might need to customize Kong’s configuration to declare a custom shared dict.
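
As an illustration, assuming a shared dict named jwt_plugin_cache has been declared (e.g. a lua_shared_dict jwt_plugin_cache 5m; directive added through a custom nginx template), usage from the plugin could look like this; the consumer_id key is just an example:

local cache = ngx.shared.jwt_plugin_cache

-- store a value with a 15-minute TTL; visible to all workers on this node
local ok, err = cache:set("token:" .. consumer_id, generated_jwt, 15 * 60)
if not ok then
  ngx.log(ngx.ERR, "failed to cache token: ", err)
end

-- later, possibly on another request handled by a different worker
local cached_jwt = cache:get("token:" .. consumer_id)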

Another option is using the database. Your PostgreSQL/Cassandra instance is shared across all workers and nodes. However, writing to it is relatively slow. Depending on your use case, you might want to accumulate several calls using a shared dict and then periodically do one “flush” to the database, which is more efficient than doing one database access per request.
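
A rough sketch of that accumulate-and-flush pattern; flush_to_db is a hypothetical function standing in for the actual database write, and the timer would typically be registered once per worker (e.g. from the init_worker phase):

local cache = ngx.shared.jwt_plugin_cache

-- per request: just bump a counter in the shared dict (cheap, in-memory)
local newval, err = cache:incr("pending_events", 1, 0)  -- third argument is the initial value

-- periodic flush, registered once per worker
local function flush(premature)
  if premature then return end  -- worker is shutting down
  local count = cache:get("pending_events") or 0
  if count > 0 then
    flush_to_db(count)  -- hypothetical database write
    cache:set("pending_events", 0)
  end
end

local ok, err = ngx.timer.every(60, flush)  -- flush every 60 seconds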

You could also use global variables plus the worker/cluster events to maintain eventual consistency. This is what we do inside Kong (see the usage of worker/cluster events in https://github.com/Kong/kong/blob/master/kong/runloop/handler.lua#L296 ).
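
For reference, a minimal sketch of broadcasting a state change with lua-resty-worker-events (assuming the module has already been configured, as it is inside Kong; the source/event names and the current_state variable are made up for the example):

local worker_events = require "resty.worker.events"

-- every worker registers a handler and updates its own copy of the state
worker_events.register(function(data)
  current_state = data
end, "jwt-plugin", "token-updated")

-- the worker that changes the value broadcasts it to all other workers
local ok, err = worker_events.post("jwt-plugin", "token-updated", new_state)
if not ok then
  ngx.log(ngx.ERR, "failed to post event: ", err)
end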

Finally, you could use a combination of some or all of the above. There is no “silver bullet” strategy for dealing with the worker/cluster spread; each solution has its own strengths and weaknesses.