Cluster_events invalidation on INSERT

Hi, while generating a new oauth2_token by calling the API (POST http://kong:8000/myapi/oauth2/token), I noticed that a new invalidation entry is also inserted into the cluster_events table.
I'm wondering why, because the newly generated token is not yet in any node's cache; it was just created, not modified or deleted.
Looking at the code, I saw that a new invalidation entry seems to be inserted into the cluster_events table on every insert: https://github.com/Kong/kong/blob/92ce76c317d01b7cae9e64115f7c635dce662dba/kong/dao/dao.lua#L126

Thanks

Hi,

The Kong database cache also caches misses (negative hits). As such, an entity being created needs to be invalidated across the cluster in case a miss for it has previously been cached. This behavior is implemented at the DAO abstraction level and currently applies to all entities using it.
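A minimal sketch of why this is needed (illustrative Python, not Kong's actual Lua implementation): when a cache stores misses as sentinel values, a lookup that ran before the entity existed leaves a cached negative hit behind, and that cached miss would hide the freshly inserted row until its TTL expires, unless the INSERT broadcasts an invalidation.

```python
# Illustrative sketch of negative caching (not Kong's actual implementation).
MISS = object()  # sentinel marking a cached negative hit


class Cache:
    def __init__(self):
        self.store = {}

    def get(self, key, fetch):
        if key in self.store:
            value = self.store[key]
            return None if value is MISS else value
        row = fetch(key)                                # hit the database
        self.store[key] = row if row is not None else MISS  # cache misses too
        return row


db = {}
cache = Cache()

# A lookup before the token exists caches the miss itself.
assert cache.get("token:abc", db.get) is None

# The token is then created in the database...
db["token:abc"] = {"access_token": "abc"}

# ...but the cached miss still hides it:
assert cache.get("token:abc", db.get) is None

# This is why the INSERT must trigger an invalidation across nodes:
cache.store.pop("token:abc", None)
assert cache.get("token:abc", db.get) == {"access_token": "abc"}
```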


Hi, thanks for your reply.
By analyzing all the Postgres queries generated by our Kong nodes, we noticed many requests like this: https://pastebin.com/bn61PpLW, which also involve plugins that are not enabled (e.g. loggly, aws-lambda); our enabled plugins are oauth2, cors, tcp-log, request-size-limiting, and acl.
In a capture with 59,345 entries (each entry corresponding to a single SQL query), about 28,807 (~50%) involve the query:

SELECT (extract(epoch from created_at)*1000)::bigint as created_at, "config", "id", "enabled", "name", "api_id", "consumer_id" FROM plugins WHERE "name" = '...' AND "consumer_id" = '...'

We noticed this behaviour after increasing db_cache_ttl from 1 day to 1 week: the number of connections to Postgres did not decrease accordingly (we expected cache misses to decline after a certain period).
Maybe this analysis is incomplete and we are missing something about Kong's logic, but does this seem correct to you?
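For what it's worth, the lookup pattern producing those queries can be sketched roughly like this (illustrative Python; an assumption about the per-request behavior, not Kong's actual Lua code): every loaded plugin name is probed for each (api_id, consumer_id) combination, so plugins that are never configured still generate a database query on the first lookup, after which the miss itself is cached until its TTL expires.

```python
# Rough illustration of per-request plugin resolution (an assumption
# about Kong's behavior, not its actual code). Plugin names such as
# loggly and aws-lambda are probed even if never configured; each probe
# that finds nothing is a database query until the miss is cached.
LOADED_PLUGINS = ["oauth2", "cors", "tcp-log", "loggly", "aws-lambda"]


def load_plugin_config(name, api_id, consumer_id, cache, query_log):
    key = (name, api_id, consumer_id)
    if key in cache:
        return cache[key]       # served from cache (positive or negative)
    query_log.append(key)       # this is the SELECT ... FROM plugins query
    row = None                  # pretend no plugin row is configured
    cache[key] = row            # the miss is cached too
    return row


cache, queries = {}, []
for _ in range(2):              # simulate two incoming requests
    for name in LOADED_PLUGINS:
        load_plugin_config(name, "api-1", "consumer-1", cache, queries)

# Only the first request hits the database; the second is fully cached.
assert len(queries) == len(LOADED_PLUGINS)
```

Under this model, raising db_cache_ttl would only reduce query volume once entries stop expiring, so a steady query rate could also point at something else repeatedly invalidating or bypassing the cache.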