kong nginx timers


The Prometheus query kong_nginx_timers exposes 0 running timers and a static number of pending timers all the time. Any explanations? How can we test this? What query should we use to alert on too many pending timers?


Interested in this too.

We use Kong 2.6 and ran into "too many pending timers" errors.

Please provide an explanation of how to monitor this using the Prometheus plugin in later Kong versions.

The standard pending timer limit (which you can’t easily change) is 16384. You can set alerts at whatever threshold of that makes sense for you.
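As a sketch, a Prometheus alerting rule against the kong_nginx_timers metric could look like this. The 80% threshold, rule name, and severity label are assumptions you should adjust; the metric's state label ("running"/"pending") comes from Kong's Prometheus plugin:

```yaml
groups:
  - name: kong-timers
    rules:
      - alert: KongPendingTimersHigh
        # kong_nginx_timers carries a `state` label ("running" or "pending");
        # 16384 is the default OpenResty cap on pending timers.
        expr: kong_nginx_timers{state="pending"} > 0.8 * 16384
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Kong pending timers above 80% of the 16384 limit"
```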

If you’re actually hitting it and aren’t sure why, however, you’ll want to try Issues · Kong/kong · GitHub instead–the timer system is internal to the gateway code, not part of the Kubernetes tooling.

Thanks for the reply, traines.

Any idea why the timers are static when we check the numbers with Prometheus?
How can we make the timers ‘move’ so we can test that everything works?

Timer counts generally are static: there’s a defined set of timers used to handle recurring tasks (polling for config updates, regenerating internal data structures, etc.) that run indefinitely until the process stops. ngx.timer.at timers only execute once, but these timers re-create themselves inside their own handler function–I’m not quite sure why we use that instead of ngx.timer.every.
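A minimal sketch of that self-rescheduling pattern in OpenResty Lua (the handler and the 5-second delay are illustrative, not Kong’s actual code):

```lua
local DELAY = 5  -- seconds between runs; illustrative value

local handler
handler = function(premature)
  -- premature is true when the worker is shutting down
  if premature then
    return
  end

  -- ... do the recurring work here ...

  -- re-create the same one-shot timer so the task recurs;
  -- this keeps exactly one pending timer alive per task,
  -- which is why the pending count stays constant
  local ok, err = ngx.timer.at(DELAY, handler)
  if not ok then
    ngx.log(ngx.ERR, "failed to reschedule timer: ", err)
  end
end

-- kick off the first run
local ok, err = ngx.timer.at(DELAY, handler)
if not ok then
  ngx.log(ngx.ERR, "failed to create timer: ", err)
end
```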

There are also a number of calls like ngx.timer.at(0, do_something) that we use to split off background tasks–for example, most of the log plugins do this to actually send data to the logging system, so that the request handler can terminate and free its resources. These execute almost immediately and probably won’t be counted because their lifetime is so brief.
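That detached-task pattern looks roughly like the following sketch (the function and variable names are illustrative, not taken from an actual Kong plugin):

```lua
-- inside a log plugin's log() phase handler: hand the payload
-- off to a zero-delay timer so the request can finish at once
local function send_to_log_server(premature, payload)
  if premature then
    return
  end
  -- ... open a cosocket and ship `payload` to the log sink ...
end

local ok, err = ngx.timer.at(0, send_to_log_server, payload)
if not ok then
  ngx.log(ngx.ERR, "failed to create log timer: ", err)
end
```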

Offhand I don’t know a good built-in way to force an increase in timer counts. You could add a function plugin that calls ngx.timer.at with a long delay, which would start a new timer every time you hit that route. ngx.today should be fine as a no-op callback function.
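As a sketch of that suggestion, the body of a pre-function (serverless) plugin could be as small as this; the 3600-second delay is an arbitrary choice, and each request through the route adds one timer that sits in the "pending" state until it fires:

```lua
-- body of a Kong pre-function/post-function plugin:
-- schedule a one-shot timer an hour out; ngx.today ignores
-- the premature argument it receives, so it is a safe no-op
local ok, err = ngx.timer.at(3600, ngx.today)
if not ok then
  ngx.log(ngx.ERR, "failed to create timer: ", err)
end
```

Hitting the route repeatedly should then make kong_nginx_timers{state="pending"} climb, which gives you something to test your alert against.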