Prometheus Plugin - Cardinality

#1

Hi,

I’ve been using Kong with the Prometheus plugin from the beginning, and everything has been fine. As the system evolves, I will have around 1000 services, with a tendency to increase.

Right now I have these metrics for a service:

kong_latency_bucket{type="kong",service="auth-runtime",le="00001.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00002.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00005.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00007.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00010.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00015.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00020.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00025.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00030.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00040.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00050.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00060.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00070.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00080.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00090.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00100.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00200.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00300.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00400.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00500.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="01000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="02000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="05000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="10000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="30000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="60000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="+Inf"} 4

The cardinality will increase a lot:

types * services * latency buckets
1 * 1000 * 27 = 27000
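To make the arithmetic explicit, here is a small sketch of that series-count estimate. The 1000 services and the single `type` value are assumptions taken from the numbers above; the 27 `le` values are the plugin's 26 finite buckets plus the implicit `+Inf` bucket.

```python
# Rough estimate of the number of time series kong_latency_bucket
# will produce, multiplying the cardinality of each label dimension.
type_values = 1   # only type="kong" counted here (assumption)
services = 1000   # expected number of Kong services (assumption)
le_values = 27    # 26 finite buckets + the implicit "+Inf" bucket

series = type_values * services * le_values
print(series)  # 27000
```

Each additional label value multiplies, rather than adds to, the series count, which is why histograms across many services grow so fast.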

  1. A cardinality explosion is the sudden, rapid creation of new series due to one or more labels on one or more metrics being populated with high-cardinality data.

  2. High-cardinality data is any data that, when placed into a proper set, has a high number of discrete elements. In this context, we care about cardinalities in the tens-of-thousands and up.

Is there any way to control this?

Can I set this via configuration?

local DEFAULT_BUCKETS = { 1, 2, 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, 70,
                          80, 90, 100, 200, 300, 400, 500, 1000,
                          2000, 5000, 10000, 30000, 60000 }
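As a quick illustration of why configurable buckets help, here is a sketch comparing the default bucket list above against a hypothetical shorter one. The reduced list is only an example, not a Kong default.

```python
# Sketch: how trimming the histogram bucket list reduces series count.
# DEFAULT_BUCKETS mirrors the Lua table above; REDUCED_BUCKETS is a
# hypothetical, coarser list chosen for illustration only.
DEFAULT_BUCKETS = [1, 2, 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, 70,
                   80, 90, 100, 200, 300, 400, 500, 1000,
                   2000, 5000, 10000, 30000, 60000]
REDUCED_BUCKETS = [10, 50, 100, 500, 1000, 5000, 60000]  # hypothetical

services = 1000  # assumed service count

def series_count(buckets, services):
    # +1 for the implicit "+Inf" bucket every Prometheus histogram has
    return services * (len(buckets) + 1)

print(series_count(DEFAULT_BUCKETS, services))  # 27000
print(series_count(REDUCED_BUCKETS, services))  # 8000
```

Dropping from 26 finite buckets to 7 cuts the series count for this one metric by roughly two thirds, at the cost of coarser latency quantile estimates.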


#2

Hello @flowdopip,

This is indeed a problem that exists currently.
The solution here will be to introduce configuration in the plugin, so that the buckets, and even which metrics are recorded and exposed, can be configured.

Meanwhile, if you don’t need metrics for every service, you can apply the plugin to only a subset of your services, which will reduce the cardinality a little.


#3

Hi,

I want metrics for my services, but I need to reduce the cardinality, because I have around 500 services and this will create a lot of variations.

I’ve forked the original prometheus repo, and now I’m able to set up the bucket list per service. I can also add a feature to activate/deactivate each metric by configuration.

Another thing I will try to do is expose Lua/OpenResty metrics.

Does this make sense to you? Could this be a pull request for the original repo?

Thanks


#4

What do you have in mind? Could you elaborate?


#5

Hi,

The idea is to expose all the metrics from the Kong stack: Nginx, OpenResty, and Lua.


#6

I see, but which metrics are you referring to?
Could you please be more specific as to which exact metrics you’d like to add?
I’ve thought about adding shm-related metrics that we can get from OpenResty.

I’m very interested in knowing more about what you have in mind.
