Hi,
I’ve been using Kong with the Prometheus plugin from the beginning and everything is fine. As the system evolves, I will have around 1,000 services, with a tendency to increase.
Right now I have these metrics for a service:
kong_latency_bucket{type="kong",service="auth-runtime",le="00001.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00002.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00005.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00007.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00010.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00015.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00020.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00025.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00030.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00040.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00050.0"} 2
kong_latency_bucket{type="kong",service="auth-runtime",le="00060.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00070.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00080.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00090.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00100.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00200.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00300.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00400.0"} 3
kong_latency_bucket{type="kong",service="auth-runtime",le="00500.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="01000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="02000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="05000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="10000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="30000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="60000.0"} 4
kong_latency_bucket{type="kong",service="auth-runtime",le="+Inf"} 4
The cardinality will increase a lot:
types * services * latency buckets
1 * 1000 * 27 = 27,000 series for this metric alone
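The estimate above can be sketched as a quick calculation (the numbers are my figures from this post, not measured values):

```python
# Rough series-count estimate for kong_latency_bucket.
types = 1         # only type="kong" is shown above; more type values multiply this further
services = 1000   # ~1,000 services expected
buckets = 26 + 1  # 26 finite le buckets in DEFAULT_BUCKETS, plus the +Inf bucket

series = types * services * buckets
print(series)  # 27000
```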
-
A cardinality explosion is the sudden rapid creation of new series due to one or more labels on one or more metrics being populated with high-cardinality data.
-
High-cardinality data is any data that, when placed into a proper set, has a high number of discrete elements. In this context, we care about cardinalities in the tens-of-thousands and up.
Is there any way to control this?
Can I change these buckets via configuration?
local DEFAULT_BUCKETS = { 1, 2, 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, 70,
                          80, 90, 100, 200, 300, 400, 500, 1000,
                          2000, 5000, 10000, 30000, 60000 }
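One mitigation that works regardless of whether Kong exposes the buckets as configuration: drop some bucket series at scrape time with Prometheus's metric_relabel_configs. Because histogram buckets are cumulative, histogram_quantile still works on the remaining le values, just with coarser resolution. This is a sketch, not Kong configuration; the target address is hypothetical and the le values to drop are examples taken from the output above:

```yaml
scrape_configs:
  - job_name: kong
    static_configs:
      - targets: ["kong-host:8001"]   # hypothetical scrape target
    metric_relabel_configs:
      # Drop the finer-grained buckets of kong_latency_bucket before ingestion.
      # source_labels are joined with ";" by default before matching.
      - source_labels: [__name__, le]
        regex: kong_latency_bucket;(00002\.0|00007\.0|00015\.0|00025\.0|00040\.0|00070\.0|00090\.0|00200\.0|00400\.0|02000\.0|30000\.0)
        action: drop
```

This roughly halves the per-service bucket count for that metric without touching Kong itself.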