Hello,
We have Kong as our Kubernetes ingress, and our idea is to use the metrics from Kong to detect whether one of the underlying services is having issues. One thing we are facing is that the counters show huge jumps up and down between scrapes.
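For reference, the detection we have in mind is a per-service error rate built on the status counter from Kong's prometheus plugin; roughly this kind of PromQL (the 5m window and the 5xx filter are just an example):

    # 5xx responses per second, per upstream service
    sum by (service) (rate(kong_http_status{code=~"5.."}[5m]))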
I thought the scrape interval was misconfigured, but changing it to 20s didn't change the collected values at all. Right now the job is configured as:
- job_name: kong-metrics
  honor_timestamps: true
  scrape_interval: 20s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  static_configs:
    - targets:
        - 10.0.26.33:8001
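To sanity-check the raw values outside Prometheus, the counters can also be curled straight from that endpoint between scrapes (the grep pattern is just an example):

    # fetch the raw counters directly from the admin API
    curl -s http://10.0.26.33:8001/metrics | grep '^kong_http_status'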
We are scraping the Kong admin API at the moment, but there are a few other Services:
NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
gateway-kong-admin                NodePort       10.0.26.33    <none>        8001:30757/TCP               280d
gateway-kong-proxy                LoadBalancer   10.0.30.27    34.70.66.18   80:32457/TCP,443:30608/TCP   280d
gateway-kong-validation-webhook   ClusterIP      10.0.17.255   <none>        443/TCP                      280d
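One thing we are wondering about: 10.0.26.33 is the Service ClusterIP, so every scrape goes through kube-proxy and may be answered by a different replica each time. A per-pod job would look roughly like this (a sketch, assuming the standard kubernetes_sd_configs pod role and that each pod serves /metrics on port 8001):

- job_name: kong-pods
  scrape_interval: 20s
  metrics_path: /metrics
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - api
  relabel_configs:
    # keep only the Kong pods
    - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
      regex: kong
      action: keep
    # point the scrape at the admin port on each pod IP
    - source_labels: [__meta_kubernetes_pod_ip]
      replacement: '$1:8001'
      target_label: __address__

These are the Kong pods behind that Service: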
kubectl -n api get pods -l "app.kubernetes.io/name=kong" --field-selector "status.phase=Running"
NAME READY STATUS RESTARTS AGE
gateway-kong-574f6d4f88-c2b6v 2/2 Running 0 5d16h
gateway-kong-574f6d4f88-j8hb9 2/2 Running 0 3d19h
gateway-kong-574f6d4f88-jmdz5 2/2 Running 0 4d20h
gateway-kong-574f6d4f88-x2xfg 2/2 Running 0 5d16h
gateway-kong-574f6d4f88-zq7x6 2/2 Running 0 46h
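To compare counters between replicas, a single pod can be port-forwarded and curled directly (pod name taken from the list above):

    kubectl -n api port-forward gateway-kong-574f6d4f88-c2b6v 8001:8001
    # in a second terminal: fetch the same counter from just this replica
    curl -s http://localhost:8001/metrics | grep '^kong_http_status'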
We are currently running Kong 2.2.2. Any ideas why we are seeing this behavior?