Kong counter metrics jumping in Prometheus


We run Kong as our Kubernetes ingress, and the idea is to use Kong's metrics to detect when one of the underlying services is having issues. One problem we are facing is that the counters show huge jumps up and down:

I thought the scrape interval was misconfigured, but changing it to 20s made no difference in the collected values. Right now the job is configured as:

- job_name: kong-metrics
  honor_timestamps: true
  scrape_interval: 20s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  static_configs:
    - targets:
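One thing we have considered but not yet tried is scraping each Kong pod directly instead of the admin Service, so that consecutive scrapes always hit the same pod's counter series. A rough sketch (untested; the job name, namespace filter, admin port, and the app.kubernetes.io/name: kong pod label are assumptions based on the standard Helm chart):

```yaml
- job_name: kong-pods
  scrape_interval: 20s
  metrics_path: /metrics
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names: [api]
  relabel_configs:
    # keep only the Kong pods (label name assumed from the Helm chart)
    - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
      regex: kong
      action: keep
    # rewrite the target address to the admin port on each pod
    - source_labels: [__address__]
      regex: '([^:]+)(?::\d+)?'
      replacement: '$1:8001'
      target_label: __address__
```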

We are scraping the admin Service at the moment, but there are a few other Services and several Kong pods behind them:

gateway-kong-admin                NodePort    <none>           8001:30757/TCP               280d
gateway-kong-proxy                LoadBalancer      80:32457/TCP,443:30608/TCP   280d
gateway-kong-validation-webhook   ClusterIP   <none>           443/TCP                      280d

kubectl -n api get pods -l "" --field-selector "status.phase=Running"
NAME                            READY   STATUS    RESTARTS   AGE
gateway-kong-574f6d4f88-c2b6v   2/2     Running   0          5d16h
gateway-kong-574f6d4f88-j8hb9   2/2     Running   0          3d19h
gateway-kong-574f6d4f88-jmdz5   2/2     Running   0          4d20h
gateway-kong-574f6d4f88-x2xfg   2/2     Running   0          5d16h
gateway-kong-574f6d4f88-zq7x6   2/2     Running   0          46h

We are currently running Kong 2.2.2. Any ideas why we are seeing this behavior?
