Prometheus plugin: Get metrics: connection refused

Hello there,

I deployed Kong on my ARM Kubernetes cluster using:
helm install kong/kong --namespace kong --name kong --values values.yaml --set ingressController.installCRDs=false
My values file looks like this:

podAnnotations:
  prometheus.io/scrape: "true" # Ask Prometheus to scrape the
  prometheus.io/port: "9542"   # Kong pods for metrics
ingressController:
  image:
    repository: bastibast/kong-ingress-arm64
    tag: "0.0.1"
proxy:
  annotations:
    metallb.universe.tf/address-pool: main

I built the image 10 days ago with buildx from the kong-ingress Dockerfile, so nothing has changed there.
I followed the guide from the docs, but once everything is set up, Grafana displays no data, and when checking the targets in Prometheus's UI, the Kong pod is the only one with an error: "Get "http://10.244.1.108:9542/metrics": dial tcp 10.244.1.108:9542: connect: connection refused"

kubectl describe pod kong:

Labels:       app.kubernetes.io/component=app
              app.kubernetes.io/instance=kong
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=kong
              app.kubernetes.io/version=2.2
              helm.sh/chart=kong-1.14.3
              pod-template-hash=78f587b54b
Annotations:  kubernetes.io/psp: privileged
              prometheus.io/port: 9542
              prometheus.io/scrape: true
Status:       Running
IP:           10.244.1.108

Could you help me narrow down where the problem might be coming from, please?

EDIT: I also checked the ingress-controller logs; nothing relevant there either.

The latest version of the chart (1.14) moved the /metrics endpoint onto Kong's standard status listener. The chart's built-in ServiceMonitor should handle this change automatically, but it looks like you've added your scrape configuration manually via pod annotations.
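If you would rather let the chart and the Prometheus Operator manage scraping instead of pod annotations, enabling the chart's ServiceMonitor in values.yaml looks roughly like this (a sketch from memory of the chart's values; double-check the keys against the chart's own values.yaml, and the label must match your Prometheus Operator's serviceMonitorSelector):

serviceMonitor:
  enabled: true
  # labels:
  #   release: prometheus   # example selector label; adjust to your Prometheus install

This route only works if the ServiceMonitor CRD from the Prometheus Operator is installed in the cluster.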

The default status listener port is 8100; changing your scrape annotation to that port should fix the issue.
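For example, keeping your annotation-based setup, only the port in your values changes (8100 being the chart's default status listener port):

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8100"   # Kong status listener, which now serves /metrics

To confirm the endpoint is actually listening before Prometheus re-scrapes, you can port-forward to the pod and curl it (replace <kong-pod> with your pod name):

kubectl -n kong port-forward <kong-pod> 8100:8100
curl -s http://localhost:8100/metrics | head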


That fixed the issue indeed! I had set the scrape port following the Prometheus plugin documentation :slight_smile: Thank you very much!