Kong sends /metrics requests to itself with wrong protocol and logs >=4xx errors

Hi,

This is strange and I'm not sure how to fix it…

If the status listener is enabled, /metrics is available on it, according to this documentation:

We deploy using Helm, and the status listener is enabled by default (it can't be exposed through a Service or Ingress, as described in the default values file).
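For context, with the chart defaults the status listener ends up on port 8100; in the rendered pod spec it shows up roughly like this (illustrative excerpt only, values taken from the chart defaults, worth double-checking against the rendered manifest):

```yaml
# Illustrative excerpt of the Kong container env as rendered with the chart defaults;
# KONG_STATUS_LISTEN maps to Kong's status_listen setting.
env:
  - name: KONG_STATUS_LISTEN
    value: "0.0.0.0:8100"
```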

Our Prometheus does not get any metrics unless we specify prometheus.io/scrape: "true" as a pod annotation in values.yaml. This is not a problem.

The problem is that after adding prometheus.io/scrape: "true", Kong starts sending /metrics requests to itself, but with plain HTTP on the HTTPS port (8443).

x.x.18.5 is the IP of the Kong pod:

[info] 1109#0: *13786 client sent plain HTTP request to HTTPS port while reading client request headers, client: **x.x.18.5**, server: kong, request: "GET /metrics HTTP/1.1", host: "x.x.18.5:8443"

[warn] 1109#0: *13786 using uninitialized "kong_proxy_mode" variable while logging request, client: x.x.18.5, server: kong, request: "GET /metrics HTTP/1.1", host: "x.x.18.5:8443"

[warn] 1109#0: *13786 [lua] reports.lua:83: log(): [reports] could not determine log suffix (scheme=http, proxy_mode=) while logging request, client: x.x.18.5, server: kong, request: "GET /metrics HTTP/1.1", host: "x.x.18.5:8443"

```json
{
  "remote_addr": "x.x.18.5",
  "status": "400",
  "body_bytes_sent": "220",
  "bytes_sent": "365",
  "http_referrer": "",
  "http_user_agent": "Prometheus/2.17.1",
  "request_uri": "/metrics",
  "request_method": "GET",
  "request_length": "329",
  "request_time": "0.000",
  "connection_requests": "1",
  "request_trace": ""
}
```

The main issue is that this behavior adds noise to our monitoring, and we want to get rid of it: it logs >=4xx errors.

Adding prometheus.io/port: 8100 did not help.
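For reference, this is roughly how we set the annotations in values.yaml (a sketch only; podAnnotations is the key from the chart's default values, and annotation values have to be quoted strings):

```yaml
# Sketch of the relevant part of our values.yaml (key name from the kong chart defaults).
# Annotation values must be quoted strings.
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8100"
```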

On the Prometheus side I see this:

level=debug caller=scrape.go:962 component="scrape manager" scrape_pool=kubernetes-pods target=http://x.x.18.5:4143/metrics msg="Scrape failed" err="server returned HTTP status 500 Internal Server Error"

level=debug caller=scrape.go:962 component="scrape manager" scrape_pool=kubernetes-pods target=http://x.x.18.5:8000/metrics msg="Scrape failed" err="server returned HTTP status 404 Not Found"

level=debug caller=scrape.go:962 component="scrape manager" scrape_pool=kubernetes-pods target=http://x.x.18.5:8443/metrics msg="Scrape failed" err="server returned HTTP status 400 Bad Request"

In Kong's pod, there is an init container configured with port 4143:

```yaml
- args:
    - --incoming-proxy-port
    - "4143"
```

This is our Prometheus “kubernetes-pods” job (the default):

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: (.+)(?::\d+);(\d+)
      replacement: $1:$2
```
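For comparison, the upstream kubernetes-pods example config rewrites the scrape port from the annotation meta label (__meta_kubernetes_pod_annotation_prometheus_io_port), while our last rule above reads __meta_kubernetes_pod_prometheus_io_port. Pasting the upstream version (from memory) here in case that difference matters:

```yaml
# Port-rewrite rule as in the upstream kubernetes-pods example (for comparison only).
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
  action: replace
  regex: ([^:]+)(?::\d+)?;(\d+)
  replacement: $1:$2
  target_label: __address__
```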

Any suggestions?

Thanks

Do you have the Prometheus operator? The chart can deploy a ServiceMonitor config that we know works.
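If you do, it should just be a values.yaml flag, something like this (field name from memory, double-check against the chart's default values):

```yaml
# Enables the ServiceMonitor shipped with the chart (requires the Prometheus operator CRDs).
serviceMonitor:
  enabled: true
```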

Not really sure why it’s not honoring the old port annotation. I suppose worst case you could add a proxy route for /metrics and a localhost:8100 Service (using an ExternalName Service) to let Prometheus reach the metrics endpoint that way?
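A rough, untested sketch of that workaround, assuming the status listener is plain HTTP on 8100 and the "kong" ingress class:

```yaml
# Rough, untested sketch: an ExternalName Service pointing at localhost so a proxy
# route can forward /metrics to the status listener on the same pod.
apiVersion: v1
kind: Service
metadata:
  name: kong-status-metrics
spec:
  type: ExternalName
  externalName: localhost
  ports:
    - port: 8100
      protocol: TCP
---
# And an Ingress that routes /metrics through the Kong proxy to that Service
# (ingress class annotation assumed to be "kong").
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-status-metrics
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
    - http:
        paths:
          - path: /metrics
            pathType: Prefix
            backend:
              service:
                name: kong-status-metrics
                port:
                  number: 8100
```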