Kong metric not visible in Prometheus

I have Kong (kong-ingress-controller) installed on AWS EKS. To access it, I have exposed the Admin API and the Kong proxy. I am using the Admin API with Konga to manage the services, and Prometheus is set up for metrics. I can see metric output when hitting https://ADMIN_API/metrics:

# HELP kong_bandwidth Total bandwidth in bytes consumed per service/route in Kong
# TYPE kong_bandwidth counter
kong_bandwidth{service="RegionGroup",route="rg-route",type="egress"} 23670
kong_bandwidth{service="RegionGroup",route="rg-route",type="ingress"} 5050
kong_bandwidth{service="stacks-api",route="stack-route",type="egress"} 163600
kong_bandwidth{service="stacks-api",route="stack-route",type="ingress"} 963
# HELP kong_datastore_reachable Datastore reachable from Kong, 0 is unreachable
# TYPE kong_datastore_reachable gauge
kong_datastore_reachable 1
# HELP kong_http_status HTTP status codes per service/route in Kong
# TYPE kong_http_status counter
kong_http_status{service="RegionGroup",route="rg-route",code="200"} 29
kong_http_status{service="RegionGroup",route="rg-route",code="429"} 24
kong_http_status{service="RegionGroup",route="rg-route",code="499"} 1
kong_http_status{service="RegionGroup",route="rg-route",code="503"} 7
kong_http_status{service="stacks-api",route="stack-route",code="200"} 23
# HELP kong_latency Latency added by Kong, total request time and upstream latency for each service/route in Kong
# TYPE kong_latency histogram
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00002.0"} 16
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00005.0"} 48
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00007.0"} 49
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00010.0"} 50
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00015.0"} 51
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00020.0"} 51
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00025.0"} 51
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00030.0"} 51
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00040.0"} 54
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00050.0"} 54
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00060.0"} 58
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00070.0"} 60
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00080.0"} 61
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00090.0"} 61
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00100.0"} 61
kong_latency_bucket{service="RegionGroup",route="rg-route",type="kong",le="00200.0"} 61
.
.
.

Referring to Integrate the Kubernetes Ingress Controller with Prometheus/Grafana - v1.3.x | Kong Docs, I am able to get my EKS (kube) related metrics, but not Kong-related metrics like kong_latency_bucket.

So I need help getting the Kong metrics to show up in Prometheus.

Also, this is what I see when browsing the metrics in Prometheus:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.2841e-05
go_gc_duration_seconds{quantile="0.25"} 1.6964e-05
go_gc_duration_seconds{quantile="0.5"} 2.1339e-05
go_gc_duration_seconds{quantile="0.75"} 3.8251e-05
go_gc_duration_seconds{quantile="1"} 0.002768192
go_gc_duration_seconds_sum 0.138215184
go_gc_duration_seconds_count 2863
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 151
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.13.8"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.5201356e+08
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 5.0525443952e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 2.10856e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 4.47708026e+08
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0.000647302343964197
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.3549952e+07
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 2.5201356e+08
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 3.36912384e+08
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 2.65527296e+08
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 2.366333e+06
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 3.149824e+08
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.0243968e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.630053027706373e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 4.50074359e+08
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 3472
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 4.169216e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 5.046272e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.77164336e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.608808e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.540096e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.540096e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 6.36309752e+08
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 11
# HELP net_conntrack_dialer_conn_attempted_total Total number of connections attempted by the given dialer a given name.
# TYPE net_conntrack_dialer_conn_attempted_total counter
net_conntrack_dialer_conn_attempted_total{dialer_name="alertmanager"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="default"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="kubernetes-apiservers"} 2
net_conntrack_dialer_conn_attempted_total{dialer_name="kubernetes-nodes"} 2
net_conntrack_dialer_conn_attempted_total{dialer_name="kubernetes-nodes-cadvisor"} 1
net_conntrack_dialer_conn_attempted_total{dialer_name="kubernetes-pods"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="kubernetes-pods-slow"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="kubernetes-service-endpoints"} 5
net_conntrack_dialer_conn_attempted_total{dialer_name="kubernetes-service-endpoints-slow"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="kubernetes-services"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="prometheus"} 1
net_conntrack_dialer_conn_attempted_total{dialer_name="prometheus-pushgateway"} 1
# HELP net_conntrack_dialer_conn_closed_total Total number of connections closed which originated from the dialer of a given name.
# TYPE net_conntrack_dialer_conn_closed_total counter
net_conntrack_dialer_conn_closed_total{dialer_name="alertmanager"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="default"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="kubernetes-apiservers"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="kubernetes-nodes"} 1
net_conntrack_dialer_conn_closed_total{dialer_name="kubernetes-nodes-cadvisor"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="kubernetes-pods"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="kubernetes-pods-slow"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="kubernetes-service-endpoints"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="kubernetes-service-endpoints-slow"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="kubernetes-services"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="prometheus"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="prometheus-pushgateway"} 0
# HELP net_conntrack_dialer_conn_established_total Total number of connections successfully established by the given dialer a given name.
# TYPE net_conntrack_dialer_conn_established_total counter
net_conntrack_dialer_conn_established_total{dialer_name="alertmanager"} 0
net_conntrack_dialer_conn_established_total{dialer_name="default"} 0
net_conntrack_dialer_conn_established_total{dialer_name="kubernetes-apiservers"} 2
net_conntrack_dialer_conn_established_total{dialer_name="kubernetes-nodes"} 2
net_conntrack_dialer_conn_established_total{dialer_name="kubernetes-nodes-cadvisor"} 1
net_conntrack_dialer_conn_established_total{dialer_name="kubernetes-pods"} 0
net_conntrack_dialer_conn_established_total{dialer_name="kubernetes-pods-slow"} 0
net_conntrack_dialer_conn_established_total{dialer_name="kubernetes-service-endpoints"} 5
net_conntrack_dialer_conn_established_total{dialer_name="kubernetes-service-endpoints-slow"} 0
net_conntrack_dialer_conn_established_total{dialer_name="kubernetes-services"} 0
net_conntrack_dialer_conn_established_total{dialer_name="prometheus"} 1
net_conntrack_dialer_conn_established_total{dialer_name="prometheus-pushgateway"} 1
# HELP net_conntrack_dialer_conn_failed_total Total number of connections failed to dial by the dialer a given name.
# TYPE net_conntrack_dialer_conn_failed_total counter
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-apiservers",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-apiservers",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-apiservers",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-apiservers",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-nodes",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-nodes",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-nodes",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-nodes",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-nodes-cadvisor",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-nodes-cadvisor",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-nodes-cadvisor",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-nodes-cadvisor",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-pods",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-pods",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-pods",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-pods",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kubernetes-pods-slow",reason="refused"} 0
.
.
.

I think the installation guide does not use the right port for Prometheus to scrape Kong's metrics.

The status_listen port is 8100.
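
For reference, status_listen is Kong's Status API listener, which also serves /metrics when the prometheus plugin is enabled. Below is a minimal sketch of how it is typically set on the Kong proxy container (the env var name follows Kong's KONG_* convention; the image tag is only illustrative):

containers:
  - name: proxy
    image: kong:2.5
    env:
      # status_listen exposes the Status API; with the prometheus plugin
      # enabled it serves /metrics on this port
      - name: KONG_STATUS_LISTEN
        value: "0.0.0.0:8100"
    ports:
      - name: status
        containerPort: 8100
        protocol: TCP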

You can try installing Kong with the command below and see if it works for you:

helm install my-kong kong/kong -n kong \
    --set ingressController.installCRDs=false \
    --set podAnnotations."prometheus\.io/scrape"=true \
    --set podAnnotations."prometheus\.io/port"=8100 \
    --create-namespace
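
For context, these pod annotations only take effect if your Prometheus has an annotation-based scrape job; the dialer_name="kubernetes-pods" entries in your metrics above suggest yours does. A rough sketch of what such a job looks like, with the relabel rules assumed from the common community examples rather than taken from your setup:

- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # only scrape pods that opt in via prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # honour prometheus.io/port to pick the scrape port (8100 here)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: '([^:]+)(?::\d+)?;(\d+)'
      replacement: '$1:$2'
      target_label: __address__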

@fomm Thanks for your reply.
But I am using this template: kubernetes-ingress-controller/all-in-one-postgres.yaml at main · Kong/kubernetes-ingress-controller · GitHub (applied with kubectl apply), and I am following this link for the Prometheus setup: Integrate the Kubernetes Ingress Controller with Prometheus/Grafana - v1.3.x | Kong Docs.

Here I am a bit confused about where to put the Prometheus podAnnotation values.

@fomm I updated kubernetes-ingress-controller/all-in-one-postgres.yaml at main · Kong/kubernetes-ingress-controller · GitHub, updating the kong-ingress metadata annotations to:

      annotations:
        kuma.io/gateway: enabled
        prometheus.io/port: "8100"
        prometheus.io/scrape: "true"
        traffic.sidecar.istio.io/includeInboundPorts: ""

Now it's working!!!
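
For anyone else wondering where exactly these go (the podAnnotation placement question above): in the all-in-one manifest they live under the pod template of the Kong proxy Deployment. A minimal sketch, with the Deployment and namespace names assumed from the all-in-one manifests, so verify them against your copy:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-kong
  namespace: kong
spec:
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
        # opt this pod in to annotation-based Prometheus scraping
        prometheus.io/scrape: "true"
        # scrape Kong's status listener, which serves /metrics
        prometheus.io/port: "8100"
        traffic.sidecar.istio.io/includeInboundPorts: ""
    spec:
      containers:
        - name: proxy
          # ... rest of the Kong container spec unchanged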

That’s good news.

It seems that the YAML file in the single-v2 folder updates the CRD definitions and uses the KIC 2.0 beta version.

You can also use the YAML file in the single folder, which already includes the annotation.


@fomm Thanks for guiding me in the right direction.

I was using single-v2 because of the API version it uses: apiVersion: apiextensions.k8s.io/v1.