Monitoring Kong with Prometheus and Grafana in EKS

Hi all,

I'm using the Kong Ingress Controller on EKS and have three instances (dev, test, sandbox) in different namespaces. Since I want to monitor Kong and the services behind it, I deployed Prometheus and Grafana on EKS.

I followed the steps in the guide below exactly:
https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/guides/prometheus-grafana.md

I've added the ingress-class-specific annotation to the KongPlugin for every ingress class, as below:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: "kong-dev"
  labels:
    global: "true"
plugin: prometheus

Even though Prometheus is able to scrape metrics from the other pods in the cluster, it cannot scrape the Kong instances. There is also no data in the dashboard.

Any idea how to deal with it?

I'm also confused after reading the document and the issue referenced below.

As described in the document below, there are three ways to access metrics from Kong: one for the Enterprise version, and the other two via the Admin API or a custom nginx template.
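(As I understand it, the Admin API option boils down to scraping its /metrics endpoint; a minimal sketch, assuming the default Admin API port 8001:)

# Metrics exposed via the Kong Admin API (8001 is the default Admin API port; adjust to your deployment)
curl -s http://localhost:8001/metrics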

However, in the issue below, @hbagdi, you mentioned: 'You only need to scrape the port 9542 of Kong deployment to get Prometheus metrics. You do not need to change any configuration in the nginx template for this purpose.'

Do you have any suggestions in this regard?

Thanks in advance

Do you have traffic flowing through Kong?
Only then will you see metrics in the dashboard.

Yes, I do. I generate traffic with a while true; do curl ...; done loop and can see the request logs in the proxy container.
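In case it matters, the loop looks roughly like this (the proxy host and path are placeholders for my setup):

# Generate continuous test traffic through the Kong proxy (host and route are placeholders)
while true; do curl -s -o /dev/null http://<kong-proxy-host>/<some-route>; sleep 1; done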

Make sure the Prometheus installation you have supports scraping pods based on their annotations.
One way to verify that Prometheus is actually scraping the pods is to check the "Targets" page in the Prometheus UI, which lists all the targets Prometheus is currently scraping.
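You can also check the annotations on the Kong pods directly; annotation-based scrape configs typically look for something like prometheus.io/scrape and prometheus.io/port (the namespace and pod name below are placeholders):

# Check the annotations on a Kong proxy pod (adjust namespace and pod name to your install)
kubectl describe pod -n kong <kong-pod-name> | grep -A3 Annotations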

Yes, I have all the pods listed in the Prometheus UI Targets section.
Let me ask it another way: do I have to configure a custom nginx template if I want to use a custom port such as 9542, as described in the Prometheus plugin page above?

Nope. The way you installed Kong, a ConfigMap is mounted into Kong's container as a volume, and there should be a KONG_NGINX_INCLUDE environment variable that takes care of the server block listening on port 9542.
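If you want to double-check, listing the environment variables inside the proxy container should show it (the deployment name and namespace below are placeholders):

# Confirm the nginx include wiring inside the proxy container (names are placeholders)
kubectl exec -n kong deploy/kong-dev-kong -c proxy -- env | grep KONG_NGINX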

Let’s drill deeper.
What do you get if you make an HTTP GET /metrics request on port 9542 in Kong's container?
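Something along these lines should work (the deployment name and namespace are placeholders; if curl is not in the image, wget -qO- does the same):

# Request the Prometheus endpoint directly from inside the Kong proxy container
kubectl exec -n kong deploy/kong-dev-kong -c proxy -- curl -s http://localhost:9542/metrics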

I'm just trying to get the metrics into Prometheus, but I haven't succeeded yet.

What happens when you perform the request I described in my previous comment?
Do you get an error? If so, what?

When I GET /metrics on port 9542 inside the proxy container, I get the metrics below:

# HELP kong_datastore_reachable Datastore reachable from Kong, 0 is unreachable
# TYPE kong_datastore_reachable gauge
kong_datastore_reachable 1
# HELP kong_memory_lua_shared_dict_bytes Allocated slabs in bytes in a shared_dict
# TYPE kong_memory_lua_shared_dict_bytes gauge
kong_memory_lua_shared_dict_bytes{shared_dict="kong"} 40960
kong_memory_lua_shared_dict_bytes{shared_dict="kong_cluster_events"} 40960
kong_memory_lua_shared_dict_bytes{shared_dict="kong_core_db_cache"} 798720
kong_memory_lua_shared_dict_bytes{shared_dict="kong_core_db_cache_2"} 811008
kong_memory_lua_shared_dict_bytes{shared_dict="kong_core_db_cache_miss"} 86016
kong_memory_lua_shared_dict_bytes{shared_dict="kong_core_db_cache_miss_2"} 90112
kong_memory_lua_shared_dict_bytes{shared_dict="kong_db_cache"} 794624
kong_memory_lua_shared_dict_bytes{shared_dict="kong_db_cache_2"} 794624
kong_memory_lua_shared_dict_bytes{shared_dict="kong_db_cache_miss"} 86016
kong_memory_lua_shared_dict_bytes{shared_dict="kong_db_cache_miss_2"} 86016
kong_memory_lua_shared_dict_bytes{shared_dict="kong_healthchecks"} 45056
kong_memory_lua_shared_dict_bytes{shared_dict="kong_locks"} 61440
kong_memory_lua_shared_dict_bytes{shared_dict="kong_process_events"} 45056
kong_memory_lua_shared_dict_bytes{shared_dict="kong_rate_limiting_counters"} 86016
kong_memory_lua_shared_dict_bytes{shared_dict="prometheus_metrics"} 49152
# HELP kong_memory_lua_shared_dict_total_bytes Total capacity in bytes of a shared_dict
# TYPE kong_memory_lua_shared_dict_total_bytes gauge
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong"} 5242880
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_cluster_events"} 5242880
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_core_db_cache"} 134217728
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_core_db_cache_2"} 134217728
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_core_db_cache_miss"} 12582912
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_core_db_cache_miss_2"} 12582912
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_db_cache"} 134217728
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_db_cache_2"} 134217728
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_db_cache_miss"} 12582912
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_db_cache_miss_2"} 12582912
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_healthchecks"} 5242880
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_locks"} 8388608
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_process_events"} 5242880
kong_memory_lua_shared_dict_total_bytes{shared_dict="kong_rate_limiting_counters"} 12582912
kong_memory_lua_shared_dict_total_bytes{shared_dict="prometheus_metrics"} 5242880
# HELP kong_memory_workers_lua_vms_bytes Allocated bytes in worker Lua VM
# TYPE kong_memory_workers_lua_vms_bytes gauge
kong_memory_workers_lua_vms_bytes{pid="23"} 73140
# HELP kong_nginx_http_current_connections Number of HTTP connections
# TYPE kong_nginx_http_current_connections gauge
kong_nginx_http_current_connections{state="accepted"} 71988
kong_nginx_http_current_connections{state="active"} 2
kong_nginx_http_current_connections{state="handled"} 71988
kong_nginx_http_current_connections{state="reading"} 0
kong_nginx_http_current_connections{state="total"} 31241
kong_nginx_http_current_connections{state="waiting"} 1
kong_nginx_http_current_connections{state="writing"} 1
# HELP kong_nginx_metric_errors_total Number of nginx-lua-prometheus errors
# TYPE kong_nginx_metric_errors_total counter
kong_nginx_metric_errors_total 0

And to be sure,

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: "kong-dev"
  labels:
    global: "true"
plugin: prometheus

is this the right way to define a KongPlugin for multiple Kong ingress controllers, i.e. by setting a kong-x ingress class per instance?

The thing is, I was able to monitor Kong while there was only one instance. After creating three instances in the dev, test, and sandbox namespaces, setting the ingress classes explicitly per environment (e.g. kong-dev), and adding the kubernetes.io/ingress.class: "kong-dev" annotation to the KongPlugin, the dashboard stopped showing results.

Do you see any errors in the logs of any of the ingress controllers?

The only error that I have seen so far is below:

kong-dev-kong-545cd86985-6hqbd proxy error "/kong_prefix/html/index.html" is not found (2: No such file or directory), client: 127.0.0.1, server: kong_prometheus_exporter, request: "GET / HTTP/1.1", host: "localhost:9542"

Hi,

The post can be marked as solved. The problem was the indentation of the kubernetes.io/ingress.class: "kong-dev" annotation in the KongPlugin YAML. After fixing the indentation, Prometheus started to scrape the metrics.

Using the admission webhook might be a good idea.
Thank you for your efforts.