Kong log error about certificate

Hi,

I'm seeing this Kong error log in the proxy container: handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.244, server: 0.0.0.0:8443

I don't understand what is causing this log.
A few hours earlier, Kong was serving me the default SSL cert, and then, without my changing anything, all my endpoints started responding correctly over HTTPS.
I haven't been able to reproduce this morning's issue, but the error line keeps appearing in my logs.
An extract of my logs:

10.126.0.28 - - [25/May/2020:09:59:13 +0000] "GET /users/jwt HTTP/2.0" 200 133 "https://orpheus.barney.hvbrt.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
2020/05/25 09:59:13 [error] 27#0: *2632859 [kong] handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.28, server: 0.0.0.0:8443
10.126.0.251 - - [25/May/2020:09:59:51 +0000] "POST / HTTP/1.1" 404 19 "-" "SendGrid Event API"
2020/05/25 09:59:51 [error] 27#0: *2633178 [kong] handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.251, server: 0.0.0.0:8443
10.126.0.244 - - [25/May/2020:10:00:06 +0000] "POST / HTTP/1.1" 404 19 "-" "SendGrid Event API"
2020/05/25 10:00:06 [error] 27#0: *2633305 [kong] handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.244, server: 0.0.0.0:8443
10.126.0.24 - - [25/May/2020:10:00:12 +0000] "POST / HTTP/1.1" 404 19 "-" "SendGrid Event API"
2020/05/25 10:00:12 [error] 27#0: *2633359 [kong] handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.24, server: 0.0.0.0:8443
10.126.0.24 - - [25/May/2020:10:00:12 +0000] "POST / HTTP/1.1" 404 19 "-" "SendGrid Event API"
2020/05/25 10:00:12 [error] 27#0: *2633369 [kong] handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.24, server: 0.0.0.0:8443
10.126.0.240 - - [25/May/2020:10:00:13 +0000] "GET /?token=... HTTP/1.1" 101 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
2020/05/25 10:00:13 [error] 27#0: *2633371 [kong] handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.240, server: 0.0.0.0:8443
10.126.0.28 - - [25/May/2020:10:00:13 +0000] "GET /users/jwt HTTP/2.0" 200 133 "https://orpheus.barney.hvbrt.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
2020/05/25 10:00:13 [error] 27#0: *2633374 [kong] handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.28, server: 0.0.0.0:8443
10.126.0.248 - - [25/May/2020:10:00:18 +0000] "POST / HTTP/1.1" 404 19 "-" "SendGrid Event API"
2020/05/25 10:00:18 [error] 27#0: *2633419 [kong] handler.lua:27 reporter flush failed to request: 20: unable to get local issuer certificate, context: ngx.timer, client: 10.126.0.248, server: 0.0.0.0:8443

I'm running the Kong Ingress Controller in DB-less mode with this config:

Name:           kong-kong-759876958-qxk94
Namespace:      default
Priority:       0
Node:           gke-barney-barney-general-purpose-6b9675f4-q8l4/10.126.0.24
Start Time:     Wed, 20 May 2020 17:12:03 +0200
Labels:         app.kubernetes.io/component=app
                app.kubernetes.io/instance=kong
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=kong
                app.kubernetes.io/version=2
                helm.sh/chart=kong-1.5.0
                pod-template-hash=759876958
Annotations:    cni.projectcalico.org/podIP: 10.140.4.103/32
                kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container ingress-controller
Status:         Running
IP:             10.140.4.103
IPs:            <none>
Controlled By:  ReplicaSet/kong-kong-759876958
Containers:
  ingress-controller:
    Container ID:  docker://dd92b8cf1587fb1ad7866cf490090fcde0e35bbc3916662b668ef21f702c07db
    Image:         kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.8.1
    Image ID:      docker-pullable://kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller@sha256:c7ccb6600d166a4986a30decf8a2db06161240b77e745de25ec8ce459a6570fa
    Port:          <none>
    Host Port:     <none>
    Args:
      /kong-ingress-controller
    State:          Running
      Started:      Wed, 20 May 2020 17:12:16 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 20 May 2020 17:12:05 +0200
      Finished:     Wed, 20 May 2020 17:12:05 +0200
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:      100m
    Liveness:   http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:                               kong-kong-759876958-qxk94 (v1:metadata.name)
      POD_NAMESPACE:                          default (v1:metadata.namespace)
      CONTROLLER_ELECTION_ID:                 kong-ingress-controller-leader-kong
      CONTROLLER_INGRESS_CLASS:               kong
      CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY:  true
      CONTROLLER_KONG_URL:                    https://localhost:8444
      CONTROLLER_PUBLISH_SERVICE:             default/kong-kong-proxy
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-kong-token-m46dn (ro)
  proxy:
    Container ID:   docker://c400e17164a4ae866db00a498e3b3f6ce6f306ee5c3250e304f801906f29c4a9
    Image:          kong:2.0.4
    Image ID:       docker-pullable://kong@sha256:32a09516a4fad6a7d42a90f7f754970555027a73e349b980a72c7120e00488b4
    Ports:          8444/TCP, 8000/TCP, 8443/TCP, 8100/TCP, 9542/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Wed, 20 May 2020 17:12:04 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     150m
      memory:  150Mi
    Requests:
      cpu:      150m
      memory:   150Mi
    Liveness:   http-get http://:metrics/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:metrics/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            0.0.0.0:8444 http2 ssl
      KONG_DATABASE:                off
      KONG_HEADERS:                 off
      KONG_LUA_PACKAGE_PATH:        /home/kong/maxmind-geoip2/?.lua;/opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_DAEMON:            off
      KONG_NGINX_HTTP_INCLUDE:      /kong/servers.conf
      KONG_NGINX_WORKER_PROCESSES:  1
      KONG_PLUGINS:                 bundled
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
      KONG_REAL_IP_HEADER:          proxy_protocol
      KONG_REAL_IP_RECURSIVE:       on
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_TRUSTED_IPS:             0.0.0.0/0,::/0
      KONG_VERSION:                 2.0.4
    Mounts:
      /kong from custom-nginx-template-volume (rw)
      /kong_prefix/ from kong-kong-prefix-dir (rw)
      /tmp from kong-kong-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kong-kong-token-m46dn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kong-kong-prefix-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kong-kong-tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  custom-nginx-template-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kong-kong-default-custom-server-blocks
    Optional:  false
  kong-kong-token-m46dn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kong-kong-token-m46dn
    Optional:    false

I had the issue again, without touching anything, while I was trying to understand it.
This time the certificate served was Kong's default one and the routing was broken: I got the default response Kong returns when nothing matches a route.

After deleting the pods, everything works again, so it seems the Kong ingress sometimes loses its state.

I can’t seem to find that error log line in Kong. Are you using a custom plugin with Kong?

I found a test that contains part of the message: https://github.com/Kong/kong/blob/1fe4b0beaca859c5bc37fd8e7be6460668087744/spec/02-integration/05-proxy/06-ssl_spec.lua#L238

I fixed my issue of the workers losing their configuration by increasing CPU and memory.
But the error log is still present.

@traines Any ideas? …

tl;dr

openssl version -d | awk '{print "lua_ssl_trusted_certificate = " substr($2,2,length($2)-2) "/cert.pem\n" "lua_ssl_verify_depth = 2" }' >> /tmp/kong.conf
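
On a system where the OPENSSLDIR is /etc/ssl (see below), that command appends:

lua_ssl_trusted_certificate = /etc/ssl/cert.pem
lua_ssl_verify_depth = 2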

You can’t exactly do that in Dockerized environments, so for the images we provide, set these environment variables:

CentOS:

KONG_LUA_SSL_TRUSTED_CERTIFICATE=/etc/pki/tls/cert.pem
KONG_LUA_SSL_VERIFY_DEPTH=2

Alpine:

KONG_LUA_SSL_TRUSTED_CERTIFICATE=/etc/ssl/cert.pem
KONG_LUA_SSL_VERIFY_DEPTH=2
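
Since you're deploying with the Kong Helm chart (your pod labels suggest a release named kong, chart kong-1.5.0), one way to set these is through the chart's env map, which it turns into KONG_*-prefixed variables on the proxy container. A sketch, assuming the chart repo is aliased kong and using the Alpine path (swap in the CentOS one if your image is CentOS-based):

helm upgrade kong kong/kong --reuse-values \
  --set env.lua_ssl_trusted_certificate=/etc/ssl/cert.pem \
  --set env.lua_ssl_verify_depth=2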

20: unable to get local issuer certificate is yet another poorly-phrased OpenSSL error (search for “X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY” in https://www.openssl.org/docs/man1.0.2/man1/verify.html) that indicates it wasn’t able to follow the certificate chain to some trusted root (these are stored on the local filesystem, but it’s not like there’s an option to get them from somewhere remotely).

The error in question is coming from https://github.com/Kong/kong-plugin-zipkin/blob/v1.1.0/kong/plugins/zipkin/reporter.lua#L114 and indicates that the Zipkin plugin wasn’t able to verify the certificate presented by your Zipkin collector, though that’s not really the plugin’s fault. lua_ssl_trusted_certificate isn’t set by default (and lua_ssl_verify_depth should probably default to at least 2), so as far as OpenSSL is concerned, there are no trusted roots and it can’t possibly verify a trust chain.
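
To confirm it's purely a trust-store problem rather than anything plugin-specific, you can reproduce the verification outside Kong against the same bundle you intend to point lua_ssl_trusted_certificate at (zipkin.example.com:443 is a placeholder for your collector's address):

openssl s_client -connect zipkin.example.com:443 -CAfile /etc/ssl/cert.pem </dev/null 2>/dev/null | grep 'Verify return code'

A result of Verify return code: 0 (ok) means that bundle is enough to validate the collector's chain.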

That isn’t set by default because, well, there is no default, because OpenSSL desires to make our lives difficult. There is, however, an OpenSSL directory location that’s compiled into the openssl binary your OS provides:

openssl version -d       
OPENSSLDIR: "/etc/ssl"

That doesn’t give us the complete picture, since we need a file, not a directory. There is a default trust bundle location within that directory, so that’s what we should use: on a system where the OPENSSLDIR is /etc/ssl, we should set lua_ssl_trusted_certificate = /etc/ssl/cert.pem.
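
As a sanity check inside the proxy container, you can confirm the bundle actually exists at the compiled-in location (same quote-stripping trick as the tl;dr above):

ls -l "$(openssl version -d | awk '{print substr($2,2,length($2)-2)}')/cert.pem"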

The trust chain must also go all the way to a trusted root, and most certificates are issued by intermediates rather than roots. lua_ssl_verify_depth controls how many intermediates OpenSSL will actually follow to protect against an (unlikely) DoS using an infinite cert chain. The actual math OpenSSL uses isn’t quite intuitive since it starts at 0 and doesn’t count the trusted root, so depth is effectively the number of intermediates. A single intermediate before the root is most common, but two intermediates isn’t uncommon. It’s probably quite safe to set this much higher, but depth 2 suffices for the vast majority of environments.
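
If you want to see how deep your collector's chain actually is, s_client prints a depth=N line for each certificate it walks, from the root down to the leaf at depth=0 (again, zipkin.example.com is a placeholder):

openssl s_client -connect zipkin.example.com:443 -CAfile /etc/ssl/cert.pem </dev/null 2>&1 | grep '^depth='

Each intermediate shows up as one line between the leaf and the root, which tells you what lua_ssl_verify_depth needs to cover.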
