Kong Helm Chart and External Postgres database

Can you provide logs from the init-migrations pod?

Offhand, the Postgres sub-chart sets up the user and database automatically; the migration job alone will not. Double-check that you’ve created your user and database, and granted your user permissions on it.
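For reference, the setup on the Postgres side would look something like this (a sketch only; the names and password are placeholders, run as a Postgres superuser):

```sql
-- Hypothetical example: create the user and database Kong will use.
CREATE USER kong WITH PASSWORD 'changeme';
CREATE DATABASE kong OWNER kong;
GRANT ALL PRIVILEGES ON DATABASE kong TO kong;
```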

Honestly, there is not much of any logging, and that has been the challenge. It is like shooting in the dark. This is all I see:

Error from server (BadRequest): container "kong-migrations" in pod "kong-kong-init-migrations-m5grv" is waiting to start: PodInitializing

That is correct; a user and database are already created, and that user is the owner of the database Kong is supposed to use.

That job runs a basic init to confirm it can establish a connection:

kubectl logs JOB_POD -c wait-for-postgres should get you its logs. Also check kubectl describe pod JOB_POD for the initContainer’s command; it may not have rendered correctly, though based on the values.yaml it should have.

Here is what kubectl describe POD_NAME gives me; there seems to be no error there.

Name:           kong-kong-init-migrations-dbblz
Namespace:      gateway
Priority:       0
Node:           worker3/192.168.1.23
Start Time:     Mon, 20 Apr 2020 16:48:16 -0400
Labels:         app.kubernetes.io/component=init-migrations
                app.kubernetes.io/instance=kong
                app.kubernetes.io/managed-by=Tiller
                app.kubernetes.io/name=kong
                app.kubernetes.io/version=2
                controller-uid=f01223cc-362c-4308-b1e3-48333ffd83ee
                helm.sh/chart=kong-1.5.0
                job-name=kong-kong-init-migrations
Annotations:    kuma.io/sidecar-injection: disabled
                sidecar.istio.io/inject: false
Status:         Pending
IP:             10.233.116.162
IPs:            <none>
Controlled By:  Job/kong-kong-init-migrations
Init Containers:
  wait-for-postgres:
    Container ID:  docker://b1666a1e088b405239448b02861cd7c2d829a8840bdb1d691243ba2ada04e13d
    Image:         busybox:latest
    Image ID:      docker-pullable://busybox@sha256:89b54451a47954c0422d873d438509dae87d478f1cb5d67fb130072f67ca5d25
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      set -u; until nc -zv $KONG_PG_HOST $KONG_PG_PORT -w1; do echo "waiting for db - trying ${KONG_PG_HOST}:${KONG_PG_PORT}"; sleep 1; done
    State:          Running
      Started:      Mon, 20 Apr 2020 16:48:17 -0400
    Ready:          False
    Restart Count:  0
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            0.0.0.0:8001, 0.0.0.0:8444 http2 ssl
      KONG_DATABASE:                postgres
      KONG_LOG_LEVEL:               info
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_HTTP_INCLUDE:      /kong/servers.conf
      KONG_NGINX_WORKER_PROCESSES:  1
      KONG_PG_DATABASE:             kong
      KONG_PG_HOST:                 postgres-postgresql.storage.svc
      KONG_PG_PASSWORD:             ********
      KONG_PG_PORT:                 5432
      KONG_PG_SSL:                  off
      KONG_PG_SSL_VERIFY:           off
      KONG_PG_USER:                 kong
      KONG_PLUGINS:                 bundled,oidc
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_NGINX_DAEMON:            off
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9lkh (ro)
Containers:
  kong-migrations:
    Container ID:
    Image:         docker.pkg.github.com/bsakweson/dockerhub/kong:2.0.3-oidc
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      kong migrations bootstrap
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            0.0.0.0:8001, 0.0.0.0:8444 http2 ssl
      KONG_DATABASE:                postgres
      KONG_LOG_LEVEL:               info
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_HTTP_INCLUDE:      /kong/servers.conf
      KONG_NGINX_WORKER_PROCESSES:  1
      KONG_PG_DATABASE:             kong
      KONG_PG_HOST:                 postgres-postgresql.storage.svc
      KONG_PG_PASSWORD:             ********
      KONG_PG_PORT:                 5432
      KONG_PG_SSL:                  off
      KONG_PG_SSL_VERIFY:           off
      KONG_PG_USER:                 kong
      KONG_PLUGINS:                 bundled,oidc
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_NGINX_DAEMON:            off
    Mounts:
      /kong from custom-nginx-template-volume (rw)
      /kong_prefix/ from kong-kong-prefix-dir (rw)
      /tmp from kong-kong-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9lkh (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kong-kong-prefix-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kong-kong-tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  custom-nginx-template-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kong-kong-default-custom-server-blocks
    Optional:  false
  default-token-z9lkh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z9lkh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m3s  default-scheduler  Successfully assigned gateway/kong-kong-init-migrations-dbblz to worker3
  Normal  Pulled     3m2s  kubelet, worker3   Container image "busybox:latest" already present on machine
  Normal  Created    3m2s  kubelet, worker3   Created container wait-for-postgres
  Normal  Started    3m2s  kubelet, worker3   Started container wait-for-postgres

I cannot get logs for a container if the pod is not initialized properly.

ping @traines

I can honestly say something just does not look right here. For what it is worth, I am using version 1.5.0 of the Helm chart. I also noticed that the batch job command uses [ "/bin/sh", "-c", "kong migrations up" ], whereas the Docker documentation mandates [ "/bin/sh", "-c", "kong migrations bootstrap" ] except for Kong versions < 0.15. Is that an oversight?

Can someone please share with me a values.yaml file that works for chart version 1.5.0? This chart uses Kong version 2.0.3. @traines. Thanks in advance.

You can get logs for initContainers; it’s just that the logs command won’t select them automatically. The container has to be specified explicitly:

kubectl logs JOB_POD -c wait-for-postgres

Can you provide output from that?

I provided this a long time ago. See reference. @traines

ping @traines

Can you show the complete command you are running? Again, what you’ve provided looks like what Kubernetes returns if you attempt to retrieve logs from an initializing pod without specifying the container. The command should look like:

kubectl logs JOB_POD -c wait-for-postgres

The -c wait-for-postgres is critical; you will not get useful information without it.

kubectl logs kong-kong-init-migrations-q64kf -c wait-for-postgres

waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
(the same line repeats indefinitely)

Once more, can I have a sample external PostgreSQL configuration that works? It looks as if the database connection string is not constructed properly from the environment variables.

ping @traines


Can I suggest that instead of this:

# If you would like to use a database, there are two options:
# - (recommended) Deploy and maintain a database and pass the connection
#   details to Kong via the `env` section.
# - You can use the below `postgresql` sub-chart to deploy a database
#   along-with Kong as part of a single Helm release.

Can we show an example configuration for an external database? I think the actual use case for this chart will be using an already-configured Postgres/Cassandra database rather than the dependent postgres chart. I asked for a sample section of this configuration several days ago. I am almost at the point of writing my own chart, but that would be duplication. @hbagdi @traines
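For what it’s worth, based on the KONG_PG_* variables visible in the describe output above, an external database would presumably be wired up through the chart’s env section, roughly like this (a sketch only; the host, database, user, and secret names are placeholders):

```yaml
# Sketch: external Postgres via the chart's `env` section,
# with the bundled postgresql sub-chart disabled.
postgresql:
  enabled: false

env:
  database: postgres
  pg_host: postgres-postgresql.storage.svc   # placeholder host
  pg_port: 5432
  pg_database: kong
  pg_user: kong
  pg_password:
    valueFrom:
      secretKeyRef:
        name: kong-pg-secret   # hypothetical secret name
        key: password
```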


Does the service hostname and port not match what you’d expect?

waiting for db - trying postgres-postgresql.storage.svc:5432

That’s not a Postgres connection string, just the hostname and port. The init container is a very basic test to confirm that it can resolve the address and establish a TCP connection. You can mimic it by running a pod directly:

$ kubectl run -it --restart=Never --rm --image busybox:latest test           
If you don't see a command prompt, try pressing enter.                                         

/ # nc -zv -w1 example-postgresql.default.svc:5432
example.default.svc:5432 (10.19.251.158:5432) open
/ # nc -zv -w1 not-listening.default.svc:5432
nc: not-listening.default.svc:5432 (10.19.249.30:5432): Connection timed out
/ # nc -zv -w1 doesntexist.default.svc:5432
nc: bad address 'doesntexist.default.svc:5432'

Those show what you’ll get when the connection succeeds, when the connection fails, and when DNS resolution fails.

The timeout we use is rather aggressive; you may want to test with -w10 to see if that makes any difference. However, that shouldn’t be a factor unless the network quality is quite poor, which I wouldn’t expect intra-cluster.

We don’t provide an example configuration because there isn’t any single valid one: the correct configuration depends wholly on your particular database setup, and you’ll need to review the options at https://docs.konghq.com/2.0.x/configuration/ to see what you need.

It is exactly the same; I have tested that, and it works for other services I use on this cluster. When I add .cluster.local to it, I get this:

postgres-postgresql.storage.svc.cluster.local (10.x.x.x:5432) open,

However, the connection is still not established. That is the line in the logs; the pod restarts after that. @traines

How exactly does it restart? Do you see that line repeated? It should loop until success, exit the init container once that open line appears, and then proceed with the main container, e.g.:

kubectl logs example-kong-init-migrations-x7886 -c wait-for-postgres 
nc: example-postgresql (10.19.245.143:5432): Connection timed out
waiting for db - trying example-postgresql:5432
nc: example-postgresql (10.19.245.143:5432): Connection timed out
waiting for db - trying example-postgresql:5432
example-postgresql (10.19.245.143:5432) open

The open line should never appear more than once; you should see that the init container stops after it appears, and that you can then run kubectl logs PODNAME -c kong-migrations to see the migrations’ progress (or any failures beyond a basic connection failure).
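To make the loop semantics concrete, here is a small sketch where a stub check function stands in for the nc probe (failing twice before succeeding); the loop exits as soon as the probe succeeds, which is why the open line appears at most once:

```shell
#!/bin/sh
# Sketch of the wait-for-postgres loop; `check` is a stand-in for
# `nc -zv "$KONG_PG_HOST" "$KONG_PG_PORT" -w1`, here failing twice.
attempts=0
check() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # succeeds on the third try
}
until check; do
  echo "waiting for db - attempt $attempts"
  sleep 0
done
echo "connected after $attempts attempts"
```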

If the pod is in fact restarting, do you see anything in the kubectl describe events output for the pod or job? Kubernetes should log any external reasons for a pod restart, although we don’t define a deadline or other reasons for killing the pod. I suspect what you’re running into now is that the main migrations container is starting and exiting unsuccessfully for some other reason, e.g. bad auth credentials or missing database permissions.

I’m getting the same problem, migrating Kong on ECS to Kong on EKS, but I cannot specify an existing database on RDS. Have you been able to resolve it?

Did you get a fix for this?