Can you provide logs from the init-migrations pod?
Offhand, the Postgres sub-chart sets up the user and database automatically; the migration job alone will not. Double-check that you’ve created your user and database, and granted the user permissions on that database.
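If you haven’t, a rough sketch of that setup with psql (the kong names, password, and service hostname below are placeholders, adjust for your environment):

psql -h postgres-postgresql.storage.svc -U postgres -c "CREATE USER kong WITH PASSWORD 'kong';"
psql -h postgres-postgresql.storage.svc -U postgres -c "CREATE DATABASE kong OWNER kong;"
psql -h postgres-postgresql.storage.svc -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE kong TO kong;"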
Honestly, there is not much of any logging, and that has been the challenge. It is like shooting in the dark. This is all I see:
Error from server (BadRequest): container "kong-migrations" in pod "kong-kong-init-migrations-m5grv" is waiting to start: PodInitializing
That job runs a basic init to confirm it can establish a connection:
kubectl logs JOB_POD -c wait-for-postgres should get you logs for it. Also check kubectl describe pod JOB_POD for the initContainer’s command; it may not have rendered correctly, though based on the values.yaml it should have.
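For example, to dump the rendered initContainer command directly (JOB_POD being the job pod name):

kubectl get pod JOB_POD -o jsonpath='{.spec.initContainers[0].command}'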
I can honestly say something just does not look right here. For what it is worth, I am using version 1.5.0 of the Helm chart. I also noticed that the batch job command uses [ "/bin/sh", "-c", "kong migrations up" ], whereas the Docker documentation mandates [ "/bin/sh", "-c", "kong migrations bootstrap" ] except for Kong versions < 0.15. Is that an oversight?
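For context, my reading of the Kong docs on those two commands is roughly:

# fresh database (Kong >= 0.15): creates the schema from scratch
kong migrations bootstrap
# already-bootstrapped database: applies any pending migrations
kong migrations up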
Can someone please share with me a values.yaml file that works for chart version 1.5.0? This chart uses Kong version 2.0.3. @traines Thanks in advance.
Can you show the complete command you are running? Again, what you’ve provided looks like what Kubernetes will return if you attempt to retrieve logs from an initializing pod without specifying the container. The command should look like:
kubectl logs JOB_POD -c wait-for-postgres
The -c wait-for-postgres is critical; you will not get useful information without it.
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
(the same line repeats indefinitely)
Once more, can I have a sample external PostgreSQL configuration that works? It looks as if the database connection string is not constructed properly from the environment variables.
# If you would like to use a database, there are two options:
# - (recommended) Deploy and maintain a database and pass the connection
# details to Kong via the `env` section.
# - You can use the below `postgresql` sub-chart to deploy a database
# along with Kong as part of a single Helm release.
Can we get an example configuration for an external database? I think the actual use case of this chart will be using an already configured Postgres/Cassandra database rather than the dependent Postgres sub-chart. I asked for a sample section of this configuration several days ago. I am almost at the point of writing my own chart for this, but that would be duplication. @hbagdi @traines
Does the service hostname and port not match what you’d expect?
waiting for db - trying postgres-postgresql.storage.svc:5432
That’s not a Postgres connection string, just the hostname and port. The init container is a very basic test to confirm that it can resolve the address and establish a TCP connection. You can mimic it by running a pod directly:
$ kubectl run -it --restart=Never --rm --image busybox:latest test
If you don't see a command prompt, try pressing enter.
/ # nc -zv -w1 example-postgresql.default.svc:5432
example-postgresql.default.svc:5432 (10.19.251.158:5432) open
/ # nc -zv -w1 not-listening.default.svc:5432
nc: not-listening.default.svc:5432 (10.19.249.30:5432): Connection timed out
/ # nc -zv -w1 doesntexist.default.svc:5432
nc: bad address 'doesntexist.default.svc:5432'
Those show what you’ll get when the connection succeeds, when the connection fails, and when DNS resolution fails.
The timeout we use is rather aggressive; you may want to test with -w10 to see if that makes any difference. However, that shouldn’t be a factor unless the network quality is quite poor, which I wouldn’t expect intra-cluster.
We don’t provide an example configuration because there isn’t a single valid one: the correct configuration depends entirely on your particular database setup, and you’ll need to review the options at https://docs.konghq.com/2.0.x/configuration/ to see what applies.
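That said, purely to illustrate the mechanism (every value below is a placeholder, not a recommendation), the chart turns keys under env into KONG_* environment variables, so an external Postgres section might look roughly like:

env:
  database: postgres
  pg_host: postgres-postgresql.storage.svc
  pg_port: 5432
  pg_user: kong
  pg_password: kong
  pg_database: kong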
How exactly does it restart? Do you see that line repeated? It should loop until success, exit the init container once that open line appears, and then proceed with the main container, e.g.
kubectl logs example-kong-init-migrations-x7886 -c wait-for-postgres
nc: example-postgresql (10.19.245.143:5432): Connection timed out
waiting for db - trying example-postgresql:5432
nc: example-postgresql (10.19.245.143:5432): Connection timed out
waiting for db - trying example-postgresql:5432
example-postgresql (10.19.245.143:5432) open
The open line should never appear more than once; you should see that the init container stops after it appears, and that you can run kubectl logs PODNAME -c kong-migrations afterwards to see the migrations’ progress (or any failures beyond a basic connection failure).
If the pod is in fact restarting, do you see anything in the Events section of the kubectl describe output for the pod or job? Kubernetes should log any external reasons for a pod restart, although we don’t define a deadline or other reasons for killing the pod. I suspect what you’re running into now is that the main migrations container is starting and exiting unsuccessfully for some other reason, e.g. bad auth credentials or missing database permissions.
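For example (PODNAME being the job pod):

kubectl describe pod PODNAME
kubectl get events --field-selector involvedObject.name=PODNAME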