How to solve the init_by_lua error in Kong

I have created a docker-compose file to spin up Postgres, kong-migration, and Kong containers. All the containers were up, and I was able to use Kong for the first time. But from yesterday onwards, I have been getting the error below:

stack traceback:
  [C]: in function 'error'
  /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:16: in function 'check_state'
  /usr/local/share/lua/5.1/kong/init.lua:432: in function 'init'
  init_by_lua:3: in main chunk
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:16: Database needs bootstrapping or is older than Kong 1.0.

To start a new installation from scratch, run 'kong migrations bootstrap'.

To migrate from a version older than 1.0, migrate to Kong 1.5.0 first. If you still have 'apis' entities, you can convert them to Routes and Services using the 'kong migrations migrate-apis' command in Kong 1.5.0.
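For context, a minimal sketch of the kind of docker-compose setup described above; the service names, credentials, and KONG_PG_* values here are assumptions, not the actual file:

version: "3"
services:
  kong-database:
    image: postgres:10.1
    environment:
      POSTGRES_USER: kong
      POSTGRES_DB: kong
      POSTGRES_PASSWORD: kong
    volumes:
      # Without a named volume here, the bootstrapped schema is lost
      # whenever the container is recreated.
      - pgdata:/var/lib/postgresql/data
  kong-migrations:
    image: kong:2.4
    command: kong migrations bootstrap
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_PASSWORD: kong
    depends_on:
      - kong-database
  kong:
    image: kong:2.4
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_PASSWORD: kong
    depends_on:
      - kong-migrations
    ports:
      - "8000:8000"
      - "8001:8001"
volumes:
  pgdata:

If the Postgres data directory does not persist across restarts, this error is expected to reappear, since the schema created by kong migrations bootstrap is lost along with the container.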

Greetings @arijitmhptr, have you found any resolution to this error?


Stuck with this error for a long time…
I have an external Postgres pod with a PVC and PV, and I have specified the volume mounts in the values.yaml of the Kong Helm chart as -

  volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgres-pv-claim

in the env section…
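A minimal sketch of the PV/PVC this mount assumes (capacity, storageClassName, and hostPath below are illustrative placeholders, not the actual manifests):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  storageClassName: manual   # placeholder; lets the PVC below bind to this PV
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgres     # placeholder node path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi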
Initially kong-init-migrations runs and the kong pod also comes to the running state. But when my k8s instance restarts or the pods are deleted, the kong pod gets stuck in the init state:

kong-kong-74fc6c98d8-h2w6j 0/1 Init:1/2 0 1s
Also, the logs for wait-for-db are as follows -

waiting for db
Error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:16: Database needs bootstrapping or is older than Kong 1.0.

To start a new installation from scratch, run 'kong migrations bootstrap'.

To migrate from a version older than 1.0, migrate to Kong 1.5.0 first.
If you still have 'apis' entities, you can convert them to Routes and Services
using the 'kong migrations migrate-apis' command in Kong 1.5.0.

kong image -

image:
  repository: kong
  tag: "2.4"

postgres image - postgres:10.1

Please help me resolve this blocker - @traines

What do the logs in the init-migrations container show? That suggests something broke during the bootstrap.

In any case I'd recommend just clearing the database; all configuration will be restored from the Kubernetes resources anyway. You'll want to delete the release, run helm template to generate the Jobs and such, copy one of the migration Jobs to a temporary file, change the command to kong migrations reset -y, and run that Job. After that, installing the release again should bring it up in a normal state.
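A rough sketch of that one-off Job, assuming a release named kong in namespace kong; the image tag, database host, and credentials are illustrative and should be taken from the rendered template instead:

# Steps (sketch):
#   helm delete kong -n kong
#   helm template kong kong/kong -n kong -f values.yaml > rendered.yaml
#   (copy the init-migrations Job from rendered.yaml, change args as below)
#   kubectl apply -f reset-job.yaml
#   helm install kong kong/kong -n kong -f values.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-db-reset          # one-off name; anything unused works
  namespace: kong
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: reset
        image: kong:2.4
        args: [ "kong", "migrations", "reset", "-y" ]
        env:
        - name: KONG_DATABASE
          value: postgres
        - name: KONG_PG_HOST
          value: postgres      # illustrative; use the real service name
        - name: KONG_PG_PASSWORD
          value: kong          # illustrative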

Hey, thanks for writing.
I tried almost all possible combinations for the wait-for-db image, as well as the change mentioned above.

On making these changes to the Kong chart -
[ "/bin/sh", "-c", "export KONG_NGINX_DAEMON=on; export KONG_PREFIX=`mktemp -d`; until kong start; do echo 'waiting for db'; sleep 15; done; kong stop; rm -fv '/kong_prefix//stream_rpc.sock'" ]

Here I have changed sleep 1 to sleep 15 so that the Postgres pod has sufficient time to come up completely. I also added the rm -fv '/kong_prefix//stream_rpc.sock' (referring to this issue: Leftover socket file interferes with startup when wait-for-db initContainer is enabled · Issue #295 · Kong/charts · GitHub).

But here again, the kong pod goes to the init stage on cluster restart:

NAME READY STATUS RESTARTS AGE
kong-kong- 0/1 Init:1/2 0 29m

The pod logs are the same -

waiting for db
Error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:16: Database needs bootstrapping or is older than Kong 1.0.

Also, on clearing the database and running the migrations job explicitly, the kong migrations pod goes to
kong-kong-init-migrations 0/1 CrashLoopBackOff

With error message - Database not bootstrapped, nothing to reset

Also, below are the deployment configurations in the values file -

  userDefinedVolumes:
  - name: "postgres-pv-claim2"
    persistentVolumeClaim:
      claimName: postgres-pv-claim2
  userDefinedVolumeMounts:
  - name: "postgres-pv-claim2"
    mountPath: "/var/lib/postgresql/data"

I also tried setting preUpgrade and postUpgrade to false in the values… Same behavior.

Am I missing anything in the configuration, or does it require additional explicit settings?

Looping in - @traines @hbagdi

Have you changed the init-migrations container to run kong migrations reset in the template and not flipped it back to kong migrations bootstrap? You shouldn’t see that message unless you’re running reset.

Hey,
Yes, I have made this change in kong/templates/migrations.yml:

args: [ "kong", "migrations", "reset", "-y" ]

And the kong pod still gets stuck in the init stage on cluster restart.

Right, to clarify: you'd only use the modified Job with the reset command as a one-off to clear the database. Afterwards you should restore the original templates. Apologies for the confusion.
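To spell that out (a sketch; the exact template layout in your copy of the chart may differ): once the one-off reset Job has completed, flip the migrations template back to its original bootstrap command before reinstalling the release -

# kong/templates/migrations.yml - restore the original command
args: [ "kong", "migrations", "bootstrap" ]

Then reinstall:

helm install kong kong/kong -n kong -f values.yaml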