Kong k8s pod crashes after a few hours

  1. Installed the Kong k8s version on an Ubuntu k8s master (AWS EC2):

helm install stable/kong

  2. It was working perfectly post installation.
  3. After a few hours (20-odd hrs), the Kong pod crashed with the error: 'Error syncing pod'.
  4. Logs are as below. This repeated twice: I reinstalled and was able to reproduce the same issue.

prefix directory /usr/local/kong not found, trying to create it
2019/04/12 07:06:13 [warn] postgres database 'kong' is missing migration: (response-transformer) 2016-05-04-160000_resp_trans_schema_changes
2019/04/12 07:06:13 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:172: [postgres error] the current database schema does not match this version of Kong. Please run kong migrations up to update/initialize the database schema. Be aware that Kong migrations should only run from a single node, and that nodes running migrations concurrently will conflict with each other and might corrupt your database schema!
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/init.lua:172: in function 'init'
init_by_lua:3: in main chunk
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:172: [postgres error] the current database schema does not match this version of Kong. Please run kong migrations up to update/initialize the database schema. Be aware that Kong migrations should only run from a single node, and that nodes running migrations concurrently will conflict with each other and might corrupt your database schema!
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/init.lua:172: in function 'init'
init_by_lua:3: in main chunk

  5. Tried to recover by scaling up the migration job so it would run again; it did not scale up or run.
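For context, running the pending migrations as a one-shot Kubernetes Job (rather than scaling an existing job) might look roughly like the sketch below. This is an illustration only: the Job name, image tag, and Postgres Service host are assumptions, and, as the error message warns, migrations should only run from a single node.

# Hypothetical one-shot migration Job; names and hosts are assumptions
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-migrations-up
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: kong-migrations
        image: kong:0.13.0              # must match the running Kong version
        command: ["kong", "migrations", "up"]
        env:
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_PG_HOST
          value: "postgres"             # assumed name of the Postgres Service
        - name: KONG_PG_DATABASE
          value: "kong"

A Job like this runs the migration container to completion exactly once, instead of relying on scaling an existing workload.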

Team, can you please check and advise? Stability is critical for us to make a decision on moving forward with Kong API Gateway.

From the error message, it seems like you’re using Kong <= 0.15.0.
Any reason for not starting with Kong 1.0 or 1.1?
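If you do stay on the chart, one option might be to pin the Kong image explicitly in the chart values. A minimal sketch, assuming the stable/kong chart exposes the usual image.repository and image.tag keys (check the chart's values.yaml to confirm):

# values.yaml (sketch; key names are assumptions based on common chart conventions)
image:
  repository: kong
  tag: "1.1"

applied with something like helm install stable/kong -f values.yaml.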

I followed the Helm chart setup steps with no variation. I also observed that the version was 0.13.0. I was expecting it to be 1.1, the default as mentioned on Helm Hub.

Looks like some version-conflict issue with the Helm chart scripts.
So I have now switched to plain k8s YAMLs and redone the setup.

It has started well (but so did the previous Helm setup); I will observe it over the weekend.

Could you share the version of the chart and the version of Kong you're using?

Could you provide a link to the YAML you're using?

Got the error again even with plain k8s; here are the logs:

2019/04/15 13:25:30 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:337: database needs bootstrap; run 'kong migrations bootstrap'
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/kong/init.lua:337: in function 'init'
init_by_lua:3: in main chunk
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:337: database needs bootstrap; run 'kong migrations bootstrap'
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/kong/init.lua:337: in function 'init'
init_by_lua:3: in main chunk

I checked the Kong Postgres schema; it does not show any tables. Not sure what's happening.

Please refer to the YAMLs at https://gitlab.com/bm-kong/kong

I suspected you didn't have persistent storage for Postgres, and that is indeed the case.
You're using a ReplicationController for your Postgres deployment, which only ensures that the Postgres Pod is running but doesn't provide any persistent storage for Kong's database.

At first, the migrations run and Kong starts, but as soon as the Postgres pod is restarted or rescheduled, the database schema and the data itself are gone and Kong can no longer function.

Please use a StatefulSet for your Postgres deployment. You can refer to our manifest for this purpose.
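For illustration, a minimal Postgres StatefulSet with a volumeClaimTemplate might look like the sketch below. Names, the image version, and the storage size are assumptions; adapt them to your environment:

# Minimal sketch; names, credentials, and storage size are assumptions
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres               # requires a matching headless Service
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:9.6           # assumed version
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: kong
        - name: POSTGRES_USER
          value: kong
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata   # subdirectory avoids issues with a fresh volume's lost+found
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:               # the PVC survives pod restarts and rescheduling
  - metadata:
      name: datadir
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

With a PersistentVolumeClaim backing the data directory, the schema and data survive pod restarts, so Kong should not hit the 'database needs bootstrap' error again.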