/usr/local/share/lua/5.1/kong/init.lua:337: in function 'init'

I have an issue like this, but mine is a bit different, or that is not the solution for me.
I don't know how to solve this problem.

I installed the stable Kong chart and added the PostgreSQL chart under kong/charts.

helm install --name kong \
  --set 'ingressController.enabled=true' \
  ./kong
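
For reference, the same settings expressed as chart values look roughly like this (just a sketch; the PostgreSQL sub-chart values are assumptions based on my setup and may differ between chart versions):

ingressController:
  enabled: true
postgresql:
  enabled: true
  postgresqlUsername: kong
  postgresqlDatabase: kong
  service:
    port: 5432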

That page said the solution is the volumeMounts change below.
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgresql

But the stable PostgreSQL chart already has this mount. Should I change the name?
volumeMounts:
  - name: data
    mountPath: {{ .Values.persistence.mountPath }}
    subPath: {{ .Values.persistence.subPath }}
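
Those template values come from the chart's persistence block, which in my case is roughly this (a sketch; mountPath, size, and storageClass match what I see on the pod and PVC below, the rest is assumed):

persistence:
  enabled: true
  mountPath: /bitnami/postgresql
  subPath: ""
  size: 8Gi
  storageClass: ceph-block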

my-kong-8474cb575c-6sxhf 0/1 Init:0/1 0 40s
my-kong-controller-5cff75cbb4-vtq5l 0/2 Init:0/1 0 40s
my-infra-postgresql-0 1/1 Running 0 40s

data-my-postgresql-0 Bound pvc-b3f63cd5-9325-11e9-97c6-fa163eb09adb 8Gi RWO ceph-block 85s

volumeMounts:
  - mountPath: /bitnami/postgresql
    name: data

volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-my-postgresql-0

wait-for-db container log
database needs bootstrapping; run 'kong migrations bootstrap'
Error: /usr/local/share/lua/5.1/kong/cmd/start.lua:50: nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:337: database needs bootstrap; run 'kong migrations bootstrap'
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/kong/init.lua:337: in function 'init'
init_by_lua:3: in main chunk
lua:3: in main chunk
Run with --v (verbose) or --vv (debug) for more details
waiting for db

postgresql-0 log
2019-06-20 06:36:41.743 GMT [228] LOG: database system was shut down at 2019-06-20 06:36:40 GMT
2019-06-20 06:36:41.750 GMT [1] LOG: database system is ready to accept connections
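
In case it matters, this is roughly the one-off Job I assumed I would need in order to run the bootstrap myself (only a sketch; the image tag, host, and Secret name are guesses based on my release):

apiVersion: batch/v1
kind: Job
metadata:
  name: kong-migrations-bootstrap
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: kong-migrations
          image: kong:1.1        # example tag only
          command: [ "kong", "migrations", "bootstrap" ]
          env:
            - name: KONG_PG_HOST
              value: my-postgresql
            - name: KONG_PG_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-postgresql
                  key: postgresql-password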

When you install via Helm, it will automatically run the migrations for you.
Could you share the logs of the Job container that gets installed as part of the release?

The "nc -zv" command is the wrong command here. Is the change below the right fix?

wait-for-postgres container command (before => after)
command: [ "/bin/sh", "-c", "until nc -zv $KONG_PG_HOST $KONG_PG_PORT -w1; do echo 'waiting for db'; sleep 1; done" ]
=> command: [ "/bin/sh", "-c", "until nslookup $KONG_PG_HOST; do echo 'waiting for db'; sleep 1; done" ]

waiting for db
nc: invalid option -- z
waiting for db
BusyBox v1.22.1 (2014-05-22 23:22:11 UTC) multi-call binary.
Usage: nc [-iN] [-wN] [-l] [-p PORT] [-f FILE|IPADDR PORT] [-e PROG]
Open a pipe to IP:PORT or FILE
-l Listen mode, for inbound connects
(use -ll with -e for persistent server)
-p PORT Local port
-w SEC Connect timeout
-i SEC Delay interval for lines sent
-f FILE Use file (ala /dev/ttyS0) instead of network
-e PROG Run PROG after connect

[ The job works correctly after changing the command ]

waiting for db
Server: 10.221.0.10
Address 1: 10.221.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'my-postgresql'
waiting for db
Server: 10.221.0.10
Address 1: 10.221.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-postgresql
Address 1: 10.221.145.198 my-postgresql.nms.svc.cluster.local
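
For completeness, the init container after the edit looks roughly like this in the rendered spec (a sketch; the env values are what my chart templates inject and may differ in yours):

initContainers:
  - name: wait-for-postgres
    image: busybox
    env:
      - name: KONG_PG_HOST
        value: my-postgresql
      - name: KONG_PG_PORT
        value: "5432"
    command: [ "/bin/sh", "-c", "until nslookup $KONG_PG_HOST; do echo 'waiting for db'; sleep 1; done" ]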

But then I hit another issue: "password authentication failed".
How can I fix it?

kong-migrations container log
Error: [PostgreSQL error] failed to retrieve server_version_num: FATAL: password authentication failed for user "kong"
Run with --v (verbose) or --vv (debug) for more details
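
The password is supposed to come from the Secret that the PostgreSQL chart generates, which as far as I can tell looks roughly like this (the name is whatever "kong.postgresql.fullname" renders to in my release; the value is elided):

apiVersion: v1
kind: Secret
metadata:
  name: my-postgresql
type: Opaque
data:
  postgresql-password: <base64-encoded password>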

migrations.yaml
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "kong.postgresql.fullname" . }}
      key: postgresql-password

_helpers.tpl
{{- if .Values.postgresql.enabled }}
- name: KONG_PG_HOST
  value: {{ template "kong.postgresql.fullname" . }}
- name: KONG_PG_PORT
  value: "{{ .Values.postgresql.service.port }}"
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "kong.postgresql.fullname" . }}
      key: postgresql-password

That's the problem. Please use the latest version of the busybox Docker image.

Thank you for your comment.

I had already set the latest image, but imagePullPolicy was "IfNotPresent", so I changed it to "Always".
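
In pod-spec terms the relevant change is just this (a sketch; the container name and image tag are from my rendered chart):

initContainers:
  - name: wait-for-db
    image: busybox:latest
    imagePullPolicy: Always    # was IfNotPresent, so nodes kept reusing an old cached "latest" image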

The migration now completes successfully, but the kong-controller pod goes into CrashLoopBackOff.
Is this a separate problem?

ingress-controller container log
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59
/usr/local/Cellar/go/1.12.4/libexec/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xfd7ff2]
goroutine 157 [running]:
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1197580, 0x2002d00)
/usr/local/Cellar/go/1.12.4/libexec/src/runtime/panic.go:522 +0x1b5
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/parser.(*Parser).parseIngressRules(0xc000100b78, 0xc000242f70, 0x2, 0x2, 0x0, 0x0, 0x0)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/parser/parser.go:185 +0x132
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/parser.(*Parser).Build(0xc000100b78, 0xc000c49500, 0xc000350d20, 0xc000c49560)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/parser/parser.go:105 +0x79
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*KongController).syncIngress(0xc000100b00, 0x11f3280, 0xc00043b2c0, 0xc02c0f278b, 0x885abf40bb4fa)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/controller.go:111 +0x1dc
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).worker(0xc0004b4f60)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:112 +0x2e5
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0004447a8)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000389fa8, 0x3b9aca00, 0x0, 0x1, 0xc0000acf00)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).Run(0xc0004b4f60, 0x3b9aca00, 0xc0000acf00)
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59 +0x6c
created by github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*KongController).Start
/Users/harry/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:145 +0x10c

Which version of Ingress Controller are you using?

I checked 0.4.0, 0.5.0, and 0.3.0 (pinned as in the sketch after this list):

  1. 0.4.0 and 0.5.0 produce the same "panic error" message.
  2. 0.3.0 does not produce the Ingress Controller "panic error" message,
     but the admin-api and kong containers are not ready,
     and the kong-controller restarts repeatedly.
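
This is how I pinned the controller tag between tests (a sketch; the value path is what my copy of the chart expects, and only the tag changes):

ingressController:
  enabled: true
  image:
    tag: 0.3.0    # also tried 0.4.0 and 0.5.0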

kong-kong-cf4c4cf8-lltrs 0/1 Running 0 14m
kong-kong-controller-66bcb76df5-klcvb 1/2 Running 4 14m
kong-kong-init-migrations-dwtrq 0/1 Completed 0 14m
kong-postgresql-0 1/1 Running 0 14m

Using 0.3.0:
[kong log]
2019/06/26 00:05:00 [notice] 1#0: start worker processes
2019/06/26 00:05:00 [notice] 1#0: start worker process 36
2019/06/26 00:05:00 [notice] 1#0: start worker process 37
10.220.3.1 - - [26/Jun/2019:00:05:35 +0000] "GET /status HTTP/1.1" 200 205 "-" "kube-probe/1.13"
10.220.3.1 - - [26/Jun/2019:00:05:40 +0000] "GET /status HTTP/1.1" 200 205 "-" "kube-probe/1.13"

[ingress-controller log]
I0626 00:05:17.724135 7 controller.go:128] syncing Ingress configuration...
I0626 00:05:17.731865 7 kong.go:1027] creating Kong Upstream with name test.checker.8080
I0626 00:05:17.754316 7 kong.go:241] creating Kong Target 10.220.5.78:8080 for upstream e4538c0b-bdd0-4f61-baf4-db3c8f753a8e
I0626 00:05:17.866510 7 kong.go:113] syncing global plugins
W0626 00:05:17.867338 7 kong.go:335] there is no custom Ingress configuration for rule test/checker
I0626 00:05:17.868309 7 kong.go:401] Creating Kong Service name test.checker.8080
W0626 00:05:17.885778 7 kong.go:751] there is no custom Ingress configuration for rule test/checker

admin-api log
2019/06/26 00:08:13 [crit] 44#0: *26 [lua] balancer.lua:728: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2019/06/26 00:08:13 [crit] 38#0: *10 [lua] balancer.lua:728: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
10.220.2.1 - - [26/Jun/2019:00:08:28 +0000] "GET /status HTTP/1.1" 200 205 "-" "kube-probe/1.13"
10.220.2.1 - - [26/Jun/2019:00:08:58 +0000] "GET /status HTTP/1.1" 200 205 "-" "kube-probe/1.13"

I fixed the issue by upgrading Kubernetes from 1.13.0 to 1.15.0.
Thank you for your support

Please refer to this page.