Multiple nginx master processes in Kong 1.4, 1.5

Hi,
I am seeing an issue with multiple nginx master processes when running Kong 1.4 and 1.5 on Kubernetes.
With Kong 2.0 and 1.2 the issue does not exist; we have only one master process.

Can you elaborate on what you’re seeing, or show us what commands you’re running and their output?

I am running Kong 1.4.3 inside a container, and when it starts I can see multiple nginx master processes. Ideally there should be only one.

Can you provide what you’re using to determine that? What you describe is rather unusual, so I want to understand how you’re arriving at that conclusion. For example, if I start Kong, I get the following when running ps inside the container:

$ ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
kong           1  0.6  0.0  11844  3024 pts/0    Ss   22:35   0:00 /bin/bash
kong          51  0.1  0.0  26548  4068 pts/0    S+   22:35   0:00 perl /usr/local/openresty/bin/resty /usr/local/bin/kong start
kong          53  1.2  0.0  56988 15212 pts/0    S+   22:35   0:00 /usr/local/openresty/nginx/sbin/nginx -p /tmp/resty_MGmAykEyRg/ -c conf/nginx.conf
kong          71  1.8  0.1 713560 23656 pts/0    S+   22:35   0:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong          79  0.7  0.4 768132 72196 pts/0    S+   22:35   0:00 nginx: worker process
kong          80  0.7  0.4 768132 72196 pts/0    S+   22:35   0:00 nginx: worker process
kong          81  0.6  0.4 768132 72312 pts/0    S+   22:35   0:00 nginx: worker process
kong          82  0.7  0.4 768132 72332 pts/0    S+   22:35   0:00 nginx: worker process
kong          83 11.0  0.0  11844  3032 pts/1    Ss   22:36   0:00 /bin/bash
kong         100  0.0  0.0  51772  3448 pts/1    R+   22:36   0:00 ps aux

There shouldn’t be any way to spawn multiple master processes without some contrived configuration: the Docker image normally starts only one via its entrypoint and Kong has its own protections against running multiple instances on one machine. To spawn multiple masters inside a single container, you’d need to exec into the container and run KONG_PREFIX=/something-nonstandard kong start to start another process that doesn’t conflict with the first.
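For illustration only, ending up with a second master inside one container would take something deliberately contrived, roughly along these lines (the prefix and listen addresses are arbitrary examples, and you’d also have to pass the same database settings as the main instance, since an exec’d shell doesn’t inherit its environment):

$ kubectl exec -it <kong-pod> -- /bin/sh
# inside the container: start a second, independent Kong instance
$ KONG_PREFIX=/tmp/second-prefix \
  KONG_PROXY_LISTEN=0.0.0.0:9000 \
  KONG_ADMIN_LISTEN=0.0.0.0:9001 \
  kong start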

You can see multiple masters if you’re checking processes on the node itself and multiple Kong replicas happen to be scheduled on that node. We don’t specify any affinity rules by default, so that’s allowed. https://github.com/Kong/charts/blob/kong-1.6.1/charts/kong/values.yaml#L407-L409 allows you to set them, following something like the example at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#never-co-located-in-the-same-node
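For example, something along these lines under the chart’s affinity value would keep Kong Pods off of shared nodes. This is only a sketch: the label selector assumes the chart’s standard app.kubernetes.io/name=kong label, so adjust it to match the labels on your release.

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/name: kong
      topologyKey: kubernetes.io/hostname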

This is the problem:

[root@kong1 /]# ps -aef | grep nginx
kong 1 0 2 23:39 ? 00:00:01 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong 281 1 23 23:39 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong 283 1 54 23:39 ? 00:00:01 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
root 285 243 6 23:39 ? 00:00:00 /usr/local/openresty/nginx/sbin/nginx -p /tmp/resty_VNgqkeplTN/ -c conf/nginx.conf
kong 286 1 11 23:39 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong 287 1 15 23:40 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong 289 1 11 23:40 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong 291 1 0 23:40 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong 292 1 0 23:40 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
root 294 201 0 23:40 pts/0 00:00:00 grep --color=auto nginx
kong 295 1 0 23:40 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf

I am using the 1.4-centos image provided by Kong; just running that image causes the issue.
1.4 and 1.5 have the issue, while 2.0 does not.

We’re kinda stumped on this:

kong 1 0 2 23:39 ? 00:00:01 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong 281 1 23 23:39 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
kong 283 1 54 23:39 ? 00:00:01 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf

Not only are the multiple masters present, the extra two are children of PID 1 (second column is PPID) and started at the same time.

Can you walk us through, step-by-step, how you spawn your Kong Pods? If you use a simplified means of deploying one (e.g. with kubectl run on the image alone), do you see the same behavior as in your standard deployment process?

kubectl run -it --restart=Never --rm --image kong:1.5 master-test

There are two ways to reproduce it (a rough command sketch follows below):

1. Run Kong inside a Pod with the 1.2.2-centos image, with Cassandra as the backend, then upgrade the Deployment to the 1.4-centos image using the same keyspace. The issue is seen.
2. Run Kong inside a Pod with the 1.4-centos image with a new Cassandra keyspace.
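In command form, roughly (the Deployment and container names are placeholders for whatever the manifests actually use):

# path 1: start on kong:1.2.2-centos with Cassandra, then upgrade in place against the same keyspace
$ kubectl set image deployment/kong proxy=kong:1.4-centos
$ kubectl rollout status deployment/kong
# path 2: deploy kong:1.4-centos directly, pointed at a fresh keyspace
#   (e.g. KONG_DATABASE=cassandra, KONG_CASSANDRA_KEYSPACE=<new keyspace>)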

Shall I open a bug for it?

That may be best. With what’s presented I cannot replicate this, e.g.:

$ kubectl run -it --restart=Never --env="KONG_DATABASE=off" --rm --image kong:1.5 master-test 
If you don't see a command prompt, try pressing enter.
2020/06/08 17:49:50 [notice] 1#0: signal 28 (SIGWINCH) received
2020/06/08 17:49:50 [notice] 26#0: signal 28 (SIGWINCH) received
2020/06/08 17:49:50 [notice] 27#0: signal 28 (SIGWINCH) received
2020/06/08 17:49:50 [notice] 1#0: signal 28 (SIGWINCH) received
2020/06/08 17:49:50 [notice] 26#0: signal 28 (SIGWINCH) received
2020/06/08 17:49:50 [notice] 27#0: signal 28 (SIGWINCH) received
$ kubectl exec -it master-test /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/ $ ps -aef
PID   USER     TIME  COMMAND
    1 kong      0:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
   26 kong      0:00 nginx: worker process
   27 kong      0:00 nginx: worker process
   28 kong      0:00 /bin/sh
   33 kong      0:00 ps -aef

I can’t think offhand how you’d replicate that in a Dockerized environment without some very specific configuration and abnormal container management that doesn’t appear to be in place here.

I tried to run it inside Kubernetes with Cassandra as the backend.
I can help reproduce the issue; are any particular logs required after the issue is reproduced?

Can you set KONG_LOG_LEVEL=debug, start a new Pod (e.g. by scaling the Deployment replicas to 0 and then to 1), and use kubectl logs PODNAME -c proxy to retrieve the Kong container debug logs? Everything we care about should be in startup logs; waiting 30s after starting and collecting logs then should suffice.

Once you’ve done that, run the same ps command as earlier to collect process information from that run.
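Something like this should do it, assuming the Deployment is named kong and the Kong container is named proxy as in the chart (adjust the names to your environment):

$ kubectl set env deployment/kong KONG_LOG_LEVEL=debug
$ kubectl scale deployment/kong --replicas=0
$ kubectl scale deployment/kong --replicas=1
# wait ~30s after the new Pod is running, then collect logs and processes
$ kubectl logs <new-pod-name> -c proxy > kong-debug.log
$ kubectl exec <new-pod-name> -c proxy -- ps -aef | grep nginx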

The ps info from earlier indicates that the additional processes were children of PID 1 (the original master), so hopefully the original master includes some log information indicating why it spawned them.

As part of testing for another issue, we’ve found a scenario that may lead to this: if a worker gets killed because it runs out of memory, it may get respawned into a strange state where it looks like, or actually is, a master.
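If you have access to the node, the kernel log should show whether the OOM killer hit one of the workers; a quick check (assuming you can read dmesg on the node) looks something like:

$ dmesg -T | grep -iE 'out of memory|killed process'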

@vinicius.mignot does that characterization match what you observed? I may be a little off, since I’m working from my memory of our call rather than seeing this symptom in practice.

@piyush_gupta what are your pod resource limits and requests, and do you see any restarts mentioned in the container log?
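For reference, something like this shows the restart count and whether the container itself was killed for memory (the Pod name is a placeholder):

$ kubectl get pods
$ kubectl describe pod <kong-pod> | grep -A 5 'Last State'
# a memory-killed container shows Reason: OOMKilled with exit code 137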

Yes, that’s what is happening in my tests. And if I keep pushing, all the Nginx processes become master processes.

@traines I see multiple restarts on the container. I put 1 CPU and 1Gi memory on the Pod. Shall I open a bug ticket for the issue?
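For context, that corresponds to roughly the following in the container spec, assuming the same values are used for requests and limits (adjust to the actual manifest):

resources:
  requests:
    cpu: "1"
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 1Gi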

Probably check out 2.0. There was a change in 1.4.1 that removed a restriction on worker connections (see https://github.com/Kong/kong/blob/master/CHANGELOG.md#141 and https://github.com/Kong/kong/pull/5148) that we later reverted in https://github.com/Kong/kong/commit/798e270d7ed4fa3db0da76ec447676806842fa9f since it ate a bunch more memory than it should.

@traines Shall I open a bug ticket for this issue in Kong 1.4?

No. There won’t be any future 1.x releases; that change will only ever be applied to 2.x and onward.