Sometimes my kong-ingress-controller pod goes into CrashLoopBackOff status:
NAME                                           READY   STATUS             RESTARTS   AGE
pod/kong-7f66b99bb5-747lm                      1/1     Running            0          11d
pod/kong-ingress-controller-7b6d8fff97-dqhqx   2/3     CrashLoopBackOff   649        5d2h
pod/konga-85b66cffff-tkj8w                     1/1     Running            0          11d
This happens quite frequently, and after a few minutes (usually more than one minute) the pod is running again. Whenever this happens, I check the logs and always find this connect: connection refused
message:
E0326 10:39:34.542954 6 controller.go:131] unexpected failure updating Kong configuration:
Get http://localhost:8001/certificates/xxxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
W0326 10:39:34.542987 6 queue.go:113] requeuing kube-system/cert-manager-webhook, err Get http://localhost:8001/certificates/xxxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
I0326 10:39:37.875666 6 controller.go:128] syncing Ingress configuration...
E0326 10:39:37.876260 6 kong.go:1142] Unexpected response searching a Kong Certificate: Get http://localhost:8001/certificates/xxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
E0326 10:39:37.876283 6 controller.go:131] unexpected failure updating Kong configuration:
Get http://localhost:8001/certificates/xxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
W0326 10:39:37.876291 6 queue.go:113] requeuing kube-system/heapster, err Get http://localhost:8001/certificates/xxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
I0326 10:39:41.209755 6 controller.go:128] syncing Ingress configuration...
E0326 10:39:41.210436 6 kong.go:1142] Unexpected response searching a Kong Certificate: Get http://localhost:8001/certificates/xxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
E0326 10:39:41.210449 6 controller.go:131] unexpected failure updating Kong configuration:
Get http://localhost:8001/certificates/xxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
W0326 10:39:41.210456 6 queue.go:113] requeuing kube-system/tiller-deploy, err Get http://localhost:8001/certificates/xxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
I0326 10:39:44.542301 6 controller.go:128] syncing Ingress configuration...
E0326 10:39:44.542809 6 kong.go:1142] Unexpected response searching a Kong Certificate: Get http://localhost:8001/certificates/xxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
E0326 10:39:44.542825 6 controller.go:131] unexpected failure updating Kong configuration:
Get http://localhost:8001/certificates/xxxxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
W0326 10:39:44.542831 6 queue.go:113] requeuing kube-system/metrics-server, err Get http://localhost:8001/certificates/xxxxxxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
I0326 10:39:47.875673 6 controller.go:128] syncing Ingress configuration...
E0326 10:39:47.876258 6 kong.go:1142] Unexpected response searching a Kong Certificate: Get http://localhost:8001/certificates/xxxx: dial tcp 127.0.0.1:8001: connect: connection refused
E0326 10:39:47.876275 6 controller.go:131] unexpected failure updating Kong configuration:
Get http://localhost:8001/certificates/xxxxxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
W0326 10:39:47.876299 6 queue.go:113] requeuing kong/kong-ingress-controller, err Get http://localhost:8001/certificates/xxxxxxxxx: dial tcp 127.0.0.1:8001: connect: connection refused
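If I understand these errors correctly, connect: connection refused at the TCP level means nothing is listening on 127.0.0.1:8001 inside the pod, i.e. the Kong admin API container is down while the controller container keeps retrying. A minimal sketch of the check that keeps failing (admin_api_reachable is a hypothetical helper of mine, not part of the controller):

```python
import socket

def admin_api_reachable(host="127.0.0.1", port=8001, timeout=1.0):
    """Hypothetical helper: report whether anything accepts TCP
    connections on the Kong admin port. A closed port raises
    ConnectionRefusedError (a subclass of OSError), which is exactly
    the 'connect: connection refused' in the controller logs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

While the kong container is restarting, every probe like this fails, which would match the requeue lines above repeating every few seconds.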
I performed a port-forward to check the behavior locally, and I get:
⟩ kubectl port-forward svc/kong-ingress-controller 8001:8001 -n kong
Forwarding from 127.0.0.1:8001 -> 8001
Forwarding from [::1]:8001 -> 8001
Handling connection for 8001
E0326 11:46:44.589571 30810 portforward.go:400] an error occurred forwarding 8001 -> 8001: error forwarding port 8001 to pod 7ee8c115b05036b8a54046763d9aec1f9d69c8ed97b32a5406014ccb630484d2, uid : exit status 1: 2019/03/26 10:46:44 socat[5205] E connect(5, AF=2 127.0.0.1:8001, 16): Connection refused
Handling connection for 8001
Handling connection for 8001
E0326 11:47:08.110395 30810 portforward.go:400] an error occurred forwarding 8001 -> 8001: error forwarding port 8001 to pod 7ee8c115b05036b8a54046763d9aec1f9d69c8ed97b32a5406014ccb630484d2, uid : exit status 1: 2019/03/26 10:47:08 socat[5473] E connect(5, AF=2 127.0.0.1:8001, 16): Connection refused
E0326 11:47:08.110407 30810 portforward.go:400] an error occurred forwarding 8001 -> 8001: error forwarding port 8001 to pod 7ee8c115b05036b8a54046763d9aec1f9d69c8ed97b32a5406014ccb630484d2, uid : exit status 1: 2019/03/26 10:47:08 socat[5474] E connect(5, AF=2 127.0.0.1:8001, 16): Connection refused
Then I make a curl request and get this:
⟩ curl -i http://localhost:8001/
curl: (52) Empty reply from server
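If I understand curl error 52 correctly, it is a different failure than connection refused: kubectl port-forward accepts my local connection, then socat fails to reach the pod's port and closes my connection without sending any HTTP response. A small self-contained sketch of that behavior (the server here only imitates the dying backend, it is not Kong):

```python
import http.client
import socket
import threading

def dead_backend(srv):
    """Accept one connection, read the request, then close without
    answering -- roughly what the port-forward does when the pod
    refuses the upstream connection."""
    conn, _ = srv.accept()
    conn.recv(65536)   # swallow the HTTP request bytes
    conn.close()       # close with no response at all
    srv.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=dead_backend, args=(srv,)).start()

c = http.client.HTTPConnection("127.0.0.1", srv.getsockname()[1], timeout=2)
c.request("GET", "/")
try:
    c.getresponse()
except http.client.RemoteDisconnected:
    print("empty reply from server")   # the condition curl reports as (52)
```

So curl reaching the (52) error instead of a refusal fits the picture: the forwarder is up, but the admin API behind it is not.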
I also can't use Konga to connect to my kong-ingress-controller during this time.
When the pod is running again, everything works again.
I understand pod lifecycle dynamics, but why doesn't my pod/kong-ingress-controller-7b6d8fff97-dqhqx behave stably? The CrashLoopBackOff status is very frequent in my Kong deployment.
Why does this happen?