Container ingress-controller failed liveness probe

Hello guys

I am trying to complete the installation of Kong for Kubernetes on AKS. Basically, I want to reproduce this scenario:

Here is how I install it. I have tried both of the following:

helm install kong/kong --generate-name --set ingressController.installCRDs=false --set admin.type=LoadBalancer --set proxy.type=LoadBalancer

and

kubectl apply -f https://bit.ly/k4k8s
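
(In case it matters, the Helm variant can also be pinned explicitly to the kong namespace; a rough sketch, assuming Helm 3.2+ and the chart repo at https://charts.konghq.com:)

helm repo add kong https://charts.konghq.com
helm repo update
helm install kong/kong --generate-name \
  --namespace kong --create-namespace \
  --set ingressController.installCRDs=false \
  --set admin.type=LoadBalancer \
  --set proxy.type=LoadBalancer
kubectl get pods -n kong -w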

The result is always the same:

kubectl get all -n kong
NAME                                        READY   STATUS             RESTARTS   AGE
pod/kong-1595829690-kong-6789f4b45f-rjztq   2/3     CrashLoopBackOff   11         21m

NAME                                 TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
service/kong-1595829690-kong-proxy   LoadBalancer   10.0.88.120   51.116.135.128   80:32104/TCP,443:30928/TCP   21m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong-1595829690-kong   0/1     1            0           21m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-1595829690-kong-6789f4b45f   1         1         0       21m

When I describe the pod:

Events:
  Type     Reason     Age                   From                                        Message
  ----     ------     ----                  ----                                        -------
  Normal   Scheduled  18m                   default-scheduler                           Successfully assigned kong/kong-1595829690-kong-6789f4b45f-rjztq to aks-agentpool-41512647-vmss000003
  Normal   Pulling    18m                   kubelet, aks-agentpool-41512647-vmss000003  Pulling image "docker.io/istio/proxyv2:1.7.0-alpha.0"
  Normal   Pulled     18m                   kubelet, aks-agentpool-41512647-vmss000003  Successfully pulled image "docker.io/istio/proxyv2:1.7.0-alpha.0"
  Normal   Created    18m                   kubelet, aks-agentpool-41512647-vmss000003  Created container istio-init
  Normal   Started    18m                   kubelet, aks-agentpool-41512647-vmss000003  Started container istio-init
  Normal   Pulled     18m                   kubelet, aks-agentpool-41512647-vmss000003  Container image "kong:2.1" already present on machine
  Normal   Pulling    18m                   kubelet, aks-agentpool-41512647-vmss000003  Pulling image "docker.io/istio/proxyv2:1.7.0-alpha.0"
  Normal   Started    18m                   kubelet, aks-agentpool-41512647-vmss000003  Started container proxy
  Normal   Created    18m                   kubelet, aks-agentpool-41512647-vmss000003  Created container proxy
  Normal   Pulled     18m                   kubelet, aks-agentpool-41512647-vmss000003  Successfully pulled image "docker.io/istio/proxyv2:1.7.0-alpha.0"
  Normal   Created    18m                   kubelet, aks-agentpool-41512647-vmss000003  Created container istio-proxy
  Normal   Started    18m                   kubelet, aks-agentpool-41512647-vmss000003  Started container istio-proxy
  Warning  Unhealthy  18m (x3 over 18m)     kubelet, aks-agentpool-41512647-vmss000003  Readiness probe failed: HTTP probe failed with statuscode: 500
  Normal   Created    18m (x2 over 18m)     kubelet, aks-agentpool-41512647-vmss000003  Created container ingress-controller
  Normal   Pulled     18m (x2 over 18m)     kubelet, aks-agentpool-41512647-vmss000003  Container image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1" already present on machine
  Normal   Killing    18m                   kubelet, aks-agentpool-41512647-vmss000003  Container ingress-controller failed liveness probe, will be restarted
  Normal   Started    18m (x2 over 18m)     kubelet, aks-agentpool-41512647-vmss000003  Started container ingress-controller
  Warning  Unhealthy  18m (x4 over 18m)     kubelet, aks-agentpool-41512647-vmss000003  Liveness probe failed: HTTP probe failed with statuscode: 500
  Warning  BackOff    3m48s (x50 over 16m)  kubelet, aks-agentpool-41512647-vmss000003  Back-off restarting failed container
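
To see what the probe endpoint itself returns at that point (a 500 versus a plain connection refused), one option is to replay the kubelet's GET from inside the pod. A sketch, assuming curl is available in the istio-proxy container (busybox wget in the proxy container should work too):

kubectl exec -n kong kong-1595829690-kong-6789f4b45f-rjztq -c istio-proxy -- \
  curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:10254/healthz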

When I check the logs:

kubectl logs pod/kong-1595829690-kong-6789f4b45f-rjztq ingress-controller -n kong
-------------------------------------------------------------------------------

Kong Ingress controller
  Release:    0.9.1
  Build:      2caa524
  Repository: git@github.com:kong/kubernetes-ingress-controller.git

  Go:         go1.14.1
-------------------------------------------------------------------------------

W0727 06:33:17.215799       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0727 06:33:17.215947       1 main.go:492] Creating API client for https://10.0.0.1:443

d063877@ubuntu ~/Downloads/istio/istio-1.7.0-alpha.0 $ kubectl logs pod/kong-1595829690-kong-6789f4b45f-rjztq proxy -n kong
2020/07/27 06:01:53 [warn] 1#0: load balancing method redefined in /kong_prefix/nginx-kong.conf:61
nginx: [warn] load balancing method redefined in /kong_prefix/nginx-kong.conf:61
2020/07/27 06:01:53 [notice] 1#0: using the "epoll" event method
2020/07/27 06:01:53 [notice] 1#0: openresty/1.15.8.3
2020/07/27 06:01:53 [notice] 1#0: built by gcc 9.3.0 (Alpine 9.3.0) 
2020/07/27 06:01:53 [notice] 1#0: OS: Linux 4.15.0-1083-azure
2020/07/27 06:01:53 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2020/07/27 06:01:53 [notice] 1#0: start worker processes
2020/07/27 06:01:53 [notice] 1#0: start worker process 22
2020/07/27 06:01:53 [notice] 22#0: *1 [lua] cache.lua:374: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2020/07/27 06:01:53 [notice] 22#0: *1 [lua] cache.lua:374: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*

kubectl logs pod/kong-1595829690-kong-6789f4b45f-rjztq istio-proxy -n kong
2020-07-27T06:01:54.513325Z	info	FLAG: --concurrency="2"
2020-07-27T06:01:54.513350Z	info	FLAG: --disableInternalTelemetry="false"
2020-07-27T06:01:54.513355Z	info	FLAG: --domain="kong.svc.cluster.local"
2020-07-27T06:01:54.513357Z	info	FLAG: --help="false"
2020-07-27T06:01:54.513360Z	info	FLAG: --id=""
2020-07-27T06:01:54.513362Z	info	FLAG: --ip=""
2020-07-27T06:01:54.513364Z	info	FLAG: --log_as_json="false"
2020-07-27T06:01:54.513367Z	info	FLAG: --log_caller=""
2020-07-27T06:01:54.513369Z	info	FLAG: --log_output_level="default:info"
2020-07-27T06:01:54.513371Z	info	FLAG: --log_rotate=""
2020-07-27T06:01:54.513373Z	info	FLAG: --log_rotate_max_age="30"
2020-07-27T06:01:54.513376Z	info	FLAG: --log_rotate_max_backups="1000"
2020-07-27T06:01:54.513379Z	info	FLAG: --log_rotate_max_size="104857600"
2020-07-27T06:01:54.513381Z	info	FLAG: --log_stacktrace_level="default:none"
2020-07-27T06:01:54.513386Z	info	FLAG: --log_target="[stdout]"
2020-07-27T06:01:54.513389Z	info	FLAG: --meshConfig="./etc/istio/config/mesh"
2020-07-27T06:01:54.513391Z	info	FLAG: --mixerIdentity=""
2020-07-27T06:01:54.513394Z	info	FLAG: --outlierLogPath=""
2020-07-27T06:01:54.513396Z	info	FLAG: --proxyComponentLogLevel="misc:error"
2020-07-27T06:01:54.513399Z	info	FLAG: --proxyLogLevel="warning"
2020-07-27T06:01:54.513401Z	info	FLAG: --serviceCluster="kong-1595829690-kong.kong"
2020-07-27T06:01:54.513404Z	info	FLAG: --serviceregistry="Kubernetes"
2020-07-27T06:01:54.513407Z	info	FLAG: --stsPort="0"
2020-07-27T06:01:54.513409Z	info	FLAG: --templateFile=""
2020-07-27T06:01:54.513412Z	info	FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2020-07-27T06:01:54.513415Z	info	FLAG: --trust-domain="cluster.local"
2020-07-27T06:01:54.513438Z	info	Version 1.7.0-alpha.0-37119973c952151e269110170f2fda8c6a34fb5e-dirty-Modified
2020-07-27T06:01:54.513733Z	info	Obtained private IP [10.244.3.65]
2020-07-27T06:01:54.513807Z	info	Apply proxy config from env {"proxyMetadata":{"DNS_AGENT":""}}

2020-07-27T06:01:54.514560Z	info	Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: ./etc/istio/proxy
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
envoyAccessLogService: {}
envoyMetricsService: {}
parentShutdownDuration: 60s
proxyAdminPort: 15000
proxyMetadata:
  DNS_AGENT: ""
serviceCluster: kong-1595829690-kong.kong
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
  zipkin:
    address: zipkin.istio-system:9411

...

2020-07-27T06:01:55.300093Z	warning	envoy filter	[src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-07-27T06:01:55.301147Z	warning	envoy filter	[src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-07-27T06:01:55.307657Z	warning	envoy filter	[src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-07-27T06:01:55.308667Z	warning	envoy filter	[src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-07-27T06:01:57.014809Z	info	Envoy proxy is ready
[2020-07-27T06:01:57.676Z] "- - -" 0 UH "-" "-" 0 0 7 - "-" "-" "-" "-" "-" - - 10.0.0.1:443 10.244.3.65:47500 - -
2020-07-27T06:02:00.695289Z	error	Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/readyz
app URL path = /healthz
2020-07-27T06:02:06.558113Z	error	Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/livez
app URL path = /healthz

2020-07-27T06:33:30.695134Z	error	Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/readyz
app URL path = /healthz
2020-07-27T06:33:36.557807Z	error	Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/livez
app URL path = /healthz
[2020-07-27T06:33:30.808Z] "- - -" 0 UH "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - 10.0.0.1:443 10.244.3.65:41542 - -
2020-07-27T06:33:40.694965Z	error	Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/readyz
app URL path = /healthz
2020-07-27T06:33:46.557618Z	error	Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/livez
app URL path = /healthz
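
(The /app-health/... URLs above are Istio's probe rewriting at work: with the sidecar injected, the kubelet's HTTP probes are sent to the pilot-agent on port 15020, and the agent forwards them to the controller's real /healthz on port 10254. While debugging, that rewriting can be taken out of the picture per pod with the sidecar.istio.io/rewriteAppHTTPProbers annotation; a minimal sketch of what would go into the Deployment's pod template, assuming you are comfortable patching the chart-managed Deployment:)

  template:
    metadata:
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "false"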

Here is the deployment:

kubectl get deployment.apps/kong-1595829690-kong -n kong -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kuma.io/gateway: enabled
    meta.helm.sh/release-name: kong-1595829690
    meta.helm.sh/release-namespace: kong
    traffic.sidecar.istio.io/includeInboundPorts: ""
  creationTimestamp: "2020-07-27T06:01:49Z"
  generation: 1
  labels:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kong-1595829690
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: "2"
    helm.sh/chart: kong-1.8.0
  name: kong-1595829690-kong
  namespace: kong
  resourceVersion: "11494112"
  selfLink: /apis/apps/v1/namespaces/kong/deployments/kong-1595829690-kong
  uid: 67dacf6e-0e04-4a93-92b5-5097ed9a4ed4
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: kong-1595829690
      app.kubernetes.io/name: kong
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: app
        app.kubernetes.io/instance: kong-1595829690
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kong
        app.kubernetes.io/version: "2"
        helm.sh/chart: kong-1.8.0
    spec:
      containers:
      - args:
        - /kong-ingress-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: CONTROLLER_ELECTION_ID
          value: kong-ingress-controller-leader-kong
        - name: CONTROLLER_INGRESS_CLASS
          value: kong
        - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
          value: "true"
        - name: CONTROLLER_KONG_URL
          value: https://localhost:8444
        - name: CONTROLLER_PUBLISH_SERVICE
          value: kong/kong-1595829690-kong-proxy
        image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: ingress-controller
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - env:
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_LISTEN
          value: 127.0.0.1:8444 http2 ssl
        - name: KONG_CLUSTER_LISTEN
          value: "off"
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_KIC
          value: "on"
        - name: KONG_LUA_PACKAGE_PATH
          value: /opt/?.lua;/opt/?/init.lua;;
        - name: KONG_NGINX_HTTP_INCLUDE
          value: /kong/servers.conf
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PLUGINS
          value: bundled
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PORTAL_API_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PORT_MAPS
          value: 80:8000, 443:8443
        - name: KONG_PREFIX
          value: /kong_prefix/
        - name: KONG_PROXY_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
        - name: KONG_STATUS_LISTEN
          value: 0.0.0.0:8100
        - name: KONG_STREAM_LISTEN
          value: "off"
        - name: KONG_NGINX_DAEMON
          value: "off"
        image: kong:2.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - /bin/sleep 15 && kong quit
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: metrics
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-tls
          protocol: TCP
        - containerPort: 9542
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: metrics
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /kong_prefix/
          name: kong-1595829690-kong-prefix-dir
        - mountPath: /tmp
          name: kong-1595829690-kong-tmp
        - mountPath: /kong
          name: custom-nginx-template-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kong-1595829690-kong
      serviceAccountName: kong-1595829690-kong
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: kong-1595829690-kong-prefix-dir
      - emptyDir: {}
        name: kong-1595829690-kong-tmp
      - configMap:
          defaultMode: 420
          name: kong-1595829690-kong-default-custom-server-blocks
        name: custom-nginx-template-volume
status:
  conditions:
  - lastTransitionTime: "2020-07-27T06:01:49Z"
    lastUpdateTime: "2020-07-27T06:01:49Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-07-27T06:11:50Z"
    lastUpdateTime: "2020-07-27T06:11:50Z"
    message: ReplicaSet "kong-1595829690-kong-6789f4b45f" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
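
(From the spec above, the ingress-controller container points at Kong's admin API via CONTROLLER_KONG_URL=https://localhost:8444, so one extra sanity check is whether that endpoint answers from inside the pod. A sketch, assuming curl is available in the istio-proxy container; /status is the admin API's own status route:)

kubectl exec -n kong kong-1595829690-kong-6789f4b45f-rjztq -c istio-proxy -- \
  curl -sk https://localhost:8444/status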

I would really appreciate your help.

Does it appear to be listening internally? E.g., the following is what it looks like on my test instance, with the initial exec command adjusted to match your environment:

$ kubectl exec -it -n kong kong-1595829690-kong-6789f4b45f-rjztq -- /bin/sh  
/ $ netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
...
tcp        0      0 :::10254                :::*                    LISTEN      1/kong-ingress-cont

$ netstat -nt
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
...
tcp        0      0 ::ffff:10.16.5.8:10254  ::ffff:10.138.0.7:40700 TIME_WAIT   

Those show a listen (first command) and previous successful healthcheck connections (second).

I'm not sure why I only see a v6 listen on mine, but that may vary in your environment. Internal v6 works fine for me on GKE, though I'm not sure whether the same is true for Azure, or how we set up the listen in our code offhand.

If that's listening on both v4 and v6, something odd is happening in your internal container networking; I'm not sure what would prevent connections from one container to another in the same Pod over localhost.

If you only see a v6 listen, that may indicate the issue. You'll probably want to check with Azure to see what their IPv6 support is like, but if that's the case and it causes breakage, we should be able to investigate why the controller doesn't attempt to create a v4 listen for the healthcheck.
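
One quick way to tell those cases apart is to hit the healthz endpoint over each address family explicitly from inside the pod; a sketch, assuming curl is present in the istio-proxy container:

kubectl exec -n kong kong-1595829690-kong-6789f4b45f-rjztq -c istio-proxy -- curl -sS http://127.0.0.1:10254/healthz
kubectl exec -n kong kong-1595829690-kong-6789f4b45f-rjztq -c istio-proxy -- curl -gsS 'http://[::1]:10254/healthz'

If only the second request answers, the listen really is v6-only; if both fail with connection refused, the controller simply isn't serving at all.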

Hi Travis

Sorry for the delay

I ran the installation again and went into the pod as you did.

santiago_ventura@Azure:~$ kubectl create namespace kong
namespace/kong created
santiago_ventura@Azure:~$ kubectl label namespace kong istio-injection=enabled
namespace/kong labeled
santiago_ventura@Azure:~$ kubectl apply -f https://bit.ly/k4k8s
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kong configured
customresourcedefinition.apiextensions.k8s.io/kongclusterplugins.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/tcpingresses.configuration.konghq.com configured
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole unchanged
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding unchanged
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created
santiago_ventura@Azure:~$ kubectl get pods -n kong
NAME                           READY   STATUS     RESTARTS   AGE
ingress-kong-d4485d549-qwt95   0/3     Init:0/1   0          5s
santiago_ventura@Azure:~$ kubectl get pods -n kong
NAME                           READY   STATUS    RESTARTS   AGE
ingress-kong-d4485d549-qwt95   2/3     Running   2          71s

As you can see, the pod is now named ingress-kong instead of kong-1595829690-kong.

santiago_ventura@Azure:~$ kubectl exec -it -n kong ingress-kong-d4485d549-qwt95 -- /bin/sh
Defaulting container name to proxy.
Use 'kubectl describe pod/ingress-kong-d4485d549-qwt95 -n kong' to see all of the containers in this pod.
/ $ netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:15000         0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15001           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      1/kong -c nginx.con
tcp        0      0 127.0.0.1:8444          0.0.0.0:*               LISTEN      1/kong -c nginx.con
tcp        0      0 0.0.0.0:15006           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      1/kong -c nginx.con
tcp        0      0 0.0.0.0:8100            0.0.0.0:*               LISTEN      1/kong -c nginx.con
tcp        0      0 0.0.0.0:15021           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15090           0.0.0.0:*               LISTEN      -
tcp        0      0 :::15020                :::*                    LISTEN      -
/ $ netstat -nt
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:41076         127.0.0.1:15020         ESTABLISHED
tcp        0      0 127.0.0.1:50712         127.0.0.1:15000         ESTABLISHED
tcp        0      0 127.0.0.1:8100          127.0.0.1:57916         TIME_WAIT
tcp        0      0 127.0.0.1:51074         127.0.0.1:15090         ESTABLISHED
tcp        0      0 127.0.0.1:8100          127.0.0.1:58150         TIME_WAIT
tcp        0      0 10.244.3.78:46802       10.0.150.237:15012      ESTABLISHED
tcp        0      0 127.0.0.1:50662         127.0.0.1:15000         ESTABLISHED
tcp        0      0 127.0.0.1:8100          127.0.0.1:57978         TIME_WAIT
tcp        0      0 127.0.0.1:8100          127.0.0.1:58030         TIME_WAIT
tcp        0      0 127.0.0.1:8100          127.0.0.1:58212         TIME_WAIT
tcp        0      0 127.0.0.1:15090         127.0.0.1:51074         ESTABLISHED
tcp        0      0 127.0.0.1:57958         127.0.0.1:8100          TIME_WAIT
tcp        0      0 127.0.0.1:8100          127.0.0.1:58120         TIME_WAIT
tcp        0      0 127.0.0.1:15000         127.0.0.1:50662         ESTABLISHED
tcp        0      0 127.0.0.1:8100          127.0.0.1:58054         TIME_WAIT
tcp        0      0 127.0.0.1:15000         127.0.0.1:50712         ESTABLISHED
tcp        0      0 127.0.0.1:8100          127.0.0.1:57830         TIME_WAIT
tcp        0      0 127.0.0.1:41176         127.0.0.1:15020         ESTABLISHED
tcp        0      0 127.0.0.1:8100          127.0.0.1:57888         TIME_WAIT
tcp        0      0 127.0.0.1:8100          127.0.0.1:57808         TIME_WAIT
tcp        0      0 10.244.3.78:46772       10.0.150.237:15012      ESTABLISHED
tcp        0      0 127.0.0.1:8100          127.0.0.1:54408         ESTABLISHED
tcp        0      0 127.0.0.1:54408         127.0.0.1:8100          ESTABLISHED
tcp        0      0 127.0.0.1:8100          127.0.0.1:58192         TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45532 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45490 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45766 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45552 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45786 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45724 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45628 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45404 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45462 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45382 TIME_WAIT
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.4.107:36494 ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:15020  ::ffff:127.0.0.1:41176  ESTABLISHED
tcp        0      0 ::ffff:10.244.3.78:15020 ::ffff:10.244.3.1:45604 TIME_WAIT
tcp        0      0 ::ffff:127.0.0.1:15020  ::ffff:127.0.0.1:41076  ESTABLISHED
/ $ exit
santiago_ventura@Azure:~$ kubectl get pods -n kong
NAME                           READY   STATUS             RESTARTS   AGE
ingress-kong-d4485d549-qwt95   2/3     CrashLoopBackOff   7          9m50s

Anyway, it keeps crashing:

santiago_ventura@Azure:~$ kubectl get pods -n kong
NAME                           READY   STATUS             RESTARTS   AGE
ingress-kong-d4485d549-qwt95   2/3     CrashLoopBackOff   11         21m

I can't tell whether it is a problem with Azure Kubernetes Service, as I have not been able to figure it out yet.

Hello guys

It is working now. I downgraded the Istio release to 1.6.7. Before, I had 1.7.0-alpha.0, which apparently has some issues.

kubectl get pod -n kong
NAME                           READY   STATUS    RESTARTS   AGE
ingress-kong-d4485d549-72q58   3/3     Running   0          17s
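
(For anyone following along, a rough sketch of the kind of sequence involved, assuming a clean re-install of the control plane with the istioctl 1.6.7 binary, the default profile, and the kong namespace keeping its istio-injection=enabled label:)

istioctl install --set profile=default
kubectl -n kong rollout restart deployment ingress-kong
kubectl -n kong get pods -w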

So thank you for your support.

I don't have port 10254 listening inside the pod; how do I make it work?

Defaulted container "proxy" out of: proxy, ingress-controller
/ $ netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      1/kong -c nginx.con
tcp        0      0 127.0.0.1:8444          0.0.0.0:*               LISTEN      1/kong -c nginx.con
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      1/kong -c nginx.con
tcp        0      0 0.0.0.0:8100            0.0.0.0:*               LISTEN      1/kong -c nginx.con
/ $ netstat -nt
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 10.0.7.233:8100         10.0.18.150:48952       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48932       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48978       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48930       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:49004       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48966       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48964       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48976       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48986       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48988       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:48950       TIME_WAIT
tcp        0      0 10.0.7.233:8100         10.0.18.150:49002       TIME_WAIT

Mine is also in the same condition:

NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
kong          ingress-kong-64d8cbf56b-cdcqc           1/2     CrashLoopBackOff   41         113m   10.0.7.233    ip-10-0-18-150   <none>           <none>

Here is the deployment I'm running, with its image versions:

ubuntu@ip-10-0-22-59:~$ kubectl get deployments --all-namespaces -o wide
NAMESPACE     NAME           READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS                 IMAGES                                            SELECTOR
kong          ingress-kong   0/1     1            0           120m   proxy,ingress-controller   kong:2.4,kong/kubernetes-ingress-controller:1.3   app=ingress-kong

@arkrishnan if the controller isn't running (which is what the CrashLoop probably implies), it won't listen on its readiness port. You should check its logs (kubectl logs ingress-kong-64d8cbf56b-cdcqc -c ingress-controller); they should indicate why it cannot start.
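
If the container keeps restarting, the previous attempt's logs and its recorded exit code are usually the most informative. For example, assuming the kong namespace from your output:

kubectl -n kong logs ingress-kong-64d8cbf56b-cdcqc -c ingress-controller --previous
kubectl -n kong describe pod ingress-kong-64d8cbf56b-cdcqc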

Unrelated to that, you’ll typically want to start a new thread if you have a troubleshooting question, even if you find an older thread with symptoms similar to yours. A lot of the basic symptoms exhibited by failing containers are the same across all problems, even if the root cause is completely different.

Creating a new thread helps us respond more effectively, since we can see that it has no replies yet.