Installing kong-ingress-controller to manage ingress on kubernetes

I am installing the kong ingress controller on my AKS cluster, but I don’t want to run the postgres StatefulSet and its Service inside the cluster. Instead, I have a postgres database in my Azure infrastructure, and I want to connect to it from my kong-ingress-controller deployment by creating the postgres credentials as secrets in my AKS cluster and exposing them as environment variables.

I’ve created the secret:

⟩ kubectl create secret generic az-pg-db-user-pass --from-literal=username='az-pg-username' --from-literal=password='az-pg-password' --namespace kong 
secret/az-pg-db-user-pass created

And in my kongwithingres.yaml file I have the deployment manifests, which I prefer to share via this gist link so as not to fill the question body with a lot of yaml.

This gist is based on this AKS all-in-one deployment, but with postgres removed as a StatefulSet and Service for the reasons above; my objective is to set up the connection to my own Azure managed postgres service.

I’ve referenced the az-pg-db-user-pass generic secret in the kong-ingress-controller deployment, the kong deployment and the kong-migrations job present in my gist, in order to create environment variables such as the following:

KONG_PG_USERNAME
KONG_PG_PASSWORD

These environment variables have been created and referenced as secrets in the kong-ingress-controller deployment, the kong deployment and the kong-migrations job, which need to connect to the postgres database.
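
For reference, this is roughly how the secret keys are wired into those containers (an excerpt of the env section only, not the full manifest; the surrounding container spec is the one from the gist):

env:
- name: KONG_PG_HOST
  value: zcrm365-postgresql1.postgres.database.azure.com
- name: KONG_PG_USERNAME
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: username
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: password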

When I execute the kubectl apply -f kongwithingres.yaml command I get the following output:

The kong-ingress-controller deployment, kong deployment and kong-migrations job were created successfully.

⟩ kubectl apply -f kongwithingres.yaml 
namespace/kong unchanged
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com unchanged
serviceaccount/kong-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/kong-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding unchanged
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created

But their respective pods are in CrashLoopBackOff status:

NAME                                          READY   STATUS                  RESTARTS   AGE
pod/kong-d8b88df99-j6hvl                      0/1     Init:CrashLoopBackOff   5          4m24s
pod/kong-ingress-controller-984fc9666-cd2b5   0/2     Init:CrashLoopBackOff   5          4m24s
pod/kong-migrations-t6n7p                     0/1     CrashLoopBackOff        5          4m24s

I checked the logs of each pod and found this:

The pod/kong-d8b88df99-j6hvl:

⟩ kubectl logs pod/kong-d8b88df99-j6hvl -p -n kong 
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-j6hvl" not found

And in its describe output, the pod is getting the environment variables and the image:

⟩ kubectl describe pod/kong-d8b88df99-j6hvl -n kong
Name:               kong-d8b88df99-j6hvl
Namespace:          kong

Status:             Pending
IP:                 10.244.1.18
Controlled By:      ReplicaSet/kong-d8b88df99
Init Containers:
  wait-for-migrations:
    Container ID:  docker://7007a89ada215daf853ec103d79dca60ccc5fb3a14c51ac6c5c56655da6da62f
    Image:         kong:1.0.0
    Image ID:      docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      kong migrations list
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 26 Feb 2019 16:25:01 +0100
      Finished:     Tue, 26 Feb 2019 16:25:01 +0100
    Ready:          False
    Restart Count:  6
    Environment:
      KONG_ADMIN_LISTEN:      off
      KONG_PROXY_LISTEN:      off
      KONG_PROXY_ACCESS_LOG:  /dev/stdout
      KONG_ADMIN_ACCESS_LOG:  /dev/stdout
      KONG_PROXY_ERROR_LOG:   /dev/stderr
      KONG_ADMIN_ERROR_LOG:   /dev/stderr
      KONG_PG_HOST:           zcrm365-postgresql1.postgres.database.azure.com
      KONG_PG_USERNAME:       <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:       <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Containers:
  kong-proxy:
    Container ID:   
    Image:          kong:1.0.0
    Image ID:       
    Ports:          8000/TCP, 8443/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      KONG_PG_USERNAME:              <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:              <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_HOST:                  zcrm365-postgresql1.postgres.database.azure.com
      KONG_PROXY_ACCESS_LOG:         /dev/stdout
      KONG_PROXY_ERROR_LOG:          /dev/stderr
      KONG_ADMIN_LISTEN:             off
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-gnkjq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gnkjq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                             Message
  ----     ------     ----                    ----                             -------
  Normal   Scheduled  8m44s                   default-scheduler                Successfully assigned kong/kong-d8b88df99-j6hvl to aks-default-75800594-1
  Normal   Pulled     7m9s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Container image "kong:1.0.0" already present on machine
  Normal   Created    7m8s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Created container
  Normal   Started    7m7s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Started container
  Warning  BackOff    3m34s (x26 over 8m38s)  kubelet, aks-default-75800594-1  Back-off restarting failed container

The pod/kong-ingress-controller-984fc9666-cd2b5:

⟩ kubectl logs pod/kong-ingress-controller-984fc9666-cd2b5 -p -n kong 
Error from server (BadRequest): a container name must be specified for pod kong-ingress-controller-984fc9666-cd2b5, choose one of: [admin-api ingress-controller] or one of the init containers: [wait-for-migrations]

And its respective description:

⟩ kubectl describe pod/kong-ingress-controller-984fc9666-cd2b5 -n kong
Name:               kong-ingress-controller-984fc9666-cd2b5
Namespace:          kong

Status:             Pending
IP:                 10.244.2.18
Controlled By:      ReplicaSet/kong-ingress-controller-984fc9666
Init Containers:
  wait-for-migrations:
    Container ID:  docker://8eb035f755322b3ac72792d922974811933ba9a71afb1f4549cfe7e0a6519619
    Image:         kong:1.0.0
    Image ID:      docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      kong migrations list
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 26 Feb 2019 16:29:56 +0100
      Finished:     Tue, 26 Feb 2019 16:29:56 +0100
    Ready:          False
    Restart Count:  7
    Environment:
      KONG_ADMIN_LISTEN:      off
      KONG_PROXY_LISTEN:      off
      KONG_PROXY_ACCESS_LOG:  /dev/stdout
      KONG_ADMIN_ACCESS_LOG:  /dev/stdout
      KONG_PROXY_ERROR_LOG:   /dev/stderr
      KONG_ADMIN_ERROR_LOG:   /dev/stderr
      KONG_PG_HOST:           zcrm365-postgresql1.postgres.database.azure.com
      KONG_PG_USERNAME:       <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:       <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Containers:
  admin-api:
    Container ID:   
    Image:          kong:1.0.0
    Image ID:       
    Port:           8001/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:8001/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8001/status delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KONG_PG_USERNAME:              <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:              <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_HOST:                  zcrm365-postgresql1.postgres.database.azure.com
      KONG_ADMIN_ACCESS_LOG:         /dev/stdout
      KONG_ADMIN_ERROR_LOG:          /dev/stderr
      KONG_ADMIN_LISTEN:             0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_PROXY_LISTEN:             off
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
  ingress-controller:
    Container ID:  
    Image:         kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.3.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      /kong-ingress-controller
      --kong-url=https://localhost:8444
      --admin-tls-skip-verify
      --default-backend-service=kong/kong-proxy
      --publish-service=kong/kong-proxy
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:10254/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:                      kong-ingress-controller-984fc9666-cd2b5 (v1:metadata.name)
      POD_NAMESPACE:                 kong (v1:metadata.namespace)
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kong-serviceaccount-token-rc4sp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kong-serviceaccount-token-rc4sp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                             Message
  ----     ------     ----                  ----                             -------
  Normal   Scheduled  12m                   default-scheduler                Successfully assigned kong/kong-ingress-controller-984fc9666-cd2b5 to aks-default-75800594-2
  Normal   Pulled     10m (x5 over 12m)     kubelet, aks-default-75800594-2  Container image "kong:1.0.0" already present on machine
  Normal   Created    10m (x5 over 12m)     kubelet, aks-default-75800594-2  Created container
  Normal   Started    10m (x5 over 12m)     kubelet, aks-default-75800594-2  Started container
  Warning  BackOff    2m14s (x49 over 12m)  kubelet, aks-default-75800594-2  Back-off restarting failed container

I don’t know the reason for the CrashLoopBackOff status, or why the containers stay in Waiting: PodInitializing.

How can I debug this behavior?
Is it possible that Kong cannot talk to the Postgres database?

My AKS cluster and my postgres database are both on Azure, and they can communicate with each other as services.

UPDATE

These are the logs of the containers in my pods:

⟩ kubectl logs pod/kong-ingress-controller-984fc9666-w4vvn -p -n kong -c ingress-controller
Error from server (BadRequest): previous terminated container "ingress-controller" in pod "kong-ingress-controller-984fc9666-w4vvn" not found

⟩ kubectl logs pod/kong-d8b88df99-qsq4j -p -n kong -c kong-proxy
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-qsq4j" not found

You give a lot of information, but some of the more critical bits seem to be missing; you are just posting the error there:

Error from server (BadRequest): a container name must be specified for pod kong-ingress-controller-984fc9666-cd2b5, choose one of: [admin-api ingress-controller] or one of the init containers: [wait-for-migrations]

My bet is that you are probably not able to connect to the database successfully from your cluster.

In Azure postgres, you must check whether SSL connections are enforced. If they are, ensure that you are setting the proper env variables on the Kong deployment (you’re looking for something like PG_SSL: “require”).
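
As a rough sketch of that suggestion: Kong exposes its kong.conf settings as KONG_-prefixed environment variables, so the pg_ssl setting would be set via KONG_PG_SSL on the kong, admin-api and kong-migrations containers. The exact name and accepted values should be confirmed against the Kong configuration reference for your version:

env:
# Assumption: KONG_PG_SSL maps to Kong's pg_ssl setting (boolean on/off);
# verify against the Kong configuration reference for your Kong version.
- name: KONG_PG_SSL
  value: "on"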

Also, in Azure postgres connection security, you need to whitelist the cluster’s outbound IP, or else it will just reject your connection at the firewall level.
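
For example, with the Azure CLI something along these lines adds a firewall rule. The resource group, rule name and IP below are placeholders; the server name is the one from this thread, and the IP should be your cluster’s actual egress IP:

⟩ az postgres server firewall-rule create \
    --resource-group my-resource-group \
    --server-name zcrm365-postgresql1 \
    --name allow-aks-egress \
    --start-ip-address 51.105.0.10 \
    --end-ip-address 51.105.0.10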

Check the postgres server log in Azure to see if it actually sets up a connection successfully.

If you can exec into the Kong proxy, you should try the kong health command, as this also shows you the status of the database connection.
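
For instance, once the proxy container is actually running, something like this should work (the pod name is the one from this thread; substitute your own):

⟩ kubectl exec -it kong-d8b88df99-j6hvl -n kong -c kong-proxy -- kong health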

Yes, you are right about the SSL connection in Azure PostgreSQL, and I also had to whitelist the outbound IP.

In addition, I’ve identified other reasons why I could not connect to an external postgresql database.

Keep in mind that I am using this gist for the kong installation process.

My kong-ingress-controller deployment pods were in CrashLoopBackOff, and sometimes in Waiting: PodInitializing, because I hadn’t kept some things in mind, such as the following:

  • The main reason is that kong-ingress-controller and kong have an init container called wait-for-migrations, which waits for the kong-migrations job to complete before they start. Here I could see that it is necessary to perform my kong migrations first.

  • But my kong-migrations job was not working, because I didn’t have the KONG_DATABASE environment variable set to tell Kong which datastore to connect to.

  • Another reason my deployment was not working is that Kong, when connecting to postgres, may expect the user environment variable defined in the container to be called KONG_PG_USER. I had called it KONG_PG_USERNAME, and that was another reason my script failed. (I am not completely sure about this.) See the env sketch after this list.
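
A rough sketch of how the env section of the kong, kong-ingress-controller and kong-migrations containers ends up after those fixes (assuming, as noted above, that Kong reads KONG_PG_USER rather than KONG_PG_USERNAME):

env:
- name: KONG_DATABASE           # tell Kong which datastore to use
  value: postgres
- name: KONG_PG_HOST
  value: zcrm365-postgresql1.postgres.database.azure.com
- name: KONG_PG_USER            # assumption: KONG_PG_USER instead of KONG_PG_USERNAME
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: username
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: password

With these changes applied, the whole stack comes up: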

⟩ kubectl create -f kongwithingres.yaml  
namespace/kong created
secret/az-pg-db-user-pass created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
role.rbac.authorization.k8s.io/kong-ingress-role created
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created

By the way, to start with kong I recommend installing konga, which is a front-end dashboard tool to manage kong and to check the things that we can do via yaml files.
I suppose that you already know konga.

We have this konga.yaml manifest to install it as a deployment in our kubernetes clusters:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: konga
  namespace: kong
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: konga
    spec:
      containers:
      - env:
        - name: NODE_TLS_REJECT_UNAUTHORIZED
          value: "0"
        image: pantsel/konga:latest
        name: konga
        ports:
        - containerPort: 1337  

And we can reach the service locally on our machines via the kubectl port-forward command:

⟩ kubectl port-forward pod/konga-85b66cffff-mxq85 1337:1337 -n kong
Forwarding from 127.0.0.1:1337 -> 1337
Forwarding from [::1]:1337 -> 1337

I’ve posted this question on stackoverflow, where some people helped me identify the connection problem with postgresql: https://stackoverflow.com/questions/54889200/installing-kong-ingress-controller-to-manage-ingress-on-kubernetes/54980573#54980573