gRPC example not working behind ELB on EKS

Kong Version: 2.0.3

K8s Cluster: AWS EKS v1.15

We are hosting a gRPC service on our k8s clusters. With a few proxy tweaks, I was able to expose it successfully through the NGINX Ingress Controller in one of our environments.

When we learned that Kong now supports proxying gRPC, we decided to use Kong as the API gateway in our ad hoc environment. However, we couldn't get it to work. We suspected that the CloudFront distribution in front of it might be causing the issue, so we removed it from the equation and pointed DNS directly at the load balancer, but still had no luck.

There is an option to use stream_listen, but there is no point in using it now that Kong supports gRPC out of the box (see the sketch below for what I am trying to avoid).
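
For reference, the stream_listen alternative I am ruling out would look roughly like this (a sketch only; port 9000 is an arbitrary choice of mine):

# Sketch: a raw TCP/TLS stream listener, set via Kong's environment-variable
# override of the stream_listen directive, instead of native gRPC proxying.
env:
- name: KONG_STREAM_LISTEN
  value: "0.0.0.0:9000"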

I tried following the YouTube demo for the grpcbin service, and the documentation as well, but that also fails. I'm not sure what I am missing in the configuration to make gRPC work within our cluster.

The error we see in the Kong proxy container is below:

unknown request scheme: HTTP while logging request, client: 10.120.15.77, server: kong, request: "PRI * HTTP/2.0"
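
As far as I understand, "PRI * HTTP/2.0" is the HTTP/2 connection preface, so this error suggests the request is landing on a listener that does not have http2 enabled. A quick way to confirm which listeners are configured (the pod name is a placeholder):

kubectl -n kong exec -it <kong-proxy-pod> -- sh -c 'env | grep KONG_PROXY_LISTEN'
# expected: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2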

The error we get when executing grpcurl is below:

./grpcurl -v -d '{"greeting":"hello hbagdi"}' myDns:443  hello.HelloService.SayHello
Failed to dial target host "myDns:443": context deadline exceeded
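
If it helps isolate whether TLS is the issue, the plaintext and insecure variants would be (a sketch; myDns stands for our load balancer hostname):

./grpcurl -plaintext -d '{"greeting":"hello hbagdi"}' myDns:80 hello.HelloService.SayHello
./grpcurl -insecure -d '{"greeting":"hello hbagdi"}' myDns:443 hello.HelloService.SayHello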

Below is the configuration used for grpcbin; once it works, I expect my actual service to work as well. Do you need any further information to guide me on this?

grpcbin service yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    konghq.com/protocol: grpc
  labels:
    app: grpcbin
  name: grpcbin
  namespace: default    
spec:
  ports:
  - name: grpc
    port: 9001
    protocol: TCP
    targetPort: 9001
  selector:
    app: grpcbin
  sessionAffinity: None
  type: ClusterIP

grpcbin ingress yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    konghq.com/protocols: grpc
    kubernetes.io/ingress.class: kong-public
  generation: 1
  labels:
    app.kubernetes.io/instance: monitoring
    app.kubernetes.io/name: grpcbin
  name: grpcbin
  namespace: default
spec:
  rules:
  - host: myDns
    http:
      paths:
      - backend:
          serviceName: grpcbin
          servicePort: 9001
        path: /
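
To rule out typos in the annotations the controller reads, a quick sanity check (a sketch):

kubectl get svc grpcbin -o jsonpath='{.metadata.annotations}'
kubectl get ingress grpcbin -o jsonpath='{.metadata.annotations}'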

I think the protocols are not configured correctly.
Please follow this guide byte-for-byte:

Thanks for responding, Harry.

I tried the setup byte by byte on a fresh EKS cluster, and the grpcbin example worked there. The only differences I could find from my previous clusters were that this setup used an NLB along with ingress controller version 0.9.1 and had a validation webhook present, whereas I had created my Kong setup using the official Helm chart, which doesn't include the validation webhook. (The Helm-generated values are given in the next reply due to the 32000-character body limit.)

When I integrated the certificate from AWS Certificate Manager into the Kong proxy, the example stopped working once I omitted "-insecure" from grpcurl (see the ALPN check below).
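
One check that may help here is whether the TLS endpoint actually negotiates HTTP/2 via ALPN, since gRPC requires h2 (a sketch; myDns is the proxy hostname, and as far as I can tell classic ELB SSL listeners do not offer ALPN at all):

openssl s_client -connect myDns:443 -servername myDns -alpn h2 </dev/null 2>/dev/null | grep -i alpn
# gRPC needs "ALPN protocol: h2"; "No ALPN negotiated" would explain the failure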

After that, I also tried to host our gRPC service on this cluster and couldn't get any further with it; the requests don't seem to be reaching kong-proxy either.

I have included as much information as I could in this reply. Based on the configurations below, please let me know whether there is something else we can check or fix to host this service behind Kong.

ingress.yaml for the service

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    konghq.com/protocols: grpc,grpcs
    kubernetes.io/ingress.class: kong
  name: cubicsvrgrpc
  namespace: cubicsvr
spec:
  rules:
  - host: cubicsvrgrpc-test.myDns.com
    http:
      paths:
      - backend:
          serviceName: cubicsvrgrpc
          servicePort: 443
        path: /
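
After applying this, it may also be worth checking what the controller actually synced into Kong (a sketch; assumes a port-forward to the admin port):

kubectl -n kong port-forward deploy/kong 8001:8001 &
curl -s localhost:8001/routes | jq '.data[] | {paths, protocols}'
curl -s localhost:8001/services | jq '.data[] | {name, protocol}'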

service.yaml for the service

apiVersion: v1
kind: Service
metadata:
  annotations:
    konghq.com/protocol: grpc
  labels:
    app.kubernetes.io/instance: cubicsvr
    app.kubernetes.io/name: cubicsvr
  name: cubicsvrgrpc
  namespace: cubicsvr
spec:
  ports:
  - name: http
    port: 443
    protocol: TCP
    targetPort: 2727
  selector:
    app.kubernetes.io/instance: cubicsvr
    app.kubernetes.io/name: cubicsvr
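
To verify the backend itself speaks gRPC on that port, bypassing Kong entirely, I could port-forward straight to the Service (a sketch; "list" assumes the server has gRPC reflection enabled, otherwise grpcurl needs a -proto file):

kubectl -n cubicsvr port-forward svc/cubicsvrgrpc 2727:443 &
./grpcurl -plaintext localhost:2727 list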

deployment.yaml for the service

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  labels:
    app.kubernetes.io/instance: cubicsvr
    app.kubernetes.io/name: cubicsvr
    application: cobalt-server
  name: cubicsvr
  namespace: cubicsvr
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: cubicsvr
      app.kubernetes.io/name: cubicsvr
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/instance: cubicsvr
        app.kubernetes.io/name: cubicsvr
    spec:
      containers:
      - image: quay.io/myorganisation/cubicsvr:v9
        name: cubicsvr
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 2727
          name: grpc
          protocol: TCP
        resources:
          limits:
            cpu: "4"
            memory: 8G
          requests:
            cpu: "2"
            memory: 2G
      imagePullSecrets:
      - name: myImagePullSecret

Helm generated YAML for Kong

---
# Source: kong/templates/controller-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kong
  namespace: kong
---
# Source: kong/templates/config-custom-server-blocks.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-default-custom-server-blocks
  namespace: kong
data:
  servers.conf: |
    # Prometheus metrics and health-checking server
    server {
        server_name kong_prometheus_exporter;
        listen 0.0.0.0:9542; # can be any other port as well
        access_log off;
        location /status {
            default_type text/plain;
            return 200;
        }
        location /metrics {
            default_type text/plain;
            content_by_lua_block {
                 local prometheus = require "kong.plugins.prometheus.exporter"
                 prometheus:collect()
            }
        }
        location /nginx_status {
            internal;
            access_log off;
            stub_status;
        }
    }    
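
# (Side note: the /status endpoint in this server block is what the readiness
# and liveness probes below hit; it can also be checked by hand, e.g.:
#   kubectl -n kong port-forward deploy/kong 9542:9542 &
#   curl -i localhost:9542/status
# )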
---
# Source: kong/templates/controller-rbac-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kong
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
      - "networking.internal.knative.dev"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
      - "networking.internal.knative.dev"
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - "configuration.konghq.com"
    resources:
      - tcpingresses/status
    verbs:
      - update
  - apiGroups:
      - "configuration.konghq.com"
    resources:
      - kongplugins
      - kongclusterplugins
      - kongcredentials
      - kongconsumers
      - kongingresses
      - tcpingresses
    verbs:
      - get
      - list
      - watch
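
# (If RBAC were a suspect, impersonating the service account is a quick
# check, e.g.:
#   kubectl auth can-i list ingresses --as=system:serviceaccount:kong:kong
# )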
---
# Source: kong/templates/controller-rbac-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kong
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong
subjects:
  - kind: ServiceAccount
    name: kong
    namespace: kong
---
# Source: kong/templates/controller-rbac-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kong
  namespace: kong
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "kong-ingress-controller-leader-kong-public-kong-public"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
# Source: kong/templates/controller-rbac-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kong
  namespace: kong
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kong
subjects:
  - kind: ServiceAccount
    name: kong
    namespace: kong
---
# Source: kong/templates/service-kong-admin.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
  namespace: kong
  annotations:
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-1.4.1
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "2"
spec:
  type: ClusterIP
  externalIPs:
  ports:
  - name: kong-admin
    port: 8001
    targetPort: 8001
    protocol: TCP
  selector:
    app.kubernetes.io/name: kong
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: "kong"
---
# Source: kong/templates/service-kong-proxy.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
  annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.global.certARN }}
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "kong-proxy-tls"
spec:
  type: LoadBalancer
  externalIPs:
  ports:
  - name: kong-proxy
    port: 80
    targetPort: 8000
    protocol: TCP
  - name: kong-proxy-tls
    port: 443
    targetPort: 8000
    protocol: TCP
  selector:
    app.kubernetes.io/name: kong
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: "kong"
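
# Note: the TLS port above forwards to targetPort 8000, the plain listener,
# because the ELB terminates TLS. If end-to-end HTTP/2 turns out to be the
# problem, one variant worth trying (a sketch, not something I have verified)
# is TLS passthrough to Kong's http2-enabled 8443 listener behind an NLB,
# similar to the fresh cluster where grpcbin worked:
#
#   metadata:
#     annotations:
#       service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
#   spec:
#     ports:
#     - name: kong-proxy-tls
#       port: 443
#       targetPort: 8443
#       protocol: TCP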
---
# Source: kong/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong
  namespace: kong
  annotations:
    kuma.io/gateway: enabled
    traffic.sidecar.istio.io/includeInboundPorts: ""
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kong
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: "kong"

  template:
    metadata:
      annotations:
    spec:
      serviceAccountName: kong
      initContainers:
      - name: wait-for-db
        image: "kong:2.0.3"
        imagePullPolicy: IfNotPresent
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PG_DATABASE
          value: {{ .Values.kong.pg_database }}
        - name: KONG_PG_HOST
          value: {{ .Values.kong.pg_host }}
        - name: KONG_PG_PASSWORD
          value: {{.Values.kong.pg_password }}
        - name: KONG_PG_PORT
          value: "5432"
        - name: KONG_PG_USER
          value: {{.Values.kong.pg_user }}
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
        command: [ "/bin/sh", "-c", "until kong start; do echo 'waiting for db'; sleep 1; done; kong stop" ]
        volumeMounts:
          - name: kong-prefix-dir
            mountPath: /kong_prefix/
          - name: kong-tmp
            mountPath: /tmp
          - name: custom-nginx-template-volume
            mountPath: /kong
      
      containers:
      - name: ingress-controller
        args:
        - /kong-ingress-controller
        - --publish-service=kong/kong-proxy
        - --ingress-class=kong-public
        - --election-id=kong-ingress-controller-leader-kong-public
        - --kong-url=http://localhost:8001
        
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace  
        image: "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.8.0"
        imagePullPolicy: IfNotPresent
      - name: "proxy"
        image: "kong:2.0.3"
        imagePullPolicy: IfNotPresent
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PG_DATABASE
          value: {{ .Values.kong.pg_database }}
        - name: KONG_PG_HOST
          value: {{ .Values.kong.pg_host }}
        - name: KONG_PG_PASSWORD
          value: {{.Values.kong.pg_password }}
        - name: KONG_PG_PORT
          value: "5432"
        - name: KONG_PG_USER
          value: {{.Values.kong.pg_user }}
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
        - name: KONG_NGINX_DAEMON
          value: "off"
        lifecycle:
          preStop:
            exec:
              command: [ "/bin/sh", "-c", "kong quit" ]
        ports:
        
        - name: admin
          containerPort: 8001
          protocol: TCP
        - name: proxy
          containerPort: 8000
          protocol: TCP
        - name: proxy-tls
          containerPort: 8443
          protocol: TCP
        - name: metrics
          containerPort: 9542
          protocol: TCP
        volumeMounts:
          - name: kong-prefix-dir
            mountPath: /kong_prefix/
          - name: kong-tmp
            mountPath: /tmp
          - name: custom-nginx-template-volume
            mountPath: /kong
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: metrics
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: metrics
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          {}
      securityContext:
        runAsUser: 1000
      tolerations:
        - operator: Exists
      volumes:
        - name: kong-prefix-dir
          emptyDir: {}
        - name: kong-tmp
          emptyDir: {}
        - name: custom-nginx-template-volume
          configMap:
            name: kong-default-custom-server-blocks
---
# Source: kong/templates/migrations.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-init-migrations
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-1.4.1
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "2"
    app.kubernetes.io/component: init-migrations
spec:
  template:
    metadata:
      name: kong-init-migrations
      labels:
        app.kubernetes.io/name: kong
        helm.sh/chart: kong-1.4.1
        app.kubernetes.io/instance: "kong"
        app.kubernetes.io/managed-by: "Helm"
        app.kubernetes.io/version: "2"
        app.kubernetes.io/component: init-migrations
    spec:
      initContainers:
      - name: wait-for-postgres
        image: "busybox:latest"
        imagePullPolicy: IfNotPresent
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PG_DATABASE
          value: {{ .Values.kong.pg_database }}
        - name: KONG_PG_HOST
          value: {{ .Values.kong.pg_host }}
        - name: KONG_PG_PASSWORD
          value: {{.Values.kong.pg_password }}
        - name: KONG_PG_PORT
          value: "5432"
        - name: KONG_PG_USER
          value: {{.Values.kong.pg_user }}
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
        - name: KONG_NGINX_DAEMON
          value: "off"
        command: [ "/bin/sh", "-c", "set -u; until nc -zv $KONG_PG_HOST $KONG_PG_PORT -w1; do echo \"waiting for db - trying ${KONG_PG_HOST}:${KONG_PG_PORT}\"; sleep 1; done" ]
      tolerations:
        - operator: Exists
      containers:
      - name: kong-migrations
        image: "kong:2.0.3"
        imagePullPolicy: IfNotPresent
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PG_DATABASE
          value: {{ .Values.kong.pg_database }}
        - name: KONG_PG_HOST
          value: {{ .Values.kong.pg_host }}
        - name: KONG_PG_PASSWORD
          value: {{.Values.kong.pg_password }}
        - name: KONG_PG_PORT
          value: "5432"
        - name: KONG_PG_USER
          value: {{.Values.kong.pg_user }}
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
        - name: KONG_NGINX_DAEMON
          value: "off"
        command: [ "/bin/sh", "-c", "kong migrations bootstrap" ]
        volumeMounts:
        - name: kong-prefix-dir
          mountPath: /kong_prefix/
        - name: kong-tmp
          mountPath: /tmp
        - name: custom-nginx-template-volume
          mountPath: /kong
      securityContext:
        runAsUser: 1000
      restartPolicy: OnFailure
      volumes:
      - name: kong-prefix-dir
        emptyDir: {}
      - name: kong-tmp
        emptyDir: {}
      - name: custom-nginx-template-volume
        configMap:
          name: kong-default-custom-server-blocks
---
# Source: kong/templates/migrations-post-upgrade.yaml
# Why is this Job duplicated and not using only helm hooks?
# See: https://github.com/helm/charts/pull/7362
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-post-upgrade-migrations
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-1.4.1
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "2"
    app.kubernetes.io/component: post-upgrade-migrations
  annotations:
    helm.sh/hook: "post-upgrade"
    helm.sh/hook-delete-policy: "before-hook-creation"
spec:
  template:
    metadata:
      name: kong-post-upgrade-migrations
      labels:
        app.kubernetes.io/name: kong
        helm.sh/chart: kong-1.4.1
        app.kubernetes.io/instance: "kong"
        app.kubernetes.io/managed-by: "Helm"
        app.kubernetes.io/version: "2"
        app.kubernetes.io/component: post-upgrade-migrations
    spec:
      initContainers:
      - name: wait-for-postgres
        image: "busybox:latest"
        imagePullPolicy: IfNotPresent
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PG_DATABASE
          value: {{ .Values.kong.pg_database }}
        - name: KONG_PG_HOST
          value: {{ .Values.kong.pg_host }}
        - name: KONG_PG_PASSWORD
          value: {{.Values.kong.pg_password }}
        - name: KONG_PG_PORT
          value: "5432"
        - name: KONG_PG_USER
          value: {{.Values.kong.pg_user }}
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
        - name: KONG_NGINX_DAEMON
          value: "off"
        command: [ "/bin/sh", "-c", "set -u; until nc -zv $KONG_PG_HOST $KONG_PG_PORT -w1; do echo \"waiting for db - trying ${KONG_PG_HOST}:${KONG_PG_PORT}\"; sleep 1; done" ]
      tolerations:
        - operator: Exists
      containers:
      - name: kong-post-upgrade-migrations
        image: "kong:2.0.3"
        imagePullPolicy: IfNotPresent
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PG_DATABASE
          value: {{ .Values.kong.pg_database }}
        - name: KONG_PG_HOST
          value: {{ .Values.kong.pg_host }}
        - name: KONG_PG_PASSWORD
          value: {{.Values.kong.pg_password }}
        - name: KONG_PG_PORT
          value: "5432"
        - name: KONG_PG_USER
          value: {{.Values.kong.pg_user }}
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
        - name: KONG_NGINX_DAEMON
          value: "off"
        command: [ "/bin/sh", "-c", "kong migrations finish" ]
        volumeMounts:
        - name: kong-prefix-dir
          mountPath: /kong_prefix/
        - name: kong-tmp
          mountPath: /tmp
        - name: custom-nginx-template-volume
          mountPath: /kong
      securityContext:
        runAsUser: 1000
      restartPolicy: OnFailure
      volumes:
      - name: kong-prefix-dir
        emptyDir: {}
      - name: kong-tmp
        emptyDir: {}
      - name: custom-nginx-template-volume
        configMap:
          name: kong-default-custom-server-blocks
---
# Source: kong/templates/migrations-pre-upgrade.yaml
# Why is this Job duplicated and not using only helm hooks?
# See: https://github.com/helm/charts/pull/7362
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-pre-upgrade-migrations
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-1.4.1
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "2"
    app.kubernetes.io/component: pre-upgrade-migrations
  annotations:
    helm.sh/hook: "pre-upgrade"
    helm.sh/hook-delete-policy: "before-hook-creation"
spec:
  template:
    metadata:
      name: kong-pre-upgrade-migrations
      labels:
        app.kubernetes.io/name: kong
        helm.sh/chart: kong-1.4.1
        app.kubernetes.io/instance: "kong"
        app.kubernetes.io/managed-by: "Helm"
        app.kubernetes.io/version: "2"
        app.kubernetes.io/component: pre-upgrade-migrations
    spec:
      initContainers:
      - name: wait-for-postgres
        image: "busybox:latest"
        imagePullPolicy: IfNotPresent
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PG_DATABASE
          value: {{ .Values.kong.pg_database }}
        - name: KONG_PG_HOST
          value: {{ .Values.kong.pg_host }}
        - name: KONG_PG_PASSWORD
          value: {{.Values.kong.pg_password }}
        - name: KONG_PG_PORT
          value: "5432"
        - name: KONG_PG_USER
          value: {{.Values.kong.pg_user }}
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
        - name: KONG_NGINX_DAEMON
          value: "off"
        command: [ "/bin/sh", "-c", "set -u; until nc -zv $KONG_PG_HOST $KONG_PG_PORT -w1; do echo \"waiting for db - trying ${KONG_PG_HOST}:${KONG_PG_PORT}\"; sleep 1; done" ]
      tolerations:
        - operator: Exists
      containers:
      - name: kong-upgrade-migrations
        image: "kong:2.0.3"
        imagePullPolicy: IfNotPresent
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"
        - name: KONG_DATABASE
          value: "postgres"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PG_DATABASE
          value: {{ .Values.kong.pg_database }}
        - name: KONG_PG_HOST
          value: {{ .Values.kong.pg_host }}
        - name: KONG_PG_PASSWORD
          value: {{.Values.kong.pg_password }}
        - name: KONG_PG_PORT
          value: "5432"
        - name: KONG_PG_USER
          value: {{.Values.kong.pg_user }}
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
        - name: KONG_NGINX_DAEMON
          value: "off"
        command: [ "/bin/sh", "-c", "kong migrations up" ]
        volumeMounts:
        - name: kong-prefix-dir
          mountPath: /kong_prefix/
        - name: kong-tmp
          mountPath: /tmp
        - name: custom-nginx-template-volume
          mountPath: /kong
      securityContext:
        runAsUser: 1000
      restartPolicy: OnFailure
      volumes:
      - name: kong-prefix-dir
        emptyDir: {}
      - name: kong-tmp
        emptyDir: {}
      - name: custom-nginx-template-volume
        configMap:
          name: kong-default-custom-server-blocks

Why are you skipping the -insecure flag?

I wanted to omit "-insecure" because I wanted to use the certificate provided by AWS, but that's not an issue anymore, because I understand the grpcbin service is already using an x509 certificate.
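
For reference, once a trusted certificate is in place the call should work without "-insecure", and for a self-signed certificate the CA can be passed explicitly instead (a sketch; ./ca.pem is a placeholder path):

./grpcurl -cacert ./ca.pem -d '{"greeting":"hello hbagdi"}' myDns:443 hello.HelloService.SayHello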

Right now my biggest problem is hosting my gRPC service, which is exposed on port 2727 on the pod. For this I have provided my Kong manifest along with the deployment, service, and ingress manifests, and one more check I plan to run is below.
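
To confirm the route matches at all, I can force the :authority header so the request matches the ingress host rule even when dialing the load balancer address directly (a sketch; <lb-dns-name> is a placeholder):

./grpcurl -insecure -authority cubicsvrgrpc-test.myDns.com <lb-dns-name>:443 list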

Given the urgency, I have again fallen back to the NGINX Ingress Controller alongside the existing Kong gateway until I find a proper solution.