Kong ingress issue with GRPC service

Hi,

I have a gRPC service that I am trying to expose through a Kong ingress. When external services try to connect to it, I can see the call reaching the ingress, but it gets dropped there with the following error:

2020/06/08 12:51:21 [info] 22#0: *3645413 client sent invalid request while reading client request line, client: 10.126.0.67, server: kong, request: "PRI * HTTP/2.0"

10.126.0.67 - - [08/Jun/2020:12:51:21 +0000] "PRI * HTTP/2.0" 400 12 "-" "-"

Are you sending the requests over HTTPS, and do you have HTTP/2 enabled?

gRPC normally uses HTTP/2 as its transport, and by extension requires HTTPS to function. That log line suggests that you may have HTTPS, but do not have HTTP/2 enabled. It is normally enabled by default via the http2 parameter on the TLS proxy listen, as shown in kong.conf's proxy_listen option.
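For reference, that "PRI * HTTP/2.0" line is the fixed HTTP/2 client connection preface (RFC 7540, section 3.5). When a client opens an HTTP/2 connection against a listener that only speaks HTTP/1.x, the server parses the preface as a malformed request line, which is exactly the 400 you're seeing. You can print the preface yourself to see where the log line comes from:

```shell
# The HTTP/2 connection preface is a fixed byte string (RFC 7540 §3.5).
# An HTTP/1.x-only listener reads its first line as a bogus request line,
# which is what shows up in the Kong/NGINX logs as "PRI * HTTP/2.0".
printf 'PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n' | head -n 1
```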

Some older distribution methods on Kubernetes may not set that listen parameter; in particular, the Helm chart did not prior to 1.3.0. Can you share which method you're using to deploy Kong on Kubernetes, and your proxy_listen (or equivalent KONG_PROXY_LISTEN environment variable) configuration?
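For comparison, a working listen configuration looks like the following (the ports here are the chart defaults, shown only as an example):

```
# kong.conf form:
proxy_listen = 0.0.0.0:8000, 0.0.0.0:8443 http2 ssl

# Equivalent environment-variable form, as set by the Helm chart:
KONG_PROXY_LISTEN="0.0.0.0:8000, 0.0.0.0:8443 http2 ssl"
```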

Hi, yes, we are sending it over HTTPS and we have HTTP/2 enabled using the environment variable, as you can see in the pod manifest and from inside the pod itself:

bash-5.0$ env | grep LISTEN
KONG_ADMIN_LISTEN=0.0.0.0:8444 http2 ssl
KONG_STATUS_LISTEN=0.0.0.0:8100
KONG_STREAM_LISTEN=off
KONG_PROXY_LISTEN=0.0.0.0:8000, 0.0.0.0:8443 http2 ssl

Here are the live pod definition and the ingress:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.140.24.28/32
  creationTimestamp: "2020-06-04T16:25:19Z"
  generateName: kong-internal-kong-84949dc7f6-
  labels:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kong-internal
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: "2"
    helm.sh/chart: kong-1.5.0
    pod-template-hash: 84949dc7f6
  name: kong-internal-kong-84949dc7f6-6ln6k
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: kong-internal-kong-84949dc7f6
    uid: 2cbd3b83-a4e3-4625-8353-203fd46fc824
  resourceVersion: "27380521"
  selfLink: /api/v1/namespaces/default/pods/kong-internal-kong-84949dc7f6-6ln6k
  uid: 0b9906ac-c451-4656-b948-7e42d44cac1b
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: workloadType
            operator: In
            values:
            - general-purpose
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - kong
          topologyKey: kubernetes.io/hostname
        weight: 100
  containers:
  - args:
    - /kong-ingress-controller
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: CONTROLLER_ELECTION_ID
      value: kong-ingress-controller-leader-kong-internal
    - name: CONTROLLER_INGRESS_CLASS
      value: kong-internal
    - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
      value: "true"
    - name: CONTROLLER_KONG_URL
      value: https://localhost:8444
    - name: CONTROLLER_PUBLISH_SERVICE
      value: default/kong-internal-kong-proxy
    image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: ingress-controller
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      limits:
        cpu: 153m
        memory: 153Mi
      requests:
        cpu: 153m
        memory: 153Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kong-internal-kong-token-cvjdk
      readOnly: true
  - env:
    - name: KONG_ADMIN_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_ADMIN_ERROR_LOG
      value: /dev/stderr
    - name: KONG_ADMIN_GUI_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_ADMIN_GUI_ERROR_LOG
      value: /dev/stderr
    - name: KONG_ADMIN_LISTEN
      value: 0.0.0.0:8444 http2 ssl
    - name: KONG_DATABASE
      value: "off"
    - name: KONG_HEADERS
      value: "off"
    - name: KONG_LOG_LEVEL
      value: info
    - name: KONG_LUA_PACKAGE_PATH
      value: /opt/?.lua;/opt/?/init.lua;;
    - name: KONG_LUA_SSL_TRUSTED_CERTIFICATE
      value: /etc/ssl/cert.pem
    - name: KONG_LUA_SSL_VERIFY_DEPTH
      value: "2"
    - name: KONG_NGINX_DAEMON
      value: "off"
    - name: KONG_NGINX_HTTP_INCLUDE
      value: /kong/servers.conf
    - name: KONG_NGINX_WORKER_PROCESSES
      value: "2"
    - name: KONG_PLUGINS
      value: bundled
    - name: KONG_PORTAL_API_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_PORTAL_API_ERROR_LOG
      value: /dev/stderr
    - name: KONG_PREFIX
      value: /kong_prefix/
    - name: KONG_PROXY_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_PROXY_ERROR_LOG
      value: /dev/stderr
    - name: KONG_PROXY_LISTEN
      value: 0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
    - name: KONG_REAL_IP_RECURSIVE
      value: "on"
    - name: KONG_STATUS_LISTEN
      value: 0.0.0.0:8100
    - name: KONG_STREAM_LISTEN
      value: "off"
    - name: KONG_TRUSTED_IPS
      value: 0.0.0.0/0,::/0
    - name: KONG_VERSION
      value: 2.0.4
    image: kong:2.0.4
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - kong quit
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /status
        port: metrics
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: proxy
    ports:
    - containerPort: 8444
      name: admin-tls
      protocol: TCP
    - containerPort: 8000
      name: proxy
      protocol: TCP
    - containerPort: 8443
      name: proxy-tls
      protocol: TCP
    - containerPort: 8100
      name: status
      protocol: TCP
    - containerPort: 9542
      name: metrics
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /status
        port: metrics
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      limits:
        cpu: 300m
        memory: 500Mi
      requests:
        cpu: 300m
        memory: 500Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /kong_prefix/
      name: kong-internal-kong-prefix-dir
    - mountPath: /tmp
      name: kong-internal-kong-tmp
    - mountPath: /kong
      name: custom-nginx-template-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kong-internal-kong-token-cvjdk
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: gke-barney-barney-general-purpose-062328ba-qf9w
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    runAsUser: 1000
  serviceAccount: kong-internal-kong
  serviceAccountName: kong-internal-kong
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: kong-internal-kong-prefix-dir
  - emptyDir: {}
    name: kong-internal-kong-tmp
  - configMap:
      defaultMode: 420
      name: kong-internal-kong-default-custom-server-blocks
    name: custom-nginx-template-volume
  - name: kong-internal-kong-token-cvjdk
    secret:
      defaultMode: 420
      secretName: kong-internal-kong-token-cvjdk
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-06-04T16:25:19Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-06-04T16:25:58Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-06-04T16:25:58Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-06-04T16:25:19Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://3d10f5601052626e7563de1e289f5f0ba85ee54833284b9d555ebf40601b94f6
    image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.0
    imageID: docker-pullable://kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller@sha256:858c3738a57f09f3c04e4216e5f924f151ea4991265c8f1a6468c7085dd0443c
    lastState:
      terminated:
        containerID: docker://b2710f89839629aba6a30c70bdeac35d50e5dfc05e5b6af1615fc525d786b08e
        exitCode: 255
        finishedAt: "2020-06-04T16:25:21Z"
        reason: Error
        startedAt: "2020-06-04T16:25:21Z"
    name: ingress-controller
    ready: true
    restartCount: 2
    state:
      running:
        startedAt: "2020-06-04T16:25:38Z"
  - containerID: docker://7dafce2cf0e984e2d9987ca124aa8189084dc1bf057bf2fc834edc43f75ba6c6
    image: kong:2.0.4
    imageID: docker-pullable://kong@sha256:32a09516a4fad6a7d42a90f7f754970555027a73e349b980a72c7120e00488b4
    lastState: {}
    name: proxy
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2020-06-04T16:25:20Z"
  hostIP: 10.126.0.36
  phase: Running
  podIP: 10.140.24.28
  podIPs:
  - ip: 10.140.24.28
  qosClass: Guaranteed
  startTime: "2020-06-04T16:25:19Z"


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    konghq.com/protocols: grpc,grpcs
    kubernetes.io/ingress.class: kong-internal
  creationTimestamp: "2020-06-04T19:31:02Z"
  generation: 1
  labels:
    workloadName: prom-operator-thanos
    workloadScope: metric
    workloadStack: observability
  name: prom-operator-thanos
  namespace: default
  resourceVersion: "27492085"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/prom-operator-thanos
  uid: 13acb3aa-fb81-40d6-ab3a-252681c282c8
spec:
  rules:
  - host: thanos.xxxxxxxxxxx
    http:
      paths:
      - backend:
          serviceName: prom-operator-thanos
          servicePort: grpc
        path: /
  tls:
  - hosts:
    - thanos.xxxxxxxx
    secretName: kubecertbot.wildcard.xxxxxxxxx
status:
  loadBalancer:
    ingress:
    - ip: xxxxxx

I see http2 enabled in the generated nginx-kong.conf as well:

server {
server_name kong;
listen 0.0.0.0:8000;
listen 0.0.0.0:8443 ssl http2;

error_page 400 404 408 411 412 413 414 417 494 /kong_error_handler;
error_page 500 502 503 504                     /kong_error_handler;

access_log /dev/stdout;
error_log  /dev/stderr info;

ssl_certificate     /kong_prefix/ssl/kong-default.crt;
ssl_certificate_key /kong_prefix/ssl/kong-default.key;
ssl_session_cache   shared:SSL:10m;
ssl_certificate_by_lua_block {
    Kong.ssl_certificate()
}

# injected nginx_proxy_* directives
real_ip_header X-Real-IP;
real_ip_recursive on;
set_real_ip_from  0.0.0.0/0;
set_real_ip_from  ::/0;

rewrite_by_lua_block {
    Kong.rewrite()
}

access_by_lua_block {
    Kong.access()
}

header_filter_by_lua_block {
    Kong.header_filter()
}

body_filter_by_lua_block {
    Kong.body_filter()
}

log_by_lua_block {
    Kong.log()
}

location / {
default_type '';

    set $ctx_ref                    '';
    set $upstream_te                '';
    set $upstream_host              '';
    set $upstream_upgrade           '';
    set $upstream_connection        '';
    set $upstream_scheme            '';
    set $upstream_uri               '';
    set $upstream_x_forwarded_for   '';
    set $upstream_x_forwarded_proto '';
    set $upstream_x_forwarded_host  '';
    set $upstream_x_forwarded_port  '';
    set $kong_proxy_mode            'http';

    proxy_http_version    1.1;
    proxy_set_header      TE                $upstream_te;
    proxy_set_header      Host              $upstream_host;
    proxy_set_header      Upgrade           $upstream_upgrade;
    proxy_set_header      Connection        $upstream_connection;
    proxy_set_header      X-Forwarded-For   $upstream_x_forwarded_for;
    proxy_set_header      X-Forwarded-Proto $upstream_x_forwarded_proto;
    proxy_set_header      X-Forwarded-Host  $upstream_x_forwarded_host;
    proxy_set_header      X-Forwarded-Port  $upstream_x_forwarded_port;
    proxy_set_header      X-Real-IP         $remote_addr;
    proxy_pass_header     Server;
    proxy_pass_header     Date;
    proxy_ssl_name        $upstream_host;
    proxy_ssl_server_name on;
    proxy_pass            $upstream_scheme://kong_upstream$upstream_uri;
}
}

bash-5.0$ kong version
2.0.4
bash-5.0$ /usr/local/openresty/nginx/sbin/nginx -v
nginx version: openresty/1.15.8.3
bash-5.0$ /usr/local/openresty/nginx/sbin/nginx -V
nginx version: openresty/1.15.8.3
built by gcc 9.2.0 (Alpine 9.2.0)
built with OpenSSL 1.1.1f 31 Mar 2020
TLS SNI support enabled
configure arguments: --prefix=/usr/local/openresty/nginx --with-cc-opt='-O2 -I/tmp/build/usr/local/kong/include' --add-module=../ngx_devel_kit-0.3.1rc1 --add-module=../echo-nginx-module-0.61 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.32 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.08 --add-module=../srcache-nginx-module-0.31 --add-module=../ngx_lua-0.10.15 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.15 --add-module=../rds-csv-nginx-module-0.09 --add-module=../ngx_stream_lua-0.0.7 --with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/tmp/build/usr/local/kong/lib -Wl,--disable-new-dtags,-rpath,/usr/local/kong/lib' --with-pcre-jit --with-http_ssl_module --with-http_realip_module --with-http_stub_status_module --with-http_v2_module --add-module=/work/lua-kong-nginx-module --add-module=/work/lua-kong-nginx-module/stream --with-stream_realip_module --with-stream_ssl_preread_module --with-pcre=/work/pcre-8.44 --with-stream --with-stream_ssl_module --with-stream_ssl_preread_module

This looks very similar to the earlier Kong http2 support topic.

That older post shouldn't be relevant on current versions; the change linked from it is part of the service mesh code, which was removed in 2.0.0. On earlier versions, that code changed parts of the TLS implementation; without it, Kong uses the standard NGINX TLS and HTTP/2 implementation, and the listen directives should be all you need.

What's the full inbound path to this proxy instance? The kong-internal naming throughout the Pod suggests that there's likely something else in front of this Kong instance; if so, it may terminate TLS at the edge and forward the request onward over plaintext HTTP. For HTTP/2 and gRPC, it should instead deliver the request over HTTPS.

Handling gRPC traffic on a plaintext HTTP listen with HTTP/2 enabled should be possible, but it is non-standard and more complicated to set up properly, so I'd recommend structuring the request path so that it uses HTTPS throughout, if at all possible.
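For completeness, if you did need to accept gRPC over plaintext HTTP/2 (h2c), the http2 flag can also be applied to the non-TLS listen. This is only a sketch of what that would look like, not something from your current config; note that with this NGINX version, a cleartext listener with http2 only accepts prior-knowledge HTTP/2, so plain HTTP/1.1 clients can no longer use that port:

```
# Hypothetical h2c setup: http2 on the plaintext listen as well.
KONG_PROXY_LISTEN="0.0.0.0:8000 http2, 0.0.0.0:8443 http2 ssl"
```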

Sorry for the late reply. The error turned out to be on the client sending the request (in our case, Thanos Query), which did not support mixing gRPC and gRPCS stores. We have temporarily moved to plain gRPC until the PR that allows mixing store types is merged.