Kong proxy sending HTTP/1 request to upstream k8s gRPC service

Hello, I’ve followed the k8s gRPC setup from the konghq blog post (but using grpc instead of grpcs).

However, I am running into an issue where my gRPC request appears to be converted to an HTTP/1 request before it reaches the upstream. This is the request I’m making:
$ grpcurl -import-path ./proto -proto health.proto -d '{"service": "my.Service"}' example.com:443 grpc.health.v1.Health/Check

This is the client response for the health check:

ERROR:
  Code: Unknown
  Message: : HTTP status code 0; transport: missing content-type field

And these are the corresponding Kong proxy logs:

[warn] 1097#0: *163548 a client request body is buffered to a temporary file /kong_prefix/client_body_temp/0000000024, client: 192.168.x.144, server: kong, request: "POST /grpc.health.v1.Health/Check HTTP/2.0", host: "example.com:443"
[debug] 1097#0: *163548 [lua] init.lua:973: balancer(): setting address (try 1): 192.168.x.198:6574
[debug] 1097#0: *163548 [lua] init.lua:1002: balancer(): enabled connection keepalive (pool=192.168.x.198|6574, pool_size=60, idle_timeout=60, max_requests=100)
[error] 1097#0: *135442 upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: 192.168.x.198, server: kong, request: "POST /grpc.health.v1.Health/Check HTTP/2.0", upstream: "http://192.168.x.x:6574/grpc.health.v1.Health/Check", host: "example.com.io:443"
[error] 1097#0: *135442 readv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.x.198, server: kong, request: "POST /grpc.health.v1.Health/Check HTTP/2.0", upstream: "http://192.168.x.x:6574/grpc.health.v1.Health/Check", host: "example.com.io:443"
192.168.x.x - - [12/Jan/2022:19:31:38 +0000] "POST /grpc.health.v1.Health/Check HTTP/2.0" 009 0 "-" "grpcurl/1.8.1 grpc-go/1.37.0"
[info] 1097#0: *135578 client closed connection while waiting for request, client: 192.168.x.144, server: 0.0.0.0:8000

It seems that the proxy is expecting an HTTP/1 response header from the upstream:
upstream sent no valid HTTP/1.0 header while reading response header from upstream
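
To narrow this down, one thing worth checking is which protocol Kong actually configured for the generated service. A sketch using the Admin API, assuming the chart’s default TLS admin listener on port 8444 and a release-derived deployment name of kong-ingress-kong (adjust both for your cluster):

# hypothetical names: port 8444 and deployment kong-ingress-kong come from chart defaults and the release name below
kubectl -n kong port-forward deploy/kong-ingress-kong 8444:8444 &
curl -sk https://localhost:8444/services | jq '.data[] | {name, protocol, host, port}'

If the Kubernetes Service annotation isn’t being picked up, protocol comes back as http rather than grpc, which would explain Kong talking plain HTTP/1.1 to an upstream that only speaks gRPC over HTTP/2.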

Kong install:

helm upgrade -n kong kong-ingress kong/kong -f values.yaml --set ingressController.installCRDs=false

#values.yaml
env:
  nginx_http_client_max_body_size: 50m
  nginx_http_client_body_buffer_size: 50m
  log_level: debug
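
(The chart’s env: block is rendered as KONG_-prefixed environment variables on the proxy container; a quick sanity check that the values above were applied, assuming the release-derived deployment name kong-ingress-kong:)

# deployment name is an assumption based on the release "kong-ingress"; adjust as needed
kubectl -n kong describe deployment kong-ingress-kong | grep -E 'KONG_(LOG_LEVEL|NGINX_HTTP)'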

Ingress, Deployment, and Service config:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: webapp
  annotations:
    konghq.com/protocols: grpc
    kubernetes.io/ingress.class: kong
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    cert-manager.io/acme-challenge-type: http01
spec:
  tls:
    - secretName: webapp-secret-prod
      hosts:
        - example.com
  defaultBackend:
    service:
      name: webapp
      port:
        number: 6574
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: webapp
                port:
                  number: 6574
---
apiVersion: v1
kind: Namespace
metadata:
  name: webapp
  labels:
    app: webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webapp
  name: webapp
  namespace: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - image: my.image.com/v1
        name: webapp-image
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    konghq.com/protocols: grpc
  labels:
    app: webapp
  name: webapp
  namespace: webapp
spec:
  ports:
  - name: grpc
    port: 6574
    targetPort: 6574
  selector:
    app: webapp

Versions

Kubernetes: 1.21
Kong: 2.6.0
Nginx (inside kong proxy):

$ nginx -V
nginx version: openresty/1.19.9.1
built by gcc 10.3.1 20210424 (Alpine 10.3.1_git20210424)
built with OpenSSL 1.1.1l  24 Aug 2021
TLS SNI support enabled
configure arguments: --prefix=/usr/local/openresty/nginx --with-cc-opt='-O2 -I/tmp/build/usr/local/kong/include' --add-module=../ngx_devel_kit-0.3.1 --add-module=../echo-nginx-module-0.62 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.32 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.08 --add-module=../srcache-nginx-module-0.32 --add-module=../ngx_lua-0.10.20 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.15 --add-module=../rds-csv-nginx-module-0.09 --add-module=../ngx_stream_lua-0.0.10 --with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/tmp/build/usr/local/kong/lib -Wl,--disable-new-dtags,-rpath,/usr/local/kong/lib' --with-pcre-jit --with-http_ssl_module --with-http_realip_module --with-http_stub_status_module --with-http_v2_module --add-module=/work/lua-kong-nginx-module --add-module=/work/lua-kong-nginx-module/stream --with-stream_realip_module --with-stream_ssl_preread_module --with-pcre=/work/pcre-8.44 --with-pcre-opt=-g --with-stream --with-stream_ssl_module

For the Service, the annotation is konghq.com/protocol: grpc (singular protocol versus the plural protocols used for the Ingress annotation).
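
Applied to the Service manifest above, only the annotation key changes (a sketch with the same names and ports as in the original config):

apiVersion: v1
kind: Service
metadata:
  annotations:
    # singular "protocol" for Services; the plural "protocols" form applies to the Ingress
    konghq.com/protocol: grpc
  labels:
    app: webapp
  name: webapp
  namespace: webapp
spec:
  ports:
  - name: grpc
    port: 6574
    targetPort: 6574
  selector:
    app: webapp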

Fantastic, this worked for me; what a burden that was! Curious if there’s a way for Kong to emit warnings on unused konghq.com/* annotations.