Kong + Istio + gRPC: server closed the stream without sending trailers

Hi, I have been trying to get Kong working alongside Istio for routing to a gRPC service (Hello World). For context, I am using the official Kong helm chart at 1.3.0 and Istio 1.4.3. Based on the Kong/Istio blog post, it seems like I would simply need to enable sidecar injection for the Kong proxy deployment. Kong works and routes correctly to the gRPC service without the sidecar; however, once it is injected, I see the error “rpc error: code = Internal desc = server closed the stream without sending trailers”.
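For completeness, injection on the Kong proxy pods can be enabled with something like the following values fragment (a minimal sketch; it assumes the chart exposes a podAnnotations value, and labeling the namespace as in the blog post should be equivalent):

# values.yaml fragment for the Kong helm chart (sketch)
podAnnotations:
  # ask Istio to inject its Envoy sidecar into the Kong proxy pods
  sidecar.istio.io/inject: "true"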

Some other details of my setup:

Some findings:

  • There are no errors on the gRPC server side
  • I see a log entry '[02/Mar/2020:23:12:32 +0000] "POST /helloworld.Greeter/SayHello HTTP/2.0" 200 18 "-" "grpc-go/1.26.0"', which seems to imply the request/response was OK
  • Looking at debug-level logs in the Kong Envoy sidecar, I see this entry: '[Envoy (Epoch 0)] [2020-03-02 05:42:37.055][26][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:206] [C102] response complete', which suggests an HTTP/1 connection was used. I do not see this log on the server-side Envoy sidecar, nor when I call the server from another pod within the mesh
  • Enabling PILOT_ENABLE_PROTOCOL_SNIFFING_FOR_INBOUND in Istio Pilot and renaming the service port to something random (so that it defaults to plain TCP) appears to get rid of this error, but Istio routing rules would no longer apply in that case (see the sketch after this list)
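As far as I understand, with inbound sniffing disabled Istio 1.4 picks the protocol from the Service port name, which is why the renaming trick above changes the behaviour. A sketch of the two cases (names and ports here are illustrative, not my actual manifest):

# Service for the Hello World gRPC server (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: greeter
spec:
  selector:
    app: greeter
  ports:
    - name: grpc-greeter   # "grpc-" prefix: Istio treats the port as gRPC/HTTP2 and applies L7 routing
    # - name: something    # an unrecognized name falls back to plain TCP: no downgrade, but no HTTP routing rules either
      port: 50051
      targetPort: 50051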

It seems like the issue is that the HTTP/2 connection is being downgraded to HTTP/1 in the Kong Envoy sidecar. Is there some reason for this, or some configuration I am missing? Happy to provide more info if needed. I have poked around quite a bit with no success.

I’m not a ninja with Istio and Envoy, so what I say here might be wrong, but this is how I would go about it:

  1. tcpdump the traffic coming out of Kong and make sure that the HTTP Host header is set correctly, so that Envoy can detect and route the traffic correctly.
  2. Figure out why Envoy is downgrading the connection.

@hbagdi Thanks for the reply; your first suggestion ultimately led to the issue. After digesting this thread: Kong Ingress controller is unable to route request to Istio virtual services, it seems the issue was that both the hostname and service port must match what is used to dial Kong.
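In case it saves someone else time, my rough understanding of what has to "match" (this could be off, and the names below are illustrative rather than my actual manifests): Kong preserves the authority used to dial it, so the Ingress rule host, the upstream Service hostname/port, and the authority the gRPC client uses all have to line up for the Kong-side Envoy to map the forwarded Host header onto an HTTP/2 cluster instead of falling back to HTTP/1.

# Illustrative Ingress routing gRPC through Kong (v1beta1 API, current at the time)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: greeter
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
    # dial Kong with this exact authority (host plus the service port, e.g. greeter.default.svc.cluster.local:80)
    # so that the Host header Kong preserves matches what the Istio sidecar expects for the upstream
    - host: greeter.default.svc.cluster.local
      http:
        paths:
          - path: /
            backend:
              serviceName: greeter
              servicePort: 80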


Hello,

Is there any update on this? It looks like I am also getting the same problem.

I’m using Istio 1.6 and Kong 1.5.1 with Kong Ingress Controller 0.9.

This is what I get from the Kong proxy log when trying to access gRPC through Kong with an Istio Envoy sidecar:

[ingress-kong-58cf79fcd9-2qrbk proxy] 2020/06/10 09:44:37 [warn] 25#0: *18589 [lua] reports.lua:73: log(): [reports] unknown request scheme: http while logging request, client: 10.148.34.1, server: kong, request: "PRI * HTTP/2.0"
[ingress-kong-58cf79fcd9-2qrbk proxy] 10.148.34.1 - - [10/Jun/2020:09:44:37 +0000] "PRI * HTTP/2.0" 400 12 "-" "-"
[ingress-kong-58cf79fcd9-2qrbk proxy] 2020/06/10 09:44:38 [warn] 25#0: *18599 [lua] reports.lua:73: log(): [reports] unknown request scheme: http while logging request, client: 10.148.0.62, server: kong, request: "PRI * HTTP/2.0"
[ingress-kong-58cf79fcd9-2qrbk proxy] 10.148.0.62 - - [10/Jun/2020:09:44:38 +0000] "PRI * HTTP/2.0" 400 12 "-" "-"
[ingress-kong-58cf79fcd9-2qrbk proxy] 2020/06/10 09:44:40 [warn] 25#0: *18614 [lua] reports.lua:73: log(): [reports] unknown request scheme: http while logging request, client: 10.148.0.62, server: kong, request: "PRI * HTTP/2.0"
[ingress-kong-58cf79fcd9-2qrbk proxy] 10.148.0.62 - - [10/Jun/2020:09:44:40 +0000] "PRI * HTTP/2.0" 400 12 "-" "-"
[ingress-kong-58cf79fcd9-2qrbk proxy] 10.148.0.62 - - [10/Jun/2020:09:44:42 +0000] "PRI * HTTP/2.0" 400 12 "-" "-"
[ingress-kong-58cf79fcd9-2qrbk proxy] 2020/06/10 09:44:42 [warn] 25#0: *18636 [lua] reports.lua:73: log(): [reports] unknown request scheme: http while logging request, client: 10.148.0.62, server: kong, request: "PRI * HTTP/2.0"

You can safely ignore those logging lines. Those do not have any effect on Kong’s correctness or performance. The bug causing this is fixed in the current dev branch as well.

Thank you for the reply @hbagdi. I still get an HTTP 400 response when trying to connect to the gRPC service in a namespace injected with the Istio Envoy sidecar. Kong is also running with an Istio Envoy sidecar.

Here are my manifests. I have also already applied some annotations like @yndai did.

This is my ingress manifest: https://gist.github.com/tonnyadhi/283a99a7a4587145bff1bed2044abddb

I also apply this for the host header: https://gist.github.com/tonnyadhi/1db60f2ba53a77e356bad56a52d891ac

And this is my service: https://gist.github.com/tonnyadhi/ecf2d6cb67bb22aa22ccf166312d07a2

  1. The annotation configuration.konghq.com: do-not-preserve-host should be present on the Service object as well, to override the host_header.
  2. host_header: grpcbin.grpc-sample.svc.cluster.local:80 should instead be host_header: grpcbin.grpc-sample.svc.cluster.local (without the port). A sketch of both fixes follows below.
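For reference, a rough sketch of how the two fixes could look together, based on the gists above (field names assume the KongIngress CRD accepts route.preserve_host and upstream.host_header as used there; the selector and ports are illustrative):

# KongIngress overriding preserve_host and the upstream Host header (sketch)
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: do-not-preserve-host
  namespace: grpc-sample
route:
  preserve_host: false
upstream:
  host_header: grpcbin.grpc-sample.svc.cluster.local   # note: no :80 suffix
---
# Service referencing the KongIngress, so the upstream override actually applies (sketch)
apiVersion: v1
kind: Service
metadata:
  name: grpcbin
  namespace: grpc-sample
  annotations:
    configuration.konghq.com: do-not-preserve-host
spec:
  selector:
    app: grpcbin
  ports:
    - name: grpc
      port: 9000
      targetPort: 9000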