gRPC request blocked at the proxy level

Hi,

I’m trying to set up Thanos communication between a Thanos sidecar (in cluster A) and my Thanos querier (in the observability cluster).
The querier can’t join the sidecar; it seems the request blocks at the proxy level.
I don’t understand what I’m missing in the setup.
I have another gRPC service which takes the same network path and works like a charm with a very similar ingress setup.

Log from the querier:

level=warn ts=2020-06-04T19:46:13.545761641Z caller=storeset.go:440 component=storeset msg="update of store node failed" err="getting metadata: fetching store info from thanos.barney.hvbrt.io:443: rpc error: code = DeadlineExceeded desc = latest connection error: connection closed" address=thanos.barney.hvbrt.io:443

From kong proxy:

2020/06/04 19:47:06 [info] 22#0: *133611 client sent invalid request while reading client request line, client: 10.126.0.243, server: kong, request: "PRI * HTTP/2.0"
10.126..243 - - [04/Jun/2020:19:47:06 +0000] "PRI * HTTP/2.0" 400 12 "-" "-"

kong version: 2.0.4
ingress version: 0.9.0

the kong configuration:

env:
  version: 2.0.4
  database: "off"
  headers: "off"
  stream_listen: "off"
  nginx_daemon: "off"
  nginx_worker_processes: "2"
  admin_error_log: /dev/stderr
  admin_access_log: /dev/stdout
  admin_gui_error_log: /dev/stderr
  admin_gui_access_log: /dev/stdout
  proxy_error_log: /dev/stderr
  proxy_access_log: /dev/stdout
  portal_api_error_log: /dev/stderr
  portal_api_access_log: /dev/stdout
  nginx_http_include: /kong/servers.conf
  trusted_ips: 0.0.0.0/0,::/0
  real_ip_recursive: "on"
  proxy_listen: 0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
  admin_listen: 0.0.0.0:8444 http2 ssl
  status_listen: 0.0.0.0:8100
  prefix: /kong_prefix/
  lua_package_path: /opt/?.lua;/opt/?/init.lua;;
  plugins: bundled
  log_level: info
  lua_ssl_trusted_certificate: /etc/ssl/cert.pem
  lua_ssl_verify_depth: 2

the thanos sidecar ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    konghq.com/protocols: grpc,grpcs
    kubernetes.io/ingress.class: kong-internal
  creationTimestamp: "2020-06-04T19:31:02Z"
  generation: 1
  labels:
    workloadName: prom-operator-thanos
    workloadScope: metric
    workloadStack: observability
  name: prom-operator-thanos
  namespace: default
  resourceVersion: "27492085"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/prom-operator-thanos
  uid: 13acb3aa-fb81-40d6-ab3a-252681c282c8
spec:
  rules:
  - host: thanos.barney.hvbrt.io
    http:
      paths:
      - backend:
          serviceName: prom-operator-thanos
          servicePort: grpc
        path: /
  tls:
  - hosts:
    - thanos.barney.hvbrt.io
    secretName: kubecertbot.wildcard.barney.hvbrt.io

Have you tried manually running the gRPC query to see if that goes through or not?
I’d also make sure that grpc and grpcs are not getting mixed up.

Finally, the issue is on the Thanos side.

The issue is that when the Thanos querier has mixed endpoints with grpc and grpcs, the TLS initialization is not done correctly. There is a PR to manage mixed grpc + grpcs.
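As an added example (not part of the original posts and not Kong or Thanos code), this sketch scans proxy log lines for the symptom in the Kong log above: the HTTP/2 connection preface line `PRI * HTTP/2.0` parsed as an HTTP/1.x request line and rejected. The function names are hypothetical, the first two sample lines are copied from this thread, and the third is constructed.

```python
import re
from collections import Counter

# First line of the HTTP/2 client connection preface (RFC 7540, section 3.5).
H2_PREFACE_LINE = "PRI * HTTP/2.0"

# nginx/Kong error-log form: ... client: <ip>, server: <name>, request: "<line>"
ERROR_RE = re.compile(
    r'client: (?P<client>[^,]+), server: [^,]+, request: "(?P<req>[^"]*)"')
# nginx combined access-log form: <ip> - - [<time>] "<line>" <status> <bytes> ...
ACCESS_RE = re.compile(
    r'^(?P<client>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" (?P<status>\d{3}) ')


def classify(line):
    """Return (client, source) if the preface was parsed as an HTTP/1.x request."""
    m = ERROR_RE.search(line)
    if m and m["req"] == H2_PREFACE_LINE:
        return m["client"], "error_log"
    m = ACCESS_RE.match(line)
    if m and m["req"] == H2_PREFACE_LINE and m["status"] == "400":
        return m["client"], "access_log"
    return None


def preface_rejections(lines):
    """Count rejected HTTP/2 prefaces per (client, log source)."""
    hits = Counter()
    for line in lines:
        found = classify(line.strip())
        if found:
            hits[found] += 1
    return hits


SAMPLE = [
    # The two Kong lines quoted in this thread, verbatim.
    '2020/06/04 19:47:06 [info] 22#0: *133611 client sent invalid request while '
    'reading client request line, client: 10.126.0.243, server: kong, '
    'request: "PRI * HTTP/2.0"',
    '10.126..243 - - [04/Jun/2020:19:47:06 +0000] "PRI * HTTP/2.0" 400 12 "-" "-"',
    # Constructed line: a gRPC call that was handled as HTTP/2.
    '10.126.0.17 - - [04/Jun/2020:19:47:09 +0000] '
    '"POST /thanos.Store/Info HTTP/2.0" 200 45 "-" "grpc-go/1.26.0"',
]

if __name__ == "__main__":
    for (client, source), count in sorted(preface_rejections(SAMPLE).items()):
        print(f"{client} {source} {count}")
```

Each hit identifies a client whose connection reached a listener that was not speaking HTTP/2 for that connection, which is where to start checking for a grpc/grpcs mismatch on the client side.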

@Damien_hvbrt we’re not using Kong but we’re experiencing a similar problem. Can you advise as to what you fixed in Thanos querier or sidecar to address this, and can you provide a link to the PR? I’ve tried searching but I am not having any luck finding it.