Request header or cookie too large (HTTP/1.1 494)

Scenario:

An nginx service that returns 204; we use it as a healthcheck endpoint:

  server {
      listen 80;
      server_name _;

      location / {
          return 204;
      }
  }

An Ingress of type kong with path /health bound to the nginx service.
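I haven't pasted the exact manifest, but it looks roughly like the sketch below (resource name, namespace and backend service name/port are placeholders, not the real ones):

    # Sketch of the Kong Ingress routing /health to the nginx service.
    # Names, namespace and the backend port are placeholders.
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: health
      namespace: namespace
      annotations:
        kubernetes.io/ingress.class: kong
    spec:
      rules:
        - http:
            paths:
              - path: /health
                backend:
                  serviceName: health-nginx
                  servicePort: 80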

The kong-proxy is using an AWS ALB.

A ClusterIP service called apigw in front of the kong-ingress.
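Again as a rough sketch (the selector and target port here are placeholders; Kong's proxy usually listens on 8000, but that's an assumption):

    # Sketch of the ClusterIP service in front of the kong-ingress pods.
    apiVersion: v1
    kind: Service
    metadata:
      name: apigw
      namespace: namespace
    spec:
      type: ClusterIP
      selector:
        app: kong-ingress
      ports:
        - name: proxy
          port: 80
          targetPort: 8000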

If I call https://apigw.mydomain.com/health, I correctly get 204 No Content.

From inside the cluster, calling http://apigw.namespace.svc.cluster.local/health, I get the following:

* Mark bundle as not supporting multiuse
< HTTP/1.1 494 Unknown
< content-type: text/plain; charset=utf-8
< content-length: 35
< date: Fri, 12 Jun 2020 13:48:11 GMT
< server: envoy
< x-kong-response-latency: 1
< x-envoy-upstream-service-time: 882
< x-kong-upstream-latency: 881
< x-kong-proxy-latency: 0
< via: kong/2.0.2
< x-envoy-upstream-healthchecked-cluster: kong-ingress.internal
< 
Request header or cookie too large
* Connection #0 to host apigw.internal.svc.cluster.local left intact

This happens only with the nginx "healthcheck" service; every other service/ingress works fine when called through the internal apigw service.

The Kong proxy containers are logging lots of lines like:

kong-ingress-ff845f7d4-dhwf6 proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-m5ctq proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-dhwf6 proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-prxd6 proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-66flv proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-prxd6 proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-m5ctq proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-m5ctq proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-dhwf6 proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"
kong-ingress-ff845f7d4-66flv proxy 127.0.0.1 - - [12/Jun/2020:13:52:34 +0000] "GET /health HTTP/1.1" 494 35 "-" "curl/7.66.0"

I’ve already tried increasing the nginx buffers, disabling server_tokens, and other solutions proposed here; nothing seems to work.

Looks like it was a problem with ports. I changed nginx to listen on 8080 and changed port and targetPort in the k8s resources.

It’s working fine.
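In case it helps anyone else, the fix boils down to making the Service's targetPort match the new nginx listen port. A minimal sketch of the updated Service (names are placeholders, same as above):

    # nginx now has `listen 8080;` in its server block,
    # and the Service maps port 80 to that targetPort.
    apiVersion: v1
    kind: Service
    metadata:
      name: health-nginx
    spec:
      selector:
        app: health-nginx
      ports:
        - name: http
          port: 80          # port the ingress / other pods talk to
          targetPort: 8080  # must match nginx's listen port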