Kong Helm Proxy Ingress Controller 400 Bad Request


While installing Kong via the Helm chart, I get an error any time I try to enable the ingress controller for the proxy. I am turning on the ingress controller so that it can request a certificate from cert-manager (which is functioning properly). With the ingress controller off, everything works as expected. With it on, I get a "400 Bad Request: The plain HTTP request was sent to HTTPS port" error.

I tried:

  1. Changing the container port (and overrideServiceTargetPort) in the tls section from 8443 to 8000 and to 80. This resulted in "SSL_ERROR_RX_RECORD_TOO_LONG" over HTTPS, or a Bad Request error over HTTP.
  2. Adding the "konghq.com/protocol":"https" annotation to the proxy service. This results in a Bad Request error for both HTTP and HTTPS.
  3. Turning off HTTP in the proxy.
  4. Turning off TLS in the ingress controller.
  5. Making some changes to the admin API based on errors I was seeing in the proxy logs. Right now the proxy logs just show the 400s without any errors.
  6. Changing node ports.
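For reference, attempt 2 corresponds to a values.yaml fragment along these lines (a sketch; only the annotation itself is from my actual config):

```yaml
proxy:
  # Tell the Kong ingress controller to speak HTTPS to this service
  annotations:
    konghq.com/protocol: "https"
```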

I think the issue is that the ingress controller is terminating the TLS connection and passing an unsecured connection to the kong proxy, just on the wrong port. This is fine, but I can’t seem to find the correct port in the proxy to pass the connection to.
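If that hypothesis is right, the relevant piece is the Ingress resource the chart renders from proxy.ingress. Roughly, it should look like the sketch below (the resource/service names and the backend port number are my assumptions about what the chart generates, not values taken from my cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-proxy          # name assumed; depends on the release name
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-cluster-issuer"
spec:
  ingressClassName: kong
  tls:
  - hosts:
    - kong-test.domain
    secretName: kong-proxy-cert
  rules:
  - host: kong-test.domain
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: kong-kong-proxy   # assumed proxy Service name
            port:
              number: 443           # assumption: pointing at the TLS listen,
                                    # which would explain the HTTP-to-HTTPS error
```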

One further oddity: sometimes, immediately after applying changes to the Helm chart, there is a brief window where navigating to Kong over HTTPS before everything has loaded will actually connect properly. All subsequent tries fail, though, and I can't reliably reproduce the successful connection.

Edit: This is on GKE, so the AWS load balancer annotations don't apply here (and I can't find anything similar)…

Kong: 2.8

proxy:
  # Enable creating a Kubernetes service for the proxy
  enabled: true
  type: LoadBalancer
  # To specify annotations or labels for the proxy service, add them to the respective
  # "annotations" or "labels" dictionaries below.
  annotations: #{"konghq.com/protocol":"https"}
  # If terminating TLS at the ELB, the following annotations can be used
  #{"service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "*",}
  # "service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": "true",
  # "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:REGION:ACCOUNT:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX",
  # "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "kong-proxy-tls",
  # "service.beta.kubernetes.io/aws-load-balancer-type": "elb"
  labels:
    enable-metrics: "true"

  http:
    # Enable plaintext HTTP listen for the proxy
    enabled: true
    servicePort: 80
    containerPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []

  tls:
    # Enable HTTPS listen for the proxy
    enabled: true
    servicePort: 443
    containerPort: 8443
    # Set a target port for the TLS port in proxy service
    #overrideServiceTargetPort: 8000
    # Set a nodePort which is available if service type is NodePort
    #nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
    - http2

  # Define stream (TCP) listen
  # To enable, remove "[]", uncomment the section below, and select your desired
  # ports and parameters. Listens are dynamically named after their servicePort,
  # e.g. "stream-9000" for the below.
  # Note: although you can select the protocol here, you cannot set UDP if you
  # use a LoadBalancer Service due to limitations in current Kubernetes versions.
  # To proxy both TCP and UDP with LoadBalancers, you must enable the udpProxy Service
  # in the next section and place all UDP stream listen configuration under it.
  stream: []
    #   # Set the container (internal) and service (external) ports for this listen.
    #   # These values should normally be the same. If your environment requires they
    #   # differ, note that Kong will match routes based on the containerPort only.
    # - containerPort: 9000
    #   servicePort: 9000
    #   protocol: TCP
    #   # Optionally set a static nodePort if the service type is NodePort
    #   # nodePort: 32080
    #   # Additional listen parameters, e.g. "ssl", "reuseport", "backlog=16384"
    #   # "ssl" is required for SNI-based routes. It is not supported on versions <2.0
    #   parameters: []

  # Kong proxy ingress settings.
  # Note: You need this only if you are using another Ingress Controller
  # to expose Kong outside the k8s cluster.
  ingress:
    # Enable/disable exposure using ingress.
    enabled: true
    ingressClassName: kong
    # Ingress hostname
    # TLS secret name.
    tls: kong-proxy-cert
    hostname: kong-test.domain
    # Map of ingress annotations.
    annotations: {"kubernetes.io/tls-acme": "true", "cert-manager.io/cluster-issuer": "letsencrypt-cluster-issuer"}
    # Ingress path.
    path: /
    # Each path in an Ingress is required to have a corresponding path type. (ImplementationSpecific/Exact/Prefix)
    pathType: ImplementationSpecific

  # Optionally specify a static load balancer IP.
  # loadBalancerIP:


Every time I match the protocols, either by changing the backend port in the ingress controller to 80 or by setting the "konghq.com/protocol":"https" annotation, I get past the initial HTTP-to-HTTPS port error, but then the proxy returns a standard 400 Bad Request. The strange thing is that I only get this new 400 when using the hostname specified in the ingress. If I curl the proxy service name (as specified in the backend of the ingress) directly from a pod, or even curl the external IP of the load balancer, I get a typical 404 response from the proxy. The 400 Bad Request occurs only while the ingress controller for the proxy is on, and only when I supply the ingress hostname with the request: curling the proxy service name from an internal pod works until I add the -H option with the ingress hostname, at which point it returns 400 again.

I was able to get around this problem by adding this annotation to the proxy ingress annotations section:

"konghq.com/preserve-host": "false"

Making the change manually in the database didn't work; everything only started working once I updated the Helm chart with the annotation above.
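In values.yaml terms, the working proxy.ingress annotations block ends up as (same map as before, with the workaround added):

```yaml
annotations:
  kubernetes.io/tls-acme: "true"
  cert-manager.io/cluster-issuer: "letsencrypt-cluster-issuer"
  # Configure the generated route with preserve_host disabled, so Kong does
  # not forward the ingress hostname upstream (which was triggering the 400
  # on host-matched requests)
  konghq.com/preserve-host: "false"
```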