Kong Ingress Controller with NLB does not work when preserving the client IP address

Summary

Following the Preserving Client IP Address guide on EKS, Kong is configured with proxy_protocol listeners and the proxy Service is annotated with service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*', but every connection through the NLB fails with "broken header ... while reading PROXY protocol" errors in the proxy container.

Kong Ingress Controller Helm version: 1.15.0

Kong or Kong Enterprise version: Kong 2.3 (open source, kong:2.3 image)

Kubernetes version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Environment

  • Cloud provider or hardware configuration: EKS
  • OS (e.g. from /etc/os-release): Linux

What happened

Hello everyone! I am using EKS with the Kong Ingress Controller and have enabled preserving the client IP address. I followed this article: Preserving Client IP Address - v1.1.x | Kong - Open-Source API Management and Microservice Management.

This is my values.yaml file:

autoscaling:
  enabled: "true"
env:
  database: postgres
  pg_database: kong
  pg_host: kong-database.devpanel.svc.cluster.local
  pg_password: xxxx
  pg_user: kong
  prefix: /kong_prefix/
  proxy_listen: 0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
  real_ip_header: proxy_protocol
  trusted_ips: 0.0.0.0/0,::/0
ingressController:
  enabled: "true"
  installCRDs: "false"
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
nodeSelector:
  groupType: on-demand
proxy:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

This is the pod created in the EKS cluster:

Name:         kong-controlle-kong-7996ccf966-5knk5
Namespace:    devpanel
Priority:     0
Node:         ip-10-0-14-46.us-west-2.compute.internal/10.0.14.46
Start Time:   Fri, 26 Mar 2021 08:55:10 +0000
Labels:       app.kubernetes.io/component=app
              app.kubernetes.io/instance=kong-controlle
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=kong
              app.kubernetes.io/version=2.3
              helm.sh/chart=kong-1.15.0
              pod-template-hash=7996ccf966
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
IP:           10.0.8.214
IPs:
  IP:           10.0.8.214
Controlled By:  ReplicaSet/kong-controlle-kong-7996ccf966
Init Containers:
  wait-for-db:
    Container ID:  docker://91f6006bbcd14e7f45220aef87d9e785dbeb81bae38f09bea0ef7b0dbcb6fee2
    Image:         kong:2.3
    Image ID:      docker-pullable://kong@sha256:b6df904a47c82dd0701dc13f65b6266908cbeb3bbeec8e0579cfbcc6fd4e791e
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      until kong start; do echo 'waiting for db'; sleep 1; done; kong stop; rm -fv '/kong_prefix//stream_rpc.sock'
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Mar 2021 08:55:11 +0000
      Finished:     Fri, 26 Mar 2021 08:55:12 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            127.0.0.1:8444 http2 ssl
      KONG_CLUSTER_LISTEN:          off
      KONG_DATABASE:                postgres
      KONG_KIC:                     on
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_WORKER_PROCESSES:  2
      KONG_PG_DATABASE:             kong
      KONG_PG_HOST:                 kong-database.devpanel.svc.cluster.local
      KONG_PG_PASSWORD:             xxxx
      KONG_PG_USER:                 kong
      KONG_PLUGINS:                 bundled
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PORT_MAPS:               80:8000, 443:8443
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
      KONG_REAL_IP_HEADER:          proxy_protocol
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_TRUSTED_IPS:             0.0.0.0/0,::/0
    Mounts:
      /kong_prefix/ from kong-controlle-kong-prefix-dir (rw)
      /tmp from kong-controlle-kong-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kong-controlle-kong-token-b9p7g (ro)
Containers:
  ingress-controller:
    Container ID:  docker://444c550f798c33909a5ae8239e84acfe3493e167c47c93959118390b056df145
    Image:         kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:1.1
    Image ID:      docker-pullable://kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller@sha256:4a4a03c9628b9cf499b85cc34dc35ea832ba0f801b9462fe73b6a8d294a07cf0
    Port:          <none>
    Host Port:     <none>
    Args:
      /kong-ingress-controller
    State:          Running
      Started:      Fri, 26 Mar 2021 08:55:13 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:                               kong-controlle-kong-7996ccf966-5knk5 (v1:metadata.name)
      POD_NAMESPACE:                          devpanel (v1:metadata.namespace)
      CONTROLLER_ELECTION_ID:                 kong-ingress-controller-leader-kong
      CONTROLLER_INGRESS_CLASS:               kong
      CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY:  true
      CONTROLLER_KONG_ADMIN_URL:              https://localhost:8444
      CONTROLLER_PUBLISH_SERVICE:             devpanel/kong-controlle-kong-proxy
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-controlle-kong-token-b9p7g (ro)
  proxy:
    Container ID:   docker://909b26117247e4446bfac4fef14bb1a0bc6baaba7f09a785fd9e757c9aad398e
    Image:          kong:2.3
    Image ID:       docker-pullable://kong@sha256:b6df904a47c82dd0701dc13f65b6266908cbeb3bbeec8e0579cfbcc6fd4e791e
    Ports:          8000/TCP, 8443/TCP, 8100/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Fri, 26 Mar 2021 08:55:14 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:status/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:      http-get http://:status/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            127.0.0.1:8444 http2 ssl
      KONG_CLUSTER_LISTEN:          off
      KONG_DATABASE:                postgres
      KONG_KIC:                     on
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_WORKER_PROCESSES:  2
      KONG_PG_DATABASE:             kong
      KONG_PG_HOST:                 kong-database.devpanel.svc.cluster.local
      KONG_PG_PASSWORD:             xxxx
      KONG_PG_USER:                 kong
      KONG_PLUGINS:                 bundled
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PORT_MAPS:               80:8000, 443:8443
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
      KONG_REAL_IP_HEADER:          proxy_protocol
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_TRUSTED_IPS:             0.0.0.0/0,::/0
      KONG_NGINX_DAEMON:            off

But I see these errors when I watch the proxy container logs:

2021/03/26 09:12:03 [error] 23#0: *12902 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:03 [error] 23#0: *12906 broken header: "" while reading PROXY protocol, client: 10.0.14.46, server: 0.0.0.0:8443
2021/03/26 09:12:05 [error] 23#0: *12921 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:07 [error] 23#0: *12941 broken header: "" while reading PROXY protocol, client: 10.0.3.78, server: 0.0.0.0:8443
2021/03/26 09:12:07 [error] 23#0: *12942 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:07 [error] 23#0: *12946 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:09 [error] 23#0: *12952 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:09 [error] 23#0: *12953 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:09 [error] 23#0: *12958 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:09 [error] 23#0: *12959 broken header: "" while reading PROXY protocol, client: 10.0.3.78, server: 0.0.0.0:8443
2021/03/26 09:12:10 [error] 23#0: *12967 broken header: "" while reading PROXY protocol, client: 10.0.3.78, server: 0.0.0.0:8443
2021/03/26 09:12:10 [error] 23#0: *12970 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:12 [error] 23#0: *12991 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:12 [error] 23#0: *12992 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:13 [error] 23#0: *12998 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:13 [error] 23#0: *12999 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:13 [error] 23#0: *13000 broken header: "" while reading PROXY protocol, client: 10.0.3.78, server: 0.0.0.0:8443
2021/03/26 09:12:14 [error] 23#0: *13005 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:17 [error] 23#0: *13040 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:18 [error] 23#0: *13055 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:18 [error] 23#0: *13057 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:18 [error] 23#0: *13060 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:18 [error] 23#0: *13061 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:19 [error] 23#0: *13066 broken header: "" while reading PROXY protocol, client: 10.0.14.46, server: 0.0.0.0:8443
2021/03/26 09:12:23 [error] 23#0: *13109 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:24 [error] 23#0: *13114 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:25 [error] 23#0: *13128 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:27 [error] 23#0: *13144 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:27 [error] 23#0: *13147 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:28 [error] 23#0: *13153 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:28 [error] 23#0: *13154 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:29 [error] 23#0: *13161 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:30 [error] 23#0: *13174 broken header: "" while reading PROXY protocol, client: 10.0.14.46, server: 0.0.0.0:8443
2021/03/26 09:12:32 [error] 23#0: *13194 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:32 [error] 23#0: *13198 broken header: "" while reading PROXY protocol, client: 10.0.14.46, server: 0.0.0.0:8443
2021/03/26 09:12:33 [error] 23#0: *13201 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:33 [error] 23#0: *13203 broken header: "��Q@xޤ=�
A����V�MmQi/`�k��V��� (4'�,BL|�:]1��qۡ0��@ev�J)���y0 �/�0�+�,̨̩��	��

Expected behavior

Can you tell me where my configuration is wrong?

Not sure what’s going on here. That “broken header” error would indicate that the PROXY protocol header is either malformed or was never added, despite the use of service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'.

Defining proxy_listen in env directly is unusual and may be causing the issue. Those values should line up with the other configuration we generate by default (at the Service and Deployment level), but since this isn’t really the intended usage, I’d recommend instead moving proxy_protocol into the proxy HTTP parameters and proxy HTTPS parameters in the chart values (see the sketch below) to see if that fixes it.
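
For example, a rough sketch of those values (assuming the kong 1.15.0 chart’s proxy.http.parameters / proxy.tls.parameters layout, and keeping your existing proxy annotations) might look like:

env:
  # real_ip_header and trusted_ips can stay in env;
  # the hand-written proxy_listen entry is dropped entirely
  real_ip_header: proxy_protocol
  trusted_ips: 0.0.0.0/0,::/0
proxy:
  http:
    parameters:
      - proxy_protocol
  tls:
    parameters:
      - http2          # chart default, kept here as an assumption
      - proxy_protocol
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"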

If that doesn’t make any difference, does the generated target group have the expected proxy_protocol_v2.enabled configuration present?
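
One way to check that is with the AWS CLI (a sketch; the ARNs are placeholders you’d look up in your own account):

# list the target groups attached to the NLB created for the proxy Service
aws elbv2 describe-target-groups --load-balancer-arn <nlb-arn>

# inspect the PROXY protocol v2 attribute on one of those target groups
aws elbv2 describe-target-group-attributes \
  --target-group-arn <target-group-arn> \
  --query "Attributes[?Key=='proxy_protocol_v2.enabled']"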

If so, you can try inspecting the actual traffic sent (for plaintext HTTP at least) with a tcpdump sidecar container:

  - name: tcpdump
    securityContext:
      runAsUser: 0
    image: corfr/tcpdump
    command:
      - /bin/sleep
      - infinity 

With that in place, you can kubectl exec -it PODNAME -c tcpdump and run tcpdump -npi any -w /tmp/debug.pcap port 8000 to see what’s actually being sent over the wire. That won’t solve the issue by itself, but it may shed light on exactly what’s being sent and why it’s malformed.
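
Put together (PODNAME is a placeholder for the actual proxy pod name), that’s roughly:

kubectl exec -it PODNAME -c tcpdump -- tcpdump -npi any -w /tmp/debug.pcap port 8000
# then copy the capture out for inspection, e.g. in Wireshark
kubectl cp PODNAME:/tmp/debug.pcap ./debug.pcap -c tcpdump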

Hello @traines. First of all, thank you very much. I realized that the Kong Ingress Controller created the ELB target groups without Proxy Protocol v2 enabled.

I tried enabling Proxy Protocol v2 by hand and it worked; I haven’t seen the problem since. However, I have built an IaC script, so does KongHQ have any way to enable this setting that I can add to my script?
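
For reference, the manual change maps roughly to this AWS CLI call (a sketch; the target group ARN is a placeholder), which is what I would otherwise have to bake into my IaC script:

aws elbv2 modify-target-group-attributes \
  --target-group-arn <target-group-arn> \
  --attributes Key=proxy_protocol_v2.enabled,Value=true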

The controller doesn’t create the target groups directly: its configuration declares a LoadBalancer type Service, and the AWS cloud provider creates an ELB that satisfies it.

When that Service has a service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*' annotation, AWS should create a target group with the PROXY protocol setting enabled. If it doesn’t, something went wrong in the AWS cloud provider code.

If you destroy the Service and create it again, do you see any errors listed in the kubectl describe svc <service name> output? That should contain status information from the cloud provider.

Name:                     kong-controlle-kong-proxy
Namespace:                devpanel
Labels:                   app.kubernetes.io/instance=kong-controlle
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=kong
                          app.kubernetes.io/version=2.3
                          enable-metrics=true
                          helm.sh/chart=kong-1.15.0
Annotations:              meta.helm.sh/release-name: kong-controlle
                          meta.helm.sh/release-namespace: devpanel
                          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
                          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
                          service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector:                 app.kubernetes.io/component=app,app.kubernetes.io/instance=kong-controlle,app.kubernetes.io/name=kong
Type:                     LoadBalancer
IP Families:              <none>
IP:                       172.20.229.231
IPs:                      <none>
LoadBalancer Ingress:     af91eed4d362543b0b834f60f7a68e7d-dcf57ffd1b3ae717.elb.us-west-1.amazonaws.com
Port:                     kong-proxy  80/TCP
TargetPort:               8000/TCP
NodePort:                 kong-proxy  32546/TCP
Endpoints:                10.0.11.201:8000,10.0.11.63:8000
Port:                     kong-proxy-tls  443/TCP
TargetPort:               8443/TCP
NodePort:                 kong-proxy-tls  31481/TCP
Endpoints:                10.0.11.201:8443,10.0.11.63:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:

No errors

I would check with AWS, then. Their LoadBalancer creation code thinks it has handled that annotation correctly, but it didn’t actually configure the PROXY protocol setting on the target group.