Configure proxy_protocol with AWS, Kubernetes and Kong

I’ve been looking into getting the X-Forwarded-For header populated behind an AWS TCP ELB, which requires me to set up proxy_protocol and to set the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol annotation on the service.

As far as I can gather, configuring kong-proxy to use proxy_protocol involves setting KONG_PROXY_LISTEN_SSL='0.0.0.0:8443 ssl proxy_protocol'.

Issuing a request through the ELB now results in an error:

* TCP_NODELAY set
* Connected to x.com (x.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
* stopped the pause stream!
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number

KONG CONFIG

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kong-proxy
  namespace: kong
spec:
  template:
    metadata:
      labels:
        name: kong-proxy
    spec:
      containers:
        - name: kong-proxy
          image: kong:1.1.0
          env:
            - name: KONG_PROXY_LISTEN_SSL
              value: '0.0.0.0:8443 ssl proxy_protocol'
            - name: KONG_SSL_CERT
              value: "/certs/tls.crt"
            - name: KONG_SSL_CERT_KEY
              value: "/certs/tls.key"
            - name: KONG_ADMIN_LISTEN
              value: "0.0.0.0:8001"
          ports:
          - containerPort: 8000
            protocol: TCP
          - containerPort: 8443
            protocol: TCP
          - containerPort: 8001
            protocol: TCP
          volumeMounts:
            - name: api-cert
              readOnly: true
              mountPath: "/certs"
      volumes:
        - name: api-cert
          secret:
            secretName: api-cert
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: kong-proxy-external
  name: kong-proxy-external
  namespace: kong
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    name: kong-proxy
    tier: frontend
  sessionAffinity: None
  type: LoadBalancer

It seems like either the load balancer is not actually following the proxy protocol or Kong’s deployment didn’t get updated.

Is it possible for you to log in to AWS and check the properties of the ELB, to make sure that it is using proxy protocol on port 443?
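For example with the classic ELB CLI, something like this (a sketch; the load balancer name is a placeholder):

aws elb describe-load-balancer-policies --load-balancer-name <elb-name>
aws elb describe-load-balancers --load-balancer-names <elb-name>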

Load Balancer settings

{
    "PolicyDescriptions": [
        {
            "PolicyName": "k8s-proxyprotocol-enabled",
            "PolicyTypeName": "ProxyProtocolPolicyType",
            "PolicyAttributeDescriptions": [
                {
                    "AttributeName": "ProxyProtocol",
                    "AttributeValue": "true"
                }
            ]
        }
    ]
}

{
    "LoadBalancerDescriptions": [
        {
            "ListenerDescriptions": [
                {
                    "Listener": {
                        "Protocol": "TCP",
                        "LoadBalancerPort": 443,
                        "InstanceProtocol": "TCP",
                        "InstancePort": 30953
                    },
                    "PolicyNames": []
                }
            ],
            "Policies": {
                "AppCookieStickinessPolicies": [],
                "LBCookieStickinessPolicies": [],
                "OtherPolicies": [
                    "k8s-proxyprotocol-enabled"
                ]
            },
            "BackendServerDescriptions": [
                {
                    "InstancePort": 30953,
                    "PolicyNames": [
                        "k8s-proxyprotocol-enabled"
                    ]
                }
            ]
        }
    ]
}

When curl-ing the service

curl https://kong-proxy/ -vvv
* TCP_NODELAY set
* Connected to kong-proxy (x.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
* stopped the pause stream!
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number

results in this log message

kong-proxy-666c7dfd79-6g8rr kong-proxy-666c7dfd79-ft4vz kong-proxy x.x.36.0 - - [14/Aug/2019:14:37:39 +0000] "PROXY TCP4 x.x.192.134 x.x.192.134 44810 30953" 400 12 "-" "-"

I should also note that kong-proxy runs within a Kubernetes cluster.

Is this environment variable still valid? It does not seem to take effect when I look at /usr/local/kong/nginx-kong.conf:

server {
    server_name kong;
    listen 0.0.0.0:8000;
    listen 0.0.0.0:8443 ssl;
    error_page 400 404 408 411 412 413 414 417 494 /kong_error_handler;
    error_page 500 502 503 504 /kong_error_handler;
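
That excerpt came straight from one of the running pods, roughly like this (the pod name is a placeholder):

kubectl -n kong exec <kong-proxy-pod> -- grep -n 'listen ' /usr/local/kong/nginx-kong.conf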

That was deprecated a long time ago and is now removed.
Please use KONG_PROXY_LISTEN instead.
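In the Deployment above that would be roughly the following (an untested sketch; to keep the plain HTTP listener as well, prefix the value with '0.0.0.0:8000, '):

            - name: KONG_PROXY_LISTEN
              value: '0.0.0.0:8443 ssl proxy_protocol'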

I changed the name of the env var to KONG_PROXY_LISTEN and now the server directive is as follows:

server {
    server_name kong;
    listen 0.0.0.0:8443 ssl proxy_protocol;
    error_page 400 404 408 411 412 413 414 417 494 /kong_error_handler;
    error_page 500 502 503 504 /kong_error_handler;

The kong-proxy answers as expected, but I do not see the real client IP in the logs:

x.x.195.177 - - [15/Aug/2019:13:35:10 +0000] "GET /api/ HTTP/1.1" 401 26 "-" "curl/7.54.0"

Am I missing something, or do I need to create an Nginx template as suggested at https://docs.konghq.com/1.1.x/logging/ to add $remote_addr to the log?

I don’t think $remote_addr is the same as the real IP.
The IP you will see in $remote_addr will be the IP of your load balancer.
I’m not too sure about this, so please take that with a grain of salt.

Sorry for the confusion, I meant real_ip. But the way forward is to create a custom_nginx.template and set the log_format?

Yes.

I’m not sure about this hack, but it is worth a try: override the remote_addr variable with the real IP, and then you would have it in your logs.
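In a custom template that could look something like this inside the Kong server block (an untested sketch; the CIDR is a placeholder for whatever ranges the ELB traffic arrives from):

    set_real_ip_from  10.0.0.0/8;      # placeholder: trust the PROXY protocol header from these addresses
    real_ip_header    proxy_protocol;  # rewrite $remote_addr with the client IP taken from the PROXY header

If I remember right, Kong’s own real_ip_header and trusted_ips settings render into these directives, but I have not checked that on 1.1.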

Also, you might not need a custom template and could get away with using Kong’s Nginx directive injection feature to inject the log_format directive.
I’ve not tested this, though.
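Roughly, that would mean one more env var on the Deployment, something like the following (completely untested; the format name and fields are made up, and the access log would still have to reference that format name, e.g. via proxy_access_log or a custom template):

            - name: KONG_NGINX_HTTP_LOG_FORMAT
              value: "show_client_ip '$proxy_protocol_addr - $remote_user [$time_local] \"$request\" $status'"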

Do you know where I can find documentation on that?

If I go with a custom template, would it be advisable to start the container with

kong start --nginx-conf /etc/kong/nginx.template

and bypass docker-entrypoint.sh?
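
What I have in mind is roughly the following on the kong-proxy container (a sketch; the ConfigMap name is made up, and overriding command does bypass the image entrypoint):

          command: ["kong", "start", "--nginx-conf", "/etc/kong/nginx.template"]
          volumeMounts:
            - name: kong-template
              mountPath: /etc/kong
      volumes:
        - name: kong-template
          configMap:
            name: kong-nginx-template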