AKS, LinkerD and Kong - Issue with Client IP Address

Hi there,

I am trying to set up Kong in my K8S environment. My environment is running the following:

  • Azure Kubernetes Service
  • LinkerD Service Mesh
  • Nginx Ingress

I wish to replace the Nginx Ingress with Kong but am running into an issue when trying to use the IP-Restriction plugin.

I have been through various posts on here as well as lots of documentation, but everything I do results in the same problem. After hitting my service via the Ingress, I receive a response of

"message":"Your IP address is not allowed"

As some background, when my ingress is using nginx with the nginx.ingress.kubernetes.io/whitelist-source-range annotation, everything works as expected.

I have set up two Kong plugins, ip-restriction and request-transformer (the latter to add the l5d-dst-override header), and applied them to my ingress. When I look at the proxy container in the Kong pod, the requests always appear to come from 127.0.0.1.
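For reference, the two plugin resources look roughly like this (the names, allowed CIDR and upstream service address are placeholders for my real values; on older Kong versions the ip-restriction key is whitelist rather than allow):

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: restrict-ips              # placeholder name
plugin: ip-restriction
config:
  allow:                          # "whitelist" on older Kong versions
    - 203.0.113.0/24              # placeholder for the real allowed range
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-l5d-dst-override      # placeholder name
plugin: request-transformer
config:
  add:
    headers:
      - "l5d-dst-override:my-service.my-namespace.svc.cluster.local:80"   # placeholder upstream

The ingress then references both plugins via an annotation along the lines of konghq.com/plugins: restrict-ips, add-l5d-dst-override.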

I am installing Kong using the following helm command:

helm upgrade --namespace my-namespace --install --set ingressController.installCRDs=false,proxy.externalTrafficPolicy=Local,env.real_ip_recursive="on",env.trusted_ips="0.0.0.0/0\,::/0" --wait my-kong-chart-name kong/kong

As you can see from the install, I have tried setting externalTrafficPolicy to Local as well as setting real_ip_recursive and trusted_ips.

This is driving me insane now and I am sure it is something simple. Can anyone help?

How are you directing traffic to the Kong proxy? Is it set up to create its own LoadBalancer Service with an internet-facing load balancer, or is it behind the existing NGINX instance? If Kong is bound to its own external load balancer, is that load balancer HTTP-aware or does it operate at a lower layer (TCP or IP, usually)?

Linkerd will add an additional complication to the mix if Kong’s proxy is not at the edge, as it injects proxies at the Pod level. Seeing 127.0.0.1 specifically suggests that’s quite possibly the case, since traffic internal to a Pod normally shows up as localhost.

You ultimately won’t want to trust 0.0.0.0/0 (or ::/0), as that will allow anything to send trusted X-Forwarded-For information (and masquerade as any IP they choose, defeating the purpose of the restriction plugin). However, if you’ve trusted that and are still not seeing any allowed traffic, that indicates that you’re probably not receiving any HTTP-based forwarding information. https://github.com/linkerd/linkerd2/issues/4219 indicates that Linkerd cannot send it, but it should still forward on existing headers.
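Once you do know which hop sits directly in front of Kong, the idea is to trust only that source rather than the whole internet, e.g. something along these lines in the chart values (the CIDR here is purely a placeholder for your load balancer / SNAT range):

env:
  real_ip_header: "X-Forwarded-For"
  real_ip_recursive: "on"
  trusted_ips: "10.240.0.0/16"   # placeholder: only the hop(s) that legitimately add X-Forwarded-For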

Kong is creating its own LoadBalancer after running the helm command. It creates a service of type LoadBalancer, which then creates the load balancer on Azure, assigning a public IP to the service. When installing nginx, the exact same process happens. I believe that the Azure load balancer is L4, if that answers your question. It is also worth mentioning that the same load balancer is used for the Kong service and the nginx service, just with different public IP addresses and routes. This is Azure's behaviour and not implemented by me. The load balancer is internet facing and there is nothing else in front of it.

I was thinking that LinkerD was adding an extra complication, so I disabled it for the namespace in question and redeployed Kong as well as the upstream services. I checked that there were no LinkerD proxies in any of the pods. The behaviour was exactly the same. nginx has no problems receiving the client IP in the exact same configuration, so I believe that LinkerD is not the issue.

I agree that I do not want to trust the open IP configuration; I am just using that to attempt to get this working. I will of course lock that down to the load balancer IP once I know it works as expected.

One thing I have noticed is that despite setting environment variables in the helm command (real_ip_recursive, trusted_ips and real_ip_header), I do not see these set in the proxy container of the Kong pod. Is this the correct way to set these variables? I have also tried passing a values.yaml file to the helm command containing the variables as per the following link, but they still do not get set.
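For completeness, the values.yaml I have been passing looks roughly like this (my understanding is that entries under env should be rendered as KONG_* environment variables on the proxy container, which is exactly what I am not seeing):

proxy:
  externalTrafficPolicy: Local
env:
  real_ip_header: "X-Forwarded-For"
  real_ip_recursive: "on"
  trusted_ips: "0.0.0.0/0,::/0"   # temporary, will be locked down once this works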

An update to this issue. I have manually deployed Kong into a test namespace along with my upstream service. I got rid of the three environment variables and disabled TLS (cert-bot) on the ingress.

The IP restriction then worked as expected. I then added TLS back onto the ingress and everything still works over https. So it appears that LinkerD is the issue here. I need to work out how to set up LinkerD so that the client IP is persisted through to the upstream service. Thanks again for the pointers!

It turns out I need some further help :frowning:

I need to add an annotation to the Kong ingress controller. How would I go about doing this?

To the controller itself? Annotations are handled the same way across all Kubernetes resources, so depending on what exactly you need to modify (presumably either the Deployment or the Service), you’ll edit the manifest and add something like:

metadata:
  annotations:
    example.com/exampleannotation: "foo"

In my case I was able to add the annotation by using the podAnnotations field within the Helm values.yaml. I got this all working in the end by using that property to instruct LinkerD to skip port 8443 on the ingress controller, which appears to be the port Kong uses for SSL traffic.
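For anyone who runs into the same thing, the relevant part of my values.yaml ended up looking roughly like this (double-check the annotation name against the current LinkerD docs before relying on it):

# Ask LinkerD's injector not to intercept inbound traffic on Kong's TLS port
podAnnotations:
  config.linkerd.io/skip-inbound-ports: "8443"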