Kong not sending x-forwarded-for header to upstream

We are using Kong proxy 2.2.1 and kong-ingress-controller 0.9.1 on EKS. Our EKS nodes, kube-proxy, and Pods use the CGNAT CIDR range (100.x.x.x).

We are having trouble getting the client’s correct IP in our services. It appears that Kong is not sending the x-forwarded-for header to the upstream, so the upstream cannot determine the client’s IP address correctly. It does, however, send an x-forwarded-by header.

Here is what the request flow looks like for our setup:

Client Browser -> AWS ALB -> Kong NLB -> Kong POD -> Service1 (Tomcat) -> Service2 (Netty)

In this example:
168.2xx.xx.xxx → actual client IP
100.64.148.206 → IP of kube-proxy
100.64.96.127 → IP of Kong Pod
100.64.91.26 → IP of Service1 POD

Service 1 passes along all the “x-forwarded-*” headers to Service 2 as received.
Service 1 seems to get the correct client IP, but Service 2 is unable to do so. I suspect this is due to the missing x-forwarded-for header.
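For context, a service that wants the real client IP from a forwarding chain typically walks it right to left, skipping hops that belong to trusted proxy ranges. A minimal sketch of that logic (the IP addresses and trusted ranges here are made-up examples, not our real addresses):

```python
import ipaddress

# Illustrative only: how an upstream service can recover the client IP from
# an X-Forwarded-For chain when its direct peers sit in trusted proxy ranges.
TRUSTED_NETS = [ipaddress.ip_network(c) for c in ("10.0.0.0/8", "100.64.0.0/10")]

def client_ip(xff_header, peer_addr):
    """Walk the chain right to left and return the first untrusted address."""
    chain = [ip.strip() for ip in xff_header.split(",")] + [peer_addr]
    for addr in reversed(chain):
        if not any(ipaddress.ip_address(addr) in net for net in TRUSTED_NETS):
            return addr
    return chain[0]  # every hop was a trusted proxy; fall back to left-most

print(client_ip("168.200.10.10, 100.64.148.206", "100.64.96.127"))
# -> 168.200.10.10
```

Without the x-forwarded-for header, a service behind another service has nothing to walk, which is exactly the symptom at Service 2.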

Related Kong logs

[notice] 25#0: *2364795 [kong] handler.lua:271 [my-plugin] :kong.request.get_forwarded_host:qa.mysite.com, client: 168.2xx.xx.xxx, server: kong, request: "POST /token/api/uri HTTP/1.1", host: "qa.mysite.com"
[notice] 25#0: *2364795 [kong] handler.lua:275 [my-plugin] :Header: x-forwarded-for: 168.2xx.xx.xxx, client: 168.2xx.xx.xxx, server: kong, request: "POST /token/api/uri HTTP/1.1", host: "qa.mysite.com"

Related Service 1 logs

[https-jsse-nio-8443-exec-9] com.mysite.service1.filter.LoggingFilter - RemoteIP: 168.2xx.xx.xxx scheme: https URL: /token/api/uri QS: null
[https-jsse-nio-8443-exec-9] com.mysite.service1.filter.LoggingFilter - x-forwarded-by: 100.64.148.206, 100.64.96.127
[https-jsse-nio-8443-exec-9] com.mysite.service1.filter.LoggingFilter - x-forwarded-proto: https
[https-jsse-nio-8443-exec-9] com.mysite.service1.filter.LoggingFilter - x-forwarded-host: qa.mysite.com
[https-jsse-nio-8443-exec-9] com.mysite.service1.filter.LoggingFilter - x-forwarded-port: 443
[https-jsse-nio-8443-exec-9] com.mysite.service1.filter.LoggingFilter - x-forwarded-path: /token/api/uri
[https-jsse-nio-8443-exec-9] com.mysite.service1.filter.LoggingFilter - x-real-ip: 168.2xx.xx.xxx

Related Service 2 logs

[boundedElastic-8] com.mysite.service2.filter.LoggingFilter - RemoteIP: 100.64.91.26 URL: /token/api/uri QS: {}
[boundedElastic-8] com.mysite.service2.filter.LoggingFilter - x-forwarded-proto: https
[boundedElastic-8] com.mysite.service2.filter.LoggingFilter - x-forwarded-port: 443
[boundedElastic-8] com.mysite.service2.filter.LoggingFilter - x-forwarded-host: qa.mysite.com
[boundedElastic-8] com.mysite.service2.filter.LoggingFilter - x-forwarded-by: 100.64.148.206, 100.64.96.127
[boundedElastic-8] com.mysite.service2.filter.LoggingFilter - x-forwarded-path: /token/api/uri

Our Kong deployment is configured with the following environment variables:

- name: KONG_TRUSTED_IPS
  value: "10.0.0.0/8,100.0.0.0/8"
- name: KONG_REAL_IP_RECURSIVE
  value: "on"
- name: KONG_REAL_IP_HEADER
  value: "X-Forwarded-For"

What am I missing here?

Service 1 logging the original client IP indicates that the information is being sent upstream of Kong, and Kong logging the original client IP indicates that Kong’s trust configuration is correct.

Tomcat’s logs show a non-standard x-forwarded-by header with the original client IP chopped off. Do you perhaps have some configuration there that transforms or manipulates X-Forwarded-For?

Thanks for the response.

Service1 is not transforming or manipulating the X-Forwarded-For header. The Service1 logs come from a logging filter that is invoked before any application logic runs, so the non-standard x-forwarded-by header is received by Service1, not generated by it. It is then passed along as-is to Service2.

Kong is somehow setting the x-real-ip header even though the nginx real IP header name is configured to be X-Forwarded-For.

Looking some more at the underlying implementation, real_ip_header only deals with the inbound request, i.e. which header Kong inspects to retrieve previous forwarding information. That much does appear to be working; otherwise X-Real-IP would be the kube-proxy IP at Service1.

Upstream, Kong will always send both X-Real-IP and X-Forwarded-For.

X-Forwarded-For should always be present upstream, containing at least the hop immediately before Kong (the kube-proxy).

X-Forwarded-By doesn’t appear anywhere in the Kong codebase, so it’d need to come from either:

  • Something in between Kong and Service1 (possibly Service1’s Pod contains multiple containers?)
  • A non-standard plugin that manipulates the upstream_x_forwarded_for variable.

To determine whether that change happens within Kong or after it, my standard approach is to temporarily modify the Kong Deployment to include a tcpdump container, which can see traffic from the other containers in the Pod, and to configure an unencrypted HTTP upstream if necessary. You can then exec into that container and run something like

tcpdump -npi any -As0 host 100.64.91.26

(possibly using the Service IP depending on how you’ve configured the Kong service) to see what’s being sent before any logging. If X-Forwarded-By is present there, some plugin is setting it and clearing out the standard upstream_x_forwarded_for variable. If not, that change is happening upstream.
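For reference, a minimal sketch of such a debug sidecar in the Kong Deployment’s Pod template (the container name and image are my own choices, not anything Kong ships):

```yaml
# Debug-only addition to the Kong Deployment's Pod template.
# The image is illustrative; any image that ships tcpdump will do.
spec:
  template:
    spec:
      containers:
        - name: tcpdump
          image: nicolaka/netshoot
          command: ["sleep", "infinity"]
          securityContext:
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]  # needed for packet capture
```

You can then kubectl exec into the tcpdump container and run the capture command above. Remember to remove the sidecar when you’re done.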

Thanks for the pointers.

It was Tomcat’s RemoteIpValve in Service1 that was stripping out the X-Forwarded-For header and adding the X-Forwarded-By header.

I made the necessary adjustments and am now able to see the correct remote client IP.
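For anyone hitting the same thing: RemoteIpValve removes the proxy hops it trusts from X-Forwarded-For and collects them into the header named by proxiesHeader (X-Forwarded-By by default), so when every hop matches, nothing is left to forward downstream. A sketch of the kind of server.xml entry that produces this behaviour (the attribute values are illustrative, not our actual config):

```xml
<!-- conf/server.xml, inside the <Host> element. Illustrative values only. -->
<!-- Hops matching trustedProxies are stripped from X-Forwarded-For and
     moved into the proxiesHeader; the remaining left-most address becomes
     request.getRemoteAddr(). That is why Service1 saw the real client IP
     while Service2 received no X-Forwarded-For at all. -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       proxiesHeader="X-Forwarded-By"
       trustedProxies="100\.\d{1,3}\.\d{1,3}\.\d{1,3}" />
```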
