We have Kong running on ECS in a Docker container, behind an Elastic Load Balancer. We've been getting 502 responses back from Kong. Our setup is the following:
clients -> Elastic Load Balancer -> Kong ECS -> Kong Docker containers -> microservice load balancer -> microservice ECS -> microservice containers
When checking the CloudWatch logs I found this: 2017/12/14 09:35:56 [error] 53#0: *273045 upstream prematurely closed connection while reading response header from upstream, client: …, server: kong, request: "POST /v1/user/settings HTTP/1.1", upstream: "https://x:443/user/settings", host: …
As a test I started using HTTP routes instead, and the error I'm getting now is:
2017/12/18 12:07:58 [error] 53#0: *38590 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.0.32.239, server: kong, request: "POST /v1/user/settings HTTP/1.1", upstream: "http://x:80/user/settings", host: …
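To make sure I understand what nginx is reporting here, I put together a minimal local sketch (hypothetical, not our actual stack) of an upstream that accepts a request and then closes the socket before writing any response. As far as I can tell, that is exactly the condition behind "upstream prematurely closed connection", which the proxy then turns into a 502 for the client:

```python
import http.client
import socket
import threading

def premature_upstream(server_sock):
    """Accept one connection, read the request,
    then close the socket without writing any response."""
    conn, _ = server_sock.accept()
    conn.recv(4096)  # consume the request bytes
    conn.close()     # close before responding

# stand-in for the misbehaving upstream, bound to a random local port
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=premature_upstream, args=(server,), daemon=True).start()

client = http.client.HTTPConnection("127.0.0.1", port)
client.request("POST", "/user/settings", body="{}")
try:
    client.getresponse()
    result = "got a response"
except (http.client.RemoteDisconnected, ConnectionResetError):
    # the proxy surfaces this same condition as a 502 to its client
    result = "upstream prematurely closed connection"
print(result)
```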
I’m a bit at a loss, and I’m not really sure where the problem is. I’m using Kong 0.11.2.
It doesn't seem to hit our microservices when this error occurs, so that makes me think it must be something at the Kong level.
I've been thinking that maybe it's related to Connection: keep-alive, and somehow our microservices not honoring this or closing the connections. Is there a way for me to make Kong omit Connection: keep-alive when it makes the upstream requests, as a test?
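Before changing anything in Kong itself, one way I could test the keep-alive theory is a small probe run directly against the microservice load balancer (the helper name `keeps_alive` and any host/path I'd pass in are my own, hypothetical): send two requests over one TCP connection and report whether the server closed it after the first response.

```python
import http.client

def keeps_alive(host, port, path="/", timeout=5):
    """Send two GET requests over a single TCP connection; return False
    if the server refuses keep-alive or drops the connection."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        for _ in range(2):
            conn.request("GET", path)
            resp = conn.getresponse()
            resp.read()
            if resp.will_close:  # server answered with "Connection: close"
                return False
        return True
    except (http.client.RemoteDisconnected, ConnectionResetError):
        # connection dropped mid-exchange, like the recv() error above
        return False
    finally:
        conn.close()
```

If this returns False against the microservice load balancer but the service still advertises keep-alive to Kong, that mismatch could explain the resets.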
Any other ideas? I’m not seeing much more in the logs except for these errors. I have the following setup:
The route is configured like this (we're using kongfig):
- name: "user_settings"
- name: jwt
We're using Postgres as the datastore.