Upstream-timeout plugin not working - timeout still at 60 seconds

Hello,
We recently upgraded our Kong deployment to Enterprise version 3.10.0.

We are trying to increase Kong's default timeout of 60 seconds using the upstream-timeout plugin, as shown below:
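(The original plugin configuration was posted as a screenshot. For readers following along, enabling the upstream-timeout plugin via the Admin API typically looks like the sketch below; the service name `lead-status`, the Admin API address, and the 300000 ms values are illustrative placeholders, not the poster's actual settings.)

```shell
# Hypothetical sketch: attach the upstream-timeout plugin to a service.
# Service name, Admin API address, and timeout values are placeholders.
# Timeouts are expressed in milliseconds.
curl -i -X POST http://localhost:8001/services/lead-status/plugins \
  --data "name=upstream-timeout" \
  --data "config.connect_timeout=300000" \
  --data "config.send_timeout=300000" \
  --data "config.read_timeout=300000"
```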

However, even with this, the requests are still timing out at 60 seconds. Could you please help here? Are we missing anything?


@gkn103 Could you try asking our developer site AI agent your question in text form and see if you get helpful guidance?

Hi Rick,
The AI wasn't very helpful; it asked me to reach out to Kong Support:

I’ve tried to reproduce this issue on the same version of Kong, but it seems to work in my environment. I wonder if this error is coming from another source, not Kong. Can you verify there are no other proxies involved or the upstream itself isn’t timing out after 60 seconds? You can also add a file-log or http-log plugin to try to see if the 504 is coming from Kong or not.
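Attaching the file-log plugin as suggested is a one-liner against the Admin API. A minimal sketch, assuming the Admin API on localhost:8001 and a service named `lead-status` (both placeholders):

```shell
# Hypothetical sketch: enable file-log on the service so Kong records
# the response status it generated, making it possible to tell whether
# the 504 originated in Kong or somewhere else.
curl -i -X POST http://localhost:8001/services/lead-status/plugins \
  --data "name=file-log" \
  --data "config.path=/tmp/kong-access.log"
```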

Hello @Brent_Yarger , Thank you for your response.

I tried changing the timeout value in the plugin to 50 seconds, but the timeout still happens at 60 seconds. Doesn't that mean that, for some reason, the upstream-timeout plugin isn't even being considered?

Below is the log from the gateway pod when the call is made:

172.29.101.159 - - [25/Mar/2026:18:00:05 +0000] "GET /metrics HTTP/1.1" 200 14822 "-" "Datadog Agent/7.55.2"

2026/03/25 18:00:05 [info] 2669#0: *66848 client 172.29.101.159 closed keepalive connection

2026/03/25 18:00:10 [error] 2669#0: *66784 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 172.29.101.110, server: kong, request: "GET /lead-status/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status HTTP/1.1", upstream: "https://3.226.77.72:443/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status", host: "api-ndc-qa.i.mercedes-benz.com", request_id: "5bcd4a2cf44a033d3ece3556d77bf60f"

2026/03/25 18:00:19 [info] 2669#0: *66784 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while reading response header from upstream, client: 172.29.101.110, server: kong, request: "GET /lead-status/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status HTTP/1.1", upstream: "https://3.225.14.243:443/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status", host: "api-ndc-qa.i.mercedes-benz.com", request_id: "5bcd4a2cf44a033d3ece3556d77bf60f"

2026/03/25 18:00:19 [notice] 2669#0: *66784 [lua] request_logger.lua:48: log(): ONEAPILOG_START {"passed_authentication":true,"service":"lead-status-kong-gateway-svc","target_url":"https://mlms-lead-status-qa-v2.s.dipm.np.aws.mbride.net:443/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status","application_id":"dc66a941-7935-4797-9ed7-abab7c7c0d7d","service_id":"7a820955-2d73-5353-bd07-cb10cfa1b5bc","cache":{"introspection":{"dur":592,"src":"lookup"},"consumer":{"dur":1,"src":"redis"}},"response_status_code":499,"method":"GET","environment":"testing","request_uri":"http://api-ndc-qa.i.mercedes-benz.com:80/lead-status/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status","latencies":{"total":60001,"kong":50651,"plugin":593,"upstream":9350},"introspection_duration":433,"api_version_short_name":"retrieve_lead_status_kong_api-v1","introspection_success":true} ONEAPILOG_END while logging request, client: 172.29.101.110, server: kong, request: "GET /lead-status/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status HTTP/1.1", upstream: "https://3.225.14.243:443/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status", host: "api-ndc-qa.i.mercedes-benz.com", request_id: "5bcd4a2cf44a033d3ece3556d77bf60f"

172.29.101.110 - - [25/Mar/2026:18:00:19 +0000] "GET /lead-status/leads/87cf2578-e869-4fd0-88f8-8f1311bc43a0/status HTTP/1.1" 499 0 "-" "PostmanRuntime/7.3.0" kong_request_id: "5bcd4a2cf44a033d3ece3556d77bf60f"

2026/03/25 18:00:19 [error] 2669#0: *65960 send() failed (111: Connection refused), context: ngx.timer

2026/03/25 18:00:19 [error] 2669#0: *65960 [kong] statsd_logger.lua:89 failed to send data to 172.29.10

Also, I added the file-log plugin and here's the output:
It looks like it is Kong that is timing out at 60 seconds.
What do you think?

Kong is set to retry 5 times in your example above. Can you set the service's retries parameter to zero? When Kong is configured with upstream-timeout and retries, it will time out after the configured upstream-timeout, but then it will retry based on the service's "retries" configuration.
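Assuming the Admin API on localhost:8001 and a service named `lead-status` (both placeholders), disabling retries could look like:

```shell
# Hypothetical sketch: set retries to 0 so Kong makes exactly one
# upstream attempt and the observed timeout reflects the plugin's
# configured value directly.
curl -i -X PATCH http://localhost:8001/services/lead-status \
  --data "retries=0"
```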

Hi @Brent_Yarger ,

I changed the retries to 0.

Observations:

When I keep the timeout value at 50 seconds, the API call times out at 50 seconds with "upstream timed out" - expected behaviour.

When I keep the timeout value at 300 seconds, the API call still times out at 60 seconds with a 504.

Adding the screenshot for the error when the value is set to 50 seconds:

Yes, this is exactly the behavior I'd expect if the upstream or some other component has a 60-second timeout. In this case I'm fairly certain that the HTML message for the 60-second timeout is coming from something else, not Kong. I've verified this works as expected in my own environment as well.

Hi Brent, We spoke to our AWS infra team and they tried changing the load balancer timeout from 60 seconds to 300 seconds, but the requests are still timing out at 60 seconds.

Is there any definitive test we can do on our end to conclusively rule out Kong as the source of the timeout? That is the only way we can get the infra team to dig deeper into the issue; right now they are pushing back, saying the issue is on Kong's end.
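One way to take the AWS load balancer out of the picture entirely would be to call the Kong proxy pod directly and time the request. A sketch, assuming you can kubectl port-forward to the gateway pod (pod name, namespace, port, and the lead ID are all placeholders, and any required auth headers are omitted):

```shell
# Hypothetical sketch: bypass the ALB by port-forwarding to the Kong
# proxy pod, then time a request that is known to exceed 60 seconds.
# If it still fails at exactly 60s, the timeout is inside Kong or the
# upstream; if it runs longer, the 60s limit sits in front of Kong.
kubectl -n kong port-forward pod/kong-gateway-0 8000:8000 &
time curl -sS -o /dev/null -w "%{http_code}\n" \
  -H "Host: api-ndc-qa.i.mercedes-benz.com" \
  "http://localhost:8000/lead-status/leads/<lead-id>/status"
```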

Thank you in advance.