Kong session handling

Hi Team, we are facing an issue with Kong sticky sessions.

Setup: Kong runs on Kubernetes, and the upstream application also runs on Kubernetes in a different namespace.
Kong is exposed via a NodePort and the application is exposed via a ClusterIP.

So we have a few questions:
1. Does the Kong pod establish the session with the application pod, or with the application Service endpoint?
2. Does the keepalive timeout in Kong control the DNS refresh interval at Kong?

Kong establishes a session with the application, but when the application pods are deleted, Kong does not refresh itself and still refers to the old IPs of the application pods.

The workaround we have found is to restart the Kong pod, which corrects the entries in Kong.
Has anyone faced a similar issue?

When you say "session", what are you referring to specifically? It sounds like you may be talking about the TCP session/connection between Kong and the upstream application exposed by a K8S Service, but I’m not entirely sure.

If using the ingress controller, the controller should keep track of Service Pods going offline and coming online, and will update Kong target entries in the corresponding upstream automatically (it watches for Service status changes). That does take a small amount of time, but it usually updates in fairly short order. If it’s not, you may want to check the controller logs to see if it’s having issues pushing config to Kong.

If you do not use the controller and use K8S Service hostnames in your Kong service, Kubernetes’ service discovery system is responsible for introducing and removing Pod IPs. It’s DNS-based and will also have some lag time before everything acknowledges changes based on record TTLs.
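If DNS caching turns out to be the culprit, a minimal sketch of where that is tunable on the Kong side follows. The setting names (dns_stale_ttl, dns_not_found_ttl) and their KONG_-prefixed environment variable forms exist in recent Kong releases, but check them against your version's kong.conf; the names and values here are illustrative:

```yaml
# Illustrative snippet from a Kong Deployment: tighten Kong's DNS caching so
# record changes for the upstream hostname are picked up sooner.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong
spec:
  template:
    spec:
      containers:
        - name: proxy
          image: kong:2.8        # example tag, use your own
          env:
            # Seconds Kong may keep serving a stale record while refreshing it.
            - name: KONG_DNS_STALE_TTL
              value: "4"
            # Seconds that "name not found" answers are cached.
            - name: KONG_DNS_NOT_FOUND_TTL
              value: "5"
```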

Existing open connections may also be reused, though the specifics of how that plays out in practice are a bit too complex to get into at the start.

Can you confirm whether the controller is in use, provide logs from both the controller (if in use) and Kong (at log_level: debug) around the time you’re seeing this issue, and show how you’re detecting that the stale entry is present?
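For the Kong side, a minimal way to turn on debug logging in a Kubernetes Deployment (assuming the standard container image, which maps KONG_-prefixed environment variables onto kong.conf settings) is:

```yaml
# Illustrative env addition to the Kong proxy container; KONG_LOG_LEVEL maps
# to the log_level setting. Debug logging is very verbose, so revert it once
# you have captured the window around the Pod deletion.
env:
  - name: KONG_LOG_LEVEL
    value: "debug"
```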

Thanks for the reply. We are not using the controller; we are using the K8S Service hostname as the Kong service hostname, and we are seeing the issue where Kong holds the session with the old pod after a new pod is created in the upstream.

The ingress controller can handle that directly since it watches for updates from the API server, but Kong cannot–it doesn’t have the ability to watch and must rely on DNS and/or connection resets on the existing connections.

You may want to review the suggestion/info in https://github.com/Kong/charts/pull/116 and https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/ regarding how Pods handle termination. That specific change won’t help here (it adjusts how Kong itself terminates so that clients sending requests to it are handled gracefully; you’d need the equivalent change on the upstream Pods so they gracefully handle the requests coming in from Kong), but it’s similar to the changes you’d need to make on your own Pods/applications.
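As a sketch of what that looks like on the upstream side (the names, sleep duration, and grace period are illustrative and should be tuned to your traffic), the Pod spec change is roughly:

```yaml
# Illustrative upstream Deployment snippet: keep the old Pod serving briefly
# after it is marked for termination, so Kong's existing connections and any
# cached addressing have time to drain/refresh before the Pod disappears.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-upstream-app               # illustrative name
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: app
          image: my-upstream-app:1.0  # illustrative image
          lifecycle:
            preStop:
              exec:
                # Delay SIGTERM so in-flight requests from Kong can finish.
                command: ["/bin/sh", "-c", "sleep 15"]
```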

Keepalive behavior is tunable to a degree and you can send a Connection: close header to avoid them entirely (note that you can use a serverless plugin instance as an alternative to a full custom plugin). I’d recommend exploring termination policy first, however, since keepalive changes also affect performance when the upstream Pod is running normally, not just when it terminates–sending Connection: close in particular is pretty drastic, since it forces a new TCP connection for each request, which can increase overall request latency considerably.
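For completeness, if you do want to experiment with Connection: close, a minimal sketch using the pre-function serverless plugin follows. This assumes a Kong 2.x-style pre-function config with per-phase code blocks and declarative (DB-less) configuration; the service name and upstream URL are illustrative, and the same plugin config can be created through the Admin API instead:

```yaml
# Illustrative kong.yml fragment: attach a pre-function plugin to the service
# so every proxied request asks the upstream to close the connection after
# responding. This disables connection reuse, so expect higher latency.
_format_version: "2.1"
services:
  - name: my-upstream-service                                # illustrative
    url: http://my-app.my-namespace.svc.cluster.local:8080   # illustrative
    plugins:
      - name: pre-function
        config:
          access:
            - |
              kong.service.request.set_header("Connection", "close")
```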