Hello to the whole community!
I would like to know if any of you has encountered the following issue, and hopefully solved it!
I’m deploying the Kong Docker image as the front end of an AWS EKS cluster (exposed via a NodePort Service). An ELB is deployed in front of this EKS cluster.
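For context, the Service exposing Kong looks roughly like the sketch below (names and the nodePort value are illustrative; only the TLS proxy port 8443 is published, as you can see in the log line further down):

```bash
# Rough sketch of how Kong is exposed on the cluster (names and nodePort are
# illustrative); only the TLS proxy port 8443 is published, not the HTTP port.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy          # illustrative name
  namespace: kong
spec:
  type: NodePort
  selector:
    app: kong
  ports:
    - name: kong-proxy-tls
      port: 8443            # Kong's TLS proxy port (matches the log line below)
      targetPort: 8443
      nodePort: 32443       # illustrative NodePort that the ELB targets
      protocol: TCP
EOF
```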
The issue is that the AWS ELB (or rather the corresponding Target Group) apparently has a health check mechanism to verify that the “upstream service” (i.e. Kong!) is alive. But since only the TLS proxy port is opened to this LB (I don’t want to expose the HTTP port), each health check results in a TCP connection being opened to Kong, then (maybe) the start of a TLS handshake (I don’t know exactly), and finally the connection being closed immediately.
… and each such sequence generates a log line in Kong:
2019/06/14 13:09:01 [info] 39#0: *5857549 client closed connection while SSL handshaking, client: 10.13.68.199, server: 0.0.0.0:8443
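I believe the same line can be reproduced manually by opening a bare TCP connection to the TLS port and closing it right away, which is presumably what the health check does (node IP and port are placeholders):

```bash
# Presumably what the ELB health check does: open a TCP connection to Kong's
# TLS proxy port and close it without completing the handshake.
# <node-ip> and <node-port> are placeholders for an EKS worker node and the
# NodePort in front of Kong's 8443 port.
nc -z <node-ip> <node-port>
```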
To make it worse, the ELB is spread over several AZs, meaning that several “servers” run this health check in parallel… resulting in many (tens?) of such log lines being generated per second, and in the end… millions, billions, trillions of logs!
Well, there is no way to spot the “useful” logs within this flood: on stdout, you only have a few milliseconds to see them before they are gone, replaced by new “client closed connection while SSL handshaking” lines!
I tried to configure the Target Group’s health check, but no luck: strangely, none of the health check parameters can be changed (I cannot update them in the AWS console: they are greyed out!).
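For the record, this is roughly what changing those parameters would look like with the AWS CLI (the ARN is a placeholder, and I have not verified whether the API accepts these changes for a TCP health check, given that the console refuses them):

```bash
# Placeholder ARN; these are the health check fields the console shows greyed out.
TG_ARN="arn:aws:elasticloadbalancing:<region>:<account>:targetgroup/<name>/<id>"

# Inspect the current health check settings of the Target Group.
aws elbv2 describe-target-groups --target-group-arns "$TG_ARN" \
  --query 'TargetGroups[0].{Protocol:HealthCheckProtocol,Interval:HealthCheckIntervalSeconds,Healthy:HealthyThresholdCount,Unhealthy:UnhealthyThresholdCount}'

# Attempt to relax the check frequency / thresholds (unverified whether this is
# accepted for a TCP health check, since the console refuses to edit them).
aws elbv2 modify-target-group \
  --target-group-arn "$TG_ARN" \
  --health-check-interval-seconds 30 \
  --healthy-threshold-count 3 \
  --unhealthy-threshold-count 3
```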
Does anyone have an idea on how to either get rid of these logs (they look to be generated by Nginx, not directly by Kong – any answer suggesting that the log level be set to error is not a good answer, as I would like to keep the debug logs!), or to configure the AWS health check properly, or something else?
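To be clear about that last point, the setting I would rather not touch is Kong’s log_level (exposed as the KONG_LOG_LEVEL environment variable in the Docker image); raising it to error would silence the noise but also hide the debug output I want. Something like this, with illustrative namespace/deployment names:

```bash
# The setting I want to keep as-is: Kong's log_level, mapped from the
# KONG_LOG_LEVEL environment variable in the official Docker image.
# Raising it to "error" would hide the debug logs I actually need.
# Namespace and deployment names are illustrative.
kubectl -n kong set env deployment/kong KONG_LOG_LEVEL=debug
```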
Thanks in advance!