Stopping logs generated by the AWS ELB health check

Hello to all the community!

I would like to know if any of you has encountered the following issue, and hopefully solved it!

I’m deploying the Kong Docker image as the front end of an AWS EKS cluster (using a NodePort Service). An ELB is deployed in front of this EKS cluster.

The issue is that the AWS ELB (or rather, the corresponding Target Group) runs a health check to verify that the “upstream service” (i.e. Kong!) is alive. Since only the TLS proxy port is open to this LB (I don’t want to expose the HTTP port), each health check opens a TCP connection to Kong, possibly starts a TLS handshake (I don’t know exactly), and then immediately closes the connection.

… and each such sequence generates a log entry in Kong:

2019/06/14 13:09:01 [info] 39#0: *5857549 client closed connection while SSL handshaking, client: 10.13.68.199, server: 0.0.0.0:8443

The issue is that the ELB is spread over several AZs, meaning that several “servers” are running this health check in parallel… resulting in many (tens?) of such logs being generated per second, and then… millions, billions, trillions :stuck_out_tongue: of logs in the end!

Well, there is no way to see the “useful” logs within this flood: on stdout, you only have a few milliseconds to catch them before they are gone, replaced by new client closed connection while SSL handshaking entries!

I tried to configure the health check of the Target Group, but no luck: strangely, none of the health check parameters can be configured (I cannot update them in the AWS UI: they are greyed out!).

Does anyone have any idea on how to either get rid of these logs (they look to be generated by Nginx rather than directly by Kong; any answer suggesting that the log level be set to error is not a good answer :wink: as I would like to keep the debug logs!), or to configure the AWS health check properly, or…?

Thanks in advance!

You can create a route such as /health in Kong (of course, make sure that the route doesn’t collide with any of your existing routes) and then add a request-termination plugin on that route to always send back a 200.

Now, you can configure your ELB to have the following healthcheck for Kong’s proxy port:
HTTPS GET /health
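
For reference, a rough sketch of what that could look like via the Kong Admin API (assuming the Admin API is reachable on localhost:8001; the route name healthcheck is just an example, and depending on your Kong version the route may need to be attached to a dummy service):

  # Create a route that matches only /health
  curl -i -X POST http://localhost:8001/routes \
    --data name=healthcheck \
    --data 'paths[]=/health'

  # Terminate requests on that route with a 200, without hitting any upstream
  curl -i -X POST http://localhost:8001/routes/healthcheck/plugins \
    --data name=request-termination \
    --data config.status_code=200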

Thanks @hbagdi for your reply. Unfortunately, the ELB’s health check configuration only allows me to select “TCP” as the protocol, so there is no way to configure an HTTP endpoint :frowning:


Alright, next I see two options:

  • Make Kong listen on a plain HTTP port, open that port only to the subnet in which the ELB is running (most probably the public one), and don’t open up port 80 on the ELB itself. The ELB will then be able to talk on port 80 for the health check, but no HTTP port will be exposed to the outside world.
  • Use L4 proxying (stream_listen) in Kong, open up that port, and have the ELB health-check it (see the sketch after this list).
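
For the second option, a minimal kong.conf sketch (port 5555 is an arbitrary example; open it only to the ELB’s subnet and point the Target Group’s TCP health check at it):

  # kong.conf: dedicated L4 (TCP) listener for the ELB health check
  stream_listen = 0.0.0.0:5555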

Can anyone confirm a solution? I believe the options above work when you are routing via an Application Load Balancer and have control over the health check settings.

However, with a Network Load Balancer you are limited to TCP and cannot define the path or the expected response.

Previously, I was using an ALB and was able to suppress the logs. I have now converted to an NLB and so far have failed.

You can use status_listen in newer versions of Kong and turn off the error log for the status endpoint.


Fantastic!! Thanks so much. I was about to fall back to a more radical and hacky solution.

@michael.bowers could you please help me figure out how to get rid of these logs while using an AWS NLB load balancer?

Yep. Do a few things:

In your kong.conf configuration, do the following:

  1. Enable status_listen, for example status_listen = 0.0.0.0:8100
  2. Set status_access_log = off
  3. Set status_error_log = off
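
Put together, the relevant kong.conf lines would look roughly like this (0.0.0.0:8100 as in the example above):

  # kong.conf: expose the Status API and silence its access and error logs
  status_listen = 0.0.0.0:8100
  status_access_log = off
  status_error_log = off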

Then, for your NLB target group, set:
"HealthCheckPort": "8100",
"HealthCheckProtocol": "TCP"
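
If you manage the target group with the AWS CLI, that would be roughly the following (the ARN is a placeholder for your own target group; as noted earlier in the thread, some NLB target groups don’t allow changing the health check protocol after creation):

  aws elbv2 modify-target-group \
    --target-group-arn <your-target-group-arn> \
    --health-check-protocol TCP \
    --health-check-port 8100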

Be sure to enable port 8100 traffic for your security group.

Let me know if that doesn’t work.


@michael.bowers Thanks for that detailed answer. I tried everything you said, but my health checks for the AWS NLB fail. Is that because status_listen is internal only and can’t be exposed through a Service? If so, I am wondering how the NLB will reach it. Maybe I am missing something here?