How do I deploy Kong with HTTP Load Balancer on GKE

I am trying to set up Kong on GKE with HTTP load balancing. GKE allows creation of an HTTP load balancer with an Ingress resource, as documented here:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer

The problem is that the HTTP load balancer sets up a default health check rule that expects an HTTP 200 status for GET requests on the / path. Kong doesn’t provide that out of the box. Does Kong provide any other health check endpoint that I can use with GKE Ingress?

You can set up the request-termination plugin to return HTTP 200 on the / path.
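A minimal sketch of that plugin as a resource for the Kong Ingress Controller (the resource name and message are assumptions, not something from this thread):

```yaml
# KongPlugin that answers every matched request with HTTP 200,
# so the GCLB health check on / succeeds without reaching an upstream.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: return-200        # assumed name; pick your own
plugin: request-termination
config:
  status_code: 200
  message: "healthy"
```

Attach it to the route or service that matches / using the controller’s plugin annotation.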

Does Kong provide any other health check endpoint that I can use with GKE Ingress?

You can also use the /status endpoint on the Admin API to health-check Kong.
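Where you control the probes yourself, a readiness probe against that endpoint might look like this sketch (assuming the Admin API is listening on port 8001):

```yaml
# Container readiness probe hitting Kong's Admin API status endpoint.
readinessProbe:
  httpGet:
    path: /status
    port: 8001
  initialDelaySeconds: 5
  periodSeconds: 10
```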

I tried /status in the health check configuration, but the backend is still unhealthy.
I am trying to expose the proxy service outside of the K8S cluster using the GCE ingress controller.

Is there any other way to configure this?

This is known to be a little clunky, because GCE Ingress doesn’t allow configuring any health check path other than the default.
One way to do it would be to use the status_listen endpoint in Kong.
The manifest from the next branch of the controller repository might work.
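Enabling the status listener is a matter of setting one environment variable on the Kong container; a sketch, with the port number being an assumption:

```yaml
# Expose Kong's Status API on a dedicated port, separate from the Admin API.
env:
- name: KONG_STATUS_LISTEN
  value: "0.0.0.0:8100"
```

/status on that port then reports health without exposing the full Admin API.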

@traines has more experience with this one and can comment more intelligently…

I don’t think the status listen will work. As best I recall, the issue is that the GKE ingress controller essentially ignores readiness/liveness configuration entirely and performs its check against / on all services.

It’s possible to work around this for the proxy by initially spawning Kong with an Admin API listen only and configuring a route matching it with the request-termination plugin, but I still wouldn’t recommend it. Doing so requires that you duplicate configuration in some fashion, either by creating the Ingress resource for GKE and then configuring Kong routes manually, or by creating two sets of Ingresses: one for GKE which points to Kong, and one for Kong’s controller which points to the actual Service. Using our controller only is preferable in the majority of circumstances.

What do you want to use the GKE controller for? The most common reason I’ve encountered is that users want to use Google’s WAF, which requires using their HTTP load balancer. For those cases we’ve recommended using our controller to spawn a TCP load balancer at the K8S level and then spawning a GCP HTTP load balancer outside of K8S, targeting it at the TCP load balancer address.
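The TCP load balancer half of that setup can be sketched as a plain LoadBalancer Service, which GKE fulfils with a network (TCP) load balancer by default (the names, labels, and ports here are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy          # assumed name
spec:
  type: LoadBalancer        # GKE provisions a TCP/network LB for this
  selector:
    app: kong               # assumed pod label
  ports:
  - name: proxy
    port: 80
    targetPort: 8000        # Kong proxy port
  - name: proxy-ssl
    port: 443
    targetPort: 8443        # Kong proxy TLS port
```

The GCP HTTP load balancer is then created outside of K8S (e.g. with gcloud), with the Service’s external address as its backend.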

Spawning an HTTP load balancer to satisfy Kong’s LoadBalancer Service directly would be preferable, but as best we know GKE doesn’t provide this option.

Google WAF doesn’t support API protocols such as REST, SOAP, GraphQL, or gRPC anyway, so I don’t see the point of installing it in front of the API gateway.

Hi, sorry for the delay. We have a requirement to set headers, and a TCP load balancer cannot set headers. Also, the TCP load balancer is not global; it is a regional resource.

And we have a setup with multiple Kong gateways spawned in different regions (currently the EU and the US). To achieve this with a global view, we need a global load balancer, set the headers in the HTTP load balancer, and route the traffic to the nearest gateway.

And you are absolutely right: in GCP there is a default health check on /, and it ignores any other health check configured in the readiness probe.

So is there any way in Kong to configure the health check on a port other than 8001? I mean, is it possible to set the health check on port 8000 (the proxy)?

You can simply have a / route in Kong that uses the request-termination plugin and returns a 200.
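As a sketch, that route can be an Ingress for / carrying the plugin annotation (the plugin name here is an assumption, and the annotation key and Ingress API version vary with your controller and cluster versions):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: health-root
  annotations:
    kubernetes.io/ingress.class: "kong"
    konghq.com/plugins: "return-200"   # a request-termination KongPlugin
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-app          # assumed; never reached, the plugin terminates first
          servicePort: 80
```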

If the port for the health check can be changed, then use a custom server block to achieve this.
Example: https://github.com/Kong/charts/blob/9bb1ca2daa3e329edd6e601aaf80d6888189d284/charts/kong/templates/config-custom-server-blocks.yaml
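The linked example boils down to a ConfigMap carrying an extra Nginx server block that Kong includes at startup; a sketch, with the name, port, and path all assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-custom-server-blocks   # assumed name
data:
  servers.conf: |
    # Extra Nginx server listening on a dedicated health-check port.
    server {
        listen 0.0.0.0:9999;
        location /health {
            return 200;
        }
    }
```

Mount it into the Kong container and point KONG_NGINX_HTTP_INCLUDE at the mounted servers.conf so Kong’s Nginx picks it up.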

Hi @traines, yes, I have tried spawning a TCP load balancer at the K8S level and then manually spawning a GCP HTTP load balancer outside of K8S, targeting the backends of both clusters,
and it is actually working fine for me.


Requests are routed to the nearest gateway.
