I am trying to set up Kong on GKE with HTTP Load Balancing. GKE allows creating an HTTP Load Balancer with an Ingress resource. It’s documented here.
The problem is that the HTTP Load Balancer sets up a default health check rule which expects an HTTP 200 status for GET requests on the / path. Kong doesn’t provide that out of the box. Does Kong provide any other health check endpoint that I can use with GKE Ingress?
I tried with /status in the health check configuration, but the backend is still unhealthy.
I am trying to expose the proxy service outside of the k8s cluster using the GCE ingress controller.
This is known to be a little clunky because GCE Ingress doesn’t allow any path other than / for its health check.
One way to do it would be to use the status_listen endpoint in Kong.
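A minimal sketch of what enabling the status listener can look like in the Kong container spec (the deployment/container names and port 8100 are placeholders, and this assumes a Kong version that supports status_listen):

```yaml
# Snippet of a Kong Deployment: enable Kong's status API on port 8100.
# GET /status on this listener returns HTTP 200 when Kong is healthy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong                      # placeholder name
spec:
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        app: kong
    spec:
      containers:
        - name: proxy
          image: kong:latest
          env:
            - name: KONG_STATUS_LISTEN
              value: "0.0.0.0:8100"   # status API listener
          ports:
            - name: status
              containerPort: 8100
```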
The manifest from the next branch of the controller repository might work.
@traines has more experience with this one and can comment more intelligently…
I don’t think the status listen will work. As best I recall, the issue is that the GKE ingress controller essentially ignores readiness/liveness configuration entirely and performs its check against / on all services.
It’s possible to work around this for the proxy by initially spawning Kong with an admin API listen only and configuring a route matching it with the request-termination plugin, but I still wouldn’t recommend it. Doing so requires that you duplicate configuration in some fashion, either by creating the Ingress resource for GKE and then configuring Kong routes manually or creating two sets of Ingresses, one for GKE which points to Kong and one for Kong’s controller which points to the actual Service. Using our controller only is preferable in the majority of circumstances.
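For illustration only (and with the caveats above), a rough sketch of what the fake health-check route could look like with the controller’s CRDs. The resource names are placeholders, and the konghq.com/plugins annotation and ingressClassName depend on your controller version:

```yaml
# A plugin that immediately answers 200, used only to satisfy GKE's check on /.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: fake-healthcheck            # placeholder name
plugin: request-termination
config:
  status_code: 200
  message: "ok"
---
# Route / through Kong's controller but terminate it with the plugin above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gke-healthcheck             # placeholder name
  annotations:
    konghq.com/plugins: fake-healthcheck
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: kong-proxy    # placeholder; the backend is never reached
                port:
                  number: 80
```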
What do you want to use the GKE controller for? The most common reason I’ve encountered is that users want to use Google’s WAF, which requires using their HTTP load balancer. For those cases we’ve recommended using our controller to spawn a TCP load balancer at the K8S level and then spawning a GCP HTTP load balancer outside of K8S, targeting it at the TCP load balancer address.
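For reference, the K8S-side half of that setup is just the usual LoadBalancer Service for the proxy, which GKE fulfills with a regional TCP (network) load balancer; the GCP HTTP load balancer is then created outside of K8S and pointed at the address this Service receives. Names and ports below are placeholders:

```yaml
# Plain LoadBalancer Service for the Kong proxy; on GKE this provisions a
# regional TCP load balancer whose external IP the HTTP load balancer targets.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy          # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: kong               # must match the Kong proxy pods' labels
  ports:
    - name: proxy
      port: 80
      targetPort: 8000
    - name: proxy-ssl
      port: 443
      targetPort: 8443
```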
Spawning an HTTP load balancer to satisfy Kong’s LoadBalancer Service directly would be preferable, but as best we know GKE doesn’t provide this option.
Google’s WAF doesn’t support API protocols such as REST, SOAP, GraphQL, or gRPC anyway. I can’t see the point of installing it behind the API gateway.
Hi, sorry for the delay. We have a requirement to set headers, which we cannot do with a TCP load balancer, and the TCP load balancer is not global, it’s a regional resource.
We also have a setup with multiple Kong gateways spawned in different regions, currently EU and US. To achieve this with a global view we need a global load balancer, so we can set the headers in the HTTP load balancer and route the traffic to the nearest gateway.
And you’re absolutely right: in GCP the default health check is on /, and it ignores any other health check configured in the readiness probe.
So is there any way in Kong to configure the health check on something other than port 8001? I mean, is it possible to set the health check on port 8000 (the proxy)?
Hi @traines, yes, I have tried spawning a TCP load balancer at the K8S level and then manually spawning a GCP HTTP load balancer outside of K8S, targeting it at the backends of both clusters, and it was actually working fine for me.
I am using GKE (Google-managed Kubernetes) along with the Kong Ingress Controller. By default the Kong ingress controller creates a TCP load balancer, but I need an HTTP load balancer to be created. How can I do that with Kong ingress?
Hi @Naresh_Khatiwada, I was facing the same issue that you detailed above, but I managed to get it working using a combination of the above and exposing the 8100 status port via the Kong service.
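In case it helps others, a rough sketch of what that can look like, assuming container-native load balancing (NEGs), a GKE version with the cloud.google.com/v1 BackendConfig CRD, and Kong’s status_listen on 8100; names and ports are placeholders:

```yaml
# Point GKE's health check at Kong's status endpoint instead of /.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: kong-status-hc            # placeholder name
spec:
  healthCheck:
    type: HTTP
    requestPath: /status
    port: 8100                    # Kong's status_listen port
---
# Expose the proxy and the 8100 status port, and attach the BackendConfig.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy                # placeholder name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"default": "kong-status-hc"}'
spec:
  type: NodePort
  selector:
    app: kong                     # must match the Kong proxy pods' labels
  ports:
    - name: proxy
      port: 80
      targetPort: 8000
    - name: status
      port: 8100
      targetPort: 8100
```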