How do I deploy Kong with HTTP Load Balancer on GKE

I am trying to set up Kong on GKE with HTTP load balancing. GKE allows creating an HTTP load balancer with an Ingress resource. It’s documented here.
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer

The problem is that the HTTP load balancer sets up a default health check rule which expects an HTTP 200 response to GET requests on the / path. Kong doesn’t provide that out of the box. Does Kong provide any other health check endpoint that I can use with GKE Ingress?

You can set up the request-termination plugin to return HTTP 200 on the / path.
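As a sketch, assuming the Kong Ingress Controller is in use (the resource and Service names below are placeholders, not from this thread), that could look like:

```yaml
# KongPlugin that short-circuits matching requests with an HTTP 200,
# so the load balancer's health check on / succeeds without hitting an upstream.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: health-ok
plugin: request-termination
config:
  status_code: 200
  message: "OK"
---
# Ingress handled by Kong's own controller that maps / to the plugin above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: root-health
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: health-ok
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact   # only / is terminated; other paths proxy normally
            backend:
              service:
                name: kong-proxy   # placeholder Service name
                port:
                  number: 80
```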

Does Kong provide any other health check endpoint that I can use with GKE Ingress?

You can also use the /status endpoint on the Admin API to health-check Kong.
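For pod-level checks, a readiness probe against /status might look like this (a sketch, assuming the Admin API listens on 8001 inside the pod; the timing values are illustrative):

```yaml
# Probe Kong's Admin API /status endpoint for pod readiness.
readinessProbe:
  httpGet:
    path: /status
    port: 8001
  initialDelaySeconds: 5
  periodSeconds: 10
```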

I tried with /status in the health check configuration, but the backend is still unhealthy.
I am trying to expose the proxy service outside of the K8S cluster using the GCE ingress controller.

Is there any other way to configure this?

This is known to be a little clunky because GCE Ingress doesn’t allow any path other than /.
One way to do it is to use the status_listen endpoint in Kong.
The manifest from the next branch of the controller repository might work.

@traines has more experience with this one and can comment more intelligently…
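For reference, the status_listen suggestion above can be enabled through the container environment; a minimal sketch, with the port value illustrative:

```yaml
# Exposes Kong's Status API (which serves GET /status without the
# Admin API) on port 8100.
env:
  - name: KONG_STATUS_LISTEN
    value: "0.0.0.0:8100"
```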

I don’t think the status listen will work. As best I recall, the issue is that the GKE ingress controller essentially ignores readiness/liveness configuration entirely and performs its check against / on all services.

It’s possible to work around this for the proxy by initially spawning Kong with an Admin API listen only and configuring a route matching it with the request-termination plugin, but I still wouldn’t recommend it. Doing so requires that you duplicate configuration in some fashion, either by creating the Ingress resource for GKE and then configuring Kong routes manually, or by creating two sets of Ingresses: one for GKE which points to Kong, and one for Kong’s controller which points to the actual Service. Using our controller only is preferable in the majority of circumstances.

What do you want to use the GKE controller for? The most common reason I’ve encountered is that users want to use Google’s WAF, which requires using their HTTP load balancer. For those cases we’ve recommended using our controller to spawn a TCP load balancer at the K8S level and then spawning a GCP HTTP load balancer outside of K8S, targeting it at the TCP load balancer address.

Spawning an HTTP load balancer to satisfy Kong’s LoadBalancer Service directly would be preferable, but as best we know GKE doesn’t provide this option.
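A sketch of the in-cluster half of that recommendation (names are placeholders): let the controller create a TCP load balancer via a LoadBalancer Service, then point an externally created GCP HTTP(S) load balancer at its address.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy   # placeholder name
spec:
  type: LoadBalancer   # GKE provisions a regional TCP/network load balancer
  ports:
    - name: kong-proxy
      port: 80
      protocol: TCP
      targetPort: 8000   # Kong's proxy listen port
  selector:
    app.kubernetes.io/name: kong
```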


Google WAF doesn’t support API protocols such as REST, SOAP, GraphQL, or gRPC anyway. I don’t see the point of installing it in front of the API gateway.

Hi, sorry for the delay. We have a requirement to set headers, which we cannot do on a TCP load balancer, and the TCP load balancer is not global; it is a regional resource.

We also have multiple Kong gateways spawned in different regions, say EU and US. To serve this globally, we need a global load balancer so we can set the headers in the HTTP load balancer and route traffic to the nearest gateway.

And you’re absolutely right: GCP has a default health check on /, and it ignores any other health check configured in the readiness probe.

So is there any way in Kong to configure the health check on a port other than 8001? I mean, is it possible to set the health check on port 8000 (the proxy)?

You can simply have a / route in Kong that uses the request-termination plugin and returns a 200.

If the port for health-check can be changed, then use a custom server block to achieve this.
Example: https://github.com/Kong/charts/blob/9bb1ca2daa3e329edd6e601aaf80d6888189d284/charts/kong/templates/config-custom-server-blocks.yaml
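Modeled loosely on that linked chart example (the port, path, and names here are illustrative), a custom server block could answer health checks on a separate port:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-custom-server-blocks
data:
  servers.conf: |
    # Returns 200 on /health on port 9000 for external health checkers.
    server {
      listen 0.0.0.0:9000;
      location /health {
        return 200;
      }
    }
```

The file would then be mounted into the pod and referenced via the KONG_NGINX_HTTP_INCLUDE environment variable, as the linked chart template does.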

Hi @traines, yes, I have tried spawning a TCP load balancer at the K8S level and then manually spawning a GCP HTTP load balancer outside of K8S, targeting it at the backends of both clusters,
and it was actually working fine for me.


Requests are routed to the nearest gateway.

I am using GKE (Google-managed Kubernetes) along with the Kong ingress controller. By default, the Kong ingress controller creates a TCP load balancer, but I need an HTTP load balancer to be created. How can I do that with Kong ingress?

@traines I am trying to expose my Kong proxy using the GKE HTTPS load balancer and it is failing on the health check; I have tried different ways with no luck.

Could you please walk me through to make it work?

Try this one.

Create a BackendConfig CRD with all the required parameters:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: http-hc-config-proxy
  namespace: default
spec:
  healthCheck:
    timeoutSec: 10
    healthyThreshold: 1
    unhealthyThreshold: 10
    type: HTTP
    requestPath: /status
    port: 8100

Add these lines to the Kong Service definition:

  annotations:
    cloud.google.com/backend-config: '{"default": "http-hc-config-proxy"}'

Create the Ingress resource using gce as the ingress class, and make sure the path and path type look like the example below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: kong5-kong-proxy
                port:
                  number: 80

I tried exactly the same way, but the backend is still unhealthy because the health check fails for /status on port 8100.

Hi @Naresh_Khatiwada, I was facing the same issue you detailed above, but I managed to get it working using a combination of the above and exposing the 8100 status port via the Kong Service.

I used manifests similar to the examples below:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: backend-config
spec:
  healthCheck:
    healthyThreshold: 1
    port: 8100
    requestPath: /status
    timeoutSec: 10
    type: HTTP
    unhealthyThreshold: 10

apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"backend-config"}}'
spec:
  type: NodePort
  ports:
    - name: kong-proxy
      port: 80
      protocol: TCP
      targetPort: 8000
    - name: kong-proxy-status
      port: 8100
      protocol: TCP
      targetPort: 8100
  selector:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kong-app
    app.kubernetes.io/name: kong
  

I hope this helps.

Thank you, this was really helpful :point_up: