Kong on GKE with Cloud SQL Postgres Database

Hi,

We want to use Kong as an API Gateway to manage traffic from the internet as well as between GCP services. We plan to deploy Kong on GKE with Cloud SQL Postgres as the database.
We have tried configuring it with this repository (https://github.com/Kong/kong-dist-kubernetes), but we are facing issues connecting to the Cloud SQL Postgres DB.
Could you tell us if this approach is right? If not, could you guide us on how to configure it?

Thanks,
Sneha

Per the README there, those manifests are deprecated and are no longer maintained. The current equivalent is the set of example manifests covered in https://github.com/Kong/kubernetes-ingress-controller/tree/master/docs/deployment

Those all use the ingress controller along with Kong; if you do not wish to use it and do not want to edit it out of those manifests by hand, I recommend using the Helm chart.

The Postgres settings are the same regardless; you’ll configure them using the settings in https://github.com/Kong/kong/blob/master/kong.conf.default#L785-L792

When writing manifests directly, those are set using KONG_PG_HOST and similar environment variables in the proxy container configuration; with Helm, you set them using the kong.conf name (e.g. pg_host) under the env section of values.yaml.
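For illustration, a hand-written Deployment might wire that up like this (the host, user, and Secret name below are placeholders, not values from this thread):

```yaml
# Sketch of the env entries on the proxy container in a hand-written manifest;
# host, user, and Secret name are placeholders.
env:
- name: KONG_DATABASE
  value: "postgres"
- name: KONG_PG_HOST
  value: "10.0.0.5"          # Cloud SQL private IP, or 127.0.0.1 with a proxy sidecar
- name: KONG_PG_PORT
  value: "5432"
- name: KONG_PG_USER
  value: "kong"
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: kong-postgres    # hypothetical Secret holding the DB password
      key: password
```

The Helm equivalent goes under env in values.yaml:

```yaml
env:
  database: postgres
  pg_host: 10.0.0.5          # placeholder
  pg_port: 5432
  pg_user: kong
```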

If you’re not able to connect at all (versus seeing an authentication failure or some other post-connection error) with the proper hostname/port configured, you may need to check your network configuration. https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine has documentation for the GCP/GKE-level configuration you should create alongside the Kong configuration.
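One common pattern from that guide is to run the Cloud SQL proxy as a sidecar in the Kong pod and point KONG_PG_HOST at 127.0.0.1. Roughly (the instance connection name and image tag here are placeholders):

```yaml
# Rough sketch of a Cloud SQL proxy sidecar container alongside Kong;
# replace PROJECT:REGION:INSTANCE with your instance connection name.
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.17
  command:
  - /cloud_sql_proxy
  - -instances=PROJECT:REGION:INSTANCE=tcp:5432
```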

Hello traines,

I’m just going through the Helm chart method you mentioned for deploying Kong, and I am trying to figure out how to enable certain things like the admin console once Kong is deployed. The chart mentions that it is disabled by default and that to enable it we need to use the admin.enabled and env.admin_listen parameters, but I can’t see a way to apply these parameters to the deployment.

Are we supposed to use Helm to enable features and redeploy?

helm upgrade release --set admin.enabled=true --set env.admin_listen=true

I tried using that, but it did not work and returned the error below.

Error: “helm upgrade” requires 2 arguments

Usage: helm upgrade [RELEASE] [CHART] [flags]

That looks more like a syntax issue?

https://github.com/Kong/charts/blob/master/charts/kong/values.yaml#L68-L117 is what you’re looking for to configure the admin API. The first admin.enabled setting sets it to listen for requests originating outside the Kong container, and the various other enabled settings below control what Service resources it creates and whether it creates an Ingress resource.

You don’t need to set the listen environment variable once that’s enabled: we set that up for you based on the port and HTTP/HTTPS configuration under the admin section in values.yaml.
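As a minimal sketch (simplified from the chart defaults linked above), exposing the plain-HTTP admin API could look like:

```yaml
admin:
  enabled: true        # listen for requests from outside the Kong container
  type: LoadBalancer   # or ClusterIP if you will reach it via Ingress/port-forward
  http:
    enabled: true      # plain HTTP on the default port 8001
```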

The upgrade command requires you specify the chart, as shown in https://helm.sh/docs/helm/helm_upgrade/#helm-upgrade

The chart location in their example is ./redis; for you it will be kong/kong (the argument accepts either a local path, as shown in their example, or repositoryName/chartName for a chart from a configured remote repository).
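Putting that together, the corrected command would be along the lines of the following, with your actual release name substituted:

helm upgrade release kong/kong --set admin.enabled=true --set admin.http.enabled=true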

Hey traines,

In that case, when I look at our GKE environment, I do not see the admin console exposed; I can only see the proxy being exposed.
We want the admin port exposed as well so we can connect using the Konga tool to configure routes and services.

I also tried using the helm upgrade command yesterday once I got an understanding of the syntax, but it seems there is something I am missing, because port 8001 does not get exposed; only ports 80 and 443 are exposed.

Command used:

helm install test-kong --set ingressController.installCRDs=false --set admin.type=LoadBalancer --set proxy.type=LoadBalancer --set autoscaling.enabled=true --set autoscaling.minReplicas=2 --set autoscaling.maxReplicas=5 --set autoscaling.targetCPUUtilizationPercentage=60 kong/kong

That should handle it, though you’ll often want to enable the admin Ingress instead and access it through the proxy; that lets you apply security restrictions via the proxy and reduces cost (since GCP charges per LoadBalancer).
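That variant would look roughly like this in values.yaml (the hostname is a placeholder):

```yaml
admin:
  enabled: true
  type: ClusterIP                  # no separate LoadBalancer for the admin API
  http:
    enabled: true
  ingress:
    enabled: true
    hostname: admin.example.com    # placeholder hostname
```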

If neither of those options works, can you share the output of kubectl get svc -o wide | grep "-admin" after performing that upgrade, and of kubectl describe svc SVCNAME assuming a Service appears in the first command? The admin Service should be created by default once admin.enabled: true is set.