Install using Helm on Google Cloud with Cloud SQL Proxy Postgres


#1

Hey,

I am trying to figure out how to install the Kong ingress controller in a Google Kubernetes Engine (GKE) cluster.

Most of the instructions seem rather straightforward, but when it comes to providing the PostgreSQL database, I cannot find a way to load the Cloud SQL Proxy as a sidecar to the Kong containers…


#2

So, to explain in more detail: we are on Google Cloud, and I am trying to install the Kong ingress controller in GKE to use it mainly as an API gateway.

Kong has a dependency on Postgres. We use Postgres internally as well, and we use the Google Cloud SQL Proxy to connect to our Cloud SQL instances.
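For context, the sidecar pattern I was hoping to express looks roughly like the following plain pod spec (a sketch only — the project, region, instance, and secret names are illustrative placeholders, not our actual setup):

```yaml
# Sketch of the Cloud SQL Proxy sidecar pattern that the stable/kong
# chart could not express. All names below are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kong-with-sqlproxy
spec:
  containers:
    - name: kong
      image: kong
      env:
        - name: KONG_PG_HOST
          value: "127.0.0.1"   # the proxy listens on localhost
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.11
      command: ["/cloud_sql_proxy",
                "-instances=my-project:my-region:my-instance=tcp:5432",
                "-credential_file=/secrets/cloudsql/credentials.json"]
      volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
  volumes:
    - name: cloudsql-instance-credentials
      secret:
        secretName: cloudsql-instance-credentials
```

With this pattern, Kong talks to Postgres over localhost and the proxy handles authentication and encryption to Cloud SQL; the chart would need to render the extra container and volume for us.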

Looking at the documentation for the stable/kong Helm chart, I noticed that there is no way to add sidecar containers to it, which meant I had to find an alternative way to achieve the same goal.

What I ended up doing was installing gcloud-sqlproxy as a cluster service and pointing the Kong Helm chart to that service for the Postgres database connection.

Basically (and for the sake of recording the process for those who find themselves in a similar situation in the future):

  1. Set up a Google Cloud SQL Postgres database instance
  2. Set up the Cloud SQL proxy as a cluster service (using Helm)
  3. Installed Kong using the Helm chart with roughly the following values:

         postgresql:
           enabled: false
         env:
           pg_host: kong-postgres-gcloud-sqlproxy
           pg_user: gcloud-sqlproxy-user
           pg_password: gcloud-sqlproxy-password
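The steps above might be sketched as commands along these lines (a rough sketch from memory — the chart repository, instance name, region, and values files are illustrative placeholders, not exact invocations):

```shell
# 1. Create the Cloud SQL Postgres instance (name/region are placeholders)
gcloud sql instances create kong-postgres \
  --database-version=POSTGRES_9_6 --region=europe-west1

# 2. Install the Cloud SQL proxy as a cluster service via Helm
#    (proxy-values.yaml must point the chart at the instance above)
helm install --name kong-postgres-gcloud-sqlproxy \
  rimusz/gcloud-sqlproxy -f proxy-values.yaml

# 3. Install Kong, with postgresql.enabled=false and env.pg_host
#    pointing at the proxy service, as in the values shown above
helm install stable/kong -f kong-values.yaml
```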

This appears to be a working process.

I still have a few missing bits and pieces before I can access Kong from outside the cluster, but the services and pods are running and the migration seems to have run as well, so I assume the database connection is working just fine.

There is still one bit that bothers me with this setup – the database credentials are deployed “in the open”.
That is, the username and password are set in the pod environment as plain text, and I have no way of specifying that they should come from a pre-existing secret.


#3

I was hoping to find some tips on this thread about plain-old installing via the Helm chart. But I see you’re using the Kong Ingress Controller, which, given its ongoing development, I’m a bit hesitant to go with.


#4

Hello @Roland_Tepp,

Thank you for providing more details around your issue; it certainly helps us understand the problem at hand better.

So, to explain in more detail: we are on Google Cloud, and I am trying to install the Kong ingress controller in GKE to use it mainly as an API gateway.

Awesome to hear, and happy to help out!

Looking at the documentation for the stable/kong Helm chart, I noticed that there is no way to add sidecar containers to it, which meant I had to find an alternative way to achieve the same goal.

There are a couple of issues here. First, the Kong Ingress Controller is not part of the stable/kong chart yet; there are ongoing efforts to integrate the Ingress Controller into that chart. You can deploy Kong as an application using the chart, but not as an Ingress Controller. You could use the chart to deploy Kong and then deploy the Ingress Controller separately, but that will entail some extra work and maintenance.
Until we get the chart out, I suggest you use our deployment file to provision Kong Ingress in your k8s cluster.

I still have a few missing bits and pieces before I can access Kong from outside the cluster, but the services and pods are running and the migration seems to have run as well, so I assume the database connection is working just fine.

Good to hear. Injecting a SQL proxy sidecar via the Helm chart is something that needs more thought before we can support it in our chart. @shashiranjan, the original author and current maintainer of the chart, can comment on it better.

There is still one bit that bothers me with this setup – the database credentials are deployed “in the open”.
That is, the username and password are set in the pod environment as plain text, and I have no way of specifying that they should come from a pre-existing secret.

This could be a limitation of the Kong chart. There is an env section in the values file that passes Kong configuration through as environment variables. Kubernetes can supply secrets as environment-variable values in a pod spec template, but I’m not sure how that can be expressed via Helm values. Thoughts, @shashiranjan?
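For reference, what the chart would need to render is the standard Kubernetes mechanism for sourcing an env var from a secret — something like the following container-spec fragment (the secret name and keys here are hypothetical, chosen only for illustration):

```yaml
# Plain Kubernetes syntax for pulling env vars from a pre-existing secret.
# "kong-postgres-credentials" and its keys are hypothetical names.
env:
  - name: KONG_PG_USER
    valueFrom:
      secretKeyRef:
        name: kong-postgres-credentials
        key: username
  - name: KONG_PG_PASSWORD
    valueFrom:
      secretKeyRef:
        name: kong-postgres-credentials
        key: password
```

The open question is how the chart’s values schema could let a user opt into this valueFrom form instead of a literal value.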


#5

There is still one bit that bothers me with this setup – the database credentials are deployed “in the open”.
That is, the username and password are set in the pod environment as plain text, and I have no way of specifying that they should come from a pre-existing secret.

The Postgres password is indeed pulled from a secret, but Kong currently stores it as plain text in the environment. It is a known problem and we are working on it.


#6

@Roland_Tepp, did @hbagdi or @shashiranjan answer your question? If so, would you mind marking the check box on their answer to mark the question as solved? Thanks!


#7

Yes, I’m sorry, I’ve been extremely busy these past weeks so I did not have time to respond earlier.

Well, that is a disappointment. It also explains some of the difficulties I was having before I rolled back the Kong installation and switched to the much simpler GCE ingress controller (for now).

We may still revisit using Kong as an ingress controller for publishing our public API, but for now that is somewhat out of scope…

You might want to take a look at how the Keycloak chart is defined. It is very nice to work with, for the most part.


#8

Sorry about that, we’re working to get the Helm chart in place.

Thanks for the pointer to Keycloak chart!