Install using Helm on Google Cloud with Cloud SQL Proxy Postgres

Hey,

I am trying to figure out how to install the Kong Ingress Controller in a Google GKE cluster.

Most of the instructions seem rather straightforward, but when it comes to providing the PostgreSQL database, I cannot seem to find a way to load the Cloud SQL Proxy as a sidecar to the Kong containers…


So, to explain in more detail: we are on Google Cloud and I am trying to install the Kong Ingress Controller in GKE to use it mainly as an API gateway.

Kong has a dependency on Postgres. We use Postgres internally as well, and we use the Google Cloud SQL Proxy to connect to our Cloud SQL instances.

Looking at the documentation of the stable/kong Helm chart, I noticed that there was no way to add sidecar containers to it, which meant I had to figure out an alternative way to achieve the same goal.

What I ended up doing was installing gcloud-sqlproxy as a cluster service and pointing the Kong Helm chart at that service for the Postgres database connection.

Basically (and for the sake of recording the process for those who find themselves in a similar situation in the future):

  1. Set up a Google Cloud SQL Postgres database instance
  2. Set up the Cloud SQL Proxy as a cluster service using Helm (see the sketch below)
  3. Installed Kong using the Helm chart with roughly the following values:
postgresql:
  enabled: false
env:
  pg_host: kong-postgres-gcloud-sqlproxy
  pg_user: gcloud-sqlproxy-user
  pg_password: gcloud-sqlproxy-password

This appears to be a working process.
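
For step 2, a rough sketch of the kind of values the gcloud-sqlproxy chart expects is shown below (installed with something like helm install stable/gcloud-sqlproxy --name kong-postgres -f values.yaml). The value keys, instance, project, and region here are assumptions and placeholders rather than the chart's guaranteed schema, so check the chart's README for your version. The resulting cluster Service (kong-postgres-gcloud-sqlproxy in the Kong values above) is what Kong's pg_host points at.

# values.yaml for the gcloud-sqlproxy chart (sketch; keys are assumptions)
serviceAccountKey: "<base64-encoded GCP service account key>"  # placeholder
cloudsql:
  instances:
    - instance: my-postgres-instance   # placeholder Cloud SQL instance name
      project: my-gcp-project          # placeholder GCP project
      region: europe-west1             # placeholder region
      port: 5432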

I still have a few missing bits and pieces before I can access Kong from outside the cluster, but the services and pods are running and the migration seems to have run as well, so I assume the database connection is working just fine.

There is still one bit that bothers me with this setup – the database credentials are deployed “in the open”.
Meaning that the username and password are set in the pod env as plain text, and I have no way of specifying that they should come from a pre-existing secret.

I was hoping to find some tips in this thread about plain-old installation using the Helm chart, but I see you’re using the Kong Ingress Controller, which I’m a bit hesitant to go with due to its ongoing development.

Hello @Roland_Tepp,

Thank you for providing more details around your issue; it certainly helps in understanding the problem at hand better.

So, to explain in more detail: we are on Google Cloud and I am trying to install the Kong Ingress Controller in GKE to use it mainly as an API gateway.

Awesome to hear, and happy to help out!

Looking at the documentation of the stable/kong Helm chart, I noticed that there was no way to add sidecar containers to it, which meant I had to figure out an alternative way to achieve the same goal.

There are a couple of issues here. First, the Kong Ingress Controller is not part of the stable/kong chart yet; there are ongoing efforts to integrate the Ingress Controller into that chart. You can deploy Kong as an application using this chart, but not as an Ingress Controller. You could use the chart to deploy Kong and then deploy the Ingress Controller separately, but that will entail some work and maintenance.
Until we get the chart out, I suggest you use our deployment file to provision the Kong Ingress Controller in your k8s cluster.

I still have a few missing bits and pieces before I can access Kong from outside the cluster, but the services and pods are running and the migration seems to have run as well, so I assume the database connection is working just fine.

Good to hear. Injecting a SQL sidecar using a Helm chart is something that needs more thought before we can support it in our chart. @shashiranjan is the original author and current maintainer of the chart and can comment on it better.

There is still one bit that bothers me with this setup – the database credentials are deployed “in the open”.
Meaning that the username and password are set in the pod env as plain text, and I have no way of specifying that they should come from a pre-existing secret.

This could be a limitation of the Kong chart. There is an env section in the values which takes Kong configuration as environment variables. Kubernetes can provide secrets as values for environment variables in a pod spec template, but I’m not sure how that can be done via Helm values. Thoughts, @shashiranjan?
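
For reference, the underlying Kubernetes mechanism looks like this in a pod spec (a minimal sketch; the secret and key names are placeholders). The open question is how to express this through the chart's env values:

containers:
  - name: kong
    image: kong:1.1
    env:
      - name: KONG_PG_PASSWORD
        valueFrom:
          secretKeyRef:
            name: kong-pg-secret   # placeholder pre-existing secret
            key: password          # placeholder key within that secret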


There is still one bit that bothers me with this setup – the database credentials are deployed “in the open”.
Meaning that the username and password are set in the pod env as plain text, and I have no way of specifying that they should come from a pre-existing secret.

The Postgres password is indeed pulled from a secret, but Kong currently stores it as plain text in the environment. It is a known problem and we are working on it.

@Roland_Tepp, did @hbagdi or @shashiranjan answer your question? If so, would you mind checking the box on their answer to mark the question as solved? Thanks!

Yes, I’m sorry; I’ve been extremely busy these past few weeks, so I did not have time to respond earlier.

Well, that is a disappointment. It also explains some of the difficulties I was having before I rolled back the Kong installation and switched to the much simpler GCE ingress controller (for now).

We may still revisit using Kong as an ingress controller for publishing our public API, but this is somewhat out of scope for now…

You might want to take a look at how the Keycloak chart is defined. It is very nice to work with, for the most part.


Sorry about that, we’re working to get the Helm chart in place.

Thanks for the pointer to the Keycloak chart!

Hi Kong Devs,
Excited to be using Kong. We have a similar requirement of wanting to use a Cloud SQL Proxy sidecar. I second @Roland_Tepp’s suggestion of taking inspiration from the Keycloak chart; it has a nice way of integrating a Google Cloud SQL sidecar.
I was wondering if you have had the time to work on this?
Thanks,
Rajiv

Hello @rajiv.abraham,

I just looked into this briefly, and it seems like what is needed is sidecar injection logic driven by Helm values.

It should not be too complicated to add this functionality to Kong’s upstream Helm chart; would it be possible for you to open a PR for that?
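
For orientation, the usual Helm pattern for this kind of injection is a values-driven passthrough in the deployment template. The sketch below is not the stable/kong chart's actual code; extraContainers is simply a conventional value name, similar in spirit to what the Keycloak chart mentioned earlier does:

# templates/deployment.yaml (sketch of where the injection would go)
spec:
  template:
    spec:
      containers:
        - name: kong
          # ... existing Kong container spec ...
{{- if .Values.extraContainers }}
{{ toYaml .Values.extraContainers | indent 8 }}
{{- end }}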

Following is the relevant code that you can start looking at:

Hi @hbagdi

I would really like to contribute, but I’m absolutely new to Helm, k8s, etc., and I’m also a one-man dev team trying to make everything work under tight deadlines :(.
I looked a bit at what you highlighted and realized I’ll need some time to ramp up to understand what’s happening.
If this is easy for you and you decide to do it, I’d be super super grateful. If you guys don’t have the time, I’d completely understand. Let me know what you think.

Hi @hbagdi
I’m guessing you don’t have the bandwidth :). I’ll read up on k8s and Helm. Thank you for highlighting the change; that makes things easier.

Hi @hbagdi

I updated deployment.yaml to take in extraContainers (and extraVolumes, as the Cloud SQL Proxy needs those too). When I ran helm install, the migrations failed. Can you help me figure out what values to provide and where?
This is what I provide in values.yaml right now. Please note that the secret is named keycloak-db-secret, but it holds the same database user password that Kong uses.

env:
  database: postgres
  pg_user: postgres
  pg_database: kong
  pg_host: 127.0.0.1
  pg_password:
    valueFrom:
      secretKeyRef:
        key: dbPassword
        name: keycloak-db-secret

postgresql:
  enabled: false
  postgresqlUsername: postgres
  postgresqlDatabase: kong
  service:
    port: 5432
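
For context, the extraContainers and extraVolumes values for a Cloud SQL Proxy sidecar look roughly like the sketch below (the image tag, instance connection name, and secret name are placeholders):

extraContainers:
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.14   # placeholder tag
    command:
      - /cloud_sql_proxy
      - -instances=my-gcp-project:us-central1:my-postgres-instance=tcp:5432   # placeholder connection name
      - -credential_file=/secrets/cloudsql/credentials.json
    volumeMounts:
      - name: cloudsql-instance-credentials
        mountPath: /secrets/cloudsql
        readOnly: true

extraVolumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials   # placeholder secret holding the service account key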

The migrations failed. I can’t find the exact error log, but it was something like:
Connection refused (could not determine version number or something)
When I do a describe on the job, I get:

Name:           kong-kong-init-migrations
Namespace:      kong
Selector:       controller-uid=f78ebb62-83d4-11e9-84d9-42010aa20063
Labels:         app=kong
                chart=kong-0.11.2
                component=init-migrations
                heritage=Tiller
                release=kong
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Fri, 31 May 2019 14:50:32 -0400
Pods Statuses:  0 Running / 0 Succeeded / 1 Failed
Pod Template:
  Labels:  app=kong
           component=init-migrations
           controller-uid=f78ebb62-83d4-11e9-84d9-42010aa20063
           job-name=kong-kong-init-migrations
           release=kong
  Containers:
   kong-migrations:
    Image:      kong:1.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
      -c
      kong migrations bootstrap
    Environment:
      KONG_NGINX_DAEMON:      off
      KONG_ADMIN_ACCESS_LOG:  /dev/stdout
      KONG_ADMIN_ERROR_LOG:   /dev/stderr
      KONG_DATABASE:          postgres
      KONG_PG_DATABASE:       kong
      KONG_PG_HOST:           127.0.0.1
      KONG_PG_PASSWORD:       <set to the key 'dbPassword' in secret 'keycloak-db-secret'>  Optional: false
      KONG_PG_USER:           postgres
      KONG_PROXY_ACCESS_LOG:  /dev/stdout
      KONG_PROXY_ERROR_LOG:   /dev/stderr
    Mounts:                   <none>
  Volumes:                    <none>
Events:                       <none>

I would appreciate any help you can provide.

@rajiv.abraham

I was out of office last week so apologies for the delayed response.

The error you are seeing is because of connectivity issues between Kong and the Postgres deployment that you have.

From the values, it seems like you already have a Postgres deployment and service running and are trying to get Kong to use the existing Postgres deployment.

Please ensure that the existing Postgres deployment and Kong deployment are in the same namespace.

Hi @hbagdi
Thank you for replying as soon as you could.

My Postgres database is on Google Cloud SQL. I’m adding the sidecar so that the Cloud SQL Proxy runs in the same pod as Kong, listening on 127.0.0.1:5432, and routes connections to my Postgres instance on Google Cloud SQL. It’s not a service but an additional container in the Kong pod.

So, they should be in the same namespace as you suggest?

I wonder if the problem is that the Kong pod (which contains the Cloud SQL Proxy sidecar) only runs after the migrations happen, so the migration pod/job cannot find any Postgres connection at 127.0.0.1:5432 and hence fails?

Oh, and this is the error I’m getting for the migration.

Error: [PostgreSQL error] failed to retrieve server_version_num: connection refused

The error you have posted suggests that the Kong migration container can’t connect to the Postgres DB.
I think you will need to add the sidecar container to the migration job as well as to the Kong deployment.
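
In practice that means the migrations Job template needs the same values-driven injection as the Deployment, so that the rendered job pod also carries the proxy. A rough sketch, reusing the placeholder extraContainers/extraVolumes values from earlier (the job name is hardcoded here for the sketch; the chart templates it):

# templates/migrations.yaml (sketch; mirrors the deployment.yaml change)
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-kong-init-migrations
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: kong-migrations
          image: kong:1.1
          command: [ "/bin/sh", "-c", "kong migrations bootstrap" ]
          # ... same KONG_* environment as shown in the describe output above ...
{{- if .Values.extraContainers }}
{{ toYaml .Values.extraContainers | indent 8 }}
{{- end }}
{{- if .Values.extraVolumes }}
      volumes:
{{ toYaml .Values.extraVolumes | indent 8 }}
{{- end }}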