Kubernetes support

Hi,

A couple of things related to the Kubernetes deployment files in GitHub:

  1. postgres.yaml: the data is persisted in an emptyDir volume. Probably not a good example, or at least put a comment saying it is not recommended (see the sketch after this list).
  2. For Charts: not sure if I followed how runMigrations is supposed to be used. For a brand-new deployment, are we supposed to run helm once with runMigrations=true, and then run it again with runMigrations=false from then on, until an upgrade that requires migrations?
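For clarity, this is roughly what I have in mind instead of emptyDir; a minimal sketch only, with made-up names, backing the data directory with a PersistentVolumeClaim:

    # Sketch only: back the Postgres data directory with a PersistentVolumeClaim
    # instead of an emptyDir volume (names here are made up).
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: postgres-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi
    ---
    # In the Postgres pod spec, replace the emptyDir volume with:
    #   volumes:
    #     - name: pgdata
    #       persistentVolumeClaim:
    #         claimName: postgres-data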

Regards,
Luiz

As for #2, just found out that there are two run migrations options:

a. kong start --run-migrations: runs the migrations and keeps running.
b. kong migrations up: this one runs the migrations and exits.

The Helm chart is using (a), which keeps Kong running, so helm can be run once even with runMigrations=true. However, that may bring another problem: what if the replica count is more than one? Is Kong prepared to run migrations in an atomic way? All instances of the replica set will start in parallel.

Regards,
Luiz

Hi,

I cannot help you with the K8s specifics of your initial question, but I can clear up the confusion about the migrations question: do not use kong start --run-migrations.

It is more of a development setup convenience (for single nodes) that should be considered harmful in production setups, precisely because of your guess:

Is Kong prepared to run migrations in an atomic way?

It is not. It will be with PostgreSQL at some point, but will likely not be with Cassandra.

This flag should be removed sooner rather than later; it is mostly still there for legacy reasons, from a time when kong start automatically handled migrations.

Thanks for your reply. Having kong start --run-migrations work in an atomic way would actually be beneficial for Kubernetes Helm deployments, due to Helm's awkward way of handling deployment sequencing. Without it we have to run two commands: kong migrations up and then kong start. Note that I'm assuming an atomic implementation would allow multiple instances to be started in parallel, where one would run the migrations and the others would block.

The way we are trying to handle migrations prior to starting multiple parallel copies of Kong with Helm is (a sketch follows the list):

  1. Create a Job to run kong migrations up.
  2. In the Kong deployment description, add an initContainer section and use the groundnuty/k8s-wait-for image to wait for the Job above to finish before starting multiple instances of Kong.
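Roughly, the initContainer part of the Kong Deployment looks like the sketch below. The Job name and image tag are placeholders from our setup, and I'm assuming k8s-wait-for takes the resource kind and name as arguments, as its README describes:

    # Fragment of the Kong Deployment pod spec (illustrative names only).
    spec:
      template:
        spec:
          initContainers:
            - name: wait-for-migrations
              image: groundnuty/k8s-wait-for:latest   # tag is a placeholder
              # assumption: the image takes the resource kind and name as args
              args: ["job", "kong-migrations"]        # name of our migrations Job
          containers:
            - name: kong
              image: kong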

Regards,
Luiz

postgres.yaml: the data is persisted in an emptyDir volume. Probably not a good example, or at least put a comment saying it is not recommended.

@Luiz_Omori please check this Helm chart PR, https://github.com/kubernetes/charts/pull/3150; it handles persistence. Eventually we will use this chart as the official Kong support for K8s.

For Charts: not sure if I followed how runMigrations is supposed to be used. For a brand-new deployment, are we supposed to run helm once with runMigrations=true, and then run it again with runMigrations=false from then on, until an upgrade that requires migrations?

The above PR also runs a Job to handle migrations for the initial deployment.
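For anyone following along, a migrations Job of that kind would look roughly like this (a sketch, not copied from the PR; names, tag, and env values are illustrative):

    # Sketch of a one-shot migrations Job; values are illustrative.
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kong-migrations
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: kong-migrations
              image: kong
              command: ["kong", "migrations", "up"]
              env:
                - name: KONG_DATABASE
                  value: "postgres"
                - name: KONG_PG_HOST
                  value: "postgres"   # assumed name of the Postgres service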


@shashiranjan Thanks. I took a look at your changes and:

  1. migrations.yaml: not sure if it's really necessary, but ours has an initContainers section to wait for the Postgres service to be up and running before the migrations start (see the sketch after this list).
  2. deployment.yaml: hmm, again not sure here. Given the nature of K8s resource creation, where the command to create a resource returns before the resource is actually created (asynchronous), and in particular before a Job is finished, how is that deployment.yaml making sure that the migration is finished before all Kong instances are started? In our case, we again used initContainers. Please let me know if that is not necessary. It was a bit of a pain. We are also very curious how other teams have addressed this, if it is in fact an issue…
  3. Persistence: well, if it were me, I would put a stronger statement in the README/pre-requisites. The only case where persistence=false would make sense is for quick demos. Note that in K8s, even if you restart a pod on the same host/node, the container data will be wiped out. That behaviour differs from pure Docker. I would also mention that this change has to be made in values.yaml.
  4. Do you really need to check-in requirements.lock?
  5. One cool thing we found recently is that you can use helm upgrade --install instead of helm install and helm upgrade. This way you don't care whether it's the first install or an update. Useful for CI/CD.
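Regarding item 1, our wait-for-Postgres initContainer on the migrations Job looks roughly like this (the service name, port, and image are from our setup, not from the chart):

    # Sketch: block the migrations Job until Postgres accepts connections.
    initContainers:
      - name: wait-for-postgres
        image: postgres:9.6        # any image that ships pg_isready works
        command:
          - sh
          - -c
          - until pg_isready -h postgres -p 5432; do echo waiting for postgres; sleep 2; done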

Regards,
Luiz


migrations.yaml: not sure if it's really necessary, but ours has an initContainers section to wait for the Postgres service to be up and running before the migrations start.

migrations.yaml handles the migrations explicitly so that the deployment can start any number of replicas. In the worst case the Job will fail and restart until it terminates successfully.

deployment.yaml: hmm, again not sure here. Given the nature of K8s resource creation, where the command to create a resource returns before the resource is actually created (asynchronous), and in particular before a Job is finished, how is that deployment.yaml making sure that the migration is finished before all Kong instances are started? In our case, we again used initContainers. Please let me know if that is not necessary.

Again, in the worst case the Kong pods will fail until they are able to reach the backing DB and confirm that all migrations are up to date. The Deployment is configured with liveness/readiness probes.
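A probe of that shape typically looks something like the sketch below; the /status path and admin port 8001 are just an example, not necessarily what the chart uses:

    # Sketch of liveness/readiness probes against Kong's admin API.
    readinessProbe:
      httpGet:
        path: /status
        port: 8001        # example port; the chart may probe a different endpoint
      initialDelaySeconds: 15
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /status
        port: 8001
      initialDelaySeconds: 30
      periodSeconds: 10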

OK, the migrations job will be restarted if Postgres, in our case, is not ready. That would work.

But, for the second part, what happens if the kong migrations Job is started and, before it completes, the 3 “normal” Kong instances defined in deployment.yaml start? Are you implying that they will detect an incorrect DB schema and stop?

Regards,
Luiz

Yes, the Kong pods will fail and restart until the migrations are up to date in the Kong datastore.

I’m sitting here wondering if this discussion is still applicable when one is using Kong EE?

As far as I understand it, it should work the same (take it with a grain of salt, I'm new and not that technical).

Kubernetes itself is undergoing change at a rapid pace which is a big part of why I asked. I’ll try and come back with updates.