Migration experiences from Kong 1.1.2 to 2.0.1 on Kubernetes

I followed the suggested upgrade path from kong/UPGRADE.md at master · Kong/kong · GitHub

My setup

  • Kubernetes 1.13.x in GKE
  • Postgres - Google Cloud SQL
  • Old kong-ingress-controller: 0.4.0
    • New: 0.7.1
  • Old Kong (proxy): 1.1.2
    • New: 2.0.1

Migration experiences

The migration was rather straightforward. However, I needed to figure out how to make sure I could roll back to the old configuration easily if something went wrong…
So I ended up creating a parallel load balancer (with a new external IP) to which I could route traffic via DNS load balancing. The new Kong sat behind this second load balancer.
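
Roughly, the parallel setup was just a second Service of type LoadBalancer in front of the new Kong proxy, plus an extra DNS record for shifting traffic. A minimal sketch of what that could look like (all names, labels and ports here are illustrative, not my actual manifests):

    # Expose the new Kong proxy through its own LoadBalancer Service, so it
    # gets a separate external IP that can be weighted in via DNS.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: kong-proxy-new        # hypothetical name for the parallel LB
      namespace: kong
    spec:
      type: LoadBalancer
      selector:
        app: kong-2               # assumed label on the new Kong proxy pods
      ports:
      - name: proxy
        port: 80
        targetPort: 8000
      - name: proxy-ssl
        port: 443
        targetPort: 8443
    EOF
    # Then add the new external IP as an additional (or weighted) DNS record
    # for the public hostname and shift traffic over gradually.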

  1. After running kong migrations up with the 2.0.1 image and bringing kong-ingress-controller 0.7.1 (and everything else) up, everything was OK. (A sketch of the migration commands follows after this list.)

  2. While running both the old and the new stack in parallel, everything was still OK!

  3. When shutting down the old Kong & ingress-controller, I noticed the following:

    • Errors in the ingress-controller logs; I could see that the new ingress-controller tried to recreate the already existing resources as duplicates.

    • Proxy operation worked fine though!

    • But I believe that configuring/managing Kong through Kube manifests stopped working at this point (at least for the already existing resources).

  4. Something had to be done to fix this

    • I am not sure if kong migrations finish would have solved this, but I didn’t dare to try it. (It is harder to roll back after that.)
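
For reference, the migration step itself was roughly the following. This is only a sketch: it assumes the 2.0.1 image can reach the same Cloud SQL database, and the pod name, host and password placeholders are illustrative.

    # Run the new image once against the existing database to apply the
    # pending migrations, e.g. as a one-off pod:
    kubectl run kong-migrations --restart=Never --image=kong:2.0.1 \
      --env="KONG_DATABASE=postgres" \
      --env="KONG_PG_HOST=<cloud-sql-host>" \
      --env="KONG_PG_PASSWORD=<password>" \
      -- kong migrations up
    # Once the old nodes are gone and everything is confirmed healthy, the
    # upgrade would normally be completed with "kong migrations finish"
    # (which, as noted above, I did not dare to run yet).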

My solution

In our case it was rather easy to just recreate the Kong database contents (only Kube-managed resources live in our Kong DB). So I simply deleted every relevant resource from the DB like this (before deleting, you may want to run kong config db_export):

delete from certificates;
delete from routes;
delete from services;
delete from snis;
delete from targets;
delete from tags;
delete from upstreams;

…and then (re)applied the Kube manifests (mostly Ingresses) → no more errors and everything worked great!
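
Put together, the whole fix was roughly the following. This is a sketch only: the backup file name, the psql connection details and the manifest path are placeholders, not my actual values.

    # 1. Take a declarative backup of the current Kong configuration
    #    (run inside a Kong pod/container that can reach the database):
    kong config db_export kong-backup.yml

    # 2. Wipe the ingress-controller-managed entities from the Kong database
    #    (adjust the connection details for your Cloud SQL instance):
    psql -h <cloud-sql-host> -U kong -d kong -c "
      delete from certificates; delete from routes; delete from services;
      delete from snis; delete from targets; delete from tags;
      delete from upstreams;"

    # 3. Re-apply the Kubernetes manifests so that the new ingress-controller
    #    recreates (and tags) everything itself:
    kubectl apply -f ingresses/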

  • At the same time I made some manifest changes, e.g. the HTTP→HTTPS redirect is much simpler nowadays (see the sketch after these notes).
  • I see now that managed-by-ingress-controller tags are used in the Kong DB, so maybe a better solution would have been to create those tags after shutting down the old ingress-controller (no downtime at all; the downtime in my solution is very minimal as well).
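
For the redirect, Kong can now do it directly on the route instead of via a separate plugin. A rough sketch of what I mean (I am writing this from memory, so treat the exact resource fields as assumptions and check them against the KongIngress CRD of your controller version):

    kubectl apply -f - <<'EOF'
    apiVersion: configuration.konghq.com/v1
    kind: KongIngress
    metadata:
      name: https-only
      namespace: default
    route:
      protocols:
      - https
      https_redirect_status_code: 301
    EOF
    # ...and then reference it from the Ingress via the annotation
    #   configuration.konghq.com: https-only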

I wanted to share my experience in case it is of any help in your situation, and maybe you can tell me what I could have done better or differently. I am now very happily running Kong 2 on Kubernetes :slight_smile:


Thanks for the experience report on upgrades. We love such write-ups.

Overall, you adopted the safest path possible in your solution, which is good.

For future reference, please upgrade only one component at a time. For example, if you are upgrading Kong, don’t change the Ingress Controller. There is a version compatibility document which details compatibility across versions of Kong and the Ingress Controller.

The one problem that is harder to solve is that, in your case, you were upgrading from a “pre-tag” to a “post-tag” version, which resulted in the resource-recreation problem. That problem cannot be solved in an automated way and requires manual intervention during the upgrade, which is exactly what you did.
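
For anyone else in the same pre-tag to post-tag situation: a quick way to check whether the existing entities already carry the controller’s tags is to look at the tags column directly. A sketch (connection details are placeholders):

    psql -h <cloud-sql-host> -U kong -d kong \
      -c "select id, name, tags from routes limit 10;"

Presumably, the untagged entities created by the older controller are exactly the ones the new controller does not recognise as its own.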

Finally, if you don’t have a good reason to use a database for Kong, then you should adopt Kong’s DB-less mode, which has seamless upgrades.
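
The core of DB-less mode is just two settings: turn the database off and point Kong at a declarative configuration (when using the Ingress Controller, the controller generates and pushes that configuration for you). A minimal standalone sketch, assuming a local kong.yml:

    # Kong 2.0.1 in DB-less mode with a local declarative config file.
    docker run -d --name kong-dbless \
      -e KONG_DATABASE=off \
      -e KONG_DECLARATIVE_CONFIG=/kong/kong.yml \
      -v "$PWD/kong.yml:/kong/kong.yml:ro" \
      -p 8000:8000 -p 8443:8443 \
      kong:2.0.1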

Good job overall and thanks again for sharing this!