I did the suggested upgrade path according to https://github.com/Kong/kong/blob/master/UPGRADE.md
- Kubernetes 1.13.x in GKE
- Postgres - Google Cloud SQL
- Old kong-ingress-controller 0.4.0 → new 0.7.1
- Old Kong (proxy) 1.1.2 → new 2.0.1
The migration was rather straightforward. However, I needed to make sure I could roll back to the old configuration easily if something went bad.
So I ended up creating a parallel load balancer (with a new external IP) and routed traffic between the two via DNS load balancing. The new Kong sat behind the new load balancer.
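The parallel setup could look roughly like this: a second LoadBalancer Service in front of the new Kong, giving it its own external IP. This is a hypothetical sketch; the names, namespace, and labels (`kong-2-proxy`, `app: kong-2`) are my assumptions, not from the original setup — only the Kong default proxy ports (8000/8443) are standard.

```shell
# Sketch: a second LoadBalancer Service that selects only the new
# Kong 2.0.1 pods, so GKE provisions a separate external IP for it.
# All names/labels here are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kong-2-proxy
  namespace: kong
spec:
  type: LoadBalancer          # GKE allocates a new external IP
  selector:
    app: kong-2               # matches only the new Kong deployment
  ports:
  - name: proxy
    port: 80
    targetPort: 8000          # Kong's default proxy port
  - name: proxy-ssl
    port: 443
    targetPort: 8443          # Kong's default TLS proxy port
EOF
# Then publish the new external IP as a second A record on the same
# DNS name, so clients are split between old and new Kong.
```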
After starting the migration with `kong migrations up` on the 2.0.1 image, and bringing kong-ingress-controller 0.7.1 (and everything else) up, everything was OK.
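The migration step above can be sketched as a one-off pod running the new image against the same database. The Cloud SQL host and credentials below are placeholders, and running it via `kubectl run` (rather than e.g. a Job or init container) is just one way to do it:

```shell
# Run `kong migrations up` once with the new 2.0.1 image against the
# existing Postgres database. Host/user/password are placeholders.
kubectl run kong-migrations --rm -it --restart=Never \
  --image=kong:2.0.1 \
  --env="KONG_DATABASE=postgres" \
  --env="KONG_PG_HOST=<cloud-sql-host>" \
  --env="KONG_PG_USER=kong" \
  --env="KONG_PG_PASSWORD=<password>" \
  -- kong migrations up
```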
While running both the old and the new stack in parallel, everything was still OK!
When I shut down the old Kong & ingress-controller, I noticed the following:
- Errors in the ingress-controller logs: the new ingress-controller tried to recreate duplicate resources for the existing Kube manifests.
- Proxy operation still worked fine, though!
- But I believe configuring/managing Kong through Kube manifests stopped working at this point (at least for the existing resources), so something had to be done to fix it.
- I am not sure if `kong migrations finish` would have solved this, but I didn't dare to try it (it is harder to roll back after that).
- In our case it was rather easy to just recreate the Kong database (only Kube resources live in our Kong DB). So I deleted every relevant resource from the DB like this (before deleting, you may want to run `kong config db_export`):
```sql
delete from certificates;
delete from routes;
delete from services;
delete from snis;
delete from targets;
delete from tags;
delete from upstreams;
```
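The cleanup can be run in one go against the Kong database, taking the declarative backup first. The Cloud SQL host, user, and database name below are placeholders for your instance:

```shell
# Export the current Kong configuration first (it can be restored
# later with `kong config db_import` if something goes wrong).
kong config db_export kong-backup.yaml

# Then wipe the proxy-configuration tables so the new
# ingress-controller can recreate them from the Kube manifests.
# Host/user/db names are placeholders.
psql -h <cloud-sql-host> -U kong -d kong <<'SQL'
delete from certificates;
delete from routes;
delete from services;
delete from snis;
delete from targets;
delete from tags;
delete from upstreams;
SQL
```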
…and then I (re)applied the Kube manifests (mostly Ingresses) -> no more errors, and everything worked great!
- At the same time I made some manifest changes, since e.g. the HTTP->HTTPS redirect is much simpler nowadays.
- I see now that `managed-by-ingress-controller` tags are used in the Kong DB, so maybe a better solution would have been to create these tags after shutting down the old ingress-controller (no downtime at all; that said, downtime with my solution was very minimal as well).
I wanted to share my experience in case it helps in your situation. And maybe you can tell me what I could have done better/differently. I am now very happily running Kong 2 on Kubernetes.