Ingress Controller with Helm Chart deployment

After deploying Kong with Helm 2 in DB-backed mode, using a standalone PostgreSQL instance in AWS RDS, I end up with a Kong pod running both the PROXY container and an Ingress Controller container.

Q1. What is the purpose of that Ingress Controller container, given that the Service points at the PROXY? Am I right in assuming the Ingress Controller container is redundant in DB-backed mode?

After disabling the Ingress Controller in values.yaml I was unable to deploy: the Kong pod could not locate its ServiceAccount.

Q2. What’s the correct way of setting it up?

The ingress controller container is https://github.com/Kong/kubernetes-ingress-controller. The controller lets you configure Kong via Kubernetes manifests rather than via the admin API, which is a good choice for most Kong deployments in Kubernetes. It's not required, but I'd generally recommend using the controller unless you know you have a specific reason not to: configuring Kong through Kubernetes manifests lets you take advantage of Kubernetes-native configuration you'll need anyway, and it integrates more easily with other Kubernetes-native tooling.
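
For example, a plain Ingress resource is enough for the controller to configure a route in Kong (a minimal sketch; the service name, path, and API version are placeholders for whatever your cluster uses):

```yaml
# Hypothetical example: route /echo through Kong to a Service named echo.
# The controller watches this resource and translates it into Kong
# routes/services, so you never touch the admin API directly.
apiVersion: networking.k8s.io/v1beta1   # extensions/v1beta1 on older clusters
kind: Ingress
metadata:
  name: echo
  annotations:
    kubernetes.io/ingress.class: kong   # match your controller's ingress class
spec:
  rules:
  - http:
      paths:
      - path: /echo
        backend:
          serviceName: echo
          servicePort: 80
```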

The controller works with either DB-backed or DB-less mode. I'd say it's more accurate to say that the controller removes the need for a database in most scenarios: since all of that configuration is already stored in Kubernetes (specifically in etcd), you don't need a separate persistent database for Kong configuration; the controller can generate DB-less configuration from the information it finds in your Kubernetes manifests.
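
Roughly, the difference comes down to a couple of values.yaml settings (a sketch only; the exact keys depend on your chart version, so check the chart's own values.yaml):

```yaml
# Sketch of the relevant values.yaml settings (key names vary between chart versions).
ingressController:
  enabled: true            # run the controller container alongside the proxy

# DB-less: no database; the controller feeds generated config straight to the proxy
env:
  database: "off"

# DB-backed alternative: point Kong at an external PostgreSQL (e.g. RDS) instead
# env:
#   database: postgres
#   pg_host: my-rds-instance.example.com   # placeholder hostname
#   pg_password:
#     valueFrom:
#       secretKeyRef:
#         name: kong-postgres
#         key: password
```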

Thanks for your help @traines, but these questions are about the DB-enabled mode.

Q1. What’s the role of the Ingress Controller Container in the DB-enabled mode?
Q2. Should it be disabled in the DB-enabled mode? (Bear in mind the Service points at the PROXY.)
Q3. If so, why does the deployment fail with an error referring to a missing ServiceAccount?

As a side note regarding DB-less mode: it's very easy to misconfigure the key-auth plugin when adding new consumers. The plugin then just stops working with no feedback, and it's not fixable without deleting the deployment and redeploying from scratch with all commands replayed in the correct order. I don't think DB-less mode (using Helm and Kubernetes manifests) is production-ready. There are a number of issues on GitHub that seem to confirm my experience.

The Ingress Controller container talks to the Kubernetes API and updates Kong through the admin port of the proxy container. The proxy does not read Kubernetes resources itself; that's the controller's job, DB or no DB.
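
Trimmed down, the pod the chart renders looks something like this (a sketch; image tags and the exact env/flag names vary by controller version):

```yaml
# Trimmed sketch of the rendered pod: two containers in one pod.
# The controller watches the Kubernetes API and pushes configuration into the
# proxy's admin API over localhost; the proxy itself never reads manifests.
spec:
  containers:
  - name: ingress-controller
    image: kong/kubernetes-ingress-controller:<version>
    env:
    - name: CONTROLLER_KONG_ADMIN_URL     # exact name depends on controller version
      value: https://localhost:8444
  - name: proxy
    image: kong:<version>
    ports:
    - containerPort: 8000   # proxy traffic (what the Service points at)
    - containerPort: 8444   # admin API, only reachable from inside the pod
```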

The controller performs the same function in both DB-backed and DB-less modes, and is recommended for Kubernetes deployments of either. The exact interaction with the admin API changes, but that’s a transparent implementation detail.

DB-less mode is considered production-ready; which issues are you referring to? I wasn't able to find anything that looked relevant in a search. Bugs do crop up in either mode, so please report an issue if you run into something that looks broken and isn't already reported.

https://github.com/Kong/kubernetes-ingress-controller/issues/735 may affect you here if you use a custom ingress class. That problem generally affects resources that shouldn't require an ingress class: a credential is loaded via its attached KongConsumer, which does require an ingress.class annotation when you use a non-default class (either kong or no class annotation is accepted if you haven't overridden the default). We are working on a fix, but if you're affected in the meantime you should be able to work around it by adding an ingress.class annotation to the credential Secret. Note that this issue affects either mode, since it's a bug in the controller.
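
For example, the workaround on a key-auth credential Secret would look roughly like this (names and the class value are placeholders):

```yaml
# Workaround sketch: annotate the credential Secret with your ingress class so
# the controller still picks it up when you use a custom class (issue #735).
apiVersion: v1
kind: Secret
metadata:
  name: alice-key-auth
  annotations:
    kubernetes.io/ingress.class: my-custom-class   # your controller's class
type: Opaque
stringData:
  kongCredType: key-auth
  key: my-api-key        # placeholder credential value
```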

Killing the controller containers (exec into the running container and run kill 1) should also work without disrupting traffic: it forces a restart of the controller container, and the new instance will see the latest credential. The annotation workaround is still better, though, because it doesn't require any intervention after the initial change, whereas with the kill workaround you'd need a restart after every change to the credential value.
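
Something along these lines, with the pod and container names taken from your own release:

```sh
# Restart only the controller container; the proxy container keeps serving traffic.
kubectl exec -it <kong-pod> -c ingress-controller -- kill 1
```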

The ServiceAccount issue could be a Helm 2 problem we're not aware of (check the output of helm template to see whether the account is being rendered properly), or it could be caused by some old resource that wasn't properly deleted. In the latter case, tracking down that problem resource would fix it, though it's probably quicker to give the account a new name to get things working, and then hunt for the offending old config later at your convenience.
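
For the helm template check, something like this (release name and chart path are placeholders):

```sh
# Render the chart locally with your values and confirm the ServiceAccount
# is emitted with the name the Deployment references.
helm template --name my-kong -f values.yaml ./charts/kong | grep -B2 -A5 ServiceAccount
```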