Multiple kong-ingress-controllers compete to take over the ingress process and communication with cert-manager

Hello Kong friends

I would like, if possible, for you to check the behavior I am sharing here in relation to this deployment. I am interested to know your thoughts about this situation.

I have the following deployment in order to run multiple Kong ingress controllers for different environments.

Here is a small case study of the behavior between the development and production environments, each of which uses a different Kong deployment. These are the facts:

  • namespace kong

kong-ingress-controller for the development environment (kong and kong-proxy)

  • namespace development

app-service-dev C# application service

app-Service-dev-ingress, used by app-service-dev and pointing to the kong-ingress-controller of the development environment (namespace kong) above

  • namespace kong-production

kong-ingress-controller for the production environment (kong and kong-proxy)

  • namespace production

app-service-production C# application service

app-Service-prod-ingress, used by app-service-production and pointing to the kong-ingress-controller of the production environment (namespace kong-production) above

In the first instance, both app-Service-dev-ingress and app-Service-prod-ingress (located in the development and production namespaces) are working.

We can see that both ingress resources work and appear to be integrated with their respective kong-ingress-controller in their own namespaces (kong for the dev service and kong-production for the production service).

But when I update an ingress resource in either of the namespaces mentioned above (app-Service-dev-ingress or app-Service-prod-ingress), the kong-ingress-controller watching that ingress resource takes over all ingress operations inside my cluster and records all routes in its own database.

As a result of this behavior, while I was working with the production ingress controller and the app-Service-prod-ingress resource, that controller took over monitoring of my cluster and also caught the requests made to the development domain, even though those requests are supposed to be managed by the development ingress controller and resource.

Finally, in my production Kong database, I have routes for both the app-service-dev C# development service and the app-service-production C# production service.

Let’s have a look at the following cases:

  • This is my kong (development) database

When the ingress resource in the development environment was first created, the kong-ingress-controller in the development environment (namespace kong) picked up this route and saved it.
Note the date in orange, April 3, 2019, and the route with cert-manager in red.

  • This is my kong-production database

When the ingress resource in the production environment was first created, the kong-ingress-controller in the production environment (namespace kong-production) picked up this route and saved it.
Note the date on the first record in the blue rectangle, April 4, 2019 at 10:27, at which cert-manager configured the path attribute.

But then I made some requests to the zcrmdev domain, and the production database picked up these requests and stored them, as we can see in the third record, in orange, on April 4, 2019 at 10:34.

I am not sure whether this has some influence on how these two kong-ingress-controllers in the different namespaces work, because each one has registered the correct path for the domain that corresponds to it; but I think the subsequent route records should not be stored in the opposite database.

My understanding is that each kong-ingress-controller should receive the requests for its own domain, and kong-proxy should route those requests to the respective ingress resource, from there to the respective service pods in the different namespaces, and from there to the respective database.

My conclusion is that the Kong controllers I have are fighting over who takes control of the cluster.

If you see the diagram above, I have that deployment working with Kong and cert-manager (with support of acme-kong-kube-helper).

And when a kong-ingress-controller has to communicate with cert-manager in order to validate certificates, this situation also appears: sometimes cert-manager wants to validate the development environment domain certificate while the controller that has taken control belongs to the production environment, so the cert-manager pod does not know which kong-ingress-controller to use to perform the validation.

I think this happens because cert-manager has not found the order to sign the certificate created previously: the order is presented to be signed through the kong-ingress-controller of the development environment in the kong namespace, but that Kong does not have control, because the one in control is the kong-ingress-controller of the production environment in the kong-production namespace.

Do these statements sound logical?

Is this normal?

Are my ingress controllers (located in the kong and kong-production namespaces) competing to take over the ingress operations inside my cluster?

It is possible to run multiple ingress controllers inside a single Kubernetes cluster, but, as you said, it leads to a situation where both controllers fight to satisfy the same Ingress resources.

You can solve this problem using the kubernetes.io/ingress.class annotation so that your dev and prod ingress controllers don’t step over each other.
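As a minimal sketch, the annotation goes in the Ingress metadata (the class value kong-dev here is only an example; it must match the class the controller is configured with):

```yaml
metadata:
  name: app-service-dev-ingress
  namespace: development
  annotations:
    # only the controller configured with this ingress class will act on this resource
    kubernetes.io/ingress.class: "kong-dev"
```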

Hope that helps!

Hi @hbagdi, thanks for taking the time to read this.

How can I configure my ingress resource to listen to a different class? I am using the class "kong", which points to kong-ingress-controller; this is the standard annotation, I think, but how can I differentiate one Kong instance from another?

That is, if I want to use multiple kong-ingress-controllers, how can I point one ingress resource to one Kong and another ingress resource to the other Kong?

You can configure each of your Kong Ingress controllers to listen for Ingress resources of a specific class using the --ingress-class CLI flag.

So, for your dev environment, you could add --ingress-class dev-env to the Kong Ingress Controller’s CLI flags.

And then, in the same way, you can change that for your prod Kong cluster as well.
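As a sketch, the relevant container arguments for the two controllers could look like this (the class values dev-env and prod-env are just examples; any pair of distinct names works):

```yaml
# controller in the kong namespace (development)
args:
- /kong-ingress-controller
- --ingress-class=dev-env

# controller in the kong-production namespace (production)
args:
- /kong-ingress-controller
- --ingress-class=prod-env
```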

If this resolves your question, please mark the post as such to help future users. Thanks!


Hi @hbagdi. Thanks for the answer. But there is something that is not clear to me:

What should the value of the --ingress-class CLI flag be?
Should it be the name of my namespace?

Or can any name be defined in the --ingress-class CLI flag?

Does this mean that if I have a namespace named development, then the configuration of my kong-ingress-controller should look like this?

- name: ingress-controller
  args:
  - /kong-ingress-controller
  # the kong URL points to the kong admin api server
  - --kong-url=https://localhost:8444
  - --admin-tls-skip-verify
  # the default service is the kong proxy service
  - --default-backend-service=kong/kong-proxy
  # service from which we extract the IP address(es) to use in the Ingress status
  - --publish-service=kong/kong-proxy
  - --ingress-class=development

And then, in my ingress resource, if I want to point to that kong-ingress-controller,
I should add the "development" class?
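In other words, something like this (the host and backend names are illustrative placeholders):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-service-dev-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: "development"
spec:
  rules:
  - host: dev.example.com               # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service-dev  # placeholder backend service
          servicePort: 80
```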

The documentation says:

In some deployments, one might wish to use multiple Kong clusters in the same k8s cluster (e.g. one which serves public traffic, one which serves “internal” traffic). For such deployments, please ensure that in addition to different ingress-class, the --election-id also needs to be different.
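Following that note, explicitly distinct election IDs could be sketched like this (the ID values are arbitrary examples):

```yaml
# kong namespace (development controller)
- --ingress-class=development
- --election-id=ingress-controller-leader-dev

# kong-production namespace (production controller)
- --ingress-class=production
- --election-id=ingress-controller-leader-prod
```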

That development value ("development") will then be the value I use in my ingress class annotation?

Yes, any name can be defined for the ingress class, and whatever name you use in the CLI flag, you should use the same for the annotation.

You can leave the --election-id flag as is; it will generate a different election ID for each of your ingress classes.


Hi Team,
I want to configure multiple Kong instances on a single PostgreSQL database. I configured my API in the Kong CE service and I can use the Kong service easily. But my requirement is to handle multiple Kong instances using a single database; the idea is to have a backup instance in case the primary instance goes down.
Please guide me if anyone has an idea about this scenario.
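As a starting point, and assuming the official Kong image, both instances would simply point at the same database through the standard KONG_* environment variables (the host and Secret values here are placeholders):

```yaml
# identical database settings on the primary and the backup Kong instance
env:
- name: KONG_DATABASE
  value: "postgres"
- name: KONG_PG_HOST
  value: "postgres.example.internal"   # placeholder host
- name: KONG_PG_DATABASE
  value: "kong"
- name: KONG_PG_USER
  value: "kong"
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: kong-postgres              # placeholder Secret
      key: password
```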