Managing multiple Kong instances - creating each Kong's resources in its respective database

I have two Kong instances in a Kubernetes cluster, each with its own database.

  • The Kong sandbox instance is named kong-ingress-controller and its configuration is as follows:

[screenshot: kong-ingress-controller deployment configuration]

  • I also have a Kong production instance, named kong-ingress-controller-production, whose configuration is as follows:

[screenshot: kong-ingress-controller-production deployment configuration]

Even under this configuration schema, I can deploy both Kongs (sandbox and production instances) on the same port, 8001, because each Kong lives in a different pod. The sketch below illustrates the setup.
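What follows is a minimal sketch of what the two controller Deployments might look like. Only the Deployment names and the --ingress-class values come from the actual setup; the container layout, image, and everything else are assumptions for illustration.

```yaml
# Sandbox controller (sketch: only the Deployment name and the
# --ingress-class value are taken from the actual setup).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-ingress-controller
  labels: {app: kong-ingress-controller}
spec:
  selector:
    matchLabels: {app: kong-ingress-controller}
  template:
    metadata:
      labels: {app: kong-ingress-controller}
    spec:
      containers:
        - name: ingress-controller
          image: kong-ingress-controller:0.4.0   # assumed image/version
          args:
            - --ingress-class=kong               # sandbox class
---
# Production controller (same sketch, different name and class).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-ingress-controller-production
  labels: {app: kong-ingress-controller-production}
spec:
  selector:
    matchLabels: {app: kong-ingress-controller-production}
  template:
    metadata:
      labels: {app: kong-ingress-controller-production}
    spec:
      containers:
        - name: ingress-controller
          image: kong-ingress-controller:0.4.0   # assumed image/version
          args:
            - --ingress-class=kong-production    # production class
```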

In the Kong sandbox instance I have created the following Kong resources:

  • basic-auth and acl KongPlugins

  • 2 KongConsumer resources, each with its respective KongCredential

  • I have also configured the --ingress-class=kong parameter on the Kong sandbox controller, and I have an Ingress resource pointing to it

In the Kong sandbox environment all of the resources mentioned above are created and stored in its Kong database; a sketch of them is shown below.
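This is a rough sketch of those sandbox resources, assuming the configuration.konghq.com/v1 CRDs used by this version of the Kong ingress controller. The resource names, usernames, and passwords are hypothetical placeholders:

```yaml
# basic-auth KongPlugin (the acl one is analogous); names are hypothetical
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: sandbox-basic-auth
plugin: basic-auth
---
# One of the two KongConsumers
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: sandbox-consumer
username: sandbox-consumer
---
# Its KongCredential
apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: sandbox-credential
type: basic-auth
consumerRef: sandbox-consumer
config:
  username: sandbox-user         # placeholder
  password: sandbox-password     # placeholder
```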

It does not work this way in the Kong production environment. Let's see…

I am also creating the following in the Kong production environment:

  • basic-auth and acl KongPlugins

  • 1 KongConsumer resource with its respective KongCredential

  • I have also configured the --ingress-class=kong-production parameter on the Kong production controller, and I have an Ingress resource pointing to it

What is happening?

Both my Kong sandbox and production instances are running and working, but the Kong sandbox database is taking over the creation and storage of the KongConsumers and KongCredentials.

These resources are not being stored in the Kong production database; the credentials, consumers, basic-auth and ACL plugins are stored in the sandbox database…

Only the KongPlugins are being stored in the Kong production database.

It looks as if the two Kong connections were crossed at some point, or at least as if the Kong sandbox environment is listening for and picking up the requests intended for the Kong production environment.

As evidence of this, the Kong production controller even ignores the creation of the KongConsumer and its KongCredential. These are the controller logs related to this claim:

I0509 14:23:21.759720       6 kong.go:113] syncing global plugins
I0509 14:29:57.353944       6 store.go:371] ingress rule without annotations
I0509 14:29:57.353963       6 store.go:373] ignoring add event for plugin swaggerapi-production-basic-auth based on annotation kubernetes.io/ingress.class with value 
I0509 14:29:57.395732       6 store.go:371] ingress rule without annotations
I0509 14:29:57.395756       6 store.go:373] ignoring add event for plugin swaggerapi-production-acl based on annotation kubernetes.io/ingress.class with value 
I0509 14:29:57.438604       6 store.go:439] ignoring add event for consumer zcrm365dev-consumer based on annotation kubernetes.io/ingress.class with value 
I0509 14:29:57.487996       6 store.go:505] ignoring add event for credential zcrm365dev-credential based on annotation kubernetes.io/ingress.class with value 
I0509 14:29:57.529698       6 store.go:505] ignoring add event for credential zcrm365-prod-acl-credential based on annotation kubernetes.io/ingress.class with value 

It's weird, because I am specifying the --ingress-class parameter in each Kong deployment, and each one has its own value:

  • kong production environment --> - --ingress-class=kong-production
  • kong sandbox environment --> - --ingress-class=kong

Each Ingress resource also points to its specific Kong class via the kubernetes.io/ingress.class annotation, like this (a sketch follows the list):

  • Ingress pointing to kong sandbox —> kubernetes.io/ingress.class: "kong"

  • Ingress pointing to kong production —> kubernetes.io/ingress.class: "kong-production"
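For concreteness, the two Ingress resources look roughly like the following sketch; only the kubernetes.io/ingress.class values are real, while the resource names, hosts, and backend services are assumptions:

```yaml
# Ingress routed by the sandbox controller (host/backend are assumptions)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-sandbox
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
    - host: api-sandbox.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: api-service       # assumed backend Service
              servicePort: 80
---
# Ingress routed by the production controller (host/backend are assumptions)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-production
  annotations:
    kubernetes.io/ingress.class: "kong-production"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: api-service       # assumed backend Service
              servicePort: 80
```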

Does anyone know what is happening here?

How can I redirect, or at least debug, this behavior?
I have been checking the logs and using port-forward to confirm that both Kong instances are available and that neither of them is taking over the other, as we can see in this picture and in the commands below:

[screenshot: port-forward checks against both Kong admin APIs]
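The check is roughly the following, assuming the Kong admin API (port 8001) runs in the same pod as each ingress controller; the local ports and the /consumers endpoint are just one way to compare what each database actually contains:

```sh
# Forward each Kong admin API (8001) to a distinct local port
kubectl port-forward deployment/kong-ingress-controller 8001:8001 &
kubectl port-forward deployment/kong-ingress-controller-production 8002:8001 &

# List the consumers each instance actually stored, to see which
# database received the KongConsumer resources
curl -s http://localhost:8001/consumers
curl -s http://localhost:8002/consumers
```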

It was solved! I added the kubernetes.io/ingress.class: "kong-production" annotation to the production Kong resources (the KongConsumer, KongCredential, and KongPlugin objects), and now my production instance has picked up the credentials, plugins, ACLs, and consumers. :smile:
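For reference, this is roughly what the annotated production consumer and credential look like now; the resource names are the ones that appear in the controller logs above, while the username and credential contents are placeholders:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: zcrm365dev-consumer
  annotations:
    kubernetes.io/ingress.class: "kong-production"   # the missing annotation
username: zcrm365dev-consumer                        # assumed username
---
apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: zcrm365dev-credential
  annotations:
    kubernetes.io/ingress.class: "kong-production"   # the missing annotation
type: basic-auth
consumerRef: zcrm365dev-consumer
config:
  username: some-username        # placeholder
  password: some-password        # placeholder
```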