Hello Kong friends,

I would like to ask you, if possible, to check the behavior I am sharing here in relation to this deployment; I am interested in your thoughts about this situation.

I have the following deployment in order to run multiple Kong ingress controllers for different environments.

As a small case study, the development and production environments each use a different Kong deployment. These are the facts:
Development environment:

- `kong-ingress-controller` (kong and kong-proxy), located in namespace `kong`
- `app-service-dev` — a C# application service
- `app-Service-dev-ingress` — the ingress used by `app-service-dev`, pointing to the `kong-ingress-controller` of the development environment above

Production environment:

- `kong-ingress-controller` (kong and kong-proxy), located in namespace `kong-production`
- `app-service-production` — a C# application service
- `app-Service-prod-ingress` — the ingress used by `app-service-production`, pointing to the `kong-ingress-controller` of the production environment above
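For reference, the two ingress resources look roughly like this. This is only a sketch reconstructed from the description above: the hostnames come from the URLs below, the service port `80` is an assumption, and the metadata names are lowercased because Kubernetes object names must be valid DNS-1123 labels.

```yaml
# Sketch of the development ingress (extensions/v1beta1 was the
# Ingress API version in use in early 2019).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-service-dev-ingress
  namespace: development
spec:
  rules:
    - host: zcrm365dev.possibilit.nl
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service-dev
              servicePort: 80   # assumed port
---
# Sketch of the production ingress.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-service-prod-ingress
  namespace: production
spec:
  rules:
    - host: zcrm365prod.possibilit.nl
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service-production
              servicePort: 80   # assumed port
```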
In the first instance, both `app-Service-dev-ingress` and `app-Service-prod-ingress` (located in the development and production namespaces) are working. If you go to:

- http://zcrm365dev.possibilit.nl/index.html — `app-service-dev`, the C# application service in the development namespace
- http://zcrm365prod.possibilit.nl/index.html — `app-service-production`, the C# application service in the production namespace

we can see that both ingress resources are working and appear to be integrated with their respective `kong-ingress-controller` in their own namespaces (`kong` for the dev service and `kong-production` for the production service).
But when I update an ingress resource in either of the namespaces mentioned above (`app-Service-dev-ingress` or `app-Service-prod-ingress`), the `kong-ingress-controller` watching that ingress resource takes over all ingress operations inside my cluster and redirects all requests to its own database.

As a result of this behavior, I was recently working with the production ingress controller and the `app-Service-prod-ingress` resource; that controller took over monitoring my cluster and also caught requests made to the zcrm365dev.possibilit.nl domain, even though those requests are supposed to be handled by the development ingress controller and resource.

Finally, in my production Kong ingress controller database I have routes for both `app-service-dev` (the C# development service) and `app-service-production` (the C# production service).
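To make the symptom concrete, this is roughly what I see when listing routes through the production Kong Admin API (`GET /routes`). This is a hypothetical, abbreviated response; the timestamps are illustrative and fields such as `id` are omitted:

```json
{
  "data": [
    {
      "hosts": ["zcrm365prod.possibilit.nl"],
      "paths": ["/"],
      "protocols": ["http", "https"]
    },
    {
      "hosts": ["zcrm365dev.possibilit.nl"],
      "paths": ["/"],
      "protocols": ["http", "https"]
    }
  ]
}
```

The second entry is the problem: a route for the development domain stored in the production Kong database.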
Let’s have a look at the following cases:
- This is my kong (development) database.
The first time the ingress resource in the development environment was created, the `kong-ingress-controller` of the development environment (namespace `kong`) took over this route and saved it. Have a look at the date in orange, April 3, 2019, and the route created by cert-manager in red.
- This is my kong-production database.
The first time the ingress resource in the production environment was created, the `kong-ingress-controller` of the production environment (namespace `kong-production`) took over this route and saved it. Have a look at the date of the first record inside the blue rectangle, April 4, 2019 at 10:27, where cert-manager configured the path attribute.
But then I made some requests to the zcrm365dev domain, and the production database took those requests and stored them, as we can see in the third record, in orange, on April 4, 2019 at 10:34.
I am not sure whether this influences how these two `kong-ingress-controller` instances in different namespaces work, because each of them has registered the correct path for the domain that corresponds to it; but I think the subsequent route records should not be stored in the opposite database.
My expectation is that each `kong-ingress-controller` receives requests only for its own domain (zcrm365dev.possibilit.nl or zcrm365prod.possibilit.nl), and that `kong-proxy` routes those requests to the respective ingress resource, from there to the respective service pods in each namespace, and from there to the respective database.

My conclusion is that my Kong controllers are fighting over which one takes control of the cluster.
If you look at the diagram above, I have this deployment working with Kong and cert-manager (with support from `acme-kong-kube-helper`).
Then, when the `kong-ingress-controller` has to communicate with `cert-manager` in order to validate certificates, the same situation appears: sometimes `cert-manager` wants to validate the certificate for the zcrm365dev.possibilit.nl development domain, but the controller currently in control belongs to the production environment, so the `cert-manager` pod does not know which `kong-ingress-controller` to use to perform the validation.
I think this happens because `cert-manager` cannot complete the order to sign the zcrm365dev.possibilit.nl certificate created previously: the order is meant to be fulfilled through the `kong-ingress-controller` of the development environment in the `kong` namespace, but that Kong does not have control, since control was taken by the `kong-ingress-controller` of the production environment in the `kong-production` namespace.
Do these statements sound logical?
Is this normal?
Are my ingress controllers (located in the `kong` and `kong-production` namespaces) competing to take over the ingress operations inside my cluster?
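A related question: could this be about the ingress class? As far as I understand, each controller can be scoped with the `--ingress-class` flag and a matching `kubernetes.io/ingress.class` annotation on the ingresses it should claim. A sketch of what I mean; the class name `kong-dev` is just an example, and it would have to match the flag passed to the development controller:

```yaml
# Hypothetical: give each environment its own ingress class so that
# each controller only claims its own ingresses.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-service-dev-ingress
  namespace: development
  annotations:
    # Dev controller would be started with --ingress-class=kong-dev;
    # the production controller would use a different class.
    kubernetes.io/ingress.class: "kong-dev"
spec:
  rules:
    - host: zcrm365dev.possibilit.nl
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service-dev
              servicePort: 80   # assumed port
```

Would this be the expected way to keep the two controllers from interfering with each other?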