Kong allows only one plugin of a given kind per Ingress

Hi,

We are trying Kong as our new API Gateway, but we face some strange behavior with Kubernetes and Ingresses.

We had one Ingress before, with external-dns and cert-manager, without any issues.
We now want to switch to 5 Ingresses with Kong, each with a different configuration (KongPlugin(s) and KongIngress(es) attached to each Ingress), all pointing to the same service (we are preparing to split into microservices later on).
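For reference, each of the 5 looks roughly like the sketch below; all names, hosts, and plugin settings here are placeholders, not our real configuration. A KongPlugin is attached to an Ingress through the konghq.com/plugins annotation, and the Ingress points at the single shared service:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: cors-public              # placeholder name
plugin: cors
config:
  origins:
  - https://app.example.com      # placeholder origin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: api-public               # placeholder name
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: cors-public
spec:
  rules:
  - host: api.example.com        # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: monolith  # the single shared service
          servicePort: 80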

Kong shows some strange behavior when we do so:

  • If we deploy them all at the same time, we get {"message":"no Route matched with those values"} as the response on every host.
  • If we deploy them one by one, the first 3 Ingresses are configured properly (plugins and route redirects are applied by Kong), but any Ingress deployed after that returns {"message":"no Route matched with those values"}. If we then remove them one by one, no new Ingress works anymore. Trashing Kong and reinstalling it resets the problem, which comes back as soon as we add too many Ingresses.

Is there any limitation on how many Ingresses Kong is supposed to handle? Or on how many Ingresses per service?

We are using the official chart 1.10.0 with a PostgreSQL DB on AWS.

Note: each Ingress configuration has been tested independently and works perfectly fine with Kong when used within the 3-Ingress limit described above. No matter in which order we deploy the Ingresses, only the first 3 work; the following ones respond with {"message":"no Route matched with those values"}.

Follow-up on this issue, after some trial and error to understand the problem.

After deploying a few Ingresses, the Kong pod logs show some errors about already-existing plugins, along with requeuing warnings for random resources that have nothing to do with Ingresses or Kong:

time="2020-10-02T13:11:50Z" level=error msg="failed to update kong configuration: inserting plugins into state: entity already exists" component=controller
time="2020-10-02T13:11:50Z" level=error msg="failed to sync: inserting plugins into state: entity already exists" component=sync-queue
time="2020-10-02T14:54:45Z" level=warning msg="requeuing sync for 'kube-system/node-controller-token-sbz6w'" component=sync-queue
time="2020-10-02T14:54:52Z" level=warning msg="requeuing sync for 'cert-manager/cert-manager-cainjector-token-k4tts'" component=sync-queue

We can’t make any sense of this: why is Kong trying to sync resources from all namespaces of the Kubernetes cluster while complaining about duplicate plugins?

We still don’t understand why Kong logs these random warnings about requeuing Kubernetes resources it’s not supposed to interact with, but following up on the previous posts, we were able to narrow down the problem.

It seems you can’t have more than one plugin of each kind for a given Ingress (e.g. you can’t link several CORS plugins to one Ingress).
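To make that concrete, the failing pattern is roughly the following (plugin names and origins are just examples), i.e. two KongPlugins of the same type referenced from a single Ingress:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: cors-team-a          # example name
plugin: cors
config:
  origins:
  - https://a.example.com
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: cors-team-b          # example name
plugin: cors
config:
  origins:
  - https://b.example.com

Referencing both from one Ingress, e.g. konghq.com/plugins: cors-team-a, cors-team-b, is what produces the "inserting plugins into state: entity already exists" errors shown above.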

Is this intended, or is it a bug?

That’s by design, yes: for a given request, only one instance of a plugin may run, and broadly speaking, Kong will run the most specific instance (e.g. if you have a CORS plugin for a consumer+route, that instance will take precedence over a CORS plugin instance on the route alone).

In the context of the CORS plugin specifically, is there a reason you’d want/need to have multiple CORS plugins apply to the same request? Can you elaborate on the use case where that seemed like it’d be a requirement? The single-instance-only behavior isn’t something that can easily be changed, but we can perhaps suggest an alternative configuration that’d better fit that model.

Thanks for your quick reply @traines

That’s not really an issue anymore now that we understand the problem; we can indeed design it differently.

The use case was that we defined several CORS plugins (grouped by host configuration and requirements) to stay DRY and avoid duplicate configuration; that way we could apply one or more CORS plugins to each Ingress depending on our needs.

Given the restriction, we will have to design a way to create a unique CORS plugin for each Ingress based on our needs while avoiding duplicated code/configuration. That’s OK, we just need to think about the problem in reverse.
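For example (just a sketch of what we have in mind, with made-up value and template names), we could render one CORS KongPlugin per Ingress from a single Helm template, so the shared CORS settings live in one place in values.yaml:

{{- /* templates/cors-plugins.yaml (hypothetical layout) */ -}}
{{- range .Values.apis }}
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: cors-{{ .name }}
plugin: cors
config:
  origins:
{{ toYaml .corsOrigins | indent 4 }}
{{- end }}

Each Ingress would then reference only its own generated plugin (konghq.com/plugins: cors-<name>), so there is still a single source of truth for the shared settings.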

Our approach seemed very handy, but now that you mention it, that would probably be a nightmare to handle on the Kong side.

By the way @traines, would you have any insight into these warnings and why Kong logs them only when there’s a duplicate plugin it can’t sync?
We were curious to understand whether we misconfigured something or whether it’s normal for Kong to scan every existing Kubernetes resource.

Not sure I fully understand the question, but I think it may help to give a high-level overview of the controller’s sync behavior. There may be some errors in this description, as my memory is hazy on it, but I think it’s largely correct:

  1. When the controller starts, it reads the complete set of existing K8S resources and the Kong configuration, and tries to reconcile them.
  2. After that initial sync, the controller normally goes into a waiting state, where it doesn’t do much.
  3. When you add new K8S resources or modify existing ones, the K8S API server sends an event to the controller. The controller then inspects the new resource, determines what configuration change it needs to make, and attempts to submit that change to Kong. If the change succeeds, it returns to (2) and the sync loop continues. If the change fails, the controller retries the sync continuously and will in fact never give up, even if the change can never succeed. It effectively gets stuck there, because it can’t return to the waiting state (and accept new config).

The failure case in (3) is a known problem, and it’s quite annoying. We’re working on a broad set of approaches that should minimize it if not avoid it altogether, though I don’t have a full set of details just yet–stay tuned to future releases :slight_smile:

For now, we recommend enabling the admission webhook (ideally it’d be enabled by default, but there are some things we can’t effectively automate yet). It effectively attempts the configuration change before actually entering (3): it asks Kong “would you accept this change?” and, if Kong wouldn’t, rejects the new resource when you apply it with kubectl, preventing you from creating it and getting the controller stuck. There are some edge cases where it can’t detect an issue, however, so it’s not 100% foolproof, but it will catch a lot of potential issues before they start.
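If it helps, enabling it through the official chart is roughly a values.yaml change like the one below (key names as in the kong/kong chart; double-check the docs for your chart version, since the webhook also needs a certificate and that part may involve an extra step):

ingressController:
  admissionWebhook:
    enabled: true
    # port the webhook server listens on (chart default)
    port: 8080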


Thanks for clarifying @traines, sorry my question was unclear but you got it right.

We’ll give the admission webhook you’re talking about a shot; that would definitely help us avoid problems during deployment and catch them in CI/CD pipelines instead of debugging later on.

Thanks again for your detailed answers and great support.
Kong is a great product and we’re looking forward to using it actively.