ClusterRole and ClusterRoleBinding creation

In our organization, a separate team has admin access on the Kubernetes cluster, so I am wondering which resources require cluster admin access and must be created prior to installing the Kong Ingress Controller on Kubernetes.
I see that ClusterRole and ClusterRoleBinding require admin access, and probably the Custom Resource Definitions (CRDs) as well, but which other resources in the Kong Helm charts should be created by the team with admin access on the K8S cluster prior to running the Kong Ingress Controller Helm installation?


For a fresh installation using the default configuration:

$ helm template example -n helmgress /tmp/symkong | grep -i kind            
kind: ServiceAccount
kind: ConfigMap
kind: CustomResourceDefinition
    kind: KongConsumer
kind: CustomResourceDefinition
    kind: KongCredential
kind: CustomResourceDefinition
    kind: KongPlugin
kind: CustomResourceDefinition
    kind: KongClusterPlugin
kind: CustomResourceDefinition
    kind: KongIngress
kind: CustomResourceDefinition
    kind: TCPIngress
    kind: ""
kind: ClusterRole
kind: ClusterRoleBinding
  kind: ClusterRole
  - kind: ServiceAccount
kind: Role
kind: RoleBinding
  kind: Role
  - kind: ServiceAccount
kind: Service
kind: Deployment

I believe only the items you’ve already mentioned (CRDs and the ClusterRole* resources) will normally require special permissions (mostly, the ability to create cluster-wide resources).

CRDs can be handled separately by applying the CRD manifest with kubectl apply: Helm 3 doesn’t manage CRDs as part of the release (it only creates them at install if needed) and we don’t have any templating in that file, so in practice it’s often easiest to have a cluster admin create the CRDs directly. They will require updates occasionally, but we’ll indicate when that’s necessary.
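For example, a cluster admin could apply the CRD file from a local copy of the chart and confirm registration afterwards (the manifest path below is an assumption; adjust it to wherever the CRD file lives in your checkout):

```shell
# Apply the Kong CRDs as a cluster admin, outside of Helm's release management.
# The path is a placeholder for the CRD file in your copy of the chart.
kubectl apply -f /tmp/symkong/crds/custom-resource-definitions.yaml

# Verify the Kong CRDs are registered with the API server.
kubectl get crds -o name | grep -i konghq
```

Once these exist, the non-admin team can install the release without cluster-wide create permissions.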

The cluster RBAC resources may be a bit more difficult to work with because they are templated (mainly to reference the ServiceAccount’s name). We may want to explore reduced-permissions templates in the future to work with the single-namespace deployment model discussed in Kong Ingress Controller without ClusterRole creation, but don’t have anything like that currently.

Absent support for this in the existing templates, that’d probably require merging permissions from the ClusterRole into the Role by hand and maintaining your own fork of the chart until there’s native support for it (we don’t have a timeline, but I’ll mark it down as something to look into).

Thanks Travis for your feedback.
So, if I understand well your reply:

  1. I can just let my K8S cluster admin team create all the Kong CRDs.
  2. Change the ClusterRole and ClusterRoleBinding into a Role and RoleBinding, keeping the other Role and RoleBinding unchanged.
  3. Delete the CRDs, ClusterRole, and ClusterRoleBinding from the Helm charts.
  4. Run the helm install command with the CONTROLLER_WATCH_NAMESPACE flag set to the specific namespace Kong Ingress will apply to.
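For step 4, the install might look something like the sketch below. The ingressController.env mechanism (chart keys upcased and prefixed with CONTROLLER_) is an assumption about this chart version; check your chart’s values reference before relying on it:

```shell
# Install into a single namespace, restricting the controller's watch
# to that namespace only. The env key name is an assumption; confirm
# how your chart version maps values to CONTROLLER_* variables.
helm install example /tmp/symkong \
  --namespace helmgress \
  --set ingressController.env.watch_namespace=helmgress
```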

Are those steps enough to deploy Kong Ingress Controller to a specific Namespace without creating ClusterRoles and Bindings?

Thank you again

They should be. This is still a bit of untested territory, so what’s presented so far are more high-level guidelines, and some level of trial and error will probably be necessary. Please keep us updated with questions about anything that doesn’t work and/or with whatever you wind up with for a successful configuration. That will help inform our future work to implement this as a standard configuration in the chart/controller.

For (2) I’d originally intended to merge permissions from the ClusterRole into a single Role, but what you’ve proposed (creating two Roles, one of which contains the permissions originally in the ClusterRole) should work also (and will probably be easier to template later).

(3) isn’t necessary: once the CRDs are in place (after (1)) Helm will just ignore them.

Thanks Travis for your reply.
Just one more question. The ClusterRole template mentions the verbs “list” and “watch” on “secrets” resources:

  - apiGroups:
    - ""
    resources:
    - endpoints
    - nodes
    - pods
    - secrets
    verbs:
    - list
    - watch

Is that really required? Do you think it would be possible to remove “secrets” from the resource list, as this may be a security risk?

Thank you

It is, yes. We use Secrets for storing sensitive plugin configuration, credentials for consumers, and several other purposes.

We would like to reduce our access to them (this concern comes up often), but currently Kubernetes RBAC doesn’t afford us any way to restrict our access further (e.g. by labeling Secrets that the controller should have access to): you either get access to all Secrets (in a namespace or cluster-wide depending on role scope) or none.

Thanks Travis for your reply again.
In that case, how could I transform the ClusterRole into a simple Role, given the fact that in the ClusterRole template there are mentions to nodes, endpoints, secrets, etc…
I see

I do not really know how could I transform a ClusterRole template into a Role template? Which elements should I keep in the Role template?
Any idea?


It’s exercise-for-the-reader territory. I don’t know myself :slightly_smiling_face:

Intuitively, you should be able to just change the type and add a namespace (ditto for the binding). The rulesets shouldn’t need to change, as they’re PolicyRule arrays in both.
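A sketch of that conversion, applied directly with kubectl. The names and the exact rule set here are illustrative placeholders; copy the real rules and ServiceAccount name from your rendered chart output:

```shell
# Recreate the ClusterRole's permissions as a namespaced Role plus a
# RoleBinding. Names and rules are placeholders; paste in the rules
# from the chart's rendered ClusterRole.
kubectl apply -n helmgress -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kong-kong-cluster-rules
  namespace: helmgress
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - pods
  - secrets
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kong-kong-cluster-rules
  namespace: helmgress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kong-kong-cluster-rules
subjects:
- kind: ServiceAccount
  name: kong-kong
  namespace: helmgress
EOF
```

Note that cluster-scoped resources (nodes, KongClusterPlugin) grant nothing useful from inside a namespaced Role, which is the caveat discussed below.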

Where I think you may run into issues is with cluster-level resources, namely KongClusterPlugin. I’m not sure how K8S RBAC handles namespaced roles that include rules for cluster-level resources; I don’t see anything about it in the docs I reviewed, so it may fail gracefully. If it doesn’t, you should be able to remove KongClusterPlugin from its rule without issue: the controller should be able to operate normally without that access, gracefully pretending that there aren’t any KongClusterPlugins.

Thanks Travis,
Very much appreciated your support.
I will keep you posted.


Hi Travis,
I tried to deploy the ingress-controller and the proxy by changing the ClusterRole into Role and adding the namespace. The proxy container starts correctly, however the Ingress-Controller generates the following error:
Failed to list *v1.KongClusterPlugin: is forbidden: User “system:serviceaccount:kong:kong-kong” cannot list resource “kongclusterplugins” in API group “” at the cluster scope
It is expected, as the namespaced Role does not have cluster-level permissions.
Do you think that if I do not create the KongClusterPlugin CustomResourceDefinition (which is cluster-scoped), I can make it work?

Thank you again for your support.


With or without KongClusterPlugin in the role? If with, you’re probably blocked on that issue, and we’d need to address it in the controller code for you to proceed.

Hi Travis,
This happens with and without KongClusterPlugin in the role. The role is, by the way, a namespaced Role, not a ClusterRole.
Harry created an issue based on my initial question with respect to the KongClusterPlugin CustomResourceDefinition.
Would it be possible to make the code change and deliver a patched image of the kong-ingress-controller, just for me to test?


Not officially yet. Do you have a local registry you can push images to?

Although we don’t have a pre-built image, you should be able to check out the controller code locally, apply the change suggested in andrevtg’s comment, and then build/push it like so:

export REGISTRY=&lt;your registry&gt;; export TAG=0.9.0-dev; make container; docker push &lt;your registry&gt;/kong-ingress-controller:0.9.0-dev

Sub in your registry for the example there and update your deployment to use the custom image.

That change ignores KongClusterPlugin entirely, which will work for your use case, and should suffice for testing, but it’s not what we’ll actually do in the end. We need to support both environments with cluster-wide access and those without, so we’d need to add some sort of toggle between those modes.

Hi Travis,
Thank you.
I do have a Dockerhub private registry. Will you be able to push the kong-ingress-controller:0.9.0-dev image to the DockerHub registry, so that I could take it from there?

I would really like to try out this new version of the image to see if it will fit our case.

Thank you

Sorry, to clarify: you’d need to handle the patch and custom image build yourself. We can answer any questions you have about building a custom image, but we can’t do it on your behalf.

The command sequence in my previous post should work for that. Did you have any questions about applying the patch and/or building and using the custom image?

Hi Travis,
Sorry, I think I misunderstood your previous comments.
So, the new Kong Ingress image “kong-ingress-controller:0.9.0-dev” is already available? Is that correct?
I will need your help to understand how I can build the new image based on the 0.9.0-dev tag.
What exactly should I do?

Thank you for your help

It is not. Again, we won’t be building it ourselves: you’ll need to check out a copy of the source code, apply the change, build your own image from it, and push it to a registry you control.

ClusterRole and ClusterRoleBinding creation covers the steps in a bit more detail; which of those do you have questions on?

Thanks again Travis,
Apologies for the misunderstanding.
I will build the image.
Would you please confirm the steps below:

  1. Getting the Kong Ingress Controller source code from the repository.
  2. Changing the code in the main.go file, commenting out the line below:
    //informers = append(informers, kongClusterPluginInformer)
  3. Building the new image.
    Would you have an example of a Dockerfile that builds the Kong Ingress Controller image (to make sure that I do not make mistakes)?

Thank you

Hi Travis,
Please ignore my last message.
I was able to build the image using the source code from repo.
I just commented out line 341 in the main.go file, like below:
//informers = append(informers, kongClusterPluginInformer)

Then I ran the “make container” command and generated the image, which I pushed to our private registry under a new tag.
However, when I deploy Kong with the new kong-ingress-controller image, using the manifest, I get a strange error:
“1 main.go:561] Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration). Reason: Get “https://XX.XXX.X.1:443/version?timeout=32s”: badgateway. Refer to the troubleshooting guide for more information.”
The strange thing is that when I deploy the standard version of the image, using the same manifest file on the same Kube cluster, I do not have this error.

Any idea what could be the reason for that error?


Hi Travis,
I was able to deploy the new image and it seems to work properly.
Now I do not need to create a ClusterRole and ClusterRoleBinding.
With that change, Kong for Kubernetes Enterprise fits our needs now.

Thank you for your support.
Let me know if you need more information about the configuration that I have done to make it work.