KongConsumer is unable to fetch credential from secret

level=error msg="resource processing failed: credential \"test-apikey\" failure: failed to fetch secret: Secret kong/test-apikey not found" GVK="configuration.konghq.com/v1, Kind=KongConsumer" name=my-consumer namespace=kong

The secret exists in the namespace:

k get secret test-apikey -n kong -o yaml
apiVersion: v1
data:
  key: xxxx
  kongCredType: a2V5LWF1dGg=
kind: Secret
metadata:
  creationTimestamp: "2023-08-14T09:01:46Z"
  name: test-apikey
  namespace: kong
type: Opaque
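
For reference, the kongCredType value is just the base64 encoding of key-auth, which can be checked with:

echo 'a2V5LWF1dGg=' | base64 -d   # prints: key-auth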

Here is the kongConsumer CR:

apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: my-consumer
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
username: test-username
credentials:
- test-apikey

I have verified the Role, RoleBinding, and ServiceAccount, and those all look okay.
Would appreciate it if someone could help.

I am running into the same problem: Kong (3.4, but I also tried 3.3) on Kubernetes in DB-less mode, deployed with the official Kong Helm chart, version 2.26.5. I have validated that my KongConsumer and Secret are in the same namespace, but the consumer still fails to fetch the referenced secret in the same way @utkarsh079 mentioned.
Error: resource processing failed: credential \"kong-secret-kafka-consumer\" failure: failed to fetch secret: Secret sq-platform/kong-secret-kafka-consumer not found. Meanwhile, kubectl get secret -n sq-platform kong-secret-kafka-consumer yields:

NAME                         TYPE     DATA   AGE
kong-secret-kafka-consumer   Opaque   2      16m

@utkarsh079 did you figure out the root cause and/or a solution for this problem? Otherwise, is there anybody else who might be able to help?

Hello everyone, I wanted to circle back and provide an update on the issue. After further investigation and some helpful pointers from the community, I’ve managed to identify and resolve the problem.

The root cause of the issue was related to the way I was deploying and upgrading Kong. I used the official Kong Helm Charts. While Helm does install Custom Resource Definitions (CRDs) at the initial installation of a chart, it does not automatically update them during subsequent upgrades. This is a known behavior of Helm, which requires manual intervention to update the CRDs. While not a problem with the Kong Helm Charts per se, the team is actively discussing how to improve the visibility of this issue or even resolve it.

Here’s what I missed: Some releases of the Kong Helm Chart include changes to the CRDs that must be applied for the upgrade to succeed. Because I hadn’t manually updated the CRDs after upgrading the Helm chart, Kong could not correctly process the resources, hence the errors when fetching the secrets.

The solution was to manually apply the CRD updates following the instructions in the official Kong Helm Charts repository.

For those who might encounter similar issues in the future, here’s a quick rundown of the steps I took to resolve it:

  1. Check the Kong charts upgrade guide for any CRD updates corresponding to your chart version.
  2. Manually apply the CRD updates for the release you are rolling out with kubectl apply, using the CRD manifests provided in the repo (see the sketch after this list).
  3. After upgrading the Helm chart with the updated CRDs in place, the resources recover on their own.
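
For illustration, the CRD step boiled down to something like this (the manifest path, release name, namespace, and values file are placeholders; the upgrade guide lists the exact CRDs for each chart release):

# Apply the CRD manifest shipped with the chart version you are upgrading to
# (path assumed; check the upgrade guide for your release)
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/main/charts/kong/crds/custom-resource-definitions.yaml

# Then run the Helm upgrade as usual (release name, namespace, and values file assumed)
helm repo update
helm upgrade kong kong/kong --namespace kong --version <chart-version> -f values.yaml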

I hope this helps anyone who might be struggling with similar issues. Thank you to the community for the support!

We ran into this issue as well. Has anyone come up with a solution?

Depending on your Kong Ingress Controller version (3.x and later), you’ll need to set this label on the Secret (and drop the kongCredType entry from the data section):

labels:
    konghq.com/credential: key-auth

More info here: Credential Type Labels - v3.0.x | Kong Docs
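
For example, a full Secret in the KIC 3.x format would look roughly like this (reusing the names from the first post; the key value is a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: test-apikey
  namespace: kong
  labels:
    konghq.com/credential: key-auth  # replaces the kongCredType data entry in KIC 3.x
stringData:
  key: my-api-key                    # placeholder credential value
type: Opaque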

I am also facing the same error after upgrading the ingress controller to 3.0, even though I added the label as shown below.


  labels:
    konghq.com/credential: key-auth

We see those errors in the logs as well, even after we added the labels to our secrets.

We’re also facing this issue after a Helm upgrade to chart version 2.26.0 (it still works up to 2.23.0).

We’re using the default configuration for the key-auth plugin, nothing fancy. And yes, we did update the CRDs manually after upgrading.
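
For context, our key-auth setup is essentially the stock KongPlugin, along the lines of the following sketch (resource name and namespace assumed):

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: key-auth   # assumed name; referenced via the konghq.com/plugins annotation
  namespace: kong  # assumed namespace
plugin: key-auth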

This happens on an AKS cluster with k8s version 1.27.9

Appreciate any insights you can share.