One service, multiple routes selected by header, and a different plugin configuration per route

We are using Kong in a Kubernetes environment in DB-less mode. Our setup used to work, but suddenly it no longer behaves as expected.
There is one backend exposed through a single service, and we want to apply a different plugin configuration depending on an HTTP header.
e.g.
custom-header: test -> route1 -> backend
custom-header: private -> route2 -> backend
All requests with a different custom header, or without the custom header, should end in a "no route found" error.

We have done this with a normal Deployment, a normal Service, and the following Kong resources.
Per route we created an Ingress:

- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      configuration.konghq.com: testapp-private-routing
      konghq.com/plugins: keyrock-auth-authorize
      kubernetes.io/ingress.class: kong
      kubernetes.io/tls-acme: "true"
      meta.helm.sh/release-name: test
      meta.helm.sh/release-namespace: default
    labels:
      app.kubernetes.io/instance: test
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: testapp
      app.kubernetes.io/version: 2.4.0
      helm.sh/chart: testapp-0.1.0
    name: test-testapp-private
    namespace: default
  spec:
    rules:
    - host: testapp.example.de
      http:
        paths:
        - backend:
            serviceName: test-testapp
            servicePort: 1026
          path: /
    tls:
    - hosts:
      - testapp.example.de
      secretName: testapp-tls-cert
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      configuration.konghq.com: testapp-test-routing
      konghq.com/plugins: keyrock-auth-authorize
      kubernetes.io/ingress.class: kong
      kubernetes.io/tls-acme: "true"
      meta.helm.sh/release-name: test
      meta.helm.sh/release-namespace: default
    labels:
      app.kubernetes.io/instance: test
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: testapp
      app.kubernetes.io/version: 2.4.0
      helm.sh/chart: testapp-0.1.0
    name: test-testapp-test
    namespace: default
  spec:
    rules:
    - host: testapp.example.de
      http:
        paths:
        - backend:
            serviceName: test-testapp
            servicePort: 1026
          path: /
    tls:
    - hosts:
      - testapp.example.de
      secretName: testapp-tls-cert

and one KongIngress resource per route:

- apiVersion: configuration.konghq.com/v1
  kind: KongIngress
  metadata:
    annotations:
      meta.helm.sh/release-name: test
      meta.helm.sh/release-namespace: default
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: testapp-private-routing
    namespace: default
  route:
    headers:
      custom-header:
      - private
- apiVersion: configuration.konghq.com/v1
  kind: KongIngress
  metadata:
    annotations:
      meta.helm.sh/release-name: test
      meta.helm.sh/release-namespace: default
    labels:
      app.kubernetes.io/managed-by: Helm
    name: testapp-test-routing
    namespace: default
  route:
    headers:
      custom-header:
      - test

Now a request like curl "https://testapp.example.de" gets forwarded to the backend. A request with an unlisted header value, like curl "https://testapp.example.de" -H 'custom-header: notspecified', gets forwarded as well. But only requests like curl "https://testapp.example.de" -H 'custom-header: private' and curl "https://testapp.example.de" -H 'custom-header: test' should be forwarded to the backend.
We are using kong:2.1.4 and kong-ingress-controller:1.0.
What are we doing wrong?
EDIT:
It seems like the KongIngress resource does not have any effect. Using configuration.konghq.com: does-not-exists does not produce any error in the logs, and the result is the same.

I have to use konghq.com/override instead of configuration.konghq.com; I don't know why the old annotation worked before.
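For reference, here is the relevant part of the private-route Ingress with the annotation key swapped to konghq.com/override, which is what fixed it for us (as far as I can tell, kong-ingress-controller 1.0 dropped the deprecated configuration.konghq.com key in favor of konghq.com/override; the names below just mirror the manifests above):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # was: configuration.konghq.com: testapp-private-routing
    konghq.com/override: testapp-private-routing
    konghq.com/plugins: keyrock-auth-authorize
    kubernetes.io/ingress.class: kong
    kubernetes.io/tls-acme: "true"
  name: test-testapp-private
  namespace: default
spec:
  rules:
  - host: testapp.example.de
    http:
      paths:
      - backend:
          serviceName: test-testapp
          servicePort: 1026
        path: /
```

With this annotation in place, the header match from the KongIngress (custom-header: private) is applied to the route, and requests without a matching header get Kong's "no Route matched" response instead of being forwarded.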