We are using Kong with the ingress controller in production, deployed with the Helm chart:
- Kong: 1.3
- Ingress Controller: 0.6.0
We used to deploy via https://bit.ly/k4k8s and managed to configure IP restriction. The only difference is that we used our own database (Google Cloud SQL) to store the data.
Unfortunately, this no longer works: we tried a simple whitelist IP restriction to protect some of our services, but all requests are blocked for every IP, including the ones in the whitelist group.
```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: white-list-intra
  namespace: A
config:
  whitelist:
  - SOME_IP
plugin: ip-restriction
```
Then we patched the ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "kong"
    nginx.ingress.kubernetes.io/use-regex: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt-production-issuer
    plugins.konghq.com: white-list-intra
  name: ingress-A
  namespace: default
```
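For reference, this is roughly how we have been inspecting what Kong actually received (a sketch: the namespace, deployment name, and admin port here are assumptions and depend on how the chart was installed; adjust them to your release):

```shell
# Forward the Kong admin API locally (assumes the admin API listens on 8001
# and the deployment is named "kong" in the "kong" namespace -- both are
# assumptions, check your Helm release)
kubectl -n kong port-forward deployment/kong 8001:8001 &

# List the plugins Kong has registered and look for the ip-restriction entry,
# to confirm the KongPlugin resource was actually picked up by the controller
curl -s http://localhost:8001/plugins | grep -A 4 ip-restriction
```

The plugin does show up there, which is why we suspect the blocking happens at evaluation time rather than at configuration time.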
Are we doing something wrong, or is this a bug? Could it be caused by the Google Cloud SQL database backing Kong?