Using local cluster service with Knative

I have used Knative with Istio for a couple of years, and I'm now setting up a new environment using the Kong ingress controller instead of Istio to leverage its API gateway capabilities. Everything is up and running and I can connect to Knative services through the Kong gateway; however, what's not working is the ability to call the cluster-local URI of a service (service.default.svc.cluster.local) from within the k8s cluster. When calling the local URI, for example from a curl command running on a pod in the cluster, the gateway returns an empty response from the service. My install uses the standard Helm chart running on an AWS EKS cluster with proxy protocol v2 enabled to preserve the original client IP. Curious if anyone else has encountered this and has a suggested remedy?
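For reference, this is roughly how I reproduce it from inside the cluster (the pod name and curl image are just what I happened to use; service.default.svc.cluster.local stands in for the real Knative service hostname):

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -v http://service.default.svc.cluster.local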

The host pattern will be something like this: "host": "svc-00001.namespace.80.svc". Here, 00001 is the revision number.

To get all the hosts, these are the steps I followed.

kubectl port-forward --namespace kong deployment/ingress-kong 8444:8444
curl -k https://127.0.0.1:8444/services/
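If you only want the host values rather than the full service objects, you can filter the admin API response with jq (assuming jq is installed; the Kong admin API returns services under a top-level data array):

curl -sk https://127.0.0.1:8444/services/ | jq -r '.data[].host'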

ah, ok. thx for that. The downside of that approach, though, is that in Knative new revisions/routes are created all the time, which changes the revision number in the host (svc-00001). Also, Kong does not support the Knative reference to a service:

sink:
  ref:
    apiVersion: serving.knative.dev/v1
    kind: Service
    name: svc-name
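One thing worth noting about the changing revision number: the per-revision host that Kong registers changes on every deploy, but Knative also publishes a stable cluster-local address on the Service itself, which you can read from its status (svc-name and namespace are placeholders from the example above):

kubectl get ksvc svc-name -n namespace -o jsonpath='{.status.address.url}'

That prints the revision-independent name, e.g. http://svc-name.namespace.svc.cluster.local, which in my setup is exactly the address that returns an empty response through Kong.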

I ended up using the uri option instead, but that's not ideal since I'm going outside of the cluster to call a service that's inside the cluster. I have a requirement where the service runs on a schedule within the cluster and is also accessible outside the cluster, which I manage via Kong.

sink:
  uri: kong-svc-endpoint
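For context, a sketch of what that workaround looks like in full, assuming the scheduled trigger is a PingSource and https://kong-svc-endpoint.example.com stands in for the externally routed Kong URL (both names are placeholders):

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: svc-name-schedule
spec:
  schedule: "*/10 * * * *"  # run every 10 minutes
  sink:
    uri: https://kong-svc-endpoint.example.com

Note the sink.uri must be an absolute URI, which is why the traffic ends up leaving the cluster and coming back in through Kong.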