Routing to an external name

I have an EKS cluster “cluster1” that has two services:

  1. “service1”
  2. a default backend

service1 is exposed as abc.example.com/provisioning, and cluster1 runs an Nginx ingress controller.

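For reference, the ingress on cluster1 is roughly shaped like the sketch below (the resource name and backend port are placeholders, not my actual manifest):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-for-service1          # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: abc.example.com           # cluster1 only matches this host
      http:
        paths:
          - path: /provisioning
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80          # placeholder port
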
I have another EKS cluster, which is my edge cluster. Let's call it “cluster2”. On cluster2 I have installed the Kong ingress controller, and I am configuring the Ingress as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-for-service1proxy
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  tls:
    - secretName: xyz
      hosts: 
        - api.example.com
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /provisioning
            pathType: Prefix
            backend:
              service:
                name: proxy-to-service1
                port:
                  number: 443
              

I then define my Service as follows:

apiVersion: v1
kind: Service
metadata:
  name: proxy-to-service1
  annotations:
    konghq.com/protocol: "https"              # talk to the upstream over HTTPS
    konghq.com/host-header: "abc.example.com" # Host header Kong should send upstream
    konghq.com/preserve-host: "false"         # do not pass the original request host through
  labels:
    app: proxy-to-service1
spec:
  type: ExternalName
  externalName: abc.example.com

Now all this is deployed on the edge cluster.
When I browse to https://api.example.com/provisioning/
it hits the default backend in cluster1 instead of service1.

From my limited debugging attempts, I suspect that the hostname is not being passed correctly: cluster1 expects to route requests that arrive with hostname abc.example.com and path /provisioning, but that hostname does not seem to be set properly on the proxied request.
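
One check I am considering, to confirm the Host theory, is to hit cluster1's Nginx ingress directly with different Host headers; <cluster1-lb> below is a placeholder for its load balancer address:

# Should reach service1, because the Host matches cluster1's ingress rule:
curl -k -H "Host: abc.example.com" https://<cluster1-lb>/provisioning

# Should reproduce the problem: an unrecognized Host falls through to the default backend:
curl -k -H "Host: api.example.com" https://<cluster1-lb>/provisioning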

Any help is appreciated…

All, I managed to solve this. I am posting the solution in hopes of helping anyone else who runs into the same issue.

The configuration shown here was correct; the request was actually being stonewalled by service1 in cluster1. service1 only accepted requests with abc.example.com as the host, and when the request arrived using api.example.com, that host was not recognized, so the request was pushed down to the default backend.
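
For anyone hitting the same wall: one way to deal with this kind of host mismatch (assuming you can edit the cluster1 ingress; this is only a sketch, not necessarily what fits your setup) is to let cluster1 accept the edge hostname as well, along these lines:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-for-service1          # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: abc.example.com
      http:
        paths:
          - path: /provisioning
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80          # placeholder port
    - host: api.example.com           # also accept the host forwarded from the edge cluster
      http:
        paths:
          - path: /provisioning
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80          # placeholder port

Alternatively, if the Kong-side konghq.com/host-header rewrite to abc.example.com takes effect, cluster1 already sees the host it expects; either way, the point is that the host arriving at cluster1 has to match a rule there.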