Kong Ingress controller is unable to route requests to Istio virtual services

We’re running the Istio service mesh on Kubernetes, with Kong as the API gateway and ingress controller for our cluster.
We’ve created VirtualServices and DestinationRules for our micro-services, and communication between the micro-services works as expected, except that Kong sends traffic directly to the upstream server instead of applying the VirtualService and DestinationRule.
We have a micro-service deployed in 3 different versions and have created a VirtualService to send traffic only to versions v1 and v3. This works fine when the micro-services communicate internally, but whenever we hit the service through Kong, the traffic is distributed evenly across all versions (as it would be without any traffic rule).
Also, the Kong ingress LB shows up as an unknown node, which I assume is because the LB is outside the Istio mesh. Is it possible to use Kong instead of the Istio ingress gateway, or am I missing something here?
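For context, the rule we have is shaped roughly like this (service names, subset labels, and weights here are illustrative placeholders, not our real configuration):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    # Only v1 and v3 should receive traffic; v2 is excluded.
    - destination:
        host: my-service
        subset: v1
      weight: 50
    - destination:
        host: my-service
        subset: v3
      weight: 50
```

Requests between services honor this split; requests entering via Kong do not.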

Can you explain how you have deployed Kong with Istio?
Is Kong itself part of the Istio Mesh or not?

The following blog post goes into a little detail on how Kong can be integrated with Istio:

Also, please make sure that you have the Envoy sidecar running alongside Kong.
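A quick way to check both (pod, label, and namespace names are illustrative):

```
# The Kong pod should list an istio-proxy container alongside kong
kubectl get pod -n kong -l app=kong \
  -o jsonpath='{.items[0].spec.containers[*].name}'

# The sidecar should also have received routes from Pilot
istioctl proxy-config routes <kong-pod-name>.kong
```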

We’re using Kong as our API gateway and ingress controller, installed via Helm. The Envoy sidecar container is running alongside the Kong container, and we can see the routes using istioctl.
The bookinfo application is also working fine with our Kong + istio setup.

Traffic rules for all the bookinfo services are working as expected, except for productpage, which Kong hits directly. The rest of the services receive their traffic from productpage, and productpage is part of the Istio mesh. (Since the sidecar is also running alongside Kong, Kong should be part of the Istio mesh as well, but it doesn’t seem to route according to the rules in the VirtualService and DestinationRule resources.)

@kevin.chen Have you tried running multiple versions of the productpage service (say v1 and v2) and then setting traffic rules in its VirtualService? I believe Kong would distribute the traffic evenly, irrespective of the traffic distribution rules defined in your VirtualService.

FYI, the traffic rules fail to apply only when the traffic comes from Kong; inter-communication between the other micro-services works as expected.
Please let me know if you need more details here.

Can you share the Ingress rule that you have created?

We are working on a blogpost and documentation on how to configure this.

You will have to set the following annotation on the productpage service:
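That is, the Kong ingress controller’s service-upstream annotation, applied to the Service (shown here on a minimal sketch of the productpage Service):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: productpage
  annotations:
    # Tell Kong to proxy to the Service's cluster IP instead of
    # load balancing across pod endpoints itself, so the Envoy
    # sidecar can apply Istio's routing rules.
    ingress.kubernetes.io/service-upstream: "true"
```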

After adding the annotation to the service, Kong stopped load balancing, but the Istio VirtualService rules are still not in effect. Even if we delete the Istio VirtualService, all traffic keeps going to one version only.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    configuration.konghq.com: kong-ingress-config
    kubernetes.io/ingress.class: kong
  name: kong-ingress-sms-notification-service
  namespace: hyke-stage
spec:
  rules:
  - host: stage-pvt.hykeapi.com
    http:
      paths:
      - backend:
          serviceName: sms-notification-service
          servicePort: 8080
        path: /sms-notification-service
```

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: "true"
  labels:
    chart: java-gradle-0.25
  name: sms-notification-service
  namespace: hyke-stage
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: sms-notification-service
  sessionAffinity: None
  type: ClusterIP
```

We’re running 3 different versions behind sms-notification-service, and we can see all 3 endpoints when we describe the service. But for some reason, all traffic is routed to only one version after adding the annotation.

Is there any update on this?

After adding the annotation to the service, the upstream target has the Kubernetes service name as expected (see the attached kong-upstream screenshot). But the load balancing is still not working as expected.
How can I adjust the upstream slots value using a KongIngress?
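(From the KongIngress reference I would expect something like the following to work, though I haven’t verified it; the resource name and slots value are placeholders:)

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: sms-notification-upstream
  namespace: hyke-stage
upstream:
  # Number of slots in the Kong load balancer ring
  slots: 100
```

and then reference it from the Service or Ingress via the configuration.konghq.com annotation, as with kong-ingress-config above.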

Do you have a routing policy in Istio for Kong to send traffic only to a specific version of the virtual service?

Please share more configuration on the Istio side as to how you have set this up.

Hello @hbagdi, I stepped into the Istio world recently and ran into the same issue while setting up Kong with the ingress controller and Istio: Kong’s traffic simply passes to the PassthroughCluster and then magically appears in the pods in Kiali.
I deployed Kong inside the Istio mesh and set the service-upstream annotation on the service.

After digging into Envoy’s proxy configuration, comparing it with the Istio ingress gateway’s proxy config, and even using tcpdump, I found the problem.
The problem is how Istio sets up listeners and routes based on the Host header of the HTTP request.
This is why the demo setup in https://konghq.com/blog/kong-istio-setting-service-mesh-kubernetes-kiali-observability/ succeeds: it does not preserve the Host header, so it sends the request to port 9080 with the host header productpage.default.svc:9080, which is matched by the proxy listener on port 9080, and the request succeeds.

With a Kong ingress controller deployment, however, the preserve-Host setting is triggered: when the HTTP request is sent to the port-9080 listener, the Host header is not replaced but kept from the source (for example, product.example.com). Istio’s HTTP routing on port 9080 is based on that header, and since there is no route for the host product.example.com, the request falls back to the PassthroughCluster.

If you set the host in the VirtualService to a name that is not in the service registry (product.example.com), it still won’t work: Istio only adds a route for product.example.com on the port-80 listener, not on 9080, while Kong keeps sending traffic to the service IP on port 9080 with the product.example.com Host header, so it falls back to the PassthroughCluster again.

But if we don’t preserve the Host header in the HTTP request, it causes problems in some applications. So as a workaround, I make sure any service that needs to accept external traffic listens only on port 80, and I set up a VirtualService with the external hostname without binding it to a Gateway.
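A sketch of that workaround (hostnames, ports, and names are illustrative):

```yaml
# Service that accepts external traffic, exposed on port 80 only
apiVersion: v1
kind: Service
metadata:
  name: product
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: product
---
# VirtualService keyed on the external hostname; deliberately
# not bound to any Gateway, so the sidecar's port-80 listener
# picks up the preserved Host header from Kong.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product-external
spec:
  hosts:
  - product.example.com
  http:
  - route:
    - destination:
        host: product
        port:
          number: 80
```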

We just need to figure out how to configure Istio to add a route for the external HTTP host on the right port listener for the Kong pods, the way the Istio ingress gateway does, or to force Kong to send all its traffic to port 80 (via the Kong ingress controller) whenever the service is in an Istio mesh namespace, I guess.

Hello, is there anybody who can help with this issue? I’m facing the same problem.

My Current setup:

My findings so far:

  1. Requests going from the client > Kong > my-service are not intercepted by the proxy, even though the Host header equals my-service.
  2. Requests going from Kong itself (curl from the Kong container) > service are intercepted by the proxy.

Here are the tcpdump results from both requests, respectively:

  1. Client > Kong > Service
Frame 703: 584 bytes on wire (4672 bits), 584 bytes captured (4672 bits)
Ethernet II, Src: ea:72:a9:ea:9b:e6 (ea:72:a9:ea:9b:e6), Dst: aa:ff:0f:31:aa:85 (aa:ff:0f:31:aa:85)
Internet Protocol Version 4, Src: (, Dst: 3964386632353831.dev-global-aaa.dev.svc.cluster.local (
Transmission Control Protocol, Src Port: 58448 (58448), Dst Port: http (80), Seq: 1, Ack: 1, Len: 518
Hypertext Transfer Protocol
    GET /login HTTP/1.1\r\n
    Host: dev-global-aaa.dev.svc\r\n
    Connection: keep-alive\r\n
    X-Forwarded-Proto: https\r\n
    X-Forwarded-Host: api-dev.example.com\r\n
    X-Forwarded-Port: 8443\r\n
    user-agent: curl/7.64.1\r\n
    accept: */*\r\n
    X-Consumer-ID: 864ed4cf-a28c-4bec-92b7-f0f760750ec6\r\n
    X-Consumer-Custom-ID: anonymous-user\r\n
    X-Consumer-Username: anonymous-user\r\n
    X-Anonymous-Consumer: true\r\n
    nvcountry: global\r\n
    x-nv-system-id: global\r\n
    x-nv-request-uuid: 4d869056-ffdb-4f91-8285-eaae713c88d9\r\n
    [Full request URI: http://dev-global-aaa.dev.svc/login]
    [HTTP request 1/5]
    [Response in frame: 705]
    [Next request in frame: 985]
  2. Kong > Service
Ethernet II, Src: ea:72:a9:ea:9b:e6 (ea:72:a9:ea:9b:e6), Dst: aa:ff:0f:31:aa:85 (aa:ff:0f:31:aa:85)
Internet Protocol Version 4, Src: (, Dst: (
Transmission Control Protocol, Src Port: 38098 (38098), Dst Port: cslistener (9000), Seq: 1, Ack: 1, Len: 1615
Hypertext Transfer Protocol
    GET /login HTTP/1.1\r\n
    host: dev-global-aaa.dev.svc\r\n
    user-agent: curl/7.66.0\r\n
    accept: */*\r\n
    x-forwarded-proto: http\r\n
    x-request-id: 1469023f-95ef-4955-8022-e8a7767c80de\r\n
    x-envoy-decorator-operation: dev-global-aaa.dev.svc.cluster.local:80/*\r\n
     [truncated]x-envoy-peer-metadata: ChwKDElOU1RBTkNFX0lQUxIMGgoxMC44Ni4zLjc4CrsCCgZMQUJFTFMSsAIqrQIKDQoDYXBwEgYaBGtvbmcKDAoDZW52EgUaA2RldgofCgRuYW1lEhcaFWRldi1nbG9iYWwta29uZy0xLTUtMQohChFwb2QtdGVtcGxhdGUtaGFzaBIMGgo1NGI1Y2JkZjVkCh4KB3JlbGV
    x-envoy-peer-metadata-id: sidecar~\r\n
    x-b3-traceid: e500b44194677918e2736bcf37ccdb2d\r\n
    x-b3-spanid: e2736bcf37ccdb2d\r\n
    x-b3-sampled: 0\r\n
    content-length: 0\r\n
    [Full request URI: http://dev-global-aaa.dev.svc/login]
    [HTTP request 1/3]
    [Response in frame: 65]
    [Next request in frame: 419]

My suspicion is that Kong with the service-upstream annotation sends traffic to the correct Service, but with a hostname that resolves to the correct service IP while not matching any Istio routing rule, as shown in the following figure.
