Kong Ingress Controller + Istio Service Mesh doesn't support global mTLS?

I was running through Kevin Chen’s guide, Kong Ingress Controller and Service Mesh: Setting up Ingress to Istio on Kubernetes, and found that it works great if you don’t have mTLS enabled in your service mesh, but doesn’t work at all if you do. This was a huge disappointment, as mesh-wide mTLS is one of the primary value-adds of a service mesh.

Has anyone tried this out and run into this problem? I tried both enabling mTLS just for the productpage service and enabling it mesh-wide (set global.mtls.enabled=true during Istio install), and both methods result in the error: “upstream connect error or disconnect/reset before headers. reset reason: connection termination”.
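
For the mesh-wide case, the install looked roughly like this (a sketch only; the chart path and release name assume the Istio 1.3 Helm 2 style install and may differ in your setup):

# Sketch: enable mesh-wide mTLS at install time via the Helm value mentioned above.
# Chart path and release name are assumptions based on the Istio 1.3 release layout.
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set global.mtls.enabled=true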

For reference, I’m running on AWS EKS with Kubernetes version 1.14 and Istio version 1.3. I haven’t tried a later version of Istio yet, but there haven’t been any changes to how Istio mTLS works in 1.4 or 1.5 that make me think a later version would behave differently.

It does work, but the instructions in that blog post probably don’t add up correctly.
Can you check the Envoy container logs and share the host header that Envoy is seeing?
Envoy sends a reset because it can’t figure out which service to forward the traffic to.

Once we know what the host header is being set to, we can figure out where things are going wrong on the Kong side.

Hey, thanks for the reply!

Actually, I can see the traffic hitting productpage’s Envoy sidecar. It definitely looks like the traffic is making it to the right destination, but the client sidecar (Kong’s Envoy proxy) seems to be sending a request the server Envoy doesn’t like.

[2020-04-07 23:56:54.672][31][debug][filter] [external/envoy/source/extensions/filters/listener/original_dst/original_dst.cc:18] original_dst: New connection accepted
[2020-04-07 23:56:54.672][31][debug][filter] [external/envoy/source/extensions/filters/listener/tls_inspector/tls_inspector.cc:72] tls inspector: new connection accepted
[2020-04-07 23:56:54.672][31][debug][main] [external/envoy/source/server/connection_handler_impl.cc:287] [C29] new connection
[2020-04-07 23:56:54.672][31][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:167] [C29] handshake error: 1
[2020-04-07 23:56:54.672][31][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:200] [C29] TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST
[2020-04-07 23:56:54.672][31][debug][connection] [external/envoy/source/common/network/connection_impl.cc:190] [C29] closing socket: 0
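
In case it helps anyone reproduce, a sketch of one way to capture debug-level sidecar logs like these (the pod name below is a placeholder, not the actual pod):

# Sketch: bump Envoy's log level through the sidecar's local admin endpoint, then tail the logs.
# "productpage-v1-xxxxx" is a placeholder for the real productpage pod name.
kubectl exec productpage-v1-xxxxx -c istio-proxy -- curl -s -X POST "localhost:15000/logging?level=debug"
kubectl logs -f productpage-v1-xxxxx -c istio-proxy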

As for the host header, the blog does specify that needs to be dealt with, and I have that config in place:

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: do-not-preserve-host
  namespace: default
route:
  preserve_host: false

For reference, I have mTLS globally enabled with a MeshPolicy and a DestinationRule, and istioctl says that my mTLS config is bueno.

MeshPolicy:

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}

And the Destination Rule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  labels:
    app: security
    chart: security
    heritage: Helm
    release: istio
  name: default
  namespace: istio-system
spec:
  host: '*.local'
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL

And istioctl authn tls-check:

:~/istio-bin/istio-1.3.3$ istioctl authn tls-check ingress-kong-7c955554f6-2qrj7 -n kong
HOST:PORT                                                       STATUS       SERVER     CLIENT     AUTHN POLICY     DESTINATION RULE
details.default.svc.cluster.local:9080                          OK           mTLS       mTLS       default/         default/istio-system
istio-citadel.istio-system.svc.cluster.local:8060               OK           mTLS       mTLS       default/         default/istio-system
istio-citadel.istio-system.svc.cluster.local:15014              OK           mTLS       mTLS       default/         default/istio-system
istio-galley.istio-system.svc.cluster.local:443                 OK           mTLS       mTLS       default/         default/istio-system
istio-galley.istio-system.svc.cluster.local:9901                OK           mTLS       mTLS       default/         default/istio-system
istio-galley.istio-system.svc.cluster.local:15014               OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:80          OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:443         OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:15020       OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:15029       OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:15030       OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:15031       OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:15032       OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:15443       OK           mTLS       mTLS       default/         default/istio-system
istio-ingressgateway.istio-system.svc.cluster.local:31400       OK           mTLS       mTLS       default/         default/istio-system
istio-pilot.istio-system.svc.cluster.local:8080                 OK           mTLS       mTLS       default/         default/istio-system
istio-pilot.istio-system.svc.cluster.local:15010                OK           mTLS       mTLS       default/         default/istio-system
istio-pilot.istio-system.svc.cluster.local:15011                OK           mTLS       mTLS       default/         default/istio-system
istio-pilot.istio-system.svc.cluster.local:15014                OK           mTLS       mTLS       default/         default/istio-system
istio-policy.istio-system.svc.cluster.local:9091                CONFLICT     mTLS       HTTP       default/         istio-policy/istio-system
istio-policy.istio-system.svc.cluster.local:15004               CONFLICT     mTLS       HTTP       default/         istio-policy/istio-system
istio-policy.istio-system.svc.cluster.local:15014               CONFLICT     mTLS       HTTP       default/         istio-policy/istio-system
istio-sidecar-injector.istio-system.svc.cluster.local:443       OK           mTLS       mTLS       default/         default/istio-system
istio-sidecar-injector.istio-system.svc.cluster.local:15014     OK           mTLS       mTLS       default/         default/istio-system
istio-telemetry.istio-system.svc.cluster.local:9091             CONFLICT     mTLS       HTTP       default/         istio-telemetry/istio-system
istio-telemetry.istio-system.svc.cluster.local:15004            CONFLICT     mTLS       HTTP       default/         istio-telemetry/istio-system
istio-telemetry.istio-system.svc.cluster.local:15014            CONFLICT     mTLS       HTTP       default/         istio-telemetry/istio-system
istio-telemetry.istio-system.svc.cluster.local:42422            CONFLICT     mTLS       HTTP       default/         istio-telemetry/istio-system
kong-proxy.kong.svc.cluster.local:80                            OK           mTLS       mTLS       default/         default/istio-system
kong-proxy.kong.svc.cluster.local:443                           OK           mTLS       mTLS       default/         default/istio-system
kong-validation-webhook.kong.svc.cluster.local:443              OK           mTLS       mTLS       default/         default/istio-system
kube-dns.kube-system.svc.cluster.local:53                       OK           mTLS       mTLS       default/         default/istio-system
kube-dns.kube-system.svc.cluster.local:53                       OK           mTLS       mTLS       default/         default/istio-system
kubernetes.default.svc.cluster.local:443                        CONFLICT     mTLS       HTTP       default/         api-server/istio-system
productpage.default.svc.cluster.local:9080                      OK           mTLS       mTLS       default/         default/istio-system
prometheus.istio-system.svc.cluster.local:9090                  OK           mTLS       mTLS       default/         default/istio-system
ratings.default.svc.cluster.local:9080                          OK           mTLS       mTLS       default/         default/istio-system
reviews.default.svc.cluster.local:9080                          OK           mTLS       mTLS       default/         default/istio-system

I’ve torn this Kong + Istio setup down and rebuilt it twice to make sure I didn’t do something silly, and I got the same result both times. As soon as I change the MeshPolicy to mtls: PERMISSIVE, the traffic goes through. It doesn’t make sense to me, as there are no other DestinationRules in play, it’s a fresh install, and the mTLS config looks right. But here we are.
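
(For clarity, the PERMISSIVE variant is roughly the same MeshPolicy as above, just with the mode set:)

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE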

It would be really helpful if someone else could test and confirm/disprove what I’m seeing! I’m also open to more troubleshooting suggestions.

I doubt that it’s you doing something wrong; most likely there’s a configuration gap.

Two things:

  • Can you export Kong’s configuration and paste it here? You can use decK dump once you kubectl port-forward port 8444. This will show us Kong’s configuration and let us make more intelligent guesses.
  • Could you somehow figure out what the final host header sent by Kong is?

The host header might not be an issue here, but I can’t figure out why that TLS handshake would fail. That’s not even specific to the request that Kong is making.

Having a lot of trouble getting “deck dump” to work. It does some work and then eventually spits out a 400 error.

GET /acls?size=1000 HTTP/1.1
Host: localhost:8001
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip


GET /oauth2?size=1000 HTTP/1.1
Host: localhost:8001
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip


HTTP/1.1 400 Bad Request
Connection: close
Content-Length: 220
Content-Type: text/html; charset=UTF-8
Date: Wed, 08 Apr 2020 19:55:49 GMT
Server: openresty

<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
</body>
</html>

Seems like Envoy may be getting in the way here since mTLS is enabled globally. I’m hesitant to turn off mTLS as that will change the state of the configuration we’re trying to capture.

Is there another method for capturing the configuration? Maybe kubectl exec into the pod and get it from somewhere?

kubectl port-forward pod-name 8444:8444
deck dump --tls-skip-verify --kong-addr https://localhost:8444 # generates kong.yaml

Can you try this?

Dear Diary,

Jackpot:

_format_version: "1.1"
services:
- connect_timeout: 60000
  host: productpage.default.9080.svc
  name: default.productpage.9080
  path: /
  port: 80
  protocol: http
  read_timeout: 60000
  retries: 5
  write_timeout: 60000
  routes:
  - name: default.productpage.00
    paths:
    - /
    path_handling: v0
    preserve_host: false
    protocols:
    - http
    - https
    regex_priority: 0
    strip_path: false
    https_redirect_status_code: 426
upstreams:
- name: productpage.default.9080.svc
  algorithm: round-robin
  slots: 10000
  healthchecks:
    active:
      concurrency: 10
      healthy:
        http_statuses:
        - 200
        - 302
        interval: 0
        successes: 0
      http_path: /
      https_verify_certificate: true
      type: http
      timeout: 1
      unhealthy:
        http_failures: 0
        http_statuses:
        - 429
        - 404
        - 500
        - 501
        - 502
        - 503
        - 504
        - 505
        tcp_failures: 0
        timeouts: 0
        interval: 0
    passive:
      healthy:
        http_statuses:
        - 200
        - 201
        - 202
        - 203
        - 204
        - 205
        - 206
        - 207
        - 208
        - 226
        - 300
        - 301
        - 302
        - 303
        - 304
        - 305
        - 306
        - 307
        - 308
        successes: 0
      unhealthy:
        http_failures: 0
        http_statuses:
        - 429
        - 500
        - 503
        tcp_failures: 0
        timeouts: 0
    threshold: 0
  hash_on: none
  hash_fallback: none
  hash_on_cookie_path: /
  targets:
  - target: 192.168.22.132:9080
    weight: 100

Ok, I think I’ve figured out what’s wrong here. Keep in mind this is following the instructions in the Kong Kubernetes.io blog post to a ‘T’. Take a look at these istio-proxy access logs taken from the Kong ingress gateway pod’s Envoy sidecar.

A failed request (global mTLS enabled):

[2020-04-10T18:39:11.782Z] "GET /productpage HTTP/1.1" 503 UC "-" "-" 0 95 6 - "192.168.34.238" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36" "7cddb836-15e1-4fdb-bbe4-760bec984b27" "192.168.22.132:9080" "192.168.22.132:9080" PassthroughCluster - 192.168.22.132:9080 192.168.34.238:0 - -

A “successful” request (global mTLS disabled):

[2020-04-10T19:00:54.423Z] "GET /productpage HTTP/1.1" 200 - "-" "-" 0 3769 139 138 "192.168.2.190" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36" "dd2874ae-11d1-41ab-b610-3a73ec2618f1" "192.168.22.132:9080" "192.168.22.132:9080" PassthroughCluster - 192.168.22.132:9080 192.168.2.190:0 - -

Notice how Envoy routes the traffic via the “Passthrough” cluster in both cases? Whether global mTLS is enabled or disabled, the traffic goes through Envoy’s PassthroughCluster because the destination Kong is sending the request to doesn’t match any service cluster in the Istio service mesh!

It seems like Kong is trying to load balance and determine the endpoint (or Target in Kong-speak) itself, rather than letting Envoy handle that task. Notice that the Envoy “upstream host” in the above access log is actually the endpoint for the “productpage” service. This doesn’t match any cluster in the mesh, and Kong shouldn’t be resolving endpoints in Kubernetes; it should be letting Envoy handle this.

~/istio-bin/istio-1.3.3$ kubectl get endpoints productpage
NAME          ENDPOINTS             AGE
productpage   192.168.22.132:9080   47h
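
One way to see the mismatch (a sketch; the grep just narrows the output) is to list the clusters that the Kong pod’s Envoy sidecar actually knows about:

# Sketch: the productpage Service shows up as a proper outbound cluster here,
# while the raw pod IP:port that Kong is targeting doesn't match any of them.
istioctl proxy-config clusters ingress-kong-7c955554f6-2qrj7 -n kong | grep productpage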

The net effect is that the example with global mTLS disabled accidentally works, because Envoy just passes the traffic through without applying any of its traffic management logic. With global mTLS enabled in Istio, the traffic fails outright, because Envoy is still just passing it through and not encapsulating it with mTLS. The blogged Kong solution winds up routing traffic “around” the Istio service mesh.

That blog post needs to be updated with a working config (not yet sure what that should be), or it needs to be pulled, sadly. :frowning:

Can you set upstream.host_header in the KongIngress resource and associate the KongIngress resource with the productpage k8s Service?
Please set the host header to productpage.default.svc.
I think this will make Envoy route traffic correctly.

Thanks @hbagdi, that seems to have done the trick!

For reference, these are the resource configs now, using Bookinfo’s productpage as a test app.

Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    configuration.konghq.com: do-not-preserve-host
  name: productpage
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: productpage
          servicePort: 9080
        path: /

KongIngress

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: do-not-preserve-host
  namespace: default
route:
  preserve_host: false
upstream:
  host_header: productpage.default.svc

Service

apiVersion: v1
kind: Service
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: "true"
  labels:
    app: productpage
    service: productpage
  name: productpage
  namespace: default
spec:
  clusterIP: 10.100.76.123
  ports:
  - name: http
    port: 9080
    protocol: TCP
    targetPort: 9080
  selector:
    app: productpage
  sessionAffinity: None
  type: ClusterIP

Note that it appears you don’t need to have the Service annotated with the KongIngress configuration. All that’s needed is to annotate the Ingress resource with configuration.konghq.com: <KongIngress_NAME>. If you annotate the Service but not the Ingress, it breaks. There also doesn’t seem to be any difference in how the traffic is processed if both the Service and the Ingress are annotated.

Another note: it’s important to keep the ingress.kubernetes.io/service-upstream: "true" annotation on the productpage Service. This is still needed to keep Kong from trying to select its own endpoint/target from the productpage Service. Without it, Kong selects an endpoint itself, and Envoy sees that pod’s IP as the upstream destination instead of the Service’s cluster IP (which is preferred, so Envoy can properly load balance, etc.).
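
(If the Service already exists, the annotation can also be added in place, roughly like this:)

# Sketch: add (or re-apply) the service-upstream annotation on the existing productpage Service.
kubectl annotate service productpage -n default ingress.kubernetes.io/service-upstream="true" --overwrite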

With this config in place (and everything else from the blog post deployed as specified), the traffic successfully matches to an Envoy cluster and routes through the mesh as expected. I tested this config with Istio mTLS enabled and disabled and it works in both states. :+1:
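
For anyone wanting to sanity-check the same thing from outside the cluster, a rough sketch (this assumes the kong-proxy LoadBalancer Service from the blog post and an EKS-style hostname):

# Sketch: grab the kong-proxy LoadBalancer address and request /productpage through Kong.
PROXY_HOST=$(kubectl get service kong-proxy -n kong \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s -o /dev/null -w "%{http_code}\n" http://$PROXY_HOST/productpage   # expect 200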

Thanks again for the help! I hope this is useful for anyone else trying to integrate Kong with Istio. :smiley:

@Michael_Davis, just saw this thread. Thanks for walking through with Harry and finding the gaps in my blog post. Will update accordingly to fix these gaps asap!

@kevin.chen, no problem, I’m glad we were able to get it working. Thanks for the blog post - I really appreciate the content as well as the helpful suggestions from Harry to iron out the details.

I work for Aspen Mesh, an enterprise service mesh solution that adds additional functionality and security on top of open source Istio. This type of content is very interesting to us, and it’s great to see it posted, as we’re always getting requests from customers to integrate with different cloud native solutions, particularly ingress controllers. :white_check_mark:

Glad things are working out.

We should partner and explore what we can do together. We could start with something as simple as making some content together and sharing it with our communities. How does that sound to you?

@Michael_Davis

I am running into a similar problem: if Istio’s mTLS policy is set to STRICT I get a 504 error, but if I set the policy to PERMISSIVE the traffic is served correctly.

I went through and double-checked all of my resources (KongIngress, Ingress, and Service) to ensure that they match what you posted above. Any ideas on what might be going on here?

Hi @Michael_Davis and @hbagdi

I seem to have a similar issue: most or all of my traffic through Kong ends up going through the PassthroughCluster, even after trying all of the options/annotations mentioned here :frowning:

Fun stuff… Kiali shows both ways…

And if I look at traffic distribution, the values seem to be identical for PassthroughCluster and Kong=>Service traffic.

Not sure where to go from here… :frowning:

Found my problem while browsing other issues…
I was missing the namespace in my KongIngress resource :man_facepalming:

Now all looks nice and tidy :slight_smile: