KongIngress sending traffic to HTTPS upstream over port 80

Hi,

This is my first post here, so the formatting may be a little rough; please ask me in case anything is not understandable.

Okay, so we are trying to use KongIngress in our K8S cluster and are facing a major problem: calls to an HTTPS upstream are being sent over port 80, causing "(SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number)".
I have searched a lot but could not find a solution to the problem.

We have an external server acting as a content server, configured to serve on HTTPS/443. The K8S cluster has an AWS ALB in front, so TLS is offloaded on the ALB and plain HTTP traffic reaches the Kong ingress, which in turn makes an HTTPS connection to the upstream content server. So basically the request flow is:
User → ALB → Kong → Content server

I have used the manifests below to create the external Service, the KongIngress override, and the Ingress:

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: upstream-https-proto
proxy:
  protocol: https
---
apiVersion: v1
kind: Service
metadata:
  name: mycontent
  annotations:
    configuration.konghq.com: upstream-https-proto
spec:
  type: ExternalName
  externalName: content-mydomain.com
  ports:
    - port: 443
      targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycontent
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: "/content"
        backend:
          serviceName: mycontent
          servicePort: 443

This creates the configuration below (as seen in the Kong dashboard).

The first thing that seems problematic is the port in the service object: even though the protocol is https, the port is 80.
{
  "host": "mycontent.default.svc",
  "created_at": 1577622543,
  "connect_timeout": 60000,
  "id": "b17d161c-52bc-58dc-a841-042d37e46a5b",
  "protocol": "https",
  "name": "default.mycontent.443",
  "read_timeout": 60000,
  "port": 80,
  "path": "/",
  "updated_at": 1577622543,
  "client_certificate": null,
  "tags": null,
  "write_timeout": 60000,
  "retries": 5
}

However, I found in another post that this port doesn't really matter and that what actually takes effect is the target of the upstream, which looks fine, as shown below:
{
  "next": null,
  "data": [{
    "created_at": 1577622445.955,
    "upstream": {
      "id": "905838e6-a98d-59bc-b578-337869076d54"
    },
    "id": "3265fb2c-3e25-5e8a-9c26-9ce0c31a4b75",
    "target": "content-mydomain.com:443",
    "weight": 100
  }]
}
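
For comparison, here is roughly how I would expect these two objects to map to Kong's DB-less declarative config. This is a hand-written sketch based on the JSON above (the route name is made up), not an export from our cluster:

_format_version: "1.1"
services:
- name: default.mycontent.443
  host: mycontent.default.svc
  protocol: https
  port: 443            # the generated service above shows 80 here instead
  path: /
  routes:
  - name: mycontent-content
    hosts:
    - mydomain.com
    paths:
    - /content
upstreams:
- name: mycontent.default.svc
  targets:
  - target: content-mydomain.com:443
    weight: 100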

So for some time our platform works fine and I can see the traffic in the logs of the proxy container, but after a while something strange happens: the calls start failing with a 502 status code, with the following logs in the Kong proxy:

2019/12/29 08:06:45 [error] 26#0: *5282581243 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 1.2.3.4, server: kong, request: "GET /content/testimg_thumb.png HTTP/1.1", upstream: "https://5.6.7.8:80/data/testimg_thumb.png", host: "mydomain.com", referrer: "Web Hosting, Domain Name Registration - MyDomain.com"

Note: the IP 5.6.7.8 corresponds to content-mydomain.com.

Now the interesting thing is that when I delete the ingress-kong pods, the calls start working again with the new ingress-controller and proxy containers.

I am using the versions below:
Ingress controller: 0.6.0
Kong: 1.4.0

It would be really helpful if someone could point out whether I'm making any mistakes here.

Some questions to understand your problem:

  1. Are you using DB or DB-less mode for Kong? How did you install Kong Ingress Controller?

  2. When you can see the traffic in the logs initially, do you see the requests actually succeeding from a client perspective?

  3. Can you please upgrade to Kong 1.4.2? Kong 1.4.0 has a few bugs. They are unrelated as far as I can see, but there is a remote chance that they could somehow result in this behavior.

Hi @hbagdi, thanks for replying. To answer your questions:

  1. We are using the DB-less mode of Kong and installed it using the https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/single/all-in-one-dbless.yaml manifest, with a couple of minor changes such as using port 8001 for the admin API and adding env vars to export the proxy logs to stdout. One more change is in the Service: we are not making it of type LoadBalancer; instead we have an Ingress which creates an ALB and uses the "kong-proxy" service as its backend (a rough sketch of that ALB Ingress is below this list).

  2. Yes, the requests complete successfully in the beginning, and I see log lines like the one below in the Kong proxy:

1.2.3.4 - - [29/Dec/2019:08:17:54 +0000] "GET /content/testimg_thumb.png HTTP/1.1" 200 29151 "Web Hosting, Domain Name Registration - MyDomain.com" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"

  3. I'll update the thread again after upgrading the Kong version.
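
For context, the ALB Ingress sitting in front of the kong-proxy service looks roughly like the following. This is a simplified sketch; the name, certificate ARN, and annotation values are placeholders rather than our exact manifest:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-proxy-alb
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # TLS terminates at the ALB; plain HTTP is forwarded to kong-proxy
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account>:certificate/<id>
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: kong-proxy
          servicePort: 80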

Hi,

Sorry for the delayed response here. I've upgraded Kong to 1.4.2 and did not see any difference.
It would be really helpful if someone could assist or guide us here.

Seems like an issue with the upstream service. Is it reliably hosted and can Kong Ingress Controller reach it always?

The upstream is an ExternalName service and yes, it is reachable all the time from the Kong Ingress Controller. We also see the garbled request below in the upstream's access log, which looks like a raw TLS ClientHello arriving at a plain-HTTP listener:

1.2.3.4 - - [29/Dec/2019:08:06:45 +0400] "\x16\x03\x01\x02\x00\x01\x00\x01\xFC\x03\x03xOK\x91B\x1C\xD7\x90\x95\x08\xEF\x0B\xA5(\x06\x98\xCB$\xCAuN\xB75\x07U\x11\xD9v\x82\xD9\xB5\xAC \xDC\x94\x0E\xD1!\xE1\xAA*N\xD8\xA4\xD3\xDAv\xA60J6o\xC6\xB8\x82\x1C\xDEK\x02\x12\x08\xA3P\x10(\x00>\x13\x02\x13\x03\x13\x01\xC0,\xC00\x00\x9F\xCC\xA9\xCC\xA8\xCC\xAA\xC0+\xC0/\x00\x9E\xC0$\xC0(\x00k\xC0#\xC0'\x00g\xC0" 400 150 "-" "-" "-"

The interesting thing, as I mentioned earlier, is that everything works just fine when we restart the ingress-kong pods, but the calls start failing again after some time.

Do you think there's anything wrong with our configuration? Please let me know if you need any further information to figure this out.

Could you upgrade to Controller 0.7 and test this out again please?

0.6 does periodic resets of the configuration, which could be a factor (though I doubt it can cause this).

I see a similar issue in my setup with the latest controller 0.7.0 and Kong version 1.4.2.
The port is always 80, even though I use the configuration.konghq.com/protocols and configuration.konghq.com/protocol annotations to specify the protocol, and the port is set to 443 in the Service manifest, roughly as shown below.
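
For reference, the Service looks roughly like this (the name and external hostname are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-external-api
  annotations:
    # per-service protocol annotation understood by the controller
    configuration.konghq.com/protocol: "https"
spec:
  type: ExternalName
  externalName: api.example.com
  ports:
  - port: 443
    targetPort: 443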

Hi @hbagdi,

Due to business reasons the call was moved out of Kong, so I could not test further.
I have, however, updated the controller to 0.7 and will try to simulate the call manually for testing.

@krish7919, are you facing the same problem with the actual responses, or is it just that you are seeing the "port 80" value in the Kong Admin API output?

Hi,

I am still not 100% sure here, but I have come across a case where similar behavior is observed if "preserve host" is set to true on the route. Try setting it to false if you face the same issue as me; a sketch is below.
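
A minimal sketch of that override, assuming you attach KongIngress objects via the configuration.konghq.com annotation as in the manifests above (the name here is made up):

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: no-preserve-host
route:
  preserve_host: false

Then reference it from the Ingress (or Service) with the annotation configuration.konghq.com: no-preserve-host.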