Hi,
This is my first post here, so the formatting may be a little bad; please ask me in case something is not understandable.
Okay, so we are trying to use KongIngress in our K8s cluster now, but we are facing a major problem: calls to an HTTPS upstream are being sent over port 80, causing "(SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number)".
I googled a lot but could not find a solution to the problem.
We have an external server that acts as a content server and is configured to serve on https/443. The K8s cluster has an AWS ALB in front, so HTTPS is offloaded at the ALB, and the plain HTTP traffic reaches the Kong ingress, which in turn makes an HTTPS connection to the upstream content server. So basically the request flow is:
User → ALB → Kong → Content server
I used the below manifests to create the external service and Ingress:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: upstream-https-proto
proxy:
  protocol: https
---
apiVersion: v1
kind: Service
metadata:
  name: mycontent
  annotations:
    configuration.konghq.com: upstream-https-proto
spec:
  type: ExternalName
  externalName: content-mydomain.com
  ports:
  - port: 443
    targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycontent
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: "/content"
        backend:
          serviceName: mycontent
          servicePort: 443
This creates the below configuration when I check in the Kong dashboard:
The first thing that seems problematic is the port in the service configuration: even though the protocol is https, the port is 80.
{
  "host": "mycontent.default.svc",
  "created_at": 1577622543,
  "connect_timeout": 60000,
  "id": "b17d161c-52bc-58dc-a841-042d37e46a5b",
  "protocol": "https",
  "name": "default.mycontent.443",
  "read_timeout": 60000,
  "port": 80,
  "path": "/",
  "updated_at": 1577622543,
  "client_certificate": null,
  "tags": null,
  "write_timeout": 60000,
  "retries": 5
}
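The protocol/port mismatch can be checked mechanically. A minimal sketch, assuming the service JSON above is saved to a file (only `sed` is used, so no `jq` dependency):

```shell
# Save the two fields we care about from the service object above.
cat > service.json <<'EOF'
{ "protocol": "https", "port": 80 }
EOF

# Extract protocol and port with sed.
proto=$(sed -n 's/.*"protocol": "\([a-z]*\)".*/\1/p' service.json)
port=$(sed -n 's/.*"port": \([0-9]*\).*/\1/p' service.json)

# An https service should point at 443; flag anything else.
if [ "$proto" = "https" ] && [ "$port" -ne 443 ]; then
  echo "mismatch: protocol=$proto but port=$port"
fi
# prints: mismatch: protocol=https but port=80
```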
However, I found in another post that this doesn't really matter, and what actually takes effect is the target of the upstream, which looks fine to me:
{
  "next": null,
  "data": [{
    "created_at": 1577622445.955,
    "upstream": {
      "id": "905838e6-a98d-59bc-b578-337869076d54"
    },
    "id": "3265fb2c-3e25-5e8a-9c26-9ce0c31a4b75",
    "target": "content-mydomain.com:443",
    "weight": 100
  }]
}
So for some time our platform works fine and I can see the traffic in the logs of the proxy container, but after a while something strange happens: the calls start failing with a 502 status code, with the below logs in the Kong proxy:
2019/12/29 08:06:45 [error] 26#0: *5282581243 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 1.2.3.4, server: kong, request: "GET /content/testimg_thumb.png HTTP/1.1", upstream: "https://5.6.7.8:80/data/testimg_thumb.png", host: "mydomain.com", referrer: "Web Hosting, Domain Name Registration - MyDomain.com"
Note: The IP 5.6.7.8 corresponds to content-mydomain.com
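The `upstream: "https://5.6.7.8:80/..."` part of the log matches what I suspected: Kong is speaking TLS to port 80, and the plaintext reply cannot be parsed as a TLS record. The same error can be reproduced locally; a sketch, assuming `python3` and `openssl` are installed (the port 18080 is arbitrary):

```shell
# Start a plain-HTTP server in the background to play the role of port 80.
python3 -m http.server 18080 >/dev/null 2>&1 &
SRV=$!
sleep 1

# Attempt a TLS handshake against the plaintext port; the HTTP bytes come
# back where a TLS record is expected, so OpenSSL reports the same
# "wrong version number" error seen in the Kong proxy logs.
OUT=$(openssl s_client -connect 127.0.0.1:18080 </dev/null 2>&1)
kill $SRV

echo "$OUT" | grep -o 'wrong version number'
```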
Now the interesting thing is that when I delete the ingress-kong pods, the calls start working again with the new ingress-controller and proxy containers.
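For now I have scripted that workaround. A sketch, assuming the controller runs as a Deployment named `ingress-kong` in the `kong` namespace (both names are guesses; adjust them for your cluster, e.g. via `kubectl get deploy -A | grep kong`) — a rollout restart recreates the pods the same way deleting them does, but without a gap:

```shell
# Namespace and deployment name are assumptions; change to match your cluster.
NS=kong
DEPLOY=ingress-kong

# Build the command first so it can be reviewed before running.
CMD="kubectl rollout restart deployment/$DEPLOY -n $NS"
echo "$CMD"

# Uncomment to actually restart the controller/proxy pods:
# $CMD
```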
I am using the below versions:
Ingress controller: 0.6.0
Kong: 1.4.0
It would be really helpful if someone could point out whether I'm making any mistakes here.