Hi all,
We recently moved from a Kubernetes cluster generated via kops to a cluster generated by Rancher. We installed our apps along with the ingress controller that manages them, and everything works fine. We then went to deploy Kong, but the YAMLs we have been using in our kops clusters don’t seem to work here. From the logs, Kong appears to be running properly, but the AWS NLB health checks are failing in the Rancher cluster. Below is the YAML definition for the Kong Kubernetes Service (type LoadBalancer), which creates the AWS NLB. Is there anything obvious that we’re doing wrong?
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  externalTrafficPolicy: Local
  ports:
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: ingress-kong
  type: LoadBalancer
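For context: with externalTrafficPolicy: Local, my understanding is that the NLB health-checks the service’s healthCheckNodePort, which kube-proxy serves and which only returns 200 on nodes that have a ready pod matching the selector. The allocated port can be read off the service like this (service name and namespace taken from the manifest above):

kubectl get svc kong-proxy -n kong -o jsonpath='{.spec.healthCheckNodePort}'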
I’ve tried hitting the health check endpoint on a node that has a Kong pod running on it, and this is the response:
HTTP/1.1 503 Service Unavailable
Content-Length: 88
Content-Type: application/json
Date: Thu, 09 Jul 2020 16:26:52 GMT
X-Content-Type-Options: nosniff
{
  "localEndpoints": 0,
  "service": {
    "name": "kong-proxy",
    "namespace": "kong"
  }
}
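The localEndpoints: 0 is what stands out to me. In case it helps, a quick way to cross-check whether the selector actually matches the Kong pods would be (assuming the kong namespace and app: ingress-kong label from the manifest above; the Rancher-installed pods may well be labeled differently):

kubectl get endpoints kong-proxy -n kong
kubectl get pods -n kong --show-labels -o wide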