I’m brand new to Kong. I’ve installed Kong for EKS using the Helm chart, but the ELB that is created has no instances and, as a result, doesn’t route traffic; the Kong service doesn’t record an ingress IP.
The service looks good at a glance:
% kubectl get svc --namespace kong kong-1588254443-kong-proxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-1588254443-kong-proxy LoadBalancer 172.20.80.126 aaf3ed74e244d46a2a902dc50bf0aff1-1281107532.us-east-2.elb.amazonaws.com 80:30808/TCP,443:30118/TCP 81m
But:
% kubectl get svc --namespace kong kong-1588254443-kong-proxy -o jsonpath='{.status.loadBalancer}'
map[ingress:[map[hostname:aaf3ed74e244d46a2a902dc50bf0aff1-1281107532.us-east-2.elb.amazonaws.com]]]%
I’m not even sure if it should have instances; the EKS cluster instances don’t show up as available to add to the ELB, so it’s possible that it’s (supposed to be) working some other way.
With no IP, I can’t query the proxy that way, and querying by hostname returns nothing, because the ELB has nowhere to route traffic:
% curl -i aaf3ed74e244d46a2a902dc50bf0aff1-1281107532.us-east-2.elb.amazonaws.com:80
curl: (52) Empty reply from server
I’m guessing I’ve got something set up wrong in our AWS networking, but I’m not sure what it is, and I can’t find anything about this situation in the Kong docs.
That looks right. You either have a hostname or an IP.
What determines that? Currently the docs and the output from the Helm chart have you run:
kubectl get svc --namespace kong kong-1588254443-kong-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
That returns nothing for me, which is what led me to think it was an error. Do the docs need updating? Or is this something idiosyncratic about my environment?
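For what it’s worth, since AWS load balancers expose a DNS name rather than a static IP, querying the `.hostname` field instead (my variation on the documented command) does return a value:

```shell
# AWS ELBs surface a DNS hostname, not an IP, so .ip is empty on EKS;
# query .hostname instead (service name is from my install):
kubectl get svc --namespace kong kong-1588254443-kong-proxy \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```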
There are no instances registered to the LB at all. It’s not an issue of health.
I’m not sure what security groups to check, since there are no instances, but the security group for the LB itself is pretty relaxed: 80 and 443 open to the world.
I tried reprovisioning the chart with an NLB instead of a Classic LB, but it never got an external IP. I suspect that’s a different problem, but I’d be happy to solve that instead, too.
I’ve made some progress on this. It turns out to mostly have been a stupid mistake; my subnets were missing the kubernetes.io/role/elb=1 tag. (In fairness, the AWS docs make that sound optional if you’re fine using a random public subnet from each AZ.)
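For anyone else who hits this, adding the tag is a one-liner with the AWS CLI (subnet IDs below are placeholders for your own public subnets):

```shell
# Tag the public subnets so the AWS cloud provider will place
# internet-facing load balancers in them. Replace the subnet IDs
# with your own.
aws ec2 create-tags \
  --resources subnet-aaaa1111 subnet-bbbb2222 \
  --tags Key=kubernetes.io/role/elb,Value=1
```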
That said, it’s still not quite right. When I provision Kong with an NLB, I am able to get the 404 as expected, so that’s a great improvement. But the k8s service never shows an external IP:
% kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-1588619163-kong-proxy LoadBalancer 172.20.191.41 <pending> 80:32179/TCP,443:31211/TCP 10m
If I manually collect the LB hostname from the AWS CLI, though, I can curl it:
% curl -i ad1418901fba9489eaeedafe70b04d81-403d465ea5b6dca1.elb.us-west-2.amazonaws.com
HTTP/1.1 404 Not Found
Date: Mon, 04 May 2020 19:17:19 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 1
Server: kong/2.0.3
{"message":"no Route matched with those values"}
So there’s still something awry, but obviously much less awry since I, like, did things right.
We don’t have any influence over that on our end, unfortunately; you’d need to check with AWS to see why it’s not getting populated (could possibly be some rule not allowing AWS internals to talk to the Kubernetes API, but I don’t know enough about the details of EKS to say for sure).
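One place that may be worth checking: the AWS cloud controller usually reports load balancer provisioning problems as events on the Service object, so describing the service can surface an error message even when EXTERNAL-IP just sits at `<pending>`:

```shell
# The Events section at the bottom of the output often shows why the
# cloud controller couldn't populate the load balancer status.
kubectl describe svc -n kong kong-1588619163-kong-proxy
```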