Kong for EKS: ELB has no instances

I’m brand new to Kong, which I’ve installed on EKS using the Helm chart. But the ELB that was created has no instances and, as a result, doesn’t route traffic; the Kong service never records an ingress IP.

The service looks good at a glance:

% kubectl get svc --namespace kong kong-1588254443-kong-proxy                                     
NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)                      AGE
kong-1588254443-kong-proxy   LoadBalancer   172.20.80.126   aaf3ed74e244d46a2a902dc50bf0aff1-1281107532.us-east-2.elb.amazonaws.com   80:30808/TCP,443:30118/TCP   81m

But:

% kubectl get svc --namespace kong kong-1588254443-kong-proxy -o jsonpath='{.status.loadBalancer}'
map[ingress:[map[hostname:aaf3ed74e244d46a2a902dc50bf0aff1-1281107532.us-east-2.elb.amazonaws.com]]]%      

Look ma, no IP!

And the ELB doesn’t have any instances, either:

%  aws elb describe-load-balancers --load-balancer-name aaf3ed74e244d46a2a902dc50bf0aff1 --query 'LoadBalancerDescriptions[0].Instances'
[]

I’m not even sure whether it should have instances; the EKS cluster instances don’t show up as available to add to the ELB, so it’s possible that it’s (supposed to be) working some other way.
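For what it’s worth, one way to cross-check is to compare the worker nodes’ EC2 instance IDs against what the ELB thinks is registered. A sketch using the ELB name from above (this assumes the classic `elb` API, as in the earlier command):

```shell
# List the EC2 instance IDs behind the EKS worker nodes...
kubectl get nodes -o jsonpath='{.items[*].spec.providerID}{"\n"}'

# ...and ask the classic ELB which instances (if any) are registered,
# healthy or otherwise. An empty list means registration never happened,
# as opposed to instances being registered but failing health checks.
aws elb describe-instance-health \
  --load-balancer-name aaf3ed74e244d46a2a902dc50bf0aff1
```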

With no ingress IP I can’t query the proxy that way, and curling the hostname returns nothing, presumably because the ELB has nowhere to route traffic:

% curl -i aaf3ed74e244d46a2a902dc50bf0aff1-1281107532.us-east-2.elb.amazonaws.com:80
curl: (52) Empty reply from server

I’m guessing I’ve got something set up wrong in our AWS networking, but I’m not sure what it is, and I can’t find anything about this situation in the Kong docs.

Thanks for any help!

That looks right. You either have a hostname or an IP.

Re: no instances registered in the ELB, that’s an issue specific to EKS.
I’ve fixed it several times but don’t recall the solution offhand.

That looks right. You either have a hostname or an IP.

What determines that? Currently the docs and the output from the Helm chart have you run:

kubectl get svc --namespace kong kong-1588254443-kong-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

That returns nothing for me, which is what led me to think it was an error. Do the docs need updating? Or is this something idiosyncratic about my environment?

On the original question: Any idea what I might look at to understand this better?

Your cloud provider.
AWS reports a hostname, Google reports an IP, and other providers likewise use one or the other.
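Since the docs’ example only queries `.ip`, a provider-agnostic variant would read both fields and print whichever one is set. A sketch against the service name from earlier in the thread:

```shell
# AWS populates .hostname, GCP populates .ip; this prints whichever exists.
kubectl get svc --namespace kong kong-1588254443-kong-proxy \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}'
```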

Try checking whether the LB has target instances that are registered but unhealthy. Firewalls (security groups) might be another thing to look into.

There are no instances registered to the LB at all. It’s not an issue of health.

I’m not sure what security groups to check, since there are no instances, but the security group for the LB itself is pretty relaxed: 80 and 443 open to the world.

I tried reprovisioning the chart with an NLB instead of a Classic LB, but it never got an external IP. I suspect that’s a different problem, but I’d be happy to solve that instead, too.
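For anyone following along, the NLB switch is driven by a standard service annotation on the proxy. A hedged sketch — the `proxy.annotations` values path is my assumption about the kong/kong chart, so verify it against your chart version:

```shell
# Annotate the proxy Service so the AWS cloud provider creates an NLB
# instead of a Classic ELB. Dots inside the annotation key must be
# escaped for --set-string.
helm install --generate-name kong/kong --namespace kong \
  --set-string 'proxy.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-type=nlb'
```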

Weird. @traines Any idea of what’s going on here?

I’ve made some progress on this. It turns out to mostly have been a stupid mistake; my subnets were missing the kubernetes.io/role/elb=1 tag. (In fairness, the AWS docs make that sound optional if you’re fine using a random public subnet from each AZ.)
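For anyone hitting the same thing, the tag can be applied with the AWS CLI. A sketch with placeholder subnet IDs (`subnet-aaa` and `subnet-bbb` are hypothetical; substitute your cluster’s public subnets):

```shell
# Tag the public subnets so the AWS cloud provider will consider them
# when placing internet-facing load balancers for LoadBalancer Services.
aws ec2 create-tags \
  --resources subnet-aaa subnet-bbb \
  --tags Key=kubernetes.io/role/elb,Value=1
```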

That said, it’s still not quite right. When I provision Kong with an NLB, I am able to get the 404 as expected, so that’s a great improvement. But the k8s service never shows an external IP:

% kubectl get svc -n kong
NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kong-1588619163-kong-proxy   LoadBalancer   172.20.191.41   <pending>     80:32179/TCP,443:31211/TCP   10m

If I manually collect the LB hostname from the AWS CLI, though, I can curl it:

% curl -i ad1418901fba9489eaeedafe70b04d81-403d465ea5b6dca1.elb.us-west-2.amazonaws.com   
HTTP/1.1 404 Not Found
Date: Mon, 04 May 2020 19:17:19 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 1
Server: kong/2.0.3

{"message":"no Route matched with those values"}

So there’s still something awry, but obviously much less awry since I, like, did things right.

Populating that is handled by the cloud provider: AWS’s code for integrating with Kubernetes has logic that should create the load balancer and then report its name or address back to the Kubernetes API, specifically https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#loadbalancerstatus-v1-core

We don’t have any influence over that on our end, unfortunately; you’d need to check with AWS to see why it’s not getting populated (could possibly be some rule not allowing AWS internals to talk to the Kubernetes API, but I don’t know enough about the details of EKS to say for sure).
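One place that surfaces provider-side errors without leaving kubectl is the Service’s event stream; the cloud controller usually records there why a load balancer’s status isn’t being filled in:

```shell
# Events at the bottom of the output (e.g. "Ensuring load balancer" or
# "Error syncing load balancer: ...") come from the AWS cloud provider
# and often explain a stuck <pending> EXTERNAL-IP.
kubectl describe svc --namespace kong kong-1588619163-kong-proxy
```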

They do have code to handle it, but it may be hard to track down. https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L4025-L4053 are the functions that generate that status information, but it’s unclear how up to date that code is: cloud provider code is moving out of the Kubernetes main tree, and https://github.com/kubernetes/cloud-provider-aws/issues/42 suggests that the current full codebase is only available internally within AWS engineering.