Kong with AWS Application Load Balancer

Hi,
I was trying to get an AWS ALB created through All-in-one-kong-deployment.yaml, but it is not working. Can anyone help me with this?
The snippet where I am supposed to request the ALB, i.e. the kong-proxy Service object, is as follows:

################################################
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong2
  annotations:
    # Cloud-provider specific annotations
    # GKE
    # GKE creates a L4 LB for any service of type LoadBalancer
    # TODO figure out how to enable Proxy Protocol on an L4 LB for GKE
    # AWS
    # Use NLB over ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "alb"
    # Use L4 LB so that Kong can do TLS termination
    # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    # Enable Proxy Protocol when Kong is listening for proxy-protocol
    # service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-************************************************************
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '*'
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - name: kong-proxy
      port: 80
      targetPort: 8000
      protocol: TCP
    - name: kong-proxy-ssl
      port: 443
      targetPort: 8000
      protocol: TCP
  selector:
    app: kong
##############################

Please help me with this.

Thanks,

Please elaborate on what is not working. Is the LoadBalancer not being provisioned, or are you having trouble forwarding traffic from the LoadBalancer to Kong?

Hi Harry,

I am able to create a Classic Load Balancer and a Network Load Balancer, but when I opt for an Application Load Balancer, only a CLB is created in the AWS account, not an ALB.
service.beta.kubernetes.io/aws-load-balancer-type: "alb"

I believe the line above is where we specify whether a CLB, NLB, or ALB should be created. Please help me with this.

This is an EKS specific issue. Please reach out to AWS for more help on this.
The following gist might somewhat help:

It doesn't look like EKS supports ALB without using the ALB ingress controller:
https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html

ALB ingress:
https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
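
For reference, here is a minimal sketch of the Service annotation that the in-tree AWS cloud provider does recognize (to the best of my knowledge; verify against your EKS version). "alb" is not an accepted value, so an unrecognized value falls back to a Classic ELB; only "nlb" changes the provisioned load balancer type, and an ALB has to come from the ALB ingress controller instead.

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  annotations:
    # "nlb" provisions a Network Load Balancer; omitting the annotation or
    # using an unrecognized value such as "alb" results in a Classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer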

Hi,

How did you fix this issue? Did you find any workaround?

We have been using the alb-ingress-controller pointing to the kong-proxy backend for a long while with no issues at all. As Satwant mentioned, this seems to be the only solution that is currently available.

Hi, my aim is to deploy Kong Ingress Controller in our K8s cluster.

Do you think it's feasible to have an AWS ALB (created by the AWS ALB Ingress Controller) in front of the kong-proxy service?

I mean, I provisioned such resources, and so far I do not see any issue, even though I do not use an AWS NLB.

However, looking at the Kong docs I read:

What do you suggest?

Thanks in advance.

Hey!

I'm also stuck here. You can't really use an Application Load Balancer. Note that in the context of K8s, "alb" can mean either Amazon Load Balancer or Application Load Balancer.

My issue is that I can forward the TLS-terminated connection to Kong, but Kong assumes it is at the edge of the network, so the plain connection is flagged as http rather than https.

I have read that Kong doesn't support proxy protocol v2. Is that true?

If it doesn’t then how are we supposed to use it with an NLB?
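
For what it's worth, one approach that is sometimes used for the scheme problem (a sketch under assumptions, not something confirmed in this thread): tell Kong to trust the X-Forwarded-Proto header set by the ALB, by listing the load balancer's source addresses in trusted_ips. With the Kong Helm chart's env mapping this could look roughly like:

# values.yaml excerpt (the CIDR is a placeholder; restrict it to your ALB subnets)
env:
  # Trust X-Forwarded-* headers from these addresses so Kong reports the
  # original client scheme (https) instead of the plain connection it
  # receives from the load balancer.
  trusted_ips: "10.0.0.0/8"
  real_ip_header: "X-Forwarded-For"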

So far I have managed to use an AWS ALB correctly. No issues yet; the behaviour is solid. I also attached a WAF in front of it.

Hey all,
We are stuck at the same step. We tried to use the alb-ingress-controller on top of the Kong ingress controller with ALB path routing, but it does not resolve some of Kong's static content, most probably due to the path-based routing.

Can someone please shed some light on this matter?

CristianPupazan, ltartarin90: it would be really great if you could share any references you have, so we can go through them.

Hi @Chamin_Wickramarathn @CristianPupazan, could you please explain how I can point the Kong ingress controller at an ALB, or share some reference links that would be helpful?
Thanks

Hi, sorry for the late reply.

1. Deploy the alb-ingress-controller
Instructions to install the alb-ingress-controller can be found here (I used Helm): https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
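
A hypothetical values file for that chart (the cluster name is a placeholder, and this assumes the IAM role and service account from the AWS guide already exist):

# values for the eks/aws-load-balancer-controller Helm chart
clusterName: my-eks-cluster            # replace with your EKS cluster name
serviceAccount:
  create: false
  name: aws-load-balancer-controller  # pre-created service account with the IRSA role attached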

2. Deploy the kong-proxy

Deploy Kong without creating a load balancer (use the NodePort service type); a sketch of the relevant chart values follows below. I used Helm again: https://github.com/Kong/charts
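
A hypothetical values.yaml excerpt for the Kong chart (the value names follow the chart's proxy section as I understand it, and the node port numbers are made up; adjust to your environment):

proxy:
  # Expose the proxy as a NodePort service instead of provisioning a cloud
  # load balancer; the ALB created by the ingress below will target these ports.
  type: NodePort
  http:
    enabled: true
    nodePort: 32080
  tls:
    enabled: true
    nodePort: 32443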

3. Create your ingress
Then create your ingress pointing to the kong proxy service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: default
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-port: "80"
    alb.ingress.kubernetes.io/certificate-arn: "certificate arn here"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
spec:
  rules:
    - host: your_host_here
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kong-proxy 
                port:
                  number: 80

Note: the above ingress creates an ALB that does a few extra things, such as SSL termination, redirecting HTTP to HTTPS, and setting a stricter ssl-policy.

Now you can integrate WAF with your newly created ALB, etc.
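
For example, with the newer AWS Load Balancer Controller a WAFv2 web ACL can be attached via an Ingress annotation (a hedged sketch; the ARN is a placeholder):

alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:<region>:<account-id>:regional/webacl/<name>/<id>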

Hope this helps.


I have roughly the same setup as Cristian. I deployed ALB + EKS with Terraform; the only catch is that I need to predefine the NodePort number in the automation and make sure the matching port number is specified in the Kong Helm chart values.

Hi Cristian, with the approach you've described (we have exactly the same in our infra), were you able to use Kong's CRDs for the infrastructure configuration? I know they provide the decK solution, but I wanted to use K8s specs to define the whole routing etc. I was able to define all the necessary plugins with an IaC approach; however, since routes are defined in Kong via Ingress resources, did you manage to define them with your setup?

Hi Jan,
We are not using the ingress for this setup (we use the ingress controller somewhere else for internal endpoints, but without the ALB). We have had this setup running for a long time, I believe since before the ingress controller was released, and haven't got around to changing it yet. We use Helm to deploy our services, and each service has a Helm hook that registers its routes and plugins by talking to the Kong Admin API directly.

Can you please provide an example of this hook? Or is this an in-house solution?

Sure, we use Helm charts to deploy our microservices. More about Helm hooks here: https://helm.sh/docs/topics/charts_hooks/

The hook to add to your helm chart will look something like this:

update-kong-hook.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}-{{.Values.version}}"
  labels:
    heritage: {{.Release.Service | quote }}
    release: {{.Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        heritage: {{.Release.Service | quote }}
        release: {{.Release.Name | quote }}
        chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
        - name: register-service-job
          image: "internal-litle-python-helper-project"
          command: ["python"]
          args:
            - "/opt/register_service.py"
            - "--host"
            - {{.Values.kong_admin_url}}
            - "--data"
            - '{"name": "my-service", "url": "http://service-url"}'
        - name: register-route-job
          image: "internal-litle-python-helper-project"
          command: ["python"]
          args:
            - "/opt/register_route.py"
            - "--host"
            - {{.Values.kong_admin_url}}
            - "--data"
            - '{"name": "my-service", "paths": "/my-path", "strip_path": "false", "service_name": "my-service"}'

In the "internal-litle-python-helper-project" image we put a bunch of Python scripts that make it easier to interact with the Kong Admin API. For example:

register_service.py

#!/usr/bin/env python
import argparse
import json
import requests


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--host', default=False, dest='kong_host', help='Kong admin host',
                        required=True)
    parser.add_argument('--data', default=False, dest='data',
                        help='Json payload - example: \'{"name": "test", "url": "http://foo"}\' ',
                        required=True)
    return vars(parser.parse_args())


if __name__ == "__main__":
    args = parse_args()
    print("Running with args: %s" % args)
    kong_admin = args['kong_host'] + "/services"
    data = json.loads(args['data'])
    print("REGISTERING SERVICE: %s" % data["name"])
    # Create the service; a 409 means it already exists, so update it instead.
    r = requests.post(kong_admin,
                      data={'name': data["name"], 'url': data["url"]})
    if r.status_code == 409:
        response_patch = requests.patch(kong_admin + "/" + data["name"],
                                        data={'url': data["url"]})
        print("UPDATED: %s, %s" % (response_patch.status_code, response_patch.reason))
    elif r.status_code == 201:
        print("CREATED: %s, %s" % (r.status_code, r.reason))
    else:
        print("ERROR registering service: %s" % r)
        exit(1)

And similar for registering routes, plugins, etc.

I am sure there is a more elegant way to do this, but I haven't got around to looking into it.

Hope this helps.

Hi!

Hope it's not too late to answer this!

I recently worked on a new implementation using the OAuth 2.0 plugin and migrated from an NLB to an ALB to make use of WAF policies. The OAuth endpoints then started failing because requests arrived at Kong as plain HTTP.

After some digging, we found out that the issue was that the backend protocol on the target group was set to HTTP by default, so we added this annotation to the Ingress for Kong:

alb.ingress.kubernetes.io/backend-protocol: HTTPS

Now the target group for NodePort 443 is correctly set to HTTPS and the OAuth endpoints can be reached.
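
In context, the relevant annotations on the Kong Ingress would look roughly like this (the healthcheck-protocol line is my assumption, added so the target group health check speaks HTTPS as well; the backend-protocol line is the fix from this post):

alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS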

Hi @CristianPupazan,
I also deployed ALB + Kong ingress + EKS like you, but I still have a problem with the NodePort health check: the target group health check currently fails with a 404.
Can I ask how yours is configured?
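
One likely cause (a hedged guess, not a confirmed answer): Kong's proxy answers 404 ("no Route matched") for any path that has no route configured, so an ALB health check against / is reported unhealthy. Either point the health check at a path Kong actually routes, or accept the 404 as healthy, for example:

alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/success-codes: "200,404"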