Access from outside the k8s cluster with kong ingress controller without NodePort

Is it possible to establish an ingress rule via kong-ingress-controller without a separate nginx-ingress-controller in the k8s cluster and without NodePorts?

Current state:
The k8s cluster is established via Rancher.
Rancher automatically deploys an nginx-ingress-controller. If I deploy a service and the related ingress rule, the service is accessible from outside the k8s cluster without NodePorts, just via the IP address plus the defined path as URI.

Question:
Now I expected the behaviour of the kong-ingress-controller to be the same as that of the nginx-ingress-controller. So I uninstalled the nginx controller and wanted to use just the kong controller. But if I deploy the same ingress rule, the service is not accessible from outside, only via IP:NodePort/path.
So, first question: Is it possible to use only the kong-ingress-controller without any other controller?
Second question: How can I manage it so that the behaviour is the same as described in “current state”?


Is it possible to use only the kong-ingress-controller without any other controller?

Yes.

How can I manage it so that the behaviour is the same as described in “current state”?

Kong Ingress Controller is deployed with a service of type LoadBalancer. Figure out the IP address of the load balancer for the kong-proxy service in the kong namespace and use that address for your requests.
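For illustration, such a kong-proxy service of type LoadBalancer could look roughly like this (a sketch only; the names, namespace, and port numbers are assumptions based on a typical Kong installation, not your exact manifest):

```yaml
# Hypothetical sketch of a kong-proxy Service of type LoadBalancer.
# Port numbers assume Kong's default proxy listeners (8000/8443).
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
spec:
  type: LoadBalancer
  ports:
  - name: kong-proxy
    port: 80
    targetPort: 8000
    protocol: TCP
  - name: kong-proxy-ssl
    port: 443
    targetPort: 8443
    protocol: TCP
  selector:
    app: kong
```

On a cloud provider the external IP of this service is what you would use to reach your ingress rules.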

If you would like more help, please specify the versions of Kong and Kong Ingress Controller you are running and how you installed them; pasting the deployments in a gist and putting a link here would be helpful too.

Hope this helps!


Thank you for your answer.

Unfortunately my cloud provider does not provide a load balancer service, so the kong-proxy external IP is pending all the time. Under this link you can find the kong-ingress-controller deployment YAML.
Here you can find the kong-proxy service YAML and here the kong-admin YAML.

As described here, I am able to make services accessible via the nginx-ingress-controller installed by Rancher. But at the moment I cannot explain why it works, because there is no service or anything related; just ports 80/443 are declared in the deployment YAML.

It would be nice if this behaviour could also be achieved with the kong-ingress-controller. Hope you can help me 🙂

edit:
I tried to add the lines:
ports:
- containerPort: 80
  hostPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  hostPort: 443
  name: https
  protocol: TCP
to the kong-ingress-controller.yaml, and I also tried to set the hostNetwork flag to true, but it does not work. As I understand ingress, I can deploy the kong-proxy and kong-admin services as ClusterIP, accessible only internally, and the ingress controller establishes the route from outside the cluster to an internal service via an ingress rule. So there is no need for external load balancers, IPs, etc., right? After all, the ingress controller is a load balancer in some way.

@hbagdi As I understand it, there is no type LoadBalancer because the hosting provider does not offer this type. A LoadBalancer service only works if you run Kubernetes on a cloud provider such as Google Cloud or AWS, not on a bare-metal host.

Any idea how this works without the LoadBalancer service type?

The nginx-controller deployment works because it is deployed as a DaemonSet.
Meaning, the controller runs on ports 80 and 443 of each node of your k8s cluster.

I don’t see any reason why this deployment cannot work with Kong Ingress Controller.

While I don’t have the YAML file to solve this, I would recommend the following:

I’m assuming that you are using Kong with a database (if you are running Kong without a database, this would be much simpler):

Kong Ingress Controller has two deployments: one for the controller and one for the Kong pods which actually proxy the traffic.
Change the deployment of the kong-proxy pods to a DaemonSet, bind them to ports 80/443 of the host, and that should be it.
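A minimal sketch of what that change could look like (a hypothetical DaemonSet for the proxy pods; the image, labels, and port numbers are illustrative assumptions, not the official manifest):

```yaml
# Hypothetical DaemonSet sketch for the kong-proxy pods.
# One proxy pod runs on every node, bound to host ports 80/443.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kong-proxy
  namespace: kong
spec:
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        app: kong
    spec:
      containers:
      - name: kong-proxy
        image: kong:1.0        # illustrative version
        ports:
        - containerPort: 8000
          hostPort: 80         # expose proxy HTTP port on the host
          protocol: TCP
        - containerPort: 8443
          hostPort: 443        # expose proxy HTTPS port on the host
          protocol: TCP
```

With this, every node's IP answers on ports 80/443, much like the Rancher-deployed nginx controller does.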

Let me know if you would like me to elaborate further.

If I am right, it should not matter whether we try to get access with a DaemonSet or a Deployment. With a Deployment I just have to figure out the right IP address of the node where the controller is running.
So, I downloaded the official YAML files for the Kong + ingress controller deployment and modified the ports and the hostNetwork value. As described here, the wait-for-db job then cannot find the database.
These are the errors:
01/07/2019 09:28:30 Error: [PostgreSQL error] failed to retrieve server_version_num: host or service not provided, or not known
01/07/2019 09:28:30 Run with --v (verbose) or --vv (debug) for more details
How can I solve this issue?

As per the k8s docs, it seems like you will need to set the dnsPolicy to ClusterFirstWithHostNet explicitly.

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
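For reference, in a pod spec that enables hostNetwork, that setting would look something like this (a fragment, not a full manifest):

```yaml
# Pod spec fragment: with hostNetwork enabled, dnsPolicy must be set to
# ClusterFirstWithHostNet so the pod can still resolve in-cluster
# service names (e.g. the Postgres service the wait-for-db job needs).
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
```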

The deployment now works again, but I still have no access from outside the cluster …
Can you have a look at that YAML?
My current solution is to get access from outside via the nginx ingress controller and have an ingress to the kong-proxy service of type ClusterIP. But it would be nice to know how it works with the kong-ingress-controller.

When you say you can’t access it outside the cluster, what do you mean exactly?

I mean that I still cannot access the cluster from the internet … maybe the containerized k8s cluster from Rancher is the reason?

# Ingress routing
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apigateway
  namespace: apigateway
  annotations:
    kubernetes.io/ingress.class: "nginx-direct"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - apigateway.mycompany.net
    secretName: tls-wildcard-cert
  rules:
  - host: apigateway.mycompany.net
    http:
      paths:
      - path: /
        backend:
          serviceName: kong-proxy
          servicePort: 443

I have this ingress and it works for me without type LoadBalancer.

The service looks like this:

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: voy-apigateway
  annotations:
    # Cloud-provider specific annotations
    # GKE
    # GKE creates a L4 LB for any service of type LoadBalancer
    # TODO figure out how to enable Proxy Protocol on an L4 LB for GKE
    # AWS
    # Use NLB over ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # Use L4 LB so that Kong can do TLS termination
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    # Enable Proxy Protocol when Kong is listening for proxy-protocol
    #service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  labels:
    app: kong-proxy
spec:
  type: ClusterIP
  ports:
  - name: kong-proxy
    port: 80
    targetPort: 8000
    protocol: TCP
  - name: kong-proxy-ssl
    port: 443
    targetPort: 8443
    protocol: TCP
  selector:
    app: kong

Does this help?

