Expose admin/proxy from ingress k8s service (AWS)

Hi, I'm trying to route kong-admin and kong-proxy through an Ingress. This worked with our previous config (Kong 1.5, KIC controller.bintray.io/kong-ingress-controller:0.5.0). We're trying to upgrade to the current version, but it's been quite hard to find any example or helpful guide. With the config below (Kong 2.4, KIC kong/kubernetes-ingress-controller:1.3) it doesn't work and we get:

request: https://elb-dns/proxy
response: Bad Request

request: https://elb-dns/admin
response: {"message":"Not found"}

k8s Ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kong-ingress-1
  namespace: kong
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
    - http:
        paths:
          - path: /admin
            backend:
              serviceName: kong-ingress-controller
              servicePort: 443
          - path: /proxy
            backend:
              serviceName: kong-proxy
              servicePort: 443

k8s yaml definitions:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west1xxxx
  name: kong-proxy
  namespace: kong
spec:
  ports:
    - name: kong-proxy
      port: 80
      targetPort: 8000
      protocol: TCP
    - name: kong-proxy-ssl
      port: 443
      targetPort: 8000
      protocol: TCP
  selector:
    app: kong
  externalTrafficPolicy: Local
  type: LoadBalancer


apiVersion: v1
kind: Service
metadata:
  name: kong-ingress-controller
  namespace: kong
spec:
  type: ClusterIP
  ports:
    - name: kong-admin
      port: 80
      targetPort: 8001
      protocol: TCP
    - name: kong-admin-ssl
      port: 443
      targetPort: 8001
      protocol: TCP
  selector:
    app: ingress-kong


apiVersion: v1
kind: Service
metadata:
  name: kong-validation-webhook
  namespace: kong
spec:
  ports:
    - name: webhook
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app: ingress-kong


apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ingress-kong
  name: ingress-kong
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-kong
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
        prometheus.io/port: "8100"
        prometheus.io/scrape: "true"
        traffic.sidecar.istio.io/includeInboundPorts: ""
      labels:
        app: ingress-kong
    spec:
      serviceAccountName: kong-serviceaccount
      containers:
        - env:
            - name: KONG_DATABASE
              value: postgres
            - name: KONG_PG_HOST
              value: postgres.default.svc.cluster.local
            - name: KONG_PG_DATABASE
              value: kong
            - name: KONG_PG_USER
              value: kong
            - name: KONG_PG_PASSWORD
              value: kong
            - name: KONG_ADMIN_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_ADMIN_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ADMIN_LISTEN
              value: 0.0.0.0:8001, 0.0.0.0:8444 ssl
            - name: KONG_PROXY_LISTEN
              value: 'off'
            - name: KONG_TRUSTED_IPS
              value: 0.0.0.0/0,::/0
            - name: KONG_REAL_IP_HEADER
              value: X-Forwarded-For
            - name: KONG_REAL_IP_RECURSIVE
              value: 'on'
            - name: KONG_STATUS_LISTEN
              value: 0.0.0.0:8100
          image: kong:2.4
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - kong quit
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /status
              port: 8100
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: admin-api
          ports:
            - name: kong-admin
              containerPort: 8001
            - name: metrics
              containerPort: 8100
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /status
              port: 8100
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
        - env:
            - name: CONTROLLER_KONG_ADMIN_URL
              value: https://127.0.0.1:8444
            - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
              value: "true"
            - name: CONTROLLER_PUBLISH_SERVICE
              value: kong/kong-proxy
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: kong/kubernetes-ingress-controller:1.3
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: ingress-controller
          ports:
            - containerPort: 8080
              name: webhook
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1


apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        name: kong
        app: kong
    spec:
      containers:
        - name: kong-proxy
          image: kong:2.4
          env:
            - name: KONG_DATABASE
              value: postgres
            - name: KONG_PG_HOST
              value: postgres.default.svc.cluster.local
            - name: KONG_PG_DATABASE
              value: kong
            - name: KONG_PG_USER
              value: kong
            - name: KONG_PG_PASSWORD
              value: kong
            - name: KONG_PROXY_ACCESS_LOG
              value: "/dev/stdout"
            - name: KONG_PROXY_ERROR_LOG
              value: "/dev/stderr"
            - name: KONG_ADMIN_LISTEN
              value: 'off'
            - name: KONG_TRUSTED_IPS
              value: 0.0.0.0/0,::/0
            - name: KONG_REAL_IP_HEADER
              value: X-Forwarded-For
            - name: KONG_REAL_IP_RECURSIVE
              value: 'on'
          ports:
            - name: proxy
              containerPort: 8000
              protocol: TCP
            - name: proxy-ssl
              containerPort: 8443
              protocol: TCP
          lifecycle:
            preStop:
              exec:
                command: [ "/bin/sh", "-c", "kong quit" ]

Can someone give me any clue about this?
Do I have to change the YAML definitions to a different format, or are they fine like this?
I'm just really frustrated at not finding any solution.


I have the same issue: I want to access the Kong admin URLs from another namespace.

Setup:
kubectl kustomize ams-analysis-api/src/main/kubernetes | kubectl apply -f -

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml

patchesStrategicMerge:
  - kong-ingress-env.yaml

kong-ingress-env.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-kong
  namespace: kong
spec:
  template:
    spec:
      containers:
        - env:
            - name: KONG_ADMIN_LISTEN
              value: 127.0.0.1:8001, 127.0.0.1:8444 ssl
            - name: KONG_PORT_MAPS
              value: 80:8000, 443:8443, 8001:8001, 8444:8444
          name: proxy
          ports:
            - containerPort: 8444
              name: admin-ssl
              protocol: TCP
            - containerPort: 8001
              name: admin
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
spec:
  ports:
    - name: admin
      port: 8001
      protocol: TCP
      targetPort: 8001
    - name: admin-ssl
      port: 8444
      protocol: TCP
      targetPort: 8444
  selector:
    app: ingress-kong
  type: LoadBalancer

Then, per the K8s docs, I created a NodePort Service in the kong namespace:

{
    "kind" : "Service",
    "apiVersion" : "v1",
    "metadata" : {
        "name" : "kong-admin-api",
        "labels" : {
            "app" : "ingress-kong"
        },
        "namespace" : "kong"
    },
    "spec" : {
        "type" : "NodePort",
        "selector" : {
            "app" : "ingress-kong"
        },
        "ports" : [
            {
                "port" : 8001,
                "targetPort" : 8001,
                "name" : "admin",
                "protocol" : "TCP",
                "nodePort" : 31001
            },
            {
                "port" : 8444,
                "targetPort" : 8444,
                "name" : "admin-ssl",
                "protocol" : "TCP",
                "nodePort" : 31444
            }
        ]
    }
}

I should be able to access this via an ExternalName:

{
    "apiVersion" : "v1",
    "kind" : "Service",
    "metadata" : {
        "name" : "kong-ns-proxy",
        "namespace" : "ams-kubernetes"
    },
    "spec" : {
        "type" : "ExternalName",
        "externalName" : "kong-admin-api.kong.svc.cluster.local",
        "name" : "kong-ns-proxy"
    }
}

But the HTTP request just hangs.

I know that the admin API is up and enabled, because if I port-forward port 8001 I can access the Admin API:

kubectl port-forward -n kong svc/kong-admin-api 8001:8001 &

Changing KONG_ADMIN_LISTEN to 0.0.0.0:8001 solved my issue indirectly, but now the port is exposed outside the cluster.
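
For reference, that change is just the listen value in the env patch above (only the plain-HTTP listen is shown here, as in the description):

- name: KONG_ADMIN_LISTEN
  value: 0.0.0.0:8001   # bind the admin API on all Pod interfaces instead of localhost only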

What needs to be configured to allow a Service or NodePort in the kong namespace to connect, and thus allow an ExternalName in another namespace to access the API?

The ExternalName shouldn’t be necessary. The namespaced Service record should be all you need for access inside the cluster, and an Ingress in the same namespace is all you need if you want to allow external access. You can route from other namespaces to Service hostnames fine; it’s only Ingress resources that must be in the same namespace as the Service they’re forwarding to. The latter is an intentional security restriction.
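
For illustration only, here is a rough sketch of the in-namespace Ingress approach (same apiVersion as the Ingress earlier in the thread; the Ingress name and path are placeholders, and kong-admin-api is the Service name from the post above):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  # Sketch: an Ingress must live in the same namespace as the Service it forwards to.
  name: kong-admin-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
    - http:
        paths:
          - path: /admin
            backend:
              serviceName: kong-admin-api
              servicePort: 8001

For purely in-cluster access from another namespace, clients can use the Service DNS name directly, e.g. kong-admin-api.kong.svc.cluster.local:8001, with no ExternalName in between.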

By default, the admin listen in our standard Deployments is internal-only because those Deployments include an ingress controller container in the same Pod, and that container can access listens over localhost. The expectation is that you won’t normally need direct access to the admin API since you’ll handle configuration via Kubernetes resources instead, with the ingress controller translating them into Kong configuration and interacting with the admin API on your behalf.
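
As a concrete (and arbitrary) sketch of that declarative flow: a plugin can be declared as a Kubernetes resource, and the controller applies it through the admin API for you. The plugin choice and limits below are just examples:

# Sketch: the controller watches this resource and pushes the equivalent
# configuration to Kong's admin API over localhost inside the Pod.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: example-rate-limit
  namespace: kong
plugin: rate-limiting
config:
  minute: 5
  policy: local

You then attach it to an Ingress or Service with the annotation konghq.com/plugins: example-rate-limit, without ever calling the admin API yourself.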

You can use a ClusterIP and listen on 0.0.0.0 if you wish to make the admin Service available inside the cluster only, though that's a bit unusual; there are some configurations where the ingress controller runs in its own Pod and needs that, but we usually recommend running the controller in the same Pod as the proxy container.
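
If you do go that cluster-internal-only route, a minimal sketch (reusing names from the posts above, with the admin listen opened to 0.0.0.0 as described) could look like:

# Sketch: reachable from any Pod in the cluster, but not exposed externally,
# because the Service type is ClusterIP rather than LoadBalancer or NodePort.
apiVersion: v1
kind: Service
metadata:
  name: kong-admin-api
  namespace: kong
spec:
  type: ClusterIP
  selector:
    app: ingress-kong
  ports:
    - name: admin
      port: 8001
      targetPort: 8001
      protocol: TCP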

Most other use cases that expose the admin API outside the Pod do expose it outside the cluster also, to allow external user machines to connect to it. Using the admin GUI/Kong Manager is the most common reason for this, and it requires external exposure of the admin API. If you’re using Enterprise, we recommend RBAC to limit access, but if not, there are other options.
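
One non-Enterprise example of those other options (a sketch only; the CIDR is a placeholder for your own trusted range) is to restrict which source addresses the load balancer accepts:

apiVersion: v1
kind: Service
metadata:
  name: kong-admin-external
  namespace: kong
spec:
  type: LoadBalancer
  # Only clients from this range can reach the admin API through the load balancer.
  loadBalancerSourceRanges:
    - 203.0.113.0/24
  selector:
    app: ingress-kong
  ports:
    - name: admin
      port: 8001
      targetPort: 8001
      protocol: TCP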