Expose admin/proxy from ingress k8s service (AWS)

Hi, I am trying to route kong-admin and kong-proxy through an Ingress. This worked with my previous setup (Kong 1.5, KIC controller.bintray.io/kong-ingress-controller:0.5.0). I'm trying to upgrade to the current versions, but it has been quite hard to find any example or helpful guide. With the config below (Kong 2.4, KIC kong/kubernetes-ingress-controller:1.3) it doesn't work, and I get:

request: https://elb-dns/proxy
response: Bad Request

request: https://elb-dns/admin
response: {"message":"Not found"}

k8s ingress service:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kong-ingress-1
  namespace: kong
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - path: /admin
        backend:
          serviceName: kong-ingress-controller
          servicePort: 443
      - path: /proxy
        backend:
          serviceName: kong-proxy
          servicePort: 443
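As far as I can tell from the KIC docs, the Kong controller does not honor the generic ingress.kubernetes.io/rewrite-target annotation; path stripping is configured with Kong's own konghq.com/strip-path annotation instead. A sketch of what I assume the equivalent metadata would look like (same resource names as above):

```yaml
metadata:
  name: kong-ingress-1
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
    # Kong-specific replacement for rewrite-target: strip the matched
    # /admin or /proxy prefix before proxying to the backend service
    konghq.com/strip-path: "true"
```

Without the prefix being stripped, requests like /admin would be forwarded to the backend with the /admin path intact, which could explain the 404-style responses.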

k8s yaml definitions:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west1xxxx
  name: kong-proxy
  namespace: kong
spec:
  ports:
  - name: kong-proxy
    port: 80
    targetPort: 8000
    protocol: TCP
  - name: kong-proxy-ssl
    port: 443
    targetPort: 8000
    protocol: TCP
  selector:
    app: kong
  externalTrafficPolicy: Local
  type: LoadBalancer


apiVersion: v1
kind: Service
metadata:
  name: kong-ingress-controller
  namespace: kong
spec:
  type: ClusterIP
  ports:
  - name: kong-admin
    port: 80
    targetPort: 8001
    protocol: TCP
  - name: kong-admin-ssl
    port: 443
    targetPort: 8001
    protocol: TCP
  selector:
    app: ingress-kong


apiVersion: v1
kind: Service
metadata:
  name: kong-validation-webhook
  namespace: kong
spec:
  ports:
  - name: webhook
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app: ingress-kong


apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ingress-kong
  name: ingress-kong
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-kong
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
        prometheus.io/port: "8100"
        prometheus.io/scrape: "true"
        traffic.sidecar.istio.io/includeInboundPorts: ""
      labels:
        app: ingress-kong
    spec:
      serviceAccountName: kong-serviceaccount
      containers:
      - env:
        - name: KONG_DATABASE
          value: postgres
        - name: KONG_PG_HOST
          value: postgres.default.svc.cluster.local
        - name: KONG_PG_DATABASE
          value: kong
        - name: KONG_PG_USER
          value: kong
        - name: KONG_PG_PASSWORD
          value: kong
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
        - name: KONG_PROXY_LISTEN
          value: 'off'
        - name: KONG_TRUSTED_IPS
          value: 0.0.0.0/0,::/0
        - name: KONG_REAL_IP_HEADER
          value: X-Forwarded-For
        - name: KONG_REAL_IP_RECURSIVE
          value: 'on'
        - name: KONG_STATUS_LISTEN
          value: 0.0.0.0:8100
        image: kong:2.4
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - kong quit
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: admin-api
        ports:
        - name: kong-admin
          containerPort: 8001
        - containerPort: 8100
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      - env:
        - name: CONTROLLER_KONG_ADMIN_URL
          value: https://127.0.0.1:8444
        - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
          value: "true"
        - name: CONTROLLER_PUBLISH_SERVICE
          value: kong/kong-proxy
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: kong/kubernetes-ingress-controller:1.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: ingress-controller
        ports:
        - containerPort: 8080
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1


apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        name: kong
        app: kong
    spec:
      containers:
      - name: kong-proxy
        image: kong:2.4
        env:
        - name: KONG_DATABASE
          value: postgres
        - name: KONG_PG_HOST
          value: postgres.default.svc.cluster.local
        - name: KONG_PG_DATABASE
          value: kong
        - name: KONG_PG_USER
          value: kong
        - name: KONG_PG_PASSWORD
          value: kong
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: 'off'
        - name: KONG_TRUSTED_IPS
          value: 0.0.0.0/0,::/0
        - name: KONG_REAL_IP_HEADER
          value: X-Forwarded-For
        - name: KONG_REAL_IP_RECURSIVE
          value: 'on'
        ports:
        - name: proxy
          containerPort: 8000
          protocol: TCP
        - name: proxy-ssl
          containerPort: 8443
          protocol: TCP
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "kong quit"]
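For per-route control, my understanding is that KIC 1.x also supports a KongIngress override resource that can set strip_path, attached to the Ingress via a konghq.com/override annotation. A sketch, assuming this applies here (the name strip-path-override is just a placeholder):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: strip-path-override
  namespace: kong
route:
  # strip the matched Ingress path prefix before proxying upstream
  strip_path: true
```

The Ingress would then carry the annotation konghq.com/override: strip-path-override.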

Can someone give me a clue about what's wrong here?
Do I need to change the YAML definitions, or are they fine as they are?
I'm just really frustrated at not being able to find a solution.

