Hello,
We have an older Kong setup in production that is working well. We are attempting to replicate that setup, deployment and config alike, exactly in our Stage environment.
Production
Environment: Kubernetes (EKS)
Kubernetes: 1.15
Mode: with database
Database: Postgres 9.6
kong: 1.1
kong-ingress-controller: 0.4.0
plugins: cors
Stage
Environment: Kubernetes (EKS)
Kubernetes: 1.18
Mode: with database
Database: Postgres 9.6
kong: 1.1
kong-ingress-controller: 0.5.0
plugins: cors
The only difference in Stage is that Kubernetes 1.18 required bumping the kong-ingress-controller version from 0.4.0 to 0.5.0 in order to deploy successfully. Other than that, the configs are identical. Specifically, when we run:
kubectl port-forward svc/kong-admin-api 8001
and then:
curl -i localhost:8001/
and then diff the JSON config output, the only differences are the node id, hostname, and pids. The services and routes are likewise identical, as is the CORS plugin config.
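(For anyone who wants to reproduce the comparison, this is roughly how we produce the diff. A sketch only: it assumes jq is available and that the node-specific fields are named node_id and hostname in Kong 1.1's root endpoint output; adjust the del() filter to whatever node-specific fields, including the pid-related keys, actually differ in your output.)

kubectl port-forward svc/kong-admin-api 8001 &
curl -s localhost:8001/ | jq 'del(.node_id, .hostname)' > prod.json
# repeat against the Stage context to produce stage.json, then:
diff prod.json stage.json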
The Issue
The issue is that when we attempt to log in on Stage, we cannot get a token from our backend service, so authentication fails.
Production:
kong-<pod-hash> kong-proxy <ip-1> - - [date] "OPTIONS /v1/tokens HTTP/1.1" 200 0 "https://portal.<domain>.com/"
kong-<pod-hash> kong-proxy <ip-2> - - [date] "POST /v1/tokens HTTP/1.1" 200 1250 "https://portal.<domain>.com/"
Above you can see the POST to the /v1/tokens endpoint gets a 200.
Stage:
kong-<pod-hash> kong-proxy <ip-1> - - [date] "OPTIONS /v1/tokens HTTP/1.1" 200 0 "https://portalstage.<domain>.com/"
kong-<pod-hash> kong-proxy <ip-2> - - [date] "POST /v1/tokens HTTP/1.1" 400 207 "https://portalstage.<domain>.com/"
Above you can see the POST to the /v1/tokens endpoint returns a 400.
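The access log only shows the status and byte count, so to capture the 207-byte response body the failing request can be replayed by hand along these lines (a sketch; the JSON payload is a placeholder for whatever the portal actually sends):

curl -si https://portalstage.<domain>.com/v1/tokens \
  -X POST \
  -H 'Content-Type: application/json' \
  -H 'Origin: https://portalstage.<domain>.com' \
  -d '<same JSON body the portal sends>'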
If it’s relevant:
[1] ip-1 / ip-2 are the IPv4 addresses of the EKS nodes.
[2] The load balancer services are also identical (a sketch for replaying the request past the ELB follows the manifest). Both are Classic ELBs with:
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=Stage
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<region>:<redacted>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  name: kong-proxy
  namespace: dev
spec:
  ports:
  - name: kong-proxy
    port: 443
    protocol: TCP
    targetPort: 8000
  selector:
    app: kong
  type: LoadBalancer
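To take the ELB out of the equation, the same request can be replayed straight against the proxy port on the pod (a sketch; it assumes the Kong route matches on the portalstage host header, and the payload is again a placeholder):

kubectl -n dev port-forward deploy/kong 8000 &
curl -si http://localhost:8000/v1/tokens \
  -X POST \
  -H 'Host: portalstage.<domain>.com' \
  -H 'Content-Type: application/json' \
  -d '<same JSON body the portal sends>'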
[3] And here is the kong deployment manifest.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kong
    name: kong
  name: kong
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong
      name: kong
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kong
        name: kong
    spec:
      containers:
      - env:
        - name: KONG_PG_PASSWORD
          value: kong
        - name: KONG_PG_HOST
          value: postgres
        - name: KONG_PROXY_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_LISTEN
          value: "off"
        image: kong:1.1
        imagePullPolicy: IfNotPresent
        name: kong-proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-ssl
          protocol: TCP
      initContainers:
      - command:
        - /bin/sh
        - -c
        - kong migrations list
        env:
        - name: KONG_ADMIN_LISTEN
          value: "off"
        - name: KONG_PROXY_LISTEN
          value: "off"
        - name: KONG_PROXY_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PG_HOST
          value: postgres
        - name: KONG_PG_PASSWORD
          value: kong
        image: kong:1.1
        imagePullPolicy: IfNotPresent
        name: wait-for-migrations
      restartPolicy: Always
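Since the manifest sends the proxy error log to /dev/stderr, the simplest way to see why Kong (or the upstream service) rejects the POST is to tail the proxy container while reproducing the login attempt:

kubectl -n dev logs deploy/kong -c kong-proxy -f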
We’re trying to figure out why these installations/configs behave differently and, of course, how to get past the 400 so we can log in.
Any help is much appreciated.
Ben