POST /v1/tokens HTTP/1.1 400

Hello,

We have an older Kong setup in production that is working well. We are attempting to replicate this setup (deployment and config) exactly in our Stage environment.

Production

Environment: Kubernetes (EKS)
Kubernetes: 1.15
Mode: with database
Database: Postgres 9.6
kong: 1.1
kong-ingress-controller: 0.4.0
plugins: cors

Stage

Environment: Kubernetes (EKS)
Kubernetes: 1.18
Mode: with database
Database: Postgres 9.6
kong: 1.1
kong-ingress-controller: 0.5.0
plugins: cors

The only difference in Stage is that Kubernetes 1.18 required bumping kong-ingress-controller from 0.4.0 to 0.5.0 in order to deploy successfully. Other than that, the configs are identical. Specifically, when we run:

kubectl port-forward svc/kong-admin-api 8001

and then:

curl -i localhost:8001/

and then diff the JSON config output between the two environments, the only differences are the node_id, hostname, and pids. The services and routes are likewise identical, as is the CORS plugin config.
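
For anyone wanting to reproduce that comparison, here is a sketch of what we did. The jq del() paths are an approximation; where exactly node_id, hostname, and the pids live in the payload can vary by Kong version, so adjust them to your actual output:

# in a shell port-forwarded to the Production admin API
curl -s localhost:8001/ | jq -S 'del(.node_id, .hostname, .pids)' > prod.json

# in a shell port-forwarded to the Stage admin API
curl -s localhost:8001/ | jq -S 'del(.node_id, .hostname, .pids)' > stage.json

diff prod.json stage.json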

The Issue

The issue is that when we attempt to log in to the Stage portal, we cannot get a token from our backend service, so authentication fails.

Production:

kong-<pod-hash> kong-proxy <ip-1> - - [date] "OPTIONS /v1/tokens HTTP/1.1" 200 0 "https://portal.<domain>.com/"
kong-<pod-hash> kong-proxy <ip-2> - - [date] "POST /v1/tokens HTTP/1.1" 200 1250 "https://portal.<domain>.com/" 

Above you can see the POST to the /v1/tokens endpoint returns a 200.

Stage:

kong-<pod-hash> kong-proxy <ip-1> - - [date] "OPTIONS /v1/tokens HTTP/1.1" 200 0 "https://portalstage.<domain>.com/"
kong-<pod-hash> kong-proxy <ip-2> - - [date] "POST /v1/tokens HTTP/1.1" 400 207 "https://portalstage.<domain>.com/" 

Above you can see the POST to the /v1/tokens endpoint returns a 400.
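
To take the browser out of the picture, the failing request can be replayed by hand. The JSON body below is a placeholder for whatever your login form actually posts; the 207-byte response body on the 400 is also worth capturing, since it likely contains the backend's error message:

curl -i -X POST https://portalstage.<domain>.com/v1/tokens \
  -H 'Content-Type: application/json' \
  -H 'Origin: https://portalstage.<domain>.com' \
  -d '{"username": "...", "password": "..."}'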

If it’s relevant:

[1] ip-1 / ip-2 are IPv4 addresses of the EKS nodes

[2] The load balancer services are also identical. Both are Classic ELB with:

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=Stage
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<region>:<redacted>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  name: kong-proxy
  namespace: dev
spec:
  ports:
  - name: kong-proxy
    port: 443
    protocol: TCP
    targetPort: 8000
  selector:
    app: kong
  type: LoadBalancer
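
A quick way to compare these across environments is to diff the rendered objects rather than the manifests, which also catches anything defaulted differently between cluster versions (expect some harmless noise in the diff from uid, creationTimestamp, and status):

kubectl -n dev get svc kong-proxy -o yaml > stage-svc.yaml
# run the same against the Production context, then:
diff prod-svc.yaml stage-svc.yaml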

[3] And here is the kong deployment manifest.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kong
    name: kong
  name: kong
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong
      name: kong
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kong
        name: kong
    spec:
      containers:
      - env:
        - name: KONG_PG_PASSWORD
          value: kong
        - name: KONG_PG_HOST
          value: postgres
        - name: KONG_PROXY_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_LISTEN
          value: "off"
        image: kong:1.1
        imagePullPolicy: IfNotPresent
        name: kong-proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-ssl
          protocol: TCP
      initContainers:
      - command:
        - /bin/sh
        - -c
        - kong migrations list
        env:
        - name: KONG_ADMIN_LISTEN
          value: "off"
        - name: KONG_PROXY_LISTEN
          value: "off"
        - name: KONG_PROXY_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PG_HOST
          value: postgres
        - name: KONG_PG_PASSWORD
          value: kong
        image: kong:1.1
        imagePullPolicy: IfNotPresent
        name: wait-for-migrations
      restartPolicy: Always
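
One thing worth ruling out on the database side is a pending migration; the wait-for-migrations init container's output shows this directly. For example, with the deployment and container names from the manifest above:

kubectl -n dev logs deploy/kong -c wait-for-migrations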

We’re trying to figure out why these installations/configs behave differently, and of course to get past the 400 so we can log in.

Any help is much appreciated.

Ben

If anyone runs into a similar issue, this was our solution.

It was difficult to determine whether the issue was in the Kong config or in our user-service (a microservice running in EKS) that handles our token-based authentication.

So we used kubectl to port-forward directly to the server port of the user-service and were able to get a token (authenticate), confirming that the issue was in the Kong config rather than the service itself.
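
For reference, the bypass looked roughly like this (the service name, port, and request body are placeholders for our actual setup):

kubectl -n dev port-forward svc/user-service 8080:80

# in another shell; this returned a token, so the service itself was fine
curl -i -X POST localhost:8080/v1/tokens \
  -H 'Content-Type: application/json' \
  -d '{"username": "...", "password": "..."}'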

We then deployed a user-service image with some updated logging and could clearly see:

user-service-54df9f9b-xphn5 user-service [31/Mar/2021:17:42:09 +0000] 1.2.3.4 "POST / HTTP/1.1" 400 (6 ms)
user-service-54df9f9b-gz9hc user-service [31/Mar/2021:17:42:10 +0000] 1.2.3.4 "POST / HTTP/1.1" 400 (12 ms)
user-service-54df9f9b-gz9hc user-service [31/Mar/2021:17:42:11 +0000] 1.2.3.4 "POST / HTTP/1.1" 400 (7 ms)
user-service-54df9f9b-xphn5 user-service [31/Mar/2021:17:42:12 +0000] 1.2.3.4 "POST / HTTP/1.1" 400 (6 ms)

Kong was stripping the request path before proxying, which is why the service saw POST / instead of POST /v1/tokens. We then reviewed the routes and found:

"strip_path": true,

which was the culprit. We removed and re-added the routes with this field set to false, and this resolved the issue.
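
For anyone fixing the same thing: removing and re-adding the routes worked for us, but Kong's Admin API also supports patching a route in place. Note that when the ingress controller owns the routes, a manual patch may be reverted on its next sync, so the declarative fix is to override strip_path through the controller's KongIngress resource instead:

curl -i -X PATCH localhost:8001/routes/<route-id> \
  --data strip_path=false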


Thank you for the report and the follow-up, bwmills! The default for "strip_path" was changed to "false" in newer versions of the Ingress Controller to be more consistent with what folks are accustomed to in Kubernetes.

Hopefully that keeps people from running into what you did, but we're always thankful for a report and its follow-up.

Many thanks @Aaron_Miller and good to know.

Fwiw, we are planning to migrate to DB-less mode and deploy the latest version; this was just a first step.

Have a good day/evening,

Ben