Connection and Upgrade Headers modified

I’m trying to run JupyterHub behind Kong in Kubernetes. Most things work fine, but Terminals are unresponsive to user input.
When I check the logs of the user pod, I see 400s on websocket connections:
400 GET /user/{username}/terminals/websocket/1
This is definitely a Kong issue, because if I port-forward the service directly everything works fine.
I tried replicating the request with curl, using the same headers the browser sends, and got the response: Can "Upgrade" only to "WebSocket"
This is Tornado's error when the Upgrade header does not equal websocket, so I pointed the same request at an echo server to see which headers the upstream is actually receiving. Indeed, the Upgrade header is gone entirely, and the Connection header was replaced:

Request headers (as sent by curl):

> GET / HTTP/1.1
> Connection: keep-alive, Upgrade
> Upgrade: websocket
How can I prevent Kong from overwriting these headers? The documentation says Kong should pass these headers through untouched.

Are you indeed on 1.4? That version had a bug that improperly mangled the Connection header when it contained Firefox-style values (keep-alive, Upgrade instead of Upgrade alone). The fix shipped in 2.0.1; it doesn't look like we released any 1.x version with it, so you'd need to patch the image manually if you don't want to upgrade yet.
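For context on that class of bug: Connection is a comma-separated list of case-insensitive tokens, so a proxy has to check for the upgrade token by membership rather than compare the whole header value. A minimal sketch of the correct check (the function name is just illustrative, not Kong's actual code):

```python
def connection_has_upgrade(value: str) -> bool:
    """Return True if the Connection header's token list contains 'upgrade'.

    Connection carries a comma-separated list of case-insensitive tokens,
    so comparing the whole value against "Upgrade" breaks on Firefox's
    "keep-alive, Upgrade" while working by accident on Chrome's "Upgrade".
    """
    return "upgrade" in (token.strip().lower() for token in value.split(","))

# Firefox-style value: membership check passes, whole-string comparison fails.
assert connection_has_upgrade("keep-alive, Upgrade")
assert connection_has_upgrade("Upgrade")
assert not connection_has_upgrade("keep-alive")
```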


We’ve been running JupyterHub behind Kong, and I can confirm traines' suspicion, since we ran into that exact bug.

Are you indeed on 1.4?

Yes I am, but I’m unsure whether that bug is the culprit; I see the same behavior in Chrome.

I tried upgrading to Kong 2.2.1, but the behavior is identical: the terminal is unresponsive and the websocket request returns a 400:

Request headers:
GET /user/dummy/terminals/websocket/1 HTTP/1.1
Host: jupyter.<company>.<com>
User-Agent: Mozilla/5.0 (<User OS>) Gecko/20100101 Firefox/84.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Sec-WebSocket-Version: 13
Origin: https://jupyter.<company>.<com>
Sec-WebSocket-Extensions: permessage-deflate
Sec-WebSocket-Key: <key>
Connection: keep-alive, Upgrade
Cookie: jupyterhub-user-dummy=<cookie>
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket

Response Headers:
HTTP/1.1 400 Bad Request
content-security-policy: frame-ancestors 'self';report-uri /hub/security/csp-report
Content-Type: text/html; charset=UTF-8
date: Thu, 07 Jan 2021 09:30:00 GMT
server: TornadoServer/6.1
Via: kong/2.2.1
x-content-type-options: nosniff
x-jupyterhub-version: 1.2.2
X-Kong-Proxy-Latency: 1
X-Kong-Upstream-Latency: 3
Content-Length: 34
Connection: keep-alive

I don’t have an echo server set up to check the upstream headers on the cluster I’m testing on, but I presume the same thing is happening: Connection being overridden and Upgrade being removed.

@tyree731 Did you ever solve this?

Since I believe you can run custom scripts on the JupyterHub side of things, maybe try printing out the request JupyterHub is receiving and see if something is getting lost in translation.
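If you don't want to touch JupyterHub itself, a tiny stdlib echo server dropped into the cluster works too. A minimal sketch (all names here are illustrative, not part of any chart): it replies to every GET with the raw headers it received, so you can point Kong's route at it and see exactly what the upstream gets.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class HeaderEchoHandler(BaseHTTPRequestHandler):
    """Reply with the raw request headers so you can see what actually arrived."""

    def do_GET(self):
        body = str(self.headers).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the console quiet; the response body already shows everything.
        pass


if __name__ == "__main__":
    # Port 0 asks the OS for any free port.
    server = HTTPServer(("127.0.0.1", 0), HeaderEchoHandler)
    print(f"echo server listening on port {server.server_port}")
    server.serve_forever()
```

Sending the curl request from earlier at this server instead of JupyterHub shows whether Connection and Upgrade survive the proxy.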

Also, what is your route and service configuration? Are you running any plugins that would affect the request?

What are you seeing with an echo-server now? Is there possibly any other HTTP-aware hop between Kong and JupyterHub?

This is a clean install of Kong, with no KongPlugins or KongIngresses configured. My Kubernetes Ingress is created by the JupyterHub helm chart. The corresponding values are:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: kong
  hosts:
    - jupyter.newcluster.<company>.<com>

I’ll run an echo test and post the results soon.

Request headers:
GET / HTTP/1.1
User-Agent: curl/7.54.0
Accept: */*
Connection: keep-alive, Upgrade
Upgrade: websocket

Upstream headers:
x-forwarded-for=IP, IP

There is no hop between Kong and JupyterHub, but Kong itself sits behind an AWS Classic Load Balancer.

You’re using a Classic Load Balancer? Can you try an Application Load Balancer instead? I know ALBs support websockets, but I’m not so sure Classic Load Balancers do.