Using ngx.socket to forward requests to an upstream instead of the default nginx proxy module


#1

Due to nginx's proxying behaviour, there are limitations in what plugins can do in the header_filter, body_filter, and log phases. Things like modifying a response header based on the response body are impossible. Even logging needs to be done in ngx.timer.at due to the absence of cosockets there, which carries the risk of the timer never executing because the timer queue is full. Since Kong is an API management layer, I think those things are somewhat crucial in some scenarios.
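For context, the log-phase workaround described above can be sketched like this, assuming the lua-resty-http client; the log endpoint, payload shape, and the `serialized_entry` variable are hypothetical:

```lua
-- No cosockets are available in the log phase, so network calls must be
-- deferred to a zero-delay timer. Endpoint and payload are placeholders.
local http = require "resty.http"

local function send_log(premature, payload)
  if premature then return end          -- worker is shutting down
  local httpc = http.new()
  local res, err = httpc:request_uri("http://127.0.0.1:9200/logs", {
    method  = "POST",
    body    = payload,
    headers = { ["Content-Type"] = "application/json" },
  })
  if not res then
    ngx.log(ngx.ERR, "failed to ship log entry: ", err)
  end
end

-- in the plugin's log handler:
local ok, err = ngx.timer.at(0, send_log, serialized_entry)
if not ok then
  -- the failure mode mentioned above: the pending-timer queue may be full
  ngx.log(ngx.ERR, "failed to create timer: ", err)
end
```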

I am not an expert on this, but just wondering: what if Kong used its own proxy stack, executed in the access phase using ngx.socket.*? What would the disadvantages of this be?


#2

Implementing our own proxy module in Lua would add flexibility and fix some of the problems you mention, but on the other hand the nginx proxy module is a well-tested and widely used solution. Doing our own proxy is always a possibility, but if we go that route we’ll have to do it very carefully. At the moment there are other, higher-priority changes.


#3

Can’t Kong use https://github.com/pintsized/lua-resty-http#proxy as the proxy module? I’ve tried running httpc:proxy_request, then httpc:request_uri, and then httpc:proxy_response in the access phase of a custom plugin, and it works.
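For reference, the proxy helpers mentioned above are used roughly like this; this is a sketch based on the lua-resty-http README, with the upstream host and port as placeholders:

```lua
access_by_lua_block {
  local http = require "resty.http"
  local httpc = http.new()

  -- connect to the upstream (placeholder address)
  local ok, err = httpc:connect("127.0.0.1", 8080)
  if not ok then
    ngx.log(ngx.ERR, "upstream connect failed: ", err)
    return ngx.exit(502)
  end

  -- forward the current client request as-is and read the response
  local res, err = httpc:proxy_request()
  if not res then
    ngx.log(ngx.ERR, "proxy_request failed: ", err)
    return ngx.exit(502)
  end

  -- stream the upstream status, headers, and body back to the client,
  -- then return the connection to the keepalive pool
  httpc:proxy_response(res)
  httpc:set_keepalive()
}
```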

*Sorry for the late reply, I missed the notification.


#4

Even if the code change is not very big, this sits at the center of what Kong does. Even the slightest change has the potential to affect all Kong users. If we are going to make this change, we must have a powerful reason for doing it. Right now, our priorities push us in other directions. When the time comes, if it does, we’ll address this change. Just not for now.


#5

I was just curiously browsing the forums around topics of replacing the proxy_pass directive, and I found this thread. Not only am I thinking about the flexibility it may give Kong and the community to start leveraging either an in-house open-source client (like a fork of https://github.com/ledgetech/lua-resty-http) or that library directly, but reading another OpenResty forum post also got me wondering whether there are performance benefits to moving away as well, after reading posts like the one below:

https://groups.google.com/forum/#!topic/openresty-en/fRBPicZOcME

It seems simple enough for me to try out, so I may inline a custom conf that uses lua-resty-http instead of proxy_pass in some sandbox testing and see whether I also observe a large performance boost in latency/throughput.
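If it helps anyone else trying the same experiment, the sandbox comparison could be as simple as two locations in a test nginx.conf; ports and paths here are placeholders, and the Lua location is only a sketch:

```nginx
# baseline: the stock nginx proxy module
location /via-proxy-pass {
    proxy_pass http://127.0.0.1:8080;
}

# candidate: the same upstream, proxied in Lua via lua-resty-http
location /via-resty-http {
    content_by_lua_block {
        local httpc = require("resty.http").new()
        local ok, err = httpc:connect("127.0.0.1", 8080)
        if not ok then
            return ngx.exit(502)
        end
        local res = httpc:proxy_request()
        if not res then
            return ngx.exit(502)
        end
        httpc:proxy_response(res)
        httpc:set_keepalive()
    }
}
```

Hitting both locations with the same load-testing tool (e.g. wrk) should show whether the gap reported in that post reproduces.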

Edit - I still doubt that the person who posted in the OpenResty forums has a proper test setup; how in the world could NGINX’s own proxy module look so inefficient compared to one from a third-party GitHub repo? Which is why validating it yourself is always best :+1:.