Discussions on our custom nginx.conf/nginx-kong.conf files

I think it would be great to have an open forum post where users can share their Kong nginx builds and use cases, and the community and developers can point out optimizations or problems they notice in each configuration.

I will kick it off with my config:

Setup: Running on Docker Alpine 3.6, with API transaction payloads averaging 8 KB. Hoping to support some lower-volume transactions with client/backend payloads as large as 20 MB if possible. Running 4 nodes total (2 CPUs and 3000 MiB RAM each).

I tried including listen-directive optimizations as well as an extra SSL optimization (still unsure how much disabling the accept mutex will help at high TPS), plus HTTP/2 optimizations and a few others.


worker_processes auto;                                                          
daemon off;                                                                     
pid pids/nginx.pid;                                                             
error_log /dev/stderr notice;                                                   
worker_rlimit_nofile 1048576;                                                   
events {                                                                        
    worker_connections 16384;                                                   
    multi_accept on;                                                            
    accept_mutex off; # Need to study this. I run a Linux kernel new enough for EPOLLEXCLUSIVE and I use reuseport; I understand the thundering-herd concern, but at decent TPS less CPU is wasted and it supposedly shaves off milliseconds at higher TPS.
    #thread_pool pool0 threads=4 max_queue=65536; # Need to verify this; can't use it until we find a way to compile OpenResty from source and then install Kong via LuaRocks.
}
http {                                                                          
    include 'nginx-kong.conf';                                                  


#Performance Tweaks
tcp_nodelay on;			#  Enables or disables the use of the TCP_NODELAY option. The option is enabled only when a connection is transitioned into the keep-alive state
tcp_nopush on; 			#  Enables or disables the use of the TCP_NOPUSH socket option on FreeBSD or the TCP_CORK socket option on Linux.

proxy_buffer_size 128k; 	# Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform. It can be made smaller, however. 
proxy_buffers 10 512k; 		# Sets the number and size of the buffers used for reading a response from the proxied server, for a single connection. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform. 
proxy_busy_buffers_size 512k;	# When buffering of responses from the proxied server is enabled, limits the total size of buffers that can be busy sending a response to the client while the response is not yet fully read. In the meantime, the rest of the buffers can be used for reading the response and, if needed, buffering part of the response to a temporary file. By default, size is limited by the size of two buffers set by the proxy_buffer_size and proxy_buffers directives.
# Note: at these sizes a single fully buffered response can tie up 128k + 10 x 512k ≈ 5.1 MB of buffers, so keep an eye on memory under high concurrency.

reset_timedout_connection on;	# Enables or disables resetting timed out connections. The reset is performed as follows. Before closing a socket, the SO_LINGER option is set on it with a timeout value of 0. When the socket is closed, TCP RST is sent to the client, and all memory occupied by this socket is released. This helps avoid keeping an already closed socket with filled buffers in a FIN_WAIT1 state for a long time. It should be noted that timed out keep-alive connections are closed normally. 

keepalive_requests 200;		# Sets the maximum number of requests that can be served through one keep-alive connection. After the maximum number of requests are made, the connection is closed. Default is 100.
keepalive_timeout 120s;		# The first parameter sets a timeout during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections. The optional second parameter sets a value in the “Keep-Alive: timeout=time” response header field. Two parameters may differ. The “Keep-Alive: timeout=time” header field is recognized by Mozilla and Konqueror. MSIE closes keep-alive connections by itself in about 60 seconds. 

#http2_max_requests 200;         # Comparable to keepalive_requests above; can't use it because it requires nginx 1.11.6 and we run 1.11.2.
http2_idle_timeout 75s;          # Comparable to the keepalive_timeout above.
http2_recv_buffer_size 5m;       #Sets the size of the per worker input buffer.
http2_body_preread_size 128k;    #Sets the size of the buffer per each request in which the request body may be saved before it is started to be processed. 
client_max_body_size 20m;
proxy_ssl_server_name on;
underscores_in_headers on;

# Need to verify these 
#aio threads=pool0;    # Can't use this until we solve compiling nginx from source
#directio 4m;          # Can't use this until we solve compiling nginx from source

lua_package_path '${{LUA_PACKAGE_PATH}};;';
lua_package_cpath '${{LUA_PACKAGE_CPATH}};;';
lua_socket_pool_size ${{LUA_SOCKET_POOL_SIZE}};
lua_max_running_timers 4096;
lua_max_pending_timers 16384;
lua_shared_dict kong                5m;
lua_shared_dict kong_cache          256m;
lua_shared_dict kong_process_events 5m;
lua_shared_dict kong_cluster_events 5m;
lua_shared_dict kong_healthchecks   5m;
lua_shared_dict kong_cassandra      5m;

lua_socket_log_errors off;
lua_ssl_trusted_certificate '/my/path/to/cert';
lua_ssl_verify_depth 2;

init_by_lua_block {
    kong = require 'kong'
    kong.init()
}

init_worker_by_lua_block {
    kong.init_worker()
}

proxy_next_upstream_tries 999;

upstream kong_upstream {
    server 0.0.0.1;  # dummy entry; real targets are picked by the balancer
    balancer_by_lua_block {
        kong.balancer()
    }
    keepalive 60;
}

server {
    server_name kong;
    listen ${{PROXY_LISTEN}}${{PROXY_PROTOCOL}} deferred reuseport;
    error_page 400 404 408 411 412 413 414 417 /kong_error_handler;
    client_header_buffer_size 8k;
    large_client_header_buffers 2 16k;
    error_page 500 502 503 504 /kong_error_handler;

    access_log ${{PROXY_ACCESS_LOG}};
    error_log ${{PROXY_ERROR_LOG}} ${{LOG_LEVEL}};

    client_body_buffer_size 5m;

    listen ${{PROXY_LISTEN_SSL}} ssl${{HTTP2}}${{PROXY_PROTOCOL}} deferred reuseport;
    ssl_certificate ${{SSL_CERT}};
    ssl_certificate_key ${{SSL_CERT_KEY}};
    ssl_protocols TLSv1.2;
    ssl_certificate_by_lua_block {
        kong.ssl_certificate()
    }

    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets on;
    ssl_session_timeout 10m;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ${{SSL_CIPHERS}};

    proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
    proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};

    real_ip_header     ${{REAL_IP_HEADER}};
    real_ip_recursive  ${{REAL_IP_RECURSIVE}};

    location / {
        set $upstream_host               '';
        set $upstream_upgrade            '';
        set $upstream_connection         '';
        set $upstream_scheme             '';
        set $upstream_uri                '';
        set $upstream_x_forwarded_for    '';
        set $upstream_x_forwarded_proto  '';
        set $upstream_x_forwarded_host   '';
        set $upstream_x_forwarded_port   '';

        rewrite_by_lua_block {
            kong.rewrite()
        }

        access_by_lua_block {
            kong.access()
        }

        proxy_http_version 1.1;
        proxy_set_header   Host              $upstream_host;
        proxy_set_header   Upgrade           $upstream_upgrade;
        proxy_set_header   Connection        $upstream_connection;
        proxy_set_header   X-Forwarded-For   $upstream_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $upstream_x_forwarded_proto;
        proxy_set_header   X-Forwarded-Host  $upstream_x_forwarded_host;
        proxy_set_header   X-Forwarded-Port  $upstream_x_forwarded_port;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_pass_header  Server;
        proxy_pass_header  Date;
        proxy_ssl_name     $upstream_host;
        proxy_pass         $upstream_scheme://kong_upstream$upstream_uri;

        header_filter_by_lua_block {
            kong.header_filter()
        }

        body_filter_by_lua_block {
            kong.body_filter()
        }

        log_by_lua_block {
            kong.log()
        }
    }

    location = /kong_error_handler {
        internal;
        content_by_lua_block {
            kong.handle_error()
        }
    }
} # server
} # http
One thing I recently added to our custom conf is as follows:

Anywhere in the proxy servers' header_filter_by_lua_block we have added a snippet disabling the Server/Via headers:

header_filter_by_lua_block {
   ngx.header["Server"] = nil
   ngx.header["Via"] = nil
}

This will help prevent leaking web-server information, as can be seen here (and if I were to add server_tokens to my KONG_HEADERS env field, this would have read something like Kong/x.x as opposed to OpenResty and its version number):

Our security audit team is a pretty picky bunch, and I just do what I am told :laughing: . This could be addressed by Kong directly in the runloop handler: if server_tokens is missing from the KONG_HEADERS field, drop the Server/Via headers; otherwise, set them to the Kong version.
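In the meantime, the proposed behavior can be sketched in a custom template. This is only a sketch, not Kong's actual implementation: the `kong/0.12.x` string is a placeholder, and reading environment variables from Lua requires declaring them with an `env` directive in the main nginx config.

```
# Sketch only: emulating the proposed Server/Via behavior in a custom nginx-kong.conf.
# Assumes `env KONG_HEADERS;` is declared in the main config so os.getenv() can see it.
header_filter_by_lua_block {
    local headers = os.getenv("KONG_HEADERS") or ""
    if headers:find("server_tokens", 1, true) then
        -- "kong/0.12.x" is a placeholder; Kong would substitute its real version
        ngx.header["Server"] = "kong/0.12.x"
        ngx.header["Via"]    = "kong/0.12.x"
    else
        -- server_tokens not requested: suppress the headers entirely
        ngx.header["Server"] = nil
        ngx.header["Via"]    = nil
    end
}
```

The real fix would belong in Kong's runloop handler rather than in everyone's template, but this keeps our audit team happy today.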