Constant increase of memory consumption

Hi!

I have deployed a Kong cluster on AWS ECS and I'm very happy with its performance, but I have realized that the ECS service where it runs consumes more and more memory without any increase in the number of requests or any other reasonable cause.

Here you can see what I mean (screenshots of the following charts were attached):

ECS used memory for the last 2 weeks
ECS used CPU for the last 2 weeks
NLB requests for the last 2 weeks

I would like to know why the used memory is constantly increasing, because this behavior makes the service autoscale, adding more tasks and increasing the bill. Can anyone help me find the root cause of this?

Here are more details about the cluster configuration:

Kong version: 1.0.2
Database: PostgreSQL
KONG_MEM_CACHE_SIZE: 128m
KONG_UPSTREAM_KEEPALIVE: 60
KONG_CLIENT_BODY_BUFFER_SIZE: 8k
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_DB_UPDATE_FREQUENCY: 10
KONG_SSL_CIPHER_SUITE: old
KONG_PG_TIMEOUT: 10000
KONG_ADMIN_LISTEN: off

Enabled plugins:

oauth2
cors
ip-restriction
rate-limiting (using Redis as the backend)
request-transformer
response-transformer

Resources configuration:

There are currently 10 tasks running; each one is assigned 1900 CPU units and a 3000 MB memory soft limit.
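
In case it helps to reproduce this, here is roughly how those values map onto the ECS task definition (a trimmed sketch, not my exact definition: the container name and image tag are placeholders, and the cpu / memoryReservation fields and environment entries just repeat the values listed above):

{
  "containerDefinitions": [
    {
      "name": "kong",
      "image": "kong:1.0.2",
      "cpu": 1900,
      "memoryReservation": 3000,
      "environment": [
        { "name": "KONG_DATABASE", "value": "postgres" },
        { "name": "KONG_MEM_CACHE_SIZE", "value": "128m" },
        { "name": "KONG_UPSTREAM_KEEPALIVE", "value": "60" },
        { "name": "KONG_ADMIN_LISTEN", "value": "off" }
      ]
    }
  ]
}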

Thanks!!

I have exactly the same behavior, only with the default configuration, on containers hosted on Azure with the prometheus, zipkin, and file-log plugins. Currently, I only have one service exposed, the nginx metrics used to do health checks on Kong. There are 3 nodes, and there are 3 ping requests per second per node.

How many worker processes did your node (or each of your nodes) spawn? I generally notice a climb in memory too, but it usually starts plateauing at around 2-3 GB for me; I run 6 worker processes and 10 GB of RAM right now. I also only run my Kong nodes for about a week before I recycle them with a rolling redeploy, regardless. Traffic is around 150 TPS at peak during the day and 20-30 TPS at night.
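
If you're not sure, something like this inside the container should show how many nginx workers are running and roughly how much resident memory each one holds (assuming the image ships a procps-style ps; adjust if it only has a busybox ps):

# list Kong's nginx master and worker processes with their RSS in KB
ps -C nginx -o pid,rss,cmd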

Hi jeremyjpj0916, nginx_worker_processes is configured as the default (auto), and the instances where Kong runs have 4 CPU cores, so there are 4 workers per instance. Right now I'm doing a redeploy every two weeks too; that's why I would like to know where all this memory goes and avoid these unnecessary redeploys.
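
For what it's worth, if I end up pinning the worker count instead of relying on auto, my understanding is that it can be set the same way as the other variables above, since Kong maps KONG_-prefixed environment variables onto kong.conf properties, for example:

KONG_NGINX_WORKER_PROCESSES: 4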

Thanks.