DB-Less mem_cache_size performance

Hi,

I’m running some performance tests against Kong in Docker containers (3 nodes in DB-less mode), and my declarative configuration (YAML) is around 60 lines.

The test drives about 5k rps, using a mock server as the upstream.

Across the test iterations I increase mem_cache_size from 512 to 1024 and then 2048. Running the same tests under the same conditions, the results get better as mem_cache_size increases.

Can someone explain that?

I’m only using the Prometheus plugin.
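
For context, each node is started along these lines; this is just a sketch, the image tag, ports, and file paths are illustrative, and only mem_cache_size changes between runs:

```yaml
# docker-compose sketch of one DB-less node (values are illustrative)
services:
  kong:
    image: kong                                   # whichever Kong image/tag you run
    environment:
      KONG_DATABASE: "off"                        # DB-less mode
      KONG_DECLARATIVE_CONFIG: /kong/kong.yaml    # the ~60-line declarative config
      KONG_MEM_CACHE_SIZE: "512m"                 # varied per run: 512m / 1024m / 2048m
      KONG_PROXY_LISTEN: "0.0.0.0:8000"
    volumes:
      - ./kong.yaml:/kong/kong.yaml:ro
    ports:
      - "8000:8000"
```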

Thanks

Hi,

Across the test iterations I increase mem_cache_size from 512 to 1024 and then 2048. Running the same tests under the same conditions, the results get better as mem_cache_size increases.

At first glance, I would take this conclusion with a grain of salt.

What is the sample size of your tests? Which resources are you tracking and what differences have you observed? Can you elaborate on which results you are referring to?

Yes.

I’m using a mock server as the upstream server.
I have 5 endpoints mocked, with response sizes of around 1k, 2k, 4k, 8k, and 16k; all are HTTP GET.

The distribution is 25% on the 1k endpoint, 50% across the 2k, 4k, and 8k endpoints, and 15% on the 16k endpoint.
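
Roughly, the declarative config just defines a service and route per mocked endpoint, something like the sketch below (the mock hostname, port, and paths are illustrative):

```yaml
# kong.yaml sketch (DB-less declarative config); hostnames and paths are illustrative
_format_version: "2.1"
services:
  - name: mock-1k
    url: http://mock-server:8080/payload/1k
    routes:
      - name: mock-1k-route
        paths:
          - /1k
  - name: mock-16k
    url: http://mock-server:8080/payload/16k
    routes:
      - name: mock-16k-route
        paths:
          - /16k
  # ...plus similar service/route pairs for the 2k, 4k, and 8k payloads
```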

I don't have any plugin installed.

The test runs in JMeter for around 5 to 10 minutes at 5k rps.

I've run the test several times with the same mem_cache_size value, and the results are the same.

But when I increase mem_cache_size the results are different. More memory is not necessarily a sign of better results; sometimes it is, but the relationship is not linear.

The analysis is based on throughput, average response times, and the 90th, 95th, and 99th percentiles of the response times.

The mock server is stable, with the same results on all tests.

Thanks

More memory should not mean better results, especially not in DB-less mode. This should be a hint that there is no correlation between memory size and performance here.

I suspect your host (or upstream mock server host) is dealing with other workloads. You should make sure to work in a lab environment dedicated to benchmarking before drawing significant conclusions (e.g. dedicated host, bare metal, etc…).

Yes,

I totally agree with you, but that is exactly the reason I opened this issue.

The test environment is dedicated, and we are collecting metrics from the mock server (JVM) to check whether the problem is on the mock server side. It is not.

I have already tried this in production, with an unused service that has a 99th percentile response time of 4 ms. Doing exactly the same test with different mem_cache_size values, the results are different.

If I repeat the test (n times) with the same mem_cache_size, the results are the same.

Thanks