Kong on ECS: CPU and Memory requirements?

Does anyone have some good suggestions for tuning ECS CPU and memory requirements for Kong? A rule of thumb based on RPM would be pretty awesome. :slight_smile:

Hmm, I can’t speak to ECS, but I can speak to CPU/memory for our cluster running on OpenShift with VMs. We rock 5 CPU and 10GB RAM (although right now, with little traffic and few proxies, we’re only seeing 1GB in use, so it’s currently overkill; we intend to grow into it :slight_smile: ). Maxed out in load tests, 5 CPU gave us around 3,500 requests per second with 2-3 global plugins and about 2 proxy-specific plugins enabled (all the auth/security/logging plugins we run). You can use that number to convert to requests per minute when working out your CPU requirements. Note that environments differ a lot; I suspect our underlying VMs don’t perform as well as some others, so it’s best to test, tune, and scale until you find a good medium on the platform you deploy onto.
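
If it helps as a starting point, here’s a rough sketch (not ECS-specific; image tag, hostname, and numbers are placeholders) of the knobs we’d tune for a node that size. KONG_NGINX_WORKER_PROCESSES and KONG_MEM_CACHE_SIZE are just the env-var forms of the nginx_worker_processes and mem_cache_size settings:

```bash
# Sketch only; tag, hostnames, and sizes are placeholders.
# Match nginx worker count to the CPUs you allocate (one worker per CPU),
# and give the in-memory entity cache a slice of the RAM budget.
docker run -d --name kong \
  --cpus=5 --memory=10g \
  -e KONG_DATABASE=postgres \
  -e KONG_PG_HOST=my-postgres-host \
  -e KONG_NGINX_WORKER_PROCESSES=5 \
  -e KONG_MEM_CACHE_SIZE=2048m \
  -p 8000:8000 \
  kong:latest
```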

At the risk of a reply that strays a bit from community-contributed answers, it’s worth noting that the Enterprise Edition documentation maintains an Implementation Checklist which, among other things, details some basic system requirements and sizing guidelines. They apply similarly to Kong CE (Community Edition, the open-source version).

I just wanted to reply to this with an update on what I tried and the results I got.

I’m running Kong on AWS in ECS on a t2.xlarge instance (2-3 Kong nodes on one instance), behind an ALB, using RDS Postgres as the datastore. I’m using acl, key-auth, logging, and rate-limiting plugins. (The rate-limiting policy is ‘local’, not hitting Postgres.)
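
For reference, enabling rate limiting with the local policy is a single Admin API call along these lines (the service name and limit here are made up for illustration):

```bash
# Hypothetical service name and limit; 'local' keeps the counters in-node
# instead of hitting Postgres on every request.
curl -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=100" \
  --data "config.policy=local"
```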

Using loader.io (a very nice service!) I was able to make nearly 4,500 API calls from 250 clients over one minute before hitting timeouts (and, to be honest, I think the timeouts were coming from the upstream server). :slight_smile:

At peak load CPU usage was just over 3%.
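
(If you want to reproduce something similar without loader.io, a rough local equivalent using the open-source hey tool might look like the sketch below; the hostname, path, and API key are placeholders, and it won’t match loader.io’s client ramp-up exactly.)

```bash
# Roughly: 250 concurrent clients for 60 seconds against the proxy,
# passing a key-auth credential. Hostname, path, and key are placeholders.
hey -z 60s -c 250 -H "apikey: MY_TEST_KEY" https://my-alb-hostname.example.com/some/endpoint
```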

If it looks like I’ve forgotten anything or haven’t accounted for anything, do let me know! Always happy to get feedback!

Would love to know how you set up Kong on ECS :slight_smile:

So I am working on updating our setup to use a newer version of Kong :slight_smile: and decided to switch some things up …

Our setup:

  • Postgres RDS
  • ECS Fargate
  • ALB

Because we’re migrating from a couple of versions back and our data has a relatively small footprint, we’re doing a blue/green deploy by setting up a completely new Kong installation (ECS + RDS db) that we’ll just switch our ALB to when complete.

To do this, we set up a new RDS instance and a bastion EC2 instance to connect to it. On that bastion we built and published our new Docker image (we use a slightly customized nginx template) and ran bash scripts to add our services, routes, consumers, and plugins (our current version of Kong predates decK), set up the admin-port loopback, and so on.
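
Those bootstrap scripts are mostly just curl calls against the Admin API; a trimmed-down sketch of the kind of thing we run is below. All names, URLs, and the key are placeholders, and note that on newer Kong versions the ACL plugin’s config.whitelist has been renamed to config.allow.

```bash
#!/usr/bin/env bash
# Placeholder names/URLs; run against the Admin API (here on localhost:8001).
ADMIN=http://localhost:8001

# Service + route
curl -sf -X POST "$ADMIN/services" \
  --data "name=orders" \
  --data "url=http://orders.internal:8080"
curl -sf -X POST "$ADMIN/services/orders/routes" \
  --data "paths[]=/orders"

# Consumer with a key-auth credential and an ACL group
curl -sf -X POST "$ADMIN/consumers" --data "username=acme"
curl -sf -X POST "$ADMIN/consumers/acme/key-auth" --data "key=EXAMPLE_KEY"
curl -sf -X POST "$ADMIN/consumers/acme/acls" --data "group=customers"

# Restrict the service to that group
curl -sf -X POST "$ADMIN/services/orders/plugins" \
  --data "name=acl" \
  --data "config.whitelist=customers"
```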

Once the DB is set up nicely, we set up a new ECS cluster. We were using EC2 instances with ECS before but have moved to Fargate because AWS is better than we are at packing containers. :slight_smile: It used to be that ECS only let you expose one port per container – this has changed but it’s easy enough for us to loopback the admin port. (If you are using Kong Enterprise, you probably want to expose that port too because the Vitals web UI doesn’t work with loopback.)
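
Concretely, keeping the Admin API on loopback is just a matter of how the listeners are bound, so only port 8000 ever needs to be mapped out of the container. A sketch of the env-var form (these map to Kong’s proxy_listen and admin_listen settings):

```bash
# Admin API stays on loopback; the proxy listens on 0.0.0.0:8000,
# which is the single port we map in the container definition.
export KONG_PROXY_LISTEN="0.0.0.0:8000"
export KONG_ADMIN_LISTEN="127.0.0.1:8001"
```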

Make sure your RDS database and backend services allow inbound access from your new ECS security group, and that your ALB has inbound access to ALL TCP PORTS of your ECS cluster (the host ports are assigned dynamically, so your cluster won’t just use 8000)! This is the thing most likely to trip you up.
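
In AWS CLI terms, the security-group wiring looks roughly like this; the group IDs are placeholders, so adjust to your own VPC:

```bash
# Placeholder security-group IDs; substitute your own.
RDS_SG=sg-xxxxxxxxxxxx
ECS_SG=sg-yyyyyyyyyyyy
ALB_SG=sg-zzzzzzzzzzzz

# Let the ECS tasks reach Postgres on the RDS security group.
aws ec2 authorize-security-group-ingress \
  --group-id "$RDS_SG" --protocol tcp --port 5432 --source-group "$ECS_SG"

# Let the ALB reach the ECS tasks on any TCP port, since the host port
# for the proxy container is assigned dynamically.
aws ec2 authorize-security-group-ingress \
  --group-id "$ECS_SG" --protocol tcp --port 0-65535 --source-group "$ALB_SG"
```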

Most of the heavy lifting in the ECS setup is defining the task and, inside it, the container. The container’s environment variables are what you use to connect to your DB; you can use AWS Secrets Manager if you need them encrypted.
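
Here’s a stripped-down register-task-definition sketch, just to show where the DB environment variables and the Secrets Manager reference go. The family name, image, ARNs, and endpoints are all placeholders; the 512 CPU / 1024 MiB sizing matches what we describe below.

```bash
# Placeholders throughout: family, image, role ARN, hostnames, secret ARN.
cat > kong-task.json <<'EOF'
{
  "family": "kong-gateway",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "kong",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/kong-custom:latest",
      "portMappings": [{ "containerPort": 8000, "protocol": "tcp" }],
      "environment": [
        { "name": "KONG_DATABASE", "value": "postgres" },
        { "name": "KONG_PG_HOST", "value": "my-rds-endpoint.rds.amazonaws.com" },
        { "name": "KONG_PG_USER", "value": "kong" },
        { "name": "KONG_PG_DATABASE", "value": "kong" },
        { "name": "KONG_PROXY_LISTEN", "value": "0.0.0.0:8000" },
        { "name": "KONG_ADMIN_LISTEN", "value": "127.0.0.1:8001" }
      ],
      "secrets": [
        {
          "name": "KONG_PG_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:kong-pg-password"
        }
      ]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://kong-task.json
```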

We serve ~20k requests per minute on two nodes with 1024 MiB of memory and 512 CPU units each, and we see CPU/memory utilization hovering around 25%.

We set up a temporary ALB to point to the new cluster, and are using that to run automated tests on our endpoints. Once we feel good about those we’ll run one last user import (our source-of-truth for consumer & key info is another db) and make the switch. (We are a public-facing API and get about 10 new signups/day, so it’s not difficult for us to do a ‘hot’ switch. If you have many more users you might have to announce signup downtime.)

The nice thing about upgrading this way is that we don’t have to ‘pre-warm’ our ALB with AWS (it’s hot already because it’s in service) and if something goes wrong with deployment, we can switch back to the previous ECS cluster and db pretty quickly. (This does double the infra costs for the period of the switchover, however.)
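
For what it’s worth, the cutover (and any rollback) is a single listener change, something along these lines with placeholder ARNs:

```bash
# Point the production listener at the new cluster's target group.
# Swap the target-group ARN back to roll back. ARNs are placeholders.
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/aaa/bbb \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/kong-green/ccc
```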

I hope this helps!
