Kong-proxy and ingress-control

I want to confirm the function details:
ingress-control: watches the Kubernetes API for changes to Ingress and Service resources
kong-proxy: writes those changes to Kong's database
Is that correct?
Also, if my Kubernetes nodes are 8C/32G, how many resources should I allocate to Kong (kong-proxy and ingress-control)?
And what should the env param "KONG_NGINX_WORKER_PROCESSES" be set to?

If I'm reading your description right, then yes, that's correct: the controller watches the Kubernetes API for updates to Ingress, Service, and related resources, and when it sees them it sends requests to the Kong proxy's admin API to create the corresponding Kong configuration. That configuration is then stored in Kong's database (either a separate database such as PostgreSQL or Cassandra, or, in DB-less mode, a flat declarative configuration file that is loaded into memory).
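For reference, the flat file used in DB-less mode is a declarative configuration file, conventionally `kong.yml`. A minimal sketch (the service name, upstream URL, and route path below are placeholders, not anything from your cluster):

```yaml
# kong.yml — DB-less declarative config sketch; names and URLs are illustrative.
_format_version: "1.1"
services:
  - name: example-service
    url: http://example-service.default.svc.cluster.local:80
    routes:
      - name: example-route
        paths:
          - /example
```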

There's no single right number for the worker process count, but you should always set one explicitly instead of relying on the auto default. It won't immediately cause issues, but auto spawns as many workers as there are cores on the node, which usually isn't what you want inside a resource-limited Pod. 2-4 is often a fine starting point, depending on the core count of your nodes.
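As a sketch, pinning the worker count is just an env var on the proxy container in your Deployment spec (container name, image tag, and the value of 2 here are illustrative):

```yaml
# Fragment of a Deployment's pod spec; set KONG_NGINX_WORKER_PROCESSES explicitly.
containers:
  - name: proxy
    image: kong:latest
    env:
      - name: KONG_NGINX_WORKER_PROCESSES
        value: "2"
```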

We don't have firm resource guidelines, but I'd generally recommend allocating 2 cores and several GB of RAM to Kong proxy Pods as a solid baseline. It can run with less (I do so for test environments), but I do see issues with slow startup, especially for Pods with more restricted resources: attempting to allocate only, say, 10m CPU and 512MB RAM has given me trouble in the past, though those were intentionally resource-starved small test environments.
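Translated into a container resources stanza, that baseline might look like the following (the exact requests/limits split is an assumption on my part; tune it from your own measurements):

```yaml
# Fragment of the proxy container spec; values follow the baseline above.
resources:
  requests:
    cpu: "1"
    memory: 2Gi
  limits:
    cpu: "2"
    memory: 4Gi
```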

The proxy is pretty efficient with resources, but the best measure of its performance (and specifically its performance with any particular plugin configuration) is practical load testing: I always recommend constructing a reasonable approximation of your traffic and observing the throughput you get under that test load. That should give you enough data to see whether you need to increase per-Pod allocations, and to roughly estimate when you'd need to scale out the Deployment.
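Any load-testing tool works for this; as a minimal self-contained sketch of the idea (concurrent requests, count successes, compute requests per second), here is a Python version. It spins up a local `http.server` as a stand-in target; in practice you would point `BASE_URL` at your Kong proxy's address instead, and use request counts and concurrency that approximate your real traffic.

```python
# Minimal throughput-measurement sketch. The local HTTPServer below is a
# stand-in for the Kong proxy; BASE_URL, REQUESTS, and CONCURRENCY are
# illustrative values, not recommendations.
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
BASE_URL = f"http://127.0.0.1:{server.server_address[1]}/"

REQUESTS, CONCURRENCY = 200, 20
start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    statuses = list(pool.map(lambda _: urlopen(BASE_URL).status, range(REQUESTS)))
elapsed = time.time() - start

ok = statuses.count(200)
print(f"{ok}/{REQUESTS} succeeded, {ok / elapsed:.0f} req/s")
server.shutdown()
```

Running this against the proxy at increasing concurrency levels, and watching where latency or error rates start to climb, is the data you'd use to decide between raising per-Pod allocations and scaling the Deployment.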

Thank you very much for your reply.

