Kubernetes DB Mode deployment

I have a K8s deployment of Kong with a Postgres DB (installed with Helm). With this deployment mode I have two ways of configuring Kong: via the ingress controller, or via the Admin API (some things, like OAuth, need the Admin API). With that said, I am trying to work out which of my components communicate with Postgres.

When I call the Admin API, I understand that the request goes to the Kong proxy container, which in turn communicates with Postgres. So my data plane (Kong proxy) talks to PS (Postgres).
Deployment type 1 - (Admin API)

REST Admin API → Kong Proxy container → PS
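
For example, something along these lines (the service name and port-forward are only illustrative; 8001 is Kong's default Admin API port):

```sh
# Reach the Admin API on the proxy container (8001 is Kong's default Admin API
# port); the service name here is illustrative.
kubectl port-forward service/kong-kong-admin 8001:8001 &

# Create a service and a route directly through the Admin API; the proxy
# container then writes these entities to Postgres.
curl -X POST http://localhost:8001/services \
  --data name=example-service \
  --data url=http://httpbin.org
curl -X POST http://localhost:8001/services/example-service/routes \
  --data 'paths[]=/example'
```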

Also, when I create a Kong service via a Kubernetes Ingress YAML, my assumption is that the ingress controller connects to the proxy container, and in this case too the Kong proxy container communicates with PS.

K8s YAML → Ingress Controller container → Kong Proxy container → PS
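
For example, applying an Ingress roughly like this (names are illustrative, and the apiVersion depends on your Kubernetes version):

```sh
# Hypothetical Ingress; the ingress controller container watches it and
# translates it into Kong entities via the Admin API on the proxy container.
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /example
        backend:
          serviceName: example-service
          servicePort: 80
EOF
```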

And finally, I assume that the proxy container also communicates with PS to fetch data and cache it.

Are these the only communications, to and from, the PS datastore (during design time and runtime)? Also, is this one-way communication from the Kong proxy to PS, or does PS communicate with any other container (say, the ingress controller)?

Your understanding is generally correct, yes. The controller is a Kong Admin API client that translates Kubernetes resources (e.g. Ingresses, KongPlugins) into Kong entities (e.g. routes, plugins) and sends them to the Admin API listener on the proxy container. The controller container never communicates with Postgres directly.
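
For example, a KongPlugin resource roughly like the one below (the plugin and values are only illustrative) is picked up by the controller and submitted to that Admin API listener as a plugin entity:

```sh
# Illustrative KongPlugin; the controller converts it into a Kong plugin entity
# and POSTs it to the Admin API on the proxy container, never to Postgres.
kubectl apply -f - <<EOF
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: example-rate-limit
plugin: rate-limiting
config:
  minute: 5
EOF
```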

The proxy container in turn inserts those entities into Postgres along with change events. Other proxy containers in the cluster continually watch the event stream and, if they see a change event for an entity they currently have in their local cache, pull an updated copy of that entity from Postgres.
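
The interval at which each proxy node checks for those change events is a regular kong.conf setting (db_update_frequency, 5 seconds by default), which you can set through the container environment; the deployment name below is only illustrative:

```sh
# Each proxy node polls Postgres for cluster change events on a fixed interval,
# controlled by Kong's db_update_frequency setting (default: 5 seconds).
kubectl set env deployment/kong-kong KONG_DB_UPDATE_FREQUENCY=5
```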

Lastly, the controller containers utilize a simple leader election mechanism to ensure that only one controller interacts with the admin API (because of the event system mentioned above, it wouldn’t make sense to have multiple controller instances submit the same admin API request to their local proxy container). The current leader is stored in a ConfigMap. If the current leader is shutting down it will clear the ConfigMap, and the first still-running instance to see that there is no longer a leader will set itself as the new leader.
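
If you want to see which controller instance currently holds the lock, you can inspect that ConfigMap; the exact name depends on your release and ingress class, so the one below is only illustrative:

```sh
# Inspect the leader-election ConfigMap (name and namespace are illustrative).
kubectl get configmap kong-ingress-controller-leader -n kong -o yaml
# The current leader is typically recorded in an annotation along the lines of
# control-plane.alpha.kubernetes.io/leader, e.g.
#   {"holderIdentity":"<controller pod name>","renewTime":"...","leaderTransitions":1}
```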

