Kubernetes DB Mode deployment

I have a K8s deployment of Kong with a Postgres DB (installed with Helm). With this deployment mode I have two ways to configure Kong: via the ingress controller, or via the Admin API (some features, like OAuth, need the Admin API). With that said, I am trying to see which of my components communicate with Postgres.

When I call the Admin API, I understand that the request goes to the Kong proxy container, which in turn communicates with Postgres. So my data plane (the Kong proxy) talks to PS (Postgres).
Deployment type 1 - (Admin API)

REST Admin API → Kong Proxy container → PS

Also, when I create a Kong service via a Kubernetes Ingress YAML, my assumption is that the ingress controller connects to the proxy container, and here too the Kong proxy container communicates with PS.

Deployment type 2 - (Ingress Controller)

K8s YAML → Ingress Controller container → Kong Proxy container → PS
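
For example, a minimal sketch of such an Ingress (names and paths are placeholders for my setup; older clusters would use networking.k8s.io/v1beta1):

```yaml
# Minimal illustrative Ingress; the controller translates this into a Kong
# service/route via the Admin API on the proxy container, which stores it in PS.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: kong   # hand this Ingress to the Kong controller
spec:
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echo        # an existing Service in the same namespace
            port:
              number: 80
```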

And finally, I assume that the proxy container also communicates with PS to fetch data and cache it.

Are these the only communications, to and from, the PS datastore (during design time and runtime)? Also, is this one-way communication from the Kong proxy to PS, or does PS communicate with any other container (say, the ingress controller)?

Your understanding is generally correct, yes. The controller is a Kong Admin API client that translates Kubernetes resources (e.g. Ingresses, KongPlugins) into Kong entities (e.g. routes, plugins) and sends them to the Admin API listener on the proxy container. The controller container never communicates with Postgres directly.

The proxy container in turn inserts those entities into Postgres along with change events. Other proxy containers in the cluster continually watch the event stream and, if they see a change event for an entity they currently have in their local cache, pull an updated copy of that entity from Postgres.

Lastly, the controller containers utilize a simple leader election mechanism to ensure that only one controller interacts with the admin API (because of the event system mentioned above, it wouldn’t make sense to have multiple controller instances submit the same admin API request to their local proxy container). The current leader is stored in a ConfigMap. If the current leader is shutting down it will clear the ConfigMap, and the first still-running instance to see that there is no longer a leader will set itself as the new leader.
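
For illustration, a lock ConfigMap of that sort looks roughly like this (this is the standard client-go ConfigMap lock shape; the name, namespace, and holder identity below are examples, and the exact layout depends on the controller version):

```yaml
# Illustrative sketch of a leader-election lock ConfigMap. The annotation
# value is JSON recording the current holder and lease timing; a shutting-down
# leader clears it so another instance can take over.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-ingress-controller-leader   # example name; varies by install
  namespace: kong
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"ingress-kong-6f7b9d8c4-x2kfp","leaseDurationSeconds":30,"acquireTime":"2021-05-21T16:00:00Z","renewTime":"2021-05-21T16:05:00Z","leaderTransitions":2}'
```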


Is there an official YAML file using the Postgres DB in K8s? Or maybe you can share a simple YAML file? Six months ago I implemented the API gateway using Kong 2.0 and ingress controller 0.9.1. However, with Kong 2.3 and ingress controller 1.2 the pods get stuck in “PodInitializing”. Could you share a YAML file using the database?

@WONJAE kubernetes-ingress-controller/all-in-one-postgres.yaml at main · Kong/kubernetes-ingress-controller · GitHub is our stock Kong+KIC+Postgres example.
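
The pieces that make a manifest like that DB-backed are the Kong Postgres environment settings plus a one-time migrations job; a trimmed, illustrative sketch (image tag, Service name, and password are placeholders, and the proxy Deployment carries the same KONG_DATABASE/KONG_PG_* env):

```yaml
# Trimmed sketch of the DB-mode pieces of an all-in-one style manifest:
# point Kong at Postgres via env vars and run `kong migrations bootstrap`
# once before the proxy starts serving.
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-migrations
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: kong-migrations
        image: kong:2.3               # example tag
        command: ["kong", "migrations", "bootstrap"]
        env:
        - name: KONG_DATABASE
          value: postgres
        - name: KONG_PG_HOST
          value: postgres             # your Postgres Service name
        - name: KONG_PG_PASSWORD
          value: kong                 # use a Secret in real deployments
```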

Many thanks. It really helped me a lot. :grinning:

@traines - do we have a similar one for a Cassandra deployment? I am getting the below error while trying to run Kong against a Cassandra instance:

time="2021-05-21T16:37:12Z" level=fatal msg="Cassandra-backed deployments of Kong managed by the ingress controller are no longer supported; you must migrate to a Postgres-backed or DB-less deployment"

We are also looking for a similar Kong ingress controller 1.2 deployment with Cassandra as the datastore. Do you have any YAML files handy, and do you support Cassandra deployments with Kong ingress controller 1.2?

We don’t. By design, we just don’t support Cassandra-backed instances with the controller, hence the startup failure message you see. Our rationale for removing support was:

  • Functionality required by the controller (particularly resource tagging) has different implementations for Postgres and Cassandra, and the Cassandra implementation has known (and essentially un-fixable) performance issues that make it work poorly with the controller.
  • Kubernetes resources in etcd are the source of truth for controller-managed configurations, and etcd provides data persistence similar to Cassandra for many purposes, so there was a reduced use case for Cassandra-backed instances with the controller.

Manifest-wise, managing a Cassandra cluster is considerably more complex than managing a Postgres instance, so while we can provide a basic Postgres StatefulSet that works well enough for most purposes, we couldn’t provide a general-purpose Cassandra example even when we did support it on earlier versions of the controller.
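
For context, the “basic Postgres StatefulSet” is only on the order of this sketch (trimmed and illustrative; a real deployment wants a Secret for credentials and persistent volumes for the data):

```yaml
# Minimal single-replica Postgres StatefulSet sketch. Storage and backup
# planning are deliberately omitted here, which is exactly why a
# general-purpose Cassandra equivalent is not practical to ship.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13        # example tag
        env:
        - name: POSTGRES_USER
          value: kong
        - name: POSTGRES_PASSWORD
          value: kong             # example only; use a Secret
        - name: POSTGRES_DB
          value: kong
        ports:
        - containerPort: 5432
```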

Thank you @traines for the explanation. So if I understand it right, Kong will discontinue supporting Cassandra on K8s, and Postgres will be the only DB used in K8s.
Also, considering Postgres’ limitations with multi-region setups, how do you envision Kong in K8s in a multi-region setup without CP/DP mode but with database mode?

The read-only replica functionality for Postgres may be useful for some region/site-local stuff. While that doesn’t provide protocol-level topology awareness like Cassandra, it’s hopefully sufficient for reducing cross-region reads with a bit of ancillary tooling to populate the appropriate hostname for that region’s Kong instance configuration (if your setup doesn’t handle that inherently via local DNS resolution or similar). It should be possible to set up something that directs config changes to the site with the read/write Postgres instance if desired also.
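
As a rough sketch of that wiring: Kong exposes read-only Postgres connection settings (pg_ro_host and friends) alongside the primary ones, so each region’s proxy env might look like this (hostnames are placeholders):

```yaml
# Excerpt of a proxy container's env for a region-local read replica.
# Writes (config changes) go to the primary; reads hit the local replica.
env:
- name: KONG_DATABASE
  value: postgres
- name: KONG_PG_HOST
  value: pg-primary.us-east.example.com    # read/write primary
- name: KONG_PG_RO_HOST
  value: pg-replica.eu-west.example.com    # this region's read-only replica
```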

The OAuth2 plugin is a bit of a special case because it requires writes to the main datastore from all sites to handle token creation. IMO that’s a shortcoming in the current OAuth2 plugin implementation: while it does use the main datastore for ephemeral data, it arguably shouldn’t. The main datastore isn’t well-suited for that kind of data; Kong’s design expects that the primary use of the main datastore is for persistent configuration. Other plugins handle this more effectively by using Redis for their ephemeral data, and I think that’s a better approach, with its own options for handling site-local routing (unsure how easy it is to do this with stock Redis Cluster, but it looks like the major SaaS offerings provide proprietary site-aware implementations).

The limitation there is that there’s no standard interface in the Kong PDK for Redis, though prior art exists in other plugins (e.g. rate-limiting) and we provide a 1st-party library for it.
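
As a concrete example of that pattern, the rate-limiting plugin can keep its counters in Redis instead of the main datastore; an illustrative KongPlugin (Redis host and limit values are placeholders):

```yaml
# Rate-limiting with ephemeral counters in Redis rather than Postgres.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-redis
plugin: rate-limiting
config:
  minute: 100                                   # example limit
  policy: redis                                 # counters live in Redis
  redis_host: redis.default.svc.cluster.local   # placeholder host
  redis_port: 6379
```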
