Best Way to Deploy Kong Manager and Kong API in Kubernetes (Internal Connectivity Issues)

I’m planning to deploy Kong Manager and Kong Admin API in a Kubernetes cluster, but I’m facing internal connectivity issues between Kong Manager and the Kong Admin API.

I’ve followed the architecture shown in the attached diagram. Initially, I kept Kong Manager and the Admin API internal, but Kong Manager could not connect to the Admin API. As a workaround, I exposed both components through two separate AWS ELBs (one for the Kong Manager UI and another for the Admin API), and that setup works externally.
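For context, here is a simplified sketch of how the internal wiring looks; these are not my exact manifests, and names such as the kong namespace, the kong-admin Service, and the image tag are placeholders:

```yaml
# Simplified sketch of the internal wiring (placeholder names, not my exact manifests).
# A ClusterIP Service exposes the Admin API inside the cluster only.
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
  namespace: kong
spec:
  type: ClusterIP
  selector:
    app: kong
  ports:
    - name: admin
      port: 8001
      targetPort: 8001
---
# The Kong Deployment, showing only the settings relevant to the
# Manager <-> Admin API path (database and other settings omitted for brevity).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        app: kong
    spec:
      containers:
        - name: kong
          image: kong:3.6   # placeholder tag
          env:
            # Admin API listener inside the pod.
            - name: KONG_ADMIN_LISTEN
              value: "0.0.0.0:8001"
            # Kong Manager UI is served by Kong itself on this listener.
            - name: KONG_ADMIN_GUI_LISTEN
              value: "0.0.0.0:8002"
            # URL that Kong Manager (which runs in the browser) uses to call the
            # Admin API, so it must be reachable from wherever the UI is opened,
            # not only from inside the cluster. Currently I point it at the
            # cluster-internal DNS name.
            - name: KONG_ADMIN_GUI_API_URL
              value: "http://kong-admin.kong.svc.cluster.local:8001"
```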

Now I also plan to expose the Kong Proxy externally, so I’m considering adding a third ELB for the proxy component.
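A minimal sketch of what that additional load balancer for the proxy would look like as a Kubernetes Service (placeholder names and ports; on EKS a Service of type LoadBalancer provisions an AWS load balancer):

```yaml
# Sketch of the additional LoadBalancer Service for the Kong proxy
# (placeholder names/ports; on EKS this provisions an AWS load balancer).
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
spec:
  type: LoadBalancer
  selector:
    app: kong
  ports:
    - name: proxy
      port: 80
      targetPort: 8000
    - name: proxy-ssl
      port: 443
      targetPort: 8443
```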

What is the best deployment approach?

  1. Create three ELBs: one each for the Kong Manager UI, the Kong Admin API, and the Kong Proxy (this is the setup that currently works)
  2. Create two ELBs: one for Kong Manager + Admin API, and one for the Kong Proxy (see the sketch after this list)
  3. Create two ELBs: one for the Kong Manager UI, and another for the Admin API + Proxy
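
To make option 2 concrete, this is roughly what I have in mind: a single LoadBalancer Service (so one ELB) exposing both the Manager UI and the Admin API as two listeners on the same load balancer. Names and ports below are placeholders:

```yaml
# Sketch of option 2: one LoadBalancer Service (hence one ELB) exposing both the
# Manager UI and the Admin API on different ports of the same Kong pods
# (placeholder names/ports).
apiVersion: v1
kind: Service
metadata:
  name: kong-manager-admin
  namespace: kong
spec:
  type: LoadBalancer
  selector:
    app: kong
  ports:
    - name: manager
      port: 8002
      targetPort: 8002
    - name: admin
      port: 8001
      targetPort: 8001
```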

My main questions:

  • What is the best practice for ELB separation in this setup?
  • Is it recommended to combine any of these components under one ELB?
  • How can I resolve the internal connectivity issue between Kong Manager and the Admin API in Kubernetes?

Current Architecture: (see the attached diagram)

Error:

I get a connection error between Kong Manager and the Admin API internally.

@yasanthae Please check this documentation for help on your topic. Thanks

Thank you very much, Rick. I followed this, but it seems to be for the enterprise version. I’m using the free version.