Kong Hybrid mode for Kubernetes

Hey Team,
I am trying to do a sample open-source Helm installation of Kong as a hybrid deployment across two different Kubernetes clusters.

Cluster 1 - Control Plane
Followed the steps for the hybrid installation here: charts/charts/kong at main · Kong/charts - Hybrid mode

After making the necessary configuration changes for the CP in values.yml and running helm install, I can see the Kong pods coming up. I have exposed my service kong-kong-cluster as a LoadBalancer.

Cluster 2 - Data Plane
Copied the contents of cluster.crt and cluster.key from the CP's /tmp directory into the /tmp folder of the DP cluster, and created the Kubernetes secret as before.
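That secret-creation step might look roughly like this (a sketch; the secret name kong-cluster-cert is an assumption inferred from the /etc/secrets/kong-cluster-cert/ paths below, and the namespace is assumed to be kong):

```shell
# Hypothetical sketch: create the shared cluster certificate secret in the
# DP cluster from the cert/key copied over from the CP cluster. The secret
# name must match the secretVolumes/cert paths used in values.yml.
kubectl create namespace kong
kubectl create secret tls kong-cluster-cert \
  --cert=/tmp/cluster.crt \
  --key=/tmp/cluster.key \
  -n kong
```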

Made the necessary configuration changes required for the DP role in values.yml:
disabled the admin API
role: data_plane
database: off
cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/tls.crt
cluster_control_plane: LoadBalancer-IP-CP-kong-cluster-service:8005
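In the chart's values.yml, those settings map onto the env block roughly like this (a sketch based only on the settings listed above; the secretVolumes mount name is an assumption matching the cert paths):

```yaml
# Sketch of the DP-side values.yml, assuming the cluster cert/key are
# mounted from a secret named "kong-cluster-cert".
env:
  role: data_plane
  database: "off"
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_control_plane: LoadBalancer-IP-CP-kong-cluster-service:8005

secretVolumes:
  - kong-cluster-cert

admin:
  enabled: false
```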

On helm install I am getting this error:

Error: no objects visited
helm.go:75: [debug] no objects visited

Am I missing anything here…?
Are there any other configurations required for establishing the connection between the CP and DP across the two clusters?

I also tried enabling enterprise and creating an empty license k8s secret.
Same response as above.

Kindly help me out with the setup…

For a hybrid mode setup, all the data plane requires is connectivity to the control plane instance, along with the cluster certificates to initiate the mTLS connection. However, it looks like you have an issue with the helm install itself. What is the command you are running?

Here is an example values file for DP: charts/minimal-kong-hybrid-data.yaml at main · Kong/charts · GitHub

Hey, thanks for writing back.
I am using the same values.yml file, where the only change is
cluster_control_plane: CHANGEME-control-service.CHANGEME-namespace.svc.cluster.local:8005
changed to
cluster_control_plane: control-planes-ClusterserviceELB:80
My control plane is running in a different cluster.
My helm command is:
helm install kong kong/kong -f values.yml -n kong
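(As a debugging aside: when helm install reports "no objects visited", rendering the chart locally with the same values can show whether any manifests are being produced at all. helm template is a standard Helm 3 command; the release name, chart, and namespace here are taken from the command above.)

```shell
# Render the chart locally with the same values file; if nothing (or only
# empty documents) is rendered, nothing will be applied to the cluster.
helm template kong kong/kong -f values.yml -n kong --debug | head -n 40
```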

Also tried to connect to the Kong Konnect cloud offering by generating certificates…

The runtime is not appearing on the Kong runtimes page.

How do I fix this issue?

Hey Koushik - your first post indicated that you are setting up a self-hosted version of Kong, where both the control and data planes would run on your Kubernetes clusters. However, now you mention that you want to connect your DPs to Konnect Cloud - so which one is it? If the latter, you’ll need to follow the steps to change all the values in the DP to connect to Konnect’s control plane.

I tried both approaches…
The first approach, the self-hosted version:
My control plane is in one k8s cluster, whereas my DP is in another.
The helm installation for the CP comes up, whereas for the DP, when I set cluster_control_plane: control-planes-ClusterserviceELB:443 (the LoadBalancer IP generated by the CP cluster for the cluster service ELB), the helm chart doesn’t come up… it throws "no objects visited".

The second approach:
I followed the same steps as mentioned in the link, but after the helm installation,
I cannot see the runtime on the Kong Konnect runtimes page after clicking Done.

I tried using both a public cluster and a private GCP cluster.

Both the CP cluster and the DP cluster have the same cluster.crt and cluster.key Kubernetes secret, which was generated using OpenSSL in the CP cluster.
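For reference, a shared clustering certificate pair like this can be generated as a single self-signed cert/key, for example (a sketch following Kong's documented shared-mode approach; the CN kong_clustering and the file names are conventional defaults, adjust to your setup):

```shell
# Generate a self-signed EC certificate/key pair to be shared by CP and DP
# for hybrid-mode mTLS clustering. Done in two steps so it works in plain
# sh (no bash process substitution required).
openssl ecparam -name secp384r1 -out ecparam.pem
openssl req -new -x509 -nodes -newkey ec:ecparam.pem \
  -keyout cluster.key -out cluster.crt \
  -days 1095 -subj "/CN=kong_clustering"
```

The same cluster.crt/cluster.key pair is then loaded into a secret in both clusters.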

The port for the clustering service (cluster_control_plane) is 8005, not 443 (with the IP or hostname being that of your clustering service).

I have edited the servicePort to 443 in the values.yaml file…

cluster:
  enabled: true
  # To specify annotations or labels for the cluster service, add them to the respective
  # "annotations" or "labels" dictionaries below.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  labels: {}

  tls:
    enabled: true
    servicePort: 443
    containerPort: 8005
    parameters: []

  type: LoadBalancer

Either way, I am not able to establish connectivity, i.e. the helm chart fails to come up.

Ah, I see. Anyway, this should not be the reason why your DP pod does not come up. It looks like your error is similar to this: https://github.com/helm/helm/issues/7685

Yeah, the issue seems somewhat similar… But in case I change the URL and try connecting to the Konnect cloud, then my helm chart comes up… but the DP cluster is not listed in the Runtimes UI of Konnect cloud.

The self-hosted and Kong Konnect setups both seem to be working fine now…
I seem to have missed the lua_ssl_trusted_certificate setting, and when I tried using image tag 2.3 it worked fine.
Thank you
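(For later readers: the fix described above would land in the DP values roughly as follows. This is a sketch; the image repository line is an assumption based on the "2.3" tag mentioned, and the cert paths are those used earlier in the thread.)

```yaml
# Sketch of the DP values that resolved the thread: the previously missing
# lua_ssl_trusted_certificate, plus an explicit 2.3 image tag.
image:
  repository: kong
  tag: "2.3"

env:
  role: data_plane
  database: "off"
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/tls.crt
```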
