I have installed Kong in hybrid mode in my Kubernetes cluster.
This is the control-plane.yaml:
flavor: helm_simple
metadata: {}
kind: helm
provided: false
disabled: false
version: '0.1'
spec:
  helm:
    namespace: default
    repository: 'https://charts.konghq.com'
    wait: false
    recreate_pods: false
    chart: kong
  values:
    ingressController:
      enabled: false
    image:
      repository: kong/kong-gateway
      tag: 3.7.1.2
    secretVolumes:
      - kong-cluster-cert
      - kong-proxy-cert
    env:
      role: control_plane
      cluster_cert: /etc/secrets/kong-cluster-cert/fullchain.crt
      cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
      database: postgres
      cluster_mtls: pki
      pg_database: postgres
      pg_user: '${postgres.postgres-kong.out.interfaces.reader.username}'
      pg_password: '${postgres.postgres-kong.out.interfaces.writer.password}'
      pg_host: '${postgres.postgres-kong.out.interfaces.writer.host}'
      pg_ssl: 'on'
      password: admin
      cluster_ca_cert: /etc/secrets/kong-cluster-cert/ca-cert.pem
      pg_schema: ''
      admin_gui_url: 'https://kong-manager-dev-ksa-01.example.com'
      admin_gui_api_url: 'https://kong-admin-dev-ksa-01.example.com'
      admin_gui_session_conf: '{"secret":"secret","storage":"postgres","cookie_secure":true}'
    enterprise:
      enabled: true
      license_secret: kong-enterprise-license
      vitals:
        enabled: true
      rbac:
        enabled: true
        admin_gui_auth: basic-auth
        session_conf_secret: kong
    admin:
      enabled: true
      http:
        enabled: true
      type: ClusterIP
      tls:
        enabled: false
    cluster:
      enabled: true
      tls:
        enabled: true
    clustertelemetry:
      enabled: true
      tls:
        enabled: true
    manager:
      enabled: true
      type: ClusterIP
      http:
        enabled: true
      tls:
        enabled: false
    proxy:
      enabled: false
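For context, a rough plain-Helm equivalent of applying this resource would be the sketch below. The release name kong-cp and the values file name are assumptions on my part (kong-cp matches the kong-cp-kong-cluster service name the data-plane config further down points at); control-plane-values.yaml stands for the values block above.

# Sketch only: plain-Helm equivalent of the control-plane resource above.
# Release name "kong-cp" and the values file name are assumptions.
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong-cp kong/kong --namespace default --values control-plane-values.yaml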
This is my data-plane.yaml:
flavor: helm_simple
metadata: {}
kind: helm
provided: false
disabled: false
version: '0.1'
spec:
  helm:
    namespace: default
    repository: 'https://charts.konghq.com'
    wait: false
    recreate_pods: false
    chart: kong
  values:
    ingressController:
      enabled: false
    image:
      repository: kong/kong-gateway
      tag: 3.7.1.2
    secretVolumes:
      - kong-cluster-cert
    env:
      role: data_plane
      database: 'off'
      cluster_control_plane: 'kong-cp-kong-cluster.default.svc.cluster.local:8005'
      cluster_telemetry_endpoint: 'kong-cp-kong-clustertelemetry.default.svc.cluster.local:8006'
      lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/ca-cert.pem
      cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
      cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
      cluster_ca_cert: /etc/secrets/kong-cluster-cert/ca-cert.pem
      cluster_mtls: pki
      cluster_server_name: kong-admin-dev-ksa-01.example.com
      log_level: debug
      proxy_listen: '0.0.0.0:8000, 0.0.0.0:8443 ssl'
      proxy_ssl_cert: /etc/secrets/kong-cluster-cert/tls.crt
      proxy_ssl_cert_key: /etc/secrets/kong-cluster-cert/tls.key
    enterprise:
      enabled: true
      license_secret: kong-enterprise-license
    proxy:
      enabled: true
    admin:
      enabled: false
    manager:
      enabled: false
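The data-plane resource would be applied the same way (release name kong-dp assumed), and the two cluster endpoints it points at should resolve to the control-plane services created by the chart:

# Sketch only: plain-Helm equivalent of the data-plane resource above
# (release name "kong-dp" and the values file name are assumptions).
helm install kong-dp kong/kong --namespace default --values data-plane-values.yaml

# The cluster endpoints referenced in env above should resolve to these services:
kubectl get svc -n default kong-cp-kong-cluster kong-cp-kong-clustertelemetry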
I have uploaded my CA-signed GoDaddy certificate as a Kubernetes secret, and I can see the certificate files at /etc/secrets/kong-cluster-cert inside my pods.
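For reference, the secret and the mount can be checked with commands along these lines (the local file names and the deployment name kong-dp-kong are illustrative; the key names match the paths referenced in the values above):

# Sketch: create a generic secret holding the GoDaddy-issued files
# (key names chosen to match the paths used in the values above).
kubectl create secret generic kong-cluster-cert -n default \
  --from-file=tls.crt --from-file=tls.key \
  --from-file=fullchain.crt --from-file=ca-cert.pem

# Confirm the files are mounted in a data-plane pod
# ("deploy/kong-dp-kong" assumes a release named kong-dp).
kubectl exec -n default deploy/kong-dp-kong -- ls /etc/secrets/kong-cluster-cert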
However, when I hit https://kong-proxy-dev-ksa-01.example.com/service/api-endpoint, Postman shows a self-signed certificate warning alongside the correct API response. This means the proxy is not presenting my CA-signed certificate and is still serving the default self-signed certificate.
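The certificate the proxy actually presents can also be inspected outside Postman with openssl; the subject and issuer show whether the GoDaddy certificate or Kong's default self-signed one is being served (port 443 assumes a load balancer in front of the 8443 listener; use 8443 if hitting the service directly):

# Print the subject, issuer and validity of the certificate served by the proxy.
openssl s_client -connect kong-proxy-dev-ksa-01.example.com:443 \
  -servername kong-proxy-dev-ksa-01.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates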
My Kong Admin API and Kong Manager GUI are working perfectly fine.
Both my Kong control plane and data plane are in the same namespace, "default".
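For completeness, both releases and the cluster link can be checked with something like the following (the label selector is the chart's default as I understand it, /clustering/data-planes is the Kong Gateway 3.x Admin API endpoint for listing connected data planes, and the token value comes from env.password above):

# List the Kong pods and services in the shared "default" namespace.
kubectl get pods,svc -n default -l app.kubernetes.io/name=kong

# Ask the control plane which data planes are connected
# (RBAC is enabled, hence the Kong-Admin-Token header).
curl -s https://kong-admin-dev-ksa-01.example.com/clustering/data-planes \
  -H 'Kong-Admin-Token: admin'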
What am I missing here? Please help!