Kong Gateway CP and DP not communicating: TLS issues

Hi all - I am trying to do a fresh install of Kong Gateway 3.9.X following the tutorials Install Kong Gateway | Kong Docs and Deploy Kong Gateway in Hybrid Mode | Kong Docs. I am running into some TLS communication issues between the control plane and data plane, though:

2024/12/24 00:48:08 [warn] 2554#0: *637 [lua] data_plane.lua:167: communicate(): [clustering] connection to control plane wss://kong-cp-kong-cluster.kong.svc.cluster.local:8005/v1/outlet?node_id=3b912159-2a97-4269-8cde-4071c31d59d4&node_hostname=kong-dp-kong-7dc7b97c58-nvgzq&node_version=3.9.0.0 broken: ssl handshake failed: 20: unable to get local issuer certificate (retrying after 6 seconds) [kong-cp-kong-cluster.kong.svc.cluster.local:8005], context: ngx.timer

The cert-manager issuer is configured correctly: it issues TLS certificates, creates the expected Certificate and Secret objects, and handles cert generation through HashiCorp Vault. The same setup works fine for my Postgres and MongoDB deployments, which leads me to believe the problem isn't the TLS certificates themselves but the Kong configuration. The root CA and all certificates in the chain are ECDSA with SHA-256 on the prime256v1 curve. All CAs also carry both the Client Auth and Server Auth extensions (this was required by some nuances of the Postgres Operator, but I hope it doesn't interfere here, since I was able to test the non-pki mode successfully with a CA that has both of those).
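For what it's worth, here is roughly how I have been sanity-checking the mounted clustering certs from inside the DP pod (the pod name is just the one from the log above, and I'm assuming openssl is available in the kong-gateway image):

kubectl -n kong exec -it kong-dp-kong-7dc7b97c58-nvgzq -- sh
# Inspect the leaf cert the data plane presents for clustering
openssl x509 -in /etc/cert-manager/cluster/tls.crt -noout -subject -issuer
# How many certificates ended up in the mounted CA bundle?
grep -c 'BEGIN CERTIFICATE' /etc/cert-manager/cluster/ca.crt
# Try to reproduce the handshake against the control plane cluster endpoint
openssl s_client -connect kong-cp-kong-cluster.kong.svc.cluster.local:8005 \
  -CAfile /etc/cert-manager/cluster/ca.crt \
  -cert /etc/cert-manager/cluster/tls.crt \
  -key /etc/cert-manager/cluster/tls.key </dev/null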

Here is the cp yaml file:


# -----------------------------------------------------------------------------
# Ingress Controller parameters
# -----------------------------------------------------------------------------

# Kong Ingress Controller's primary purpose is to satisfy Ingress resources
# created in k8s. It uses CRDs for more fine grained control over routing and
# for Kong specific configuration.
ingressController:
  enabled: true
  # Specify Kong Ingress Controller configuration via environment variables
  env:
    # The controller disables TLS verification by default because Kong
    # generates self-signed certificates by default. Set this to false once you
    # have installed CA-signed certificates.
    kong_admin_tls_skip_verify: true
    # If using Kong Enterprise with RBAC enabled, uncomment the section below
    # and specify the secret/key containing your admin token.
    # kong_admin_token:
    #   valueFrom:
    #     secretKeyRef:
    #        name: CHANGEME-admin-token-secret
    #        key: CHANGEME-admin-token-key
    publish_service: kong/kong-dp-kong-proxy
    kong_admin_token: kong_admin_password
  
  ingressClass: kong

# Specify Kong's Docker image and repository details here
image:
  repository: kong/kong-gateway
  tag: "3.9.0.0"
   
# Mount the secret created earlier
# secretVolumes:
#   - kong-cp-kong-cluster-cert


# -----------------------------------------------------------------------------
# Kong parameters
# -----------------------------------------------------------------------------

# Specify Kong configuration
# This chart takes all entries defined under `.env` and transforms them into `KONG_*`
# environment variables for Kong containers.
# Their names here should match the names used in https://github.com/Kong/kong/blob/master/kong.conf.default
# See https://docs.konghq.com/latest/configuration also for additional details
# Values here take precedence over values from other sections of values.yaml,
# e.g. setting pg_user here will override the value normally set when postgresql.enabled
# is set below. In general, you should not set values here if they are set elsewhere.
env:
  # This is a control_plane node
  role: control_plane
  # These certificates are used for control plane / data plane communication
  lua_ssl_trusted_certificate:  /etc/cert-manager/cluster/ca.crt
  cluster_mtls: pki
  cluster_cert:  /etc/cert-manager/cluster/tls.crt
  cluster_cert_key:  /etc/cert-manager/cluster/tls.key
  log_level: debug
   
  # Database
  # CHANGE THESE VALUES
  database: postgres
  pg_database: kong
  pg_schema: kong
  pg_user:
    valueFrom:
      secretKeyRef:
        key: username
        name: vso-db-kong
  pg_password:
    valueFrom:
      secretKeyRef:
        key: password
        name: vso-db-kong
  pg_host: postgres.d.zoerml.zlocal
  pg_ssl: "on"
   
  # Kong Manager password
  password: kong_admin_password
  # These should have no trailing slashes; trailing slashes will cause bugs
  # admin_gui_api_url: 'https://10.20.4.130:8444/api'
  # admin_gui_url: 'https://manager.kong.d.zoerml.zlocal'
  # Change the secret and set cookie_secure to true if using a HTTPS endpoint
  # admin_gui_session_conf: '{"secret":"secret","storage":"kong","cookie_secure":false}'
   
# -----------------------------------------------------------------------------
# Kong Enterprise parameters
# -----------------------------------------------------------------------------

# Toggle Kong Enterprise features on or off
# RBAC and SMTP configuration have additional options that must all be set together
# Other settings should be added to the "env" settings below
enterprise:
  enabled: true
  license_secret: kong-enterprise-license
  rbac:
     enabled: false
     admin_gui_auth: basic-auth

# Specify Kong admin API service and listener configuration
admin:
  enabled: true
  # type: LoadBalancer
  # ingressClassName: kong
  http:
    # Enable plaintext HTTP listen for the admin API
    # Disabling this and using a TLS listen only is recommended for most configurations
    enabled: false
    # servicePort: 8001
    # containerPort: 8001
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []
  tls:
    # Enable HTTPS listen for the admin API
    enabled: true
    servicePort: 8444
    containerPort: 8444
    # Set a target port for the TLS port in the admin API service, useful when using TLS
    # termination on an ELB.
    # overrideServiceTargetPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    

    # Specify the CA certificate to use for TLS verification of the Admin API client by:
    # - secretName - the secret must contain a key named "tls.crt" with the PEM-encoded certificate.
    # - caBundle (PEM-encoded certificate string).
    # If both are set, caBundle takes precedence.
    # client:
    #   caBundle: ""
    #   secretName: ""

  # Kong admin ingress settings. Useful if you want to expose the Admin
  # API of Kong outside the k8s cluster.
  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # ingressClassName: kong
    # TLS secret name.
    # tls: kong-admin.example.com-tls
    # Ingress hostname
    # hostname: admin.kong.d.zoerml.zlocal
    # Ingress path.
    path: /
    pathType: Prefix
     
# Specify Kong cluster service and listener configuration
#
# The cluster service *must* use TLS. It does not support the "http" block
# available on other services.
#
# The cluster service cannot be exposed through an Ingress, as it must perform
# TLS client validation directly and is not compatible with TLS-terminating
# proxies. If you need to expose it externally, you must use "type:
# LoadBalancer" and use a TCP-only load balancer (check your Kubernetes
# provider's documentation, as the configuration required for this varies).
cluster:
  enabled: true
  tls:
    enabled: true
    # servicePort: 8005
    # containerPort: 8005
    # parameters: []
   
clustertelemetry:
  enabled: true
  tls:
    enabled: true
    servicePort: 8006
    containerPort: 8006
    parameters: []
   
# Optional features
manager:
  # Enable creating a Kubernetes service for Kong Manager
  enabled: false
  http:
    # Enable plaintext HTTP listen for Kong Manager
    enabled: true
    servicePort: 8002
    containerPort: 8002
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []
  tls:
    # Enable HTTPS listen for Kong Manager
    enabled: false
    servicePort: 8445
    containerPort: 8445
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    
  ingress:
    enabled: true
    hostname: manager.kong.zoerml.zlocal
    path: /
    pathType: Prefix
    ingressClassName: kong

# -----------------------------------------------------------------------------
# Configure cert-manager integration
# -----------------------------------------------------------------------------

certificates:
  enabled: true

  # Set either `issuer` or `clusterIssuer` to the name of the desired cert manager issuer
  # If left blank a built in self-signed issuer will be created and utilized
  issuer: kong-sa-01

  # Set proxy.enabled to true to issue default kong-proxy certificate with cert-manager
  proxy:
    enabled: false
    # Set `issuer` or `clusterIssuer` to name of alternate cert-manager clusterIssuer to override default
    # self-signed issuer.
    issuer: kong-sa-01
    clusterIssuer: ""
    # Use commonName and dnsNames to set the common name and dns alt names which this
    # certificate is valid for. Wildcard records are supported by the included self-signed issuer.
    commonName: "api.d.zoerml.zlocal"
    # Remove the "[]" and uncomment/change the examples to add SANs
    # - "app.example"
    # - "*.apps.example"
    # - "*.kong.example"
    privateKey:
      algorithm: ECDSA
      size: 256
      rotationPolicy: Always

  # Set admin.enabled true to issue kong admin api and manager certificate with cert-manager
  admin:
    enabled: true
    # Set `issuer` or `clusterIssuer` to name of alternate cert-manager clusterIssuer to override default
    # self-signed issuer.
    issuer: kong-sa-01
    clusterIssuer: ""
    # Use commonName and dnsNames to set the common name and dns alt names which this
    # certificate is valid for. Wildcard records are supported by the included self-signed issuer.
    privateKey:
      algorithm: ECDSA
      size: 256
      rotationPolicy: Always

  # Set portal.enabled to true to issue a developer portal certificate with cert-manager
  portal:
    enabled: false
    # Set `issuer` or `clusterIssuer` to name of alternate cert-manager clusterIssuer to override default
    # self-signed issuer.
    issuer: kong-sa-01
    clusterIssuer: ""
    # Use commonName and dnsNames to set the common name and dns alt names which this
    # certificate is valid for. Wildcard records are supported by the included self-signed issuer.
    commonName: "developer.api.zoerml.zlocal"
    # Remove the "{}" and uncomment/change the examples to add SANs
    # - "manager.kong.example"
    privateKey:
      algorithm: ECDSA
      size: 256
      rotationPolicy: Always

  # Set cluster.enabled true to issue kong hybrid mtls certificate with cert-manager
  cluster:
    enabled: true
    # Issuers used by the control and data plane releases must match for this certificate.
    issuer: kong-sa-01
    commonName: "kong_clustering"
    privateKey:
      algorithm: ECDSA
      size: 256
      rotationPolicy: Always


  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    ingressClassName: kong
    # TLS secret name.
    # tls: kong-manager.example.com-tls
    # Ingress hostname
    hostname: manager.kong.d.zoerml.zlocal
    # Map of ingress annotations.
    annotations: {}
    # Ingress path.
    path: /
    # Each path in an Ingress is required to have a corresponding path type. (ImplementationSpecific/Exact/Prefix)
    pathType: Prefix
   
# Specify Kong proxy service configuration
proxy:
  # Enable creating a Kubernetes service for the proxy
  enabled: false

# Enable/disable migration jobs, and set annotations for them
migrations:
  # Enable pre-upgrade migrations (run "kong migrations up")
  preUpgrade: false
  # Enable post-upgrade migrations (run "kong migrations finish")
  postUpgrade: false
  # Annotations to apply to migrations job pods
  # By default, these disable service mesh sidecar injection for Istio and Kuma,
  # as the sidecar containers do not terminate and prevent the jobs from completing
  annotations:
    sidecar.istio.io/inject: false
  # Additional annotations to apply to migration jobs
  # This is helpful in certain non-Helm installation situations such as GitOps
  # where additional control is required around this job creation.
  jobAnnotations: {}
  # Optionally set a backoffLimit. If none is set, Jobs will use the cluster default
  backoffLimit:
  # Optionally set to specify the time-to-live (TTL) for a pod after it has completed its execution before automatic deletion. If left unset, pod lifetime is indefinite.
  ttlSecondsAfterFinished:
  resources: {}
  # Example reasonable setting for "resources":
  # resources:
  #   limits:
  #     cpu: 100m
  #     memory: 256Mi
  #   requests:
  #     cpu: 50m
  #     memory: 128Mi
  ## Optionally specify any extra sidecar containers to be included in the deployment
  ## See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core
  ## Keep in mind these containers should be terminated along with the main
  ## migration containers
  # sidecarContainers:
  #   - name: sidecar
  #     image: sidecar:latest
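In case it matters, this is how I have been checking what cert-manager actually put into the cluster secret on the CP release (the secret name here is assumed from the chart's default naming, as in the commented-out secretVolumes entry above):

kubectl -n kong get secret kong-cp-kong-cluster-cert -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -issuer -ext subjectAltName
# And the CA bundle that gets mounted as ca.crt
kubectl -n kong get secret kong-cp-kong-cluster-cert -o jsonpath='{.data.ca\.crt}' \
  | base64 -d | openssl x509 -noout -subject -issuer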

Here is the dp yaml:


# -----------------------------------------------------------------------------
# Deployment parameters
# -----------------------------------------------------------------------------
image:
  repository: kong/kong-gateway
  tag: "3.9.0.0"
  pullPolicy: IfNotPresent

# -----------------------------------------------------------------------------
# Kong parameters
# -----------------------------------------------------------------------------

# Specify Kong configuration
# This chart takes all entries defined under `.env` and transforms them into `KONG_*`
# environment variables for Kong containers.
# Their names here should match the names used in https://github.com/Kong/kong/blob/master/kong.conf.default
# See https://docs.konghq.com/latest/configuration also for additional details
# Values here take precedence over values from other sections of values.yaml,
# e.g. setting pg_user here will override the value normally set when postgresql.enabled
# is set below. In general, you should not set values here if they are set elsewhere.
env:
    # data_plane nodes do not have a database
    role: data_plane
    database: "off"

    # Tell the data plane how to connect to the control plane
    cluster_control_plane: kong-cp-kong-cluster.kong.svc.cluster.local:8005
    cluster_telemetry_endpoint: kong-cp-kong-clustertelemetry.kong.svc.cluster.local:8006

    # cluster_telemetry_endpoint: ""
    # Configure control plane / data plane authentication
    lua_ssl_trusted_certificate:  /etc/cert-manager/cluster/ca.crt
    cluster_mtls: pki
    cluster_cert:  /etc/cert-manager/cluster/tls.crt
    cluster_cert_key:  /etc/cert-manager/cluster/tls.key
    cluster_ca_cert:  /etc/cert-manager/cluster/ca.crt
    
   
    log_level: debug
    # admin_gui_api_url: 'https://10.20.4.130:8444/api'

# Enterprise functionality
enterprise:
  enabled: true
  license_secret: kong-enterprise-license


# Kong for Kubernetes with Kong Enterprise with Enterprise features enabled and
# exposed via TLS-enabled Ingresses. Before installing:
# * Several settings (search for the string "CHANGEME") require user-provided
#   Secrets. These Secrets must be created before installation.
# * Ingresses reference example "<service>.kong.CHANGEME.example" hostnames. These must
#   be changed to actual hostnames that resolve to your proxy.
# * Ensure that your session configurations create cookies that are usable
#   across your services. The admin session configuration must create cookies
#   that are sent to both the admin API and Kong Manager, and any Dev Portal
#   instances with authentication must create cookies that are sent to both
#   the Portal and Portal API.

# Sections:
# - Deployment parameters
# - Kong parameters
# - Ingress Controller parameters
# - Postgres sub-chart parameters
# - Miscellaneous parameters
# - Kong Enterprise parameters

# Do not use Kong Ingress Controller
ingressController:
  enabled: false


# Override namespace for Kong chart resources. By default, the chart creates resources in the release namespace.
# This may not be desirable when using this chart as a dependency.
# namespace: "example"

# -----------------------------------------------------------------------------
# Configure cert-manager integration
# -----------------------------------------------------------------------------

certificates:
  enabled: true
  issuer: kong-sa-01
  # Set cluster.enabled true to issue kong hybrid mtls certificate with cert-manager
  cluster:
    enabled: true
    # Issuers used by the control and data plane releases must match for this certificate.
    issuer: "kong-sa-01"
    privateKey:
      algorithm: ECDSA
      size: 256
      rotationPolicy: Always
  admin:
    enabled: true
    # Issuers used by the control and data plane releases must match for this certificate.
    issuer: "kong-sa-01"
    privateKey:
      algorithm: ECDSA
      size: 256
      rotationPolicy: Always
  proxy:
    enabled: true
    # Issuers used by the control and data plane releases must match for this certificate.
    issuer: "kong-sa-01"
    privateKey:
      algorithm: ECDSA
      size: 256
      rotationPolicy: Always
  # Set portal.enabled to true to issue a developer portal certificate with cert-manager
  portal:
    enabled: false
    # Set `issuer` or `clusterIssuer` to name of alternate cert-manager clusterIssuer to override default
    # self-signed issuer.
    issuer: kong-sa-01
    clusterIssuer: ""
    # Use commonName and dnsNames to set the common name and dns alt names which this
    # certificate is valid for. Wildcard records are supported by the included self-signed issuer.
    commonName: "developer.api.zoerml.zlocal"
    # Remove the "{}" and uncomment/change the examples to add SANs
    # - "manager.kong.example"
    privateKey:
      algorithm: ECDSA
      size: 256
      rotationPolicy: Always



   
admin:
  enabled: false

manager:
  enabled: false

# Specify Kong proxy service configuration
proxy:
  # Enable creating a Kubernetes service for the proxy
  enabled: true
  type: LoadBalancer
  loadBalancerClass: ""
  # Configure optional firewall rules in the VPC network to only allow certain source ranges.
  loadBalancerSourceRanges: []
  # Override proxy Service name
  nameOverride: ""
  # To specify annotations or labels for the proxy service, add them to the respective
  # "annotations" or "labels" dictionaries below.
  annotations: {}
  # If terminating TLS at the ELB, the following annotations can be used
  # "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "*",
  # "service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": "true",
  # "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:REGION:ACCOUNT:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX",
  # "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "kong-proxy-tls",
  # "service.beta.kubernetes.io/aws-load-balancer-type": "elb"
  labels:
    enable-metrics: "true"

  http:
    # Enable plaintext HTTP listen for the proxy
    enabled: true
    # Set the servicePort: 0 to skip exposing in the service but still
    # let the port open in container to allow https to http mapping for
    # tls terminated at LB.
    servicePort: 80
    containerPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []

  tls:
    # Enable HTTPS listen for the proxy
    enabled: true
    servicePort: 443
    containerPort: 8443
    # Set a target port for the TLS port in proxy service
    # overrideServiceTargetPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    

    # Specify the Service's TLS port's appProtocol. This can be useful when integrating with
    # external load balancers that require the `appProtocol` field to be set (e.g. GCP).
    appProtocol: ""

  # Define stream (TCP) listen
  # To enable, remove "[]", uncomment the section below, and select your desired
  # ports and parameters. Listens are dynamically named after their containerPort,
  # e.g. "stream-9000" for the below.
  # Note: although you can select the protocol here, you cannot set UDP if you
  # use a LoadBalancer Service due to limitations in current Kubernetes versions.
  # To proxy both TCP and UDP with LoadBalancers, you must enable the udpProxy Service
  # in the next section and place all UDP stream listen configuration under it.
  stream: []
    #   # Set the container (internal) and service (external) ports for this listen.
    #   # These values should normally be the same. If your environment requires they
    #   # differ, note that Kong will match routes based on the containerPort only.
    # - containerPort: 9000
    #   servicePort: 9000
    #   protocol: TCP
    #   # Optionally set a static nodePort if the service type is NodePort
    #   # nodePort: 32080
    #   # Additional listen parameters, e.g. "ssl", "reuseport", "backlog=16384"
    #   # "ssl" is required for SNI-based routes. It is not supported on versions <2.0
    #   parameters: []

  # Kong proxy ingress settings.
  # Note: You need this only if you are using another Ingress Controller
  # to expose Kong outside the k8s cluster.
  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    ingressClassName:
    # To specify annotations or labels for the ingress, add them to the respective
    # "annotations" or "labels" dictionaries below.
    annotations: {}
    labels: {}
    # Ingress hostname
    hostname:
    # Ingress path (when used with hostname above).
    path: /
    # Each path in an Ingress is required to have a corresponding path type (when used with hostname above). (ImplementationSpecific/Exact/Prefix)
    pathType: ImplementationSpecific
    # Ingress hosts. Use this instead of or in combination with hostname to specify multiple ingress host configurations
    hosts: []
    # - host: kong-proxy.example.com
    #   paths:
    #   # Ingress path.
    #   - path: /*
    #   # Each path in an Ingress is required to have a corresponding path type. (ImplementationSpecific/Exact/Prefix)
    #     pathType: ImplementationSpecific
    # - host: kong-proxy-other.example.com
    #   paths:
    #   # Ingress path.
    #   - path: /other
    #   # Each path in an Ingress is required to have a corresponding path type. (ImplementationSpecific/Exact/Prefix)
    #     pathType: ImplementationSpecific
    #     backend:
    #       service:
    #         name: kong-other-proxy
    #         port:
    #           number: 80
    #
    # TLS secret(s)
    # tls: kong-proxy.example.com-tls
    # Or if multiple hosts/secrets needs to be configured:
    # tls:
    # - secretName: kong-proxy.example.com-tls
    #   hosts:
    #   - kong-proxy.example.com
    # - secretName: kong-proxy-other.example.com-tls
    #   hosts:
    #   - kong-proxy-other.example.com

  # Optionally specify a static load balancer IP.
  # loadBalancerIP:

Any help or direction would be appreciated :slight_smile:

Hi @dgellman

Try setting the lua_ssl_verify_depth property: Configuration Reference for Kong Gateway | Kong Docs

I set mine to 4 when I have a long clustering cert chain, and that generally fixes this error.

lua_ssl_verify_depth: 4

With Helm:

env:
  lua_ssl_verify_depth: 4

Or as an environment variable:

KONG_LUA_SSL_VERIFY_DEPTH=4
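If you want to confirm the depth you actually need, count the certificates between the clustering leaf and your Vault root; the default lua_ssl_verify_depth of 1 is typically too shallow once there is more than one intermediate CA in the chain. A rough way to check (paths as mounted in your values files above):

# Certificates in the trusted CA bundle (root plus any intermediates)
grep -c 'BEGIN CERTIFICATE' /etc/cert-manager/cluster/ca.crt
# Chain the control plane actually serves on the cluster port
openssl s_client -connect kong-cp-kong-cluster.kong.svc.cluster.local:8005 \
  -showcerts </dev/null 2>/dev/null | grep -c 'BEGIN CERTIFICATE'

If those counts show more than a root and a single intermediate, bump lua_ssl_verify_depth accordingly.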

Omg you are AMAZING!!! ty ty!! :slight_smile: That solved the issue after weeks of trial and error, plus AI driving me up the wall and down incorrect rabbit holes!!
