Kong Helm Chart and External Postgres Database

Gents,

I am experiencing an issue here, and I am sure I am just missing something really small. I am test-driving Kong in my dev Kubernetes cluster. The Kong configuration documentation explains that Postgres and Cassandra are the two supported databases. This chart has a Postgres database as a dependency which can be used to run Kong, and by default the chart runs Kong in DB-less mode using ConfigMaps.

My use case is to run Kong against an external, already-configured Postgres database. The documentation states that one needs to add the database configuration parameters to the env section of values.yaml to achieve that. I have done exactly that, but unfortunately I am not able to get my DB initialized by Kong. The job that is supposed to initialize my DB hangs waiting for the database. Everything works well when I enable the database sub-chart, and also when I run in DB-less mode, but it just hangs when I run against my external database.

Here is my values.yaml file:

# Default values for Kong's Helm Chart.
# Declare variables to be passed into your templates.
#
# Sections:
# - Kong parameters
# - Ingress Controller parameters
# - Postgres sub-chart parameters
# - Miscellaneous parameters
# - Kong Enterprise parameters

# -----------------------------------------------------------------------------
# Kong parameters
# -----------------------------------------------------------------------------

# Specify Kong configurations
# Kong configurations guide https://docs.konghq.com/latest/configuration
# Values here take precedence over values from other sections of values.yaml,
# e.g. setting pg_user here will override the value normally set when postgresql.enabled
# is set below. In general, you should not set values here if they are set elsewhere.
env:
  log_level: "info"
  plugins: "bundled,oidc"
  database: "postgres"
  cassandra_contact_points: ${db_host}
  pg_host: ${db_host}
  pg_port: ${db_port}
  pg_user: ${db_username}
  pg_password: ${db_password}
  pg_database: ${db_name}
  pg_ssl: "off"
  pg_ssl_verify: "off"
  nginx_worker_processes: "1"
  proxy_access_log: /dev/stdout
  admin_access_log: /dev/stdout
  admin_gui_access_log: /dev/stdout
  portal_api_access_log: /dev/stdout
  proxy_error_log: /dev/stderr
  admin_error_log: /dev/stderr
  admin_gui_error_log: /dev/stderr
  portal_api_error_log: /dev/stderr
  prefix: /kong_prefix/

# Specify Kong's Docker image and repository details here
image:
  repository: ${repositoryUrl}/${image}
  tag: ${tag}
  # kong-enterprise-k8s image (Kong OSS + Enterprise plugins)
  # repository: kong-docker-kong-enterprise-k8s.bintray.io/kong-enterprise-k8s
  # tag: "2.0.2.0-alpine"
  # kong-enterprise image
  # repository: kong-docker-kong-enterprise-edition-docker.bintray.io/kong-enterprise-edition
  # tag: "1.5.0.0-alpine"

  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## If using the official Kong Enterprise registry above, you MUST provide a secret.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  pullSecrets:
    - ${imagePullSecrets}

# Specify Kong admin API service and listener configuration
admin:
  # Enable creating a Kubernetes service for the admin API
  # Disabling this is recommended for most ingress controller configurations
  # Enterprise users that wish to use Kong Manager with the controller should enable this
  enabled: true
  type: ClusterIP
  # If you want to specify annotations for the admin service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

  http:
    # Enable plaintext HTTP listen for the admin API
    # Disabling this and using a TLS listen only is recommended for most configuration
    enabled: true
    servicePort: 8001
    containerPort: 8001
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []

  tls:
    # Enable HTTPS listen for the admin API
    enabled: true
    servicePort: 8444
    containerPort: 8444
    # Set a target port for the TLS port in the admin API service, useful when using TLS
    # termination on an ELB.
    # overrideServiceTargetPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
    - http2

  # Kong admin ingress settings. Useful if you want to expose the Admin
  # API of Kong outside the k8s cluster.
  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # TLS secret name.
    tls: kadmin.${domain}.tls
    # Ingress hostname
    hostname: kadmin.${domain}
    # Map of ingress annotations.
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/affinity: cookie
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/whitelist-source-range: ${source_range}
      cert-manager.io/cluster-issuer: ${issuer}

    # Ingress path.
    path: /

# Specify Kong status listener configuration
# This listen is internal-only. It cannot be exposed through a service or ingress.
status:
  http:
    # Enable plaintext HTTP listen for the status listen
    enabled: true
    containerPort: 8100

  tls:
    # Enable HTTPS listen for the status listen
    # Kong does not currently support HTTPS status listens, so this should remain false
    enabled: false
    containerPort: 8543

# Specify Kong proxy service and listener configuration
proxy:
  # Enable creating a Kubernetes service for the proxy
  enabled: true
  type: ClusterIP
  # If you want to specify annotations for the proxy service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

  http:
    # Enable plaintext HTTP listen for the proxy
    enabled: true
    servicePort: 80
    containerPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []

  tls:
    # Enable HTTPS listen for the proxy
    enabled: true
    servicePort: 443
    containerPort: 8443
    # Set a target port for the TLS port in proxy service, useful when using TLS
    # termination on an ELB.
    # overrideServiceTargetPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
    - http2

  # Define stream (TCP) listen
  # To enable, remove "{}", uncomment the section below, and select your desired
  # ports and parameters. Listens are dynamically named after their servicePort,
  # e.g. "stream-9000" for the below.
  stream: {}
    #   # Set the container (internal) and service (external) ports for this listen.
    #   # These values should normally be the same. If your environment requires they
    #   # differ, note that Kong will match routes based on the containerPort only.
    # - containerPort: 9000
    #   servicePort: 9000
    #   # Optionally set a static nodePort if the service type is NodePort
    #   # nodePort: 32080
    #   # Additional listen parameters, e.g. "ssl", "reuseport", "backlog=16384"
    #   # "ssl" is required for SNI-based routes. It is not supported on versions <2.0
    #   parameters: []

  # Kong proxy ingress settings.
  # Note: You need this only if you are using another Ingress Controller
  # to expose Kong outside the k8s cluster.
  ingress:
    enabled: false
    hosts:
      - api.${domain}

    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/affinity: cookie
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/whitelist-source-range: ${source_range}
      cert-manager.io/cluster-issuer: ${issuer}

    tls:
      - secretName: api.${domain}.tls
        hosts:
          - api.${domain}

    # Ingress path.
    path: /

  externalIPs: []

# Custom Kong plugins can be loaded into Kong by mounting the plugin code
# into the file-system of Kong container.
# The plugin code should be present in ConfigMap or Secret inside the same
# namespace as Kong is being installed.
# The `name` property refers to the name of the ConfigMap or Secret
# itself, while the pluginName refers to the name of the plugin as it appears
# in Kong.
# Subdirectories (which are optional) require separate ConfigMaps/Secrets.
# "path" indicates their directory under the main plugin directory: the example
# below will mount the contents of kong-plugin-rewriter-migrations at "/opt/kong/rewriter/migrations".
plugins: {}
  # configMaps:
  # - pluginName: rewriter
  #   name: kong-plugin-rewriter
  #   subdirectories:
  #   - name: kong-plugin-rewriter-migrations
  #     path: migrations
  # secrets:
  # - pluginName: rewriter
  #   name: kong-plugin-rewriter
# Inject specified secrets as a volume in Kong Container at path /etc/secrets/{secret-name}/
# This can be used to override default SSL certificates.
# Be aware that the secret name will be used verbatim, and that certain types
# of punctuation (e.g. `.`) can cause issues.
# Example configuration
# secretVolumes:
# - kong-proxy-tls
# - kong-admin-tls
secretVolumes: []

# Enable/disable migration jobs, and set annotations for them
migrations:
  # Enable pre-upgrade migrations (run "kong migrations up")
  preUpgrade: true
  # Enable post-upgrade migrations (run "kong migrations finish")
  postUpgrade: true
  # Annotations to apply to migrations jobs
  # By default, these disable service mesh sidecar injection for Istio and Kuma,
  # as the sidecar containers do not terminate and prevent the jobs from completing
  annotations:
    sidecar.istio.io/inject: "false"
    kuma.io/sidecar-injection: "disabled"

# Kong's configuration for DB-less mode
# Note: Use this section only if you are deploying Kong in DB-less mode
# and not as an Ingress Controller.
dblessConfig:
  # Either Kong's configuration is managed from an existing ConfigMap (with Key: kong.yml)
  configMap: ""
  # Or the configuration is passed in full-text below
  config:
    _format_version: "1.1"
    services:
      # Example configuration
      # - name: example.com
      #   url: http://example.com
      #   routes:
      #   - name: example
      #     paths:
      #     - "/example"

# -----------------------------------------------------------------------------
# Ingress Controller parameters
# -----------------------------------------------------------------------------

# Kong Ingress Controller's primary purpose is to satisfy Ingress resources
# created in k8s.  It uses CRDs for more fine grained control over routing and
# for Kong specific configuration.
ingressController:
  enabled: false
  image:
    repository: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller
    tag: 0.8.0
  args: []

  # Specify Kong Ingress Controller configuration via environment variables
  env:
    # The controller disables TLS verification by default because Kong
    # generates self-signed certificates by default. Set this to false once you
    # have installed CA-signed certificates.
    kong_admin_tls_skip_verify: true
    # If using Kong Enterprise with RBAC enabled, uncomment the section below
    # and specify the secret/key containing your admin token.
    # kong_admin_token:
    #   valueFrom:
    #     secretKeyRef:
    #        name: CHANGEME-admin-token-secret
    #        key: CHANGEME-admin-token-key

  admissionWebhook:
    enabled: false
    failurePolicy: Fail
    port: 8080

  ingressClass: kong

  rbac:
    # Specifies whether RBAC resources should be created
    create: true

  serviceAccount:
    # Specifies whether a ServiceAccount should be created
    create: true
    # The name of the ServiceAccount to use.
    # If not set and create is true, a name is generated using the fullname template
    name:
    # The annotations for service account
    annotations: {}

  installCRDs: true

  # general properties
  livenessProbe:
    httpGet:
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 5
    timeoutSeconds: 5
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 3
  readinessProbe:
    httpGet:
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 5
    timeoutSeconds: 5
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 3
  resources: {}

# -----------------------------------------------------------------------------
# Postgres sub-chart parameters
# -----------------------------------------------------------------------------

# Kong can run without a database or use either Postgres or Cassandra
# as a backend datastore for its configuration.
# By default, this chart installs Kong without a database.

# If you would like to use a database, there are two options:
# - (recommended) Deploy and maintain a database and pass the connection
#   details to Kong via the `env` section.
# - You can use the below `postgresql` sub-chart to deploy a database
#   along-with Kong as part of a single Helm release.

# PostgreSQL chart documentation:
# https://github.com/helm/charts/blob/master/stable/postgresql/README.md

postgresql:
  enabled: false
  # postgresqlUsername: kong
  # postgresqlDatabase: kong
  # service:
  #   port: 5432

# -----------------------------------------------------------------------------
# Miscellaneous parameters
# -----------------------------------------------------------------------------

waitImage:
  repository: busybox
  tag: latest
  pullPolicy: IfNotPresent

# update strategy
updateStrategy: {}
  # type: RollingUpdate
  # rollingUpdate:
  #   maxSurge: "100%"
  #   maxUnavailable: "0%"

# If you want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
resources: {}
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

# readinessProbe for Kong pods
# If using Kong Enterprise with RBAC, you must add a Kong-Admin-Token header
readinessProbe:
  httpGet:
    path: "/status"
    port: metrics
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3

# livenessProbe for Kong pods
livenessProbe:
  httpGet:
    path: "/status"
    port: metrics
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

# Annotation to be added to Kong pods
podAnnotations: {}

# Kong pod count
replicaCount: 1

# Annotations to be added to Kong deployment
deploymentAnnotations:
  kuma.io/gateway: enabled
  traffic.sidecar.istio.io/includeInboundPorts: ""

# Enable autoscaling using HorizontalPodAutoscaler
autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 5
  ## targetCPUUtilizationPercentage only used if the cluster does not support autoscaling/v2beta
  targetCPUUtilizationPercentage:
  ## Otherwise for clusters that do support autoscaling/v2beta, use metrics
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80

# Kong Pod Disruption Budget
podDisruptionBudget:
  enabled: false
  maxUnavailable: "50%"

podSecurityPolicy:
  enabled: false
  spec:
    privileged: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    runAsGroup:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - 'configMap'
      - 'secret'
      - 'emptyDir'
    allowPrivilegeEscalation: false
    hostNetwork: false
    hostIPC: false
    hostPID: false
    # Make the root filesystem read-only. This is not compatible with Kong Enterprise <1.5.
    # If you use Kong Enterprise <1.5, this must be set to false.
    readOnlyRootFilesystem: true


priorityClassName: ""

# securityContext for Kong pods.
securityContext:
  runAsUser: 1000

serviceMonitor:
  # Specifies whether ServiceMonitor for Prometheus operator should be created
  enabled: false
  # interval: 10s
  # Specifies namespace, where ServiceMonitor should be installed
  # namespace: monitoring
  # labels:
  #   foo: bar

# -----------------------------------------------------------------------------
# Kong Enterprise parameters
# -----------------------------------------------------------------------------

# Toggle Kong Enterprise features on or off
# RBAC and SMTP configuration have additional options that must all be set together
# Other settings should be added to the "env" settings below
enterprise:
  enabled: false
  # Kong Enterprise license secret name
  # This secret must contain a single 'license' key, containing your base64-encoded license data
  # The license secret is required for all Kong Enterprise deployments
  license_secret: you-must-create-a-kong-license-secret
  vitals:
    enabled: true
  portal:
    enabled: false
  rbac:
    enabled: false
    admin_gui_auth: basic-auth
    # If RBAC is enabled, this Secret must contain an admin_gui_session_conf key
    # The key value must be a secret configuration, following the example at
    # https://docs.konghq.com/enterprise/latest/kong-manager/authentication/sessions
    session_conf_secret: you-must-create-an-rbac-session-conf-secret
    # If admin_gui_auth is not set to basic-auth, provide a secret name which
    # has an admin_gui_auth_conf key containing the plugin config JSON
    admin_gui_auth_conf_secret: you-must-create-an-admin-gui-auth-conf-secret
  # For configuring emails and SMTP, please read through:
  # https://docs.konghq.com/enterprise/latest/developer-portal/configuration/smtp
  # https://docs.konghq.com/enterprise/latest/kong-manager/networking/email
  smtp:
    enabled: false
    portal_emails_from: none@example.com
    portal_emails_reply_to: none@example.com
    admin_emails_from: none@example.com
    admin_emails_reply_to: none@example.com
    smtp_admin_emails: none@example.com
    smtp_host: smtp.example.com
    smtp_port: 587
    smtp_starttls: true
    auth:
      # If your SMTP server does not require authentication, this section can
      # be left as-is. If smtp_username is set to anything other than an empty
      # string, you must create a Secret with an smtp_password key containing
      # your SMTP password and specify its name here.
      smtp_username: ''  # e.g. postmaster@example.com
      smtp_password_secret: you-must-create-an-smtp-password

manager:
  # Enable creating a Kubernetes service for Kong Manager
  enabled: false
  type: NodePort
  # If you want to specify annotations for the Manager service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

  http:
    # Enable plaintext HTTP listen for Kong Manager
    enabled: true
    servicePort: 8002
    containerPort: 8002
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []

  tls:
    # Enable HTTPS listen for Kong Manager
    enabled: true
    servicePort: 8445
    containerPort: 8445
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
    - http2

  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # TLS secret name.
    # tls: kong-proxy.example.com-tls
    # Ingress hostname
    hostname:
    # Map of ingress annotations.
    annotations: {}
    # Ingress path.
    path: /

  externalIPs: []

portal:
  # Enable creating a Kubernetes service for the Developer Portal
  enabled: false
  type: NodePort
  # If you want to specify annotations for the Portal service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

  http:
    # Enable plaintext HTTP listen for the Developer Portal
    enabled: true
    servicePort: 8003
    containerPort: 8003
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []

  tls:
    # Enable HTTPS listen for the Developer Portal
    enabled: true
    servicePort: 8446
    containerPort: 8446
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
    - http2

  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # TLS secret name.
    # tls: kong-proxy.example.com-tls
    # Ingress hostname
    hostname:
    # Map of ingress annotations.
    annotations: {}
    # Ingress path.
    path: /

  externalIPs: []

portalapi:
  # Enable creating a Kubernetes service for the Developer Portal API
  enabled: false
  type: NodePort
  # If you want to specify annotations for the Portal API service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

  http:
    # Enable plaintext HTTP listen for the Developer Portal API
    enabled: true
    servicePort: 8004
    containerPort: 8004
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []

  tls:
    # Enable HTTPS listen for the Developer Portal API
    enabled: true
    servicePort: 8447
    containerPort: 8447
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
    - http2

  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # TLS secret name.
    # tls: kong-proxy.example.com-tls
    # Ingress hostname
    hostname:
    # Map of ingress annotations.
    annotations: {}
    # Ingress path.
    path: /

  externalIPs: []

Can you provide logs from the init-migrations pod?

Offhand, the Postgres chart sets up the user and database automatically; the migration job alone will not. Double-check that you’ve created your user and database, and granted your user permissions on it.
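
For example, something along these lines (a minimal sketch; the host, names, and password are placeholders for whatever your environment actually uses):

psql -h YOUR_DB_HOST -U postgres <<'EOF'
CREATE USER kong WITH PASSWORD 'changeme';
CREATE DATABASE kong OWNER kong;
GRANT ALL PRIVILEGES ON DATABASE kong TO kong;
EOF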

Honestly, there is not much logging at all, and that has been the challenge. It is like shooting in the dark. This is all I see:

Error from server (BadRequest): container "kong-migrations" in pod "kong-kong-init-migrations-m5grv" is waiting to start: PodInitializing

That is correct; a user and database are already created, and that user is the owner of the database Kong is supposed to use.

That job runs a basic init container to confirm it can establish a connection:

kubectl logs JOB_POD -c wait-for-postgres should get you its logs. Check kubectl describe pod JOB_POD for the initContainer’s command as well; it may not have rendered correctly, though based on the values.yaml it should have.

Here is what kubectl describe pod POD_NAME gives me; there seems to be no error there.

Name:           kong-kong-init-migrations-dbblz
Namespace:      gateway
Priority:       0
Node:           worker3/192.168.1.23
Start Time:     Mon, 20 Apr 2020 16:48:16 -0400
Labels:         app.kubernetes.io/component=init-migrations
                app.kubernetes.io/instance=kong
                app.kubernetes.io/managed-by=Tiller
                app.kubernetes.io/name=kong
                app.kubernetes.io/version=2
                controller-uid=f01223cc-362c-4308-b1e3-48333ffd83ee
                helm.sh/chart=kong-1.5.0
                job-name=kong-kong-init-migrations
Annotations:    kuma.io/sidecar-injection: disabled
                sidecar.istio.io/inject: false
Status:         Pending
IP:             10.233.116.162
IPs:            <none>
Controlled By:  Job/kong-kong-init-migrations
Init Containers:
  wait-for-postgres:
    Container ID:  docker://b1666a1e088b405239448b02861cd7c2d829a8840bdb1d691243ba2ada04e13d
    Image:         busybox:latest
    Image ID:      docker-pullable://busybox@sha256:89b54451a47954c0422d873d438509dae87d478f1cb5d67fb130072f67ca5d25
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      set -u; until nc -zv $KONG_PG_HOST $KONG_PG_PORT -w1; do echo "waiting for db - trying ${KONG_PG_HOST}:${KONG_PG_PORT}"; sleep 1; done
    State:          Running
      Started:      Mon, 20 Apr 2020 16:48:17 -0400
    Ready:          False
    Restart Count:  0
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            0.0.0.0:8001, 0.0.0.0:8444 http2 ssl
      KONG_DATABASE:                postgres
      KONG_LOG_LEVEL:               info
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_HTTP_INCLUDE:      /kong/servers.conf
      KONG_NGINX_WORKER_PROCESSES:  1
      KONG_PG_DATABASE:             kong
      KONG_PG_HOST:                 postgres-postgresql.storage.svc
      KONG_PG_PASSWORD:             ********
      KONG_PG_PORT:                 5432
      KONG_PG_SSL:                  off
      KONG_PG_SSL_VERIFY:           off
      KONG_PG_USER:                 kong
      KONG_PLUGINS:                 bundled,oidc
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_NGINX_DAEMON:            off
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9lkh (ro)
Containers:
  kong-migrations:
    Container ID:
    Image:         docker.pkg.github.com/bsakweson/dockerhub/kong:2.0.3-oidc
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      kong migrations bootstrap
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            0.0.0.0:8001, 0.0.0.0:8444 http2 ssl
      KONG_DATABASE:                postgres
      KONG_LOG_LEVEL:               info
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_HTTP_INCLUDE:      /kong/servers.conf
      KONG_NGINX_WORKER_PROCESSES:  1
      KONG_PG_DATABASE:             kong
      KONG_PG_HOST:                 postgres-postgresql.storage.svc
      KONG_PG_PASSWORD:             ********
      KONG_PG_PORT:                 5432
      KONG_PG_SSL:                  off
      KONG_PG_SSL_VERIFY:           off
      KONG_PG_USER:                 kong
      KONG_PLUGINS:                 bundled,oidc
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_NGINX_DAEMON:            off
    Mounts:
      /kong from custom-nginx-template-volume (rw)
      /kong_prefix/ from kong-kong-prefix-dir (rw)
      /tmp from kong-kong-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9lkh (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kong-kong-prefix-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kong-kong-tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  custom-nginx-template-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kong-kong-default-custom-server-blocks
    Optional:  false
  default-token-z9lkh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z9lkh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m3s  default-scheduler  Successfully assigned gateway/kong-kong-init-migrations-dbblz to worker3
  Normal  Pulled     3m2s  kubelet, worker3   Container image "busybox:latest" already present on machine
  Normal  Created    3m2s  kubelet, worker3   Created container wait-for-postgres
  Normal  Started    3m2s  kubelet, worker3   Started container wait-for-postgres

I cannot get logs for a container if the pod is not initialized properly.

ping @traines …

I can honestly say something just does not look right here. For what it is worth, I am using version 1.5.0 of the Helm chart. I also noticed that the batch job command uses [ "/bin/sh", "-c", "kong migrations up" ], whereas the Docker documentation mandates [ "/bin/sh", "-c", "kong migrations bootstrap" ] except for Kong versions < 0.15. Is that an oversight?
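
For reference, my understanding of the migration commands on Kong >= 0.15:

kong migrations bootstrap   # initialize a brand-new, empty datastore
kong migrations up          # apply any pending migrations during an upgrade
kong migrations finish      # complete an upgrade once all nodes run the new version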

Can someone please share with me a values.yaml file that works for chart version 1.5.0? This chart uses Kong version 2.0.3. @traines. Thanks in advance.

You can get logs for initContainers; it’s just that the logs command won’t do it automatically. The container has to be specified explicitly:

kubectl logs JOB_POD -c wait-for-postgres

Can you provide output from that?

I provided this a long time ago. See the reference. @traines

ping @traines …

Can you show the complete command you are running? Again, what you’ve provided looks like what Kubernetes will return if you attempt to retrieve logs from an initializing pod without specifying the container. The command should look like:

kubectl logs JOB_POD -c wait-for-postgres

The -c wait-for-postgres is critical; you will not get useful information without it.

kubectl logs kong-kong-init-migrations-q64kf -c wait-for-postgres

waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432
waiting for db - trying postgres-postgresql.storage.svc:5432

Once more, can I have a sample external PostgreSQL configuration that works? It looks as if the database connection string is not being constructed properly from the environment variables.

ping @traines


Can I suggest that, instead of this:

# If you would like to use a database, there are two options:
# - (recommended) Deploy and maintain a database and pass the connection
#   details to Kong via the `env` section.
# - You can use the below `postgresql` sub-chart to deploy a database
#   along-with Kong as part of a single Helm release.

we show an example configuration for an external database? I think the actual use case of this chart will be using an already-configured Postgres/Cassandra database rather than the dependent Postgres chart. I asked for a sample section of this configuration several days ago. I am almost at the point of writing my own chart for this, but that would be duplication. @hbagdi @traines
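
Even something minimal in the comments would help, e.g. (hypothetical values):

env:
  database: "postgres"
  pg_host: my-postgres.example.com
  pg_port: "5432"
  pg_user: kong
  pg_password: my-secret-password
  pg_database: kong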


Does the service hostname and port not match what you’d expect?

waiting for db - trying postgres-postgresql.storage.svc:5432

That’s not a Postgres connection string, just the hostname and port. The init container is a very basic test to confirm that it can resolve the address and establish a TCP connection. You can mimic it by running a pod directly:

$ kubectl run -it --restart=Never --rm --image busybox:latest test           
If you don't see a command prompt, try pressing enter.                                         

/ # nc -zv -w1 example-postgresql.default.svc:5432
example-postgresql.default.svc:5432 (10.19.251.158:5432) open
/ # nc -zv -w1 not-listening.default.svc:5432
nc: not-listening.default.svc:5432 (10.19.249.30:5432): Connection timed out
/ # nc -zv -w1 doesntexist.default.svc:5432
nc: bad address 'doesntexist.default.svc:5432'

Those show what you’ll get if the connection succeeds, if the connection fails, and when DNS resolution fails.

The timeout we use is rather aggressive; you may want to test with -w10 to see if that makes any difference. However, that shouldn’t be a factor unless the network quality is quite poor, which I wouldn’t expect intra-cluster.
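
For example, mirroring the init container’s loop with the longer timeout (host and port taken from your logs):

until nc -zv -w10 postgres-postgresql.storage.svc 5432; do echo "waiting for db"; sleep 1; done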

We don’t provide an example configuration because there isn’t any single valid one: the correct configuration is wholly dependent on your particular database setup, and you will need to review the options at https://docs.konghq.com/2.0.x/configuration/ to see what you need.

It is exactly the same; I have verified that, and it works for the other services I use on this cluster. When I add .cluster.local to it I get this:

postgres-postgresql.storage.svc.cluster.local (10.x.x.x:5432) open,

However, the connection is still not established. That is the last line in the logs; the pod restarts after that. @traines

How exactly does it restart? Do you see that line repeated? It should loop until success, exit the init container once that open line appears, and then proceed with the main container, e.g.:

kubectl logs example-kong-init-migrations-x7886 -c wait-for-postgres 
nc: example-postgresql (10.19.245.143:5432): Connection timed out
waiting for db - trying example-postgresql:5432
nc: example-postgresql (10.19.245.143:5432): Connection timed out
waiting for db - trying example-postgresql:5432
example-postgresql (10.19.245.143:5432) open

The open line should never appear more than once; you should see that the init container stops after it appears, and that you can then run kubectl logs PODNAME -c kong-migrations to see the migrations’ progress (or any failures beyond a basic connection failure).

If the pod is in fact restarting, do you see anything in the kubectl describe or events output for the pod or job? Kubernetes should log any external reasons for a pod restart, although we don’t define a deadline or other reasons for killing the pod. I suspect what you’re running into now is that the main migrations container is starting and exiting unsuccessfully for some other reason, e.g. bad auth credentials or missing database permissions.
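
To check that directly, you can test the credentials themselves (not just the TCP connection) from a throwaway Postgres client pod, e.g. (a sketch; substitute your actual password):

kubectl run -it --restart=Never --rm --image postgres:12 psql-test -- psql "postgresql://kong:PASSWORD@postgres-postgresql.storage.svc:5432/kong" -c 'SELECT 1;'

and review recent events for the job’s pods:

kubectl get events -n gateway --sort-by=.lastTimestamp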

I’m getting the same problem migrating Kong on ECS to Kong on EKS, but I cannot specify an existing database on RDS. Have you been able to resolve it?