1.3rc1 db-less errors in log

I installed 1.3rc1 in a db-less environment. As soon as Kong starts, the following messages continually print in the log. Apologies if this is not the right forum for discussing RC candidates.

2019/07/31 16:17:22 [error] 38#0: *651567 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/runloop/balancer.lua:249: attempt to index local 'healthchecker' (a nil value)
stack traceback:
coroutine 0:
        /usr/local/share/lua/5.1/kong/runloop/balancer.lua: in function 'callback'
        /usr/local/share/lua/5.1/resty/dns/balancer/base.lua:1270: in function </usr/local/share/lua/5.1/resty/dns/balancer/base.lua:1269>, context: ngx.timer
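
From the traceback, it looks like a DNS/timer event is firing before the balancer's healthchecker has been created. A minimal sketch of the pattern I suspect is at play (function and field names here are guesses, not Kong's actual code; the real logic lives around kong/runloop/balancer.lua:249):

-- Hypothetical sketch only: a target event arrives on an ngx.timer before
-- a healthchecker has been attached to the balancer, so `healthchecker`
-- is still nil and indexing it aborts the timer thread as in the log.
local function on_target_event(balancer, action, ip, port, hostname)
  local healthchecker = balancer.healthchecker  -- nil during db-less startup

  if not healthchecker then
    -- the defensive guard one would expect; without it the thread crashes
    ngx.log(ngx.WARN, "no healthchecker yet for this balancer, dropping '",
            action, "' event for ", hostname or ip, ":", port)
    return
  end

  -- add_target/remove_target are real lua-resty-healthcheck methods
  if action == "added" then
    healthchecker:add_target(ip, port, hostname)
  elseif action == "removed" then
    healthchecker:remove_target(ip, port)
  end
end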

Hello, could you share a bit more about your environment? Perhaps an anonymized version of your db-less config?

Hoping this is what you are looking for …

Config generated with --dry-run, with some redactions:

[debug] Created tunnel using local port: '54576'

[debug] SERVER: "127.0.0.1:54576"

[debug] Original chart version: "0.14.1"
[debug] CHART PATH: /home/bob/git/kong/helm/hpa

NAME:   kong-bob
REVISION: 1
RELEASED: Wed Jul 31 15:29:30 2019
CHART: hpa-1.0.0
USER-SUPPLIED VALUES:
controller:
  autoscaling:
    enabled: true
    iata: sbox
    maxReplicas: 10
    minReplicas: 3
    targetCPUUtilizationPercentage: 70
fullnameOverride: kong-hpa
ingressclass: kong-sbox
kong:
  admin:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kong-admin.sandbox.************
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    externalTrafficPolicy: Local
    type: ClusterIP
  env:
    ac: bos
    database: "off"
    deploy: dev
    pg_password: ****************
    proxy_listen: 0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
    real_ip_header: proxy_protocol
    vitals: true
  image:
    pullPolicy: Always
    repository: *****************
    tag: e27aa0b
  ingressController:
    enabled: true
    ingressClass: kong-sbox
    installCRDs: true
    replicaCount: 3
    resources:
      limits:
        cpu: 4000m
        memory: 8Gi
      requests:
        cpu: 1000m
        memory: 4Gi
  nodeSelector:
    environment: dmz
  podAnnotations:
    ad.datadoghq.com/admin-api.logs: |
      [
        {
          "source": "kong",
          "service": "admin-api",
          "sourcecategory": "kubernetes"
        }
      ]
    ad.datadoghq.com/ingress-controller.logs: |
      [
        {
          "source": "kong",
          "service": "ingress-controller",
          "sourcecategory": "kubernetes"
        }
      ]
    ad.datadoghq.com/kong.check_names: |
      [
        "kong"
      ]
    ad.datadoghq.com/kong.init_configs: |
      [
        {
        }
      ]
    ad.datadoghq.com/kong.instances: |
      [
        {
           "kong_status_url": "https://%%host%%:8444/status/",
           "ssl_validation": false
        }
      ]
    ad.datadoghq.com/kong.logs: |
      [
        {
          "source": "kong",
          "service": "kong",
          "sourcecategory": "kubernetes"
        }
      ]
    ad.datadoghq.com/kong.tags: |
      [
        {
          "ingress": "kong-sbox",
          "namespace": "sandbox"
        }
      ]
  podLabels:
    alert: kong-alerts-dev
    cust: shared
    dc: sbx
    env: dev
    group: via-support-platops
    kube-monkey/enabled: enabled
    kube-monkey/identifier: kong
    kube-monkey/kill-mode: fixed
    kube-monkey/kill-value: "1"
    kube-monkey/mtbf: "1"
    platform: kubernetes
    product: via.core
    role: ingress
    service: kong
    team: platops
    vendor: aws
  postgresql:
    enabled: false
    nodeSelector:
      environment: dmz
    postgresqlPassword: ***************
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: dmz
  proxy:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kong-proxy.sandbox.*****************
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    externalTrafficPolicy: Local
    http:
      enabled: true
    loadBalancerSourceRanges:
      **** redacted ****
    type: LoadBalancer
  replicaCount: 3
  resources:
    limits:
      cpu: 4000m
      memory: 8Gi
    requests:
      cpu: 1000m
      memory: 4Gi
  runMigrations: false
  tolerations:
  - effect: NoSchedule
    key: dedicated
    operator: Equal
    value: dmz
nameOverride: kong-hpa
namespace: sandbox
postgresql:
  enabled: false

COMPUTED VALUES:
controller:
  autoscaling:
    enabled: true
    iata: sbox
    maxReplicas: 10
    minReplicas: 3
    targetCPUUtilizationPercentage: 70
fullnameOverride: kong-hpa
ingressclass: kong-sbox
kong:
  admin:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kong-admin.sandbox.**************
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    containerPort: 8444
    externalTrafficPolicy: Local
    ingress:
      annotations: {}
      enabled: false
      hosts: []
      path: /
    servicePort: 8444
    type: ClusterIP
    useTLS: true
  cassandra:
    enabled: false
  enterprise:
    enabled: false
    license_secret: you-must-create-a-kong-license-secret
    portal:
      enabled: false
      portal_auth: basic-auth
      session_conf_secret: you-must-create-a-portal-session-conf-secret
    rbac:
      admin_gui_auth: basic-auth
      enabled: false
      session_conf_secret: you-must-create-an-rbac-session-conf-secret
    smtp:
      admin_emails_from: none@example.com
      admin_emails_reply_to: none@example.com
      auth:
        smtp_password_secret: you-must-create-an-smtp-password
        smtp_username: ""
      enabled: false
      portal_emails_from: none@example.com
      portal_emails_reply_to: none@example.com
      smtp_admin_emails: none@example.com
      smtp_host: smtp.example.com
      smtp_port: 587
      smtp_starttls: true
    vitals:
      enabled: true
  env:
    ac: bos
    admin_access_log: /dev/stdout
    admin_error_log: /dev/stderr
    admin_gui_access_log: /dev/stdout
    admin_gui_error_log: /dev/stderr
    database: "off"
    deploy: dev
    pg_password: ****************
    portal_api_access_log: /dev/stdout
    portal_api_error_log: /dev/stderr
    proxy_access_log: /dev/stdout
    proxy_error_log: /dev/stderr
    proxy_listen: 0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
    real_ip_header: proxy_protocol
    vitals: true
  global: {}
  image:
    pullPolicy: Always
    repository: ***********
    tag: e27aa0b
  ingressController:
    enabled: true
    image:
      repository: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller
      tag: 0.5.0
    ingressClass: kong-sbox
    installCRDs: true
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    rbac:
      create: true
    readinessProbe:
      failureThreshold: 3
      httpGet:
        initialDelaySeconds: 30
        path: /healthz
        port: 10254
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    replicaCount: 3
    resources:
      limits:
        cpu: 4000m
        memory: 8Gi
      requests:
        cpu: 1000m
        memory: 4Gi
    serviceAccount:
      create: true
      name: null
  livenessProbe:
    failureThreshold: 5
    httpGet:
      path: /status
      port: admin
      scheme: HTTPS
    initialDelaySeconds: 30
    periodSeconds: 30
    successThreshold: 1
    timeoutSeconds: 5
  manager:
    annotations: {}
    externalIPs: []
    http:
      containerPort: 8002
      enabled: true
      servicePort: 8002
    ingress:
      annotations: {}
      enabled: false
      hosts: []
      path: /
    tls:
      containerPort: 8445
      enabled: true
      servicePort: 8445
    type: NodePort
  nodeSelector:
    environment: dmz
  podAnnotations:
    ad.datadoghq.com/admin-api.logs: |
      [
        {
          "source": "kong",
          "service": "admin-api",
          "sourcecategory": "kubernetes"
        }
      ]
    ad.datadoghq.com/ingress-controller.logs: |
      [
        {
          "source": "kong",
          "service": "ingress-controller",
          "sourcecategory": "kubernetes"
        }
      ]
    ad.datadoghq.com/kong.check_names: |
      [
        "kong"
      ]
    ad.datadoghq.com/kong.init_configs: |
      [
        {
        }
      ]
    ad.datadoghq.com/kong.instances: |
      [
        {
           "kong_status_url": "https://%%host%%:8444/status/",
           "ssl_validation": false
        }
      ]
    ad.datadoghq.com/kong.logs: |
      [
        {
          "source": "kong",
          "service": "kong",
          "sourcecategory": "kubernetes"
        }
      ]
    ad.datadoghq.com/kong.tags: |
      [
        {
          "ingress": "kong-sbox",
          "namespace": "sandbox"
        }
      ]
  podLabels:
    alert: kong-alerts-dev
    cust: shared
    dc: sbx
    env: dev
    group: ***************8
    kube-monkey/enabled: enabled
    kube-monkey/identifier: kong
    kube-monkey/kill-mode: fixed
    kube-monkey/kill-value: "1"
    kube-monkey/mtbf: "1"
    platform: kubernetes
    product: via.core
    role: ingress
    service: kong
    team: platops
    vendor: aws
  portal:
    annotations: {}
    externalIPs: []
    http:
      containerPort: 8003
      enabled: true
      servicePort: 8003
    ingress:
      annotations: {}
      enabled: false
      hosts: []
      path: /
    tls:
      containerPort: 8446
      enabled: true
      servicePort: 8446
    type: NodePort
  portalapi:
    annotations: {}
    externalIPs: []
    http:
      containerPort: 8004
      enabled: true
      servicePort: 8004
    ingress:
      annotations: {}
      enabled: false
      hosts: []
      path: /
    tls:
      containerPort: 8447
      enabled: true
      servicePort: 8447
    type: NodePort
  postgresql:
    enabled: false
    nodeSelector:
      environment: dmz
    postgresqlDatabase: kong
    postgresqlPassword: ****************
    postgresqlUsername: kong
    service:
      port: 5432
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: dmz
  proxy:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kong-proxy.sandbox.************
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    externalIPs: []
    externalTrafficPolicy: Local
    http:
      containerPort: 8000
      enabled: true
      servicePort: 80
    ingress:
      annotations: {}
      enabled: false
      hosts: []
      path: /
    loadBalancerSourceRanges:
      **** redacted ****
    tls:
      containerPort: 8443
      enabled: true
      servicePort: 443
    type: LoadBalancer
  readinessProbe:
    failureThreshold: 5
    httpGet:
      path: /status
      port: admin
      scheme: HTTPS
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  replicaCount: 3
  resources:
    limits:
      cpu: 4000m
      memory: 8Gi
    requests:
      cpu: 1000m
      memory: 4Gi
  runMigrations: false
  tolerations:
  - effect: NoSchedule
    key: dedicated
    operator: Equal
    value: dmz
  waitImage:
    repository: busybox
    tag: latest
nameOverride: kong-hpa
namespace: sandbox
postgresql:
  enabled: false

HOOKS:
MANIFEST:

---
# Source: hpa/charts/kong/templates/config-custom-server-blocks.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-default-custom-server-blocks
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
data:
  servers.conf: |
    # Prometheus metrics server
    server {
        server_name kong_prometheus_exporter;
        listen 0.0.0.0:9542; # can be any other port as well
        access_log off;
        location /metrics {
            default_type text/plain;
            content_by_lua_block {
                 local prometheus = require "kong.plugins.prometheus.exporter"
                 prometheus:collect()
            }
        }
        location /nginx_status {
            internal;
            access_log off;
            stub_status;
        }
    }
---
# Source: hpa/charts/kong/templates/controller-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kong-bob-kong
  namespace: 
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
---
# Source: hpa/charts/kong/templates/crd-kongconsumer.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongconsumers.configuration.konghq.com
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    kind: KongConsumer
    plural: kongconsumers
    shortNames:
    - kc
  additionalPrinterColumns:
  - name: Username
    type: string
    description: Username of a Kong Consumer
    JSONPath: .username
  - name: Age
    type: date
    description: Age
    JSONPath: .metadata.creationTimestamp
  validation:
    openAPIV3Schema:
      properties:
        username:
          type: string
        custom_id:
          type: string
---
# Source: hpa/charts/kong/templates/crd-kongcredential.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongcredentials.configuration.konghq.com
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    kind: KongCredential
    plural: kongcredentials
  additionalPrinterColumns:
  - name: Credential-type
    type: string
    description: Type of credential
    JSONPath: .type
  - name: Age
    type: date
    description: Age
    JSONPath: .metadata.creationTimestamp
  - name: Consumer-Ref
    type: string
    description: Owner of the credential
    JSONPath: .consumerRef
  validation:
    openAPIV3Schema:
      required:
      - consumerRef
      - type
      properties:
        consumerRef:
          type: string
        type:
          type: string
---
# Source: hpa/charts/kong/templates/crd-kongingress.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongingresses.configuration.konghq.com
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    kind: KongIngress
    plural: kongingresses
    shortNames:
    - ki
  validation:
    openAPIV3Schema:
      properties:
        upstream:
          type: object
        route:
          properties:
            methods:
              type: array
              items:
                type: string
            regex_priority:
              type: integer
            strip_path:
              type: boolean
            preserve_host:
              type: boolean
            protocols:
              type: array
              items:
                type: string
                enum:
                - http
                - https
        proxy:
          type: object
          properties:
            protocol:
              type: string
              enum:
              - http
              - https
            path:
              type: string
              pattern: ^/.*$
            retries:
              type: integer
              minimum: 0
            connect_timeout:
              type: integer
              minimum: 0
            read_timeout:
              type: integer
              minimum: 0
            write_timeout:
              type: integer
              minimum: 0
        upstream:
          type: object
          properties:
            hash_on:
              type: string
            hash_on_cookie:
              type: string
            hash_on_cookie_path:
              type: string
            hash_on_header:
              type: string
            hash_fallback_header:
              type: string
            hash_fallback:
              type: string
            slots:
              type: integer
              minimum: 10
            healthchecks:
              type: object
              properties:
                active:
                  type: object
                  properties:
                    concurrency:
                      type: integer
                      minimum: 1
                    timeout:
                      type: integer
                      minimum: 0
                    http_path:
                      type: string
                      pattern: ^/.*$
                    healthy: &healthy
                      type: object
                      properties:
                        http_statuses:
                          type: array
                          items:
                            type: integer
                        interval:
                          type: integer
                          minimum: 0
                        successes:
                          type: integer
                          minimum: 0
                    unhealthy: &unhealthy
                      type: object
                      properties:
                        http_failures:
                          type: integer
                          minimum: 0
                        http_statuses:
                          type: array
                          items:
                            type: integer
                        interval:
                          type: integer
                          minimum: 0
                        tcp_failures:
                          type: integer
                          minimum: 0
                        timeout:
                          type: integer
                          minimum: 0
                passive:
                  type: object
                  properties:
                    healthy: *healthy
                    unhealthy: *unhealthy
---
# Source: hpa/charts/kong/templates/crd-kongplugins.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongplugins.configuration.konghq.com
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    kind: KongPlugin
    plural: kongplugins
    shortNames:
    - kp
  additionalPrinterColumns:
  - name: Plugin-Type
    type: string
    description: Name of the plugin
    JSONPath: .plugin
  - name: Age
    type: date
    description: Age
    JSONPath: .metadata.creationTimestamp
  - name: Disabled
    type: boolean
    description: Indicates if the plugin is disabled
    JSONPath: .disabled
    priority: 1
  - name: Config
    type: string
    description: Configuration of the plugin
    JSONPath: .config
    priority: 1
  validation:
    openAPIV3Schema:
      required:
      - plugin
      properties:
        plugin:
          type: string
        disabled:
          type: boolean
        config:
          type: object
---
# Source: hpa/charts/kong/templates/controller-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
  name:  kong-bob-kong
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - "configuration.konghq.com"
    resources:
      - kongplugins
      - kongcredentials
      - kongconsumers
      - kongingresses
    verbs:
      - get
      - list
      - watch
---
# Source: hpa/charts/kong/templates/controller-rbac-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name:  kong-bob-kong
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name:  kong-bob-kong
subjects:
  - kind: ServiceAccount
    name: kong-bob-kong
    namespace: auto
---
# Source: hpa/charts/kong/templates/controller-rbac-role.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name:  kong-bob-kong
  namespace: 
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<kong-ingress-controller-leader-nginx>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "kong-ingress-controller-leader-kong-sbox-kong-sbox"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
# Source: hpa/charts/kong/templates/controller-rbac-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name:  kong-bob-kong
  namespace: auto
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kong-bob-kong
subjects:
  - kind: ServiceAccount
    name: kong-bob-kong
    namespace: auto
---
# Source: hpa/charts/kong/templates/service-kong-admin.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-bob-kong-admin
  annotations:
      external-dns.alpha.kubernetes.io/hostname: "kong-admin.sandbox.************"
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
spec:
  type: ClusterIP
  ports:
  - name: kong-admin
    port: 8444
    targetPort: 8444
    protocol: TCP
  selector:
    app: kong
    release: kong-bob
    component: app
---
# Source: hpa/charts/kong/templates/service-kong-proxy.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-bob-kong-proxy
  annotations:
      external-dns.alpha.kubernetes.io/hostname: "kong-proxy.sandbox.************"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  labels:
    app: kong
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    ******
  externalIPs:
  ports:
  - name: kong-proxy
    port: 80
    targetPort: 8000
    protocol: TCP
  - name: kong-proxy-tls
    port: 443
    targetPort: 8443
    protocol: TCP
  externalTrafficPolicy: Local

  selector:
    app: kong
    release: kong-bob
    component: app
---
# Source: hpa/charts/kong/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: "kong-bob-kong"
  labels:
    app: "kong"
    chart: "kong-0.14.1"
    release: "kong-bob"
    heritage: "Tiller"
    component: app
    alert: kong-alerts-dev
    cust: shared
    dc: sbx
    env: dev
    group: via-support-platops
    kube-monkey/enabled: enabled
    kube-monkey/identifier: kong
    kube-monkey/kill-mode: fixed
    kube-monkey/kill-value: "1"
    kube-monkey/mtbf: "1"
    platform: kubernetes
    product: via.core
    role: ingress
    service: kong
    team: platops
    vendor: aws
    
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kong
      release: kong-bob
      component: app
  template:
    metadata:
      annotations:
        ad.datadoghq.com/admin-api.logs: |
          [
            {
              "source": "kong",
              "service": "admin-api",
              "sourcecategory": "kubernetes"
            }
          ]
        ad.datadoghq.com/ingress-controller.logs: |
          [
            {
              "source": "kong",
              "service": "ingress-controller",
              "sourcecategory": "kubernetes"
            }
          ]
        ad.datadoghq.com/kong.check_names: |
          [
            "kong"
          ]
        ad.datadoghq.com/kong.init_configs: |
          [
            {
            }
          ]
        ad.datadoghq.com/kong.instances: |
          [
            {
               "kong_status_url": "https://%%host%%:8444/status/",
               "ssl_validation": false
            }
          ]
        ad.datadoghq.com/kong.logs: |
          [
            {
              "source": "kong",
              "service": "kong",
              "sourcecategory": "kubernetes"
            }
          ]
        ad.datadoghq.com/kong.tags: |
          [
            {
              "ingress": "kong-sbox",
              "namespace": "sandbox"
            }
          ]
        
      labels:
        app: kong
        release: kong-bob
        component: app
        alert: kong-alerts-dev
        cust: shared
        dc: sbx
        env: dev
        group: via-support-platops
        kube-monkey/enabled: enabled
        kube-monkey/identifier: kong
        kube-monkey/kill-mode: fixed
        kube-monkey/kill-value: "1"
        kube-monkey/mtbf: "1"
        platform: kubernetes
        product: via.core
        role: ingress
        service: kong
        team: platops
        vendor: aws
        
    spec:
      serviceAccountName: kong-bob-kong
      
      containers:
      - name: ingress-controller
        args:
        - /kong-ingress-controller
        # Service from where we extract the IP address/es to use in Ingress status
        - --publish-service=auto/kong-bob-kong-proxy
        # Set the ingress class
        - --ingress-class=kong-sbox
        - --election-id=kong-ingress-controller-leader-kong-sbox
        # the kong URL points to the kong admin api server
        - --kong-url=https://localhost:8444
        - --admin-tls-skip-verify # TODO make this configurable
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.5.0"
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
                limits:
                  cpu: 4000m
                  memory: 8Gi
                requests:
                  cpu: 1000m
                  memory: 4Gi
                
      
      - name: kong
        image: ******
        imagePullPolicy: Always
        env:
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8444 ssl"
        - name: KONG_NGINX_DAEMON
          value: "off"        
        - name: KONG_AC
          value: "bos"
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_DEPLOY
          value: "dev"
        - name: KONG_PG_PASSWORD
          value: "SS3+tc7reGwE@Rf_"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol"
        - name: KONG_REAL_IP_HEADER
          value: "proxy_protocol"
        - name: KONG_VITALS
          value: "true"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: /kong/servers.conf
        ports:
        - name: admin
          containerPort: 8444
          protocol: TCP
        - name: proxy
          containerPort: 8000
          protocol: TCP
        - name: proxy-tls
          containerPort: 8443
          protocol: TCP
        - name: metrics
          containerPort: 9542
          protocol: TCP
        volumeMounts:
          - name: custom-nginx-template-volume
            mountPath: /kong
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /status
            port: admin
            scheme: HTTPS
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /status
            port: admin
            scheme: HTTPS
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
          
        resources:
          limits:
            cpu: 4000m
            memory: 8Gi
          requests:
            cpu: 1000m
            memory: 4Gi
          
      nodeSelector:
        environment: dmz
        
      tolerations:
        - effect: NoSchedule
          key: dedicated
          operator: Equal
          value: dmz
        
      volumes:
        - name: custom-nginx-template-volume
          configMap:
            name: kong-default-custom-server-blocks
---
# Source: hpa/templates/kong-hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: kong-hpa
    chart: hpa-1.0.0
    component: ""
    heritage: Tiller
    release: kong-bob
  name: kong-hpa
  namespace: sandbox
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: kong-sbox-kong
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 70

Any thoughts here? Did I provide incorrect info? Would you rather have a pod describe output?

I am also running into a similar problem.

In addition, I am seeing a lot of these errors:

2019/08/08 21:13:24 [error] 39#0: *15572 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/pdk/private/phases.lua:66: no phase in kong.ctx.core.phase
stack traceback:
coroutine 0:
        [C]: in function 'error'
        /usr/local/share/lua/5.1/kong/pdk/private/phases.lua:66: in function 'check_phase'
        /usr/local/share/lua/5.1/kong/pdk/response.lua:662: in function 'collect'
        content_by_lua(servers.conf:12):3: in main chunk, client: 10.0.80.54, server: kong_prometheus_exporter, request: "GET /metrics HTTP/1.1", host: "10.0.52.33:9542"
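
If it helps triage: the guard that aborts here seems to fire because the custom kong_prometheus_exporter server block runs outside Kong's proxy lifecycle, so no request phase is ever recorded for it. A rough reconstruction of the check (the exact internals of kong/pdk/private/phases.lua may differ):

-- Rough reconstruction, not verbatim Kong source: requests served by a
-- hand-written server block never pass through Kong's runloop, so
-- kong.ctx.core.phase is never set and any phase-checked PDK call errors.
local function check_phase(accepted_phases)
  local phase = kong.ctx.core.phase
  if not phase then
    error("no phase in kong.ctx.core.phase")  -- the message in the log above
  end
  -- ...comparison against accepted_phases would follow here...
end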

The Prometheus bug was resolved in version 0.4.1 of the plugin. As for the other error, we are happy to report that we have been able to reproduce it and have tracked down the problem! The next release will contain the fix. Thank you for reporting!


I can still reproduce the Prometheus bug when using version 0.4.1.

Oh, good to know, thanks for the feedback! Could you share the error log you get when running with 0.4.1? Even if it's the same error message, slight line-number changes may be of interest. And what exactly did you do to trigger it? Was it when accessing the /metrics endpoint? Thank you!

Yeah, it occurs when I access the metrics endpoint (only on port 9542, not the admin port).

servers.conf:
# Prometheus metrics server
server {
    server_name kong_prometheus_exporter;
    listen 0.0.0.0:9542; # can be any other port as well

    access_log off;
    location /metrics {
        default_type text/plain;
        content_by_lua_block {
             local prometheus = require "kong.plugins.prometheus.exporter"
             prometheus:collect()
        }
    }

    location /nginx_status {
        internal;
        access_log off;
        stub_status;
    }
}

# Health check server
# TODO how to health check kong in dbless?
server {
    server_name kong_health_check;
    listen 0.0.0.0:9001; # can be any other port as well

    access_log off;
    location /health {
      return 200;
    }
}

/plugins:

{
  "next": null,
  "data": [
    {
      "created_at": 1565895104,
      "config": {},
      "id": "86e6f637-64e2-5672-920c-32e17e76e0fe",
      "service": null,
      "enabled": true,
      "tags": null,
      "consumer": null,
      "run_on": "first",
      "name": "prometheus",
      "route": null,
      "protocols": [
        "grpc",
        "grpcs",
        "http",
        "https"
      ]
    }
  ]
}

admin/metrics:

{
  "message": "An unexpected error occurred"
}

server/metrics:

<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>

/etc/kong/kong.conf

nginx_http_include = /kong/servers.conf

kong/logs/error.log:

2019/08/15 21:12:12 [error] 33#0: *209376 [kong] exporter.lua:88 prometheus: plugin is not initialized, please make sure  'prometheus_metrics' shared dict is present in nginx template, client: 127.0.0.1, server: kong_prometheus_exporter, request: "GET /metrics HTTP/1.1", host: "localhost:9542"
2019/08/15 21:12:12 [error] 33#0: *209376 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/pdk/private/phases.lua:66: no phase in kong.ctx.core.phase
stack traceback:
coroutine 0:
        [C]: in function 'error'
        /usr/local/share/lua/5.1/kong/pdk/private/phases.lua:66: in function 'check_phase'
        /usr/local/share/lua/5.1/kong/pdk/response.lua:662: in function 'collect'
        content_by_lua(servers.conf:12):3: in main chunk, client: 127.0.0.1, server: kong_prometheus_exporter, request: "GET /metrics HTTP/1.1", host: "localhost:9542"
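
In case it is useful while debugging, here is a quick sanity check (my own snippet, not from Kong) that can run in any content_by_lua context to confirm whether the shared dict named in the first error actually exists:

-- ngx.shared.<name> is nil unless nginx declares
-- "lua_shared_dict prometheus_metrics <size>;" in the http block, which is
-- what the exporter's error message above is asking for.
local dict = ngx.shared.prometheus_metrics
if not dict then
  ngx.log(ngx.ERR, "lua_shared_dict 'prometheus_metrics' is not declared; ",
          "the prometheus exporter cannot initialize")
else
  ngx.say("prometheus_metrics dict is present")
end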

@ArnavB
We are releasing Kong 1.3.0rc2 right now; the Docker image takes a little longer to become available on Docker Hub.

Once that is available, could you please test this once again using rc2?

I tried to reproduce the issue you are facing, but without success.


@hbagdi

I just booted up the new version of Kong (1.3.0-rc2).

It seems the Prometheus issue is fixed. However, there still appears to be a health-check issue.

logs:

2019/08/16 23:06:12 [error] 33#0: *937 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/runloop/balancer.lua:249: attempt to index local 'healthchecker' (a nil value)
stack traceback:
coroutine 0:
        /usr/local/share/lua/5.1/kong/runloop/balancer.lua: in function 'callback'
        /usr/local/share/lua/5.1/resty/dns/balancer/base.lua:1241: in function </usr/local/share/lua/5.1/resty/dns/balancer/base.lua:1240>, context: ngx.timer
2019/08/16 23:06:12 [error] 37#0: *944 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/runloop/balancer.lua:249: attempt to index local 'healthchecker' (a nil value)
stack traceback:
coroutine 0:
        /usr/local/share/lua/5.1/kong/runloop/balancer.lua: in function 'callback'
        /usr/local/share/lua/5.1/resty/dns/balancer/base.lua:1241: in function </usr/local/share/lua/5.1/resty/dns/balancer/base.lua:1240>, context: ngx.timer
2019/08/16 23:06:12 [warn] 33#0: *947 [lua] balancer.lua:249: callback(): [healthchecks] balancer service_name.namespace.svc reported health status changed to HEALTHY, context: ngx.timer
