Kong fails to create balancer for upstream

Hi all,

I’m running into a few issues when running Kong 1.3.0-rc1 in DB-less mode with the Kubernetes Ingress Controller.

#1: This health check issue seems to be present all the time, though it gets worse as more routes are deployed.

2019/08/09 18:34:12 [error] 160#0: *147587 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/runloop/balancer.lua:249: attempt to index local 'healthchecker' (a nil value)
stack traceback:
coroutine 0:
        /usr/local/share/lua/5.1/kong/runloop/balancer.lua: in function 'callback'
        /usr/local/share/lua/5.1/resty/dns/balancer/base.lua:1270: in function </usr/local/share/lua/5.1/resty/dns/balancer/base.lua:1269>, context: ngx.timer
2019/08/09 18:34:42 [crit] 160#0: *101490 [lua] balancer.lua:640: on_upstream_event(): failed creating balancer for cloudproxy.arnavdatabus.svc: timeout waiting for balancer for 00aca459-9400-5dc9-905d-1908b6b36e34, context: ngx.timer
2019/08/09 18:34:42 [error] 160#0: *101490 [lua] balancer.lua:609: on_target_event(): target create: balancer not found for cloudproxy.arnavdatabus.svc, context: ngx.timer
2019/08/09 18:34:42 [error] 160#0: *101490 [lua] events.lua:194: do_handlerlist(): worker-events: event callback failed; source=lua-resty-healthcheck [service.namespace.svc], event=clear, pid=164 error='/usr/local/share/lua/5.1/resty/healthcheck.lua:225: attempt to index field 'targets' (a nil value)
stack traceback:
        /usr/local/share/lua/5.1/resty/healthcheck.lua:225: in function 'get_target'
        /usr/local/share/lua/5.1/resty/healthcheck.lua:942: in function </usr/local/share/lua/5.1/resty/healthcheck.lua:940>
        [C]: in function 'xpcall'
        /usr/local/share/lua/5.1/resty/worker/events.lua:185: in function 'do_handlerlist'
        /usr/local/share/lua/5.1/resty/worker/events.lua:217: in function 'do_event_json'
        /usr/local/share/lua/5.1/resty/worker/events.lua:361: in function 'poll'
        /usr/local/share/lua/5.1/resty/healthcheck.lua:1326: in function 'new'
        /usr/local/share/lua/5.1/kong/runloop/balancer.lua:339: in function 'create_healthchecker'
        /usr/local/share/lua/5.1/kong/runloop/balancer.lua:423: in function 'create_balancer'
        /usr/local/share/lua/5.1/kong/runloop/balancer.lua:760: in function 'init'
        /usr/local/share/lua/5.1/kong/runloop/handler.lua:812: in function </usr/local/share/lua/5.1/kong/runloop/handler.lua:811>', data={}, context: ngx.timer

#2: The official Prometheus plugin does not appear to be working. When querying the /metrics endpoint, I only receive 500 Internal Server Error responses.
/plugins endpoint:

{
  "created_at": 1565376474,
  "config": {},
  "id": "86e6f637-64e2-5672-920c-32e17e76e0fe",
  "service": null,
  "enabled": true,
  "tags": null,
  "consumer": null,
  "run_on": "first",
  "name": "prometheus",
  "route": null,
  "protocols": [
    "grpc",
    "grpcs",
    "http",
    "https"
  ]
},

/metrics endpoint:

{
  "message": "An unexpected error occurred"
}

kong/logs/error.log

2019/08/09 18:51:07 [error] 162#0: *174040 [kong] exporter.lua:88 prometheus: plugin is not initialized, please make sure  'prometheus_metrics' shared dict is present in nginx template, client: 10.0.85.66, server: kong_prometheus_exporter, request: "GET /metrics HTTP/1.1", host: "10.0.48.81:9542"
2019/08/09 18:51:07 [error] 162#0: *174040 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/pdk/private/phases.lua:66: no phase in kong.ctx.core.phase
stack traceback:
coroutine 0:
        [C]: in function 'error'
        /usr/local/share/lua/5.1/kong/pdk/private/phases.lua:66: in function 'check_phase'
        /usr/local/share/lua/5.1/kong/pdk/response.lua:662: in function 'collect'
        content_by_lua(servers.conf:12):3: in main chunk, client: ..., server: kong_prometheus_exporter, request: "GET /metrics HTTP/1.1", host: "...:9542"
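
Based on that error, my guess is that the 'prometheus_metrics' shared dict is not being declared in the nginx template in this setup. One workaround I plan to try (an assumption on my part, not yet verified against 1.3.0-rc1) is injecting the directive through Kong's nginx-directive environment variables on the Kong container:

        # hypothetical addition to the Kong container env in the Deployment below;
        # injects "lua_shared_dict prometheus_metrics 5m;" into the http block
        - name: KONG_NGINX_HTTP_LUA_SHARED_DICT
          value: "prometheus_metrics 5m"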

#3: Kong fails to find upstreams when handling upstream and target events. I think this is related to #1, but I’m listing it separately just in case.

kong/logs/error.log

2019/08/09 18:50:44 [error] 193#0: *173385 [lua] balancer.lua:634: on_upstream_event(): upstream not found for 00aca459-9400-5dc9-905d-1908b6b36e34, context: ngx.timer
2019/08/09 18:50:44 [error] 193#0: *173385 [lua] balancer.lua:634: on_upstream_event(): upstream not found for 26c43ca3-80ff-5862-98d5-e63d327ff612, context: ngx.timer
2019/08/09 18:50:44 [error] 193#0: *173385 [lua] balancer.lua:634: on_upstream_event(): upstream not found for be153462-f6dd-5574-8644-80a980f68141, context: ngx.timer
2019/08/09 18:50:44 [error] 193#0: *173385 [lua] balancer.lua:603: on_target_event(): target create: upstream not found for be153462-f6dd-5574-8644-80a980f68141, context: ngx.timer
2019/08/09 18:50:44 [error] 193#0: *173385 [lua] balancer.lua:603: on_target_event(): target create: upstream not found for 26c43ca3-80ff-5862-98d5-e63d327ff612, context: ngx.timer
2019/08/09 18:50:44 [error] 193#0: *173385 [lua] balancer.lua:603: on_target_event(): target create: upstream not found for 00aca459-9400-5dc9-905d-1908b6b36e34, context: ngx.timer

In addition, I have noticed that after some time Kong ‘loses’ all of its configuration data (making me think that Kong itself has crashed). In these cases I see errors like this:

kong/logs/error.log

2019/08/09 18:50:45 [notice] 1#0: signal 17 (SIGCHLD) received from 161
2019/08/09 18:50:45 [alert] 1#0: worker process 161 exited on signal 9
2019/08/09 18:50:45 [notice] 1#0: start worker process 265
2019/08/09 18:50:45 [notice] 1#0: signal 29 (SIGIO) received

/upstreams

{
  "next": null,
  "data": []
}

/routes

{
  "next": null,
  "data": []
}

/plugins

{
  "next": null,
  "data": []
}
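
Given the "exited on signal 9" above and the 500Mi memory limit on the Kong container, I suspect the kernel OOM killer. This is how I have been checking (the pod name is a placeholder, and kubectl top needs metrics-server):

# show the container's last termination state and reason (e.g. OOMKilled)
kubectl describe pod <kong-pod-name> -n namespace | grep -A 5 "Last State"

# compare live memory usage against the 500Mi limit
kubectl top pod <kong-pod-name> -n namespace --containers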

Any help would be appreciated,

Thanks,
Arnav

Thanks for the bug report. I’ll try to reproduce with what you’ve provided, but if possible could I also get:

KIC version
K8s version
kong.yaml
how you ran Kong (Helm or YAML manifests)

I tried to reproduce with some assumptions but was unable to reproduce the issue. Here is what I tried:

kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.8", GitCommit:"a89f8c11a5f4f132503edbc4918c98518fd504e3", GitTreeState:"clean", BuildDate:"2019-04-23T04:52:31Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-25T23:41:27Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

helm init --wait
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm repo update
helm install stable/kong --set ingressController.enabled=true,postgresql.enabled=false,env.database=off,image.tag=1.3rc1

curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/manifests/dummy-application.yaml | kubectl create -f -
export HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}')
export ADMIN_PORT=$(kubectl get svc --namespace default famous-sheep-kong-admin -o jsonpath='{.spec.ports[0].nodePort}')
export PROXY_PORT=$(kubectl get svc --namespace default famous-sheep-kong-proxy -o jsonpath='{.spec.ports[0].nodePort}')
export PROXY=$HOST:$PROXY_PORT

curl -sL bit.ly/echo-server | kubectl apply -f -

echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
" | kubectl apply -f -

curl -i $PROXY/foo

Versions:
KIC: 0.5.0
K8s:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.8-eks-a977ba", GitCommit:"a977bab148535ec195f12edc8720913c7b943f9c", GitTreeState:"clean", BuildDate:"2019-07-29T20:47:04Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

kong.yaml:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongplugins.configuration.konghq.com
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    kind: KongPlugin
    plural: kongplugins
    shortNames:
    - kp
  additionalPrinterColumns:
  - name: Plugin-Type
    type: string
    description: Name of the plugin
    JSONPath: .plugin
  - name: Age
    type: date
    description: Age
    JSONPath: .metadata.creationTimestamp
  - name: Disabled
    type: boolean
    description: Indicates if the plugin is disabled
    JSONPath: .disabled
    priority: 1
  - name: Config
    type: string
    description: Configuration of the plugin
    JSONPath: .config
    priority: 1
  validation:
    openAPIV3Schema:
      required:
      - plugin
      properties:
        plugin:
          type: string
        disabled:
          type: boolean
        config:
          type: object
        run_on:
          type: string
          enum:
          - first
          - second
          - all
        protocols:
          type: array
          items:
            type: string
            enum:
            - http
            - https
            - tcp
            - tls


---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongconsumers.configuration.konghq.com
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    kind: KongConsumer
    plural: kongconsumers
    shortNames:
    - kc
  additionalPrinterColumns:
  - name: Username
    type: string
    description: Username of a Kong Consumer
    JSONPath: .username
  - name: Age
    type: date
    description: Age
    JSONPath: .metadata.creationTimestamp
  validation:
    openAPIV3Schema:
      properties:
        username:
          type: string
        custom_id:
          type: string

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongcredentials.configuration.konghq.com
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    kind: KongCredential
    plural: kongcredentials
  additionalPrinterColumns:
  - name: Credential-type
    type: string
    description: Type of credential
    JSONPath: .type
  - name: Age
    type: date
    description: Age
    JSONPath: .metadata.creationTimestamp
  - name: Consumer-Ref
    type: string
    description: Owner of the credential
    JSONPath: .consumerRef
  validation:
    openAPIV3Schema:
      required:
      - consumerRef
      - type
      properties:
        consumerRef:
          type: string
        type:
          type: string

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongingresses.configuration.konghq.com
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    kind: KongIngress
    plural: kongingresses
    shortNames:
    - ki
  validation:
    openAPIV3Schema:
      properties:
        upstream:
          type: object
        route:
          properties:
            methods:
              type: array
              items:
                type: string
            regex_priority:
              type: integer
            strip_path:
              type: boolean
            preserve_host:
              type: boolean
            protocols:
              type: array
              items:
                type: string
                enum:
                - http
                - https
        proxy:
          type: object
          properties:
            protocol:
              type: string
              enum:
              - http
              - https
            path:
              type: string
              pattern: ^/.*$
            retries:
              type: integer
              minimum: 0
            connect_timeout:
              type: integer
              minimum: 0
            read_timeout:
              type: integer
              minimum: 0
            write_timeout:
              type: integer
              minimum: 0
        upstream:
          type: object
          properties:
            hash_on:
              type: string
            hash_on_cookie:
              type: string
            hash_on_cookie_path:
              type: string
            hash_on_header:
              type: string
            hash_fallback_header:
              type: string
            hash_fallback:
              type: string
            slots:
              type: integer
              minimum: 10
            healthchecks:
              type: object
              properties:
                active:
                  type: object
                  properties:
                    concurrency:
                      type: integer
                      minimum: 1
                    timeout:
                      type: integer
                      minimum: 0
                    http_path:
                      type: string
                      pattern: ^/.*$
                    healthy: &healthy
                      type: object
                      properties:
                        http_statuses:
                          type: array
                          items:
                            type: integer
                        interval:
                          type: integer
                          minimum: 0
                        successes:
                          type: integer
                          minimum: 0
                    unhealthy: &unhealthy
                      type: object
                      properties:
                        http_failures:
                          type: integer
                          minimum: 0
                        http_statuses:
                          type: array
                          items:
                            type: integer
                        interval:
                          type: integer
                          minimum: 0
                        tcp_failures:
                          type: integer
                          minimum: 0
                        timeout:
                          type: integer
                          minimum: 0
                passive:
                  type: object
                  properties:
                    healthy: *healthy
                    unhealthy: *unhealthy

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kong-serviceaccount
  namespace: namespace

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kong-ingress-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - "configuration.konghq.com"
  resources:
  - kongplugins
  - kongcredentials
  - kongconsumers
  - kongingresses
  verbs:
  - get
  - list
  - watch

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kong-ingress-role
  namespace: namespace
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<kong>"
  # This has to be adapted if you change either parameter
  # when launching the kong-ingress-controller.
  - "ingress-controller-leader-kong"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kong-ingress-role-nisa-binding
  namespace: namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kong-ingress-role
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: namespace

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: namespace

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-server-blocks
  namespace: namespace
data:
  servers.conf: |
    # Prometheus metrics server
    server {
        server_name kong_prometheus_exporter;
        listen 0.0.0.0:9542; # can be any other port as well

        access_log off;
        location /metrics {
            default_type text/plain;
            content_by_lua_block {
                 local prometheus = require "kong.plugins.prometheus.exporter"
                 prometheus:collect()
            }
        }

        location /nginx_status {
            internal;
            access_log off;
            stub_status;
        }
    }
    # Health check server
    # TODO how to health check kong in dbless?
    server {
        server_name kong_health_check;
        listen 0.0.0.0:9001; # can be any other port as well

        access_log off;
        location /health {
          return 200;
        }
    }

---

apiVersion: extensions/v1beta1
kind: "Deployment"
metadata:
  labels:
    app: "kong"
  name: "-frontend----kong"
  namespace: "namespace"
spec:
  replicas: 1
  selector:
    matchLabels:
      role: frontend----kong
  strategy:
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 0
    type: "RollingUpdate"
  template:
    metadata:
      annotations:
        prometheus.io/port: "9542"
        prometheus.io/scrape: "true"
      labels:
        role: "frontend----kong"
        app: apimanagement
    spec:
      serviceAccountName: "kong-serviceaccount"
      initContainers:
      - name: init-logging
        image: busybox:1.30.1
        command: ['sh', '-c', 'mkdir -p /shared/logs/; chmod -R 777 /shared/logs/']
        volumeMounts:
        - name: temporal
          mountPath: "/shared"
      containers:
      - name: "-kong"
        image: "kong:1.3.0-rc1"
        imagePullPolicy: Always
        resources:
          requests:
            memory: "500Mi"
            cpu: "200m"
          limits:
            memory: "500Mi"
            cpu: "500m"
        env:
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_NGINX_HTTP_INCLUDE
          value: "/kong/servers.conf"
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_LISTEN
          value: 127.0.0.1:8444 ssl
        - name: KONG_LUA_PACKAGE_PATH
          value: "/usr/local/share/lua/5.1/?.lua;;"
        - name: KONG_CUSTOM_PLUGINS
          value: cloudlink-auth
        ports:
        - name: proxy
          containerPort: 8000
          protocol: TCP
        - name: proxy-ssl
          containerPort: 8443
          protocol: TCP
        - name: metrics
          containerPort: 9542
          protocol: TCP
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 9001
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 9001
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - name: kong-server-blocks
          mountPath: /kong
        - name: temporal
          mountPath: "/shared"
      - name: ingress-controller
        args:
        - /kong-ingress-controller
        # the kong URL points to the kong admin api server
        - --kong-url=https://localhost:8444
        - --admin-tls-skip-verify
        # Service from where we extract the IP address(es) to use in Ingress status
        - --publish-service=namespace/kong-external
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: "kong-ingress-controller:0.5.0"
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      volumes:
      - name: kong-server-blocks
        configMap:
          name: kong-server-blocks
      - name: temporal
        hostPath:
          path: "/shared"
---

apiVersion: v1
kind: Service
metadata:
  name: kong-external
  namespace: namespace
spec:
  type: LoadBalancer
  ports:
  - name: kong-proxy
    port: 80
    targetPort: 8000
    protocol: TCP
  - name: kong-proxy-ssl
    port: 443
    targetPort: 8443
    protocol: TCP
  selector:
    role: frontend----kong
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: kong-prometheus
  namespace: namespace
  labels:
    global: "true"
plugin: prometheus

I am using YAML manifests, not Helm.

Let me know if there is anything else you need.

Thanks,
Arnav

Hi, did you manage to resolve the “attempt to index local ‘healthchecker’ (a nil value)” error? I am seeing the same error in my deployment (Kong 1.4 + kong-ingress-controller 0.6.1 + Postgres DB).

Specifically, the error message is:

lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/runloop/balancer.lua:241: attempt to index local 'healthchecker' (a nil value)

Thanks!