Unable to install Kong on Kubernetes with AWS RDS Postgres due to Timeout Error


#1

I have been trying to install Kong on my kops Kubernetes cluster, using AWS RDS as my Postgres datastore. This works locally, but I run into the following error when I try to set up the Kong Deployment.

2018/04/17 11:23:40 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:169: timeout
stack traceback:
	[C]: in function 'error'
	/usr/local/share/lua/5.1/kong/init.lua:169: in function 'init'
	init_by_lua:3: in main chunk
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:169: timeout
stack traceback:
	[C]: in function 'error'
	/usr/local/share/lua/5.1/kong/init.lua:169: in function 'init'
	init_by_lua:3: in main chunk

When I run the install with full debug logging, this is the output I get:

prefix directory /usr/local/kong not found, trying to create it
2018/04/19 00:47:30 [debug] 1#0: [lua] globalpatches.lua:9: installing the globalpatches
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:431: init(): [dns-client] (re)configuring dns client
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:435: init(): [dns-client] staleTtl = 4
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:438: init(): [dns-client] noSynchronisation = false
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:457: init(): [dns-client] query order = LAST, SRV, A, CNAME
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:489: init(): [dns-client] adding A-record from 'hosts' file: kong-rc-5b5c666c4-f4n4j = 100.96.5.33
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [fe00::0]
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0]
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:489: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1]
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1]
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1]
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [fe00::1]
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [fe00::2]
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:545: init(): [dns-client] nameserver 100.64.0.10
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:550: init(): [dns-client] attempts = 5
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:559: init(): [dns-client] timeout = 2000 ms
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:563: init(): [dns-client] ndots = 5
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:565: init(): [dns-client] search = default.svc.cluster.local, svc.cluster.local, cluster.local, ec2.internal
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:571: init(): [dns-client] badTtl = 30 s
2018/04/19 00:47:30 [debug] 1#0: [lua] client.lua:573: init(): [dns-client] emptyTtl = 1 s

I have been trying to make sense of the error, but I am not getting far. Is this an issue with the DNS resolver?
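One way to rule DNS in or out would be to resolve the RDS hostname from a throwaway pod inside the cluster, using the same cluster DNS server (100.64.0.10) shown in the debug log above. The pod name and image here are just examples:

```shell
# Spin up a temporary busybox pod and try to resolve the RDS endpoint
# through the cluster's DNS; the pod is deleted when the command exits.
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup iris-qa-kong-db.cbrlcgtke3vd.us-east-1.rds.amazonaws.com
```

If the name resolves here, the timeout is more likely a TCP connectivity problem than a DNS one.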

Below is my full Kubernetes setup for Kong:

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  type: NodePort
  ports:
  - name: kong-proxy
    port: 8000
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy-ssl
spec:
  type: NodePort
  ports:
  - name: kong-proxy-ssl
    port: 8443
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
spec:
  type: NodePort
  ports:
  - name: kong-admin
    port: 8001
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin-ssl
spec:
  type: NodePort
  ports:
  - name: kong-admin-ssl
    port: 8444
    protocol: TCP
  selector:
    app: kong

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-rc
spec:
  selector:
    matchLabels:
      app: kong
  replicas: 1
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: kong
    spec:
      containers:
      - name: kong
        image: kong
        imagePullPolicy: Always
        ports:
        - name: admin
          containerPort: 8001
          protocol: TCP
        - name: proxy
          containerPort: 8000
          protocol: TCP
        - name: proxy-ssl
          containerPort: 8443
          protocol: TCP
        - name: admin-ssl
          containerPort: 8444
          protocol: TCP
        env:
          - name: KONG_DATABASE
            value: postgres
          - name: KONG_LOG_LEVEL
            value: debug
          - name: KONG_PROXY_LISTEN
            value: "0.0.0.0:8000"
          - name: KONG_PROXY_LISTEN_SSL
            value: "0.0.0.0:8443"
          - name: KONG_ADMIN_LISTEN
            value: "0.0.0.0:8001"
          - name: KONG_ADMIN_LISTEN_SSL
            value: "0.0.0.0:8444"
          - name: KONG_PG_HOST
            value: iris-qa-kong-db.cbrlcgtke3vd.us-east-1.rds.amazonaws.com
          - name: KONG_PG_PORT
            value: "5432"
          - name: KONG_PG_USER
            value: irisadmin
          - name: KONG_PG_PASSWORD
            valueFrom:
              secretKeyRef:
                name: kongpgpassword
                key: KONG_PG_PASSWORD
          - name: KONG_PG_DATABASE
            value: iris_qa_kong_db
          - name: KONG_PG_SSL
            value: "true"
          - name: KONG_PG_SSL_VERIFY
            value: "false"
          - name: KONG_PROXY_ACCESS_LOG
            value: "/dev/stdout"
          - name: KONG_ADMIN_ACCESS_LOG
            value: "/dev/stdout"
          - name: KONG_PROXY_ERROR_LOG
            value: "/dev/stderr"
          - name: KONG_ADMIN_ERROR_LOG
            value: "/dev/stderr"

Any help would be truly appreciated; I am definitely lost here. I would like to add that this works with no issues when I run Kong as a Docker container locally on my machine.


#2

Did you find an answer? I am facing the same issue.


#3

Have you ensured that the Kong pods can reach the RDS Postgres database? Are they running in the same network (VPC), and are the security groups set up correctly?
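As a quick sanity check, something like the following should tell you whether the database is reachable at the TCP level from inside the cluster, and whether the RDS security group allows the traffic. The pod name, image tag, and security group ID are placeholders; substitute your own values:

```shell
# 1) Check TCP reachability to Postgres from inside the cluster.
#    pg_isready ships in the postgres image; -t 5 sets a 5 s timeout.
#    A "no response" or timeout here points at networking (routing,
#    VPC peering, security groups) rather than Kong itself.
kubectl run pg-test --rm -it --restart=Never --image=postgres:10 -- \
  pg_isready -h iris-qa-kong-db.cbrlcgtke3vd.us-east-1.rds.amazonaws.com -p 5432 -t 5

# 2) Inspect the RDS instance's security group and confirm it allows
#    inbound 5432 from the worker nodes' security group or CIDR range.
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions'
```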