Kong Ingress Controller + Cassandra not working as expected

#1

This issue concerns an installation of the Kong Ingress Controller via Helm on top of Kubernetes v1.13.0; the datastore used is Cassandra.

After installing the Helm chart everything seems to work successfully. The command I used to install it is:

helm install stable/kong --name my-release-kong -f values.yaml

A snippet from the values.yaml I used:

image:
  repository: kong
  tag: 1.0.3

env:
  database: cassandra

cassandra:
  enabled: true

postgresql:
  enabled: false

ingressController:
  enabled: true
  image:
    repository: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller
    tag: 0.3.0

The problem appears when I kill one of the nodes in the Kubernetes environment; after a while the Kong pod shows:

   kubectl logs -f my-release-kong-kong-fc57479dc-7s2vl

    2019/04/04 19:48:23 [warn] 36#0: *22221 [lua] cluster.lua:135: set_peer_down(): [lua-cassandra] setting host at 10.244.189.114 DOWN, client: 192.168.19.163, server: kong_admin, request: "GET /status HTTP/1.1", host: "10.244.235.188:8444"
    2019/04/04 19:48:23 [error] 36#0: *22221 [lua] kong.lua:111: fn(): failed to connect to Cassandra during /status endpoint check: [Cassandra error] all hosts tried for query failed. my-release-kong-cassandra: host still considered down. 10.244.189.114: host still considered down, client: 192.168.19.163, server: kong_admin, request: "GET /status HTTP/1.1", host: "10.244.235.188:8444"

It seems that Kong is not able to pick up the new Cassandra peers.
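For what it's worth, the contact-point service itself still resolves; one way to compare what the service currently points at versus the peer Kong complains about (a diagnostic sketch, assuming the Kong image ships the usual busybox tools) is:

    # Show the pod IPs currently backing the Cassandra contact-point service
    kubectl get endpoints my-release-kong-cassandra

    # Resolve the same name from inside the Kong pod
    kubectl exec -it my-release-kong-kong-fc57479dc-7s2vl -- nslookup my-release-kong-cassandra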

Also, logs from the Kong Ingress Controller:

kubectl logs -f my-release-kong-kong-controller-64946d8d6-k5kbf  -c ingress-controller

I0405 13:03:42.414808       6 controller.go:128] syncing Ingress configuration...
E0405 13:03:42.415653       6 controller.go:131] unexpected failure updating Kong configuration: 
500 Internal Server Error {"message":"An unexpected error occurred"}
W0405 13:03:42.415672       6 queue.go:113] requeuing kube-system/resourcequota-controller-token-9r7f8, err 500 Internal Server Error {"message":"An unexpected error occurred"}
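The 500 seems to be the admin API failing its own datastore check; as a quick sanity check (assuming curl is available in the Kong image), hitting /status directly should show the same Cassandra error while the peer is considered down:

    # Query the admin API status endpoint directly (self-signed cert, hence -k)
    kubectl exec -it my-release-kong-kong-fc57479dc-7s2vl -- curl -sk https://localhost:8444/status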

The Cassandra cluster size is 3; nodetool status from one of the StatefulSet pods shows:

nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.244.235.181  502.08 KiB  256          65.2%             02a42f3e-7ab5-4fce-9cdd-85300e7a2f5d  rack1
UN  10.244.235.183  362.93 KiB  256          66.7%             7b1f5c34-a834-4b67-a67f-dbed52d39486  rack1
UN  10.244.189.115  493.64 KiB  256          68.1%             36652654-5d2d-4309-bc8a-ad2090449a68  rack1
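Note that the peer Kong reports as down (10.244.189.114) no longer appears in the ring; the rescheduled pod presumably came back as 10.244.189.115, yet Kong keeps retrying the old address.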

kubectl describe po my-release-kong-kong-fc57479dc-7s2vl  

Environment:
      KONG_ADMIN_LISTEN:              0.0.0.0:8444 ssl
      KONG_PROXY_LISTEN:              0.0.0.0:8000,0.0.0.0:8443 ssl
      KONG_NGINX_DAEMON:              off
      KONG_PROXY_ACCESS_LOG:          /dev/stdout
      KONG_ADMIN_ACCESS_LOG:          /dev/stdout
      KONG_PROXY_ERROR_LOG:           /dev/stderr
      KONG_ADMIN_ERROR_LOG:           /dev/stderr
      KONG_DATABASE:                  cassandra
      KONG_CASSANDRA_CONTACT_POINTS:  my-release-kong-cassandra

/ # hostname
my-release-kong-kong-fc57479dc-7s2vl

/ # ping my-release-kong-cassandra -c 2
PING my-release-kong-cassandra (10.244.235.183): 56 data bytes
64 bytes from 10.244.235.183: seq=0 ttl=63 time=0.102 ms
64 bytes from 10.244.235.183: seq=1 ttl=63 time=0.119 ms

--- my-release-kong-cassandra ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.102/0.110/0.119 ms

Lastly, here is some info from the Cassandra cluster:

root@my-release-kong-cassandra-1:/# cqlsh
Connected to cassandra at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> describe keyspace kong; 

CREATE KEYSPACE kong WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '2'} AND durable_writes = true;
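Note that the keyspace was created with SimpleStrategy and a replication factor of 2, so losing a node leaves some partitions with a single live replica. If that turns out to be part of the problem, here is a sketch of raising the factor to 3 and repairing (an assumption on my side, not something I have verified here):

    # Raise the replication factor of the kong keyspace (assumed mitigation)
    kubectl exec -it my-release-kong-cassandra-1 -- cqlsh -e \
      "ALTER KEYSPACE kong WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'};"

    # Stream the additional replicas so the new factor takes effect
    kubectl exec -it my-release-kong-cassandra-1 -- nodetool repair kong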

The conclusion is that after performing this scenario, the Kong Ingress Controller works in an unstable mode, giving timeouts on the Ingresses previously declared.

Any help/comment is welcome.

Thanks in advance!


#2

Is it possible for you to run the Cassandra cluster outside Kubernetes?
I suspect the issue arises because k8s reschedules the C* pods, their IP addresses change, and Kong can no longer reach Cassandra.
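If moving Cassandra out of the cluster is not an option, a hypothetical workaround (the per-pod DNS names below are assumed from the chart's StatefulSet conventions and are untested) would be to hand Kong the stable per-pod hostnames instead of a single service name:

    # Point Kong at stable per-pod DNS names (names assumed, namespace assumed "default")
    helm upgrade my-release-kong stable/kong -f values.yaml \
      --set env.cassandra_contact_points="my-release-kong-cassandra-0.my-release-kong-cassandra.default.svc.cluster.local\,my-release-kong-cassandra-1.my-release-kong-cassandra.default.svc.cluster.local"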
