Kong Ingress Controller + Cassandra not working as expected

This issue concerns an installation of the Kong Ingress Controller via Helm on top of Kubernetes v1.13.0; the datastore used is Cassandra.

After I install the Helm chart, everything seems to work successfully. The command I used to install is:

helm install stable/kong --name my-release-kong -f values.yaml

Some snippets from the values.yaml I used:

image:
  repository: kong
  tag: 1.0.3

env:
  database: cassandra

cassandra:
  enabled: true

postgresql:
  enabled: false

ingressController:
  enabled: true
  image:
    repository: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller
    tag: 0.3.0

The problem appears when I kill one of my nodes in the kube environment; after a while the Kong pod shows:

   kubectl logs -f my-release-kong-kong-fc57479dc-7s2vl

    2019/04/04 19:48:23 [warn] 36#0: *22221 [lua] cluster.lua:135: set_peer_down(): [lua-cassandra] setting host at DOWN, client:, server: kong_admin, request: "GET /status HTTP/1.1", host: ""
    2019/04/04 19:48:23 [error] 36#0: *22221 [lua] kong.lua:111: fn(): failed to connect to Cassandra during /status endpoint check: [Cassandra error] all hosts tried for query failed. my-release-kong-cassandra: host still considered down. host still considered down, client:, server: kong_admin, request: "GET /status HTTP/1.1", host: ""

It seems that Kong is not able to pick up the new Cassandra peers.

Also, logs from the Kong Ingress Controller:

kubectl logs -f my-release-kong-kong-controller-64946d8d6-k5kbf  -c ingress-controller

I0405 13:03:42.414808       6 controller.go:128] syncing Ingress configuration...
E0405 13:03:42.415653       6 controller.go:131] unexpected failure updating Kong configuration: 
500 Internal Server Error {"message":"An unexpected error occurred"}
W0405 13:03:42.415672       6 queue.go:113] requeuing kube-system/resourcequota-controller-token-9r7f8, err 500 Internal Server Error {"message":"An unexpected error occurred"}

The Cassandra cluster size is 3; nodetool status from one of the StatefulSet pods shows:

nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens       Owns (effective)  Host ID                               Rack
UN  502.08 KiB  256          65.2%             02a42f3e-7ab5-4fce-9cdd-85300e7a2f5d  rack1
UN  362.93 KiB  256          66.7%             7b1f5c34-a834-4b67-a67f-dbed52d39486  rack1
UN  493.64 KiB  256          68.1%             36652654-5d2d-4309-bc8a-ad2090449a68  rack1

kubectl describe po my-release-kong-kong-fc57479dc-7s2vl  

      KONG_ADMIN_LISTEN:     ssl
      KONG_PROXY_LISTEN:    , ssl
      KONG_NGINX_DAEMON:              off
      KONG_PROXY_ACCESS_LOG:          /dev/stdout
      KONG_ADMIN_ACCESS_LOG:          /dev/stdout
      KONG_PROXY_ERROR_LOG:           /dev/stderr
      KONG_ADMIN_ERROR_LOG:           /dev/stderr
      KONG_DATABASE:                  cassandra
      KONG_CASSANDRA_CONTACT_POINTS:  my-release-kong-cassandra
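
As a side note, the `KONG_*` variables above use Kong's standard environment-variable override mechanism: `KONG_<SETTING>` overrides the corresponding `kong.conf` setting. So the datastore portion of this pod's configuration is equivalent to this `kong.conf` fragment:

```
database = cassandra
cassandra_contact_points = my-release-kong-cassandra
```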

/ # hostname

/ # ping my-release-kong-cassandra -c 2
PING my-release-kong-cassandra ( 56 data bytes
64 bytes from seq=0 ttl=63 time=0.102 ms
64 bytes from seq=1 ttl=63 time=0.119 ms

--- my-release-kong-cassandra ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.102/0.110/0.119 ms

Lastly, some info from the Cassandra cluster:

root@my-release-kong-cassandra-1:/# cqlsh
Connected to cassandra at
[cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> describe keyspace kong; 

CREATE KEYSPACE kong WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '2'} AND durable_writes = true;
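
Note the keyspace was created with `replication_factor` 2 on a 3-node cluster. Cassandra's quorum size is floor(RF/2) + 1, so with RF=2 a QUORUM operation needs both replicas and cannot survive the loss of a replica-holding node, while RF=3 tolerates one. Whether or not that is the root cause here, a quick check of the arithmetic (illustrative only):

```shell
# Quorum size for a Cassandra keyspace: floor(RF / 2) + 1
for rf in 2 3; do
  echo "RF=$rf quorum=$(( rf / 2 + 1 ))"
done
# RF=2 -> quorum=2 (cannot lose any replica for QUORUM ops)
# RF=3 -> quorum=2 (can lose one replica)
```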

The conclusion is that after performing this scenario, the Kong Ingress Controller seems to work in an unstable mode, giving timeouts on the Ingresses previously declared.

Any help/comment is welcome.

Thanks in advance!

Is it possible for you to run the Cassandra cluster outside Kubernetes?
I suspect the issue arises because k8s reschedules the C* pods, their IP addresses change, and Kong can no longer reach Cassandra at the addresses it knows.
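
As a toy illustration of that hypothesis (everything below is made up; the file-backed "resolver" only mimics a client that caches its first DNS answer, which is effectively what a stale peer list amounts to):

```shell
# Simulated DNS record: lives in a file; resolve() is a stand-in lookup.
resolve() { cat "$1"; }

echo "10.244.1.5" > /tmp/peer      # pod IP before the node is killed
CACHED=$(resolve /tmp/peer)        # client resolves once and caches the answer

echo "10.244.2.9" > /tmp/peer      # pod rescheduled onto another node, new IP
echo "cached=$CACHED current=$(resolve /tmp/peer)"
# prints: cached=10.244.1.5 current=10.244.2.9 -- the cached peer is stale
```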