Kong 0.11 + Cassandra cluster issue


I have run into a problem with Kong 0.11 + a Cassandra cluster. The cluster has 2 DCs, with 6 nodes in each DC. The following configuration is intended to force Kong to use the Cassandra DC1 nodes only:

{{ $cassandra_consistency := "LOCAL_QUORUM" }}
{{ $cassandra_lb_policy := "DCAwareRoundRobin" }}
{{ $cassandra_repl_strategy := "NetworkTopologyStrategy" }}
{{ $cassandra_repl_factor := "3" }}
{{ $cassandra_data_centers := "dc1:3,dc2:3" }}
{{ $cassandra_schema_consensus_timeout := "100000" }}
{{ $cassandra_local_datacenter := "dc1" }}
{{ $cassandra_contact_points := ",,,,," }}
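For context, those template values render into the standard Kong 0.11 `cassandra_*` configuration properties, which can equivalently be set through `KONG_*` environment variables; a minimal sketch (the hostnames are placeholders I made up, since the real contact points are elided above):

```shell
# Equivalent KONG_* environment variables for the template values above.
# cassandra1.dc1.example / cassandra2.dc1.example are placeholder hostnames,
# not the actual (elided) contact points.
export KONG_CASSANDRA_CONSISTENCY=LOCAL_QUORUM
export KONG_CASSANDRA_LB_POLICY=DCAwareRoundRobin
export KONG_CASSANDRA_REPL_STRATEGY=NetworkTopologyStrategy
export KONG_CASSANDRA_REPL_FACTOR=3
export KONG_CASSANDRA_DATA_CENTERS=dc1:3,dc2:3
export KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT=100000
export KONG_CASSANDRA_LOCAL_DATACENTER=dc1
export KONG_CASSANDRA_CONTACT_POINTS=cassandra1.dc1.example,cassandra2.dc1.example
```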

When we bring down the 6 Cassandra nodes in DC2, Kong starts throwing exceptions and is no longer functional. Here are some of the errors captured:

[notice] 273#0: *12450469 [lua] cluster.lua:701: execute(): [lua-cassandra] SELECT COUNT(*) FROM keyauth_credentials was not prepared on host, preparing and retrying, client:, server: kong_admin, request: "GET /key-auths HTTP/1.1", host: "kong.query:8001"

[warn] 314#0: 62 [lua] socket.lua:152: tcp(): no support for cosockets in this context, falling back to LuaSocket, context: init_worker_by_lua

So why does Kong start failing when Cassandra DC2 is brought down, given that Kong is configured with LOCAL_QUORUM, DCAwareRoundRobin, and NetworkTopologyStrategy?
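My understanding is that LOCAL_QUORUM is computed only from the replicas in the coordinator's local data center, so losing all of DC2 should not, by itself, make DC1 requests fail. A quick sketch of that arithmetic (the `local_quorum` helper name is mine, not a Kong or Cassandra API):

```python
# LOCAL_QUORUM counts only the replicas in the local data center.
# With NetworkTopologyStrategy and dc1:3,dc2:3, a request coordinated
# in dc1 at LOCAL_QUORUM needs floor(3/2) + 1 = 2 of the 3 dc1
# replicas to respond, regardless of the state of dc2.

def local_quorum(local_replication_factor: int) -> int:
    """Number of local-DC replicas that must respond for LOCAL_QUORUM."""
    return local_replication_factor // 2 + 1

print(local_quorum(3))  # 2 of the 3 dc1 replicas must respond
```

Given that, with all 6 DC1 nodes healthy, every LOCAL_QUORUM request should comfortably find its 2 required replicas.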

By the way, when this issue was happening we checked the nodes in Cassandra DC1 and all of them were live and healthy. The only way we found to resolve the issue was to bring the whole Kong cluster down and back up again (only the Kong cluster was restarted, not the Cassandra cluster; Cassandra kept running with DC1 only).

Thanks in advance for any help.
