Kafka-upstream plugin cannot connect to Kafka broker

Hi All,

Has anyone faced an error similar to the one below when trying to configure the kafka-upstream plugin? The IP and port configured in the plugin were tested and verified for connectivity and the ability to push messages using the kafkacat tool.

172.16.10.1 - - [18/May/2020:07:52:12 +0000] "GET / HTTP/1.1" 200 61 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
2020/05/18 07:52:12 [notice] 23#0: *5807239 [kong] producers.lua:39 creating a new Kafka Producer for configuration table: table: 0x7f672cdfbdf0, context: ngx.timer, client: 172.16.10.1, server: 0.0.0.0:8443
2020/05/18 07:52:12 [info] 23#0: *5807265 [lua] client.lua:141: _fetch_metadata(): broker fetch metadata failed, err:closedcp-kafka-cp-kafka-rest.dp-common-event-streaming.svc.cluster.local8082, context: ngx.timer, client: 172.16.14.1, server: 0.0.0.0:8000
2020/05/18 07:52:12 [error] 23#0: *5807265 [lua] client.lua:150: _fetch_metadata(): all brokers failed in fetch topic metadata, context: ngx.timer, client: 172.16.14.1, server: 0.0.0.0:8000
2020/05/18 07:52:12 [info] 23#0: *5807265 [lua] client.lua:141: _fetch_metadata(): broker fetch metadata failed, err:closedcp-kafka-cp-kafka-rest.dp-common-event-streaming.svc.cluster.local8082, context: ngx.timer, client: 172.16.14.1, server: 0.0.0.0:8000
2020/05/18 07:52:12 [error] 23#0: *5807265 [lua] client.lua:150: _fetch_metadata(): all brokers failed in fetch topic metadata, context: ngx.timer, client: 172.16.14.1, server: 0.0.0.0:8000
2020/05/18 07:52:12 [error] 23#0: *5807265 [lua] producer.lua:272: buffered messages send to kafka err: not found topic, retryable: true, topic: KongLogTopic, partition_id: -1, length: 1, context: ngx.timer, client: 172.16.14.1, server: 0.0.0.0:8000

Thanks!

Hi! Did you solve this problem?

No, I am still having this problem. The Kafka plugin still cannot connect to the Kafka cluster.

Hello @danuka92,

The error is complaining that the host cp-kafka-cp-kafka-rest.dp-common-event-streaming.svc.cluster.local is not reachable from the Kong instance on port 8082. (Incidentally, I think this particular error message isn’t quite clear. I have just created a pull request to fix the wording of this error.)
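For reference, here is a minimal sketch of what the relevant plugin configuration might look like in declarative (kong.yml) form. The host, port, and topic below are taken from the log output in this thread, and the field names assume the standard kafka-upstream schema; adjust them to match your actual broker address:

```yaml
plugins:
- name: kafka-upstream
  config:
    bootstrap_servers:
    # host/port from the error log above; substitute your broker address
    - host: cp-kafka-cp-kafka-rest.dp-common-event-streaming.svc.cluster.local
      port: 8082
    topic: KongLogTopic
```

Each entry in bootstrap_servers must be a host/port pair that is resolvable and reachable from the Kong node itself, not just from your workstation.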

It is very difficult for me to say exactly what the problem is without knowing more about your infrastructure, but here are some things you can check:

  • If you are using a containerization solution like Docker and/or Kubernetes, make sure that port 8082 is correctly exposed to the Kong instance, and is not remapped to some other port in your Docker config options.
  • You are not using an IP to connect to this broker; you are using a host name (cp-kafka-cp-kafka-rest.dp-common-event-streaming.svc.cluster.local). This might therefore be a DNS problem. Does that name resolve to the appropriate IP for the Kong instance executing the plugin?
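A quick way to exercise both checks above (DNS resolution and port reachability) from the machine or pod running Kong is a small script like the following. This is just a diagnostic sketch; the host and port in the example are the ones from the error log in this thread, so substitute your own:

```python
import socket

def check_broker(host, port, timeout=3.0):
    """Return (resolved_ip, reachable) for host:port.

    resolved_ip is None if DNS resolution fails; reachable is True
    only if a TCP connection to the resolved address succeeds.
    """
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        return None, False  # DNS resolution failed
    try:
        # Attempt a plain TCP handshake to the resolved address.
        with socket.create_connection((ip, port), timeout=timeout):
            return ip, True
    except OSError:
        return ip, False  # name resolved, but the port is not reachable

if __name__ == "__main__":
    # Host/port from the error log above; replace with your broker address.
    host = "cp-kafka-cp-kafka-rest.dp-common-event-streaming.svc.cluster.local"
    ip, ok = check_broker(host, 8082)
    print(f"resolved={ip} reachable={ok}")
```

If resolution fails, fix DNS (or use the broker's IP) before touching the plugin config; if resolution succeeds but the connection fails, look at network policies, service/port exposure, or Docker port remapping.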

@antoniott15 my answer for you is pretty much the same I just gave. The kong logs should give you an idea of what the problem might be.
