Upstream healthcheck

How can a domain name be used for traffic distribution (Kubernetes) in an upstream health check?

Could you elaborate a little bit as to what you are trying to do?
I’m not sure I understand you correctly.

The Ingress in our Kubernetes cluster routes traffic by domain name, but Kong's health-check target uses IP + port, so the active health check can never pass. How should this situation be handled, or how should it be configured?

Active health checks cannot be used in Kong 1.0.2.

"IP + port is used in the health check target of Kong, so the active health check can never pass"

Are you referring to a problem of having a different Host header for an active HTTP or HTTPS health-check than what is expected by your upstream service? Have you considered using Active TCP health-checks?

For the case where the Host header does not match what the upstream expects: when configuring the upstream in Kubernetes, I change the type of both the active and passive checks to TCP instead of using HTTP-related parameters such as http_path. Is any other change needed?

Please refer to the documentation on using Active health checks:
https://docs.konghq.com/1.2.x/health-checks-circuit-breakers/#enabling-active-health-checks

You can set healthchecks.active.type to tcp to enable TCP health checks.

curl -X PUT http://localhost:8001/upstreams/test.service \
  --data "healthchecks.active.http_path=/test" \
  --data "healthchecks.active.timeout=5" \
  --data "healthchecks.active.concurrency=3" \
  --data "healthchecks.active.healthy.interval=10" \
  --data "healthchecks.active.unhealthy.interval=10" \
  --data "healthchecks.active.healthy.successes=3" \
  --data "healthchecks.active.unhealthy.tcp_failures=3" \
  --data "healthchecks.active.unhealthy.timeouts=3" \
  --data "healthchecks.active.unhealthy.http_failures=3" \
  --data "healthchecks.passive.healthy.successes=3" \
  --data "healthchecks.passive.unhealthy.tcp_failures=3" \
  --data "healthchecks.passive.unhealthy.timeouts=3" \
  --data "healthchecks.passive.unhealthy.http_failures=3"

curl -i -X POST http://localhost:8001/upstreams/test.service/targets \
  --data "target=127.0.0.1:2000" --data "weight=100"
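If the upstream should be probed with TCP checks instead of HTTP ones, the same upstream can be updated in place. A sketch, assuming a local Kong Admin API on port 8001 and the test.service upstream created above; with TCP checks Kong only verifies that the target accepts connections, so http_path no longer applies:

```shell
# Switch the active health check to TCP for the existing upstream.
# PATCH updates only the fields given, leaving the rest unchanged.
curl -X PATCH http://localhost:8001/upstreams/test.service \
  --data "healthchecks.active.type=tcp" \
  --data "healthchecks.active.healthy.interval=10" \
  --data "healthchecks.active.unhealthy.interval=10" \
  --data "healthchecks.active.unhealthy.tcp_failures=3"
```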
Hello, I have configured the upstream and target for the service. 127.0.0.1:2000 corresponds to the Kubernetes ingress controller, while the real back-end service is reached through the Kubernetes routing rules (the domain name matches the hosts field of the route).
Is there a solution for health checks in this case, or does it have to be implemented through a custom plugin?
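One possible approach, sketched under assumptions: since the ingress controller routes by Host header, the active probe needs to carry the same host the Ingress rule matches on. Newer Kong versions let you set a host_header field on the upstream; alternatively, the target can be registered by domain name so Kong resolves it via DNS instead of pinning one ingress IP. The host app.example.com below is a placeholder for your actual Ingress host rule, and host_header availability depends on your Kong version:

```shell
# Option 1 (assumes a Kong version whose upstream entity supports
# host_header): make active probes send "Host: app.example.com" so the
# ingress controller can route them to the real back-end service.
curl -X PATCH http://localhost:8001/upstreams/test.service \
  --data "host_header=app.example.com"

# Option 2: register the target by domain name rather than by the
# ingress controller's IP; Kong then resolves the name itself.
curl -X POST http://localhost:8001/upstreams/test.service/targets \
  --data "target=app.example.com:2000" --data "weight=100"
```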