SRV record with ringbalancer

Hello,
I am attempting to configure sticky sessions based on IP hash using an SRV record. With debug logging on, I see the following:

2018/05/07 21:09:58 [debug] 12482#0: *12521 [lua] balancer.lua:327: queryDns(): [ringbalancer] querying dns for record.example.com
2018/05/07 21:09:58 [debug] 12482#0: *12521 [lua] balancer.lua:371: queryDns(): [ringbalancer] no dns changes detected for record.example.com, still using ttl=0
2018/05/07 21:09:58 [debug] 12482#0: *12521 [lua] init.lua:354: balancer(): setting address (try 1): 10.0.1.1:30917

Then:

2018/05/07 21:09:59 [debug] 12482#0: *12526 [lua] balancer.lua:327: queryDns(): [ringbalancer] querying dns for record.example.com
2018/05/07 21:09:59 [debug] 12482#0: *12526 [lua] balancer.lua:371: queryDns(): [ringbalancer] no dns changes detected for record.example.com, still using ttl=0
2018/05/07 21:09:59 [debug] 12482#0: *12526 [lua] init.lua:354: balancer(): setting address (try 1): 10.0.1.1:22734

As you can see, the next request that comes through is sent to a different target (same IP, different port). Is what I am attempting possible? If not, is there another way to handle sticky sessions?

Thanks!

The logs show that Kong periodically checks whether the SRV record has changed; in both cases it hadn't. This means the layout of the balancer remained unchanged, and all hashes used for sticky sessions should end up on the same target.

But that holds only if you configured the balancer (the Kong upstream entity) to actually use hashing; by default it does weighted round-robin, which matches the behaviour in your logs. Please read the docs here: https://getkong.org/docs/0.13.x/loadbalancing/#balancing-algorithms
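As a sketch of what that looks like (assuming the Admin API is listening on localhost:8001 and your upstream is named record.example.com — adjust both to your setup), you would set `hash_on=ip` on the upstream so that requests from the same client IP consistently map to the same target in the ring balancer:

```shell
# Switch the upstream's balancing from the default (round-robin, hash_on=none)
# to hashing on the client IP. With hash_on=ip, the same client IP maps to the
# same target as long as the target list behind the SRV record doesn't change.
curl -X PATCH http://localhost:8001/upstreams/record.example.com \
  --data "hash_on=ip"

# Verify the setting took effect:
curl http://localhost:8001/upstreams/record.example.com
```

Note that consistency only holds while the balancer layout is stable: if the SRV record adds or removes targets, some hashes will necessarily move to other targets.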

And please post the configuration of your upstream here.