Kong on K8s: lots of restarts before pods start running

We have updated the Kong Helm chart to the latest version.

We use the Ingress Controller proxy and DB-less Kong. Since the new deployment we have noticed that pods restart 2 or 3 times before reaching the Running state, and from time to time they simply restart again.
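For context, it is the standard kong/kong chart with the ingress controller enabled and the database turned off. A rough sketch of the kind of upgrade we ran (release and namespace names are placeholders, and the real values file contains more than this):

# Kong's chart repo added as "kong"
helm repo add kong https://charts.konghq.com
helm repo update
# Placeholder release/namespace; ingressController.enabled and env.database
# are the standard chart values for a controller-managed, DB-less deployment.
helm upgrade --install kong kong/kong --namespace kong \
  --set ingressController.enabled=true \
  --set-string env.database=off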

This is what I get from the container logs:

time="2022-05-30T09:00:09Z" level=info msg="diagnostics server disabled"
time="2022-05-30T09:00:09Z" level=info msg="starting controller manager" commit=1d3dfefefaf1b2fba6d776e91d06e065ec683257 logger=setup release=2.3.1 repo="https://github.com/Kong/kubernetes-ingress-controller.git"
time="2022-05-30T09:00:09Z" level=info msg="getting enabled options and features" logger=setup
time="2022-05-30T09:00:09Z" level=info msg="getting the kubernetes client configuration" logger=setup
W0530 09:00:09.129399       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2022-05-30T09:00:09Z" level=info msg="getting the kong admin api client configuration" logger=setup
time="2022-05-30T09:00:10Z" level=info msg="tag filtering enabled" logger=setup tags="[\"managed-by-ingress-controller\"]"
time="2022-05-30T09:00:10Z" level=info msg="configuring and building the controller manager" logger=setup
time="2022-05-30T09:00:10Z" level=info msg="Database mode detected, enabling leader election" logger=setup
time="2022-05-30T09:00:20Z" level=error msg="Failed to get API Group-Resources" error="Get \"https://10.122.0.1:443/api?timeout=32s\": net/http: TLS handshake timeout"
Error: unable to start controller manager: Get "https://xxxxx:443/api?timeout=32s": net/http: TLS handshake timeout
Error: unable to start controller manager: Get "https://xxxxx:443/api?timeout=32s": net/http: TLS handshake timeout

Any ideas why we get this TLS handshake timeout? We use cert-manager for certificates, but I'm not sure that is the certificate it is complaining about here.

Is there some reason it’d be unable to talk to the Kubernetes API server? It looks like it’s failing because of that, though offhand I can’t think of obvious reasons that’d only happen intermittently. Maybe misconfigured networking between the kubelets, but I’m not that familiar with lower-level cluster networking infrastructure.
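The address in that error (10.122.0.1:443) looks like the in-cluster Kubernetes API service, so the certificate involved should be the API server's own serving cert rather than anything issued by cert-manager. A rough way to check whether that endpoint is reachable and answers TLS handshakes promptly from inside the cluster (throwaway pod; the image and IP below are just examples, use the address from your own error message):

# A 401/403 response is expected without a token; what matters is whether the
# TLS handshake completes quickly or hangs the way it does for the controller.
kubectl run api-check --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -vk --max-time 10 https://10.122.0.1:443/api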

We notice restarts even in the middle of a normal day, for example.
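For the mid-day restarts, the previous container logs and the pod events are the first place I'd look; they usually say whether it was a failed liveness probe, an OOM kill, or the controller exiting on its own. Namespace, pod and container names below are examples; the chart's controller sidecar is normally called ingress-controller:

kubectl -n kong get pods
kubectl -n kong logs <pod-name> -c ingress-controller --previous
kubectl -n kong describe pod <pod-name>   # check the Events section at the bottom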

We made the call to revert back to 2.2 (the last version we ran before the upgrade to 2.8) and we are back on a stable Kong again, with no issues to be honest.