Ingress controller fails to update upstreams

Running Kong 1.3 and ingress controller 0.6, every once in a while I see the messages below in the logs of the ingress controller.
When this happens, Kong doesn't update the target in the upstreams, and as a result all traffic is sent to a nonexistent pod. Has anyone seen something like this? Maybe more CPU/memory needs to be added to the Admin API pod?

W1019 14:19:14.830888       1 queue.go:113] requeuing devops/devops-prometheus-node-exporter, err 8 errors occurred:
        while processing event: {Update} failed: making HTTP request: Put http://localhost:8444/upstreams/7c96b654-0740-4a25-bd2d-15442c87dd6b: EOF
        while processing event: {Update} failed: making HTTP request: Put http://localhost:8444/upstreams/3d34f0c5-0afc-48ff-9623-fd6674f1ba6f: EOF
        while processing event: {Update} failed: making HTTP request: Put http://localhost:8444/upstreams/09cb09d7-9a9b-48c7-a9e5-93c0e330a654: EOF
        while processing event: {Update} failed: making HTTP request: Put http://localhost:8444/upstreams/61c6c5e0-0036-4700-b19a-eb3bfda7838c: EOF
        while processing event: {Update} failed: making HTTP request: Put http://localhost:8444/upstreams/8b8c6675-fbdc-423c-836d-a255a915c8c9: EOF
        while processing event: {Update} failed: making HTTP request: Put http://localhost:8444/upstreams/942a851e-b505-4bd8-a9bf-1b750c033a34: EOF
        while processing event: {Update} failed: making HTTP request: Put http://localhost:8444/upstreams/9b498ffb-d3f7-45c9-bf15-75c2c98c8a13: EOF
        while processing event: {Update} failed: making HTTP request: Put http://localhost:8444/upstreams/8b4a2b2b-1873-40b2-b05b-e5a979842b48: EOF

It seems like the Admin API HTTP server in Kong is closing connections; this is the first time we're seeing this.
Do you see any pattern?

It is unlikely to be an issue with CPU/memory.

@hbagdi I can't reproduce it anymore.
One thing I did was to set:

        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"

I noticed it was set to auto (one NGINX worker per detected CPU core), which I'd imagine was not doing wonders for performance when we had only 100m of CPU allocated to the ingress pod.
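
For reference, the relevant part of the Kong container spec now looks roughly like the sketch below. The container name, image tag, and memory values are illustrative rather than copied from our manifest; the real change is pinning KONG_NGINX_WORKER_PROCESSES to "1" so the worker count matches the small 100m CPU request instead of the node's core count.

        containers:
          - name: proxy                  # illustrative name
            image: kong:1.3              # illustrative tag
            env:
              # Pin NGINX to a single worker instead of "auto", which spawns
              # one worker per detected CPU core regardless of the small
              # CPU request below.
              - name: KONG_NGINX_WORKER_PROCESSES
                value: "1"
            resources:
              requests:
                cpu: 100m                # what we had allocated to the ingress pod
                memory: 256Mi            # illustrative value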