Declarative configuration appears to load, but no resources are created

I’m using a ConfigMap in Kubernetes to pass the contents of kong.yml and kong.conf into a directory, /etc/kong/, within Pods running as part of the kong-proxy manifest provided directly by Kong via GitHub.
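
Roughly, the relevant part of the Deployment looks like this (the names here are placeholders, not my exact manifest):

    volumes:
      - name: kong-config              # ConfigMap holding kong.yml and kong.conf
        configMap:
          name: kong-config
    containers:
      - name: proxy
        image: kong:2.0
        volumeMounts:
          - name: kong-config
            mountPath: /etc/kong/      # both files end up under /etc/kong/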

The contents of kong.yml don’t seem to matter - I’ve built some basic configs, used simple examples found elsewhere, and even used the example generated by kong config init when run from one of the Pods.

If there is a syntax error, Kong will complain about this and won’t start. If I fix it, it will start. I do see this message:

2020/02/26 16:24:58 [notice] 22#0: *1 [kong] init.lua:284 declarative config loaded from /etc/kong/kong.yml, context: init_worker_by_lua*

While this part appears to work, and the file is definitely there if I look directly in the Pod, exposing the admin API and looking for resources yields nothing, and none of the routes work:

curl -v localhost:8001/services | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8001 (#0)
> GET /services HTTP/1.1
> Host: localhost:8001
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 26 Feb 2020 16:26:50 GMT
< Content-Type: application/json; charset=utf-8
< Connection: keep-alive
< Access-Control-Allow-Origin: *
< Server: kong/2.0.1
< Content-Length: 23
< X-Kong-Admin-Latency: 1
<
{ [23 bytes data]
100    23  100    23    0     0    280      0 --:--:-- --:--:-- --:--:--   280
* Connection #0 to host localhost left intact
* Closing connection 0
{
  "next": null,
  "data": []
}

I do not understand where I’ve gone wrong, and I’ve tried many ways to fix this. If I simply use Postgres as a backend and deploy something like Konga, it uses the admin API to create its own resources, and they show up just fine. I cannot get anything declarative to work.

Any help is appreciated!

I tested this and it works fine.
Do you have services defined in your kong.yml file?
The default file that is generated is entirely commented out.

Yes. I’ve tried several different configurations with varying amounts of config and I cannot get anything to work. I took the comments out of the default file and tweaked the basic resources until they had a generic route that pointed to an nginx service in the cluster.
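
One of the variants I tried looked roughly like this (the nginx service address is a stand-in for my actual cluster service; the rest is illustrative):

    _format_version: "1.1"
    services:
      - name: nginx
        url: http://nginx.default.svc.cluster.local:80
        routes:
          - name: nginx-route
            paths:
              - /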

As an update, I dropped the replica count from 3 to 1, and now I’m able to make the admin API calls and see the resources, and they appear to be working. Any gotchas with running multiple replicas in the Deployment?

Not that we are aware of. Is the ConfigMap mounted correctly when Kong starts? I don’t remember the volume mount lifecycle, but please make sure the file is in place and readable by the user Kong runs as before Kong starts. A shell script in an init container could verify that.
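
Something along these lines, for example (an untested sketch; adjust the volume name and mount path to match your manifest):

    initContainers:
      - name: check-kong-config
        image: busybox
        command:
          - sh
          - -c
          - test -r /etc/kong/kong.yml   # fail early if the file is missing or unreadable
        volumeMounts:
          - name: kong-config            # the same volume the proxy container mounts
            mountPath: /etc/kong/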

@hbagdi I sorted it out, I think. The replica count doesn’t matter. What ends up happening is that I have to restart the proxy container in each Kong Pod for my declarative configuration to take effect. This is despite the fact that the ConfigMap already exists and the volume is available within the container. I always have to restart them to get it to take effect. Thoughts?

If that’s working then it means that the declarative config file is not available when Kong first starts and that’s why you have this problem.

That’s odd because the logs indicate the config is loaded from that file on first start, and the ConfigMap definitely exists beforehand.

2020/02/28 01:48:50 [notice] 22#0: *1 [kong] init.lua:284 declarative config loaded from /etc/kong/kong.yml, context: init_worker_by_lua*

I added an init container, which passes, and the proxy container starts, but I get nothing from curling the admin API until after I restart Kong at least once.

Maybe try mounting the configmap at a different mount point than the default file and then see if that changes anything.

@hbagdi - I took a cue from the Helm chart and I’m now mounting the kong.conf file at /etc/kong/kong.conf, which works fine; Kong seems to acknowledge that file when it first starts.

I’m mounting kong.yml at /kong_dbless/kong.yml, which is what the chart seems to do. This didn’t really change anything, as I still have to run kong reload after the Pods are up. I’m not really sure where else to look, but I’ve made sure the ConfigMap has what seem to be correct permissions, and I’ve also added an initContainer that checks the same volume mount path to verify that the file is available.
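
For reference, the mounts now look roughly like this (names are approximations of my manifest, not the chart verbatim; using subPath for kong.conf is just how I chose to mount a single file):

    containers:
      - name: proxy
        volumeMounts:
          - name: kong-server-config         # ConfigMap with kong.conf
            mountPath: /etc/kong/kong.conf
            subPath: kong.conf
          - name: kong-declarative-config    # ConfigMap with kong.yml
            mountPath: /kong_dbless/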

Any thoughts beyond that?

This has been resolved. For future reference for anyone else who comes across this: I was running the ingress controller as part of my k8s manifest, and in my situation, removing the ingress controller container from the Pods resolved the issue. The ingress controller was wiping out my configs. Thanks to Kong for helping me sort this out via the Kubernetes Slack.

I suppose you would be able to specify the “kong.yml” for Kong to load at boot time by passing

    - name: KONG_DECLARATIVE_CONFIG
      value: "/etc/kong/kong.yml"

as an environment variable in the Deployment.

That environment variable, along with KONG_DATABASE=off, was already being passed in.
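
i.e. the container env already includes both, something like:

    - name: KONG_DATABASE
      value: "off"
    - name: KONG_DECLARATIVE_CONFIG
      value: "/kong_dbless/kong.yml"   # points at wherever kong.yml is mounted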

AFAIK, KONG_DATABASE=off just says Kong should be started DB-less by default. I think if you want to use a custom kong.yml, you should use that env variable as well.

I think this issue has since been resolved, based on the chat I had with @echoboomer in the Kubernetes Slack.