Custom plugin using configmap not working as expected

I’m currently integrating a custom plugin (developed for Kong Gateway) into our Kubernetes setup.
The problem is that I have mounted the ConfigMap with the Lua files, but it doesn’t seem to be picked up.

To set this up I followed an older custom plugins answer from @traines, but without success. This is the procedure I followed:

1. Created the ConfigMap:

   kubectl create configmap custom-plugins --from-file=kong-upstream-gateway/ --namespace kong

2. Added the following to the deployment YAML:

   volumeMounts:
   - mountPath: /kong-plugins/kong/plugins
     name: custom-kong-plugins-volume

   volumes:
   - name: custom-kong-plugins-volume
     configMap:
       name: custom-plugins
       items:
       - key: access.lua
         path: access.lua
       - key: bearer_cache.lua
         path: bearer_cache.lua
       - key: gateway_client.lua
         path: gateway_client.lua
       - key: handler.lua
         path: handler.lua
       - key: json.lua
         path: json.lua
       - key: schema.lua
         path: schema.lua

3. Recreated the pod, which shows the mount (checked as shown below).
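To check that the files actually land in the container, something along these lines can be used to list the mounted directory (the pod name is just a placeholder for the actual Kong proxy pod):

   kubectl exec -n kong <kong-proxy-pod> -- ls /kong-plugins/kong/plugins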

but it doesn’t seem to work. How can I validate that the custom plugin is correctly available? And where is a good description of enabling custom plugins in a Kubernetes Ingress Controller Kong environment?

Some advice would be great :)

One update since that old post: if you’re using the Helm chart, the mounts/configuration can be set up semi-automatically. That method does have some limitations at present (it doesn’t work well with plugins that have been installed into a custom image or plugins that require external libraries), but works well for simpler plugins.
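For reference, pointing the chart at a plugin ConfigMap in values.yaml looks roughly like this (the ConfigMap and plugin names below are placeholders):

 plugins:
   configMaps:
   - name: kong-plugin-myplugin    # ConfigMap containing the plugin's .lua files
     pluginName: myplugin          # plugin/directory name Kong should load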

Loading plugins requires several settings on top of the mounts themselves. Your mounts look fine, though you should be able to simplify the volumeMount: specifying individual files should only be necessary if your ConfigMap keys do not match the intended filenames, and creating the ConfigMap from a plugin directory, as you’ve done, should set the correct names by default. For example, the generated volume sections I get from the chart look like:

 volumeMounts:
 - mountPath: /opt/kong/plugins/myplugin
   name: kong-plugin-myplugin
   readOnly: true

 volumes:
 - configMap:
     defaultMode: 420
     name: kong-plugin-test
   name: kong-plugin-myplugin

Kong doesn’t do anything with the mounts unless instructed to, however. You’ll need a few environment variables also, e.g. for the above:

- name: KONG_LUA_PACKAGE_PATH
  value: /opt/?.lua;;

- name: KONG_PLUGINS
  value: bundled,myplugin

You’d use /kong-plugins/?.lua;; for your package path and the plugin name (the directory name under /kong-plugins/kong/plugins) in place of myplugin.
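For your setup that would be something along these lines (the plugin name below is a placeholder for the directory name your plugin lives in):

 - name: KONG_LUA_PACKAGE_PATH
   value: /kong-plugins/?.lua;;
 - name: KONG_PLUGINS
   value: bundled,<your-plugin-name>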

You can alternately mount directly under the standard plugin directory (/usr/local/share/lua/5.1/kong/plugins/) and omit the package path variable.
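That variant would look roughly like this (again, the plugin directory name is a placeholder):

 volumeMounts:
 - mountPath: /usr/local/share/lua/5.1/kong/plugins/<your-plugin-name>
   name: custom-kong-plugins-volume
   readOnly: true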

I have created a custom plugin using this reference: https://medium.com/swlh/creating-and-installing-custom-lua-plugins-in-kong-ce7fd64d33bf, and we were able to build and deploy the plugin in Kong 2.0 using a Docker image.

Now we are trying to deploy the same plugin with Kong as an ingress controller (we are using AKS), but I am facing an issue.

What we have done so far:

  • Created the ConfigMap by following the above steps and can successfully describe it.

  • Now we are trying to deploy the same plugin, but we face the issue below when following the above steps. The deployment file looks like this:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-ingress-custom-plugin
  namespace: konga-test
spec:
  selector:
    matchLabels:
      app: konga-test
  template:
    metadata:
      name: kong-ingress-custom-plugin
      labels:
        app: kong-ingress-custom-plugin
    spec:
      containers:
      - name: proxy
        env:
        - name: KONG_PLUGINS
          value: 'bundled,cutom-auth'
        - name: KONG_LUA_PACKAGE_PATH
          value: /opt/?.lua;;
        volumeMounts:
        - name: kong-plugin-cutom-auth
          mountPath: /opt/kong/plugins/cutom-auth
      volumes:
      - defaultMode: 755
      - name: kong-plugin-cutom-auth
        configMap:
          name: kong-plugin-cutom-auth

We are executing the below command to apply the change:

kubectl apply -f miniorange-auth-deployment.yaml --validate=false

and the error we got:

The Deployment “kong-plugin-miniorange-auth” is invalid: spec.template.spec.containers[0].image: Required value

Here I don’t understand: if we created the ConfigMap for this plugin, why is it asking for a Docker image?

Kindly assist.

Thanks & Regards
Jaiswar Vipin Kumar R.

After updating the YAML the custom plugin finally got installed, but then I hit the next issue: the same Pod, Deployment and ReplicaSet are now in “CrashLoopBackOff”. After checking the pod log (pulled with a command along the lines shown below) we got the following error.
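 kubectl logs -n konga-test <kong-ingress-custom-plugin-pod>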

init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:389: [PostgreSQL error] failed to retrieve PostgreSQL server_version_num: connection refused stack traceback:

The updated YAML is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-ingress-custom-plugin
  namespace: konga-test
spec:
  selector:
    matchLabels:
      app: konga-test
  replicas: 1
  template:
    metadata:
      name: kong-ingress-custom-plugin
      labels:
        app: kong-ingress-custom-plugin
    spec:
      containers:
      - name: proxy
        image: 'kong:2.0'
        volumeMounts:
        - name: kong-plugin-custom-auth
          mountPath: /opt/kong/plugins/kong-plugin-custom-auth
        env:
        - name: KONG_PG_DATABASE
          value: kong
        - name: KONG_PG_HOST
          value: postgres
        - name: KONG_PG_PORT
          value: '5432'
        - name: KONG_PG_PASSWORD
          value: kong
        - name: KONG_PG_USER
          value: kong
        - name: KONG_LOG_LEVEL
          value: info
        - name: KONG_PLUGINS
          value: 'bundled,kong-plugin-custom-auth'
        - name: KONG_LUA_PACKAGE_PATH
          value: /etc/?./opt/?.lua;;
      volumes:
      - defaultMode: 755
      - name: kong-plugin-custom-auth
        configMap:
          name: kong-plugin-custom-auth

Kindly guide/assist on what I am doing wrong. Help will be really appreciated.

Thanks & Regards
Jaiswar Vipin Kumar R.

