Hey there,
I am currently trying to update an open-source OIDC plugin so that it works with the latest Kong versions (it used the now-deprecated Kong plugin API). I baked the plugin into a custom Kong image and tried to load it with the Kong Gateway Operator, but got this error:
error":"could not build Deployment for DataPlane default/kong-b9txx: could not generate Deployment: unsupported DataPlane image xxxxxx/kong-oidc:1.0.0"
Here is my Dockerfile for the custom image:
FROM kong/kong-gateway:3.11
USER root
# Build-time tooling for fetching the plugins and their dependencies
RUN apt-get update && apt-get install -y git luarocks
WORKDIR /tmp
# OIDC plugin: copy the Lua sources where Kong looks for plugins
RUN git clone https://github.com/xxxxx/kong-oidc.git
RUN luarocks install lua-resty-openidc
RUN mkdir -p /usr/local/share/lua/5.1/kong/plugins/oidc \
&& cp /tmp/kong-oidc/kong/plugins/oidc/*.lua /usr/local/share/lua/5.1/kong/plugins/oidc/
# JWT Keycloak plugin, including its validators submodule
RUN git clone https://github.com/xxxxxx/kong-plugin-jwt-keycloak.git
RUN mkdir -p /usr/local/share/lua/5.1/kong/plugins/jwt-keycloak \
&& cp /tmp/kong-plugin-jwt-keycloak/src/*.lua /usr/local/share/lua/5.1/kong/plugins/jwt-keycloak/ \
&& cp -r /tmp/kong-plugin-jwt-keycloak/src/validators /usr/local/share/lua/5.1/kong/plugins/jwt-keycloak/
# Enable the custom plugins alongside the bundled ones
ENV KONG_PLUGINS=bundled,oidc,jwt-keycloak
USER kong
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8000 8443 8001 8444
STOPSIGNAL SIGQUIT
HEALTHCHECK --interval=10s --timeout=10s --retries=10 CMD kong health
CMD ["kong", "docker-start"]
And here is my GatewayConfiguration:
kind: GatewayConfiguration
apiVersion: gateway-operator.konghq.com/v1beta1
metadata:
  name: kong
  namespace: kong
spec:
  dataPlaneOptions:
    network:
      services:
        ingress:
          type: LoadBalancer
          name: gateway-lb
          annotations:
            metallb.io/loadBalancerIPs: 192.168.10.20
    deployment:
      podTemplateSpec:
        spec:
          containers:
            - name: proxy
              image: xxxxx/kong-oidc:1.0.0
              env:
                - name: KONG_DATABASE
                  value: "off"
                - name: KONG_PLUGINS
                  value: "bundled,oidc,jwt-keycloak"
  controlPlaneOptions:
    deployment:
      podTemplateSpec:
        spec:
          containers:
            - name: controller
              image: kong/kubernetes-ingress-controller:3.4
              env:
                - name: CONTROLLER_LOG_LEVEL
                  value: debug
Any help would be appreciated. The up-to-date versions of the OIDC and JWT plugins will be open-sourced once fully tested, as they are badly needed given that the official OIDC plugin is Enterprise-only.
I recently ran into this issue myself. It has to do with the behavior of the Gateway Operator: it performs a version check on the Docker image tag and expects a version greater than 3.x.
We’re working through a fix for custom images, but in the meantime, if you tag your image with a version greater than 3.0 it should work, e.g. xxxxx/kong-oidc:3.0.0.
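For context, the check amounts to roughly this: parse the tag off the image reference and compare its major version. This is a hypothetical sketch, not the operator's actual code:

```shell
# Hypothetical sketch of the operator's image check: pull the tag off the
# image reference and require a major version of at least 3.
image="xxxxx/kong-oidc:1.0.0"
tag="${image##*:}"      # "1.0.0"
major="${tag%%.*}"      # "1"
if [ "$major" -ge 3 ]; then
  echo "image accepted"
else
  echo "unsupported DataPlane image $image"
fi
```

Since only the tag is inspected, retagging the same image (e.g. `docker tag xxxxx/kong-oidc:1.0.0 xxxxx/kong-oidc:3.0.0` followed by a push) is enough to get past it.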
My savior! Thanks a lot, it works now. My only remaining issue is that the plugin configuration is not loaded: the plugin simply doesn’t have access to it.
Have you registered your schema.lua file with the control plane yet? You can do it through the UI or with a curl request. Here’s the script I use:
export CONTROL_PLANE_ID=$(curl -s -X GET "https://us.api.konghq.com/v2/control-planes?filter\[name\]=<your control plane name>" -H "Authorization: Bearer ${PAT}" | jq -r '.data[0].id' )
echo $CONTROL_PLANE_ID
curl -i -X POST \
"https://us.api.konghq.com/v2/control-planes/${CONTROL_PLANE_ID}/core-entities/plugin-schemas" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${PAT}" \
--data "{
\"lua_schema\": $(jq -Rs '.' ./kong/plugins/<your plugin name>/schema.lua)
}"
Once you have added the schema to the control plane, you will need to re-deploy your data planes.
If for some reason you need to update your schema, you will need to delete all instances of the plugin (remove them from all routes), and delete the existing schema, then recreate it. The delete script is as follows:
curl -i -X DELETE \
"https://us.api.konghq.com/v2/control-planes/${CONTROL_PLANE_ID}/core-entities/plugin-schemas/<your plugin name>" \
--header "Authorization: Bearer ${PAT}"
The above scripts point to the US Konnect API endpoint. If you’re not using US infra, simply substitute us.api.konghq.com with whatever is appropriate.
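If you switch regions often, the host can be parameterized. The region names in the comment are an assumption on my part, so verify them against your Konnect organization:

```shell
# Pick the Konnect API host from a region variable, defaulting to US.
# The region list (us, eu, au) is an assumption; check your own org.
REGION="${KONNECT_REGION:-us}"
KONNECT_API="https://${REGION}.api.konghq.com"
echo "$KONNECT_API"
```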
Hope this helps
Quick follow up. I did a bit of digging and you can disable image validation when installing your gateway operator. The helm command would look something like this:
helm upgrade --install kgo kong/gateway-operator \
-n kg-operator \
--create-namespace \
--set image.tag=1.6 \
--set kubernetes-configuration-crds.enabled=true \
--set env.ENABLE_CONTROLLER_KONNECT=true \
--set env.VALIDATE_IMAGES=false
Setting VALIDATE_IMAGES=false is the key here.
My savior, again! Thanks a lot, I’ll try this as soon as possible.
So I examined your solution and ran into an issue. I am not using Kong Konnect; I deploy my control and data planes in DB-less mode on-prem, using the Kong Gateway Operator Helm chart. From what I’ve read so far, in DB-less mode the plugin configuration has to be applied through Kubernetes objects (it is kept in memory) and the Admin API becomes read-only. Here are my configs so far:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: oidc
  namespace: monitoring
  annotations:
    konghq.com/ingress.class: kong
  labels:
    global: "false"
config:
  # Required fields
  client_id: kong-gateway
  client_secret: your-super-secret-client-secret
  realm: example-realm
  discovery: http://redacted/.well-known/openid-configuration
  scope: openid
  response_type: code
  introspection_cache_ignore: "no"
  bearer_only: "no"
  validate_scope: "no"
  ssl_verify: "no"
  use_jwks: "no"
  token_endpoint_auth_method: client_secret_post
  bearer_jwt_auth_signing_algs:
    - RS256
  header_names: []
  header_claims: []
  # Optional fields
  redirect_uri: https://otterstack.local/grafana/_oauth
  redirect_after_logout_uri: https://otterstack.local
  unauth_action: "auth"
  recovery_page_path: "https://google.com"
  logout_path: "/logout"
  redirect_after_logout_with_id_token_hint: "no"
  userinfo_header_name: "X-USERINFO"
  id_token_header_name: "X-ID-Token"
  access_token_header_name: "X-Access-Token"
  access_token_as_bearer: "no"
  disable_userinfo_header: "no"
  disable_id_token_header: "no"
  disable_access_token_header: "no"
  revoke_tokens_on_logout: "no"
  groups_claim: "groups"
  skip_already_auth_requests: "no"
  bearer_jwt_auth_enable: "no"
disabled: false
plugin: oidc
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-grafana
  namespace: monitoring
  annotations:
    konghq.com/plugins: oidc
spec:
  parentRefs:
    - name: kong
      kind: Gateway
      namespace: default
      sectionName: global-https
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /grafana
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: my-grafana
          kind: Service
          port: 80
          namespace: monitoring
kind: GatewayConfiguration
apiVersion: gateway-operator.konghq.com/v1beta1
metadata:
  name: kong
  namespace: kong
spec:
  dataPlaneOptions:
    network:
      services:
        ingress:
          type: LoadBalancer
          name: gateway-lb
          annotations:
            metallb.io/loadBalancerIPs: 192.168.10.20
    deployment:
      podTemplateSpec:
        spec:
          containers:
            - name: proxy
              image: armeldemarsac/kong-oidc:4.0.0
              env:
                - name: KONG_DATABASE
                  value: "off"
                - name: KONG_PLUGINS
                  value: "bundled,oidc,jwt-keycloak"
                - name: KONG_LOG_LEVEL
                  value: "debug"
  controlPlaneOptions:
    deployment:
      podTemplateSpec:
        spec:
          containers:
            - name: controller
              image: kong/kubernetes-ingress-controller:3.5
              env:
                - name: CONTROLLER_LOG_LEVEL
                  value: debug
From the data plane logs, I can see the plugin is correctly loaded and executed; the only problem is that the config is nil.
10.244.1.8 - - [23/Jul/2025:09:53:12 +0000] "GET /status HTTP/1.1" 200 1182 "-" "kong-ingress-controller/3.5.0"
2025/07/23 09:53:15 [notice] 1405#0: [kong] handler.lua:?:158 [oidc]------------------------------------------------------------------------------------------+
2025/07/23 09:53:15 [notice] 1405#0: |{ |
2025/07/23 09:53:15 [notice] 1405#0: | PRIORITY = 1000, |
2025/07/23 09:53:15 [notice] 1405#0: | VERSION = "1.3.0", |
2025/07/23 09:53:15 [notice] 1405#0: | access = <function 1>, |
2025/07/23 09:53:15 [notice] 1405#0: | no_consumer = true |
2025/07/23 09:53:15 [notice] 1405#0: |} "Full conf object in plugin.access" |
2025/07/23 09:53:15 [notice] 1405#0: +------------------------------------------------------------------------------------------------------------------------+
2025/07/23 09:53:15 [debug] 1405#0: *2003 [kong] handler.lua:47 [oidc] Authenticating request via OIDC for path: /grafana/grafana/d/aad65846-87f6-4422-94b2-68c667f54127/swagger-stats-dashboard
2025/07/23 09:53:15 [debug] 1405#0: *2003 [kong] handler.lua:49 [oidc] Unauth action from conf is equal to: nil
2025/07/23 09:53:15 [debug] 1405#0: *2003 [kong] handler.lua:53 [oidc] Unauth action si currently set to: deny
2025/07/23 09:53:15 [debug] 1405#0: *2003 [kong] handler.lua:17 [oidc] session_present=
2025/07/23 09:53:15 [debug] 1405#0: *2003 [kong] handler.lua:56 [oidc] OIDC authenticate returned: res=nil, err=unauthorized request
I see. I’m not sure whether KongPlugin works in DB-less mode.
I have only ever run DB-less mode using a kong.yaml file for configuration. I would look at the Gateway Operator logs and see if there are any errors there.
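For reference, the kong.yaml route is roughly this shape, mounted into the data plane via KONG_DECLARATIVE_CONFIG. This is a minimal DB-less sketch; the service URL and the plugin fields are taken from your manifests above, so adjust them to your cluster:

```yaml
_format_version: "3.0"
services:
  - name: grafana
    url: http://my-grafana.monitoring.svc:80
    routes:
      - name: grafana-route
        paths:
          - /grafana
        plugins:
          - name: oidc
            config:
              client_id: kong-gateway
              discovery: http://redacted/.well-known/openid-configuration
```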
I will keep looking on my end