Kong bypassing custom plugin - OIDC

Hello Folks,
I’m aware that another user who is using the Kong Ingress controller + Plugin/Kong-ingress objects got this to work by fixing env variable KONG_PLUGINS

I have Kong 1.4 running on my AKS cluster; the ingress to the Kong proxy is via our Nginx controller (an Ingress object for the kong-ingress-data-plane service on port 8000).

Steps Taken so far to enable OIDC Plugin

  1. Log into the kong-control-plane pod and execute:
  • luarocks install kong-oidc
  • export KONG_PLUGINS=oidc
  • Update the kong_defaults.lua file with the value plugins = bundled,oidc
  • kong prepare && kong reload
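For what it's worth, the KONG_PLUGINS fix mentioned at the top of the thread usually comes down to including bundled in the value: KONG_PLUGINS=oidc on its own disables every stock plugin. A sketch of the in-pod sequence under that assumption (pod and file names as in the steps above):

```shell
# Assumes you are already inside the kong-control-plane pod.
luarocks install kong-oidc

# "oidc" alone would turn off the bundled plugins; list both.
export KONG_PLUGINS=bundled,oidc

# Regenerate the Nginx config and reload without downtime.
kong prepare && kong reload
```

Note that an export inside the pod does not survive a restart; setting KONG_PLUGINS in the Deployment spec is the durable option.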
  2. I can confirm that the OIDC plugin is installed:
    curl -i http://localhost:8001
    HTTP/1.1 200 OK
    Date: Tue, 03 Dec 2019 21:25:33 GMT
    Content-Type: application/json; charset=utf-8
    Connection: keep-alive
    Access-Control-Allow-Origin: *
    Server: kong/1.4.0
    Content-Length: 6497
    X-Kong-Admin-Latency: 6

{"plugins":{"enabled_in_cluster":["oidc"],"available_on_server":{"correlation-id":true,"pre-function":true,"cors":true,"ldap-auth":true,"loggly":true,"hmac-auth":true,"zipkin":true,"request-size-limiting":true,"azure-functions":true,"request-transformer":true,"oauth2":true,"response-transformer":true,"ip-restriction":true,"statsd":true,"jwt":true,"proxy-cache":true,"basic-auth":true,"key-auth":true,"http-log":true,"oidc":true,…,…
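Rather than eyeballing that blob, the two facts that matter can be pulled out with grep (a small sketch; grep is assumed to be present in the pod):

```shell
# Is the plugin compiled into this node? Should print "oidc":true
curl -s http://localhost:8001 | grep -o '"oidc":true'

# Is it configured anywhere in the cluster? Should print the enabled list.
curl -s http://localhost:8001 | grep -o '"enabled_in_cluster":\[[^]]*\]'
```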

  3. Create the service and route:
    curl -i -X POST \
      --url http://localhost:8001/services/ \
      --data 'name=inventory-service' \
      --data 'url=https://inventory.pet.contoso.com/'

curl -i -X POST \
  --url http://localhost:8001/services/inventory-service/routes \
  --data 'hosts[]=inventory.pet.contoso.com' \
  --data 'paths=/product-inventory-kafka-proxy/' \
  --data 'strip_path=false' \
  --data 'preserve_host=true' \
  --data 'regex_priority=1'
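Before attaching the plugin it is worth reading both objects back, since a typo in hosts or paths makes traffic match a different (unprotected) route. A sketch, using the service name created above:

```shell
# Show the service and its routes exactly as Kong stored them;
# check hosts, paths, strip_path and preserve_host against what you sent.
curl -s http://localhost:8001/services/inventory-service
curl -s http://localhost:8001/services/inventory-service/routes
```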
  4. Now enable the OIDC plugin for the inventory-service on the route:
POST /services/inventory-service/plugins HTTP/1.1
Host: localhost:8001
Content-Type: application/x-www-form-urlencoded
User-Agent: PostmanRuntime/7.20.1
Accept: */*
Cache-Control: no-cache
Postman-Token: 6acf67b4-9bd3-4947-9c5d-92d9572ff11a,5867e0ee-0869-445e-bae5-15284b8e1988
Accept-Encoding: gzip, deflate
Content-Length: 446
Connection: keep-alive

name=oidc&config.client_id=kongapi&config.client_secret=#####&config.discovery=https%3A%2F%2Flogin.microsoftonline.com%2F4a5b7942-b0b1-4244-b601-19f8e6c33e48%2Fv2.0%2F.well-known%2Fopenid-configuration&config.introspection_endpoint=https%3A%2F%2Flogin.microsoftonline.com%2F4a5b7942-0a52-nn779-19f8e6c33e48%2Foauth2%2Ftoken&run_on=all&route.id=341e2900-8e44-4457-9aa3-17a857f2c4db
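The same Postman call expressed as curl, for anyone following along (the client secret stays redacted, the route id is the one from the body above, and config.introspection_endpoint is left out here because its tenant id is redacted in the original):

```shell
# Attach the oidc plugin to the service, scoped to the given route.
curl -i -X POST http://localhost:8001/services/inventory-service/plugins \
  --data 'name=oidc' \
  --data 'config.client_id=kongapi' \
  --data 'config.client_secret=#####' \
  --data 'config.discovery=https://login.microsoftonline.com/4a5b7942-b0b1-4244-b601-19f8e6c33e48/v2.0/.well-known/openid-configuration' \
  --data 'route.id=341e2900-8e44-4457-9aa3-17a857f2c4db'
```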

  5. But when I test the proxy to the route, I don't get a 401 Unauthorized; I get a 200, basically bypassing the OIDC auth/token-validation flow entirely.
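For reference, this is the shape of test I mean (host and path from the route created above). With the plugin active you would expect a 302 redirect to the identity provider or a 401, never a plain 200:

```shell
# From inside the data-plane pod, hit the proxy port with the route's
# Host header and path; inspect the status line of the response.
curl -i http://localhost:8000/product-inventory-kafka-proxy/ \
  -H 'Host: inventory.pet.contoso.com'
```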

Any help or guidance on what I might have done wrong is much appreciated.
cheers,
subbu

You should be deploying Kong as the Ingress controller instead of using Nginx.

Make sure that you’re getting the 200 OK from the correct service. It might be that Kong, or the Nginx in front of Kong, is actually routing the traffic to a service different from the one you’re expecting.
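One quick way to check that hop (a sketch, reusing the host/path from the curl commands above): a response that actually passed through Kong carries a Via: kong/1.4.0 header, so its absence means the request never reached the Kong proxy.

```shell
# Fetch only the response headers and look for the Via header Kong adds.
curl -sI http://localhost:8000/product-inventory-kafka-proxy/ \
  -H 'Host: inventory.pet.contoso.com' | grep -i '^via'
```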

Thanks for the reply on this @hbagdi. The response is coming back from the correct upstream service, and we executed the curl test from within the kong-ingress-data-plane (proxy) pod.

Having said that, I will deploy the Kong Ingress Controller and see if that alters the behaviour in this case.

I wanted to get some clarification on this message that we see in the ingress-data-plane pod logs; could you please help clarify:
8:13 [debug] 32#0: *23901 [lua] init.lua:251: [cluster_events] new event (channel: 'invalidations') data: 'plugins:oidc::33716597-7699-4715-a037-d3162bf91953::' nbf: 'none'
2019/12/04 20:18:13 [debug] 32#0: *23901 [lua] cache.lua:204: [DB cache] received invalidate event from cluster for key: 'plugins:oidc::33716597-7699-4715-a037-d3162bf91953::'
2019/12/04 20:18:13 [debug] 32#0: *23901 [lua] cache.lua:289: invalidate_local(): [DB cache] invalidating (local): 'plugins:oidc::33716597-7699-4715-a037-d3162bf91953::'

Those are related to cache invalidation happening in a Kong cluster. The data-plane node is receiving the events and purging the local cache as needed.

