[Community Plugins] Optum Public Kong Plugins


Hi all,

We made a topic in the past about our Upstream JWT plugin, but now that we plan to release plenty of plugins into the wild, we figured it would be appropriate to make one topic to showcase them all! We intend to keep editing this topic and update it with every release we make.

Without further ado, let's begin:

Kong OIDC Auth - Provides authentication against IDPs following conventional OIDC methodology!

Side-note: we would love community feedback/PRs on this one ^. It is our first time implementing OIDC, and it reflects the quirks we found working with SPA apps that need to use OIDC. I am sure there is room for improvement :slight_smile: .

Kong Spec Expose - Provides API service provider spec/contract exposure on proxies protected by auth!

Kong Splunk Log - Provides HTTP collector logging in a Splunk consumable format!

Upstream JWT Plugin - Provides AAA for API providers validating transactions from Kong!

Path Based Routing Plugin - A plugin for Kong which dynamically sets the upstream hostname at runtime, based on the request URI path.
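To give a rough feel for what the Upstream JWT plugin does conceptually: the gateway mints a short-lived, signed token that the upstream API provider can verify to trust the transaction. This is a Python sketch only, not the plugin's actual Lua code; the HS256 algorithm, `kong` issuer claim, and 60-second TTL below are illustrative assumptions, as the real plugin's signing material and claims come from its configuration.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded, URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def mint_upstream_jwt(secret: str, consumer: str, ttl: int = 60) -> str:
    """Sketch: the gateway mints a short-lived, signed JWT that the
    upstream service can verify before trusting the transaction."""
    header = {"alg": "HS256", "typ": "JWT"}  # assumption: real plugin config decides the alg
    payload = {"iss": "kong", "sub": consumer, "exp": int(time.time()) + ttl}
    signing_input = (
        b64url(json.dumps(header).encode()) + b"." + b64url(json.dumps(payload).encode())
    )
    signature = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + b64url(signature)).decode()
```

The upstream service recomputes the HMAC over the first two segments with the shared secret and rejects the request if the signature or `exp` check fails.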



Added our latest addition: https://github.com/Optum/kong-path-based-routing . A very special sort of use case :slight_smile: , but it may help someone sometime.
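The routing rule itself is simple to picture. Here is a hedged Python sketch of the idea only (the real plugin is Lua, and how the hostname is derived is configurable; the first-path-segment rule and the `.internal.example.com` suffix below are made-up examples):

```python
def upstream_host_for(path: str, domain_suffix: str = ".internal.example.com") -> str:
    """Pick the upstream hostname from the first URI path segment at runtime,
    e.g. /billing/v1/invoices -> billing.internal.example.com (illustrative)."""
    segment = path.lstrip("/").split("/", 1)[0]
    if not segment:
        raise ValueError("no path segment to route on")
    return segment + domain_suffix
```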


As a continuation of our dedication to releasing helpful tools and plugins/scripts to the community, behold our latest additions to the Optum GitHub!

  1. Kong Service Virtualization Plugin

This plugin can help you mock API request and response data before your service is actually available using just the Kong API Gateway!
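Conceptually, service virtualization means the gateway short-circuits the proxy and answers with canned responses before the real backend exists. A minimal Python sketch of the idea (the routes, status codes, and payloads below are hypothetical, and the actual plugin is Lua inside Kong):

```python
# Hypothetical mock table: (method, path) -> (status, body)
VIRTUAL_RESPONSES = {
    ("GET", "/accounts/123"): (200, {"id": 123, "status": "active"}),
    ("POST", "/accounts"): (201, {"id": 124, "status": "created"}),
}

def virtualize(method: str, path: str):
    """Return a canned (status, body) instead of proxying upstream,
    or None when no mock is registered for the request."""
    return VIRTUAL_RESPONSES.get((method, path))
```

Consumers can build and test against the mocked contract while the real service is still under development.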

  2. Kong Expired Token Cleanup (For Cassandra Database)

This will help keep your Cassandra OAuth2 table clean of old expired tokens as well as keep you in the loop when consumers are not behaving properly and caching their generated OAuth2 tokens!
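The core of the cleanup is just "created time plus lifetime is in the past." A simplified Python sketch of that check (the real script runs against Cassandra's OAuth2 token table; the flat row shape with `id`, `created_at`, and `expires_in` fields below is an assumption for illustration):

```python
import time

def expired_token_ids(rows, now=None):
    """Given token rows (dicts with 'id', 'created_at' epoch seconds, and
    'expires_in' seconds), return the ids whose lifetime has elapsed."""
    now = time.time() if now is None else now
    return [r["id"] for r in rows if r["created_at"] + r["expires_in"] < now]
```

A high count of expired rows for one consumer is also the signal the post mentions: that consumer is likely generating fresh tokens on every call instead of caching them.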

  3. Kong Cluster Drain Plugin

A plugin that enables an admin to induce failed health checks against the load balancers fronting Kong. This can be helpful when you would like to divert traffic away from a given data center without actually taking down your lb (which would hurt active connections at that moment in time). This way you can divert traffic first, and then take your lb down safely as you begin specific cluster work! We are currently keying off an environment variable we define for the Splunk plugin in this thread as well, but you can fork it and name your own environment variable if you do not want to declare a SPLUNK_HOST :slight_smile: .
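The drain mechanic boils down to: when a flag is set, the health-check route starts returning a failing status so the LB marks the node down while in-flight connections finish. A Python sketch of that decision only; the `KONG_DRAIN` variable name is made up for clarity (as noted above, the plugin as shipped keys off the Splunk-related variable unless you fork it):

```python
import os

def health_status(drain_env: str = "KONG_DRAIN") -> int:
    """Return the HTTP status the LB health check should see:
    503 when the node is flagged for drain, 200 otherwise."""
    return 503 if os.environ.get(drain_env) == "true" else 200
```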

-Edit, does not look like I can modify my original post to include these new repo links so I guess people will just have to scroll :smile: .


New plugin hot off the press (special thanks to @thibaultcha and @James_Callahan for suggestions on this one too!) -

This plugin protects the client from consuming API responses, within the Kong API Gateway, that are deemed too large — responses they would rather not receive at all due to stability concerns or memory constraints on their application end. One small caveat currently: the backend service needs to return the standard Content-Length header in its response. PRs welcome to further enhance this plugin!
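The enforcement is a simple header check at response time. A Python sketch of the decision (illustrative only — the plugin is Lua, and the exact behavior when the header is absent is the caveat mentioned above; here the sketch simply passes the response through):

```python
def allow_response(headers: dict, max_bytes: int) -> bool:
    """Decide from the upstream response headers whether the body is
    small enough to forward to the client."""
    length = headers.get("Content-Length")
    if length is None:
        # Caveat from the post: without Content-Length we cannot enforce the cap
        return True
    return int(length) <= max_bytes
```

When the check fails, the gateway would reject the response (e.g. with an error status) instead of streaming an oversized body to the client.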


Thanks a lot for sharing @jeremyjpj0916! We’ll eventually be implementing OIDC at my client as well. I was curious, is there a reason you didn’t build off of https://github.com/nokia/kong-oidc?


@jerney Thanks for checking the topic out :slight_smile: . To speak to what we do internally: we use a modified Nokia plugin that supports multiple OpenID Connect identity providers on top of a single proxy for the introspection code flow, driven by a REST header that dictates which OpenID Connect IDP to use. The one we built is mostly for teams that want the gateway to handle the full authorization code flow, although we are not really using the plugin that heavily and it is probably not as thoroughly vetted as the Nokia plugin. If I were picking one to go with, I would probably use the Nokia one if it fit my needs out of the box; if I wanted to implement more custom logic and get granular, I would use our plugin, as it has no major dependencies and I personally follow my own plugin code a bit better :stuck_out_tongue: . Just my 2 cents — no real major reason we did not build off Nokia's. Matter of fact, I have chatted with the Nokia dev a few times, nice fellow!
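For readers new to OIDC: the "full authorization code flow" mentioned above starts with the gateway redirecting the user agent to the IDP's authorize endpoint. A hedged Python sketch of building that redirect URL (the endpoint, client id, and redirect URI below are made-up examples; the plugins themselves do this in Lua):

```python
from urllib.parse import urlencode

def build_authorize_url(authorize_endpoint: str, client_id: str,
                        redirect_uri: str, scope: str = "openid",
                        state: str = "") -> str:
    """First leg of the OIDC authorization code flow: the URL the user
    agent is redirected to so the IDP can authenticate them."""
    params = {
        "response_type": "code",  # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,  # opaque value echoed back to mitigate CSRF
    }
    return authorize_endpoint + "?" + urlencode(params)
```

The IDP then redirects back to `redirect_uri` with a one-time code, which the gateway exchanges at the token endpoint for ID/access tokens.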


Now that https://docs.konghq.com/hub/ is live (Woo Hoo!! https://konghq.com/blog/welcome-to-the-new-kong-hub/) we encourage PRs to add community-published Kong plugins and integrations.

Please see https://github.com/Kong/docs.konghq.com/blob/master/CONTRIBUTING.md#contributing-to-kong-documentation-and-the-kong-hub - and join us in https://discuss.konghq.com/c/hub with your questions and comments.


Hello all,

We have a new plugin for the community to tinker with :slight_smile: .

The goal of this plugin was to expose L4 errors NGINX throws when proxying to upstream/backend API services. We wanted a way to surface them in Lua land and consume them within other logging plugins (we leveraged our Splunk logging plugin to store the errors).

I have reservations about putting this on the Kong Hub in its current state because it's very early in design and has a few flaws that I would like to see improved before it belongs on the Hub (which I would consider a home for generally polished plugins).

  1. In the current implementation, if multiple services are being proxied and a 500-level error occurs from one backend service while another backend service throws an L4 500-level error at the same instant in time, it is currently possible to mix up which tx log message “claims” the error.

  2. The current implementation is a little heavy on processing and has the potential to loop over irrelevant errors.

These issues could eventually be cleaned up with a proper parser of NGINX logs, and with a more granular way for the underlying OpenResty dependency, ngx.errlog, to capture errors on a per-transaction basis as opposed to a global shm where a single GET flushes all logs. I figure Kong has plans down the road to integrate more with NGINX-generated logs for a historical record of why transaction issues occurred, but this plugin is a good first shot and generally meets our needs: it does a “best effort” capture tying web server error logs to tx-level logs. We currently just compare the tx timestamp to the error message's timestamp to know whether it should be disregarded as irrelevant :laughing: .
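That timestamp comparison can be sketched as a window check. This is a simplified Python illustration of the "best effort" matching described above, not the plugin's Lua code, and the flat `{"ts", "msg"}` entry shape is an assumption:

```python
def claim_errors(tx_start: float, tx_end: float, error_logs):
    """Best-effort match of NGINX error-log entries to one transaction:
    keep only entries whose timestamp falls inside the tx window.
    error_logs is a list of dicts with 'ts' (epoch seconds) and 'msg'."""
    return [e for e in error_logs if tx_start <= e["ts"] <= tx_end]
```

This also makes flaw #1 above concrete: two transactions whose windows overlap can both "claim" the same error entry.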

We welcome community or Kong PRs around this plugin; we just wanted to get something out there in the wild to get the whole community thinking.

And you probably thought you had already gotten all your Christmas gifts :wink: .