Using Kong As a Forward API Proxy

We are evaluating whether we can use Kong as a forward API proxy. I understand that Kong has a forward proxy plugin in the Enterprise Edition, but I would like more details on the requirements below.

We are integrating external systems through REST APIs that are slow. We would like to understand whether Kong can act as a forward proxy and cache the API responses from the external systems.

  1. How does the Kong forward proxy handle API rate-limiting errors from external systems? We have around 6,000 API resources being called by our application deployments. Can Kong pause ongoing API calls once any one of them receives an HTTP 429 error?
  2. How does the Kong forward proxy retry API calls on failure? Does it have a back-off policy?
  3. Can Kong be configured to prioritize certain API calls over others?

Kindly let me know your thoughts.

Kong does not really care who the client is and who the upstream is, so it can perfectly well be used for this use case. A common pattern is Kong injecting credentials for external services, so that the credentials do not have to be distributed internally.
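To illustrate the credential-injection pattern, here is a sketch using the request-transformer plugin against the Kong Admin API; the service name, Admin API address, and token below are placeholders, not values from this thread:

```shell
# Sketch: have Kong inject the external service's credential on the way out,
# so it never has to be distributed to internal clients.
# "external-api" and EXTERNAL_API_TOKEN are placeholders.
curl -i -X POST http://localhost:8001/services/external-api/plugins \
  --data "name=request-transformer" \
  --data "config.add.headers=Authorization:Bearer EXTERNAL_API_TOKEN"
```

Internal clients then call through Kong without ever seeing the token.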

  1. Regarding “pausing”: what do you expect to happen in these cases? You might be able to use health checks for this.
  2. Retries can be configured per service, so a 429 can be marked as not being a failure, and the error is passed back to the client without a retry. There is no explicit back-off policy. I would like to hear your thoughts on this, though; do you have any references for how it might be implemented?

  3. No.
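To make the per-service retry setting and the health-check idea concrete, here is a sketch against the Kong Admin API (the service name, upstream name, URL, and thresholds are placeholders; adjust for your deployment):

```shell
# Sketch: a service with retries disabled, so a failing response is passed
# back to the client immediately instead of being retried by Kong.
curl -i -X POST http://localhost:8001/services \
  --data "name=external-api" \
  --data "url=https://external.example.com" \
  --data "retries=0"

# Sketch: an upstream whose passive health checks count HTTP 429 responses
# as unhealthy, so Kong temporarily stops proxying to a target that is
# rate-limiting us.
curl -i -X POST http://localhost:8001/upstreams \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "external-api-upstream",
        "healthchecks": {
          "passive": {
            "unhealthy": { "http_statuses": [429], "http_failures": 1 }
          }
        }
      }'
```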


Thanks for the replies.

  1. Let me elaborate on the use case. We expect Kong to cache the API responses so that they can be served faster when requested by our application. In the background, Kong can forward the API call to the external system to refresh the data in the cache.
    We have a requirement to keep our cached data in sync with the external (third-party) system in near real time. We are dealing with around 6,000 API resources, and to keep the cached data fresh in near real time we expect around 20,000 API calls every 30 seconds. The external system cannot handle this load, so it returns HTTP 429 errors. An HTTP 429 on one call is typically followed by 429s on the subsequent calls for other resources, and so on. Hence I was thinking that Kong could pause the ongoing and subsequent API calls to the external system for a short duration (i.e. a retry interval), since making more API calls while the server is already overloaded would not help.

  2. When an API call fails with HTTP 429, Kong could retry it with a certain back-off policy. Here's an example of a Java library that handles retries with multiple back-off policies - Kong could manage the retries by itself, since the client does nothing special when an API call fails other than retry it.

  3. If prioritization cannot be done, are there other ways to work around the problem I want to solve? I foresee two categories of API calls: those made in the background to keep the cache up to date, and those triggered by user actions. If the latter compete with the former, the user receives timeout errors in the UI. So when a user action initiates API calls, there needs to be some way to pause the background calls and hold them in a waiting queue, letting the user-action call reach the external system and get a response. Note that we are dealing with high API call rates, hence the need for smart handling of which calls are made.
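On point 2: since Kong has no built-in back-off, the retry behaviour described above could be sketched on the client side, for example as a small shell wrapper. Everything here (function names, URL, attempt limits) is a hypothetical illustration, not anything Kong provides:

```shell
# retry_with_backoff CMD [ARGS...]: run CMD until it exits 0 or attempts
# run out, sleeping 1s, 2s, 4s, ... between tries (exponential back-off).
retry_with_backoff() {
  max_attempts=5
  delay=1
  attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    if "$@"; then
      return 0
    fi
    if [ "$attempt" -lt "$max_attempts" ]; then
      sleep "$delay"
      delay=$((delay * 2))
    fi
    attempt=$((attempt + 1))
  done
  return 1
}

# Example: treat an HTTP 429 from a placeholder URL as a retryable failure.
call_external_api() {
  status=$(curl -s -o /dev/null -w '%{http_code}' "https://external.example.com/resource")
  [ "$status" != "429" ]
}
# Usage: retry_with_backoff call_external_api
```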


You could use the proxy-cache plugin to cache results and serve them directly from Kong without invoking the backend again.
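For reference, a minimal proxy-cache setup on a service might look like the following (the service name and TTL are placeholders; the Enterprise Edition also offers a proxy-cache-advanced variant):

```shell
# Sketch: cache successful GET responses in Kong's memory for 30 seconds,
# so repeated requests are served without hitting the external system.
curl -i -X POST http://localhost:8001/services/external-api/plugins \
  --data "name=proxy-cache" \
  --data "config.strategy=memory" \
  --data "config.cache_ttl=30" \
  --data "config.request_method=GET" \
  --data "config.response_code=200" \
  --data "config.content_type=application/json"
```

Kong adds an `X-Cache-Status` header (`Hit`/`Miss`) to responses, which is handy for verifying the cache is actually being used.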