How to route requests based on 'model' field in body

Dear all,

I wonder whether Kong supports routing requests based on the 'model' field in the request body.
We've built an LLM API Service and use Kong as an AI proxy. Suppose we have a request to our LLM API Service like this one.
As you can see, there is a 'model' field in the request body. The only thing we want to do is route the request to a different LLM service based on that field.
The demo model name is 'Qwen', but of course it can be any other name, such as 'gpt4', 'gpt3.5', and so on.

curl https://ourllmapiservice.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "Qwen",
    "messages": [
      {
        "role": "developer",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
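Stripped of the Kong specifics, the dispatch logic we want is tiny. Here is a minimal Python sketch of it, just to make the goal concrete (the model names and upstream URLs below are made up for illustration):

```python
import json

# Purely illustrative mapping; these service URLs are made up.
UPSTREAMS = {
    "Qwen": "http://qwen.internal/v1/chat/completions",
    "gpt4": "http://gpt4.internal/v1/chat/completions",
}

def pick_upstream(raw_body, default=None):
    """Return the upstream URL for the 'model' field of a JSON request body."""
    try:
        body = json.loads(raw_body)
    except ValueError:
        # Body is not valid JSON, fall back to a default upstream (if any).
        return default
    if not isinstance(body, dict):
        return default
    return UPSTREAMS.get(body.get("model"), default)
```

So `pick_upstream('{"model": "Qwen"}')` should select the Qwen upstream. The question is where in the Kong/Ingress stack this body inspection can happen.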

We know that routing requests based on a 'model' field in the request header is very easy to implement, and we have already achieved that. But now we want to conform to the OpenAI API specification, which means putting the 'model' field only in the body, not in a header. I'd like to share our current request path:

A request to our LLM API ----> LoadBalancer ----> Kong Gateway Service ----> Ingress (there are several Ingresses, each corresponding to a different model; requests are routed to the right Ingress based on a 'model' field in the header) ----> service behind the Ingress (every service is bound to an ai-proxy KongPlugin, so the request is forwarded to an LLM service such as Qwen or GPT) ----> Qwen, GPT, etc.
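One direction we have been considering, but have not verified end-to-end: attach Kong's bundled pre-function (serverless functions) plugin at the Kong Gateway layer to copy the body's 'model' field into a header before the request reaches the Ingresses, so the existing header-based routing downstream can stay unchanged. The plugin name, the `X-Model` header, and the manifest below are our own sketch, not an official recipe:

```yaml
# Sketch only: a KongPlugin that copies body.model into an X-Model header
# using Kong's pre-function plugin. Names (model-body-to-header, X-Model)
# are ours, chosen for illustration.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: model-body-to-header
plugin: pre-function
config:
  access:
    - |
      local cjson = require "cjson.safe"
      -- get_raw_body() can return nil (e.g. for large, unbuffered bodies),
      -- so guard every step
      local raw = kong.request.get_raw_body()
      if raw then
        local body = cjson.decode(raw)
        if body and body.model then
          kong.service.request.set_header("X-Model", body.model)
        end
      end
```

We have not tested whether setting the header at this layer is early enough for the downstream Ingress routing to pick it up, so corrections are very welcome.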

Hope my question is clear enough.