We’ve noticed that the ratelimiting_metrics and response_ratelimiting_metrics tables from the Rate Limiting plugin grow large when using the Cassandra datastore.
With per-second limits configured, the UPDATE queries create a new row for every consumer access in every new second. Over time this can, for example, slow down the admin /status call, which runs an aggregation query without a partition key (SELECT COUNT(*)).
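For reference, the aggregation in question is roughly the following (a sketch based on the call described above, not the exact statement Kong issues):

```sql
-- Without a partition key in the WHERE clause, this forces a scan of the
-- whole table across all nodes, so it gets slower as the table grows.
SELECT COUNT(*) FROM ratelimiting_metrics;
```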
Would it make sense to introduce a maintenance script that regularly cleans out old records?
Or perhaps adjust the table definition to use Cassandra’s TTL?
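The TTL option could be as simple as a per-table default, so Cassandra expires old rows on its own; the statements below are a sketch (the TTL value is illustrative and should exceed the largest configured limit window). One caveat: Cassandra does not support TTLs on counter columns, so this only works if these tables don’t store their values as counters.

```sql
-- Sketch: give expired metrics rows an automatic expiry.
-- 86400 seconds = 1 day; adjust to the longest rate-limit window in use.
ALTER TABLE ratelimiting_metrics
  WITH default_time_to_live = 86400;

ALTER TABLE response_ratelimiting_metrics
  WITH default_time_to_live = 86400;
```

Expired rows become tombstones until compaction, so gc_grace_seconds may also need tuning to keep read performance from degrading.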