We use the Camel Throttle processor in our bus routes to control throttling with asyncDelayed=false. The advantage of this approach is that rate limiting via throttling is visible to synchronous clients, which allows them to adjust as necessary to avoid timeouts.
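To make the behavior concrete, here is a minimal sketch of synchronous (blocking) throttling: the caller's thread waits for a permit, so a synchronous client directly observes the added latency, analogous to Camel's Throttle with asyncDelayed=false. The class and method names are illustrative, not part of the Camel API.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch only: a blocking throttle that admits at most maxPerPeriod
// requests per period, delaying the calling thread otherwise.
class BlockingThrottle {
    private final Semaphore permits;

    BlockingThrottle(int maxPerPeriod, long periodMillis) {
        this.permits = new Semaphore(maxPerPeriod);
        // Refill permits once per period (daemon thread, illustration only).
        Thread refill = new Thread(() -> {
            while (true) {
                try {
                    TimeUnit.MILLISECONDS.sleep(periodMillis);
                } catch (InterruptedException e) {
                    return;
                }
                permits.release(maxPerPeriod - permits.availablePermits());
            }
        });
        refill.setDaemon(true);
        refill.start();
    }

    // Blocks the calling thread until a permit is available, so the
    // client sees the throttle delay as increased response time.
    void acquire() throws InterruptedException {
        permits.acquire();
    }

    // Non-blocking probe: true if a request would be admitted right now.
    boolean tryAcquire() {
        return permits.tryAcquire();
    }
}
```

Because the delay happens on the caller's own thread, a synchronous client can measure it and back off, which is the advantage described above.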
Sorry, but I have to somewhat disagree.
I've seen a lot of cases where, under this scenario, a client starts opening more and more connections to the server (synchronous clients being aware of the slowdown, as you say) to try to reach the performance it expects. Clients cannot tell whether the performance degradation they experience is due to slowness in the service execution or to a throttling policy being applied, and most of them don't expect the latter, so they open more connections trying to squeeze performance out of the platform.
I also don't really see the point in having this policy applied to every binding a service has. Suppose I have a service with two bindings, one for internal clients and one for external clients (which I would like to throttle). In this scenario I would have to create two different services, or two different service proxies.
I would really like to see this feature enhanced, and I will open a JIRA for it, if you don't mind.
My feature requests would be:
- Make throttling per binding (the default).
- Have a domain property that makes throttling global (applied across every binding) instead of per binding.
I would add more feature requests, but they probably fit better with an Information Processor, such as discarding requests the way the SLA quickstart does.
No worries if you don't agree. Keep in mind that I said the current approach to rate limiting has some advantages, but I didn't say it has no disadvantages. ;-)
I think it would be nice to enhance the Camel throttler to add a reject-at-limit option. I don't agree with setting this per binding, as I personally view this as a service quality policy rather than a binding policy. As you mentioned, promoting a service twice is an easy way to have two different policies for two different bindings.
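For comparison with the blocking behavior, here is a hypothetical sketch of what a reject-at-limit policy could look like: requests beyond the limit for the current window are refused immediately rather than delayed. All names here (RejectingThrottle, ThrottledException) are illustrative, not an existing Camel or SwitchYard API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: a fixed-window throttle that rejects excess requests
// instead of delaying the caller.
class RejectingThrottle {
    static class ThrottledException extends RuntimeException {
        ThrottledException(String msg) {
            super(msg);
        }
    }

    private final int maxPerWindow;
    private final long windowMillis;
    private final AtomicInteger count = new AtomicInteger();
    private long windowStart = System.currentTimeMillis();

    RejectingThrottle(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
    }

    // Admits the request or throws immediately if the window's limit
    // is exceeded -- the caller gets a fast failure, not a delay.
    synchronized void admit() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) { // new window: reset counter
            windowStart = now;
            count.set(0);
        }
        if (count.incrementAndGet() > maxPerWindow) {
            throw new ThrottledException("rate limit exceeded; try again later");
        }
    }
}
```

The trade-off versus the blocking approach is that a rejected client must implement its own retry/back-off, but it never ties up a connection waiting on the server.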