3 Replies Latest reply on Mar 13, 2014 4:36 PM by Keith Babo

    How does SwitchYard throttling work?

    Jorge Morales Master


      I've read the docs on SwitchYard throttling (Throttling - SwitchYard - Project Documentation Editor) and done some testing, and it seems that SwitchYard accepts all incoming requests but only processes them at the specified rate. This could easily cause resource exhaustion.

      It would be nice if the service could be configured to return a "ThrottlingExceededException" to the client and release the resources, so there is no resource starvation. The developer could then choose whether to enqueue or discard requests.
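      The enqueue-vs-reject distinction can be sketched with a plain-Java semaphore. This is a simplified illustration of the two strategies, not SwitchYard's actual implementation, and the exception name is hypothetical:

```java
import java.util.concurrent.Semaphore;

// Simplified sketch of two throttling strategies: block (enqueue)
// vs. fail fast (reject). Not SwitchYard's implementation.
public class ThrottleSketch {
    private final Semaphore permits;

    public ThrottleSketch(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // "Enqueue" behavior: the caller blocks until a permit frees up,
    // holding its connection/thread the whole time.
    public String invokeBlocking() throws InterruptedException {
        permits.acquire();
        try {
            return doWork();
        } finally {
            permits.release();
        }
    }

    // "Reject" behavior: fail immediately when the limit is reached,
    // releasing resources instead of queueing the request.
    public String invokeOrReject() {
        if (!permits.tryAcquire()) {
            // hypothetical stand-in for a "ThrottlingExceededException"
            throw new IllegalStateException("ThrottlingExceeded");
        }
        try {
            return doWork();
        } finally {
            permits.release();
        }
    }

    private String doWork() {
        return "ok";
    }
}
```

      With the reject variant, a client gets an immediate error it can back off from, instead of silently tying up a server thread in a queue.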


      I guess this functionality is provided by the framework behind each binding; otherwise it should be quite possible to implement. But now that I think about it more, the documentation states that throttling is shared between bindings, so it may be a SwitchYard feature.


      Any comment on internals would be nice.

        • 1. Re: How does SwitchYard throttling work?
          Keith Babo Master

          We use the Camel Throttle processor in our bus routes, with asyncDelayed=false, to control throttling.  The advantage of this approach is that rate limiting via throttling is visible to synchronous clients, which allows them to adjust as necessary to avoid timeouts.
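          The effect of the synchronous (asyncDelayed=false) style is that the calling thread itself is delayed to hold the rate, so a blocking client directly experiences the back-pressure. A minimal plain-Java sketch of that behavior (an illustration only, not Camel's actual Throttler code):

```java
// Sketch of synchronous rate limiting: the caller's own thread sleeps
// until the next slot, so the throttling delay is visible to the client.
public class SyncRateLimiter {
    private final long minIntervalMillis;  // minimum gap between requests
    private long nextAllowed = 0;

    public SyncRateLimiter(int requestsPerSecond) {
        this.minIntervalMillis = 1000L / requestsPerSecond;
    }

    // Blocks the caller until the next slot; returns how long it waited.
    // synchronized on purpose: callers are serialized to enforce the rate.
    public synchronized long acquire() throws InterruptedException {
        long now = System.currentTimeMillis();
        long wait = Math.max(0, nextAllowed - now);
        if (wait > 0) {
            Thread.sleep(wait);  // the delay happens on the calling thread
        }
        nextAllowed = Math.max(now, nextAllowed) + minIntervalMillis;
        return wait;
    }
}
```

          A synchronous client calling through such a limiter sees its response time grow as the limit is hit, which is the signal Keith describes.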

          • 2. Re: How does SwitchYard throttling work?
            Jorge Morales Master

            Hi Keith,

            Sorry, but I have to somewhat disagree.


            I've seen a lot of cases where, in this scenario, a client starts opening more and more connections to the server (the synchronous clients being aware of the delay, as you say) to try to reach the performance they are expecting. They cannot tell whether the performance degradation they experience is due to slowness in the service execution or to a throttling policy being applied, and most of them don't expect the latter, so they open more connections trying to squeeze more out of the platform.


            Also, I don't really see the point in applying this policy to every binding a service has. Suppose I have a service with two bindings, one for internal clients and one for external clients (which I would like to throttle). In this scenario I would have to create two different services, or two different service proxies.


            I would really like to see this feature enhanced, and I will open a JIRA for it if you don't mind.


            My feature requests would be:

            - Make throttling per binding (default).

            - Have a domain property to make throttling global (for every binding or per binding).
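            To make the per-binding request concrete, the configuration might look something like this in switchyard.xml. This is a purely hypothetical sketch: the element placement under a binding and the namespaces shown are assumptions, not existing SwitchYard configuration:

```xml
<sca:service name="OrderService" promote="OrderComponent/OrderService">
  <soap:binding.soap>
    <!-- hypothetical: throttle only this external-facing binding -->
    <throttling maxRequests="100" timePeriod="1000"/>
  </soap:binding.soap>
  <jca:binding.jca>
    <!-- internal binding left unthrottled -->
  </jca:binding.jca>
</sca:service>
```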


            I would add more feature requests, but they probably fit better with an information processor, like the SLA quickstart uses for discarding requests.

            • 3. Re: How does SwitchYard throttling work?
              Keith Babo Master

              No worries if you don't agree.  Keep in mind that I said the current approach to rate limiting has some advantages, but I didn't say it has no disadvantages.  ;-)  


              I think it would be nice to enhance the Camel throttler to add the reject at limit option.  I don't agree with setting this per binding as I personally view this as a service quality policy vs. a binding policy.  As you mentioned, promoting a service twice is an easy way to have two different policies for two different bindings.