1 Reply Latest reply on May 15, 2003 3:38 PM by marc fleury

    Oneway invocations/queues/SEDA

    Sanne de Novice

      Hi All,

      I've been following the AOP work a bit, and came across the notion of one-way invocations in AOP. One-way invocations have an asynchronous feel to them, and a feel of persistence. The question naturally arises: what do we do with the persisted invocations, and how do we schedule them?

      Below are two links that put together an idea I had a year ago and that I was planning to turn into a PhD thesis; I couldn't get the proposal through, though (I was talking to a brick wall, or then again maybe I am loud).

      http://www.eecs.harvard.edu/~mdw/proj/seda/index.html

      http://www.jpaulmorrison.com/fbp/2001paper.htm

      I'll sum up what I made of these two papers; I don't know if my reading conforms with reality.

      The first link actually states that it is possible to architect a multi-stage process in a very efficient and robust manner by decomposing it into stages (read: interceptors) and putting them on queues, with each queue having a number of threads dedicated to emptying it. Imagine this: if JBoss had 1000 clients, there would not be 1000 threads running around; there would be this huge queue at the beginning, gradually trickling through later stages/queues in an orderly fashion. The response time would certainly go up, but there would be no thread madness.
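      As a rough sketch of what one such SEDA stage might look like in plain Java (the `SedaStage` name and all details here are my own illustration, not code from the paper or from JBoss): a bounded event queue drained by a small, fixed pool of worker threads, so 1000 clients never means 1000 threads.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of one SEDA "stage": a bounded event queue drained by a small,
// fixed pool of worker threads. All names are illustrative.
public class SedaStage {
    private final BlockingQueue<Runnable> queue;

    public SedaStage(int capacity, int nThreads) {
        queue = new ArrayBlockingQueue<>(capacity);
        for (int i = 0; i < nThreads; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) queue.take().run();   // block until an event arrives
                } catch (InterruptedException e) { /* stage shutting down */ }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    // Enqueue an event; blocks when the stage is saturated (natural back-pressure).
    public void enqueue(Runnable event) throws InterruptedException {
        queue.put(event);
    }

    // 'clients' requests flow through the stage with only 'nThreads' threads.
    public static int run(int clients, int nThreads) throws InterruptedException {
        SedaStage stage = new SedaStage(100, nThreads);
        CountDownLatch done = new CountDownLatch(clients);
        AtomicInteger served = new AtomicInteger();
        for (int i = 0; i < clients; i++)
            stage.enqueue(() -> { served.incrementAndGet(); done.countDown(); });
        done.await();
        return served.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("served " + run(1000, 4) + " requests with 4 threads");
        // prints: served 1000 requests with 4 threads
    }
}
```

      The bounded queue is what gives the "orderly fashion": when a stage is flooded, producers block instead of spawning threads.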

      The second paper talks about a software architecture described as a network of asynchronous nodes/processes having 'ports' to connect to each other. A process can be very concrete, like a printing service, or more abstract, like a switch, a router, a buffer, a queue. This has an almost electronic-component feel to it.

      Anyway, one could imagine the 1000 clients putting calls on a queue, and the queue sending the invocations through to the next stage, while another queue mixes in the one-way invocations.

      Software with the feel of hardware.
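      To make the "electronic component" feel concrete, here is a toy sketch assuming nothing beyond plain Java threads, with blocking queues playing the role of the 'ports' (the node wiring and names are made up for illustration):

```java
import java.util.concurrent.*;

// Toy FBP-style network: each node is a thread reading from an input
// "port" and writing to an output "port" (both just blocking queues).
public class PortNetwork {
    static final String EOS = "<end>";  // end-of-stream marker

    // A node that transforms items from 'in' and forwards them to 'out'.
    static void node(BlockingQueue<String> in, BlockingQueue<String> out,
                     java.util.function.UnaryOperator<String> f) {
        Thread t = new Thread(() -> {
            try {
                for (String s = in.take(); !s.equals(EOS); s = in.take())
                    out.put(f.apply(s));
                out.put(EOS);                    // propagate shutdown downstream
            } catch (InterruptedException ignored) { }
        });
        t.start();
    }

    // Wire up: source -> uppercase -> exclaim -> sink, like components on a board.
    static java.util.List<String> run(java.util.List<String> inputs) throws Exception {
        BlockingQueue<String> a = new LinkedBlockingQueue<>();
        BlockingQueue<String> b = new LinkedBlockingQueue<>();
        BlockingQueue<String> c = new LinkedBlockingQueue<>();
        node(a, b, String::toUpperCase);
        node(b, c, s -> s + "!");
        for (String s : inputs) a.put(s);
        a.put(EOS);
        java.util.List<String> out = new java.util.ArrayList<>();
        for (String s = c.take(); !s.equals(EOS); s = c.take()) out.add(s);
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(java.util.List.of("hello", "world")));
        // prints: [HELLO!, WORLD!]
    }
}
```

      Swapping a node or splicing another queue into the wiring changes the network without touching the nodes themselves, which is exactly the hardware-like property.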

      Regards,

      Sanne

        • 1. Re: Oneway invocations/queues/SEDA
          marc fleury Master

          > Hi All,

          Hello Sanne,

          > Below are two links that put together an idea I had a
          > year ago and that I was planning to turn into a PhD
          > thesis; I couldn't get the proposal through, though (I
          > was talking to a brick wall, or then again maybe I am loud).
          >
          > http://www.eecs.harvard.edu/~mdw/proj/seda/index.html
          >

          Yes, the SEDA paper. Matt Welsh.


          > http://www.jpaulmorrison.com/fbp/2001paper.htm

          Interesting.

          > The first link actually states that it is possible to
          > architect a multi-stage process in a very efficient
          > and robust manner by decomposing it into stages (read:
          > interceptors) and putting them on queues, with each
          > queue having a number of threads dedicated to emptying
          > it. Imagine this: if JBoss had 1000 clients, there
          > would not be 1000 threads running around; there would
          > be this huge queue at the beginning, gradually
          > trickling through later stages/queues in an orderly
          > fashion. The response time would certainly go up, but
          > there would be no thread madness.

          Yeah... I am not sure I agree with the paper. It claims that decomposing into stages and adding a thread pool per stage speeds things up (it says the Java web server they wrote is faster than Apache). Frankly, as I have already mentioned, I believe what they are really looking at is the speed of Java :)

          I am serious.

          Java is fast for repetitive server-side stuff, and that is what they are doing. But the work that needs to be done (in this case, serving pages) is the same either way, and thread scheduling is an expense that needs to be accounted for.

          I think I DO buy the feedback-control feature of the thread pool, meaning that the robustness comes from it... maybe. The speed feature? It's bogus.
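          For the record, the feedback-control idea is simply: watch the stage's queue depth and resize its pool within bounds. A hedged sketch (my own, not JBoss code; the one-thread-per-N-queued-events rule is my assumption, not the paper's exact controller):

```java
import java.util.concurrent.*;

// Sketch of SEDA-style feedback control for a stage's thread pool:
// observe queue depth, resize the pool within [min, max].
public class FeedbackPool {
    static int desiredThreads(int queueDepth, int eventsPerThread, int min, int max) {
        int want = (queueDepth + eventsPerThread - 1) / eventsPerThread; // ceiling division
        return Math.max(min, Math.min(max, want));
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        // A periodic controller thread would repeat something like this:
        pool.setCorePoolSize(desiredThreads(pool.getQueue().size(), 10, 1, 8));
        System.out.println("idle -> " + desiredThreads(0, 10, 1, 8) + " threads");
        System.out.println("35 queued -> " + desiredThreads(35, 10, 1, 8) + " threads");
        System.out.println("flooded -> " + desiredThreads(5000, 10, 1, 8) + " threads");
        pool.shutdown();
    }
}
```

          Note the cap at `max` is doing the robustness work: a flood of requests grows the queue, not the thread count.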

          On the pool, the dev list has gone through an extensive discussion, and what came out of it is this: WHO CARES!!!!!! The reason is simple... jboss.org is already sitting at 30% utilization with 300 users in parallel ON A 600MHz BOX FROM THE YEAR 2000, and at 5% utilization on the new DELL box we have at 4 gig... Sacha made a great point on the list: you need more capacity and the threads are running low??? THROW MORE HARDWARE AT IT... so the bottom line here is a double whammy: 1- you don't need fancy protection for 'solidity' in most cases, as you will never run into these limits; 2- if you do, throw more hardware at your problem.

          Even though it is probably a dead issue, we would want to see it at the ports, meaning use 'stages' on the invokers only.

          But making AOP aspects into independent pools? Blah.

          > The second paper talks about a software architecture
          > described as a network of asynchronous nodes/processes
          > having 'ports' to connect to each other. A process can
          > be very concrete, like a printing service, or more
          > abstract, like a switch, a router, a buffer, a queue.
          > This has an almost electronic-component feel to it.
          >
          > Anyway, one could imagine the 1000 clients putting
          > calls on a queue, and the queue sending the
          > invocations through to the next stage, while another
          > queue mixes in the one-way invocations.
          >
          > Software with the feel of hardware.

          I have to read the paper, as you seem to describe a pool at the entry points, like the one described above. Asynchronous calls are really useful; one-way calls especially, as they would enable people to do away with all the JMS/MDB design madness that is prevailing in J2EE land today.
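          For what it's worth, a minimal sketch of what such a one-way call could look like with a plain dynamic proxy standing in for the interceptor stack (`@Oneway`, `Mailer`, and every name here are hypothetical, not an existing JBoss API):

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.concurrent.*;

// Sketch: one-way invocations via a dynamic proxy. Methods tagged @Oneway
// are handed to an executor and the caller returns at once, with no JMS
// queue or MDB in sight. All names are hypothetical illustrations.
public class OnewayProxy {
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Oneway {}

    public interface Mailer {
        @Oneway void send(String msg);   // fire-and-forget
        String status();                 // ordinary synchronous call
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface, ExecutorService pool) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
            new Class<?>[] { iface },
            (proxy, method, args) -> {
                if (method.isAnnotationPresent(Oneway.class)) {
                    pool.execute(() -> {               // detach from the caller's thread
                        try { method.invoke(target, args); }
                        catch (Exception e) { e.printStackTrace(); }
                    });
                    return null;                       // caller does not wait
                }
                return method.invoke(target, args);    // normal two-way invocation
            });
    }

    // Shows the @Oneway call completing off the caller's thread.
    public static boolean demo() throws Exception {
        CountDownLatch delivered = new CountDownLatch(1);
        Mailer real = new Mailer() {
            public void send(String msg) { delivered.countDown(); }
            public String status() { return "ok"; }
        };
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Mailer mailer = wrap(real, Mailer.class, pool);
        mailer.send("hello");                          // returns immediately
        boolean ran = delivered.await(2, TimeUnit.SECONDS);
        pool.shutdown();
        return ran && "ok".equals(mailer.status());
    }

    public static void main(String[] args) throws Exception {
        System.out.println("one-way delivered: " + demo());
        // prints: one-way delivered: true
    }
}
```

          The caller never blocks on `send`, which is the whole point: the plumbing a JMS queue plus MDB normally provides collapses into one annotation and a pool.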

          Viva J2EE++

          >
          > Regards,
          >
          > Sanne