The performance tests should cover the following scenarios:

- non-persistent, no selectors
- persistent, no selectors
- non-persistent, with selectors
- persistent, with selectors
- dups_ok, persistent, no selectors
- auto_ack, persistent, no selectors
- durable, persistent, with selectors
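To make the matrix above explicit, the scenarios could be modelled as simple value objects combining delivery mode, selector usage, acknowledge mode, and durability. This is an illustrative sketch only; the class and field names are hypothetical, not from any existing framework:

```java
// Hypothetical sketch: each scenario is a combination of delivery mode,
// selector usage, acknowledge mode, and (for topics) subscriber durability.
public class Scenarios {
    enum DeliveryMode { PERSISTENT, NON_PERSISTENT }
    enum AckMode { AUTO_ACK, DUPS_OK }

    static class Scenario {
        final DeliveryMode delivery;
        final boolean selectors;
        final AckMode ack;
        final boolean durable;

        Scenario(DeliveryMode delivery, boolean selectors, AckMode ack, boolean durable) {
            this.delivery = delivery;
            this.selectors = selectors;
            this.ack = ack;
            this.durable = durable;
        }

        @Override
        public String toString() {
            return delivery + (selectors ? ", selectors" : ", no selectors")
                 + ", " + ack + (durable ? ", durable" : "");
        }
    }

    public static void main(String[] args) {
        // The scenario list from the post, expressed as data.
        Scenario[] matrix = {
            new Scenario(DeliveryMode.NON_PERSISTENT, false, AckMode.AUTO_ACK, false),
            new Scenario(DeliveryMode.PERSISTENT,     false, AckMode.AUTO_ACK, false),
            new Scenario(DeliveryMode.NON_PERSISTENT, true,  AckMode.AUTO_ACK, false),
            new Scenario(DeliveryMode.PERSISTENT,     true,  AckMode.AUTO_ACK, false),
            new Scenario(DeliveryMode.PERSISTENT,     false, AckMode.DUPS_OK,  false),
            new Scenario(DeliveryMode.PERSISTENT,     true,  AckMode.AUTO_ACK, true),
        };
        for (Scenario s : matrix) {
            System.out.println(s);
        }
    }
}
```

Expressing the matrix as data like this would also let the coordinator pick scenarios by name rather than hard-coding each run.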
We should be able to configure multiple clients and/or servers. Each client should open n connections, each connection creating n sessions, each session creating n producers and/or consumers, which send n messages of n bytes each; all of this should be completely configurable. Each client will keep its own statistics, and a central coordinator will coordinate the work of the clients and collect each client's statistics.
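A minimal sketch of what the per-client configuration and statistics could look like. All names here (Config, Stats, the field names) are hypothetical; in the real harness the send loop would drive actual JMS connections, sessions, and producers rather than the stand-in loop shown:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a configurable client and its statistics.
public class PerfClient {
    // Everything the post asks to be configurable, as one config object.
    static class Config {
        int connections = 1;
        int sessionsPerConnection = 1;
        int producersPerSession = 1;
        int messagesPerProducer = 1000;
        int messageSize = 1024; // bytes
    }

    // Each client keeps its own statistics, to be shipped to the coordinator.
    static class Stats {
        final AtomicLong messagesSent = new AtomicLong();
        final AtomicLong bytesSent = new AtomicLong();
        long startNanos, endNanos;

        void record(int size) {
            messagesSent.incrementAndGet();
            bytesSent.addAndGet(size);
        }

        double throughputMsgsPerSec() {
            double secs = (endNanos - startNanos) / 1e9;
            return secs > 0 ? messagesSent.get() / secs : 0.0;
        }
    }

    public static void main(String[] args) {
        Config cfg = new Config();
        Stats stats = new Stats();
        stats.startNanos = System.nanoTime();
        // Stand-in for the real JMS send loop over
        // connections x sessions x producers x messages.
        long total = (long) cfg.connections * cfg.sessionsPerConnection
                   * cfg.producersPerSession * cfg.messagesPerProducer;
        for (long i = 0; i < total; i++) {
            stats.record(cfg.messageSize);
        }
        stats.endNanos = System.nanoTime();
        System.out.println(stats.messagesSent.get() + " messages, "
                + stats.bytesSent.get() + " bytes");
    }
}
```

Keeping the counters in a single Stats object means the coordinator only has to collect one small object per client at the end of a run.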
How will we run the tests?

We could use the DTF, but I'm not sure it isn't overhead we don't need. The DTF needs maintaining, and since we don't have dedicated machines we would have to configure, start, and stop the DTF nodes, coordinator, and manager every time we ran the tests. Given that, we could just write our own coordinator that does what we need and start each client manually: the coordinator would listen for clients connecting and instruct them which scenarios to run. We could maybe use JGroups for this. It would also mean the tests are easy to run standalone on any machine without needing DTF knowledge.
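To illustrate the shape of that home-grown coordinator, here is a sketch that hands a scenario to each registered client and collects one result per client. An in-process thread pool and queue stand in for the real transport (JGroups or plain sockets); all class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: the coordinator instructs each client which
// scenario to run, then collects the clients' statistics.
public class Coordinator {
    static class Result {
        final String client;
        final String scenario;
        final long messagesSent;

        Result(String client, String scenario, long messagesSent) {
            this.client = client;
            this.scenario = scenario;
            this.messagesSent = messagesSent;
        }
    }

    public static List<Result> run(List<String> clients, String scenario)
            throws InterruptedException {
        BlockingQueue<Result> results = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(clients.size());
        for (String client : clients) {
            // In the real harness this dispatch would go over JGroups or a
            // socket to a remote client; here each "client" is just a task.
            pool.execute(() -> results.add(new Result(client, scenario, 1000)));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        List<Result> collected = new ArrayList<>();
        results.drainTo(collected);
        return collected;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Result> results =
                run(List.of("client-1", "client-2"), "persistent, no selectors");
        System.out.println(results.size() + " results collected");
    }
}
```

The point of the sketch is the control flow (coordinator dispatches, clients report back, coordinator aggregates), which stays the same whichever transport is chosen.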
I'm hoping to make a start on this next week, so comments are welcome.
If DTF is going to help us here, it's worth looking at the JBM performance framework - it does pretty much all the things mentioned in your post: