17 seconds to insert 50,000 messages using JMS with blockOnDurableSend removed
Results:
[java] INFO [main] (PelicanRunner.java:42) - Starting up Pelican Test Code
[java] 2010-09-30 09:05:43
[java] 0 messages inserted
[java] 10000 messages inserted
[java] 20000 messages inserted
[java] 30000 messages inserted
[java] 40000 messages inserted
[java] COMPLETE
[java] 2010-09-30 09:06:00

What do you need from me to proceed to the next step?
That figure would be impossibly high with a normal disk.
You probably have the disk write cache turned on: http://hornetq.sourceforge.net/docs/hornetq-2.1.2.Final/user-manual/en/html/persistence.html#disk-write-cache
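The point behind the disk write cache question: a durable send is only as fast as a real sync to the device, so you can estimate your disk's *true* synced-write rate with a small test. A rough sketch (illustrative, not HornetQ code; record size and counts are arbitrary assumptions):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class SyncRate {
    // Returns synced writes per second for `count` small records.
    static double measure(File file, int count) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(file, "rw");
        FileChannel ch = raf.getChannel();
        ByteBuffer record = ByteBuffer.wrap(new byte[512]);
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            record.rewind();
            ch.write(record);
            ch.force(false);   // sync data to the device before the next write
        }
        double secs = (System.nanoTime() - start) / 1e9;
        ch.close();
        raf.close();
        return count / secs;
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("syncrate", ".dat");
        f.deleteOnExit();
        System.out.printf("%.0f synced writes/s%n", measure(f, 200));
    }
}
```

If the drive's write cache is enabled (or the drive acknowledges syncs it hasn't actually performed), this number comes out implausibly high, which is the point being made about the 17-second figure.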
So shouldn't I see better performance with STOMP if disk cache is on? If I'm trying to write non-persistent messages I fail to see why disk cache would come into play. If I'm sending non-persistent messages why does it touch disk at all?
I will ask the question again: how do I get faster inserts with STOMP, or am I seeing the maximum speed HornetQ can currently provide over the STOMP protocol for non-persistent messages?
Would the REST API provide better performance out of the box than the STOMP implementation?
I'm confused about what you're doing here. First you were using the PHP stomp extension, then STOMPy (I'm not sure which one you're using now).
Initially you were using durable messages; now you're using non-durable?
I was asking about the disk write cache since I wanted to find out what the *real* insert performance you should expect from your disk is.
If you're now talking about non-durable messages, then we need to look at other things.
Please bear in mind that we have no control over how these third party clients are written.
If you can restate your problem using the PHP STOMP extension (since that is what I have set up here), I will take a look tomorrow when I can find some time.
Please state the problem as clearly as possible with a fully working client example, so I don't have to waste cycles trying to second guess what you're doing.
1. I asked about the performance of STOMP, saying I had slow insert performance with both PHP and Python clients, to rule out one specific client being the issue.
2. You told me I was using persistent inserts, so I should expect slow performance.
3. I said OK, changed my code to non-persistent, and updated the JIRA issue with my test case in PHP and JMS along with my server configuration, and I still saw poor performance.
4. You told me it's because my JMS client connection had blockOnDurableSend set to false, so I wasn't really testing it accurately.
5. I said OK, removed that line, and retested the JMS client to show much faster inserts.
6. You then told me that's because the disk write cache is enabled.
7. I questioned that, because I don't see why HornetQ would touch the disk if I'm writing non-persistent messages to memory.
8. So I asked again: based on the client and server configs I provided in my JIRA issue, do I have something set up incorrectly in my client or server config, or is this the expected performance of HornetQ when using STOMP because it's decoding text-based messages?
OK, fine. I'm now just waiting for your PHP test program, as requested, so I can look at this tomorrow.
Otherwise there will be a further 24-hour lag (since you seem to be in the US and I am in Europe).
Ok, running this now.
Initial observations: Yes it's slow!
The current STOMP decoding really sucks. It actually calls decode() twice: once just to see if the whole packet has arrived, and a second time to actually decode it. The first decode result is thrown away.
If that wasn't bad enough, the actual decoding takes, gulp, 6 ms *per message*. This is an age.
The decoding code needs ripping out and rewriting.
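The fix being described is essentially single-pass decoding: scan the buffer once, and either return a fully decoded frame or report that more bytes are needed, instead of decoding twice and discarding the first result. A rough sketch of that idea (illustrative only, not the actual HornetQ codec; frame layout per the STOMP spec: command line, header lines, blank line, body terminated by a NUL byte):

```java
import java.util.HashMap;
import java.util.Map;

public class StompDecoder {
    public static final class Frame {
        public final String command;
        public final Map<String, String> headers;
        public final String body;
        Frame(String command, Map<String, String> headers, String body) {
            this.command = command;
            this.headers = headers;
            this.body = body;
        }
    }

    // Single pass: returns a Frame if a complete frame (terminated by NUL)
    // is present in `data`, or null if more bytes are still needed.
    public static Frame decode(String data) {
        int end = data.indexOf('\0');
        if (end < 0) {
            return null;   // incomplete: caller keeps the buffer and waits
        }
        String[] lines = data.substring(0, end).split("\n", -1);
        String command = lines[0].trim();
        Map<String, String> headers = new HashMap<String, String>();
        int i = 1;
        // Assumes well-formed "key:value" header lines, for brevity.
        for (; i < lines.length && !lines[i].isEmpty(); i++) {
            int colon = lines[i].indexOf(':');
            headers.put(lines[i].substring(0, colon), lines[i].substring(colon + 1));
        }
        // Everything after the blank line, up to the NUL, is the body.
        StringBuilder body = new StringBuilder();
        for (int j = i + 1; j < lines.length; j++) {
            if (j > i + 1) body.append('\n');
            body.append(lines[j]);
        }
        return new Frame(command, headers, body.toString());
    }
}
```

A real decoder works on bytes rather than strings, honors the content-length header for binary bodies, and keeps partial parse state between network reads; this sketch only shows the one-scan structure.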
It's my fault, I should have reviewed this code when it was "completed".
There are also other problems, raised in other issues, such as:
1) No cleanup on connection-ttl for STOMP connections
2) No flow control, which can lead to OOM.
I will have to fix these issues too.
Apologies for this.
I am rewriting it now. The code is doing really crazy things, like allocating a 10 MiB buffer for each message received (which will clearly kill performance).
Anyway, rewriting it now.
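For contrast with the 10 MiB-per-message allocation described above, a decoder can size the body buffer from the frame's content-length header, falling back to a small default when the header is absent. A hedged sketch of that sizing rule (not the actual HornetQ code; the default capacity is an arbitrary assumption):

```java
import java.nio.ByteBuffer;

public class BodyBuffer {
    static final int DEFAULT_CAPACITY = 1024; // assumed default when no content-length

    // Allocate only what the frame declares, not a worst-case constant.
    static ByteBuffer forFrame(String contentLengthHeader) {
        int size = DEFAULT_CAPACITY;
        if (contentLengthHeader != null) {
            size = Integer.parseInt(contentLengthHeader.trim());
        }
        return ByteBuffer.allocate(size);
    }

    public static void main(String[] args) {
        System.out.println(forFrame("5").capacity());   // sized to the declared body
        System.out.println(forFrame(null).capacity());  // small default, grown on demand
    }
}
```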
I have rewritten the decoding part of the codec.
On my machine it is *several hundred* times faster now
Please try it out.
A couple of things to note about STOMP performance:
1) STOMP will probably always be slower than the core protocol, since it involves extra encoding/decoding to convert to and from core messages on the server. We don't have to do this with the core protocol.
2) Performance is a function of both the server *and* the client. A poorly written STOMP client, or a STOMP client written in a slow language for example can destroy performance irrespective of the quality of the implementation on the server.
3) There is scope for further optimisation, especially around encoding for delivery and header conversion.
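Point 1 is an inherent cost: every STOMP SEND has to be mapped onto a core message before it can be routed. A toy illustration of that per-message header conversion (the real mapping in HornetQ is more involved; the key names used here are assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class StompToCore {
    // Map STOMP frame headers onto core-message fields; anything
    // unrecognised is carried along as a plain message property.
    static Map<String, Object> convert(Map<String, String> stompHeaders) {
        Map<String, Object> core = new HashMap<String, Object>();
        for (Map.Entry<String, String> e : stompHeaders.entrySet()) {
            String k = e.getKey();
            if (k.equals("destination")) {
                core.put("address", e.getValue());           // routing address
            } else if (k.equals("persistent")) {
                core.put("durable", Boolean.valueOf(e.getValue()));
            } else {
                core.put(k, e.getValue());                   // pass-through property
            }
        }
        return core;
    }
}
```

Even when each conversion is cheap, it runs once per message in both directions, which is why it is a natural optimisation target.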
Glad you were able to resolve the issue to a usable state. Is there a nightly build I can download, or should I pull from trunk?
Kudos to Tim!
0.76 seconds to insert 50,000 items now with the latest trunk code! I ran the same script that I used on the prior version. Definitely acceptable performance now.

php stomptest_nondurable.php
0.764438867569 seconds to insert 50,000 items
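For scale, the quoted run works out to roughly 65,000 inserts per second:

```java
public class InsertRate {
    // Messages per second for a timed run.
    static double perSecond(int messages, double seconds) {
        return messages / seconds;
    }

    public static void main(String[] args) {
        // Timing reported above for the rewritten decoder.
        System.out.printf("%.0f inserts/s%n", perSecond(50000, 0.764438867569));
    }
}
```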
I copied my same config files and run.sh file over to the new build so it should be an exact match setup wise. Same box, etc...
Hrm, I'm getting some strange behavior with the new build.
When enqueuing 2,000 items from Java via JMS and then trying to dequeue them from PHP using STOMP, the dequeue client freezes at 752 items, and the JMX commands no longer work (as seen in this screenshot), which used to always work. Are the unit tests passing for the new build?
javax.jms.JMSException: Timed out waiting for response when sending packet 40
It sounds like something around flow control, or window size?
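If it is flow control, one thing worth checking (an assumption, not a confirmed diagnosis) is the consumer window size on the connection factory, e.g. in hornetq-jms.xml; setting it to 0 disables client-side message buffering, which changes behavior with slow consumers. A hypothetical fragment, with connector and entry names assumed:

```xml
<!-- hornetq-jms.xml: hypothetical connection-factory tuning -->
<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty"/>
   </connectors>
   <entries>
      <entry name="ConnectionFactory"/>
   </entries>
   <!-- 0 = no client-side message buffering; -1 = unbounded -->
   <consumer-window-size>0</consumer-window-size>
</connection-factory>
```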