Try it with 3.2.2
There was a fix in that version for a long-standing problem where connection.start() wasn't handled correctly: the MDB's subscription was never enabled.
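For context on that failure mode: in JMS, no messages are delivered to a consumer until Connection.start() is called, so a container that misses the call leaves an apparently healthy MDB that never receives anything. The sketch below is a minimal plain-Java analogy of that behavior (not the actual JBoss or javax.jms classes; FakeConnection is invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

public class StartGateDemo {
    // Hypothetical stand-in for a JMS connection: delivery is gated on start().
    static class FakeConnection {
        private final Queue<String> pending = new ArrayDeque<>();
        private Consumer<String> listener;
        private boolean started = false;

        void setListener(Consumer<String> l) { listener = l; deliver(); }
        void send(String msg) { pending.add(msg); deliver(); }

        // Analogous to javax.jms.Connection.start(): enables message delivery.
        void start() { started = true; deliver(); }

        private void deliver() {
            // Until start() is called, the "subscription" is effectively disabled.
            if (!started || listener == null) return;
            while (!pending.isEmpty()) listener.accept(pending.poll());
        }
    }

    public static void main(String[] args) {
        FakeConnection conn = new FakeConnection();
        List<String> received = new ArrayList<>();
        conn.setListener(received::add);

        conn.send("m1");
        conn.send("m2");
        // The listener is registered, but nothing arrives: messages pile up.
        System.out.println("before start: " + received.size());

        conn.start();
        System.out.println("after start: " + received.size());
    }
}
```

Before start() the listener looks "active" yet receives nothing, which matches the symptom reported in this thread.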
Thanks for your reply.
In your response you stated "The MDB's subscription was never enabled."
In our scenario, the MDB consumes messages without any problem for days, if not weeks, and then one day stops doing so for no apparent reason (the console shows it as active).
Does the fix you mentioned cover this scenario?
If yes, we would be OK, as we are planning to move to JBoss 3.2.3 in the near future.
No it does not, but there is another fix in 3.2.3 that does.
Sounds good. Will move to 3.2.3 as early as possible.
Thanks again Adrian
With reference to the previous posts:
We moved to JBoss 3.2.3 a month back. Everything was going fine until this morning, when we received alerts that messages were building up in spite of the MDB being active.
On investigating the logs, I found:
2004-03-26 07:50:04,217 INFO Thread Pool Worker-496 MDB received JMS object message with JMS ID ID:31-1080287404179103
2004-03-26 07:50:04,429 INFO Thread Pool Worker-496 Processing of message [JMSMessageId=ID:31-1080287404179103] succeeded
2004-03-26 07:56:00,542 INFO Thread Pool Worker-498 MDB received JMS object message with JMS ID ID:29-1080287760462649
Our single MDB instance was using different threads from the "Thread Pool Worker" pool. Most of the time the switch was quick; in this case, however, the MDB took six minutes to move to a new thread.
During these six minutes, around 800 messages piled up on the queue.
To make things worse, there were similar pauses immediately afterwards too (ranging from 40 seconds to 3 minutes).
Is this a case of an MDB instance being destroyed by the container so that a new instance can be started?
Or is the server starving for execute threads?
Any help would be highly appreciated.
I checked in a newer version of concurrent.jar that will fix the problem. It will be bundled with JBoss 3.2.4 when released. If you like, you can download this jar from CVS or build it yourself.
This fixes this problem:
"PooledExecutor: Create new threads if needed when terminating. (Thanks to Bruno Dumon), and replace dying thread if it is only one."
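For context, the quoted fix concerns Doug Lea's PooledExecutor (the EDU.oswego concurrent.jar bundled with JBoss 3.2): per the changelog above, a dying worker thread was not replaced if it was the pool's only one, so queued work could sit unserviced. The later java.util.concurrent.ThreadPoolExecutor (Java 5+) does replace a worker that dies from an uncaught exception. This self-contained sketch illustrates that replacement behavior (it is an analogy, not the JBoss or concurrent.jar code):

```java
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WorkerReplacementDemo {
    public static void main(String[] args) throws Exception {
        // Quiet factory: swallow the demo's deliberate uncaught exception.
        ThreadFactory tf = r -> {
            Thread t = new Thread(r);
            t.setUncaughtExceptionHandler((thread, ex) -> { /* ignored for demo */ });
            return t;
        };

        // A pool with a single worker thread, like an MDB with one session.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(), tf);

        // This task kills its worker thread with an uncaught exception.
        pool.execute(() -> { throw new RuntimeException("worker dies"); });

        // ThreadPoolExecutor spawns a replacement worker, so later work still
        // runs. The old PooledExecutor bug left the pool with zero threads here.
        Future<String> result = pool.submit(() -> "still alive");
        System.out.println(result.get(5, TimeUnit.SECONDS));

        pool.shutdown();
    }
}
```

If the sole worker were not replaced, the second task would sit on the queue forever, which is exactly the "messages pile up while the MDB looks active" symptom in this thread.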
I will give this a try.
As we are having the same problem, I would like to hear whether this is working for you. Adrian wrote "Anyway, the bug still exists in concurrent.jar (even the latest version)." in http://jboss.org/index.html?module=bb&op=viewtopic&p=3837215 — is this correct?
Martin Husted Hartvig
I can't see what's wrong about asking; besides, I made my own thread almost 3 months ago:
No one posted a reply! WHAT AM I SUPPOSED TO DO? WAIT? I know you think I violated the "DO NOT POST USER QUESTIONS HERE" and the "DO NOT HIJACK" rules, but I was getting nowhere.
Your original post is a long-winded version of
"I am trying an old version of JBoss and MDBs don't work. I will try a newer version".
i.e. it falls into the "IT DOES NOT WORK" category.
It is not very surprising you got ignored.
We are experiencing the same problem with JBoss 3.2.2 and JBoss 3.2.3. The symptom is that an MDB that has been working fine for days suddenly stops receiving messages. At that point the jmx-console lists 0 receivers for the queue, and the queue starts building up.
Based on the posts I have read, I'm going to download the new concurrent.jar that comes with 3.2.4 and see if it works. I'm also thinking about increasing the pool size of my MDB so that, hopefully, this won't happen to all the MDBs. Is that a reasonable thing to do, or am I being naive here?
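For anyone considering the pool-size route: in JBoss 3.2 the MDB instance pool is bounded in the container configuration (conf/standardjboss.xml, or overridden per bean in jboss.xml). The fragment below is a sketch from memory of the 3.2 configuration format, so verify the element names against your own standardjboss.xml before relying on it:

```xml
<!-- Fragment of the "Standard Message Driven Bean" container configuration.
     MaximumSize bounds the MDB instance pool; values here are illustrative. -->
<container-configuration>
  <container-name>Standard Message Driven Bean</container-name>
  <container-pool-conf>
    <MaximumSize>100</MaximumSize>
  </container-pool-conf>
</container-configuration>
```

Note that a bigger instance pool may only mask the symptom if the stall is in the shared worker thread pool, since every session still draws its threads from there.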
Thanks, Adrian, for your patience, and thanks for the previous postings. I will post my results after I try this solution; I did not see anybody else post their results for this problem.
Dropping the new concurrent.jar into JBoss 3.2.3 does not work. The message-driven bean stopped picking up messages after running for a while. This is really disappointing.
Has anybody tried this on 3.2.4? I know Adrian said it is not completely fixed even with the new code, but I just wonder how much it improves.
There are a few more fixes I made for 3.2.4 that may have fixed this problem for you. I had the same problem you describe, and it hasn't come back under the (heavily patched) 3.2.3 build I use.