3 Replies Latest reply on Jan 4, 2010 1:49 AM by Yong Yao

    Problem for Clustered JMS lookup.

    Yong Yao Newbie

      Hi, All,

       

      I just began working with JBoss to set up a JMS cluster.

       

      I copied server/all to server/tmp.

      I started JBoss with the following command:

      ./bin/run.sh -c tmp -g NICEPartition -u 239.255.100.100 -b 10.131.21.10 -Djboss.messaging.ServerPeerID=1 &

       

      And I have a client program which sends messages to a queue; the code is as follows:

       

              import java.util.Properties;
              import javax.naming.Context;
              import javax.jms.QueueConnectionFactory;

              // Build the JNDI environment pointing at the HA-JNDI port (1100).
              Properties ht = new Properties();
              ht.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
              ht.put(Context.PROVIDER_URL, "jnp://10.131.21.10:1100");
              ht.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");

              Context jndi = new javax.naming.InitialContext(ht);
              System.out.println("##### JMS Context initialized!");
              queueFactory_ = (QueueConnectionFactory) jndi.lookup("ConnectionFactory");

       

      But I always got the following error:

      javax.naming.CommunicationException: Could not obtain connection to any of these urls: 10.131.21.10:1100 and discovery failed with error: javax.naming.CommunicationException: Receive timed out [Root exception is java.net.SocketTimeoutException: Receive timed out] [Root exception is javax.naming.CommunicationException: Failed to retrieve stub from server 10.131.21.10:1100 [Root exception is java.io.StreamCorruptedException: unexpected block data]]
              at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1727)
              at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:680)
              at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:673)
              at javax.naming.InitialContext.lookup(InitialContext.java:392)

      But if I changed PROVIDER_URL to 10.131.21.10:1099, everything was OK.

       

      As we know, 1099 is not the clustered (HA-JNDI) port. But I need the clustered one, i.e. port 1100.

       

      I am working with JBoss 5.0 and using the default Hypersonic datasource.

       

      Regards

      Yong

        • 1. Re: Problem for Clustered JMS lookup.
          Yong Hao Gao Master

          Please read the JBoss documentation on how to use HA-JNDI.

           

          JBM clustering doesn't need HA-JNDI to work. Just make sure you set up the JBM cluster correctly as per the docs and get connections using a clustered connection factory. You can still use normal JNDI (port 1099) to look up a clustered connection factory if you want.
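
          For example, a minimal lookup over plain JNDI might look like the sketch below. Note this is only a sketch: "/ClusteredConnectionFactory" is the binding the stock JBM connection-factories deployment uses for its clustered factory, so verify the actual name against your deployment.

          ```java
          import java.util.Properties;
          import javax.naming.Context;

          // Sketch: build a plain-JNDI (port 1099) environment; the clustered
          // connection factory can then be looked up once the server is running.
          public class ClusteredFactoryLookup {
              static Properties jndiEnv(String host) {
                  Properties env = new Properties();
                  env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
                  env.put(Context.PROVIDER_URL, "jnp://" + host + ":1099"); // plain JNDI, not HA-JNDI
                  env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
                  return env;
              }

              public static void main(String[] args) {
                  System.out.println(jndiEnv("10.131.21.10").getProperty(Context.PROVIDER_URL));
                  // Against a running server you would continue with:
                  //   Context ctx = new javax.naming.InitialContext(jndiEnv("10.131.21.10"));
                  //   QueueConnectionFactory cf =
                  //       (QueueConnectionFactory) ctx.lookup("/ClusteredConnectionFactory");
              }
          }
          ```

          Because the factory itself is clustered, connections created from it are load-balanced across the live nodes regardless of which JNDI port you used for the lookup.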

           

          Howard

          • 2. Re: Problem for Clustered JMS lookup.
            Yong Yao Newbie

            Hi, Howard,

             

            Thanks a lot for your reply.

             

            I also found that 1099 can work; for example, I can set PROVIDER_URL to 10.131.21.9:1099,10.131.21.10:1099, and when one node is down, the messages are handled by the other. But the problem is that this is not a real cluster: there is no load balancing, because only one node handles the messages, and only when the master node goes down does the other node start handling them. Does this mean I made a mistake in my cluster configuration?
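
            For reference, the environment I tried can be sketched like this (the hosts are the two nodes in my setup; as far as I understand, the jnp naming client simply walks the comma-separated list until one URL answers, so this gives failover only, not load balancing):

            ```java
            import java.util.Properties;
            import javax.naming.Context;

            // Sketch: JNDI environment listing both nodes; the naming client
            // tries each URL in order until one responds (failover only).
            public class FailoverEnv {
                static Properties build() {
                    Properties env = new Properties();
                    env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
                    env.put(Context.PROVIDER_URL, "10.131.21.9:1099,10.131.21.10:1099");
                    env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
                    return env;
                }

                public static void main(String[] args) {
                    System.out.println(build().getProperty(Context.PROVIDER_URL));
                }
            }
            ```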

             

            I use message-driven beans, but I did not find any configuration for them in the JBoss Clustering Guide; it only covers session beans...

             

            For example, in section 1.1.4, "EJB Session Bean Clustering Quick Start", there is a clustered child element for session beans, but none for message-driven beans.

             

            Regards

            Yong

            • 3. Re: Problem for Clustered JMS lookup.
              Yong Yao Newbie

              Hi, Howard,

               

              You are right. My mistake was that I did not configure the post office bean as clustered. After fixing that, it works now.
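
              For anyone who hits the same thing: the setting lives in the post office MBean config under deploy/messaging (in my case the Hypersonic variant, hsqldb-persistence-service.xml). Below is a sketch of the relevant fragment; the exact file name and surrounding attributes depend on your datasource, so check your own copy:

              ```xml
              <!-- Post office MBean (fragment): Clustered must be true on every
                   node, and each node needs a unique jboss.messaging.ServerPeerID. -->
              <attribute name="Clustered">true</attribute>
              ```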

               

              Regards

              Yong