4 Replies Latest reply on Apr 7, 2009 7:59 AM by Frank Henry

    Improvement suggestions: Messaging in a clustered environment

    Frank Henry Novice

      I am currently setting up a system where we will have multiple JBoss AppServers (4.2.3) and one DB in the backend.

      I have configured everything to use the single DB and the DefaultDS is also mapped to it.

      The AppServers will each have the following:
      * one Queue deployed in deploy-hasingleton for 'internal' messages (communication between services, we only want one service to handle the event)
      * one Topic deployed in deploy-hasingleton for 'external' messages (multiple possible listeners)
      * Our services ((M)Beans)

      I have one convenience class that supplies a method to send messages to a given destination; it is used by all services.
      Due to the clustered nature of the setup, and because the destinations are HA singletons, I (after much cursing and browsing) use an HA-JNDI lookup to get the ConnectionFactory, which then sends out the (sessioned) message. The lookup uses bind.address + port; the example uses 'localhost', but that might not be the bind address.
      Otherwise, if round-robin sends the RMI connection to an AppServer that does not have the ConnectionFactory, it will not be able to find the destinations and will throw an exception.
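      For anyone reading along, the HA-JNDI lookup described above can be sketched roughly like this. The bind address is a placeholder; the context factory, URL package prefixes, and port 1100 are the JBoss 4.2.x defaults, so check them against your own configuration:

```java
import java.util.Properties;
import javax.naming.Context;

public class HaJndiEnv {
    // Builds the JNDI environment for an HA-JNDI lookup against a JBoss 4.2.x
    // cluster. HA-JNDI listens on port 1100 by default, and the host should be
    // the node's bind address, not "localhost".
    public static Properties haJndiEnv(String bindAddress) {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        env.put(Context.PROVIDER_URL, bindAddress + ":1100");
        return env;
    }

    public static void main(String[] args) {
        // Hypothetical bind address; substitute your node's jboss.bind.address.
        Properties env = haJndiEnv("192.168.0.10");
        System.out.println(env.getProperty(Context.PROVIDER_URL));
        // With a running cluster you would then continue with:
        //   Context ctx = new InitialContext(env);
        //   ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
    }
}
```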

      So far everything is working (as far as I can tell), but I am wondering whether there are any improvements I could make.

      For instance, the lookup of the ConnectionFactory: is this the correct way to do it?

      Thanks

        • 1. Re: Improvement suggestions: Messaging in a clustered environment
          Adrian Brock Master

          bind.address:1100 should always be able to see things bound into jndi somewhere in the cluster.

          Improvements:

          * Use hajndi-jms-ds.xml to create a pool of connections (this is what is used in the "all" config anyway) bound at connection factory java:/JmsXA.
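
          A sketch of what that deployment descriptor contains, based on the stock jms-ds.xml shipped with the 4.x "all" configuration; treat this as illustrative and use the actual file from docs/examples/jca rather than copying it:

```xml
<connection-factories>
  <!-- Pooled, XA-capable JMS connection factory bound at java:/JmsXA -->
  <tx-connection-factory>
    <jndi-name>JmsXA</jndi-name>
    <xa-transaction/>
    <rar-name>jms-ra.rar</rar-name>
    <connection-definition>org.jboss.resource.adapter.jms.JmsConnectionFactory</connection-definition>
    <config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
    <!-- The HA variant points this provider adapter at HA-JNDI -->
    <config-property name="JmsProviderAdapterJNDI" type="java.lang.String">java:/DefaultJMSProvider</config-property>
    <max-pool-size>20</max-pool-size>
  </tx-connection-factory>
</connection-factories>
```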

           * Use JNDI tricks to use the in-memory connector when co-located with the node on which the singleton JMS server is running:

           <mbean code="org.jboss.naming.LinkRefPairService"
                  name="jboss.jms:alias=MyConnectionFactory">
             <!-- JNDI binding MyConnectionFactory -->
             <attribute name="JndiName">MyConnectionFactory</attribute>
             <!-- Use TCP/IP when remote -->
             <attribute name="RemoteJndiName">ConnectionFactory</attribute>
             <!-- Use the in-memory connector when local -->
             <attribute name="LocalJndiName">java:/ConnectionFactory</attribute>
             <depends>jboss:service=Naming</depends>
           </mbean>



          * Use JBoss Messaging to get a proper clustered solution


          • 2. Re: Improvement suggestions: Messaging in a clustered environment
            Frank Henry Novice

             

            "adrian@jboss.org" wrote:
            bind.address:1100 should always be able to see things bound into jndi somewhere in the cluster.

            Yes, it works with bind.address.
            I was just pointing out that the example uses 'localhost', which would give wrong results if JBoss is bound to a different address.

            "adrian@jboss.org" wrote:

            * Use JBoss Messaging to get a proper clustered solution

            Would this save me the hassle of doing everything below?

            "adrian@jboss.org" wrote:

            Improvements:

            * Use hajndi-jms-ds.xml to create a pool of connections (this is what is used in the "all" config anyway) bound at connection factory java:/JmsXA.

            Might you have an example of this?
            Or do I not need to do anything except use java:/JmsXA instead of ConnectionFactory?

            "adrian@jboss.org" wrote:

            * Use JNDI tricks to use the in-memory connector when co-located with the node on which the singleton JMS server is running:
             <mbean code="org.jboss.naming.LinkRefPairService"
                    name="jboss.jms:alias=MyConnectionFactory">
               <!-- JNDI binding MyConnectionFactory -->
               <attribute name="JndiName">MyConnectionFactory</attribute>
               <!-- Use TCP/IP when remote -->
               <attribute name="RemoteJndiName">ConnectionFactory</attribute>
               <!-- Use the in-memory connector when local -->
               <attribute name="LocalJndiName">java:/ConnectionFactory</attribute>
               <depends>jboss:service=Naming</depends>
             </mbean>



            Thanks for the help!

            • 3. Re: Improvement suggestions: Messaging in a clustered environment
              Adrian Brock Master

               

              "FrankTheTank" wrote:

              * Use hajndi-jms-ds.xml to create a pool of connections (this is what is used in the "all" config anyway) bound at connection factory java:/JmsXA.

              Might you have an example for this?


              It's in docs/examples/jca in the JBoss download.

              • 4. Re: Improvement suggestions: Messaging in a clustered environment
                Frank Henry Novice

                 

                "adrian@jboss.org" wrote:
                "FrankTheTank" wrote:

                * Use hajndi-jms-ds.xml to create a pool of connections (this is what is used in the "all" config anyway) bound at connection factory java:/JmsXA.

                Might you have an example for this?


                It's in docs/examples/jca in the JBoss download.

                Great, thanks.

                I have already tried out JBoss Messaging for 4.2.3, and after following the guide it seems to run OK.

                Two issues popped up, but I was able to resolve them:
                1) Each node needs a unique ID.
                2) I had already configured the datasources for clustering (no longer using Hypersonic), but I still had to replace my *-persistence-service.xml with the one from the examples folder because of DB configuration issues.
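
                On point 1, the unique ID is the ServerPeerID attribute in messaging-service.xml. A sketch of the relevant excerpt; the system-property form is how the clustered examples parameterize it, and the value must differ per node:

```xml
<!-- messaging-service.xml (excerpt): give each node its own ID,
     e.g. by starting it with -Djboss.messaging.ServerPeerID=1 -->
<attribute name="ServerPeerID">${jboss.messaging.ServerPeerID:0}</attribute>
```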

                Just a heads-up for anyone who might be thinking of doing a similar migration.

                Thanks for the help!