3 Replies Latest reply on Feb 27, 2013 1:11 AM by drisal

    HornetQ Clustering on two JBOSS instances on same machine

    ybhate

      Hi,

      I have searched the forum and did not find an answer to this particular problem, though I did pick up a lot of other pointers.

      It would be nice if someone could give me some clear direction.

      Here is my problem:

       

      I have one Windows desktop and I am running two JBoss instances on it (using port bindings, so the two instances run on different ports). Each JBoss instance has HornetQ inside it. I have configured unique acceptor ports for the two HornetQ servers so they don't clash. I am providing my hostname (say xyz; it is the same for both because they are on the same machine) in the acceptors and connectors of both configurations.

       

      Now I turn the clustered property to true and provide the clustering settings. The first instance points to the second and the second points to the first; note that I am using static connectors, not UDP discovery.
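
      To make this concrete, the cluster connection on instance 1 looks something like the following (a sketch with illustrative connector names and ports, not my exact file):

      <connectors>
         <!-- this server's own connector -->
         <connector name="netty">
            <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
            <param key="host" value="xyz"/>
            <param key="port" value="5445"/>
         </connector>
         <!-- connector pointing at instance 2 on the same host -->
         <connector name="instance2-connector">
            <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
            <param key="host" value="xyz"/>
            <param key="port" value="6445"/>
         </connector>
      </connectors>

      <cluster-connections>
         <cluster-connection name="my-cluster">
            <address>jms</address>
            <connector-ref>netty</connector-ref>
            <static-connectors>
               <connector-ref>instance2-connector</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>

      Instance 2 mirrors this with the two ports swapped.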

       

      Now I have a third JBoss instance on the same machine which acts as a client for these two clustered instances. This client instance sends messages to one of the clustered servers over a core bridge (I am using local queues on the client instance which forward messages to the remote queues on instance 1). Since instances 1 and 2 are clustered, I expect the messages to be load balanced round-robin between them. But unfortunately instance 1 receives all the messages and instance 2 receives none at all. (Note that I have consumers on both instances.)
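
      The bridge on the client instance is configured along these lines (queue and connector names here are placeholders):

      <bridges>
         <bridge name="my-bridge">
            <queue-name>jms.queue.LocalQueue</queue-name>
            <forwarding-address>jms.queue.RemoteQueue</forwarding-address>
            <reconnect-attempts>-1</reconnect-attempts>
            <static-connectors>
               <connector-ref>instance1-connector</connector-ref>
            </static-connectors>
         </bridge>
      </bridges>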

       

      Here is what I have verified:

      1. Both HornetQ instances are running on unique ports.

      2. Both can individually accept messages.

      3. While starting JBoss, the logs correctly show that the instances are started with clustered=true.

      4. The connection between the two instances is good (because when I shut down either instance, the other automatically complains that it cannot connect).

       

      Now I think that because I am using the same hostname in the connectors and acceptors of both instances, the balancing/distribution might not be working. I want to know whether I am right.

       

      So is it true that, for clustering to work, HornetQ needs the two HornetQ servers to be on different IP addresses, and that they cannot be on the same IP address?

       

      I hope I have explained my problem satisfactorily. I have also read the HornetQ documentation thoroughly and can get a lot of things working, so even though I am a newbie I know what problems to look for. But this particular problem is beyond my understanding, and hence I request someone from the HornetQ dev team to help me out.

       

      Thanks for your time and help in advance.

       


        • 1. Re: HornetQ Clustering on two JBOSS instances on same machine
          ataylor

          So is it true that, for clustering to work, HornetQ needs the two HornetQ servers to be on different IP addresses, and that they cannot be on the same IP address?

          No, that is not true. You can have them all on the same IP address, as long as they are on different ports.
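
          For example, two acceptors along these lines can coexist happily (a sketch; the host and ports are made up):

          <!-- server A's acceptor -->
          <acceptor name="netty">
             <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
             <param key="host" value="192.168.0.10"/>
             <param key="port" value="5445"/>
          </acceptor>

          <!-- server B's acceptor: same IP, different port -->
          <acceptor name="netty">
             <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
             <param key="host" value="192.168.0.10"/>
             <param key="port" value="5446"/>
          </acceptor>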

          • 2. Re: HornetQ Clustering on two JBOSS instances on same machine
            ybhate

            Thanks Andy. Yes, I thought so: it should work on the same IP provided I have different listening ports. But it still did not work. Then I realized that I had created the multiple HornetQ server profiles on my JBoss instance by copying the default-with-hornetq folder twice and renaming the copies default-with-hornetq-1 and default-with-hornetq-2. That was the real problem: I still had the same bindings and journal files.

             

            Somewhere on this forum I saw that if I copy the profile folders then I should clear the data folder in the copies. I did that, i.e. I deleted all the files under the data folder, and then restarted all my instances.

             

            And now it works like a charm!
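
            For anyone hitting the same thing: as far as I can tell the node id lives in the journal/bindings data, so the directories to clear in each copied profile are the ones named by these elements in hornetq-configuration.xml (the paths below are the stand-alone defaults; in a JBoss profile they resolve under that profile's own data folder, which is exactly what gets duplicated when you copy the profile):

            <paging-directory>${data.dir:../data}/paging</paging-directory>
            <bindings-directory>${data.dir:../data}/bindings</bindings-directory>
            <journal-directory>${data.dir:../data}/journal</journal-directory>
            <large-messages-directory>${data.dir:../data}/large-messages</large-messages-directory>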


            Thanks for your help, and thanks to the others on this forum as well.

             

            BTW, I am using HornetQ 2.2.5, and the user guide mentions that there is a backup-connector facility for bridges with static connectors. But the truth is that your XSD does not allow that option, and evidently it is not implemented. That would be a very important feature to have.


            -Yogesh

            • 3. Re: HornetQ Clustering on two JBOSS instances on same machine
              drisal

              Hello Andy,

               

              For the last few days I have been having trouble running two instances of HornetQ, and I have also failed to run the clustered HornetQ example.

              The steps I went through are:

              1. Downloaded HornetQ 2.2.5.
              2. Created two instances, hornetqa and hornetqb.
              3. Changed run.sh of hornetqa to:

                         export CLUSTER_PROPS="-Djnp.port=1099 -Djnp.rmiPort=1098 -Djnp.host=192.168.72.11 -Dhornetq.remoting.netty.host=192.168.72.11 -Dhornetq.remoting.netty.port=5445 -     Dhornetq.remoting.netty.batch.port=5455"

                 4. Changed run.sh of hornetqb to:

                       export CLUSTER_PROPS="-Djnp.port=2099 -Djnp.rmiPort=2098 -Djnp.host=192.168.72.11 -Dhornetq.remoting.netty.host=192.168.72.11 -Dhornetq.remoting.netty.port=6445 -Dhornetq.remoting.netty.batch.port=6455"

               

                  5. Configured the hornetq-configuration.xml file for both the hornetqa and hornetqb instances as the default, only changing the connector and acceptor ports of hornetqb to 6445 and 6455:

                       

              <connectors>     

                    <connector name="netty">

                       <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

                       <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

                       <param key="port"  value="${hornetq.remoting.netty.port:6445}"/>

                    </connector>

                   

                    <connector name="netty-throughput">

                       <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

                       <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

                       <param key="port"  value="${hornetq.remoting.netty.batch.port:6455}"/>

                       <param key="batch-delay" value="50"/>

                    </connector>

                 </connectors>


                 <acceptors>

                    <acceptor name="netty">

                       <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>

                       <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

                       <param key="port"  value="${hornetq.remoting.netty.port:6445}"/>

                    </acceptor>

                   

                    <acceptor name="netty-throughput">

                       <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>

                       <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

                       <param key="port"  value="${hornetq.remoting.netty.batch.port:6455}"/>

                       <param key="batch-delay" value="50"/>

                       <param key="direct-deliver" value="false"/>

                    </acceptor>

                 </acceptors>


              The <broadcast-groups>, <discovery-groups> and <cluster-connections> configurations are the same as the default for both instances.
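
              For reference, those default sections look roughly like this (quoted from memory, so treat it as a sketch rather than the exact shipped file):

              <broadcast-groups>
                 <broadcast-group name="bg-group1">
                    <group-address>231.7.7.7</group-address>
                    <group-port>9876</group-port>
                    <broadcast-period>5000</broadcast-period>
                    <connector-ref>netty</connector-ref>
                 </broadcast-group>
              </broadcast-groups>

              <discovery-groups>
                 <discovery-group name="dg-group1">
                    <group-address>231.7.7.7</group-address>
                    <group-port>9876</group-port>
                    <refresh-timeout>10000</refresh-timeout>
                 </discovery-group>
              </discovery-groups>

              <cluster-connections>
                 <cluster-connection name="my-cluster">
                    <address>jms</address>
                    <connector-ref>netty</connector-ref>
                    <discovery-group-ref discovery-group-name="dg-group1"/>
                 </cluster-connection>
              </cluster-connections>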

               

              6. Now, running both instances, I got no cluster information at all.

                  Both logs look like this:


              For hornetqa:

              [jboss@RHEL_direintegration bin]$ ./run.sh

              ***********************************************************************************

              java -Djnp.port=1099 -Djnp.rmiPort=1098 -Djnp.host=192.168.72.11 -Dhornetq.remoting.netty.host=192.168.72.11 -Dhornetq.remoting.netty.port=4445 -Dhornetq.remoting.netty.batch.port=4455 -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Dhornetq.config.dir=../config/stand-alone/clustered -Djava.util.logging.config.file=../config/stand-alone/clustered/logging.properties -Djava.library.path=. -classpath ../lib/twitter4j-core.jar:../lib/netty.jar:../lib/jnpserver.jar:../lib/jnp-client.jar:../lib/jboss-mc.jar:../lib/jboss-jms-api.jar:../lib/hornetq-twitter-integration.jar:../lib/hornetq-spring-integration.jar:../lib/hornetq-logging.jar:../lib/hornetq-jms.jar:../lib/hornetq-jms-client-java5.jar:../lib/hornetq-jms-client.jar:../lib/hornetq-jboss-as-integration.jar:../lib/hornetq-core.jar:../lib/hornetq-core-client-java5.jar:../lib/hornetq-core-client.jar:../lib/hornetq-bootstrap.jar:../config/stand-alone/clustered:../schemas/ org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml

              ***********************************************************************************

              [main] 11:01:03,818 INFO [org.hornetq.integration.bootstrap.HornetQBootstrapServer]  Starting HornetQ Server

              [main] 11:01:05,267 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  live server is starting with configuration HornetQ Configuration (clustered=true,backup=false,sharedStore=true,journalDirectory=../data/journal,bindingsDirectory=../data/bindings,largeMessagesDirectory=../data/large-messages,pagingDirectory=../data/paging)

              [main] 11:01:05,268 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  Waiting to obtain live lock

              [main] 11:01:05,310 INFO [org.hornetq.core.persistence.impl.journal.JournalStorageManager]  Using AIO Journal

              [main] 11:01:05,711 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager]  Waiting to obtain live lock

              [main] 11:01:05,711 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager]  Live Server Obtained live lock

              [main] 11:01:08,916 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.DLQ

              [main] 11:01:08,947 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.ExpiryQueue

              [main] 11:01:08,968 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.ExampleQueue

              [main] 11:01:08,974 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.DeepThoughtTask

              [main] 11:01:08,998 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.TxPersistenceTask

              [main] 11:01:09,003 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.NoOpTask

              [main] 11:01:09,020 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.RedirectTask

              [main] 11:01:09,042 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.Error

              [main] 11:01:09,047 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.CifReportingTask

              [main] 11:01:09,068 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.WhatIfSequencerTask

              [main] 11:01:09,119 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.topic.ExampleTopic

              [main] 11:01:09,247 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor]  Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 192.168.72.11:4445 for CORE protocol

              [main] 11:01:09,248 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor]  Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 192.168.72.11:4455 for CORE protocol

              [main] 11:01:09,279 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  Server is now live

              [main] 11:01:09,280 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  HornetQ Server version 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) [e24b5373-c4ef-11e0-928a-f07bcb6cb57a] started

              [hornetq-discovery-group-thread-dg-group1] 11:01:26,311 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

              (the same WARNING then repeats every two to three seconds, from 11:01:26 through 11:01:39)


              For hornetqb:

              [jboss@RHEL_direintegration bin]$ ./run.sh

              ***********************************************************************************

              java -Djnp.port=2099 -Djnp.rmiPort=2098 -Djnp.host=192.168.72.11 -Dhornetq.remoting.netty.host=192.168.72.11 -Dhornetq.remoting.netty.port=6445 -Dhornetq.remoting.netty.batch.port=6455 -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Dhornetq.config.dir=../config/stand-alone/clustered -Djava.util.logging.config.file=../config/stand-alone/clustered/logging.properties -Djava.library.path=. -classpath ../lib/twitter4j-core.jar:../lib/netty.jar:../lib/jnpserver.jar:../lib/jnp-client.jar:../lib/jboss-mc.jar:../lib/jboss-jms-api.jar:../lib/hornetq-twitter-integration.jar:../lib/hornetq-spring-integration.jar:../lib/hornetq-logging.jar:../lib/hornetq-jms.jar:../lib/hornetq-jms-client-java5.jar:../lib/hornetq-jms-client.jar:../lib/hornetq-jboss-as-integration.jar:../lib/hornetq-core.jar:../lib/hornetq-core-client-java5.jar:../lib/hornetq-core-client.jar:../lib/hornetq-bootstrap.jar:../config/stand-alone/clustered:../schemas/ org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml

              ***********************************************************************************

              [main] 11:01:16,387 INFO [org.hornetq.integration.bootstrap.HornetQBootstrapServer]  Starting HornetQ Server

              [main] 11:01:18,054 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  live server is starting with configuration HornetQ Configuration (clustered=true,backup=false,sharedStore=true,journalDirectory=../data/journal,bindingsDirectory=../data/bindings,largeMessagesDirectory=../data/large-messages,pagingDirectory=../data/paging)

              [main] 11:01:18,054 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  Waiting to obtain live lock

              [main] 11:01:18,097 INFO [org.hornetq.core.persistence.impl.journal.JournalStorageManager]  Using AIO Journal

              [main] 11:01:18,468 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager]  Waiting to obtain live lock

              [main] 11:01:18,469 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager]  Live Server Obtained live lock

              [main] 11:01:21,002 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.DLQ

              [main] 11:01:21,033 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.ExpiryQueue

              [main] 11:01:21,038 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.ExampleQueue

              [main] 11:01:21,044 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.DeepThoughtTask

              [main] 11:01:21,059 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.TxPersistenceTask

              [main] 11:01:21,063 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.NoOpTask

              [main] 11:01:21,081 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.RedirectTask

              [main] 11:01:21,093 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.Error

              [main] 11:01:21,098 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.CifReportingTask

              [main] 11:01:21,111 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.WhatIfSequencerTask

              [main] 11:01:21,167 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.topic.ExampleTopic

              [main] 11:01:21,302 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor]  Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 192.168.72.11:6445 for CORE protocol

              [main] 11:01:21,304 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor]  Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 192.168.72.11:6455 for CORE protocol

              [main] 11:01:21,328 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  Server is now live

              [main] 11:01:21,329 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  HornetQ Server version 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) [e24b5373-c4ef-11e0-928a-f07bcb6cb57a] started

              [hornetq-discovery-group-thread-dg-group1] 11:01:26,311 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

              (the same WARNING then repeats every two to three seconds, from 11:01:26 through 11:01:44)

               

              Please help me understand why I am having trouble running two different instances of HornetQ. Is it working, or have I missed something?

              And what do I need to consider to run HornetQ clustered on two different machines?

              With lots of hope, I am looking forward to your help.

               

              -deep