
    Problem running two different instances of HornetQ for clustering on the same machine

    drisal

      For the last few days I have been having trouble running two instances of HornetQ on the same machine, and I have also failed to run the clustered example of HornetQ.

      The steps I went through are:

       

      1. Downloaded HornetQ 2.2.5
      2. Created two instances, hornetqa and hornetqb
      3. Changed run.sh of hornetqa as follows:
      run.sh
      export CLUSTER_PROPS="-Djnp.port=1099 -Djnp.rmiPort=1098 -Djnp.host=192.168.72.11 -Dhornetq.remoting.netty.host=192.168.72.11 -Dhornetq.remoting.netty.port=5445 -Dhornetq.remoting.netty.batch.port=5455"

      4. Changed run.sh of hornetqb as follows:

       

      run.sh

      export CLUSTER_PROPS="-Djnp.port=2099 -Djnp.rmiPort=2098 -Djnp.host=192.168.72.11 -Dhornetq.remoting.netty.host=192.168.72.11 -Dhornetq.remoting.netty.port=6445 -Dhornetq.remoting.netty.batch.port=6455"
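
      (For context: the jnp.* properties are consumed by the JNDIServer bean in config/stand-alone/clustered/hornetq-beans.xml, and the hornetq.remoting.netty.* properties by the connectors/acceptors shown in step 5. From memory, the shipped JNDIServer bean looks roughly like the sketch below; the exact property layout in your copy may differ slightly.)

      hornetq-beans.xml (shipped default, sketch)

         <!-- Standalone JNDI server; picks up the jnp.* system properties set in run.sh -->
         <bean name="JNDIServer" class="org.jnp.server.Main">
            <property name="namingInfo">
               <inject bean="Naming"/>
            </property>
            <property name="port">${jnp.port:1099}</property>
            <property name="bindAddress">${jnp.host:localhost}</property>
            <property name="rmiPort">${jnp.rmiPort:1098}</property>
            <property name="rmiBindAddress">${jnp.host:localhost}</property>
         </bean>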

       

       

       

      5. Configured hornetq-configuration.xml for both instances, keeping the defaults and changing only the connector and acceptor ports of hornetqb to 6445 and 6455:

       

      hornetq-configuration.xml (hornetqb instance)

      ......

      <connectors>    

            <connector name="netty">

               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

               <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

               <param key="port"  value="${hornetq.remoting.netty.port:6445}"/>

            </connector>

       

            <connector name="netty-throughput">

               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

               <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

               <param key="port"  value="${hornetq.remoting.netty.batch.port:6455}"/>

               <param key="batch-delay" value="50"/>

            </connector>

         </connectors>

       

       

         <acceptors>

            <acceptor name="netty">

               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>

               <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

               <param key="port"  value="${hornetq.remoting.netty.port:6445}"/>

            </acceptor>

       

            <acceptor name="netty-throughput">

               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>

               <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

               <param key="port"  value="${hornetq.remoting.netty.batch.port:6455}"/>

               <param key="batch-delay" value="50"/>

               <param key="direct-deliver" value="false"/>

            </acceptor>

         </acceptors>

      ..........

       

      The <broadcast-groups>, <discovery-groups>, and <cluster-connections> configuration is left at the defaults for both instances.
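
      For completeness, those sections are the shipped clustered defaults. From memory they look roughly like the sketch below; the group address/port, names, and the exact connector-ref syntax may differ slightly in 2.2.5 (dg-group1 at least matches the discovery group name in my logs).

      hornetq-configuration.xml (shipped clustered defaults, sketch)

         <broadcast-groups>
            <broadcast-group name="bg-group1">
               <group-address>231.7.7.7</group-address>
               <group-port>9876</group-port>
               <broadcast-period>5000</broadcast-period>
               <connector-ref connector-name="netty"/>
            </broadcast-group>
         </broadcast-groups>

         <discovery-groups>
            <discovery-group name="dg-group1">
               <group-address>231.7.7.7</group-address>
               <group-port>9876</group-port>
               <refresh-timeout>10000</refresh-timeout>
            </discovery-group>
         </discovery-groups>

         <cluster-connections>
            <cluster-connection name="my-cluster">
               <address>jms</address>
               <connector-ref>netty</connector-ref>
               <discovery-group-ref discovery-group-name="dg-group1"/>
            </cluster-connection>
         </cluster-connections>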

       

      6. Now, when I run both instances, I do not see any cluster information.

          Both logs look like this:

      for hornetqa:

      [jboss@RHEL_direintegration bin]$ ./run.sh

      ***********************************************************************************

      java -Djnp.port=1099 -Djnp.rmiPort=1098 -Djnp.host=192.168.72.11 -Dhornetq.remoting.netty.host=192.168.72.11 -Dhornetq.remoting.netty.port=4445 -Dhornetq.remoting.netty.batch.port=4455 -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Dhornetq.config.dir=../config/stand-alone/clustered -Djava.util.logging.config.file=../config/stand-alone/clustered/logging.properties -Djava.library.path=. -classpath ../lib/twitter4j-core.jar:../lib/netty.jar:../lib/jnpserver.jar:../lib/jnp-client.jar:../lib/jboss-mc.jar:../lib/jboss-jms-api.jar:../lib/hornetq-twitter-integration.jar:../lib/hornetq-spring-integration.jar:../lib/hornetq-logging.jar:../lib/hornetq-jms.jar:../lib/hornetq-jms-client-java5.jar:../lib/hornetq-jms-client.jar:../lib/hornetq-jboss-as-integration.jar:../lib/hornetq-core.jar:../lib/hornetq-core-client-java5.jar:../lib/hornetq-core-client.jar:../lib/hornetq-bootstrap.jar:../config/stand-alone/clustered:../schemas/ org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml

      ***********************************************************************************

      [main] 11:01:03,818 INFO [org.hornetq.integration.bootstrap.HornetQBootstrapServer]  Starting HornetQ Server

      [main] 11:01:05,267 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  live server is starting with configuration HornetQ Configuration (clustered=true,backup=false,sharedStore=true,journalDirectory=../data/journal,bindingsDirectory=../data/bindings,largeMessagesDirectory=../data/large-messages,pagingDirectory=../data/paging)

      [main] 11:01:05,268 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  Waiting to obtain live lock

      [main] 11:01:05,310 INFO [org.hornetq.core.persistence.impl.journal.JournalStorageManager]  Using AIO Journal

      [main] 11:01:05,711 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager]  Waiting to obtain live lock

      [main] 11:01:05,711 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager]  Live Server Obtained live lock

      [main] 11:01:08,916 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.DLQ

      [main] 11:01:08,947 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.ExpiryQueue

      [main] 11:01:08,968 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.ExampleQueue

      [main] 11:01:08,974 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.DeepThoughtTask

      [main] 11:01:08,998 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.TxPersistenceTask

      [main] 11:01:09,003 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.NoOpTask

      [main] 11:01:09,020 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.RedirectTask

      [main] 11:01:09,042 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.Error

      [main] 11:01:09,047 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.CifReportingTask

      [main] 11:01:09,068 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.WhatIfSequencerTask

      [main] 11:01:09,119 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.topic.ExampleTopic

      [main] 11:01:09,247 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor]  Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 192.168.72.11:4445 for CORE protocol

      [main] 11:01:09,248 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor]  Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 192.168.72.11:4455 for CORE protocol

      [main] 11:01:09,279 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  Server is now live

      [main] 11:01:09,280 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  HornetQ Server version 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) [e24b5373-c4ef-11e0-928a-f07bcb6cb57a] started

      [hornetq-discovery-group-thread-dg-group1] 11:01:26,311 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:29,262 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:31,314 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:34,263 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:36,315 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:39,265 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

       

       

       

      for hornetqb:

      [jboss@RHEL_direintegration bin]$ ./run.sh

      ***********************************************************************************

      java -Djnp.port=2099 -Djnp.rmiPort=2098 -Djnp.host=192.168.72.11 -Dhornetq.remoting.netty.host=192.168.72.11 -Dhornetq.remoting.netty.port=6445 -Dhornetq.remoting.netty.batch.port=6455 -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Dhornetq.config.dir=../config/stand-alone/clustered -Djava.util.logging.config.file=../config/stand-alone/clustered/logging.properties -Djava.library.path=. -classpath ../lib/twitter4j-core.jar:../lib/netty.jar:../lib/jnpserver.jar:../lib/jnp-client.jar:../lib/jboss-mc.jar:../lib/jboss-jms-api.jar:../lib/hornetq-twitter-integration.jar:../lib/hornetq-spring-integration.jar:../lib/hornetq-logging.jar:../lib/hornetq-jms.jar:../lib/hornetq-jms-client-java5.jar:../lib/hornetq-jms-client.jar:../lib/hornetq-jboss-as-integration.jar:../lib/hornetq-core.jar:../lib/hornetq-core-client-java5.jar:../lib/hornetq-core-client.jar:../lib/hornetq-bootstrap.jar:../config/stand-alone/clustered:../schemas/ org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml

      ***********************************************************************************

      [main] 11:01:16,387 INFO [org.hornetq.integration.bootstrap.HornetQBootstrapServer]  Starting HornetQ Server

      [main] 11:01:18,054 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  live server is starting with configuration HornetQ Configuration (clustered=true,backup=false,sharedStore=true,journalDirectory=../data/journal,bindingsDirectory=../data/bindings,largeMessagesDirectory=../data/large-messages,pagingDirectory=../data/paging)

      [main] 11:01:18,054 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  Waiting to obtain live lock

      [main] 11:01:18,097 INFO [org.hornetq.core.persistence.impl.journal.JournalStorageManager]  Using AIO Journal

      [main] 11:01:18,468 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager]  Waiting to obtain live lock

      [main] 11:01:18,469 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager]  Live Server Obtained live lock

      [main] 11:01:21,002 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.DLQ

      [main] 11:01:21,033 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.ExpiryQueue

      [main] 11:01:21,038 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.ExampleQueue

      [main] 11:01:21,044 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.DeepThoughtTask

      [main] 11:01:21,059 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.TxPersistenceTask

      [main] 11:01:21,063 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.NoOpTask

      [main] 11:01:21,081 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.RedirectTask

      [main] 11:01:21,093 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.Error

      [main] 11:01:21,098 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.CifReportingTask

      [main] 11:01:21,111 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.queue.com.bloodhound.rtn.task.WhatIfSequencerTask

      [main] 11:01:21,167 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  trying to deploy queue jms.topic.ExampleTopic

      [main] 11:01:21,302 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor]  Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 192.168.72.11:6445 for CORE protocol

      [main] 11:01:21,304 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor]  Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 192.168.72.11:6455 for CORE protocol

      [main] 11:01:21,328 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  Server is now live

      [main] 11:01:21,329 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  HornetQ Server version 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) [e24b5373-c4ef-11e0-928a-f07bcb6cb57a] started

      [hornetq-discovery-group-thread-dg-group1] 11:01:26,311 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:29,261 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:31,313 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:34,262 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:36,315 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:39,264 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:41,316 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a

      [hornetq-discovery-group-thread-dg-group1] 11:01:44,278 WARNING [org.hornetq.core.cluster.impl.DiscoveryGroupImpl]  There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=e24b5373-c4ef-11e0-928a-f07bcb6cb57a
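
      In case it matters: both instances use the default relative data directories from the shipped config (visible as journalDirectory=../data/journal etc. in the logs above). From memory, the relevant lines in each hornetq-configuration.xml look roughly like this sketch:

      hornetq-configuration.xml (directory settings, sketch)

         <paging-directory>${data.dir:../data}/paging</paging-directory>
         <bindings-directory>${data.dir:../data}/bindings</bindings-directory>
         <journal-directory>${data.dir:../data}/journal</journal-directory>
         <large-messages-directory>${data.dir:../data}/large-messages</large-messages-directory>

      Since run.sh is started from each instance's own bin directory, each instance should be resolving ../data relative to its own copy.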

       

      Please help me understand why I am having trouble running two different instances of HornetQ on the same machine. Is this supposed to work, or have I missed something?

      Also, what do I need to take into account to run HornetQ clustered on two different machines?

      With lots of hope, I am looking forward to your help.

       

      -deep