27 Replies · Latest reply on Dec 30, 2014 10:23 AM by jbertram

    JMS clustering replication mode : Cluster doesn't start with security disabled

    abhiram123

      I am trying to configure a JMS cluster in HA replication mode, and the backup server doesn't announce itself as a backup if security is disabled in the messaging subsystem configuration.

      Do we need to provide a cluster user password? Is this a bug in HornetQ?

        • 1. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
          gaohoward

          Did you get any error or warning messages in the server log? What versions of the AS and HornetQ are you using, and what does your configuration look like?

          • 2. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
            abhiram123

            I am using JBoss 7.2.0.Final with HornetQ 2.3.0.CR1. I came across this issue https://issues.jboss.org/browse/HORNETQ-1120 and the fix version is 2.3.0.Final. Does this mean it's not supposed to work with 2.3.0.CR1? Please confirm.

            • 3. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
              jbertram

              I came across this issue https://issues.jboss.org/browse/HORNETQ-1120 and the fix version is 2.3.0.Final. Does this mean it's not supposed to work with 2.3.0.CR1? Please confirm.

              I can confirm that the fix for HORNETQ-1120 is in HornetQ 2.3.0.Final and not in 2.3.0.CR1.

              • 4. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                gaohoward

                Yes I think you need 2.3.0.Final.

                • 5. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                  abhiram123

                  Thanks for the info. Would changing the HornetQ version to 2.3.0.Final cause any problems? I have no problem providing the cluster user and password, but I need to know if the clients connecting to the cluster should supply these credentials when creating the context. I have tested by providing the cluster user and password while keeping security-enabled set to false. The cluster started without any problems and the backup was announced. I have an MDB subscribed to the topics deployed on these servers. Do I need to provide the credentials in the pooled-connection-factory of the MDB?

                  • 6. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                    gaohoward

                    Updating to 2.3.0.Final should be a good thing for your app, and it shouldn't be difficult. The cluster connection credentials are only used for communication between nodes within a cluster; clients should not use them. It is recommended that you use a cluster connection user/password in your production environment (and of course security should be enabled too).
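                    To illustrate, the cluster credentials are set on the server side only, in the messaging subsystem of standalone.xml. A minimal sketch (the user/password values and the namespace version are placeholders; check against your own configuration):

```xml
<subsystem xmlns="urn:jboss:domain:messaging:1.3">
    <hornetq-server>
        <security-enabled>true</security-enabled>
        <!-- used only for node-to-node cluster connections; clients never supply these -->
        <cluster-user>my-cluster-user</cluster-user>
        <cluster-password>my-cluster-password</cluster-password>
        <!-- ... the rest of your messaging configuration ... -->
    </hornetq-server>
</subsystem>
```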

                    • 7. Re: Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                      abhiram123

                      I have provided the cluster connection credentials on both servers with security disabled; the backup gets announced and failover happens as expected when I kill the live server. When I start the live server again, the old live becomes live again, but the following is logged on the backup server:

                      22:15:49,511 INFO  [org.hornetq.core.server] (Thread-126) HQ221004: HornetQ Server version 2.3.0.CR1 (buzzzzz!, 122) [d1bbb609-89a0-11e4-9101-e1df20dd3171] stopped

                      22:15:49,511 WARN  [org.hornetq.core.server] (Thread-126) HQ222217: Server is being completely stopped, since this was a replicated backup there may be journal files that need cleaning up. The HornetQ server will have to be manually restarted.

                      22:16:28,604 WARN  [org.hornetq.jms.server] (Periodic Recovery) HQ122018: Can not connect to XARecoveryConfig [transportConfiguration = [TransportConfiguration(name=c339c340-89a1-11e4-813d-0d489455fb9b, factory=org-hornetq-core-remoting-impl-invm-InVMConnectorFactory) ?server-id=0], discoveryConfiguration = null, username=null, password=null] on auto-generated resource recovery: HornetQException[errorType=NOT_CONNECTED message=HQ119026: Cannot connect to server(s). Tried with all available servers.]

                        at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:850) [hornetq-core-client-2.3.0.CR1.jar:]

                        at org.hornetq.jms.server.recovery.HornetQXAResourceWrapper.connect(HornetQXAResourceWrapper.java:378) [hornetq-jms-server-2.3.0.CR1.jar:]

                        at org.hornetq.jms.server.recovery.HornetQXAResourceWrapper.getDelegate(HornetQXAResourceWrapper.java:287) [hornetq-jms-server-2.3.0.CR1.jar:]

                        at org.hornetq.jms.server.recovery.HornetQXAResourceWrapper.recover(HornetQXAResourceWrapper.java:75) [hornetq-jms-server-2.3.0.CR1.jar:]

                        at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryStart(XARecoveryModule.java:520) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.resourceInitiatedRecoveryForRecoveryHelpers(XARecoveryModule.java:476) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.bottomUpRecovery(XARecoveryModule.java:378) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkSecondPass(XARecoveryModule.java:166) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:789) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:371) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                       

                       

                      22:16:28,604 WARN  [org.hornetq.jms.server] (Periodic Recovery) HQ122010: XA Recovery can not connect to any hornetq server on recovery [XARecoveryConfig [transportConfiguration = [TransportConfiguration(name=c339c340-89a1-11e4-813d-0d489455fb9b, factory=org-hornetq-core-remoting-impl-invm-InVMConnectorFactory) ?server-id=0], discoveryConfiguration = null, username=null, password=null]]

                      22:16:28,604 WARN  [com.arjuna.ats.jta] (Periodic Recovery) ARJUNA016027: Local XARecoveryModule.xaRecovery got XA exception XAException.XAER_RMERR: javax.transaction.xa.XAException: Error trying to connect to any providers for xa recovery

                        at org.hornetq.jms.server.recovery.HornetQXAResourceWrapper.getDelegate(HornetQXAResourceWrapper.java:314) [hornetq-jms-server-2.3.0.CR1.jar:]

                        at org.hornetq.jms.server.recovery.HornetQXAResourceWrapper.recover(HornetQXAResourceWrapper.java:75) [hornetq-jms-server-2.3.0.CR1.jar:]

                        at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryStart(XARecoveryModule.java:520) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.resourceInitiatedRecoveryForRecoveryHelpers(XARecoveryModule.java:476) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.bottomUpRecovery(XARecoveryModule.java:378) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkSecondPass(XARecoveryModule.java:166) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:789) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                        at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:371) [jbossjts-jacorb-4.17.3.Final.jar:4.17.3.Final (revision: 74343b48951c0fdab92316e56bfcaed605d620f6)]

                      Caused by: HornetQException[errorType=NOT_CONNECTED message=null]

                        at org.hornetq.jms.server.recovery.HornetQXAResourceWrapper.connect(HornetQXAResourceWrapper.java:427) [hornetq-jms-server-2.3.0.CR1.jar:]

                        at org.hornetq.jms.server.recovery.HornetQXAResourceWrapper.getDelegate(HornetQXAResourceWrapper.java:287) [hornetq-jms-server-2.3.0.CR1.jar:]

                        ... 7 more

                       

                       

                      Do we need to restart the backup server once the old live becomes live again after failover? This is not the expected behavior, right? The backup server should announce itself as backup once again and wait to become live when the live server goes down. Isn't that the expected behavior?

                      One more thing: if the original live server goes off the network, failover happens and the backup becomes live. But when I bring the original live server back onto the network, the following message gets logged continuously in all three server logs (live server, backup server, MDB client):

                       

                      22:15:49,612 WARN  [org.hornetq.core.client] (hornetq-discovery-group-thread-dg-group1) HQ212050: There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=d1bbb609-89a0-11e4-9101-e1df20dd3171

                       

                      This statement keeps getting logged until I can no longer access the log. I have been trying to implement JMS clustering high availability for quite some time now without any success. I have tried the shared-store mode and now I am trying the replication mode. I am attaching the configuration files. Please let me know if I missed any setting, or any setting specific to failover caused by network failure. Any help would be greatly appreciated.

                      • 8. Re: Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                        gaohoward

                        What you saw is not related to the security configuration. Basically I think you want your live-backup pair to support automatic 'fail-back', i.e. when you start the live server again, you want the current live to return to backup mode. If this is what you want, I'd suggest you read the user manual chapter 40, High Availability and Failover. There are a few options for you to choose from, and a few extra parameters to go with them. I'd also suggest you upgrade HornetQ to the latest release, because you would get a lot of fixes, especially in the replication area.
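                        For reference, in replication mode automatic fail-back is typically driven by a pair of settings: allow-failback on the backup and check-for-live-server on the live. A sketch of the relevant hornetq-server fragments (element placement may differ between versions; the user manual's HA chapter is authoritative):

```xml
<!-- live server -->
<hornetq-server>
    <shared-store>false</shared-store>
    <backup>false</backup>
    <!-- ask the cluster whether another live holds this node's id before
         starting; needed so a restarted live can take over again -->
    <check-for-live-server>true</check-for-live-server>
</hornetq-server>

<!-- backup server -->
<hornetq-server>
    <shared-store>false</shared-store>
    <backup>true</backup>
    <!-- step down and return to backup mode when the original live restarts -->
    <allow-failback>true</allow-failback>
</hornetq-server>
```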

                         

                        Howard

                        • 9. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                          abhiram123

                          Automatic fail-back is happening, but only the first time. The original live server becomes live once I start it, but if I kill the original live server now, the backup doesn't become live, which was the question I asked earlier. I tried updating my HornetQ version to 2.3.0.Final but the server didn't start due to some errors. Basically I copied the following jars into the modules folder of JBoss under the respective packages:

                          hornetq-commons.jar

                          hornetq-core-client.jar

                          hornetq-jms-client.jar

                          hornetq-jms-server.jar

                          hornetq-journal.jar

                          hornetq-ra.jar

                          hornetq-server.jar

                           

                          Is there any documentation on how to update the HornetQ version in JBoss? I also want to know if failover works the same way when there is a network failure. From what I am seeing, it is not working with 2.3.0.CR1. What version do you think I should update to, keeping the JBoss 7.2.0.Final version in mind? I would appreciate any links on how to update the HornetQ version.

                          • 10. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                            gaohoward

                            Well, basically updating a component in the AS (JBoss) is not a simple task, and the steps vary case by case. But for a simple case like this, I think you will be fine replacing all the HornetQ jars in JBoss. I'm not sure what error you got, but there is one thing you need to be aware of: there may be a big jboss-client.jar located under the bin/client directory in the AS. This jar packages HornetQ classes used by a JMS client, so if you update HornetQ, make sure this jar comes after all the new HornetQ jars in the client's classpath.
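                            As a sketch of what "replacing the jars" involves: in AS 7 each HornetQ jar lives under a module directory, and the module.xml there must reference the new jar file names. An illustrative fragment (the module path, jar names, and version strings are examples; match them to your installation):

```xml
<!-- e.g. modules/org/hornetq/main/module.xml -->
<module xmlns="urn:jboss:module:1.1" name="org.hornetq">
    <resources>
        <!-- remove the old CR1 jars and point resource-root at the new files -->
        <resource-root path="hornetq-commons-2.3.0.Final.jar"/>
        <resource-root path="hornetq-core-client-2.3.0.Final.jar"/>
        <resource-root path="hornetq-server-2.3.0.Final.jar"/>
        <!-- ... remaining HornetQ jars for this module ... -->
    </resources>
    <!-- ... existing dependencies unchanged ... -->
</module>
```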

                             

                            Re: I also want to know if failover works the same way if there is network failure?

                            Can you explain where the network failure is? Is it happening between your client and server, or between cluster nodes?

                             

                            Howard

                            • 11. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                              abhiram123

                              In a practical scenario, such as a customer environment, the live server may drop off the network (i.e. lose connectivity with the cluster nodes and with the client as well). In this scenario the backup server becomes live and failover occurs, but once the original live server comes back onto the network I see the same node-id message logged continuously on the three servers (live, backup, and client [MDB]). The original live server does not become live again once it is back on the network. This is the practical scenario for high availability, right? Are there any configuration properties specifically for the case where a network failure occurs between the cluster nodes? I could not find any documentation related to network failure.

                              • 12. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                                gaohoward

                                I see your configuration commented this out:

                                 

                                <!--allow-failback>true</allow-failback-->

                                 

                                If you want automatic fail-back you need it. Can you uncomment it and retry?
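                                For clarity, the uncommented element would read as follows (a sketch; keep its placement inside your existing hornetq-server configuration):

```xml
<hornetq-server>
    <!-- let this node step back to backup mode when the original live returns -->
    <allow-failback>true</allow-failback>
    <!-- ... rest of the configuration unchanged ... -->
</hornetq-server>
```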

                                 

                                One possible reason you are seeing the same node-id log is that you may have a corrupted data directory. When you test, make sure you have a clean data directory.

                                 

                                Howard

                                • 13. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                                  abhiram123

                                  The behavior is the same if I set the <allow-failback> property to true on the backup server. I delete the tmp, log, and data directories of JBoss for all three servers before starting the deployment. I think the default value for this property is true, according to this post: The element "allow-failback"'s document is wrong HornetQ-2.2.5

                                  • 14. Re: JMS clustering replication mode : Cluster doesn't start with security disabled
                                    gaohoward

                                    Yes, that's a problem with the documentation and needs fixing.

                                    Do you have a test that can be uploaded here?

                                     

                                    Howard
