19 Replies · Latest reply on Jun 1, 2018 2:40 AM by mnovak

    Co-located replication failover configuration in standalone-ha.xml EAP 7

    kavinthamaduranga

      Is it possible to configure two messaging server nodes so that each node has a live/backup pair configured in standalone-ha.xml, and to demonstrate failover and failback scenarios? If yes, please post the xml files.

        • 1. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
          kavinthamaduranga

          I could manage two instances as follows:

          group 1 (co-locate the default server on node 1 with its backup server on node 2)

          group 2 (co-locate the default server on node 2 with its backup server on node 1)

           

          The activemq subsystem in standalone-ha.xml is as follows:

           

           

          ----------------------------------------------------

           

          <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">

                      <server name="default">

                          <cluster password="12345"/>

                          <replication-master check-for-live-server="true" cluster-name="my-cluster" group-name="group1"/>

                          <security-setting name="#">

                              <role name="admin" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>

                          </security-setting>

                          <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10" redistribution-delay="1000" max-delivery-attempts="-1"/>

                          <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>

                          <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">

                              <param name="batch-delay" value="50"/>

                          </http-connector>

                          <remote-connector name="netty" socket-binding="messaging"> 

                              <param name="use-nio" value="true"/> 

                              <param name="use-nio-global-worker-pool" value="true"/> 

                          </remote-connector> 

                          <in-vm-connector name="in-vm" server-id="0"/>

                          <http-acceptor name="http-acceptor" http-listener="default"/>

                          <http-acceptor name="http-acceptor-throughput" http-listener="default">

                              <param name="batch-delay" value="50"/>

                              <param name="direct-deliver" value="false"/>

                          </http-acceptor>

                          <remote-acceptor name="netty" socket-binding="messaging"> 

                              <param name="use-nio" value="true"/> 

                          </remote-acceptor>

                          <in-vm-acceptor name="in-vm" server-id="0"/>

                          <broadcast-group name="bg-group1" connectors="netty" jgroups-channel="activemq-cluster"/>

                          <discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/>

                          <cluster-connection name="my-cluster" address="jms" connector-name="netty" discovery-group="dg-group1"/>

                          <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>

                          <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>

                          <jms-queue name="ConnectPublish_error" entries="java:jboss/exported/jms/queue/ConnectPublish_error"/>

                          <connection-factory name="InVmConnectionFactory" connectors="in-vm" entries="java:/ConnectionFactory"/>

                          <connection-factory name="RemoteConnectionFactory" ha="true" block-on-acknowledge="true" reconnect-attempts="-1" connectors="netty" entries="java:jboss/exported/jms/RemoteConnectionFactory"/>

                          <pooled-connection-factory name="activemq-ra" transaction="xa" connectors="in-vm" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"/>

                      </server>

                      <server name="backup"> 

                          <security enabled="false"/> 

                          <cluster password="12345"/> 

                          <replication-slave cluster-name="my-cluster" group-name="group2" allow-failback="true" restart-backup="true"/>

                          <bindings-directory path="activemq/bindings-B"/>

                          <journal-directory path="activemq/journal-B"/>

                          <large-messages-directory path="activemq/largemessages-B"/>

                          <paging-directory path="activemq/paging-B"/>

                          <security-setting name="#"> 

                              <role name="guest" manage="true" delete-non-durable-queue="true" create-non-durable-queue="true" delete-durable-queue="true" create-durable-queue="true" consume="true" send="true"/> 

                          </security-setting> 

                          <address-setting name="#" redistribution-delay="0" page-size-bytes="524288" max-size-bytes="1048576" max-delivery-attempts="200"/> 

                          <remote-connector name="netty-backup" socket-binding="messaging-backup"/> 

                          <in-vm-connector name="in-vm" server-id="0"/> 

                          <remote-acceptor name="netty-backup" socket-binding="messaging-backup"/> 

                          <broadcast-group name="bg-group-backup" connectors="netty-backup" broadcast-period="1000" jgroups-channel="activemq-cluster"/> 

                          <discovery-group name="dg-group-backup" refresh-timeout="1000" jgroups-channel="activemq-cluster"/> 

                          <cluster-connection name="my-cluster" retry-interval="1000" connector-name="netty-backup" address="jms" discovery-group="dg-group-backup"/> 

                      </server>

                  </subsystem>

           

           

          -----------------------------------------------------

           

          But the failback scenario still does not occur from node 2 to node 1 when node 1 comes back up.

          • 2. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
            mnovak

            Hi,

             

            I've attached standalone-full-ha.xml configs for EAP 7.1/WF11 for both of the servers. I reviewed yours but could not find any issues in the messaging subsystem. There might be an issue with the JGroups config, but it's hard to say. If you attach the whole config, maybe I'll be able to find the issue.

             

            Thanks,

            Mirek

            • 3. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
              kavinthamaduranga

              Hi Miroslav,

               

              I've used a separate remote-acceptor and remote-connector with separate socket-bindings. Please find the attached xml files.
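
              For reference, they look roughly like this in the socket-binding-group (the port values here are just an example, not taken from the attached files):

              <socket-binding name="messaging" port="5445"/>
              <socket-binding name="messaging-backup" port="5446"/>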

              • 4. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                mnovak

                I tried your config and it worked... I could do failover and failback. However, I believe I see a possible problem in your case.

                 

                I believe the problem is that max-saved-replicated-journal-size is not set in <replication-slave .../> in the configuration of the backup servers. By default it's set to 2, which means that after 2 failover-failback cycles, when the live server starts again, the backup does not restart itself and does not sync with the live server again. From that moment on, when the live server is stopped/killed, the backup does not start. It's not exactly the issue you're describing, but I believe you're hitting it.

                 

                Could you try to delete the journal directories and set <replication-slave max-saved-replicated-journal-size="-1" .../>, please?
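
                Applied to the backup server from your config, the element would look like this:

                <replication-slave cluster-name="my-cluster" group-name="group2" allow-failback="true" restart-backup="true" max-saved-replicated-journal-size="-1"/>

                The same change should also be possible through the CLI, along these lines (a sketch; a reload may be needed afterwards):

                /subsystem=messaging-activemq/server=backup/ha-policy=replication-slave:write-attribute(name=max-saved-replicated-journal-size, value=-1)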

                 

                WF10 has a number of issues when configuring a replicated journal in a colocated topology, and I recommend using a shared store instead if possible. The problem here is that every time the backup syncs with the live server, it moves its journal aside in the journal directory and creates a new one (with an up-to-date copy of the live server's journal). Further, during failback, when the live server is starting, it also moves its journal aside and creates a new one (with an up-to-date copy of the backup's journal). So those additional copies of the journal directories accumulate until the disk runs out of space. This can be limited on backup servers by setting max-saved-replicated-journal-size, but unfortunately not on live servers. This annoying behavior was fixed in WF11. WF11 also contains lots of fixes for other issues with the replicated journal. I strongly recommend moving to this version.

                 

                Please let me know if setting max-saved-replicated-journal-size helped. There might be other issues :-)

                 

                Thanks.

                Mirek

                • 5. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                  kavinthamaduranga

                  Hi Miroslav,

                   

                  Thanks for the help; I could manage a failover-failback using max-saved-replicated-journal-size="-1" as well. And, as you said, the backup restart issue was there. Anyway, with the previous configurations I could create a simple two-node cluster with a graceful client switching mechanism that balances the load when a server fails. So both nodes act as identical servers, and once a failover occurs they get synced and the second node processes the new messages plus the uncommitted ones from node 1. But if you want processing to move back to node 1, you have to manually break the client connection with node 2. This behavior is pretty OK for now. Again, thanks for the reply; I'll get back if anything goes wrong.

                  • 6. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                    mnovak

                    Happy to help. Writing a standalone JMS client which is able to handle failover without losing/duplicating messages is quite hard. I would recommend having another WF server with a client application deployed, where consuming or sending messages is part of an XA transaction (managed by the transaction manager). That way the complexity is left to the server.

                    • 7. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                      vamshi1413

                      Hey mnovak, in the replication-colocated use-case model, do we need to have 2 different servers, one for live and another for backup? I found 2 different sources, and neither specifies having a backup server. Can you please confirm?

                       


                      WildFly 10.1 Model Reference

                      wildfly/subsystem_1_0_ha-policy.xml at master · wildfly/wildfly · GitHub

                       

                       

                      I am using the full-ha profile in domain mode.

                       

                                  <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">

                                      <server name="default">

                                          <cluster password="activemq"/>

                                          <replication-colocated backup-port-offset="500" max-backups="2" request-backup="true">

                                              <master cluster-name="activemq-cluster"/>

                                              <slave allow-failback="false" cluster-name="activemq-cluster"/>

                                          </replication-colocated>

                                          <bindings-directory/>

                                          <journal-directory/>

                                          <large-messages-directory/>

                                          <paging-directory/>

                                          <security-setting name="#">

                                              <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>

                                          </security-setting>

                                          <address-setting name="#" redistribution-delay="1000" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>

                                          <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>

                                          <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">

                                              <param name="batch-delay" value="50"/>

                                          </http-connector>

                                          <in-vm-connector name="in-vm" server-id="0"/>

                                          <http-acceptor name="http-acceptor" http-listener="default"/>

                                          <http-acceptor name="http-acceptor-throughput" http-listener="default">

                                              <param name="batch-delay" value="50"/>

                                              <param name="direct-deliver" value="false"/>

                                          </http-acceptor>

                                          <in-vm-acceptor name="in-vm" server-id="0"/>

                                          <broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster"/>

                                          <discovery-group name="dg-group1" jgroups-channel="activemq-cluster" jgroups-stack="tcp"/>

                                          <cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="http-connector" address="jms"/>

                                          <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>

                                          <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>

                                          <jms-queue name="testQueue2" entries="jms/queue/testQueue2 java:/jboss/exported/jms/queue/testQueue2"/>

                                          <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>

                                          <connection-factory name="RemoteConnectionFactory" reconnect-attempts="-1" block-on-acknowledge="true" ha="true" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>

                                          <pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>

                                      </server>

                                  </subsystem>

                      • 8. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                        mnovak

                        Hi Vamshi,

                         

                        <replication-colocated ...> is quite a new feature. If I remember correctly, there was an issue with the backup port offset and http connectors/acceptors: the created backup server simply could not start another http acceptor in WildFly/EAP 7 with the given port-offset. However, it could work well with remote connectors/acceptors, but that needs to be tested.
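
                        For example, something along these lines instead of the http connector/acceptor pair, reusing the netty remote connector/acceptor pattern from the configs earlier in this thread (again, untested in combination with <replication-colocated>):

                        <remote-connector name="netty" socket-binding="messaging"/>
                        <remote-acceptor name="netty" socket-binding="messaging"/>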

                         

                        Configuration with replication-master and replication-slave (for the 2nd master/slave pair) collocated on one machine is supported in EAP 7. Configuration with <replication-colocated ...> is not supported.

                         

                        Also, I would like to point out that Artemis in WildFly 10 had lots of issues in HA replication. Artemis HA replication in WildFly 12 is much more stable, and I would recommend updating to that version if possible.

                         

                        Mirek

                        • 9. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                          vamshi1413

                          | <replication-colocated ...> is quite a new feature. If I remember correctly, there was an issue with the backup port offset and http connectors/acceptors: the created backup server simply could not start another http acceptor in WildFly/EAP 7 with the given port-offset. However, it could work well with remote connectors/acceptors, but that needs to be tested.

                           

                          I'll change the connectors/acceptors and see how it behaves.

                           

                           

                          | Configuration with replication-master and replication-slave (for the 2nd master/slave pair) collocated on one machine is supported in EAP 7. Configuration with <replication-colocated ...> is not supported.

                           

                          | Also, I would like to point out that Artemis in WildFly 10 had lots of issues in HA replication. Artemis HA replication in WildFly 12 is much more stable, and I would recommend updating to that version if possible.

                           

                          I am using EAP 7 and running my JBoss instances in domain mode. Does that mean <replication-colocated>...</replication-colocated> doesn't work in EAP 7 and I have to use <replication-master> and <replication-slave>?

                          (FYI: I have 6 nodes, with 2 in green, 2 in blue, and 2 in DR, with only 2 nodes running at any instant of time, and I am setting up HA.)

                           

                          Are there any issues with HA replication in EAP 7 as well? If so, in what version are they expected to be fixed?

                          • 10. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                            mnovak

                            <replication-colocated> is not supported in EAP 7 (it's mentioned in the release notes). You need to use <replication-master> and <replication-slave> for a collocated HA topology, as in the skeleton below.
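
                            A skeleton of the supported layout, abbreviated from the configs earlier in this thread:

                            <server name="default">
                                <replication-master check-for-live-server="true" cluster-name="my-cluster" group-name="group1"/>
                                ...
                            </server>
                            <server name="backup">
                                <replication-slave cluster-name="my-cluster" group-name="group2" allow-failback="true" restart-backup="true"/>
                                ...
                            </server>

                            with the group names reversed on the second machine.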

                             

                            HA replication in EAP 7.1 is working well; major issues were fixed in this version. However, EAP 7.0 contained a number of issues in HA replication.

                             

                            | (FYI: I have 6 nodes, with 2 in green, 2 in blue, and 2 in DR, with only 2 nodes running at any instant of time, and I am setting up HA.)

                             

                            Which ones are forming master/slave pairs? EAP 7 supports one slave per master.

                            • 11. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                              vamshi1413

                              I believe the configuration that I posted earlier is working fine (I don't see any errors) based on the server logs, but in terms of functionality I don't know yet. Here is the information from the logs that I see when I bring up my servers with <replication-colocated>:

                               

                               

                              Server1

                              2018-05-22 14:03:38,368 INFO  [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 68) AMQ221007: Server is now live

                              2018-05-22 14:03:38,369 INFO  [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 68) AMQ221001: Apache ActiveMQ Artemis Message Broker version 1.1.0.SP24-redhat-1 [nodeID=c869c816-5df0-11e8-bd9d-357bd70a9cb9]

                               

                               

                              After starting Server2

                              2018-05-22 14:04:24,768 INFO  [org.apache.activemq.artemis.core.server] (default I/O-2) AMQ221049: Activating Replica for node: 6ea64fd5-57aa-11e8-9ff9-7d30e8dfbf51

                              2018-05-22 14:04:25,640 INFO  [org.apache.activemq.artemis.core.server] (AMQ119000: Activation for server ActiveMQServerImpl::serverUUID=null) AMQ221109: Apache ActiveMQ Artemis Backup Server version 1.1.0.SP24-redhat-1 [null] started, waiting live to fail before it gets active

                              2018-05-22 14:04:27,102 INFO  [org.apache.activemq.artemis.core.server] (Thread-1 (ActiveMQ-client-netty-threads-715618282)) AMQ221024: Backup server ActiveMQServerImpl::serverUUID=6ea64fd5-57aa-11e8-9ff9-7d30e8dfbf51 is synchronized with live-server.

                              2018-05-22 14:04:27,636 INFO  [org.apache.activemq.artemis.core.server] (Thread-2 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@3c3a3cad-112487089)) AMQ221031: backup announced


                              Server2

                              2018-05-22 14:04:19,693 INFO  [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 69) AMQ221007: Server is now live

                              2018-05-22 14:04:19,694 INFO  [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 69) AMQ221001: Apache ActiveMQ Artemis Message Broker version 1.1.0.SP24-redhat-1 [nodeID=6ea64fd5-57aa-11e8-9ff9-7d30e8dfbf51]

                               

                              2018-05-22 14:04:23,626 INFO  [org.apache.activemq.artemis.core.server] (default I/O-2) AMQ221049: Activating Replica for node: c869c816-5df0-11e8-bd9d-357bd70a9cb9

                              2018-05-22 14:04:24,334 INFO  [org.apache.activemq.artemis.core.server] (AMQ119000: Activation for server ActiveMQServerImpl::serverUUID=null) AMQ221109: Apache ActiveMQ Artemis Backup Server version 1.1.0.SP24-redhat-1 [null] started, waiting live to fail before it gets active

                              2018-05-22 14:04:25,134 INFO  [org.apache.activemq.artemis.core.server] (Thread-5 (ActiveMQ-client-netty-threads-412967046)) AMQ221024: Backup server ActiveMQServerImpl::serverUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9 is synchronized with live-server.

                               

                              2018-05-22 14:04:26,358 INFO  [org.apache.activemq.artemis.core.server] (Thread-2 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@aea2c7f-547533483)) AMQ221031: backup announced

                              • 12. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                                mnovak

                                This looks good. If you kill one of the servers, does the backup on the other node activate? You can check it in the CLI by reading the "active" attribute of the given Artemis server.
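
                                For example (adjust the server name; in domain mode, prefix the path with the target /host=.../server=...):

                                /subsystem=messaging-activemq/server=default:read-attribute(name=active)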

                                • 13. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                                  vamshi1413

                                  I am not sure how to check that; can you please let me know? Also, when I stopped server1, here is the log from server2:

                                   

                                  2018-05-23 11:57:51,681 WARN  [org.apache.activemq.artemis.core.client] (Thread-6 (ActiveMQ-client-global-threads-1951057230)) AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

                                  2018-05-23 11:57:51,687 WARN  [org.apache.activemq.artemis.core.client] (Thread-9 (ActiveMQ-client-global-threads-1951057230)) AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

                                  2018-05-23 11:57:51,688 WARN  [org.apache.activemq.artemis.core.server] (Thread-9 (ActiveMQ-client-global-threads-1951057230)) AMQ222095: Connection failed with failedOver=false

                                  2018-05-23 11:57:51,691 WARN  [org.apache.activemq.artemis.core.client] (Thread-8 (ActiveMQ-client-global-threads-1951057230)) AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

                                  2018-05-23 11:57:51,690 WARN  [org.apache.activemq.artemis.core.client] (Thread-12 (ActiveMQ-client-global-threads-1951057230)) AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

                                  2018-05-23 11:57:51,690 WARN  [org.apache.activemq.artemis.core.client] (Thread-11 (ActiveMQ-client-global-threads-1951057230)) AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

                                  2018-05-23 11:57:51,694 WARN  [org.apache.activemq.artemis.core.client] (Thread-7 (ActiveMQ-client-global-threads-1951057230)) AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

                                  2018-05-23 11:57:51,700 WARN  [org.apache.activemq.artemis.core.client] (Thread-13 (ActiveMQ-client-global-threads-1951057230)) AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

                                  2018-05-23 11:57:51,707 WARN  [org.apache.activemq.artemis.core.server] (Thread-9 (ActiveMQ-client-global-threads-1951057230)) AMQ222095: Connection failed with failedOver=false

                                  2018-05-23 11:57:51,708 INFO  [org.apache.activemq.artemis.core.server] (Thread-19 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@850d3aa-1113717546)) AMQ221029: stopped bridge sf.my-cluster.c869c816-5df0-11e8-bd9d-357bd70a9cb9

                                  2018-05-23 11:57:51,725 WARN  [org.apache.activemq.artemis.core.client] (Thread-14 (ActiveMQ-client-global-threads-1951057230)) AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

                                   

                                  2018-05-23 11:57:51,728 INFO  [org.apache.activemq.artemis.core.server] (AMQ119000: Activation for server ActiveMQServerImpl::serverUUID=null) AMQ221037: ActiveMQServerImpl::serverUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9 to become 'live'

                                  2018-05-23 11:57:51,745 WARN  [org.apache.activemq.artemis.core.client] (Thread-11 (ActiveMQ-client-global-threads-1951057230)) AMQ212004: Failed to connect to server.

                                  2018-05-23 11:57:51,774 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-14,ee,stg-dmz-app25:stg-app24.Member2) ISPN000094: Received new cluster view for channel server: [stg-dmz-app25:stg-app24.Member2|3] (2) [stg-dmz-app25:stg-app24.Member2, stg-dmz-app26:stg-app24.Member3]

                                  2018-05-23 11:57:51,775 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-14,ee,stg-dmz-app25:stg-app24.Member2) ISPN000094: Received new cluster view for channel web: [stg-dmz-app25:stg-app24.Member2|3] (2) [stg-dmz-app25:stg-app24.Member2, stg-dmz-app26:stg-app24.Member3]

                                  2018-05-23 11:57:51,779 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-14,ee,stg-dmz-app25:stg-app24.Member2) ISPN000094: Received new cluster view for channel hibernate: [stg-dmz-app25:stg-app24.Member2|3] (2) [stg-dmz-app25:stg-app24.Member2, stg-dmz-app26:stg-app24.Member3]

                                  2018-05-23 11:57:51,781 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-14,ee,stg-dmz-app25:stg-app24.Member2) ISPN000094: Received new cluster view for channel ejb: [stg-dmz-app25:stg-app24.Member2|3] (2) [stg-dmz-app25:stg-app24.Member2, stg-dmz-app26:stg-app24.Member3]

                                  2018-05-23 11:57:51,811 INFO  [org.infinispan.CLUSTER] (transport-thread--p6-t21) ISPN000310: Starting cluster-wide rebalance for cache routing, topology CacheTopology{id=8, rebalanceId=5, currentCH=DefaultConsistentHash{ns=80, owners = (2)[stg-dmz-app25:stg-app24.Member2: 40+13, stg-dmz-app26:stg-app24.Member3: 40+13]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[stg-dmz-app25:stg-app24.Member2: 40+40, stg-dmz-app26:stg-app24.Member3: 40+40]}, unionCH=null, actualMembers=[stg-dmz-app25:stg-app24.Member2, stg-dmz-app26:stg-app24.Member3]}

                                  2018-05-23 11:57:51,850 INFO  [org.infinispan.CLUSTER] (remote-thread--p4-t4) ISPN000336: Finished cluster-wide rebalance for cache routing, topology id = 8

                                  2018-05-23 11:57:51,955 INFO  [org.apache.activemq.artemis.core.server] (AMQ119000: Activation for server ActiveMQServerImpl::serverUUID=null) AMQ221007: Server is now live

                                  2018-05-23 11:57:51,981 INFO  [org.apache.activemq.artemis.core.server] (Thread-14 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@aea2c7f-547533483)) AMQ221027: Bridge ClusterConnectionBridge@3ca62aaf [name=sf.my-cluster.6ea64fd5-57aa-11e8-9ff9-7d30e8dfbf51, queue=QueueImpl[name=sf.my-cluster.6ea64fd5-57aa-11e8-9ff9-7d30e8dfbf51, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9]]@3261d34b targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@3ca62aaf [name=sf.my-cluster.6ea64fd5-57aa-11e8-9ff9-7d30e8dfbf51, queue=QueueImpl[name=sf.my-cluster.6ea64fd5-57aa-11e8-9ff9-7d30e8dfbf51, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9]]@3261d34b targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8180&host=stg-dmz-app25-wernerds-net], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1326026831[nodeUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8680&host=stg-dmz-app25-wernerds-net, address=jms, server=ActiveMQServerImpl::serverUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8180&host=stg-dmz-app25-wernerds-net], discoveryGroupConfiguration=null]] is connected

                                  2018-05-23 11:57:52,009 INFO  [org.apache.activemq.artemis.core.server] (Thread-15 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@aea2c7f-547533483)) AMQ221027: Bridge ClusterConnectionBridge@735cd970 [name=sf.my-cluster.77121815-5527-11e8-90bd-15c27e91d4e9, queue=QueueImpl[name=sf.my-cluster.77121815-5527-11e8-90bd-15c27e91d4e9, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9]]@7ea266e6 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@735cd970 [name=sf.my-cluster.77121815-5527-11e8-90bd-15c27e91d4e9, queue=QueueImpl[name=sf.my-cluster.77121815-5527-11e8-90bd-15c27e91d4e9, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9]]@7ea266e6 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8180&host=stg-dmz-app26-wernerds-net], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1326026831[nodeUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8680&host=stg-dmz-app25-wernerds-net, address=jms, server=ActiveMQServerImpl::serverUUID=c869c816-5df0-11e8-bd9d-357bd70a9cb9])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8180&host=stg-dmz-app26-wernerds-net], discoveryGroupConfiguration=null]] is connected

                                  • 14. Re: Co-located replication failover configuration in standalone-ha.xml EAP 7
                                    vamshi1413

                                    I don't think that if we kill the server it will report/update the other server. Also, are the WARN messages expected, or do I need to be concerned about them?
