
    Error clustering using a backup node

    Giacomo Genovese

      Hi all,

      I'm trying to set up a CapeDwarf cluster with a live node and a backup node (actually on the same machine, bound to two different addresses), as described in the WildFly Cookbook, Chapter 13, "Messaging with WildFly - Clustering HornetQ using message replication".


      I always start the live instance first and then the backup one.
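
      For clarity, this is roughly how I launch the two instances (installation paths, configuration file name, and bind addresses below are placeholders for my actual ones):

          # live node (its profile defaults jboss.messaging.hornetq.backup to false)
          $LIVE_HOME/bin/standalone.sh -c standalone-capedwarf.xml -Djboss.bind.address=192.168.1.10

          # backup node (its profile defaults jboss.messaging.hornetq.backup to true)
          $BACKUP_HOME/bin/standalone.sh -c standalone-capedwarf.xml -Djboss.bind.address=192.168.1.11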

      I always get an error, no matter how I deploy the application: via the deployments folder, by deploying manually on the live node first and then on the backup node, or by any other method I know of.

      Below is the error I get when I try to deploy the application on the backup node:

      12:46:38,694 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) JBAS014613: Operation ("deploy") failed - address: ([("deployment" => "app.war")]) - failure description: {
          "JBAS014771: Services with missing/unavailable dependencies" => ["jboss.undertow.deployment.default-server.default-host./app.UndertowDeploymentInfoService is missing [jboss.naming.context.java.JmsXA, jboss.naming.context.java.queue.capedwarf]"],
          "JBAS014879: One or more services were unable to start due to one or more indirect dependencies not being available." => {
              "Services that were unable to start:" => [
                  "jboss.capedwarf.warmup.app.default",
                  "jboss.deployment.unit.\"app.war\".deploymentCompleteService",
                  "jboss.undertow.deployment.default-server.default-host./app"
              ],
              "Services that may be the cause:" => [
                  "jboss.naming.context.java.ConnectionFactory",
                  "jboss.naming.context.java.JmsXA",
                  "jboss.naming.context.java.queue.capedwarf"
              ]
          }
      }

      Here are the relevant settings from my XML:

      LIVE node


      <hornetq-server>
          <jmx-management-enabled>true</jmx-management-enabled>
          <cluster-password>${jboss.messaging.cluster.password:XXXX}</cluster-password>
          <persistence-enabled>true</persistence-enabled>
          <backup>${jboss.messaging.hornetq.backup:false}</backup>
          <failover-on-shutdown>true</failover-on-shutdown>
          <check-for-live-server>true</check-for-live-server>
          <shared-store>false</shared-store>
          <journal-file-size>102400</journal-file-size>
          <journal-min-files>2</journal-min-files>
          <connectors>

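      (My paste is cut off at the connectors element. The Cookbook's replication setup uses the usual netty connector/acceptor pair, so that part of my file is along these lines; the exact names and socket binding here are the stock HA ones and may differ:)

          <connectors>
              <netty-connector name="netty" socket-binding="messaging"/>
          </connectors>
          <acceptors>
              <netty-acceptor name="netty" socket-binding="messaging"/>
          </acceptors>
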
      BACKUP node     


      <hornetq-server>
          <persistence-enabled>true</persistence-enabled>
          <cluster-password>${jboss.messaging.cluster.password:XXXX}</cluster-password>
          <jmx-management-enabled>true</jmx-management-enabled>
          <backup>${jboss.messaging.hornetq.backup:true}</backup>
          <shared-store>false</shared-store>
          <allow-failback>true</allow-failback>
          <failover-on-shutdown>true</failover-on-shutdown>
          <journal-file-size>102400</journal-file-size>
          <journal-min-files>2</journal-min-files>
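
      Message replication also needs a cluster connection on both nodes; both of my profiles keep the default one from the HA configuration, which looks roughly like this (connector and discovery-group names are the stock ones, so treat them as assumptions):

          <cluster-connections>
              <cluster-connection name="my-cluster">
                  <address>jms</address>
                  <connector-ref>netty</connector-ref>
                  <discovery-group-ref discovery-group-name="dg-group1"/>
              </cluster-connection>
          </cluster-connections>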


      In contrast, I have no problem clustering two live nodes.


      Could someone please explain why this happens?


      Thanks a lot in advance for your help.


      Best,

      Giacomo.