5 Replies Latest reply on Mar 3, 2015 2:40 AM by valsaraj007

    Issue with starting multiple standalone nodes on single machine

    valsaraj007

      Hi,

       

      I tried to start 2 standalone nodes of WildFly-8.2.0.Final on a single machine. The nodes are started as:

       

      Node 1:

      standalone.bat --debug 8787 --server-config=standalone-full-ha.xml -Djboss.node.name=node1

       

      Node 2:

      standalone.bat --debug 8887 --server-config=standalone-full-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node2
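
      For reference, the port offset shifts every socket binding on node 2 by 100 so the two nodes do not collide. Assuming the stock socket-binding group in standalone-full-ha.xml, that means roughly:

      http:        8080 -> 8180
      management:  9990 -> 10090
      jgroups-udp: 55200 -> 55300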

       

      I started Node 2 first and then started Node 1.

       

      Node 2 started fine, and while node 1 was starting, the following log appeared in node 2:

      18:05:22,697 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport](Incoming-8,shared=udp) ISPN000094: Received new cluster view: [node2/ejb|3] (2) [node2/ejb, node1/ejb]

       

      After showing the schema update completed message, node 1 got stuck. There was no error in either server log.

      So I stopped node 2; node 1 then continued with a warning message and started fine. The messages are shown below:

      INFO  [org.jboss.messaging] (MSC service thread 1-3) JBAS011615: Registered HTTP upgrade for hornetq-remoting protocol handled by http-acceptor acceptor

      INFO  [org.jboss.messaging] (MSC service thread 1-2) JBAS011615: Registered HTTP upgrade for hornetq-remoting protocol handled by http-acceptor-throughput acceptor

      WARN  [org.hornetq.core.client] (hornetq-discovery-group-thread-dg-group1) HQ212034: There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=38ffb84c-8b43-11e4-9ca9-5b406c32b9eb

       

      What may be the problem?

       

      Thanks!

        • 1. Re: Issue with starting multiple standalone nodes on single machine
          jbertram

          Did you copy the configuration/data from one server to the other?  If so, then you likely have a duplicate UUID in your HornetQ journal, which is causing this (the UUID is generated when the journal is first created).  If this is your problem, simply delete the HornetQ journal (in the data directory) from one of the servers.
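
          Something like this should do it on Windows (a sketch, assuming the default standalone data directory and the stock journal directory names from a WildFly 8.2 install; stop the server first):

          rem wipe the HornetQ journal of the copied node
          rmdir /s /q standalone\data\messagingjournal
          rmdir /s /q standalone\data\messagingbindings
          rmdir /s /q standalone\data\messaginglargemessages
          rmdir /s /q standalone\data\messagingpaging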

          • 2. Re: Issue with starting multiple standalone nodes on single machine
            valsaraj007

            Hi acxjbertr,

             

            Yes, I took a copy of node 1. I will check by clearing the data folder.

             

            Thanks much!

            • 3. Re: Issue with starting multiple standalone nodes on single machine
              valsaraj007

              Thanks much acxjbertr. That error is gone now that I removed the messagingjournal folder from the data folder of node 2, but I am getting the following error on startup; it goes away when I restart. It comes back whenever I shut down all nodes and then start the second node. In that case I stop and start the second node again and it works fine.

               

              ERROR [org.hornetq.core.server] (default I/O-1) HQ224018: Failed to create session: HornetQClusterSecurityException[errorType=CLUSTER_SECURITY_EXCEPTION message=HQ119099: Unable to authenticate cluster user: HORNETQ.CLUSTER.ADMIN.USER]

                at org.hornetq.core.security.impl.SecurityStoreImpl.authenticate(SecurityStoreImpl.java:122)

                at org.hornetq.core.server.impl.HornetQServerImpl.createSession(HornetQServerImpl.java:1020)

                at org.hornetq.core.protocol.core.impl.HornetQPacketHandler.handleCreateSession(HornetQPacketHandler.java:149)

                at org.hornetq.core.protocol.core.impl.HornetQPacketHandler.handlePacket(HornetQPacketHandler.java:77)

                at org.hornetq.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:641)

                at org.hornetq.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:556)

                at org.hornetq.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:532)

                at org.hornetq.core.remoting.server.impl.RemotingServiceImpl$DelegatingBufferHandler.bufferReceived(RemotingServiceImpl.java:658)

                at org.hornetq.core.remoting.impl.netty.HornetQChannelHandler.channelRead(HornetQChannelHandler.java:73)

                at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:338)

                at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:324)

                at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:153)

                at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:338)

                at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:324)

                at io.netty.handler.codec.ByteToMessageDecoder.handlerRemoved(ByteToMessageDecoder.java:110)

                at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:524)

                at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved(DefaultChannelPipeline.java:518)

                at io.netty.channel.DefaultChannelPipeline.remove0(DefaultChannelPipeline.java:348)

                at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:319)

                at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:296)

                at org.hornetq.core.protocol.ProtocolHandler$ProtocolDecoder.decode(ProtocolHandler.java:168)

                at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:226)

                at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:139)

                at org.hornetq.core.protocol.ProtocolHandler$ProtocolDecoder.channelRead(ProtocolHandler.java:111)

                at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:338)

                at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:324)

                at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)

                at org.xnio.netty.transport.AbstractXnioSocketChannel$ReadListener.handleEvent(AbstractXnioSocketChannel.java:435)

                at org.xnio.netty.transport.AbstractXnioSocketChannel$ReadListener.handleEvent(AbstractXnioSocketChannel.java:371)

                at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) [xnio-api-3.3.0.Final.jar:3.3.0.Final]

                at org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66) [xnio-api-3.3.0.Final.jar:3.3.0.Final]

                at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:88) [xnio-nio-3.3.0.Final.jar:3.3.0.Final]

                at org.xnio.nio.WorkerThread.run(WorkerThread.java:539) [xnio-nio-3.3.0.Final.jar:3.3.0.Final]

              • 4. Re: Issue with starting multiple standalone nodes on single machine
                jbertram

                Check this out.
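
                In short, the cluster connection authenticates as the cluster user, and every node needs the same non-default cluster password. A sketch of the relevant bit of standalone-full-ha.xml (element names as in a stock WildFly 8.x messaging subsystem; pick your own password):

                <subsystem xmlns="urn:jboss:domain:messaging:2.0">
                    <hornetq-server>
                        <cluster-password>${jboss.messaging.cluster.password:CHANGE ME!!}</cluster-password>
                        <!-- ... rest of the server config ... -->
                    </hornetq-server>
                </subsystem>

                With that default expression in place you can instead pass -Djboss.messaging.cluster.password=yourSecret on the standalone.bat command line of both nodes.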

                • 5. Re: Issue with starting multiple standalone nodes on single machine
                  valsaraj007

                  Thanks much Justin. It worked when I set the password!