3 Replies Latest reply on May 6, 2014 1:58 PM by pferraro

    LOCK protocol in JGroups stack on Wildfly 8.0.0.Final

    hchiorean

      Hi,

       

      ModeShape's clustering capability (in the Wildfly kit) is based on forking a JGroups channel (via the FORK protocol) off of the main Infinispan channel at runtime, using a programmatic approach similar to http://belaban.blogspot.ch/2013/08/how-to-hijack-jgroups-channel-inside.html

       

      We have an Arquillian integration test that basically performs these steps:

      1) packages & deploys a war file in WF - via Arquillian Remote. The first time this happens on a fresh container, the repository is also created & initialized - i.e. the FORK protocol is added on top of the JGroups stack and the fork channel is created

      2) runs some tests

      3) undeploys the war - again, via Arquillian

       

      The problem occurs when the container isn't stopped/started between test runs. In other words, steps 1-3 above are performed against a repository that has already been initialized and has already added the FORK protocol dynamically (via the API). The tests run fine the first time, but from then on any attempt to redeploy the web application fails with:

       

      15:29:33,738 DEBUG [org.jboss.as.controller.management-operation] (management-handler-thread - 6) JBAS014616: Operation ("read-resource") failed - address: ([
          ("subsystem" => "jgroups"),
          ("channel" => "modeshape")
      ]) - failure description: "JBAS014737: No child registry for (protocol, FORK)"
      

      The only solution is to stop (restart) the container.

       

      Is there anything special we need to be doing when creating FORK channels other than:

       

        Channel mainChannel = ((JGroupsTransport) cache.getRpcManager().getTransport()).getChannel();
        Protocol topProtocol = mainChannel.getProtocolStack().getTopProtocol();
        ForkChannel forkChannel = new ForkChannel(mainChannel, "modeshape-stack", FORK_CHANNEL_NAME,
                                                  true, ProtocolStack.ABOVE, topProtocol.getClass());
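
      For reference, the fuller lifecycle around that snippet looks roughly like the sketch below (a sketch only, assuming JGroups 3.4.x and Infinispan's JGroupsTransport; FORK_CHANNEL_NAME is our own constant, not an API name):

```java
import org.infinispan.remoting.transport.jgroups.JGroupsTransport;
import org.jgroups.Channel;
import org.jgroups.fork.ForkChannel;
import org.jgroups.stack.Protocol;
import org.jgroups.stack.ProtocolStack;

// Grab the main channel that Infinispan already created and connected.
// getTransport() returns the generic Transport interface, so a cast to
// JGroupsTransport is needed to reach getChannel().
Channel mainChannel =
        ((JGroupsTransport) cache.getRpcManager().getTransport()).getChannel();
Protocol topProtocol = mainChannel.getProtocolStack().getTopProtocol();

// create_fork_if_absent=true inserts the FORK protocol above the current
// top protocol if it is not already in the stack - this is the dynamic
// modification that the WF management layer appears not to expect.
ForkChannel forkChannel = new ForkChannel(mainChannel,
        "modeshape-stack",   // fork-stack id
        FORK_CHANNEL_NAME,   // fork-channel id
        true,                // create the FORK protocol on the fly if absent
        ProtocolStack.ABOVE,
        topProtocol.getClass());

// The cluster name passed to connect() is ignored; a fork channel
// piggybacks on the main channel's cluster.
forkChannel.connect("ignored");

// ... use forkChannel for repository messaging ...

// On undeploy we close only the fork channel; the main Infinispan channel
// is owned by the container and stays up.
forkChannel.close();
```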
      

       

      Thanks

        • 1. Re: LOCK protocol in JGroups stack on Wildfly 8.0.0.Final
          belaban

          Ales recently ran into the same issue. The quick fix for now is to add FORK to the WF configuration for the cache you'll use. Apparently, adding FORK dynamically doesn't currently work.

          Paul is looking into this specific issue.
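
          A rough sketch of that quick fix (illustrative only - the stack and protocol names here are examples; check them against the actual jgroups subsystem in your standalone-ha.xml):

```xml
<!-- Illustrative WF 8 jgroups subsystem fragment: declare FORK statically
     in the stack used by the "modeshape"/Infinispan channel, so it no
     longer has to be added at runtime. -->
<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
    <stack name="udp">
        <transport type="UDP" socket-binding="jgroups-udp"/>
        <protocol type="PING"/>
        <!-- ... the rest of the usual protocols ... -->
        <protocol type="FRAG2"/>
        <!-- FORK on top of the stack -->
        <protocol type="FORK"/>
    </stack>
</subsystem>
```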

          • 2. Re: LOCK protocol in JGroups stack on Wildfly 8.0.0.Final
            rachmato

            Hello Horia

             

            Could you please attach the code that deploys the app and modifies the channel? It seems the problem might be in the stack processing that is done when presenting the channel's run-time metrics (channel=*).

            I'll try to reproduce locally and have a look at it.

             

            Richard 

            • 3. Re: LOCK protocol in JGroups stack on Wildfly 8.0.0.Final
              pferraro

              OK - I see the problem.  WF8 uses a custom management resource that interrogates the installed channel services to allow management of channels.  Consequently, it expects the protocol stack of the running service to match the protocol stack defined in the domain model.  This all stems from the fact that, in WF8, the jgroups subsystem only defines protocol stacks, while the infinispan subsystem is responsible for creating the actual channels.

               

              In WF9, I plan to have all manageable channels explicitly enumerated in the jgroups subsystem - which will not only allow you to share channels across Infinispan cache containers, but will also make this custom resource obsolete.