4 Replies Latest reply on Jul 27, 2017 5:20 AM by waouni

    Infinispan cache not visible between the two servers using JGroups WildFly10

    waouni

      Hi,

      I would like to implement cache replication between two servers on the same network using infinispan, jgroups and WildFly 10.

      With the settings I made, I see very well that the TCP packet is sent correctly and the nodes exchange the data packets.

      On the other hand, when I put information into the cache on the first server (server 1) using "put" and then start the second server (server 2), server 2 does not detect the information that was replicated by server 1.

      (Server 2 only sees the cache entries it created itself, not the ones created by server 1.)

      So my "getCache" does not return the information that has already been replicated.

      I wonder if there is an additional strategy to be applied or a missing setting.

      Your help is welcome.

       

      All my configuration is in the standalone-full-ha.xml file:

       

      *** Infinispan Cache ***

       

      <cache-container name="server" aliases="singleton cluster" default-cache="dist" module="org.wildfly.clustering.server">
                      <transport lock-timeout="60000"/>
                      <replicated-cache name="dist" mode="SYNC">
                          <transaction locking="OPTIMISTIC" mode="FULL_XA"/>
                          <eviction strategy="NONE"/>
                      </replicated-cache>
      </cache-container>
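
      Note: for a node that joins after entries already exist, a replicated cache relies on the initial state transfer to pull the existing data from the other members. State transfer is enabled by default; an explicit <state-transfer> element inside the replicated cache makes this visible and lets the timeout be raised for large caches. This is only a sketch of what such a configuration could look like (the timeout value is an assumption, not part of my original configuration):

      ```xml
      <replicated-cache name="dist" mode="SYNC">
          <transaction locking="OPTIMISTIC" mode="FULL_XA"/>
          <eviction strategy="NONE"/>
          <!-- assumed value: raise the state-transfer timeout if the joining
               node has a lot of existing state to fetch -->
          <state-transfer timeout="240000"/>
      </replicated-cache>
      ```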
      

       

      ****  JGroups Conf ****

       

      <subsystem xmlns="urn:jboss:domain:jgroups:4.0">
                  <channels default="ee">
                      <channel name="ee" stack="tcp"/>
                  </channels>
                  <stacks>
                      <stack name="udp">
                          <transport type="UDP" socket-binding="jgroups-udp"/>
                          <protocol type="PING"/>
                          <protocol type="MERGE3"/>
                          <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
                          <protocol type="FD_ALL"/>
                          <protocol type="VERIFY_SUSPECT"/>
                          <protocol type="pbcast.NAKACK2"/>
                          <protocol type="UNICAST3"/>
                          <protocol type="pbcast.STABLE"/>
                          <protocol type="pbcast.GMS"/>
                          <protocol type="UFC"/>
                          <protocol type="MFC"/>
                          <protocol type="FRAG2"/>
                      </stack>
                      <stack name="tcp">
                          <transport type="TCP" socket-binding="jgroups-tcp"/>
                          <protocol type="MPING" socket-binding="jgroups-mping"/>
                          <protocol type="MERGE3"/>
                          <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                          <protocol type="FD"/>
                          <protocol type="VERIFY_SUSPECT"/>
                          <protocol type="pbcast.NAKACK2"/>
                          <protocol type="UNICAST3"/>
                          <protocol type="pbcast.STABLE"/>
                          <protocol type="pbcast.GMS"/>
                          <protocol type="MFC"/>
                          <protocol type="FRAG2"/>
                      </stack>
                  </stacks>
      </subsystem>
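
      A useful sanity check is whether both nodes actually appear in the same channel view. Assuming the WildFly 10 management model exposes the channel's runtime "view" attribute (the exact resource path may differ between versions, so treat this as a sketch), the CLI can show it:

      ```
      $ ./bin/jboss-cli.sh --connect
      # the view should list both cluster members once they have joined
      /subsystem=jgroups/channel=ee:read-attribute(name=view)
      ```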
      

       

      *** Socket Binding Conf ***

       

      <interfaces>
              ....
              <interface name="private">
                  <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
              </interface>
             ....
      </interfaces>
      

       

      <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
              <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
              <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
              <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
              <socket-binding name="http" port="${jboss.http.port:8080}"/>
              <socket-binding name="https" port="${jboss.https.port:8443}"/>
              <socket-binding name="iiop" interface="unsecure" port="3528"/>
              <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/>
              <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
              <socket-binding name="jgroups-tcp" interface="private" port="7600"/>
              <socket-binding name="jgroups-tcp-fd" interface="private" port="57600"/>
              <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
              <socket-binding name="jgroups-udp-fd" interface="private" port="54200"/>
              <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
              <socket-binding name="txn-recovery-environment" port="4712"/>
              <socket-binding name="txn-status-manager" port="4713"/>
              <outbound-socket-binding name="mail-smtp">
                  <remote-destination host="localhost" port="25"/>
              </outbound-socket-binding>
      </socket-binding-group>
      

       

      *** My server launch settings ***

       

      -Djava.net.preferIPv4Stack=true -Djboss.server.default.config=standalone-full-ha.xml -Djboss.bind.address.private=10.203.16.98
      

       

       

      (10.203.16.98 = server 1's address; the second server is launched with its own IP address.)

       

       

       

       

      **** Java Code  ****

         

      private void initCache() {
              try {
                  InitialContext context = new InitialContext();
                  String infinispanPath = "java:jboss/infinispan/container/server";
                  container = (CacheContainer) context.lookup(infinispanPath);
                  this.cache = container.getCache();
              } catch (NamingException e) {
                  // Log the lookup failure itself; the cache is still null here,
                  // so calling cache.getName() at this point would throw a NullPointerException.
                  LOGGER.error("Cache initialization error", e);
              }
          }
      

       

      I cannot use @Resource in my case because the environment I work in has some constraints (I don't think my problem is related to that, though).

       

      This is how I put an entry into the cache:

      @Override
      @CacheEntryCreated
      public SecretInfo createSecret(String login, String application, Long timeStamp) {
          ...
          this.getCache().put("key", data);
          ...
      }

       

       

       

      **** Log Example ****

       

      2017-07-12 11:19:07,632 TRACE [org.jgroups.protocols.TCP] (TransferQueueBundler,ee,ac-auz-w7-84425) dest=10.203.17.88:7600 (4169 bytes)
      2017-07-12 11:19:07,632 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (thread-15,ee,ac-auz-w7-84425) ac-auz-w7-84425: received ac-auz-w7-84425#491
      2017-07-12 11:19:07,632 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (thread-15,ee,ac-auz-w7-84425) ac-auz-w7-84425: delivering ac-auz-w7-84425#491-491 (1 messages)
      2017-07-12 11:19:07,632 TRACE [org.jgroups.protocols.MFC] (thread-15,ee,ac-auz-w7-84425) ac-auz-w7-84425 used 4096 credits, 1212946 remaining
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.TCP] (thread-2,ee,ac-auz-w7-84425) ac-auz-w7-84425: received [dst: ac-auz-w7-84425, src: ptavinnhan-pc (2 headers), size=0 bytes, flags=INTERNAL], headers are FD: heartbeat, TP: [cluster_name=ee]
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.FD] (thread-2,ee,ac-auz-w7-84425) ac-auz-w7-84425: received are-you-alive from ptavinnhan-pc, sending response
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.TCP] (thread-2,ee,ac-auz-w7-84425) ac-auz-w7-84425: sending msg to ptavinnhan-pc, src=ac-auz-w7-84425, headers are FD: heartbeat ack, TP: [cluster_name=ee]
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.TCP] (TransferQueueBundler,ee,ac-auz-w7-84425) ac-auz-w7-84425: sending 1 msgs (71 bytes (355,00% of max_bundle_size) to 1 dests(s): [ee:ptavinnhan-pc]
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.TCP] (TransferQueueBundler,ee,ac-auz-w7-84425) dest=10.203.17.88:7600 (74 bytes)
      
      

       

      Thank you !

        • 1. Re: Infinispan cache not visible between the two servers using JGroups WildFly10
          pferraro

          What's missing is a dependency between your application and the requisite cache that you are trying to access.

          There are a couple of ways to properly reference a server-managed cache in your application.

           

          1. Inject the default cache of the "server" cache container:

          @Resource(lookup = "java:jboss/infinispan/cache/server/default")
          Cache<?, ?> cache;
          

           

          2. Establish the dependency on the default cache of the "server" cache container via a deployment descriptor, then use a vanilla JNDI lookup of the cache:

          <resource-ref>
              <res-ref-name>infinispan/cache</res-ref-name>
              <lookup-name>java:jboss/infinispan/cache/server/default</lookup-name>
          </resource-ref>
          

           

          Cache<?, ?> cache = (Cache<?, ?>) new InitialContext().lookup("java:comp/env/infinispan/cache");
          
          1 of 1 people found this helpful
          • 2. Re: Infinispan cache not visible between the two servers using JGroups WildFly10
            waouni

            Thank you pferraro for your reply.

            After investigation, my problem was that the "put" was not done correctly. And since no error was caught, it was difficult to understand why the replication was not working.

            So my real problem was a serialization error on the object passed into the Infinispan cache (the object was not actually serializable).

            I corrected it and replication now works perfectly.

            In my example, I didn't need to establish the dependency via a deployment descriptor; a simple lookup of "java:jboss/infinispan/container/server" was enough.
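
            A quick way to catch this kind of failure early is to round-trip the value through plain Java serialization before putting it into the cache, since that is roughly what Infinispan must do to replicate an entry to another node. This is just a minimal sketch; the SecretInfo class below is a simplified stand-in I made up, not my real class:

            ```java
            import java.io.*;

            public class SerializationCheck {
                // Hypothetical value type: it (and every field it contains)
                // must implement Serializable before it can be replicated.
                static class SecretInfo implements Serializable {
                    private static final long serialVersionUID = 1L;
                    final String login;
                    SecretInfo(String login) { this.login = login; }
                }

                // Round-trips an object through Java serialization; throws
                // NotSerializableException if the value is not serializable.
                static Object roundTrip(Object value) throws IOException, ClassNotFoundException {
                    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                        out.writeObject(value);
                    }
                    try (ObjectInputStream in = new ObjectInputStream(
                            new ByteArrayInputStream(bytes.toByteArray()))) {
                        return in.readObject();
                    }
                }

                public static void main(String[] args) throws Exception {
                    SecretInfo copy = (SecretInfo) roundTrip(new SecretInfo("alice"));
                    System.out.println(copy.login); // prints "alice"
                }
            }
            ```

            Running this check once against the real object would have shown the serialization error immediately, instead of the entry silently failing to replicate.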

             

             

            • 3. Re: Infinispan cache not visible between the two servers using JGroups WildFly10
              pferraro

              waouni You still need to establish a dependency (as I mentioned above).  The reason this appears to work is because standalone-full-ha.xml starts the default JGroups channel by default.  Each cache container is configured to start passively following the startup of the channel, and cache configurations start passively following startup of their respective cache containers.  Without the proper dependencies, there exists a race condition between the startup of your deployment and the availability of your cache configuration.

              • 4. Re: Infinispan cache not visible between the two servers using JGroups WildFly10
                waouni

                This is good to know.

                I will do that. Thank you Paul!