4 Replies Latest reply on Jul 27, 2017 5:20 AM by Walid AOUNI

    Infinispan cache not visible between the two servers using JGroups WildFly10

    Walid AOUNI


      I would like to implement cache replication between two servers on the same network using Infinispan and JGroups on WildFly 10.

      With the settings I made, I can see from the TRACE logs that the TCP packets are sent correctly and that the nodes exchange data packets.

      However, when I put information into the cache on the first server (server 1) using "put" and then start the second server (server 2), server 2 does not detect the information that server 1 replicated.

      (Server 2 only sees the cache entries it created itself, not the ones created by server 1.)

      As a result, my "getCache" does not return the information that should already have been replicated.

      I wonder if there is an additional strategy to apply or a setting missing from my configuration.

      Your help is welcome.


      All my configuration is in the standalone-full-ha.xml file:


      *** Infinispan Cache ***


      <cache-container name="server" aliases="singleton cluster" default-cache="dist" module="org.wildfly.clustering.server">
                      <transport lock-timeout="60000"/>
                      <replicated-cache name="dist" mode="SYNC">
                          <transaction locking="OPTIMISTIC" mode="FULL_XA"/>
                          <eviction strategy="NONE"/>
                      </replicated-cache>
      </cache-container>

      ****  JGroups Conf ****


      <subsystem xmlns="urn:jboss:domain:jgroups:4.0">
                  <channels default="ee">
                      <channel name="ee" stack="tcp"/>
                  </channels>
                  <stacks>
                      <stack name="udp">
                          <transport type="UDP" socket-binding="jgroups-udp"/>
                          <protocol type="PING"/>
                          <protocol type="MERGE3"/>
                          <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
                          <protocol type="FD_ALL"/>
                          <protocol type="VERIFY_SUSPECT"/>
                          <protocol type="pbcast.NAKACK2"/>
                          <protocol type="UNICAST3"/>
                          <protocol type="pbcast.STABLE"/>
                          <protocol type="pbcast.GMS"/>
                          <protocol type="UFC"/>
                          <protocol type="MFC"/>
                          <protocol type="FRAG2"/>
                      </stack>
                      <stack name="tcp">
                          <transport type="TCP" socket-binding="jgroups-tcp"/>
                          <protocol type="MPING" socket-binding="jgroups-mping"/>
                          <protocol type="MERGE3"/>
                          <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                          <protocol type="FD"/>
                          <protocol type="VERIFY_SUSPECT"/>
                          <protocol type="pbcast.NAKACK2"/>
                          <protocol type="UNICAST3"/>
                          <protocol type="pbcast.STABLE"/>
                          <protocol type="pbcast.GMS"/>
                          <protocol type="MFC"/>
                          <protocol type="FRAG2"/>
                      </stack>
                  </stacks>
      </subsystem>
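      Since MPING relies on multicast, which is sometimes blocked between machines on a LAN, one thing worth trying is a static member list with TCPPING instead of MPING. A sketch (SERVER1_IP and SERVER2_IP are placeholders, not my real addresses):

```xml
<stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <!-- TCPPING replaces multicast-based MPING with an explicit member list;
         SERVER1_IP / SERVER2_IP are placeholders for the real addresses -->
    <protocol type="TCPPING">
        <property name="initial_hosts">SERVER1_IP[7600],SERVER2_IP[7600]</property>
        <property name="port_range">0</property>
    </protocol>
    <!-- remaining protocols (MERGE3, FD_SOCK, FD, ...) unchanged from the tcp stack -->
</stack>
```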


      *** Socket Binding Conf ***


              <interface name="private">
                  <inet-address value="${jboss.bind.address.private:}"/>
              </interface>


      <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
              <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
              <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
              <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
              <socket-binding name="http" port="${jboss.http.port:8080}"/>
              <socket-binding name="https" port="${jboss.https.port:8443}"/>
              <socket-binding name="iiop" interface="unsecure" port="3528"/>
              <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/>
              <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="${jboss.default.multicast.address:}" multicast-port="45700"/>
              <socket-binding name="jgroups-tcp" interface="private" port="7600"/>
              <socket-binding name="jgroups-tcp-fd" interface="private" port="57600"/>
              <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:}" multicast-port="45700"/>
              <socket-binding name="jgroups-udp-fd" interface="private" port="54200"/>
              <socket-binding name="modcluster" port="0" multicast-address="" multicast-port="23364"/>
              <socket-binding name="txn-recovery-environment" port="4712"/>
              <socket-binding name="txn-status-manager" port="4713"/>
              <outbound-socket-binding name="mail-smtp">
                  <remote-destination host="localhost" port="25"/>
              </outbound-socket-binding>
      </socket-binding-group>


      *** My server launch settings ***


      -Djava.net.preferIPv4Stack=true -Djboss.server.default.config=standalone-full-ha.xml -Djboss.bind.address.private=



      (= server 1's address; server 2 is launched the same way with its own IP address.)
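      For completeness, the full launch of the two servers looks roughly like the sketch below (the IP addresses are placeholders for the real ones; on Windows the script is standalone.bat):

```shell
# Server 1 (placeholder address 192.168.1.10)
./standalone.sh \
    -Djava.net.preferIPv4Stack=true \
    -Djboss.server.default.config=standalone-full-ha.xml \
    -Djboss.bind.address.private=192.168.1.10

# Server 2 (placeholder address 192.168.1.11)
./standalone.sh \
    -Djava.net.preferIPv4Stack=true \
    -Djboss.server.default.config=standalone-full-ha.xml \
    -Djboss.bind.address.private=192.168.1.11
```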





      **** Java Code  ****


      private void initCache() {
              try {
                  InitialContext context = new InitialContext();
                  String infinispan_path = "java:jboss/infinispan/container/server";
                  container = (CacheContainer) context.lookup(infinispan_path);
                  this.cache = container.getCache();
              } catch (NamingException e) {
                  // log the exception itself; cache is still null here, so cache.getName() would throw an NPE
                  LOGGER.error("Cache initialization error", e);
              }
      }


      I cannot use @Resource in my case because the environment I work in has some constraints (I don't think my problem is related to that, though).


      This is how I put an entry into the cache:

      public SecretInfo createSecret(String login, String application, Long timeStamp) {
          this.getCache().put("key", data);
          // ...
      }




      **** Log Example ****


      2017-07-12 11:19:07,632 TRACE [org.jgroups.protocols.TCP] (TransferQueueBundler,ee,ac-auz-w7-84425) dest= (4169 bytes)
      2017-07-12 11:19:07,632 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (thread-15,ee,ac-auz-w7-84425) ac-auz-w7-84425: received ac-auz-w7-84425#491
      2017-07-12 11:19:07,632 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (thread-15,ee,ac-auz-w7-84425) ac-auz-w7-84425: delivering ac-auz-w7-84425#491-491 (1 messages)
      2017-07-12 11:19:07,632 TRACE [org.jgroups.protocols.MFC] (thread-15,ee,ac-auz-w7-84425) ac-auz-w7-84425 used 4096 credits, 1212946 remaining
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.TCP] (thread-2,ee,ac-auz-w7-84425) ac-auz-w7-84425: received [dst: ac-auz-w7-84425, src: ptavinnhan-pc (2 headers), size=0 bytes, flags=INTERNAL], headers are FD: heartbeat, TP: [cluster_name=ee]
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.FD] (thread-2,ee,ac-auz-w7-84425) ac-auz-w7-84425: received are-you-alive from ptavinnhan-pc, sending response
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.TCP] (thread-2,ee,ac-auz-w7-84425) ac-auz-w7-84425: sending msg to ptavinnhan-pc, src=ac-auz-w7-84425, headers are FD: heartbeat ack, TP: [cluster_name=ee]
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.TCP] (TransferQueueBundler,ee,ac-auz-w7-84425) ac-auz-w7-84425: sending 1 msgs (71 bytes (355,00% of max_bundle_size) to 1 dests(s): [ee:ptavinnhan-pc]
      2017-07-12 11:19:07,682 TRACE [org.jgroups.protocols.TCP] (TransferQueueBundler,ee,ac-auz-w7-84425) dest= (74 bytes)


      Thank you !