3 Replies Latest reply on Jul 30, 2015 9:55 AM by rbenkovitz

    Cache data not being distributed in WildFly 8.2

    rbenkovitz

      Hello:

       

      I'm trying to set up a simple distributed cache in a full-ha WildFly deployment.  I believe I've followed the rules, the application boots fine, and I'm seeing what look like proper JGroups messages stating that the cluster is working.  However, when I attempt to use the cache, each node in the cluster seems to have its own local copy.

       

      Included are some details:

       

      Cache config in the infinispan subsystem:

       

              <cache-container name="processTracker" default-cache="dist" jndi-name="java:jboss/infinispan/processTracker">
                  <transport lock-timeout="60000"/>
                  <distributed-cache name="dist" batching="true" mode="SYNC" owners="2" l1-lifespan="0">
                      <locking isolation="REPEATABLE_READ" acquire-timeout="15000" concurrency-level="1000"/>
                      <file-store/>
                  </distributed-cache>
              </cache-container>

       

      Here is the singleton EJB code snippet that sets it up.  I initialize the cache by seeding it with an empty Map for each value of an enum (ProcessTracker.Process).  There are other methods that put and remove values in these underlying Maps:

       

      // Imports added for clarity - DateTime is assumed to be org.joda.time.DateTime.
      import java.util.HashMap;
      import java.util.Map;
      import java.util.concurrent.TimeUnit;

      import javax.annotation.PostConstruct;
      import javax.annotation.Resource;
      import javax.ejb.AccessTimeout;
      import javax.ejb.Singleton;
      import javax.ejb.Startup;

      import org.infinispan.Cache;
      import org.joda.time.DateTime;

      @Singleton
      @Startup
      @AccessTimeout(value = 60, unit = TimeUnit.SECONDS)
      public class ProcessTrackerCache implements ProcessTracker {

          @Resource(lookup = "java:jboss/infinispan/cache/processTracker/dist")
          private Cache<Process, Map<String, DateTime>> processMap;

          @PostConstruct
          public void initialize() {
              System.out.println("ProcessTracker Constructor");
              System.out.println("processMap=" + this.processMap);

              if (this.processMap != null) {
                  // Seed the cache with an empty map for each Process enum value.
                  for (Process process : ProcessTracker.Process.values()) {
                      processMap.put(process, new HashMap<String, DateTime>());
                  }
              }
          }

          // ... other transactional methods put values in the cache and read the cache.
      }
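
      Those other methods aren't shown here; as described above, they just put and remove values in the underlying Maps, roughly along these lines (a simplified, hypothetical sketch - the method name and key are invented for illustration, not the actual code):

          // Hypothetical illustration only - not the actual application code.
          public void recordRun(Process process, String jobId) {
              Map<String, DateTime> runs = processMap.get(process);
              runs.put(jobId, DateTime.now());
              // Re-put the map: mutating the retrieved HashMap alone is not replicated by the cache.
              processMap.put(process, runs);
          }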

       


      When each node starts up, the singleton seems to be initialized twice across four threads (could this be a clue?): on the first attempt the injected cache is null, and on the second it is set to the proper cache.  Here is the output from the server log:


      2015-07-28 18:19:50,578 INFO  [stdout] (ServerService Thread Pool -- 70) ProcessTracker Constructor

      2015-07-28 18:19:50,579 INFO  [stdout] (ServerService Thread Pool -- 72) ProcessTracker Constructor

      2015-07-28 18:19:50,579 INFO  [stdout] (ServerService Thread Pool -- 58) ProcessTracker Constructor

      2015-07-28 18:19:50,580 INFO  [stdout] (ServerService Thread Pool -- 62) ProcessTracker Constructor

      2015-07-28 18:19:50,583 INFO  [stdout] (ServerService Thread Pool -- 58) processMap=null

      2015-07-28 18:19:50,585 INFO  [stdout] (ServerService Thread Pool -- 72) processMap=null

      2015-07-28 18:19:50,585 INFO  [stdout] (ServerService Thread Pool -- 70) processMap=null

      2015-07-28 18:19:50,586 INFO  [stdout] (ServerService Thread Pool -- 62) processMap=null

      2015-07-28 18:19:50,628 INFO  [stdout] (ServerService Thread Pool -- 58) ProcessTracker Constructor

      2015-07-28 18:19:50,628 INFO  [stdout] (ServerService Thread Pool -- 72) ProcessTracker Constructor

      2015-07-28 18:19:50,628 INFO  [stdout] (ServerService Thread Pool -- 62) ProcessTracker Constructor

      2015-07-28 18:19:50,629 INFO  [stdout] (ServerService Thread Pool -- 70) ProcessTracker Constructor

      2015-07-28 18:19:50,629 INFO  [stdout] (ServerService Thread Pool -- 62) processMap=Cache 'dist'@bpc-111:node1/processTracker

      2015-07-28 18:19:50,629 INFO  [stdout] (ServerService Thread Pool -- 72) processMap=Cache 'dist'@bpc-111:node1/processTracker

      2015-07-28 18:19:50,630 INFO  [stdout] (ServerService Thread Pool -- 58) processMap=Cache 'dist'@bpc-111:node1/processTracker

      2015-07-28 18:19:50,630 INFO  [stdout] (ServerService Thread Pool -- 70) processMap=Cache 'dist'@bpc-111:node1/processTracker


      Also, I seem to be getting the correct JGroups messages about the cache being clustered (bpc-111 is the machine name; the cluster has two nodes, node1 and node2, both on the same box):


      2015-07-28 18:20:19,907 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,shared=udp) ISPN000094: Received new cluster view: [bpc-111:node1/processTracker|1] (2) [bpc-111:node1/processTracker, bpc-111:node2/processTracker]


      There was an earlier thread on a similar topic (https://developer.jboss.org/thread/194878).  The answer there had to do with bundling the Infinispan libraries: "When I changed the scope of the infinispan-core to provided, cache values were replicated on all nodes."  I'm not sure exactly what that means or how to go about doing it; my best guess is sketched below.
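
      If it means the application is packaging its own copy of infinispan-core instead of using the server's module, then (assuming a Maven build) the change would presumably look something like this:

          <!-- pom.xml: rely on the Infinispan that ships with WildFly instead of bundling it -->
          <dependency>
              <groupId>org.infinispan</groupId>
              <artifactId>infinispan-core</artifactId>
              <version>6.0.2.Final</version> <!-- version assumed to match what WildFly 8.2 bundles -->
              <scope>provided</scope>
          </dependency>

      combined with a module dependency in META-INF/MANIFEST.MF so the deployment sees the server's Infinispan classes:

          Dependencies: org.infinispan export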


      Any help would be appreciated - I seem to be at a dead end.


      - Rob

        • 1. Re: Cache data not being distributed in WildFly 8.2
          rbenkovitz

          I set the org.jgroups log level to TRACE (roughly as sketched below) - it seems that a heartbeat is going back and forth between the two nodes.
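
          The logging change was along these lines in the logging subsystem (a sketch against the standard logging configuration; only the org.jgroups category matters here):

              <logger category="org.jgroups">
                  <level name="TRACE"/>
              </logger>

          The trace output: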

           

          2015-07-29 13:24:55,165 TRACE [org.jgroups.protocols.UDP] (Timer-5,shared=udp) null: sending msg to null, src=bpc-111:node1/processTracker, headers are FD_ALL: heartbeat, UDP: [channel_name=processTracker]

          2015-07-29 13:24:55,166 TRACE [org.jgroups.protocols.UDP] (Timer-5,shared=udp) null: looping back message [dst: <null>, src: bpc-111:node1/processTracker (2 headers), size=0 bytes, flags=INTERNAL]

          2015-07-29 13:24:55,166 TRACE [org.jgroups.protocols.UDP] (INT-1,shared=udp) null: received [dst: <null>, src: bpc-111:node1/processTracker (2 headers), size=0 bytes, flags=INTERNAL], headers are FD_ALL: heartbeat, UDP: [channel_name=processTracker]

          2015-07-29 13:24:55,196 TRACE [org.jgroups.protocols.UDP] (Timer-2,shared=udp) null: sending 2 msgs (84 bytes (0.27% of max_bundle_size), collected in 30ms)  to 2 destination(s) (dests=[processTracker, server])

          2015-07-29 13:24:55,196 TRACE [org.jgroups.protocols.UDP] (INT-1,shared=udp) null: received [dst: <null>, src: bpc-111:node1/processTracker (2 headers), size=0 bytes, flags=INTERNAL], headers are FD_ALL: heartbeat, UDP: [channel_name=processTracker]

          2015-07-29 13:24:55,961 TRACE [org.jgroups.protocols.UDP] (INT-1,shared=udp) null: received [dst: <null>, src: bpc-111:node2/processTracker (2 headers), size=0 bytes, flags=INTERNAL], headers are FD_ALL: heartbeat, UDP: [channel_name=processTracker]

          2015-07-29 13:24:58,166 TRACE [org.jgroups.protocols.UDP] (Timer-3,shared=udp) null: sending msg to null, src=bpc-111:node1/processTracker, headers are FD_ALL: heartbeat, UDP: [channel_name=processTracker]

          2015-07-29 13:24:58,166 TRACE [org.jgroups.protocols.UDP] (Timer-3,shared=udp) null: looping back message [dst: <null>, src: bpc-111:node1/processTracker (2 headers), size=0 bytes, flags=INTERNAL]

          2015-07-29 13:24:58,166 TRACE [org.jgroups.protocols.UDP] (INT-1,shared=udp) null: received [dst: <null>, src: bpc-111:node1/processTracker (2 headers), size=0 bytes, flags=INTERNAL], headers are FD_ALL: heartbeat, UDP: [channel_name=processTracker]

          2015-07-29 13:24:58,196 TRACE [org.jgroups.protocols.UDP] (Timer-2,shared=udp) null: sending 2 msgs (84 bytes (0.27% of max_bundle_size), collected in 30ms)  to 2 destination(s) (dests=[processTracker, server])

          2015-07-29 13:24:58,196 TRACE [org.jgroups.protocols.UDP] (INT-1,shared=udp) null: received [dst: <null>, src: bpc-111:node1/processTracker (2 headers), size=0 bytes, flags=INTERNAL], headers are FD_ALL: heartbeat, UDP: [channel_name=processTracker]

          2015-07-29 13:24:58,961 TRACE [org.jgroups.protocols.UDP] (INT-1,shared=udp) null: received [dst: <null>, src: bpc-111:node2/processTracker (2 headers), size=0 bytes, flags=INTERNAL], headers are FD_ALL: heartbeat, UDP: [channel_name=processTracker]

           

          I'm not sure how to fully read it - I don't know what all the 'null' values are about.

           

          When I start a client that posts entries into the cache, I don't see any traffic (other than heartbeats) going back and forth between the nodes.  The client posts 100 objects into the cache, and they end up split roughly 50/50 between node1 and node2.  Of course, the goal is one distributed cache holding all 100 objects, not two separate caches holding 50 each.

          • 2. Re: Cache data not being distributed in WildFly 8.2
            rvansa

            'sending msg to null' means sending to everyone.  However, your nodes are not clustering, and FD_ALL is not responsible for that.  What are your JGroups stack configuration and socket bindings?  Also, check which addresses the nodes are bound to (IPv4/IPv6) - it's a common pitfall to try to use IPv4 addresses on a dual-stack machine without setting -Djava.net.preferIPv4Stack=true (Java binds to IPv6 by default on a dual-stack machine).
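
            In a domain setup that flag is typically passed through the server group's (or host's) JVM options - roughly something like this in domain.xml or host.xml (the jvm name here is just an example):

                <jvm name="default">
                    <jvm-options>
                        <option value="-Djava.net.preferIPv4Stack=true"/>
                    </jvm-options>
                </jvm>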

            • 3. Re: Cache data not being distributed in WildFly 8.2
              rbenkovitz

              Hello Radim:

               

              Thank you for replying.  First, yes, I am definitely using IPv4 addresses, and have the -Djava.net.preferIPv4Stack=true parameter set.

               

              Here are my JGroups stack config and socket bindings (I'm using the standard domain.xml configuration delivered with WildFly 8.2):

               

                  <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
                      <stack name="udp">
                          <transport type="UDP" socket-binding="jgroups-udp"/>
                          <protocol type="PING"/>
                          <protocol type="MERGE3"/>
                          <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
                          <protocol type="FD_ALL"/>
                          <protocol type="VERIFY_SUSPECT"/>
                          <protocol type="pbcast.NAKACK2"/>
                          <protocol type="UNICAST3"/>
                          <protocol type="pbcast.STABLE"/>
                          <protocol type="pbcast.GMS"/>
                          <protocol type="UFC"/>
                          <protocol type="MFC"/>
                          <protocol type="FRAG2"/>
                          <protocol type="RSVP"/>
                      </stack>
                      <stack name="tcp">
                          <transport type="TCP" socket-binding="jgroups-tcp"/>
                          <protocol type="MPING" socket-binding="jgroups-mping"/>
                          <protocol type="MERGE2"/>
                          <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                          <protocol type="FD"/>
                          <protocol type="VERIFY_SUSPECT"/>
                          <protocol type="pbcast.NAKACK2"/>
                          <protocol type="UNICAST3"/>
                          <protocol type="pbcast.STABLE"/>
                          <protocol type="pbcast.GMS"/>
                          <protocol type="MFC"/>
                          <protocol type="FRAG2"/>
                          <protocol type="RSVP"/>
                      </stack>
                  </subsystem>


              Socket bindings:

               

                  <socket-binding-groups>
                      <socket-binding-group name="full-ha-sockets" default-interface="public">
                          <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
                          <socket-binding name="http" port="${jboss.http.port:8080}"/>
                          <socket-binding name="https" port="${jboss.https.port:8443}"/>
                          <socket-binding name="jacorb" interface="unsecure" port="3528"/>
                          <socket-binding name="jacorb-ssl" interface="unsecure" port="3529"/>
                          <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
                          <socket-binding name="jgroups-tcp" port="7600"/>
                          <socket-binding name="jgroups-tcp-fd" port="57600"/>
                          <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
                          <socket-binding name="jgroups-udp-fd" port="54200"/>
                          <socket-binding name="messaging-group" port="0" multicast-address="${jboss.messaging.group.address:231.7.7.7}" multicast-port="${jboss.messaging.group.port:9876}"/>
                          <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
                          <socket-binding name="txn-recovery-environment" port="4712"/>
                          <socket-binding name="txn-status-manager" port="4713"/>
                          <outbound-socket-binding name="mail-smtp">
                              <remote-destination host="localhost" port="25"/>
                          </outbound-socket-binding>
                      </socket-binding-group>
                  </socket-binding-groups>

               

              I have not touched anything in these sections from the standard domain.xml delivered with WildFly 8.2.

               

              Thanks again!

               

              - Rob