4 Replies Latest reply on Feb 7, 2008 4:58 AM by Mircea Markus

    Is Fqn reusable?

    Adam Warski Master


      I'm using JBoss Cache 1.4.1 (the one bundled with Seam) and have the following method:

      public void testCache1() {
          pojoCache.put("/a/b", 1, 1);
      }

      In my test (which runs inside a Seam app), I start a transaction, invoke testCache1(), commit the transaction, and then repeat the whole sequence once more. It works without problems.

      Now, I have another version of this method (which I thought would be more efficient :) ):

      private static Fqn fqn1 = new Fqn("a");
      private static Fqn fqn2 = new Fqn(fqn1, "b");

      public void testCache2() {
          pojoCache.put(fqn2, 1, 1);
      }

      When I repeat the test with testCache2(), it hangs on the second invocation of the put(fqn2,...) method.

      The only difference between the two is that one version uses the private static Fqns and the other builds the path from a String each time. So: can Fqn instances be reused, and if not, why?

      Adam Warski

        • 1. Re: Is Fqn reusable?
          Adam Warski Master

          Strangely, this mostly happens when transaction isolation level is set to NONE.

          My cache setup:

          <?xml version="1.0" encoding="UTF-8"?>
          <server>
           <classpath codebase="./lib" archives="jboss-cache.jar, jgroups.jar"/>

           <!-- ==================================================================== -->
           <!-- Defines TreeCache configuration                                      -->
           <!-- ==================================================================== -->
           <mbean code="org.jboss.cache.TreeCache" name="jboss.cache:service=TreeCache">

            <!-- Configure the TransactionManager -->
            <attribute name="TransactionManagerLookupClass">org.jboss.cache.JBossTransactionManagerLookup</attribute>

            <!-- Node locking scheme: PESSIMISTIC (default) -->
            <attribute name="NodeLockingScheme">PESSIMISTIC</attribute>

            <!-- Node locking isolation level: REPEATABLE_READ (default);
                 ignored if NodeLockingScheme is OPTIMISTIC -->
            <attribute name="IsolationLevel">NONE</attribute>

            <!-- Lock parent before doing node additions/removes -->
            <attribute name="LockParentForChildInsertRemove">true</attribute>

            <!-- Valid modes are LOCAL, REPL_ASYNC and REPL_SYNC -->
            <attribute name="CacheMode">LOCAL</attribute>

            <!-- Whether each interceptor should have an mbean registered to
                 capture and display its statistics. -->
            <attribute name="UseInterceptorMbeans">true</attribute>

            <!-- Name of cluster. Needs to be the same for all TreeCache nodes in a
                 cluster, in order to find each other -->
            <attribute name="ClusterName">JBoss-Feeds-Cluster</attribute>

            <!-- Uncomment next statements to enable JGroups multiplexer.
                 This configuration is dependent on the JGroups multiplexer being
                 registered in an MBean server such as JBossAS.
            <attribute name="MultiplexerService">jgroups.mux:name=Multiplexer</attribute>
            <attribute name="MultiplexerStack">udp</attribute>
            -->

            <!-- JGroups protocol stack properties. ClusterConfig isn't used if the
                 multiplexer is enabled and successfully initialized. -->
            <attribute name="ClusterConfig">
             <config>
              <!-- UDP: if you have a multihomed machine, set the bind_addr
                   attribute to the appropriate NIC IP address -->
              <!-- UDP: on Windows machines, because of the media sense feature
                   being broken with multicast (even after disabling media sense),
                   set the loopback attribute to true -->
              <UDP mcast_addr="" mcast_port="45566" ip_ttl="64" ip_mcast="true"
                   mcast_send_buf_size="150000" mcast_recv_buf_size="80000"
                   ucast_send_buf_size="150000" ucast_recv_buf_size="80000"
                   loopback="false"/>
              <PING timeout="2000" num_initial_members="3" up_thread="false" down_thread="false"/>
              <MERGE2 min_interval="10000" max_interval="20000"/>
              <FD shun="true" up_thread="true" down_thread="true"/>
              <VERIFY_SUSPECT timeout="1500" up_thread="false" down_thread="false"/>
              <pbcast.NAKACK gc_lag="50" max_xmit_size="8192"
                   retransmit_timeout="600,1200,2400,4800" up_thread="false" down_thread="false"/>
              <UNICAST timeout="600,1200,2400" window_size="100" min_threshold="10" down_thread="false"/>
              <pbcast.STABLE desired_avg_gossip="20000" up_thread="false" down_thread="false"/>
              <FRAG frag_size="8192" down_thread="false" up_thread="false"/>
              <pbcast.GMS join_timeout="5000" join_retry_timeout="2000" shun="true" print_local_addr="true"/>
              <pbcast.STATE_TRANSFER up_thread="false" down_thread="false"/>
             </config>
            </attribute>

            <!-- The max amount of time (in milliseconds) we wait until the
                 initial state (ie. the contents of the cache) is retrieved from
                 existing members in a clustered environment -->
            <attribute name="InitialStateRetrievalTimeout">5000</attribute>

            <!-- Number of milliseconds to wait until all responses for a
                 synchronous call have been received. -->
            <attribute name="SyncReplTimeout">10000</attribute>

            <!-- Max number of milliseconds to wait for a lock acquisition -->
            <attribute name="LockAcquisitionTimeout">15000</attribute>

           </mbean>
          </server>

          • 2. Re: Is Fqn reusable?
            Mircea Markus Master

            Thanks for spotting this! http://jira.jboss.com/jira/browse/JBCACHE-1285 (includes a unit test that reproduces it).
            The root of the problem is an endless loop that appears only with the NONE isolation level (see the JIRA issue for the full description).
            There is no correlation with caching static Fqns.
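            To illustrate the point: Fqn in JBoss Cache is an immutable value object, so holding instances in static fields is safe in itself. A minimal, self-contained sketch of why reusing an immutable path key is harmless (MiniFqn is a hypothetical stand-in for org.jboss.cache.Fqn, not the real class):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for org.jboss.cache.Fqn: an immutable list of path
// segments. Because an instance never changes after construction, a single
// static instance can safely be shared across calls and threads; two Fqns
// built from the same segments are equal and hash identically.
final class MiniFqn {
    private final List<Object> segments;

    MiniFqn(Object name) {
        this.segments = Collections.singletonList(name);
    }

    MiniFqn(MiniFqn parent, Object name) {
        List<Object> s = new ArrayList<Object>(parent.segments);
        s.add(name);
        this.segments = Collections.unmodifiableList(s);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof MiniFqn && segments.equals(((MiniFqn) o).segments);
    }

    @Override
    public int hashCode() {
        return segments.hashCode();
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        for (Object s : segments) sb.append('/').append(s);
        return sb.toString();
    }
}
```

            Built this way, a statically cached `new MiniFqn(new MiniFqn("a"), "b")` is interchangeable with one constructed fresh on every call, which is why the static-Fqn variant in the original post cannot be the cause of the hang.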

            • 3. Re: Is Fqn reusable?
              Adam Warski Master

              Ah, I see, glad to hear that there's an explanation :).

              Is it possible that I was also (sometimes) hitting this problem when using REPEATABLE_READ?


              • 4. Re: Is Fqn reusable?
                Mircea Markus Master

                No, it only has to do with the NONE isolation level.
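                The practical workaround implied by the thread, until JBCACHE-1285 is fixed, is to avoid running with IsolationLevel NONE; for example, in the TreeCache config quoted above, revert the one non-default setting back to the default:

```xml
<!-- Avoid NONE until JBCACHE-1285 is resolved; REPEATABLE_READ is the default -->
<attribute name="IsolationLevel">REPEATABLE_READ</attribute>
```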