6 Replies Latest reply on Mar 7, 2008 6:41 AM by Manik Surtani

    Is v1.4.1 completely compatible with v1.4.0

    hang liu Newbie

      I used v1.4.0.GA in my project. The project is running in the customer's environment.

      Now I have a fatal problem: caches A and B run in separate JVMs. When A's JVM shuts down, cache B's member list still contains cache A, and this can last a very long time. I think it is a bug in JBossCache or JGroups.

      What can I do? Should I upgrade my JBossCache to a higher version? If so, which version should I upgrade to? I am having a lot of trouble; thanks to anybody who can help me.

        • 1. Re: Is v1.4.1 completely compatible with v1.4.0
          Mircea Markus Master

          What is the configuration in which this appears?
          (JBossCache version, cache config file)

          • 2. Re: Is v1.4.1 completely compatible with v1.4.0
            hang liu Newbie

            Following is the config file; thanks, mircea.markus.

            <?xml version="1.0" encoding="UTF-8"?>

            <!-- ===================================================================== -->
            <!--                                                                       -->
            <!--  Sample TreeCache Service Configuration                               -->
            <!--                                                                       -->
            <!-- ===================================================================== -->

            <server>

               <!-- ==================================================================== -->
               <!-- Defines TreeCache configuration                                      -->
               <!-- ==================================================================== -->

               <mbean code="org.jboss.cache.TreeCache" name="jboss.cache:service=TreeCache">

                  <depends>jboss:service=Naming</depends>
                  <depends>jboss:service=TransactionManager</depends>

                  <attribute name="TransactionManagerLookupClass">org.jboss.cache.DummyTransactionManagerLookup</attribute>
                  <attribute name="IsolationLevel">READ_UNCOMMITTED</attribute>
                  <attribute name="CacheMode">INVALIDATION_ASYNC</attribute>
                  <attribute name="ClusterName">TestCluster_WY_2664</attribute>

                  <attribute name="ClusterConfig">
                     <config>
                        <UDP mcast_addr="228.1.3.5" mcast_port="45577"
                             ip_ttl="64" ip_mcast="true"
                             mcast_send_buf_size="150000" mcast_recv_buf_size="80000"
                             ucast_send_buf_size="150000" ucast_recv_buf_size="80000"
                             loopback="true"/>
                        <PING timeout="2000" num_initial_members="3"
                              up_thread="false" down_thread="false"/>
                        <MERGE2 min_interval="10000" max_interval="20000"/>
                        <FD shun="true" up_thread="true" down_thread="true"/>
                        <VERIFY_SUSPECT timeout="1500"
                                        up_thread="false" down_thread="false"/>
                        <pbcast.NAKACK gc_lag="50" retransmit_timeout="600,1200,2400,4800"
                                       up_thread="false" down_thread="false"/>
                        <pbcast.STABLE desired_avg_gossip="20000"
                                       up_thread="false" down_thread="false"/>
                        <UNICAST timeout="600,1200,2400" window_size="100" min_threshold="10"
                                 down_thread="false"/>
                        <FRAG frag_size="8192"
                              down_thread="false" up_thread="false"/>
                        <pbcast.GMS join_timeout="5000" join_retry_timeout="2000"
                                    shun="true" print_local_addr="true"/>
                        <pbcast.STATE_TRANSFER up_thread="false" down_thread="false"/>
                     </config>
                  </attribute>

                  <attribute name="InitialStateRetrievalTimeout">20000</attribute>

                  <attribute name="SyncReplTimeout">15000</attribute>

                  <!-- Max number of milliseconds to wait for a lock acquisition -->
                  <attribute name="LockAcquisitionTimeout">10000</attribute>

                  <!-- Name of the eviction policy class. -->
                  <attribute name="EvictionPolicyClass">org.jboss.cache.eviction.LRUPolicy</attribute>

                  <!-- Specific eviction policy configurations. This is LRU -->
                  <attribute name="EvictionPolicyConfig">
                     <config>
                        <attribute name="wakeUpIntervalSeconds">5</attribute>
                        <!-- Cache wide default -->
                        <region name="/_default_">
                           <attribute name="maxNodes">10000</attribute>
                           <attribute name="timeToLiveSeconds">0</attribute>
                        </region>
                     </config>
                  </attribute>

                  <attribute name="CacheLoaderConfiguration">
                     <config>
                        <!-- if passivation is true, only the first cache loader is used; the rest are ignored -->
                        <passivation>false</passivation>
                        <!-- comma delimited FQNs to preload -->
                        <preload>/</preload>
                        <!-- are the cache loaders shared in a cluster? -->
                        <shared>false</shared>

                        <!-- we can now have multiple cache loaders, which get chained -->
                        <!-- the 'cacheloader' element may be repeated -->
                        <cacheloader>
                           <class>com.primeton.eos.wf.service.instpool.treecache.optimize.EOSCacheLoaderOptimize</class>

                           <!-- same as the old CacheLoaderConfig attribute -->
                           <properties>
                              <!--
                              cache.jdbc.driver=com.mysql.jdbc.Driver
                              cache.jdbc.url=jdbc:mysql://localhost:3306/jbossdb
                              cache.jdbc.user=root
                              cache.jdbc.password=
                              -->
                           </properties>

                           <!-- whether the cache loader writes are asynchronous -->
                           <async>false</async>

                           <!-- only one cache loader in the chain may set fetchPersistentState to true.
                                An exception is thrown if more than one cache loader sets this to true. -->
                           <fetchPersistentState>true</fetchPersistentState>

                           <!-- determines whether this cache loader ignores writes - defaults to false. -->
                           <ignoreModifications>false</ignoreModifications>

                           <!-- if set to true, purges the contents of this cache loader when the cache starts up.
                                Defaults to false. -->
                           <purgeOnStartup>false</purgeOnStartup>
                        </cacheloader>
                     </config>
                  </attribute>

               </mbean>
            </server>


            • 3. Re: Is v1.4.1 completely compatible with v1.4.0
              hang liu Newbie

              I am sorry, the XML tags in the config content were stripped when the post was shown on the page.


              Now I have another idea. Can I delete a member from one cache's member view (i.e., remove a member that has essentially died)? If so, how can I do it?

              • 4. Re: Is v1.4.1 completely compatible with v1.4.0
                Mircea Markus Master

                 

                Can I delete a member from one cache's member view (i.e., remove a member that has essentially died)? If so, how can I do it?

                That's not the way to go. It is JGroups's failure detection protocol (FD) that should manage member removal/addition. I suggest you read and tune the Failure Detection section in the JGroups manual.
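
                For illustration, one direction often suggested for this kind of tuning (a sketch only; the attribute values are examples, not recommendations, and should be checked against the manual for your JGroups version) is to put FD_SOCK ahead of a timeout-based FD, since FD_SOCK notices an abruptly closed TCP socket almost immediately when a JVM dies, while FD with a bounded timeout and max_tries catches members that are hung but still connected:

```xml
<!-- Sketch: FD_SOCK detects crashed/killed JVMs via socket closure;
     FD handles hung-but-connected members; VERIFY_SUSPECT double-checks
     a suspicion before the member is excluded from the view. -->
<FD_SOCK/>
<FD timeout="2500" max_tries="3" shun="true"
    up_thread="false" down_thread="false"/>
<VERIFY_SUSPECT timeout="1500"
                up_thread="false" down_thread="false"/>
```

                With only a timeout-based FD, a dead member stays in the view until the heartbeat timeouts expire, which matches the long delay described in the original post.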

                • 5. Re: Is v1.4.1 completely compatible with v1.4.0
                  hang liu Newbie

                  I read some articles about FD, but I could not find a configuration that completely avoids this problem.
                  I tried to cope with it by modifying the member list and by reconnecting the channel, but I soon found that neither works.
                  Finally, I solved the problem by recreating the channel. If anybody has good ideas about FD configuration, please reply to this post; I would really appreciate it. The next version of our product may take this solution.
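
                  The decision logic behind this last workaround can be sketched as follows. This is only an illustrative sketch: the class name and methods are hypothetical, members are modeled as plain strings rather than JGroups addresses, and the actual teardown/rebuild is left to the caller (real code would read the view from the JGroups channel and restart the TreeCache service).

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper for the workaround described above: if a member that
// is believed dead is still present in the cache's view after a grace
// period, give up on failure detection and signal that the channel should
// be torn down and recreated.
public class StaleViewWatchdog {
    private final long gracePeriodMs;
    // member name -> timestamp (ms) when it was first suspected dead
    private final Map<String, Long> suspectedSince = new HashMap<>();

    public StaleViewWatchdog(long gracePeriodMs) {
        this.gracePeriodMs = gracePeriodMs;
    }

    /** Record that `member` is believed dead as of time `now` (ms). */
    public void suspect(String member, long now) {
        suspectedSince.putIfAbsent(member, now);
    }

    /**
     * Returns true if any suspected member is still listed in `view` after
     * the grace period has elapsed, i.e. failure detection appears stuck
     * and the caller should recreate the channel.
     */
    public boolean shouldRecreateChannel(List<String> view, long now) {
        for (Map.Entry<String, Long> e : suspectedSince.entrySet()) {
            boolean stillInView = view.contains(e.getKey());
            boolean graceExpired = now - e.getValue() >= gracePeriodMs;
            if (stillInView && graceExpired) {
                return true;
            }
        }
        return false;
    }
}
```

                  A caller would invoke `shouldRecreateChannel` periodically and, when it returns true, close and reconnect the channel, which forces a fresh view without the dead member.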