1 Reply · Latest reply on Dec 4, 2007 12:50 PM by manik

    Multiple JVM distributed cache How To ?

    richiesgr

      Hi

      I've really searched and read, but I can't understand what's wrong with this:

      I have two JVMs on the same machine or on different machines.

      My configuration (from the JBoss sample, without any modification):

      <?xml version="1.0" encoding="UTF-8"?>
      
      <!-- ===================================================================== -->
      <!-- -->
      <!-- Sample TreeCache Service Configuration -->
      <!-- -->
      <!-- ===================================================================== -->
      
      <server>
      
       <classpath codebase="./lib" archives="jboss-cache.jar, jgroups.jar"/>
      
      
       <!-- ==================================================================== -->
       <!-- Defines TreeCache configuration -->
       <!-- ==================================================================== -->
      
       <mbean code="org.jboss.cache.jmx.CacheJmxWrapper"
       name="jboss.cache:service=testTreeCache">
      
       <depends>jboss:service=Naming</depends>
       <depends>jboss:service=TransactionManager</depends>
      
       <!--
       Configure the TransactionManager
       -->
       <attribute name="TransactionManagerLookupClass">org.jboss.cache.transaction.GenericTransactionManagerLookup
       </attribute>
      
      
       <!--
       Node locking level : SERIALIZABLE
       REPEATABLE_READ (default)
       READ_COMMITTED
       READ_UNCOMMITTED
       NONE
       -->
       <attribute name="IsolationLevel">REPEATABLE_READ</attribute>
      
       <!--
       Valid modes are LOCAL
       REPL_ASYNC
       REPL_SYNC
       INVALIDATION_ASYNC
       INVALIDATION_SYNC
       -->
       <attribute name="CacheMode">REPL_SYNC</attribute>
      
       <!-- Name of cluster. Needs to be the same for all TreeCache nodes in a
       cluster in order to find each other.
       -->
       <attribute name="ClusterName">JBossCache-Cluster</attribute>
      
       <!-- Uncomment the next three statements to enable the JGroups multiplexer.
            This configuration is dependent on the JGroups multiplexer being
            registered in an MBean server such as JBossAS. -->
       <!--
       <depends>jgroups.mux:name=Multiplexer</depends>
       <attribute name="MultiplexerService">jgroups.mux:name=Multiplexer</attribute>
       <attribute name="MultiplexerStack">fc-fast-minimalthreads</attribute>
       -->
      
       <!-- JGroups protocol stack properties.
       ClusterConfig isn't used if the multiplexer is enabled and successfully initialized.
       -->
       <attribute name="ClusterConfig">
       <config>
       <UDP mcast_addr="228.10.10.10"
       mcast_port="45588"
       tos="8"
       ucast_recv_buf_size="20000000"
       ucast_send_buf_size="640000"
       mcast_recv_buf_size="25000000"
       mcast_send_buf_size="640000"
       loopback="false"
       discard_incompatible_packets="true"
       max_bundle_size="64000"
       max_bundle_timeout="30"
       use_incoming_packet_handler="true"
       ip_ttl="2"
       enable_bundling="false"
       enable_diagnostics="true"
      
       use_concurrent_stack="true"
      
       thread_naming_pattern="pl"
      
       thread_pool.enabled="true"
       thread_pool.min_threads="1"
       thread_pool.max_threads="25"
       thread_pool.keep_alive_time="30000"
       thread_pool.queue_enabled="true"
       thread_pool.queue_max_size="10"
       thread_pool.rejection_policy="Run"
      
       oob_thread_pool.enabled="true"
       oob_thread_pool.min_threads="1"
       oob_thread_pool.max_threads="4"
       oob_thread_pool.keep_alive_time="10000"
       oob_thread_pool.queue_enabled="true"
       oob_thread_pool.queue_max_size="10"
       oob_thread_pool.rejection_policy="Run"/>
      
       <PING timeout="2000" num_initial_members="3"/>
       <MERGE2 max_interval="30000" min_interval="10000"/>
       <FD_SOCK/>
       <FD timeout="10000" max_tries="5" shun="true"/>
       <VERIFY_SUSPECT timeout="1500"/>
       <pbcast.NAKACK max_xmit_size="60000"
       use_mcast_xmit="false" gc_lag="0"
       retransmit_timeout="300,600,1200,2400,4800"
       discard_delivered_msgs="true"/>
       <UNICAST timeout="300,600,1200,2400,3600"/>
       <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
       max_bytes="400000"/>
       <pbcast.GMS print_local_addr="true" join_timeout="5000"
       join_retry_timeout="2000" shun="false"
       view_bundling="true" view_ack_collection_timeout="5000"/>
       <FRAG2 frag_size="60000"/>
       <pbcast.STREAMING_STATE_TRANSFER use_reading_thread="true"/>
       <!-- <pbcast.STATE_TRANSFER/> -->
       <pbcast.FLUSH timeout="0"/>
       </config>
       </attribute>
      
      
       <!--
       The maximum amount of time (in milliseconds) we wait until the
       state (i.e. the contents of the cache) is retrieved from
       existing members in a clustered environment
       -->
       <attribute name="StateRetrievalTimeout">20000</attribute>
      
       <!--
       Number of milliseconds to wait until all responses for a
       synchronous call have been received.
       -->
       <attribute name="SyncReplTimeout">15000</attribute>
      
       <!-- Max number of milliseconds to wait for a lock acquisition -->
       <attribute name="LockAcquisitionTimeout">10000</attribute>
      
      
       <!-- Buddy Replication config -->
       <attribute name="BuddyReplicationConfig">
       <config>
       <buddyReplicationEnabled>true</buddyReplicationEnabled>
       <!-- these are the default values anyway -->
       <buddyLocatorClass>org.jboss.cache.buddyreplication.NextMemberBuddyLocator</buddyLocatorClass>
       <!-- numBuddies is the number of backup nodes each node maintains. ignoreColocatedBuddies means that
       each node will *try* to select a buddy on a different physical host. If not able to do so though,
       it will fall back to colocated nodes. -->
       <buddyLocatorProperties>
       numBuddies = 1
       ignoreColocatedBuddies = true
       </buddyLocatorProperties>
      
       <!-- A way to specify a preferred replication group. If specified, we try to pick a buddy who shares
       the same pool name (falling back to other buddies if none is available). This allows the sysadmin to hint at
       how backup buddies are picked; for example, nodes may be hinted to pick buddies on a different physical rack
       or power supply for added fault tolerance. -->
       <buddyPoolName>myBuddyPoolReplicationGroup</buddyPoolName>
       <!-- communication timeout for inter-buddy group organisation messages (such as assigning to and removing
       from groups) -->
       <buddyCommunicationTimeout>2000</buddyCommunicationTimeout>
      
       <!-- the following three elements, all relating to data gravitation, default to false -->
       <!-- Should data gravitation be attempted whenever there is a cache miss on finding a node?
       If false, data will only be gravitated if an Option is set enabling it (see the sketch after this config). -->
       <autoDataGravitation>false</autoDataGravitation>
       <!-- removes data on remote caches' trees and backup subtrees when gravitated to a new data owner -->
       <dataGravitationRemoveOnFind>true</dataGravitationRemoveOnFind>
       <!-- search backup subtrees as well for data when gravitating. Results in backup nodes being able to
       answer data gravitation requests. -->
       <dataGravitationSearchBackupTrees>true</dataGravitationSearchBackupTrees>
      
       </config>
       </attribute>
       </mbean>
      
      
      </server>
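
      As the comment in the buddy replication section notes, with autoDataGravitation left at false, data is only gravitated to another node when the calling code enables it per invocation. A minimal sketch of what that could look like, assuming the JBoss Cache 2.x Option API and the same cache/Fqn names as in the client code below:

        // Force data gravitation for the next cache invocation only
        // (assumes org.jboss.cache.config.Option.setForceDataGravitation, as in JBoss Cache 2.x)
        cache.getInvocationContext().getOptionOverrides().setForceDataGravitation(true);
        Node xmlNode = cache.getRoot().getChild(Fqn.fromString("/cluster/12345"));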
      



      My first class puts something in the cache like this:
        // Uses org.jboss.cache.{Cache, CacheFactory, DefaultCacheFactory, Fqn, Node}
        CacheFactory factory = DefaultCacheFactory.getInstance();
        Cache cache = factory.createCache("/home/rgrossman2/workspace/jbossCache/jbossconfig.xml");

        Node rootNode = cache.getRoot();
        Fqn xmlNodeKey = Fqn.fromString("/cluster/12345");

        // "xml" is the String payload built earlier in the class
        Node xmlNode = rootNode.addChild(xmlNodeKey);
        xmlNode.put("data", xml);

        System.out.println("Get data from cache local");
        String cachedXml = (String) xmlNode.get("data");

        System.out.println("Data in cache: " + cachedXml);

        try {
            // keep this JVM alive so the second node has time to read the entry
            Thread.sleep(100000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
      


      The second client tries to get the data from the cache:
        CacheFactory factory = DefaultCacheFactory.getInstance();
        Cache cache = factory.createCache("/home/rgrossman2/workspace/jbossCache/jbossconfig.xml");

        try {
            // give the first node time to do its put
            Thread.sleep(100000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        Node rootNode = cache.getRoot();
        Fqn xmlNodeKey = Fqn.fromString("/cluster/12345");

        // getChild() returns null here, i.e. the data never shows up in this JVM
        Node xmlNode = rootNode.getChild(xmlNodeKey);
        String xml = (String) xmlNode.get("data");
      


      The trace shows me that the cluster is OK, but on client 2 I always get null at: Node xmlNode = rootNode.getChild(xmlNodeKey);

      It seems that the object is not visible to the second JVM.

      I use the same configuration for both clients.

      Thanks for any help


        • 1. Re: Multiple JVM distributed cache How To ?
          manik

          Are you sure the two nodes see each other? What does Cache.getMembers() return on both nodes?

          If they do both see each other, maybe your sleep isn't long enough on the 2nd node? Try using a cache listener and logging something when a put occurs instead.
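
          For example, a minimal sketch of both checks, assuming the JBoss Cache 2.x annotation-based notification API (PutLogger is just an illustrative name):

           import org.jboss.cache.Cache;
           import org.jboss.cache.notifications.annotation.CacheListener;
           import org.jboss.cache.notifications.annotation.NodeModified;
           import org.jboss.cache.notifications.event.NodeModifiedEvent;

           // Logs every node modification, so you can see whether puts from the other JVM arrive here.
           @CacheListener
           public class PutLogger {
               @NodeModified
               public void nodeModified(NodeModifiedEvent event) {
                   System.out.println("Node modified: " + event.getFqn()
                           + " (originLocal=" + event.isOriginLocal() + ")");
               }
           }

           // On both JVMs, right after createCache():
           cache.addCacheListener(new PutLogger());
           System.out.println("Cluster members: " + cache.getMembers());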