5 Replies Latest reply on Oct 28, 2009 6:34 AM by Robert Bowen

    A few (hopefully) quick questions

    Robert Bowen Newbie

      I had a few questions about Infinispan.

      1. Would it be possible to set up a Cluster with x nodes, where x > 2, where the first 2 nodes would be 'primary' nodes, replicated, and the rest distributed? I am pretty familiar with JGroups and I don't think this is possible - all members are created equal, first one alive is the coordinator. But I wanted to be sure.

      2. Would it be possible to have some data replicated and other data distributed, within the same Cluster? Something like, data with a certain key is replicated, all other data distributed.

      3. When a member drops out of a distributed (or replicated) cluster, when it comes back online it needs to get the current data from one of the other members (the coordinator). But while that is happening the coordinator keeps receiving requests. So when the newly-alive member finishes getting data from the coordinator there will be some data that has changed while the transfer was taking place - new data, modified data, deleted data. How does Infinispan handle this? Does it block the coordinator, or the whole Cluster, while a data transfer is going on?

      Thanks! Great work, btw, Infinispan rocks.


        • 1. Re: A few (hopefully) quick questions
          Manik Surtani Master


          1. No.

          2. No. But you can create 2 separate caches using the same cache manager. This means they will share the same JGroups channel and networking layer, but the data organization on top of that could be different. Have a look at CacheManagerXmlConfigurationTest in Infinispan's core module for an example.
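          To make that concrete, here is a rough sketch of what such an XML configuration might look like (cache names are hypothetical, and the exact schema depends on your Infinispan version - CacheManagerXmlConfigurationTest in the core module is the authoritative reference):

```xml
<infinispan>
   <global>
      <!-- one shared transport: both caches reuse the same JGroups channel -->
      <transport clusterName="demoCluster"/>
   </global>

   <!-- this cache keeps a full copy of its data on every node -->
   <namedCache name="replicatedCache">
      <clustering mode="replication"/>
   </namedCache>

   <!-- this cache spreads its data across the cluster -->
   <namedCache name="distributedCache">
      <clustering mode="distribution">
         <hash numOwners="2"/>
      </clustering>
   </namedCache>
</infinispan>
```

          Both caches would then be obtained from the same cache manager, e.g. manager.getCache("replicatedCache") and manager.getCache("distributedCache").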

          3. In both cases, it uses a non-blocking, chunked state transfer approach so that the existing cluster is not blocked (or is only blocked for a minimal amount of time). For more details, see http://jbosscache.blogspot.com/2009/03/jboss-cache-310beta1.html


          • 2. Re: A few (hopefully) quick questions
            Robert Bowen Newbie

            Very cool, thanks for the reply. I had already been turned on to NBST by Bela Ban some time ago, good to know it's being used in Infinispan.

            We'll look into creating different Caches - one replicated, the other distributed - with one Manager; that sounds like the way to go for us.

            Thanks again.


            • 3. Re: A few (hopefully) quick questions
              Robert Bowen Newbie

              Is there a lot of overhead in creating a bunch of different caches? I ask because we have a structure like this:

              ConcurrentHashMap<String,CallPojo> allCalls

              Inside CallPojo we have another ConcurrentHashMap:

              ConcurrentHashMap<String,CallDocPojo> allCallDocs

              So allCalls contains n Calls, each one with m Docs.

              All Docs for a given Call have the same expiration time. But that expiration time is different for each Call. So it seems we need to use an Infinispan Cache inside of each CallDocPojo. Which means we'll have n Infinispan Caches.

              Currently we have about 100 Calls and within each Call hundreds of Docs. Both of these numbers need to scale.

              Is it wise to use 100+ Infinispan Caches?


              • 4. Re: A few (hopefully) quick questions
                Manik Surtani Master

                Hmm, that could get expensive. Perhaps your best bet is to create a key that is a combination of call_id and document_id?
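                A minimal sketch of such a composite key in plain Java (the class and field names are hypothetical): an immutable key with proper equals/hashCode so cache lookups work, and Serializable so it can be shipped between nodes.

```java
// Hypothetical composite key: identifies one Doc within one Call,
// letting all Docs live in a single cache instead of one cache per Call.
final class CallDocKey implements java.io.Serializable {
    private final String callId;
    private final String docId;

    CallDocKey(String callId, String docId) {
        this.callId = callId;
        this.docId = docId;
    }

    String getCallId() { return callId; }
    String getDocId() { return docId; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CallDocKey)) return false;
        CallDocKey k = (CallDocKey) o;
        return callId.equals(k.callId) && docId.equals(k.docId);
    }

    @Override
    public int hashCode() {
        // combine both fields so keys hash evenly across the cluster
        return 31 * callId.hashCode() + docId.hashCode();
    }
}
```

                Two keys built from the same ids compare equal, so a get with a freshly constructed key finds the entry a different node put in.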

                • 5. Re: A few (hopefully) quick questions
                  Robert Bowen Newbie

                  Yea, I thought it might be.

                  We're looking at concatenating the idCall onto the idDoc, and having one gigantic Cache, with Docs for all Calls, instead of n Caches, one for each Call.

                  This means we won't be able to define the expiration time at the Cache level; we'll have to do it each time we stick something in the Cache. But that's no big deal.
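                  The concatenation approach can be sketched in plain Java (names are hypothetical). Using a delimiter that can appear in neither id avoids collisions such as ("ab", "c") versus ("a", "bc"); the per-entry expiration would then go through Infinispan's put overload that takes a lifespan and a TimeUnit, shown here only as a comment.

```java
final class CallDocKeys {
    /** Builds the combined key; '|' is assumed to appear in neither id. */
    static String key(String callId, String docId) {
        return callId + '|' + docId;
    }
}

// Usage sketch against an Infinispan cache (lifespanForCall is hypothetical):
//   cache.put(CallDocKeys.key(idCall, idDoc), doc,
//             lifespanForCall(idCall), TimeUnit.SECONDS);
```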

                  Thanks for your advice.