1 Reply Latest reply on Nov 30, 2009 3:21 AM by galder.zamarreno

    Profiling Infinispan

    syg6

      I have a SYNC_DIST Cache set up with the default config. I start up 4 instances - A, B, C and D - and do 2 unique puts in each:

      A
      put 1,Joe
      put 2,Bill

      B
      put 3,Ed
      put 4,John

      C
      put 5,Ron
      put 6,Jeff

      D
      put 7,Phil
      put 8,Steve

      When I call ConcurrentMap.size() on each one, A and C report a size of 8 while B and D each report 2. It isn't always A and C - sometimes it's A and B - but it always seems to include A. Weird.

      Weirder still - in the above scenario, where A and C report 8 entries and B and D report 2 each, if I call get(1) on B, its size goes up to 3. If I then call get(2), it goes up to 4. It seems that whenever I call get() on data that isn't physically in that member's Cache, it goes and fetches it and bumps up the size. Normal?

      If I open up VisualVM to see what's going on, I get the same info, looking at CacheDelegate's dataContainer field.

      Is this normal? When Infinispan is running in DIST mode and only has 2 members, is all info replicated to both? And when I have 3, 4 or more members, does it still always replicate all data to 2 members, just to have a copy, and then distribute the rest?
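      To make sure I understand the ownership model, here's a toy sketch of what I think DIST mode does: each entry is stored on numOwners members (2 by default), chosen by a hash of the key. The hash below is made up for illustration - it is not Infinispan's actual consistent hash - but it shows why no single node needs to hold all 8 entries, yet 16 copies exist cluster-wide:

```java
import java.util.*;

public class DistSketch {
    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("A", "B", "C", "D");
        int numOwners = 2; // Infinispan's default for distribution mode
        Map<String, Map<Integer, String>> store = new HashMap<>();
        for (String n : nodes) store.put(n, new HashMap<>());

        String[] names = {"Joe", "Bill", "Ed", "John", "Ron", "Jeff", "Phil", "Steve"};
        for (int key = 1; key <= 8; key++) {
            // pick numOwners consecutive nodes starting from a (fake) hash of the key
            int first = (key * 31) % nodes.size();
            for (int i = 0; i < numOwners; i++) {
                String owner = nodes.get((first + i) % nodes.size());
                store.get(owner).put(key, names[key - 1]);
            }
        }

        int total = 0;
        for (String n : nodes) {
            System.out.println(n + " holds " + store.get(n).size() + " entries");
            total += store.get(n).size();
        }
        // 8 keys x 2 owners = 16 copies cluster-wide; which nodes hold
        // which keys (and so each node's size) depends on the hash
        System.out.println("total copies: " + total);
    }
}
```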

      Or am I looking at the wrong info? Is ConcurrentMap.size() not reliable? And CacheDelegate's dataContainer? I'm curious why the size goes up only after a get. Is this because a remote get is done and the result is put in the member's L1 cache as mentioned in the distributed cache mode design doc?
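      If the L1 guess is right, the behavior I'm seeing would make sense: a get() on a non-owned key does a remote fetch and the result is cached in the requester's local L1, so the local container grows. Here's a toy model of what I think node B is doing (plain maps, not Infinispan's API):

```java
import java.util.*;

public class L1Sketch {
    // entries B owns, and B's L1 cache of remotely-owned entries
    static Map<Integer, String> ownedByB = new HashMap<>(Map.of(3, "Ed", 4, "John"));
    static Map<Integer, String> l1OfB = new HashMap<>();
    // entries owned by other members of the cluster
    static Map<Integer, String> remote = new HashMap<>(Map.of(1, "Joe", 2, "Bill"));

    static String get(int key) {
        if (ownedByB.containsKey(key)) return ownedByB.get(key);
        if (l1OfB.containsKey(key)) return l1OfB.get(key); // L1 hit, no RPC
        String v = remote.get(key);                        // remote get (RPC)
        l1OfB.put(key, v);                                 // cache the result in L1
        return v;
    }

    // size() reports owned entries plus whatever has landed in L1
    static int localSize() { return ownedByB.size() + l1OfB.size(); }

    public static void main(String[] args) {
        System.out.println("size before: " + localSize());
        get(1);
        System.out.println("after get(1): " + localSize());
        get(2);
        System.out.println("after get(2): " + localSize());
    }
}
```

      That would explain exactly the 2 -> 3 -> 4 progression I saw on B.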

      It's not that I don't trust Infinispan. In fact I think it's right up there with sliced bread, and I think (hope!) it's going to make my life much easier. I just wanted to see it 'in action', namely, see it physically distribute data across members, then when members die, watch it redistribute, etc.

      Thanks for any insights ...

      Bob

        • 1. Re: Profiling Infinispan
          galder.zamarreno


          "syg6" wrote:
          Or am I looking at the wrong info? Is ConcurrentMap.size() not reliable? And CacheDelegate's dataContainer? I'm curious why the size goes up only after a get. Is this because a remote get is done and the result is put in the member's L1 cache as mentioned in the distributed cache mode design doc?


          Indeed. Try with l1Enabled="false" and you'll see the difference.
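          For reference, in the 4.0-style XML configuration the L1 switch sits under the clustering element - roughly as below (a sketch; the cache name is made up, and the exact attribute names should be checked against the schema shipped with your version):

```xml
<namedCache name="distributedCache">
   <clustering mode="distribution">
      <!-- disable the L1 "near" cache so remote gets are not stored locally -->
      <l1 enabled="false"/>
      <!-- each entry is kept on numOwners members (default 2) -->
      <hash numOwners="2"/>
   </clustering>
</namedCache>
```

          With L1 off, a get() on a non-owner still fetches the value remotely, but the local data container's size will no longer grow.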