4 Replies · Latest reply on Oct 23, 2009 10:45 AM by syg6

    How to see contents of each member in a cluster?

    syg6

      I finally got the guiDemo working and it's quite cool. One thing I'd like to be able to see is which entries are physically in which member's cache in a distributed cache. Something like Bela Ban's ReplCache demo. If you open 4 instances, as you add entries you can see them appear in the various members' caches. Very cool.

      Currently the guiDemo only shows you the number of members in the cluster, but not each member's contents. I realize that making this change in the guiDemo is a bit much to ask, so I am not asking for it to be added to the guiDemo; I'd just like to know how to do it.

      I've been playing around all morning with Cache, AdvancedCache, DataContainer, RpcManager, etc. The closest I got was calling:

      cache.getAdvancedCache().getRpcManager().retrieveState() but this didn't work since my RpcManager was null. :(
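
      In case it helps, here is roughly what I am doing, trimmed down to a sketch (the class name and config file name are just made up for illustration; my real config sets up a distributed cache):

        import org.infinispan.Cache;
        import org.infinispan.manager.DefaultCacheManager;
        import org.infinispan.remoting.rpc.RpcManager;

        public class StateDumpAttempt {
            public static void main(String[] args) throws Exception {
                // Config file name is made up; mine sets up a DIST cache.
                DefaultCacheManager cm = new DefaultCacheManager("dist-demo.xml");
                Cache<String, String> cache = cm.getCache();

                // This is where it goes wrong: getRpcManager() returns null,
                // so I never get as far as calling retrieveState() on it.
                RpcManager rpc = cache.getAdvancedCache().getRpcManager();
                System.out.println("RpcManager = " + rpc);

                cm.stop();
            }
        }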

      Thanks!

      Bob

        • 1. Re: How to see contents of each member in a cluster?
          manik


          "syg6" wrote:
          I finally got the guiDemo working and it's quite cool. One thing I'd like to be able to see is which entries are physically in which member's cache in a distributed cache. Something like Bela Ban's ReplCache demo. If you open 4 instances, as you add entries you can see them appear in the various members' caches. Very cool.

          Currently the guiDemo only shows you the number of members in the cluster, but not each member's contents. I realize that making this change in the guiDemo is a bit much to ask, so I am not asking for it to be added to the guiDemo; I'd just like to know how to do it.


          Hmm, that is pretty tough, since it would mean that each node would need a global view of all keys in the system, even keys that are not mapped to the node itself. Not very scalable in terms of memory footprint.

          Is there a particular reason why you need this?

          "syg6" wrote:

          I've been playing around all morning with Cache, AdvancedCache, DataContainer, RpcManager, etc. The closest I got was calling:

          cache.getAdvancedCache().getRpcManager().retrieveState() but this didn't work since my RpcManager was null. :(


          Odd that the RpcManager was null; but either way, retrieveState should be a no-op if you are using distribution. It is a replication-only codepath to deal with state transfers. In DIST this is handled by the DistributionManager, which rebalances data as the consistent hash (CH) remaps keys.

          In the future, what you could do is use the client/server API to issue RPC calls to specific nodes, something like a keySet() to retrieve the locally owned keys on each node.
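
          In the meantime, the locally stored part is easy enough to get at. A minimal sketch, assuming the 4.x embedded API, where keySet() only reflects entries stored on the local node and CACHE_MODE_LOCAL keeps the call from going remote; the per-node RPC plumbing is the piece that is still missing:

            import java.util.Set;

            import org.infinispan.AdvancedCache;
            import org.infinispan.Cache;
            import org.infinispan.container.entries.InternalCacheEntry;
            import org.infinispan.context.Flag;

            public class LocalView {
                // Keys this node currently stores, without touching the rest of
                // the cluster. Each member would run this against its own cache
                // instance and report the result back.
                public static Set<Object> locallyOwnedKeys(Cache<Object, Object> cache) {
                    AdvancedCache<Object, Object> local =
                            cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL);
                    return local.keySet();
                }

                // If you want the values too, the DataContainer holds the entries
                // physically present on this node (including any L1-cached ones).
                public static void dumpLocalEntries(Cache<Object, Object> cache) {
                    for (InternalCacheEntry ice : cache.getAdvancedCache().getDataContainer()) {
                        System.out.println(ice.getKey() + " -> " + ice.getValue());
                    }
                }
            }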

          1 of 1 people found this helpful
          • 2. Re: How to see contents of each member in a cluster?
            syg6


            "manik.surtani@jboss.com" wrote:
            "syg6" wrote:
            I finally got the guiDemo working and it's quite cool. One thing I'd like to be able to see is which entries are physically in which member's cache in a distributed cache. Something like Bela Ban's ReplCache demo. If you open 4 instances, as you add entries you can see them appear in the various members' caches. Very cool.

            Currently the guiDemo only shows you the number of members in the cluster, but not each member's contents. I realize that making this change in the guiDemo is a bit much to ask, so I am not asking for it to be added to the guiDemo; I'd just like to know how to do it.


            Hmm, that is pretty tough, since it would mean that each node would need a global view of all keys in the system, even keys that are not mapped to the node itself. Not very scalable in terms of memory footprint.

            Is there a particular reason why you need this?


            Well, we'd just like to see how well it's distributing the data, that's all. As I mentioned in my first message, Bela Ban's ReplCache has a demo that launches 4 instances and shows you what each one contains. Since Infinispan sits on top of JGroups, I figured it would be possible to do the same thing.

            "manik.surtani@jboss.com" wrote:

            "syg6" wrote:

            I've been playing around all morning with Cache, AdvancedCache, DataContainer, RpcManager, etc. The closest I got was calling:

            cache.getAdvancedCache().getRpcManager().retrieveState() but this didn't work since my RpcManager was null. :(


            Odd that the RpcManager was null; but either way, retrieveState should be a no-op if you are using distribution. It is a replication-only codepath to deal with state transfers. In DIST this is handled by the DistributionManager, which rebalances data as the consistent hash (CH) remaps keys.

            In the future, what you could do is use the client/server API to issue RPC calls to specific nodes, something like a keySet() to retrieve the locally owned keys on each node.


            By 'in the future' do you mean when Infinispan is no longer in beta? Does this API exist now?

            Thanks,
            Bob

            • 3. Re: How to see contents of each member in a cluster?
              manik


              "syg6" wrote:

              Well, we'd just like to see how well it's distributing the data, that's all. As I mentioned in my first message, Bela Ban's ReplCache has a demo that launches 4 instances and shows you what each one contains. Since Infinispan sits on top of JGroups, I figured it would be possible to do the same thing.


              ReplCache was just a proof of concept (POC), not production code. Doing this in a scalable manner is tough, since each node would need to know what the global keyset is, and that won't scale.
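
              What does scale is asking the DistributionManager where a given key maps, since that only needs the consistent hash rather than a global key index. A rough sketch, assuming DIST mode and the 4.x embedded API (getDistributionManager() returns null for non-distributed caches):

                import java.util.List;

                import org.infinispan.Cache;
                import org.infinispan.distribution.DistributionManager;
                import org.infinispan.remoting.transport.Address;

                public class KeyOwners {
                    // Prints which cluster members own each key according to the
                    // consistent hash. No node needs a global view of all keys.
                    public static void printOwners(Cache<String, String> cache, Iterable<String> keys) {
                        DistributionManager dm = cache.getAdvancedCache().getDistributionManager();
                        for (String key : keys) {
                            List<Address> owners = dm.locate(key);
                            System.out.println(key + " -> " + owners);
                        }
                    }
                }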

              "syg6" wrote:

              By 'in the future' do you mean when Infinispan is no longer in beta? Does this API exist now?


              It's slated for 4.1.0. There are a bunch of JIRAs open for this under the 4.1.0 version.


              1 of 1 people found this helpful
              • 4. Re: How to see contents of each member in a cluster?
                syg6

                Fair enough, thanks for the reply!

                Bob