13 Replies Latest reply on May 28, 2006 8:12 PM by Brian Stansberry

    Cluster membership with JGroups 2.3 multiplexer


      From an email thread:

      I'm running tests using the JGroups multiplexer in a situation where a
      service isn't present on all nodes in a cluster. For example, assume a
      two node cluster with two clustered application services, A and B. Both
      services use the multiplexer, both services are deployed on Node1, and
      only Service A is deployed on Node2. When Node2 is started, Service A
      and Service B on Node1 are notified that the cluster membership has
      changed. This is certainly true for Service A but it's not really true
      for Service B, since there is no new deployment of Service B in the
      cluster.

      I tested the same scenario with non-multiplexed clusters and in that
      case, Service B didn't receive a membership change notification.

      Is this behavior by design or oversight?

      Response from Bela: "this is by design; the view change is for the
      node, *not* the application."

        • 1. Re: Cluster membership with JGroups 2.3 multiplexer
          Brian Stansberry Master

          This is an area we have to think through carefully as we think about cluster management.

          For example, IIRC, JBoss Cache has an implicit assumption that a cache instance is running on all nodes in the cluster. Thus if it makes an RPC call to the cluster, it expects a response from all nodes; if it doesn't get a response from one it treats that as an error condition.

          With the multiplexer, there is a somewhat higher possibility that not all nodes in the cluster will have a particular service installed. Maybe a much higher possibility, depending on how we do cluster management.

          One of the features of HAPartition is that it deals with this issue for its dependent services -- services register an RPC handler with the HAPartition, and if it receives a remote call for an unregistered service, it returns a NoHandlerForRPC object. The calling node then excludes that node from the response list, shielding the dependent service from having to worry about the missing response.
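
          The register/filter pattern described above can be sketched in plain Java. This is an illustrative model only, not the actual HAPartition API; the class and member names (RpcFilterSketch, NO_HANDLER, dispatch, filterResponses) are all assumed for the example:

```java
import java.util.*;

// Illustrative sketch (names assumed) of HAPartition's behavior: nodes that
// lack a handler for a service answer with a marker object, and the caller
// filters those markers out of the response list.
public class RpcFilterSketch {
    // Marker standing in for the NoHandlerForRPC object.
    static final Object NO_HANDLER = new Object() {
        @Override public String toString() { return "NoHandlerForRPC"; }
    };

    // Per-node registry of service RPC handlers.
    final Map<String, java.util.function.Function<Object, Object>> handlers = new HashMap<>();

    // Receiving side: unknown service -> return the marker instead of failing.
    Object dispatch(String service, Object arg) {
        var h = handlers.get(service);
        return (h == null) ? NO_HANDLER : h.apply(arg);
    }

    // Calling side: drop marker responses so the dependent service never
    // has to worry about nodes where it isn't deployed.
    static List<Object> filterResponses(List<Object> rsps) {
        List<Object> out = new ArrayList<>();
        for (Object r : rsps) if (r != NO_HANDLER) out.add(r);
        return out;
    }
}
```

          The key design point is that the filtering happens once, in the shared layer, so every dependent service gets the "missing node" shielding for free.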

          For services that need to know what other nodes in the cluster have the same service deployed, the DRM maintains a per-service registry.

          • 2. Re: Cluster membership with JGroups 2.3 multiplexer

            I don't think the multiplexer necessarily creates a "higher possibility" that a service will be deployed on a subset of a cluster, but it does handle notifications differently. In a non-multiplexer cluster environment, a node is notified of a cluster membership change if another node sharing the cluster configuration is added or removed. With the multiplexer, this is also true except that the service using the multiplexer may not actually be available on the node being added or removed.

            So how does a service using the multiplexer determine the cluster membership for the service rather than for the multiplexed cluster?

            I don't know if this is a real issue or just a theoretical one but the concept of placing services on a subset of the cluster is similar to the concept of buddy partitioning so it probably has real applications.

            • 3. Re: Cluster membership with JGroups 2.3 multiplexer
              Brian Stansberry Master

              It seems that if the Multiplexer is used, the creation of MuxChannels that use it *must* be homogeneous across the cluster. From Multiplexer.up():

              MuxChannel mux_ch=(MuxChannel)apps.get(hdr.id);
              if(mux_ch == null) {
                  log.error("didn't find an application for id=" + hdr.id + ", discarding message " + msg);
                  return;
              }

              Heterogeneous topologies at the MuxChannel level will not work.

              HAPartition allows mixed topologies of higher-level services that use it, because it creates a single MuxChannel, and then hides the fact that some services may not be registered with the HAPartition on some nodes.

              This has bearing on the usefulness of a Region-based cache for JBC. A question raised a while back was what was the purpose of a Region-based cache when the Multiplexer could allow multiple independent caches using the same underlying channel. The requirement to have homogeneous MuxChannels reduces the attractiveness of having multiple independent caches -- too easy to deploy non-homogeneous caches. Whereas the activate/inactivateRegion API already provides support for non-homogeneous uses of regions in a cache.

              • 4. Re: Cluster membership with JGroups 2.3 multiplexer
                Bela Ban Master

                Okay, so I could simply change the error message to a trace message, and that should be fine. If the Multiplexer doesn't find an application, then it simply discards the request. However, this could also be by mistake...

                • 5. Re: Cluster membership with JGroups 2.3 multiplexer
                  Brian Stansberry Master

                  If the message that was discarded was a synchronous RPC call, the sender will block (until timeout) waiting for a response that will never come.

                  • 6. Re: Cluster membership with JGroups 2.3 multiplexer

                    It seems that a client service should be able to ascertain membership of its service cluster, regardless of whether all multiplexer nodes run the service or not. If this information were available, the service could issue requests to its own members and not have to deal with requests being issued to nodes where the service wasn't running. I think this probably dictates that new semantics be added to JGroups to differentiate between nodes in the service cluster and nodes in the multiplexer cluster.

                    Currently a service can exist on a subset of nodes in a cluster by defining its own JGroups cluster. If the service chooses to utilize the multiplexer for performance reasons, it should still be able to exist reasonably transparently on the same subset while using the multiplexer.

                    • 7. Re: Cluster membership with JGroups 2.3 multiplexer

                      On further thought, it seems that deploying services to subsets of a cluster (Brian, is this what you mean by 'heterogeneous topologies'?) is incompatible with use of the multiplexer.

                      One issue that comes to mind is the coordinator. Unless each multiplexed service has its own coordinator, it's possible that a service deployed on a subset of the cluster won't have a coordinator (e.g., the coordinator is on a node where the service isn't deployed).

                      Assuming this is the case, it seems that we should advertise the multiplexer as targeted for services that are deployed on all nodes. We should also consider enhancing the multiplexer to handle services deployed on a subset of the nodes.

                      • 8. Re: Cluster membership with JGroups 2.3 multiplexer
                        Brian Stansberry Master

                        Yep, 'deploying services to subsets of a cluster' is what I meant by 'heterogeneous topology'. I like your phrasing better :-)

                        Your point about a coordinator is a good one. I'd like to think a bit about exactly why a service would want a coordinator.

                        1) To manage the process of notifying members of the group when instances *of that service* are started/stopped. This would function at the MuxChannel level similarly to what CoordGmsImpl does at the JChannel level.

                        There may be ways to achieve this without a coordinator, i.e. peers notify each other when they start/stop. Services using the mux would also monitor view changes, i.e. if a node is removed from the view it's automatically removed from the list of providers of the service.
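
                        A coordinator-free version of point 1 can be sketched as a small provider registry driven by peer notifications plus view-change pruning. All names here are illustrative, not a real JGroups API:

```java
import java.util.*;

// Illustrative sketch (names assumed): track which members provide a given
// service. Peers announce their own start/stop; viewAccepted() prunes any
// provider that has left the underlying channel view.
public class ServiceMembership {
    private final Set<String> providers = new LinkedHashSet<>();

    // A peer announces that it started the service.
    public void serviceStarted(String node) { providers.add(node); }

    // A peer announces a clean shutdown of the service.
    public void serviceStopped(String node) { providers.remove(node); }

    // Channel-level view change: crashed/removed nodes are dropped
    // automatically, without any explicit serviceStopped() message.
    public void viewAccepted(List<String> newView) {
        providers.retainAll(newView);
    }

    public Set<String> getProviders() { return Collections.unmodifiableSet(providers); }
}
```

                        The view-change pruning is what covers crashed nodes, which by definition never get to send their own "stopped" notification.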

                        2) State transfer. Multiplexer currently throws an IllegalArgumentException if it receives a state transfer request for an unregistered service. If a properly typed exception could propagate back to the state requestor, the state could be requested from the next member of the group, continuing through the list until all members had been contacted. JBossCache currently does something like this (via RPC calls) when it does partial state transfer.
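
                        The retry loop in point 2 is simple to sketch. In this illustrative model (names assumed), a member without the service is represented by a null response, standing in for the properly typed exception:

```java
import java.util.*;
import java.util.function.Function;

// Illustrative sketch (names assumed): request state from members in view
// order, moving to the next member when one cannot supply it. A null return
// stands in for the typed "service not registered here" exception.
public class StateTransferRetry {
    public static byte[] fetchState(List<String> members, Function<String, byte[]> request) {
        for (String m : members) {
            byte[] state = request.apply(m); // ask this member for the service's state
            if (state != null) return state; // first member that has the service wins
        }
        return null; // exhausted the list: no member could supply state
    }
}
```

                        This mirrors what the post says JBossCache already does via RPC for partial state transfer: keep walking the member list instead of failing on the first node that lacks the service.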

                        Anything else we'd want from the coordinator?

                        • 9. Re: Cluster membership with JGroups 2.3 multiplexer

                          I guess the underlying "coordinator" issue is that services may simply access the first member of the membership list to accomplish some task. At least one service currently does this for state transfer (ClusterPartition?). So any service doing this would need to verify that it was actually deployed on the node returned in the membership list.

                          It's possible that user services could rely on a coordinator for some application purpose. In this case, they couldn't readily use the multiplexer on a subset of the cluster. Not sure whether this is a real issue or just a theoretical one.

                          • 10. Re: Cluster membership with JGroups 2.3 multiplexer
                            Brian Stansberry Master

                            The coordinator is really just the oldest living node. If a service has a registry of what nodes have the service deployed, it can use the overall view to impose a consistent ordering of those nodes. Then for that service, the "coordinator" becomes the oldest living node with the service deployed. The trick is having the registry.
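
                            Given such a registry, the per-service coordinator selection reduces to a few lines. A sketch in plain Java (names are illustrative, not a real API):

```java
import java.util.*;

// Illustrative sketch (names assumed): derive a per-service coordinator by
// walking the overall channel view (already ordered oldest-first) and
// picking the first member that appears in the service's provider registry.
public class ServiceCoordinator {
    public static String coordinator(List<String> overallView, Set<String> providers) {
        for (String node : overallView)
            if (providers.contains(node))
                return node; // oldest living node that has the service deployed
        return null;         // service not deployed on any current member
    }
}
```

                            Because every node derives the ordering from the same shared view, all members agree on the service's coordinator without any extra coordination protocol; the hard part, as noted above, is maintaining the provider registry itself.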

                            Subclasses of HAServiceMBeanSupport get this functionality provided for them. In that case the registry is the DRM. JBossCache can't count on the DRM though :(

                            • 11. Re: Cluster membership with JGroups 2.3 multiplexer
                              Bela Ban Master

                              I never had 'heterogeneous clusters' in mind when designing the Multiplexer. The use case I had in mind was that *all* applications were deployed on *all* nodes, but sometimes some of those applications would get redeployed.
                              Now what exactly is the use case for heterogeneous clusters? Note that a webapp is *not* an application; JBossCache, which hosts that webapp, *is*!

                              • 12. Re: Cluster membership with JGroups 2.3 multiplexer
                                Brian Stansberry Master

                                I'll begin here by saying I don't think supporting 'heterogeneous clusters' is a requirement or even a good idea; from a management point of view it adds complexity and confusion, so in some ways the presence of technical reasons not to support it is a good thing.

                                Use case for 'heterogeneous cluster':

                                6 servers, 1-6. 1-3 have some distributable webapps deployed, maybe some clustered SFSBs. 4-6 have some MDBs deployed that are called by 1-3 to do certain resource intensive tasks. All 6 servers want to have access to a common HA-JNDI and HA-JMS service.

                                This could either be done with 1 multiplexed channel that supports heterogeneous deployments, or 3 homogeneous multiplexed channels -- one for 1-3's state replication, one for 4-6's MDB deployment, and one to support HA-JNDI and HA-JMS across all 6 servers. In the latter case each server uses two multiplexed channels.

                                I don't see anything wrong with the latter approach.

                                • 13. Re: Cluster membership with JGroups 2.3 multiplexer
                                  Brian Stansberry Master

                                  Merging a private e-mail thread into this discussion:

                                  For me, the bigger issue is a 'temporary heterogeneous cluster' that will result from the independent lifecycle of the Multiplexer and any RpcDispatcher-based applications that use it.

                                  Say we have 2 nodes A and B in a cluster, each of which has a Multiplexer SAR and a JBC instance. Server A is up and running; server B is coming on line:

                                  With the Multiplexer, the time delay between when 1) JBC A gets a viewAccepted() because the Multiplexer SAR got deployed on B and 2) JBC B gets deployed could be quite long -- seconds, perhaps 10s of seconds. How long depends on what else gets deployed in between. I'm assuming JBC B is *not* deployed as part of the same -service.xml file as the Multiplexer. Throughout those 10s of seconds, replication traffic from JBC A will fail with a timeout because Multiplexer B will not have a MuxChannel for JBC and will ignore the messages. JBC A's RpcDispatcher will wait a bit, then return a received=false in the RspList, which will raise a TimeoutException in the cache.