
    Overview

     

     

    The RiftSaw 3 clustering support will be built on top of the SwitchYard Clustering feature (namely the remote module); it only works with SwitchYard 0.8 and above.

     

    Modules updated to support clustering.

     

    1. Scheduler node name.

     

         In a clustered environment, the Infinispan Address is used as the node name. In a non-clustered environment, the scheduler node name can be configured via the 'bpel.riftsaw.node.name' property in bpel.properties, or via the 'riftsaw.node.name' property in the bpel component section of standalone.xml.
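
         For example, in a non-clustered setup the node name could be set in bpel.properties like this (the property name is the one given above; the value is just an illustration):

             bpel.riftsaw.node.name=riftsaw-node1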

     

    2. Scheduler jobs (stored in the ODE_JOB table).

     

         An Infinispan cluster listener is registered to move the jobs owned by a dead node over to the live nodes.
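
         A minimal sketch of what such a listener could look like using the Infinispan cluster view API; the JobRepository helper and the "first live node" choice are illustrative assumptions, not the actual RiftSaw implementation:

             import java.util.ArrayList;
             import java.util.List;

             import org.infinispan.manager.EmbeddedCacheManager;
             import org.infinispan.notifications.Listener;
             import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
             import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;
             import org.infinispan.remoting.transport.Address;

             @Listener
             public class JobFailoverListener {

                 /** Hypothetical abstraction over the ODE_JOB table. */
                 public interface JobRepository {
                     void reassign(String fromNodeName, String toNodeName);
                 }

                 private final JobRepository jobs;

                 public JobFailoverListener(EmbeddedCacheManager cacheManager, JobRepository jobs) {
                     this.jobs = jobs;
                     cacheManager.addListener(this); // receive cluster view change notifications
                 }

                 @ViewChanged
                 public void onViewChanged(ViewChangedEvent event) {
                     if (event.getNewMembers().isEmpty()) {
                         return; // no surviving node to take the jobs
                     }
                     // Nodes present in the old view but missing from the new one are dead.
                     List<Address> dead = new ArrayList<Address>(event.getOldMembers());
                     dead.removeAll(event.getNewMembers());

                     for (Address deadNode : dead) {
                         // Hand the dead node's jobs to a surviving node (here simply the first one).
                         Address target = event.getNewMembers().get(0);
                         jobs.reassign(deadNode.toString(), target.toString());
                     }
                 }
             }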

     

    3. Cache for the ProcessConf.

     

         The Infinispan cache is used to store the ProcessConf.
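
         A rough sketch of how the ProcessConf could be kept in an Infinispan cache; the cache name and the ProcessConfStore class are assumptions for illustration, not the actual RiftSaw code:

             import javax.xml.namespace.QName;

             import org.apache.ode.bpel.iapi.ProcessConf;
             import org.infinispan.Cache;
             import org.infinispan.manager.EmbeddedCacheManager;

             public class ProcessConfStore {

                 private final Cache<QName, ProcessConf> cache;

                 public ProcessConfStore(EmbeddedCacheManager cacheManager) {
                     // "riftsaw-processconf" is an assumed cache name, for illustration only.
                     this.cache = cacheManager.getCache("riftsaw-processconf");
                 }

                 public void register(ProcessConf conf) {
                     // With a replicated/distributed cache the entry becomes visible on every node.
                     cache.put(conf.getProcessId(), conf);
                 }

                 public ProcessConf lookup(QName processId) {
                     return cache.get(processId);
                 }
             }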

    Test Cases

     

    The following test cases (quickstarts) are used to demonstrate the clustering functionality, such as fail-over; they also serve as a checklist for testing.

     

    1. loan-service, loan-assessor

     

    These two BPEL artifacts are split out of the loan_approval quickstart.

    1) deploy the loan-service on node1.

    2) deploy the loan-assessor on node2.

    3) send a SOAP message to the loan-service on node1; it should invoke the loan-assessor on node2.

     

    2. loan-wait-service (with wait activity), loan-assessor.

     

    1) deploy the loan-wait-service and loan-assessor on both node1 and node2.

    2) send a SOAP message to the loan-wait-service on node1; while the wait activity is executing (it has a 5 minute wait), kill the node1 server.

    3) the wait activity should fail over to node2's loan-wait-service, which then invokes the loan-assessor.

     

    3. process 'retire' functionality in the bpel-console.

    1) deploy the bpel-console on node1 and node2.

    2) deploy a BPEL artifact (like the loan-service) on both node1 and node2.

    3) start both node1 and node2.

    4) log in to node1's bpel-console and retire the process definition.

    5) log in to node2's bpel-console; the process definition should show as 'retired' there as well.

     

    The reason for the 3rd test case is that the ProcessConf is kept in memory once a process is deployed, so we need to make sure that a change made on one node is propagated properly to the other nodes in the cluster.
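
    One way to achieve that propagation is an Infinispan cache listener that refreshes the node-local in-memory state whenever a ProcessConf entry changes. The sketch below is illustrative only; the listener class and the updateLocalRegistry hook are hypothetical, not the actual RiftSaw/bpel-console code:

        import javax.xml.namespace.QName;

        import org.apache.ode.bpel.iapi.ProcessConf;
        import org.apache.ode.bpel.iapi.ProcessState;
        import org.infinispan.Cache;
        import org.infinispan.notifications.Listener;
        import org.infinispan.notifications.cachelistener.annotation.CacheEntryModified;
        import org.infinispan.notifications.cachelistener.event.CacheEntryModifiedEvent;

        @Listener
        public class ProcessConfChangeListener {

            public ProcessConfChangeListener(Cache<QName, ProcessConf> cache) {
                cache.addListener(this); // fires on this node when a replicated entry is applied here
            }

            @CacheEntryModified
            public void onModified(CacheEntryModifiedEvent<QName, ProcessConf> event) {
                if (event.isPre()) {
                    return; // only react once the new value is in place
                }
                ProcessConf conf = event.getValue();
                if (conf != null && conf.getState() == ProcessState.RETIRED) {
                    // Refresh this node's in-memory view so it also sees the process as retired.
                    updateLocalRegistry(event.getKey(), conf);
                }
            }

            private void updateLocalRegistry(QName processId, ProcessConf conf) {
                // hypothetical node-local bookkeeping hook
            }
        }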

     

    Limits

     

    1. It is recommended that you deploy the BPEL artifacts on all nodes, instead of deploying some artifacts on some nodes and the rest on the other nodes.

         This is because, once a node dies, RiftSaw moves all of its jobs over to a random live node; if the same BPEL artifact is not deployed on all nodes, the jobs may end up on a node that does not have the BPEL service they need.

     

    Running RiftSaw in a Cluster

     

    1. You'll need to use a shared database for RiftSaw clustering. Instructions can be found at: https://docs.jboss.org/author/display/SWITCHYARD/Update+BPEL+component+database

    2. Update switchyard.xml to incorporate the remote module and enable SwitchYard clustering. Details can be found in the SwitchYard Clustering documentation.

    3. Start the SwitchYard cluster nodes, for example:

     

    • node1 : "bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node1"
    • node2 : "bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=1000"

     

     

    References

    1. SwitchYard Clustered Registry Architecture.

    2. SwitchYard Clustering documentation.