
    Building a simple Modeshape cluster

    janpetzold

      I'm trying to build a cluster with ModeShape 4 and a simple web application deployed on Tomcat. I've read that ModeShape does not have a classic master/slave setup and that all nodes are "equal"; however, the JBoss clustering examples do distinguish between master and slave, which is confusing. My general questions:

       

      1. Is it true that all cluster nodes can use the same JGroups configuration?

      2. Can I dynamically add/remove cluster participants at runtime so that they are automatically detected?

       

      I have no experience with Infinispan or JGroups, so it is very likely that something is wrong in my configuration files. What's happening is that I can start one server, but as soon as I start the next one (expecting it to join the cluster) an error appears.

       

      repository-config.json:

       

      {
          "name" : "jpd-repo",
          "jndiName" : "",
          "monitoring" : {
              "enabled" : true
          },
          "storage" : {
              "cacheConfiguration" : "infinispan-config.xml",
              "cacheName" : "persisted_repository",
              "binaryStorage" : {
                  "type" : "file",
                  "directory": "stored",
                  "minimumBinarySizeInBytes" : 40
              }
          },
          "workspaces" : {
              "predefined" : [],
              "default" : "default",
              "allowCreation" : true
          },
          "security" : {
              "anonymous" : {
                  "roles" : ["readonly","readwrite","admin"],
                  "useOnFailedLogin" : false
              },
              "providers" : [
                  { "classname" : "servlet" }
              ]
          }
      }
      
      

       

      infinispan-config.xml:

       

      <?xml version="1.0" encoding="UTF-8"?>
      <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd
                                      urn:infinispan:config:jdbc:6.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-6.0.xsd"
                  xmlns="urn:infinispan:config:6.0">
      
          <global>
              <globalJmxStatistics enabled="false" allowDuplicateDomains="true" />
      
              <!-- Configure clustering -->
              <transport clusterName="jpd-repo-cluster">
                  <properties>
                      <property name="configurationFile" value="jgroups-config.xml" />
                  </properties>
              </transport>
          </global>
      
      
          <namedCache name="persisted_repository">
              <eviction strategy="LIRS" maxEntries="600"/>
      
      
              <!-- Configure a synchronous replication cache -->
              <clustering mode="replication">
                  <stateTransfer fetchInMemoryState="true" timeout="2000" />
                  <sync />
              </clustering>
      
              <locking isolationLevel="READ_COMMITTED" writeSkewCheck="false" lockAcquisitionTimeout="1000" />
      
              <transaction
                      transactionManagerLookupClass="org.modeshape.example.spring.jcr.AtomikosTransactionManagerLookup"
                      transactionMode="TRANSACTIONAL"
                      lockingMode="OPTIMISTIC"/>
      
      
              <persistence passivation="false">
                  <singleFile
                          preload="false"
                          shared="false"
                          fetchPersistentState="true"
                          purgeOnStartup="true"
                          location="storage/repository_${cluster-id}/store">
                  </singleFile>
              </persistence>
          </namedCache>
      </infinispan>
      
      

       

      jgroups-config.xml:

       

      <config xmlns="urn:org:jgroups"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
          <UDP
                  mcast_port="${jgroups.udp.mcast_port:45588}"
                  tos="8"
                  ucast_recv_buf_size="5M"
                  ucast_send_buf_size="640K"
                  mcast_recv_buf_size="5M"
                  mcast_send_buf_size="640K"
                  loopback="true"
                  max_bundle_size="64K"
                  max_bundle_timeout="30"
                  ip_ttl="${jgroups.udp.ip_ttl:8}"
                  enable_diagnostics="true"
                  thread_naming_pattern="cl"
      
      
                  timer_type="new3"
                  timer.min_threads="2"
                  timer.max_threads="4"
                  timer.keep_alive_time="3000"
                  timer.queue_max_size="500"
      
      
                  thread_pool.enabled="true"
                  thread_pool.min_threads="2"
                  thread_pool.max_threads="8"
                  thread_pool.keep_alive_time="5000"
                  thread_pool.queue_enabled="true"
                  thread_pool.queue_max_size="10000"
                  thread_pool.rejection_policy="discard"
      
      
                  oob_thread_pool.enabled="true"
                  oob_thread_pool.min_threads="1"
                  oob_thread_pool.max_threads="8"
                  oob_thread_pool.keep_alive_time="5000"
                  oob_thread_pool.queue_enabled="false"
                  oob_thread_pool.queue_max_size="100"
                  oob_thread_pool.rejection_policy="discard"/>
      
      
          <PING timeout="2000"
                num_initial_members="20"/>
          <MERGE2 max_interval="30000"
                  min_interval="10000"/>
          <FD_SOCK/>
          <FD_ALL/>
          <VERIFY_SUSPECT timeout="1500"  />
          <BARRIER />
          <pbcast.NAKACK2 xmit_interval="500"
                          xmit_table_num_rows="100"
                          xmit_table_msgs_per_row="2000"
                          xmit_table_max_compaction_time="30000"
                          max_msg_batch_size="500"
                          use_mcast_xmit="false"
                          discard_delivered_msgs="true"/>
          <UNICAST3 xmit_interval="500"
                    xmit_table_num_rows="100"
                    xmit_table_msgs_per_row="2000"
                    xmit_table_max_compaction_time="60000"
                    conn_expiry_timeout="0"
                    max_msg_batch_size="500"/>
          <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                         max_bytes="4M"/>
          <pbcast.GMS print_local_addr="true" join_timeout="3000"
                      view_bundling="true"/>
          <UFC max_credits="2M"
               min_threshold="0.4"/>
          <MFC max_credits="2M"
               min_threshold="0.4"/>
          <FRAG2 frag_size="60K"  />
          <RSVP resend_interval="2000" timeout="10000"/>
          <pbcast.STATE_TRANSFER />
          <!-- pbcast.FLUSH  /-->
      </config>
      
      

       

       

      Starting a single node works and no errors are reported. However, once I start a second Tomcat (running the same web app with the same configuration) I see the error

       

      java.lang.IllegalArgumentException: fork-channel with id=modeshape-fork-channel is already present

       

      According to the release notes, this was fixed in ModeShape 4 Beta1, but I'm using the Final version.

       

      Many thanks,

       

      Jan

       

      UPDATE

       

      I never saw any message like the following:

       

      ISPN000094: Received new cluster view:


      I have now switched to WildFly, where everything worked right away.

        • 1. Re: Building a simple Modeshape cluster
          hchiorean

          In 4.x all nodes in the cluster are equal from a configuration perspective. There is no special master/slave configuration required (in 3.x this was not necessarily the case because you could configure indexing in a master/slave fashion).
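
          For instance (a sketch based on your own files, not something ModeShape mandates): since your singleFile store location uses ${cluster-id}, every node can ship the identical repository-config.json, infinispan-config.xml and jgroups-config.xml, and differ only in the system property each Tomcat instance is started with. Hypothetical startup settings, with the node names made up for illustration:

              # node 1 (CATALINA_OPTS is Tomcat's standard JVM-options hook)
              export CATALINA_OPTS="-Dcluster-id=node1"

              # node 2
              export CATALINA_OPTS="-Dcluster-id=node2"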

          JGroups-wise, all nodes should use the same JGroups stack/configuration. In your example, before you get the exception, you should be able to tell whether JGroups is correctly configured by looking at the console output. If it is, you should see messages like:

          INFO ISPN000094: Received new cluster view: [HORIA-LPT-20326|1] (2) [HORIA-LPT-20326, HORIA-LPT-15101]

          From looking at the attached XML, the config seems fine, but whether it works or not is very much a function of the local machine (things like VPNs, network protocols, IPv4/IPv6, etc. can all play a part).
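
          For example, one common stumbling block (an assumption about your environment, not a diagnosis): on machines with both IPv4 and IPv6 stacks, JGroups UDP multicast discovery often fails when the JVM binds to IPv6 by default, so it's worth forcing IPv4 on every node:

              # add to CATALINA_OPTS on every Tomcat instance in the cluster
              export CATALINA_OPTS="$CATALINA_OPTS -Djava.net.preferIPv4Stack=true"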

          If JGroups is configured correctly, you should be able to add/remove nodes at runtime from the cluster.

           

          Regarding the exception: as you correctly point out, it's something that should already have been fixed in 4.0.Final. We've tested clustering in Wildfly (see quickstart/modeshape-clustering at master · ModeShape/quickstart · GitHub) and we also have local (in-memory) unit tests which start multiple clustered repositories. It may be that there is still a bug somewhere, so before opening a new JIRA, please attach the full exception stack trace (ideally the entire server output log) from the node where you're seeing the exception.