6 Replies Latest reply on Jul 11, 2007 3:11 PM by nhelder

    JBossAS 4.2.0, TCP Stack, Segmenting Partitions

    nhelder

      Hello,

      My situation is this: I would like to run two separate JBoss partitions on the same network, using the TCP stack for cluster communications.

      First possible confusion: to enable the TCP stack, I've been commenting out the UDP stack (in deploy/cluster-service.xml) and uncommenting the TCP stack. Is this the correct approach?

      Second area of confusion: from everything I've read, the usual way to separate partitions is to change the partition name and the mcast port. However, I'm not able to find a spot in the TCP stack configuration where the mcast port can be specified (which may make sense, since the TCP stack doesn't use mcast...?). So, if there's no place to specify the mcast port in the TCP stack, how does one go about segmenting TCP-based partitions?

      Thanks in advance,

      - Nathan

        • 1. Re: JBossAS 4.2.0, TCP Stack, Segmenting Partitions
          quinine

          TCP.start_port

          You will find this wiki useful:

          http://wiki.jboss.org/wiki/Wiki.jsp?page=JGroups

          I wish I had found it earlier.
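
          Concretely (a sketch, not taken from the wiki page itself - host names and ports below are placeholders): two TCP-based partitions stay apart when each one binds to its own start_port and lists only its own members' ports in TCPPING's initial_hosts, e.g. for a second partition on port 7900:

          ```xml
          <!-- Partition B: members bind to 7900 and only probe 7900; a
               partition on 7800 with its own initial_hosts never sees them -->
          <TCP bind_addr="hostA" start_port="7900"
               down_thread="false" up_thread="false"/>
          <TCPPING initial_hosts="hostA[7900],hostB[7900]" port_range="3"
                   timeout="3000" num_initial_members="2"
                   down_thread="false" up_thread="false"/>
          ```

          Changing the partition name as well is still advisable, so that any stray traffic is rejected rather than merged.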

          • 2. Re: JBossAS 4.2.0, TCP Stack, Segmenting Partitions
            nhelder

            Thanks for the tip, but changing the TCP start_port (along with changing the partition name) doesn't seem to do the trick... I still see messages along the lines of:

            2007-07-11 10:44:15,658 WARN [org.jgroups.protocols.UDP] discarded message from different group "DefaultPartition-EntityCache" (our group is "xyz_partition-EntityCache"). Sender was xxx.xxx.12.185:2961

            From this (older) thread http://www.jboss.org/index.html?module=bb&op=viewtopic&p=3840501#3840501 I'm led to believe these WARN messages are legitimate - that the partition I'm creating is not actually unique.


            So, I guess I'm still wondering how to properly segment my partition from the other one... any additional suggestions would be greatly appreciated.
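
            (For reference, I'm setting the partition name at startup rather than editing the files - roughly like the following; the -g switch and the jboss.hapartition.mcast_port property name are taken from the stock 4.2 run.sh and cluster-service.xml, the values are placeholders:)

            ```shell
            # -g sets jboss.partition.name; the mcast port default in
            # cluster-service.xml is ${jboss.hapartition.mcast_port:45566}
            ./run.sh -c all -g xyz_partition \
                -Djboss.hapartition.mcast_port=45567
            ```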

            Thanks,

            - Nathan


            PS. The "other one" in this case is another developer's machine - interestingly, one that isn't using the TCP stack at all, just the default UDP stack. And when I don't specify a different partition name on my machine, the two JBoss servers actually do appear to cluster successfully... which is somewhat surprising to me. I wouldn't have thought a TCP-based server would accept views from a UDP-based server. But anyway.

            PPS. For reference, here's the JGroups section of my cluster-service.xml file:

            <!-- The JGroups protocol configuration -->
            <attribute name="PartitionConfig">
               <!--
               The default UDP stack:
               - If you have a multihomed machine, set the UDP protocol's bind_addr attribute to the
                 appropriate NIC IP address, e.g. bind_addr="192.168.0.2".
               - On Windows machines, because of the media sense feature being broken with multicast
                 (even after disabling media sense) set the UDP protocol's loopback attribute to true.
               <Config>
                  <UDP mcast_addr="${jboss.partition.udpGroup:228.1.2.3}"
                       mcast_port="${jboss.hapartition.mcast_port:45566}"
                       tos="8"
                       ucast_recv_buf_size="20000000"
                       ucast_send_buf_size="640000"
                       mcast_recv_buf_size="25000000"
                       mcast_send_buf_size="640000"
                       loopback="false"
                       discard_incompatible_packets="true"
                       enable_bundling="false"
                       max_bundle_size="64000"
                       max_bundle_timeout="30"
                       use_incoming_packet_handler="true"
                       use_outgoing_packet_handler="false"
                       ip_ttl="${jgroups.udp.ip_ttl:2}"
                       down_thread="false" up_thread="false"/>
                  <PING timeout="2000"
                        down_thread="false" up_thread="false" num_initial_members="3"/>
                  <MERGE2 max_interval="100000"
                          down_thread="false" up_thread="false" min_interval="20000"/>
                  <FD_SOCK down_thread="false" up_thread="false"/>
                  <FD timeout="10000" max_tries="5" down_thread="false" up_thread="false" shun="true"/>
                  <VERIFY_SUSPECT timeout="1500" down_thread="false" up_thread="false"/>
                  <pbcast.NAKACK max_xmit_size="60000"
                                 use_mcast_xmit="false" gc_lag="0"
                                 retransmit_timeout="300,600,1200,2400,4800"
                                 down_thread="false" up_thread="false"
                                 discard_delivered_msgs="true"/>
                  <UNICAST timeout="300,600,1200,2400,3600"
                           down_thread="false" up_thread="false"/>
                  <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                                 down_thread="false" up_thread="false"
                                 max_bytes="400000"/>
                  <pbcast.GMS print_local_addr="true" join_timeout="3000"
                              down_thread="false" up_thread="false"
                              join_retry_timeout="2000" shun="true"
                              view_bundling="true"/>
                  <FRAG2 frag_size="60000" down_thread="false" up_thread="false"/>
                  <pbcast.STATE_TRANSFER down_thread="false" up_thread="false" use_flush="false"/>
               </Config>
               -->

               <!-- Alternate TCP stack: customize it for your environment, change bind_addr and initial_hosts -->
               <Config>
                  <TCP bind_addr="nhelder07l" start_port="7850" loopback="true"
                       tcp_nodelay="true"
                       recv_buf_size="20000000"
                       send_buf_size="640000"
                       discard_incompatible_packets="true"
                       enable_bundling="false"
                       max_bundle_size="64000"
                       max_bundle_timeout="30"
                       use_incoming_packet_handler="true"
                       use_outgoing_packet_handler="false"
                       down_thread="false" up_thread="false"
                       use_send_queues="false"
                       sock_conn_timeout="300"
                       skip_suspected_members="true"/>
                  <TCPPING initial_hosts="nhelder07l[7850]" port_range="3"
                           timeout="3000"
                           down_thread="false" up_thread="false"
                           num_initial_members="1"/>
                  <MERGE2 max_interval="100000"
                          down_thread="false" up_thread="false" min_interval="20000"/>
                  <FD_SOCK down_thread="false" up_thread="false"/>
                  <FD timeout="10000" max_tries="5" down_thread="false" up_thread="false" shun="true"/>
                  <VERIFY_SUSPECT timeout="1500" down_thread="false" up_thread="false"/>
                  <pbcast.NAKACK max_xmit_size="60000"
                                 use_mcast_xmit="false" gc_lag="0"
                                 retransmit_timeout="300,600,1200,2400,4800"
                                 down_thread="false" up_thread="false"
                                 discard_delivered_msgs="true"/>
                  <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                                 down_thread="false" up_thread="false"
                                 max_bytes="400000"/>
                  <pbcast.GMS print_local_addr="true" join_timeout="3000"
                              down_thread="false" up_thread="false"
                              join_retry_timeout="2000" shun="true"
                              view_bundling="true"/>
                  <pbcast.STATE_TRANSFER down_thread="false" up_thread="false" use_flush="false"/>
               </Config>
            </attribute>
            <depends>jboss:service=Naming</depends>
            </mbean>


            • 3. Re: JBossAS 4.2.0, TCP Stack, Segmenting Partitions
              brian.stansberry

              The messages you are seeing are not related to the channel created in cluster-service.xml.

              If you have EJB3 in your AS config, there are typically 4 channels created:

              cluster-service.xml
              tc5-cluster.sar/META-INF/jboss-service.xml (in 4.2 now called jboss-web-cluster.sar)
              ejb3-clustered-sfsbcache-service.xml
              ejb3-entity-cache-service.xml

              The WARN you report is related to the last one. They all need to be properly isolated.
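
               Each of those files derives its group name independently from the partition name. For example, in the stock 4.2 ejb3-entity-cache-service.xml the name is built along these lines (a sketch of the relevant attribute only, matching the "DefaultPartition-EntityCache" group name in your WARN):

               ```xml
               <!-- Group name = partition name + "-EntityCache"; making
                    jboss.partition.name unique per cluster therefore renames
                    all four channels at once -->
               <attribute name="ClusterName">${jboss.partition.name:DefaultPartition}-EntityCache</attribute>
               ```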

              In AS 5, this will be easier, since by default all four of these services will share a single channel.

              • 4. Re: JBossAS 4.2.0, TCP Stack, Segmenting Partitions
                nhelder

                Thanks. For whatever reason, that phrasing sank in and made sense. :-)

                Two things...

                1) I assume that the tc5-cluster.sar has been replaced with jboss-web-cluster.sar in AS 4.2.0?

                2) The ejb3-clustered-sfsbcache-service.xml and ejb3-entity-cache-service.xml files don't contain example TCP stacks. Am I correct in assuming that the TCP stack definition from the cluster-service.xml file can be adapted to these other locations?

                (We have a cluster across subnets with firewalls that filter mcasts, thus the focus on TCP as the communication protocol.)

                Thanks again,

                - Nathan

                • 5. Re: JBossAS 4.2.0, TCP Stack, Segmenting Partitions
                  brian.stansberry

                   

                  "nhelder" wrote:

                  1) I assume that the tc5-cluster.sar has been replaced with jboss-web-cluster.sar in AS 4.2.0?


                  Yes, that's correct. We decided to give it a name that isn't tied to whatever Tomcat version we integrate.

                  "nhelder" wrote:
                  2) The ejb3-clustered-sfsbcache-service.xml and ejb3-entity-cache-service.xml files don't contain example TCP stacks. Am I correct in assuming that the TCP stack definition from the cluster-service.xml file can be adapted to these other locations?


                  I suggest you start with the one from jboss-web-cluster.sar. But if either cache is REPL_SYNC (the entity cache is by default), you can remove the FC protocol.
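
                  With FC dropped, the cache's ClusterConfig would be shaped roughly like the following (an abbreviated sketch - copy the real attribute values from the jboss-web-cluster.sar stack, and treat the host names and ports as placeholders):

                  ```xml
                  <attribute name="ClusterConfig">
                     <config>
                        <TCP bind_addr="node1" start_port="7860"
                             down_thread="false" up_thread="false"/>
                        <TCPPING initial_hosts="node1[7860],node2[7860]" port_range="3"
                                 timeout="3000" num_initial_members="2"
                                 down_thread="false" up_thread="false"/>
                        <MERGE2 max_interval="100000" min_interval="20000"
                                down_thread="false" up_thread="false"/>
                        <FD_SOCK down_thread="false" up_thread="false"/>
                        <VERIFY_SUSPECT timeout="1500" down_thread="false" up_thread="false"/>
                        <pbcast.NAKACK retransmit_timeout="300,600,1200,2400,4800" gc_lag="0"
                                       down_thread="false" up_thread="false"/>
                        <pbcast.STABLE desired_avg_gossip="50000"
                                       down_thread="false" up_thread="false"/>
                        <!-- FC (flow control) omitted: not needed for a REPL_SYNC cache -->
                        <pbcast.GMS join_timeout="3000" join_retry_timeout="2000" shun="true"
                                    down_thread="false" up_thread="false"/>
                        <pbcast.STATE_TRANSFER down_thread="false" up_thread="false"/>
                     </config>
                  </attribute>
                  ```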

                  • 6. Re: JBossAS 4.2.0, TCP Stack, Segmenting Partitions
                    nhelder

                    Great, that worked. Thanks again for all the help.