7 Replies Latest reply on Sep 22, 2005 12:49 AM by Sreenath Venkataramanappa

    Load balancing

    gunjan shrivastava Newbie

      Hi all,
      can anyone help me on this issue
      I have two jboss4.0.2 instances running on two different machines with ip 10.1.1.131 and 10.1.1.69.
      I am using mod_jk_1.2 load balancer and Apache webserver.
      I have deployed my ear on node1 which is 10.1.1.131 and its successfully deployed on node2 also.
      I have got my EJBs clustered by setting true and defaultpartition tag in jboss.xml.
      when i run my application seperately on both machines everything is working fine.
      Both machines are recognising each other.
      Can anyone please tell me how can i send a request to one unique ip address which is then load balanced to different machines using the load balancer.
      I am using the following url on node1 and node2 machines
      http://10.1.1.131:8080/xlWebApp
      http://10.1.1.69:38080/xlWebApp

      I will be very very thankful if anyone can suggest me.
      I am new to jboss clustering so please help me.
      My workers.properties file is
      # Define list of workers that will be used
      # for mapping requests
      worker.list=loadbalancer,status
      # Define Node1
      worker.node1.port=8009
      worker.node1.host=10.1.1.131
      worker.node1.type=ajp13
      worker.node1.lbfactor=1
      #worker.node1.local_worker=1 (1)
      worker.node1.cachesize=10

      # Define Node2
      worker.node2.port=8009
      worker.node2.host=10.1.1.69
      worker.node2.type=ajp13
      worker.node2.lbfactor=1
      #worker.node2.local_worker=1 (1)
      worker.node2.cachesize=10

      # Load-balancing behaviour
      worker.loadbalancer.type=lb
      worker.loadbalancer.balance_workers=node1, node2
      worker.loadbalancer.sticky_session=1
      worker.loadbalancer.local_worker_only=1
      worker.list=loadbalancer

      # Status worker for managing load balancer
      worker.status.type=status

      My mod-jk_conf file is
      # Load mod_jk module
      # Specify the filename of the mod_jk lib
      LoadModule jk_module modules/mod_jk.so

      # Where to find workers.properties
      JkWorkersFile conf/workers.properties

      # Where to put jk logs
      JkLogFile logs/mod_jk.log

      # Set the jk log level [debug/error/info]
      JkLogLevel info

      # Select the log format
      JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

      # JkOptions indicates to send SSK KEY SIZE
      JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories

      # JkRequestLogFormat
      JkRequestLogFormat "%w %V %T"

      # Mount your applications
      JkMount /application/* loadbalancer

      # You can use external file for mount points.
      # It will be checked for updates each 60 seconds.
      # The format of the file is: /url=worker
      # /examples/*=loadbalancer
      JkMountFile conf/uriworkermap.properties

      # Add shared memory.
      # This directive is present with 1.2.10 and
      # later versions of mod_jk, and is needed for
      # for load balancing to work properly
      JkShmFile logs/jk.shm

      # Add jkstatus for managing runtime data
      <Location /jkstatus/>
      JkMount status
      Order deny,allow
      Deny from all
      Allow from all



      My server.xml is
      Server>

      <!-- Use a custom version of StandardService that allows the
      connectors to be started independent of the normal lifecycle
      start to allow web apps to be deployed before starting the
      connectors.
      -->


      <!-- A HTTP/1.1 Connector on port 8080 -->


      <!-- A AJP 1.3 Connector on port 8009 -->


      <!-- SSL/TLS Connector configuration using the admin devl guide keystore

      -->



      <!-- The JAAS based authentication and authorization realm implementation
      that is compatible with the jboss 3.2.x realm implementation.
      - certificatePrincipal : the class name of the
      org.jboss.security.auth.certs.CertificatePrincipal impl
      used for mapping X509[] cert chains to a Princpal.
      -->

      <!-- A subclass of JBossSecurityMgrRealm that uses the authentication
      behavior of JBossSecurityMgrRealm, but overrides the authorization
      checks to use JACC permissions with the current java.security.Policy
      to determine authorized access.

      -->



      <!-- Uncomment to enable request dumper. This Valve "logs interesting
      contents from the specified Request (before processing) and the
      corresponding Response (after processing). It is especially useful
      in debugging problems related to headers and cookies."
      -->
      <!--

      -->

      <!-- Access logger -->
      <!--

      -->

      <!-- Uncomment to enable single sign-on across web apps
      deployed to this host. Does not provide SSO across a cluster.

      If this valve is used, do not use the JBoss ClusteredSingleSignOn
      valve shown below.
      -->
      <!--

      -->

      <!-- Uncomment to enable single sign-on across web apps
      deployed to this host AND to all other hosts in the cluster
      with the same virtual hostname.

      If this valve is used, do not use the standard Tomcat SingleSignOn
      valve shown above.

      This valve uses JGroups to communicate across the cluster. The
      JGroups Channel used for this communication can be configured
      by editing the "sso-channel.xml" file found in the same folder
      as this file. If this valve is running on a machine with multiple
      IP addresses, configuring the "bind_addr" property of the JGroups
      UDP protocol may be necessary. Another possible configuration
      change would be to enable encryption of intra-cluster communications.
      See the sso-channel.xml file for more details.

      Besides the attributes supported by the standard Tomcat
      SingleSignOn valve (see the Tomcat docs), this version also supports
      the following attribute:

      partitionName the name of the cluster partition in which
      this node participates. If not set, the default
      value is "sso-partition/" + the value of the
      "name" attribute of the Host element that
      encloses this element (e.g. "sso-partition/localhost")
      -->
      <!--

      -->

      <!-- Uncomment to check for unclosed connections and transaction terminated checks
      in servlets/jsps.
      Important: You need to uncomment the dependency on the CachedConnectionManager
      in META-INF/jboss-service.xml

      -->









      my cluster-service.xml is
      <?xml version="1.0" encoding="UTF-8"?>

      <!-- ===================================================================== -->
      <!-- -->
      <!-- Sample Clustering Service Configuration -->
      <!-- -->
      <!-- ===================================================================== -->





      <!-- ==================================================================== -->
      <!-- Cluster Partition: defines cluster -->
      <!-- ==================================================================== -->



      <!-- Name of the partition being built -->
      ${jboss.partition.name:DefaultPartition}

      <!-- The address used to determine the node name -->
      ${jboss.bind.address}

      <!-- Determine if deadlock detection is enabled -->
      False

      <!-- Max time (in ms) to wait for state transfer to complete. Increase for large states -->
      30000

      <!-- The JGroups protocol configuration -->

      <!--
      The default UDP stack:
      - If you have a multihomed machine, set the UDP protocol's bind_addr attribute to the
      appropriate NIC IP address, e.g bind_addr="192.168.0.2".
      - On Windows machines, because of the media sense feature being broken with multicast
      (even after disabling media sense) set the UDP protocol's loopback attribute to true
      -->

      <UDP mcast_addr="228.1.2.3" mcast_port="45566"
      ip_ttl="8" ip_mcast="true"
      mcast_send_buf_size="800000" mcast_recv_buf_size="150000"
      ucast_send_buf_size="800000" ucast_recv_buf_size="150000"
      loopback="true" bind_addr="10.1.1.131"/>
      <PING timeout="2000" num_initial_members="3"
      up_thread="true" down_thread="true"/>
      <MERGE2 min_interval="10000" max_interval="20000"/>
      <FD shun="true" up_thread="true" down_thread="true"
      timeout="2500" max_tries="5"/>
      <VERIFY_SUSPECT timeout="3000" num_msgs="3"
      up_thread="true" down_thread="true"/>
      <pbcast.NAKACK gc_lag="50" retransmit_timeout="300,600,1200,2400,4800"
      max_xmit_size="8192"
      up_thread="true" down_thread="true"/>
      <UNICAST timeout="300,600,1200,2400,4800" window_size="100" min_threshold="10"
      down_thread="true"/>
      <pbcast.STABLE desired_avg_gossip="20000"
      up_thread="true" down_thread="true"/>
      <FRAG frag_size="8192"
      down_thread="true" up_thread="true"/>
      <pbcast.GMS join_timeout="5000" join_retry_timeout="2000"
      shun="true" print_local_addr="true"/>
      <pbcast.STATE_TRANSFER up_thread="true" down_thread="true"/>


      <!-- Alternate TCP stack: customize it for your environment, change bind_addr and initial_hosts -->
      <!--

      <TCP bind_addr="thishost" start_port="7800" loopback="true"/>
      <TCPPING initial_hosts="thishost[7800],otherhost[7800]" port_range="3" timeout="3500"
      num_initial_members="3" up_thread="true" down_thread="true"/>
      <MERGE2 min_interval="5000" max_interval="10000"/>
      <FD shun="true" timeout="2500" max_tries="5" up_thread="true" down_thread="true" />
      <VERIFY_SUSPECT timeout="1500" down_thread="false" up_thread="false" />
      <pbcast.NAKACK down_thread="true" up_thread="true" gc_lag="100"
      retransmit_timeout="3000"/>
      <pbcast.STABLE desired_avg_gossip="20000" down_thread="false" up_thread="false" />
      <pbcast.GMS join_timeout="5000" join_retry_timeout="2000" shun="false"
      print_local_addr="true" down_thread="true" up_thread="true"/>
      <pbcast.STATE_TRANSFER up_thread="true" down_thread="true"/>

      -->




      <!-- ==================================================================== -->
      <!-- HA Session State Service for SFSB -->
      <!-- ==================================================================== -->


      jboss:service=${jboss.partition.name:DefaultPartition}
      <!-- Name of the partition to which the service is linked -->
      ${jboss.partition.name:DefaultPartition}
      <!-- JNDI name under which the service is bound -->
      /HASessionState/Default
      <!-- Max delay before cleaning unreclaimed state.
      Defaults to 30*60*1000 => 30 minutes -->
      0


      <!-- ==================================================================== -->
      <!-- HA JNDI -->
      <!-- ==================================================================== -->


      jboss:service=${jboss.partition.name:DefaultPartition}
      <!-- Name of the partition to which the service is linked -->
      ${jboss.partition.name:DefaultPartition}
      <!-- Bind address of bootstrap and HA-JNDI RMI endpoints -->
      ${jboss.bind.address}
      <!-- Port on which the HA-JNDI stub is made available -->
      1100
      <!-- Accept backlog of the bootstrap socket -->
      50
      <!-- The thread pool service used to control the bootstrap and
      auto discovery lookups -->
      <depends optional-attribute-name="LookupPool"
      proxy-type="attribute">jboss.system:service=ThreadPool

      <!-- A flag to disable the auto discovery via multicast -->
      false
      <!-- Set the auto-discovery bootstrap multicast bind address. If not
      specified and a BindAddress is specified, the BindAddress will be used. -->
      ${jboss.bind.address}
      <!-- Multicast Address and group port used for auto-discovery -->
      230.0.0.4
      1102
      <!-- The TTL (time-to-live) for autodiscovery IP multicast packets -->
      16

      <!-- RmiPort to be used by the HA-JNDI service once bound. 0 => auto. -->
      0
      <!-- Client socket factory to be used for client-server
      RMI invocations during JNDI queries
      custom
      -->
      <!-- Server socket factory to be used for client-server
      RMI invocations during JNDI queries
      custom
      -->



      ${jboss.bind.address}
      <!--
      0
      custom
      custom
      -->


      <!-- the JRMPInvokerHA creates a thread per request. This implementation uses a pool of threads -->

      1
      300
      300
      60000
      ${jboss.bind.address}
      4446
      ${jboss.bind.address}
      0
      false
      <depends optional-attribute-name="TransactionManagerService">jboss:service=TransactionManager


      <!-- ==================================================================== -->

      <!-- ==================================================================== -->
      <!-- Distributed cache invalidation -->
      <!-- ==================================================================== -->


      jboss:service=${jboss.partition.name:DefaultPartition}
      jboss.cache:service=InvalidationManager
      jboss.cache:service=InvalidationManager
      ${jboss.partition.name:DefaultPartition}
      DefaultJGBridge





      Thanks and Regards,
      Gunjan
      Thanks
      gunjan