Version 2

    Prerequisites

    We are going to use JBoss AS 7.2 in domain mode, so we will need three network addresses (there could of course be more) to bind our AS7 instances to.

    On a Linux operating system we can create the new addresses like this (you need to be root; ip a a is shorthand for ip addr add):

     

    ip a a 192.168.148.143/16 brd + dev eth0
    ip a a 192.168.148.144/16 brd + dev eth0
    ip a a 192.168.148.140/16 brd + dev eth0
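
    Note that addresses added this way do not survive a reboot. You can verify that they were added with:

    ip addr show dev eth0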
    

     

    Create records in your /etc/hosts to properly map names to the addresses:

     

    192.168.148.143 nic1 nic1.example.com
    192.168.148.144 nic2 nic2.example.com
    192.168.148.140 lb lb.example.com
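
    You can verify that the names resolve correctly, for example:

    getent hosts nic1.example.com
    getent hosts lb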
    

     

    To get load balancing working easily we disable SELinux. This is not meant for production use; make sure your production environment is set up correctly, with SELinux enabled.

     

    setenforce 0
    

     

    Configure your firewall to allow the clustering traffic or, for short, disable it altogether:

     

    service iptables stop
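
    If you would rather keep iptables running, allow the clustering traffic instead. A sketch based on the AS7 defaults (230.0.0.4 is the default JGroups multicast group and 224.0.1.105 the default mod_cluster advertise group; adjust to your setup):

    # allow JGroups cluster traffic (default multicast group)
    /sbin/iptables -I INPUT -p udp -d 230.0.0.4 -j ACCEPT
    # allow mod_cluster advertise messages (default multicast group)
    /sbin/iptables -I INPUT -p udp -d 224.0.1.105 -j ACCEPT
    /sbin/service iptables save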
    

     

    Make sure you have Apache httpd installed on your system.

    For Fedora or RHEL use:

     

    yum install httpd
    

     

    More information can be found at https://docs.jboss.org/author/display/AS71/AS7+Cluster+Howto

     

    Download mod_cluster 1.2.0.Final from http://www.jboss.org/mod_cluster/downloads/1-2-0-Final and make sure you pick the build for your CPU architecture.

    Copy mod_slotmem.so, mod_manager.so, mod_proxy_cluster.so, and mod_advertise.so to the modules directory of the installed Apache httpd (/etc/httpd/modules/).
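
    Copying the files alone is not enough; httpd must also load them. Assuming the stock Fedora/RHEL httpd.conf (which already loads mod_proxy), add LoadModule directives along these lines to /etc/httpd/conf/httpd.conf:

    LoadModule slotmem_module modules/mod_slotmem.so
    LoadModule manager_module modules/mod_manager.so
    LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
    LoadModule advertise_module modules/mod_advertise.so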

     

    Configuration

    Our setup consists of a load balancer using mod_cluster and two AS7 nodes on which the application (idp.war) is deployed.

    We used the current master branch of JBoss AS7 (7.2.0.Alpha1-SNAPSHOT). Download or build your own version of JBoss AS7.

     

    Load balancer configuration

    We already installed the mod_cluster modules in the prerequisites, so let's create a configuration for them.

    Open /etc/httpd/conf/httpd.conf and add the following to the end of the file:

     

    # This Listen port is for the mod_cluster-manager, where you can see the status of mod_cluster.
    # Port 10001 is not a reserved port, so this prevents problems with SELinux.
    Listen lb:10001
    
    <VirtualHost lb:10001>
    
      <Directory />
        Order deny,allow
        Deny from all
        Allow from all
      </Directory>
    
    
      # This directive allows you to view mod_cluster status at URL http://lb.example.com:10001/mod_cluster-manager
      <Location /mod_cluster-manager>
       SetHandler mod_cluster-manager
       Order deny,allow
       Deny from all
       Allow from all
      </Location>
    
    
      EnableMCPMReceive
      KeepAliveTimeout 3600
      MaxKeepAliveRequests 0
    
      ManagerBalancerName other-server-group
      AdvertiseFrequency 5
    
    </VirtualHost>
    

     

    Note that lb is an alias for lb.example.com, created in the prerequisites.

     

    One can also change the listening address of httpd to the load balancer address instead of listening on all interfaces:

    Listen lb:80
    

     

    JBoss AS7 Nodes Setup

    Create two copies of JBoss AS7 in two directories jb1/ and jb2/.

    jb1 will be the home of the master host and jb2 the home of the slave host. Configure them to use the addresses 192.168.148.143 and 192.168.148.144 respectively (in the $JBOSS_HOME/domain/configuration/host.xml file).

     

    Example:

     

    <interfaces>
        <interface name="management">
            <inet-address value="${jboss.bind.address.management:192.168.148.143}"/>
        </interface>
        <interface name="public">
            <inet-address value="${jboss.bind.address:192.168.148.143}"/>
        </interface>
        <interface name="unsecure">
            <inet-address value="${jboss.bind.address.unsecure:192.168.148.143}"/>
        </interface>
    </interfaces>
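
    On the slave (jb2) you also have to give the host a unique name and point its host controller at the master's domain controller. A sketch of the relevant host.xml parts, assuming the default management port (9999) and a management user already added to the master's ManagementRealm (the schema version may differ in your build):

    <host name="slave" xmlns="urn:jboss:domain:1.3">
        ...
        <domain-controller>
            <remote host="192.168.148.143" port="9999" security-realm="ManagementRealm"/>
        </domain-controller>
        ...
    </host>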
    

     

    From the default JBoss configuration we used server-one on the master to run the service provider applications, and server-three on both the master and slave hosts to run idp.war in an HA environment. Don't forget to use the "full-ha" profile and the "full-ha-sockets" socket binding group; see the sketch below.
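
    For reference, a sketch of the corresponding server-group entry in domain.xml (this matches the AS7 defaults, though details may differ slightly between versions; note that the group name is what we set as ManagerBalancerName in httpd.conf):

    <server-group name="other-server-group" profile="full-ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="full-ha-sockets"/>
    </server-group>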

    Once your domain is started on both hosts (./bin/domain.sh) you should see the following text in the console log of the server that started later (usually the slave):

     

    [Server:server-three-slave] 13:30:13,858 INFO  [stdout] (ServerService Thread Pool -- 75) -------------------------------------------------------------------
    [Server:server-three-slave] 13:30:13,859 INFO  [stdout] (ServerService Thread Pool -- 75) GMS: address=slave:server-three-slave/ejb, cluster=ejb, physical address=192.168.148.144:55450
    [Server:server-three-slave] 13:30:13,859 INFO  [stdout] (ServerService Thread Pool -- 75) -------------------------------------------------------------------
    .
    .
    .
    [Server:server-three-slave] 13:30:14,502 INFO  [org.jboss.as.clustering] (MSC service thread 1-8) JBAS010238: Number of cluster members: 2
    [Server:server-three-slave] 13:30:14,508 INFO  [org.jboss.as.clustering] (MSC service thread 1-7) JBAS010238: Number of cluster members: 2
    

     

    Your servers have now formed a cluster.

     

    idp.war changes

    Check out the PicketLink Quickstarts. We have to change the web.xml of idp.war so that it works in a cluster, by adding the <distributable/> element (the necessary changes are already in https://github.com/pskopek/picketlink-quickstarts/tree/clustered-idp).
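
    For illustration, the relevant part of WEB-INF/web.xml would look like this (a sketch; the attributes of your web-app element may differ):

    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
      <!-- tell the container to replicate this application's sessions across the cluster -->
      <distributable/>
      ...
    </web-app>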

    Changes to picketlink.xml:

    We have to modify the identity provider application to send all redirects to the proper IDP URL (<IdentityURL>${idp.url::http://lb.example.com/idp/}</IdentityURL>) and to trust certain domains (example.com in our case):


    <Trust>
        <Domains>example.com,localhost</Domains>
    </Trust>
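
    Taken together, the relevant section of the IDP's picketlink.xml should look roughly like this (a sketch modeled on the SP example below; check the quickstart branch for the exact file):

    <PicketLink xmlns="urn:picketlink:identity-federation:config:2.1">
        <PicketLinkIDP xmlns="urn:picketlink:identity-federation:config:1.0">
            <IdentityURL>${idp.url::http://lb.example.com/idp/}</IdentityURL>
            <Trust>
                <Domains>example.com,localhost</Domains>
            </Trust>
        </PicketLinkIDP>
        ...
    </PicketLink>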
    

     

     

    Service provider application changes

    The SP application requires changes only to its picketlink.xml file. We have to let it know where the identity provider resides and what its own service URL is.

     

    <PicketLink xmlns="urn:picketlink:identity-federation:config:2.1">
        <PicketLinkSP xmlns="urn:picketlink:identity-federation:config:1.0"
                ServerEnvironment="tomcat" BindingType="POST">
            <IdentityURL>${idp.url::http://lb.example.com/idp/}</IdentityURL>
            <ServiceURL>${sales-post.url::http://nic1.example.com:8080/sales-post/}</ServiceURL>
        </PicketLinkSP>
    .
    .
    .
    

     

    Examples can be seen at https://github.com/pskopek/picketlink-quickstarts/tree/clustered-idp.

     

    Now build and deploy the applications to your newly created cluster:

    idp.war goes to server-three on both the master and slave nodes;

    sales-post.war (and any additional service provider applications) goes to server-one on the master node.
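
    Deployment can also be done with the AS7 command line interface against the master's domain controller; for example (server-three belongs to other-server-group in the default configuration, and the path to the archive is yours):

    ./bin/jboss-cli.sh --connect controller=192.168.148.143
    [domain@192.168.148.143:9999 /] deploy /path/to/idp.war --server-groups=other-server-group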

     

    Configuring cluster SSO for nodes running idp.war

    It is very important that clustered SSO is configured for the nodes running idp.war.

    Check the web subsystem configuration we used:

     

    <subsystem xmlns="urn:jboss:domain:web:1.4" default-virtual-server="default-host" native="false">
      <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/>
      <connector name="ajp" protocol="AJP/1.3" scheme="http" socket-binding="ajp"/>
      <virtual-server name="default-host" enable-welcome-root="true">
        <alias name="localhost"/>
        <alias name="example.com"/>
        <sso cache-container="web" cache-name="sso" domain="example.com" reauthenticate="false"/>
      </virtual-server>
    </subsystem>
    

     

    Play and Test

    Start httpd (sudo service httpd start) to have the load balancer ready.
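
    You can check that both nodes have registered with the balancer on the mod_cluster status page we configured earlier:

    curl http://lb.example.com:10001/mod_cluster-manager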

    After a successful deployment you can try your use cases at http://nic1.example.com:8080/sales-post/; shut down and start individual nodes and watch in each node's console log how the requests get processed.

    A good tip is to use the AS7 management console to deploy/undeploy applications and to start/stop servers.