
We are happy to announce the latest mod_cluster release 1.4.0.Final! To summarize:


  • Users can now leverage expanded support for Tomcat 8.5 and Tomcat 9.0, and support for JDK 9 and 10.
  • Configuration-wise, the Tomcat connector is no longer chosen auto-magically, but must now be specified explicitly (e.g. with connectorPort="8080").
  • Functionally, nodes can now be gradually ramped up, preventing the issue of new nodes being initially overloaded.
  • Moreover, decay rates can now be floating-point values, allowing for finer load control.
  • Implementers of custom load metrics are now provided with a Load SPI module, and the SPI has been expanded so that metrics can mark a node as being in an error state.
  • The distribution zips are all new and easy to use.
  • ...and dozens of bugs squashed.
  • Note that the 1.4.x branch does not bring a new native httpd module and is fully compatible with 1.3.x native releases.


Here is the full changelog:


        Component Upgrade

  • [MODCLUSTER-517] -        Upgrade jboss-parent to 21
  • [MODCLUSTER-520] -        Upgrade jboss-logging/tools to 3.3.0.Final and 2.0.1.Final respectively
  • [MODCLUSTER-560] -        Upgrade mockito to 2.5.2
  • [MODCLUSTER-561] -        Upgrade maven-war-plugin to version compatible with JDK 9-ea+149 and higher
  • [MODCLUSTER-586] -        Upgrade jboss-parent to 22
  • [MODCLUSTER-597] -        Upgrade jboss-parent to 24 (to support jdk-9+175)
  • [MODCLUSTER-598] -        Include building the distribution profile in CI
  • [MODCLUSTER-604] -        Upgrade jboss-logging to 3.3.1.Final
  • [MODCLUSTER-605] -        Upgrade jboss-logging-processor to 2.0.2.Final


        Feature Request

  • [MODCLUSTER-642] -        Support JDK10 build 10+44
  • [MODCLUSTER-449] -        Implement ramp-up when starting new nodes
  • [MODCLUSTER-457] -        Expose Tomcat configuration to explicitly specify a connector to register with the proxy
  • [MODCLUSTER-479] -        Add support for Tomcat 9
  • [MODCLUSTER-493] -        Add support for Tomcat 8.5
  • [MODCLUSTER-531] -        Eliminate automagic
  • [MODCLUSTER-539] -        Support JDK9 build 9-ea+139
  • [MODCLUSTER-562] -        Support configuring SocketFactory for MCMP to integrate with Elytron-provided SSLContext
  • [MODCLUSTER-564] -        Introduce configuration builder API
  • [MODCLUSTER-574] -        Allow custom LoadMetric implementations to put node into error state (load of -1)
  • [MODCLUSTER-575] -        Create a Load SPI module
  • [MODCLUSTER-607] -        Support floating-point numerals for decay factor


        Bug

  • [MODCLUSTER-469] -        Tomcat 8 container integration does not add jvm-route to JSESSIONID when generated by UUIDJvmRouteFactory
  • [MODCLUSTER-639] -        proxy reset requests can allow for other MCMPs to bad proxy
  • [MODCLUSTER-653] -        mod_cluster DefaultMCMPHandler should handle "Connection: close" response header and close a connection
  • [MODCLUSTER-480] -        Tomcat 8 Context#isDistributable implementation needs to consult the underlying context instead of the manager
  • [MODCLUSTER-497] -        ModClusterConfig#setAdvertiseInterface(java.lang.String) should not have been deprecated as it's used by Tomcat modeller
  • [MODCLUSTER-509] -        Excluded contexts which are not specific to a host should be excluded on all hosts
  • [MODCLUSTER-511] - script fails in most cases
  • [MODCLUSTER-529] -        MBeans descriptions are not loaded in JMX for ModClusterListener
  • [MODCLUSTER-566] -        Exclusion list cannot be pre-populated in init()
  • [MODCLUSTER-572] -        Normalize hostnames in MCMP messages
  • [MODCLUSTER-581] -        Session draining with non-positive timeout may wait indefinitely
  • [MODCLUSTER-584] -        httpd Host header validation check with IPv6 address in MCMP requests
  • [MODCLUSTER-585] -        mod_cluster excluded-contexts doesn't exclude slash prefixed /contexts; should perform normalization
  • [MODCLUSTER-596] -        mod_cluster stop/stop-context operations do not send STOP-* when session draining was unsuccessful
  • [MODCLUSTER-612] -        Test DefaultMCMPRequestFactoryTestCase#createConfigRequest fails on Windows OS with other than lowercase hostname


        Task

  • [MODCLUSTER-571] -        Maven generates an empty distribution tarball on first execution
  • [MODCLUSTER-620] -        Travis CI: workaround dropping openjdk6/oraclejdk7 on stable/trusty
  • [MODCLUSTER-628] -        Travis CI: Add oraclejdk9 to the matrix
  • [MODCLUSTER-631] -        Maven pushes only one tomcat distribution archive on deploy
  • [MODCLUSTER-652] -        Documentation 2.0: living docs
  • [MODCLUSTER-439] -        Overhaul distribution profile
  • [MODCLUSTER-482] -        Drop Tomcat 6 and JBoss Web support
  • [MODCLUSTER-485] -        Move CI to Travis
  • [MODCLUSTER-494] -        Split mod_cluster code base into container integration and native repositories
  • [MODCLUSTER-496] -        Drop support for legacy JBoss AS 5/6 versions (sar module)
  • [MODCLUSTER-498] -        Extract Tomcat modeler specific methods from ModClusterConfig into separate Tomcat-specific class
  • [MODCLUSTER-516] -        Move /container-spi module into /container/spi module
  • [MODCLUSTER-518] -        Support JDK9 build 9-ea (June 2016)
  • [MODCLUSTER-519] -        Keep Tomcat dependency versions on latest released versions
  • [MODCLUSTER-559] -        Support JDK9 build 9-ea+160
  • [MODCLUSTER-599] -        Update maven-assembly-plugin goal to 'single'
  • [MODCLUSTER-611] -        Add maven-checkstyle-plugin to the Maven build
  • [MODCLUSTER-614] -        Disable "Unable to locate Source XRef to link to" warnings in Maven build
  • [MODCLUSTER-615] -        Fix compiler warnings in demo compilation
  • [MODCLUSTER-616] -        Fix Javadoc issues
  • [MODCLUSTER-617] -        Drop legacy "site-mod_cluster" scripting

Celebrating a public holiday in the middle of the week finally gave me some extra time to play around with OpenShift.


If you are just starting with Arquillian to test persistence, this article might come in handy:


Use Arquillian to test directly in OpenShift

You can use Arquillian to test your application directly in OpenShift. Let's go through the common issues and how to get around them.


You need to treat OpenShift's AS instance as a remote container. In your project's pom.xml, add a dependency on jboss-as-arquillian-container-remote:




(just let the parent pom's dependency management take care of the version)
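A minimal sketch of what that dependency entry might look like, assuming the version is supplied by the parent pom's dependency management (the scope shown is illustrative):

```xml
<!-- Remote container adapter for JBoss AS 7 / EAP 6.
     No <version> element: the parent pom's dependencyManagement supplies it. -->
<dependency>
    <groupId>org.jboss.as</groupId>
    <artifactId>jboss-as-arquillian-container-remote</artifactId>
    <scope>test</scope>
</dependency>
```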


If you try to run this configuration while the rhc client tool's port forwarding is running, you will run into the following exception (pasted here in case anyone is googling for it):


May 08, 2013 3:53:24 PM$1 call
WARNING: Exception encountered during export of archive
org.jboss.shrinkwrap.api.exporter.ArchiveExportException: Failed to write asset to output: /junit/extensions/TestDecorator.class
          at org.jboss.shrinkwrap.impl.base.exporter.StreamExporterDelegateBase$3.handle(
          at org.jboss.shrinkwrap.impl.base.exporter.StreamExporterDelegateBase.processNode(
          at org.jboss.shrinkwrap.impl.base.exporter.AbstractExporterDelegate.processNode(
          at org.jboss.shrinkwrap.impl.base.exporter.AbstractExporterDelegate.processNode(
          at org.jboss.shrinkwrap.impl.base.exporter.AbstractExporterDelegate.processNode(
          at org.jboss.shrinkwrap.impl.base.exporter.AbstractExporterDelegate.doExport(
          at org.jboss.shrinkwrap.impl.base.exporter.StreamExporterDelegateBase.access$001(
          at org.jboss.shrinkwrap.impl.base.exporter.StreamExporterDelegateBase$
          at org.jboss.shrinkwrap.impl.base.exporter.StreamExporterDelegateBase$
          at java.util.concurrent.FutureTask$Sync.innerRun(
          at java.util.concurrent.ThreadPoolExecutor.runWorker(
          at java.util.concurrent.ThreadPoolExecutor$
Caused by: Pipe closed
          at org.jboss.shrinkwrap.impl.base.exporter.StreamExporterDelegateBase$2.execute(
          at org.jboss.shrinkwrap.impl.base.exporter.StreamExporterDelegateBase$2.execute(
          ... 15 more


This is because ARQ failed to authenticate with the server. You will need to add a management user.


Creating a management user

Unfortunately, your instance in OpenShift does not come with the necessary scripting ( script). So let's run it locally and add a user:


[rhusar@x220 jboss-eap-6.0]$ ./bin/ 


and feed it your user details. This will generate a password hash, which you then need to feed into your OpenShift container's configuration:


[rhusar@x220 jboss-eap-6.0]$ cat standalone/configuration/ | grep admin
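For reference, the hash stored for the management realm is just the hex-encoded MD5 of `user:realm:password`, so you can also compute it yourself. The credentials below are placeholders for illustration, not values from this article:

```shell
# The properties file stores entries as user=HEX(MD5("user:realm:password")).
# "admin" / "s3cret" are placeholder credentials; the default realm on
# AS 7 / EAP 6 is "ManagementRealm".
hash=$(printf '%s' 'admin:ManagementRealm:s3cret' | md5sum | cut -d ' ' -f1)
echo "admin=$hash"
```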


Now you can SSH into your OpenShift instance and add the generated user to the server's file (there is probably a better way to do this via the git repository). You will find instructions on how to SSH in the web console:


Afterwards, you will need to restart the server: click the restart icon in the console and wait a few minutes.


Update arquillian.xml

Now, configure arquillian.xml in your project to pick up these credentials:


    <!-- Remote on OpenShift with port-forwarding -->
    <container qualifier="openshift-eap6" default="true">
            <property name="managementAddress"></property>
            <property name="managementPort">9999</property>
            <property name="username">admin</property>
            <property name="password">mL8oA4sctCe2epf</property>
    </container>


Now run your tests (mvn test) and voila!


[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 45.172s
[INFO] Finished at: Wed May 08 15:52:43 CEST 2013
[INFO] Final Memory: 33M/332M
[INFO] ------------------------------------------------------------------------


Err, that was not so fast!

Oops, 45 seconds for a sample application with a single deployment? That doesn't seem useful in all cases.


What if we could test only with the database in OpenShift?

So what if you want to test your persistence layer against the database running in OpenShift, without necessarily running the tests in OpenShift's container? The client tooling makes this feasible.


First, start the port forwarding client:

[rhusar@x220 myapplication]$ rhc port-forward myapplication
Checking available ports ... done
Forwarding ports ...

To connect to a service running on OpenShift, use the Local address 

Service Local               OpenShift
------- -------------- ---- ----------------
java  =>
java  =>
java  =>
java  =>
java  =>
java  =>
java  =>
mysqld  =>

Press CTRL-C to terminate port forwarding


This is pretty neat! The rhc client forwards all ports, which is useful in most cases, but if you want to run your local container on the same ports, you will run into:


The server is already running! Managed containers does not support connecting to running server instances due to the possible 
harmful effect of connecting to the wrong server. Please stop server before running or change to another type of container.(..)


Since the rhc client does not support forwarding only selected ports, we will need to switch our local managed instance to different ports. Just tell the configuration to offset the ports by a constant number, like this:


    <!-- Managed -->
    <container qualifier="local-eap6-managed" default="true">
            <property name="jbossHome">target/jboss-eap-6.0</property>

            <!-- Do not overlap ports with port forward -->
            <property name="javaVmArguments">-Djboss.socket.binding.port-offset=100</property>
            <property name="managementPort">10099</property>
    </container>


Now, the same test, including starting and stopping the AS instance, took only a few seconds:


[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14.033s
[INFO] Finished at: Wed May 08 16:20:36 CEST 2013
[INFO] Final Memory: 30M/343M
[INFO] ------------------------------------------------------------------------


Also, make sure that you bundle a persistence.xml or data source pointing at the port-forwarded host and port.
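A sketch of what such a datasource could look like in the local server's standalone.xml. Everything here is hypothetical: the JNDI name, credentials, and the host/port must match the local side of your rhc port forward and your OpenShift MySQL cartridge:

```xml
<!-- Hypothetical datasource for the local AS 7 / EAP 6 instance.
     Host and port are the *local* end of the rhc port forward;
     credentials come from your OpenShift MySQL cartridge. -->
<datasource jndi-name="java:jboss/datasources/TestDS" pool-name="TestDS" enabled="true">
    <connection-url>jdbc:mysql://</connection-url>
    <driver>mysql</driver>
    <security>
        <user-name>admin</user-name>
        <password>changeme</password>
    </security>
</datasource>
```

The persistence unit in your test deployment would then reference java:jboss/datasources/TestDS.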


Happy persistence testing!

Buenos días,


We just got back from the clustering team meeting. Intense like every year and packed with great talks and discussions.


Here, I just want to list a few interesting features that will be coming in future versions of AS, all of which welcome community contributions (just let us know what you want to hack on!):


  • non-blocking state transfer (look at docs)
  • cluster management improvements (see Jira)
  • HA Singleton deployments (see pull request)
  • x-site replication (see some JavaDocs)
  • mod_cluster discovery protocol
  • mod_cluster management operations
  • better remote EJB support




Namaste नमस्ते everyone!


As I am sitting at the last airport (Dubai!), I am happy to report on mod_cluster's journey all the way to India! To back up a little bit, when the JUDCon 2012 call for papers was announced, I thought to myself that India needed to see something about clustering. Moreover, with mod_cluster now nicely bundled in JBoss AS7, I thought it would fit wonderfully. As I meet a lot of people from India in the community, I knew it would come in handy for many developers.


My session was on Tuesday morning and the attendance was astonishing! We went through the main mod_cluster concepts, explaining where mod_jk and other naive balancers just fall short. The main benefits presented were simplified configuration, better load balancing and fine-grained application lifecycle control. We also had a look at rolling upgrades without an outage, and briefly at how to turn large flat clusters into smaller ones for better manageability and what else that gives us. For better understanding, I showcased a load-balancing demo -- as many have been asking, the demo is bundled with the distribution :). Everything was automatically (or rather automagically) configured using UDP multicast autodiscovery for balancers and nodes; we saw balancing happen and an example of how to roll an upgrade using a session-draining strategy.


The feedback was very positive (I remember one of the attendees: "Sir, thank you, best presentation on JUDCon!"; needless to say, there were many great talks!). Actually, I didn't get to see any other sessions after that, because there was so much community interaction and lots of other questions, also on general clustering in AS7. Thinking about this, next time separate general and advanced clustering talks would definitely benefit many developers over here.


From what I have seen, Ales showed how to do CDI on CapeDwarf. The session was pretty advanced but well received by the developers. The AS7 track by Bruno and Dimitris gave devs a very good start. Greg gave an amazing presentation on JPA and Hibernate best practices -- if you are still a little lost in the world of persistence, I definitely encourage you to see his presentation. Dimitris gave a very inspiring presentation on open source citizenship. Manik and Galder together gave several interesting presentations on different aspects of Infinispan (which, as we know, is the core of AS7 clustering). Galder even talked about "polyglot"; go check it out.


After a demanding JUDCon, Galder and I went for a little visit to Delhi to regain energy and freshness for more hard work!


Thanks again to everyone who came and made India:2012 the biggest JUDCon to date! It was a great experience. See you soon, India!