
Both the Express and Flex cartridges are based on the JBossAS7 web profile. Express reduces some of the services enabled by default in the web profile, as I discussed in this post, while Flex actually adds additional services for clustering of web sessions using Infinispan and JGroups.


Web Session Clustering

Web session cache replication uses a TCP-based JGroups stack with a FILE_PING discovery protocol that relies on the Flex underlying clustered file system based on glusterfs. TCP is used because multicast is not usable in the Amazon cloud environment, and FILE_PING is used because the Flex layer is in control of the cluster membership: by writing the cluster membership to the application shared directory, Flex controls which nodes an application sees. The Infinispan and JGroups subsystem configuration fragments are listed here:


        <subsystem xmlns="urn:jboss:domain:infinispan:1.0" default-cache-container="hibernate">
            <cache-container name="hibernate" default-cache="local-query">
                <local-cache name="entity">
                    <eviction strategy="LRU" max-entries="10000"/>
                    <expiration max-idle="100000"/>
                </local-cache>
                <local-cache name="local-query">
                    <eviction strategy="LRU" max-entries="10000"/>
                    <expiration max-idle="100000"/>
                </local-cache>
                <local-cache name="timestamps">
                    <eviction strategy="NONE"/>
                </local-cache>
            </cache-container>
            <!-- web session replication cache definitions -->
            <cache-container name="web" default-cache="repl">
                <transport stack="tcp"/>
                <replicated-cache name="repl" mode="ASYNC" batching="true">
                    <locking isolation="REPEATABLE_READ"/>
                </replicated-cache>
                <distributed-cache name="dist" mode="ASYNC" batching="true">
                    <locking isolation="REPEATABLE_READ"/>
                </distributed-cache>
            </cache-container>
        </subsystem>
        <subsystem xmlns="urn:jboss:domain:jgroups:1.0" default-stack="tcp">
            <stack name="tcp">
                <transport type="TCP" socket-binding="jgroups-tcp" diagnostics-socket-binding="jgroups-diagnostics"/>
                <protocol type="FILE_PING">
                    <property name="location">${}</property>
                    <property name="timeout">5000</property>
                    <property name="num_initial_members">1</property>
                </protocol>
                <protocol type="MERGE2"/>
                <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                <protocol type="FD"/>
                <protocol type="VERIFY_SUSPECT"/>
                <protocol type="BARRIER"/>
                <protocol type="pbcast.NAKACK"/>
                <protocol type="UNICAST"/>
                <protocol type="pbcast.STABLE"/>
                <protocol type="VIEW_SYNC"/>
                <protocol type="pbcast.GMS"/>
                <protocol type="UFC"/>
                <protocol type="MFC"/>
                <protocol type="FRAG2"/>
                <protocol type="pbcast.STREAMING_STATE_TRANSFER"/>
                <protocol type="pbcast.FLUSH"/>
            </stack>
        </subsystem>


The ${} value is an application-specific directory that is shared across the cluster members associated with the application. The Flex environment maintains this distributed directory for each application.
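To illustrate the principle behind FILE_PING-style discovery (this is a sketch of the idea, not JGroups' actual on-disk file format), each member writes its address into the shared directory and reads the others back; whatever is present in that directory is the membership a node sees:

```python
import os

def register(shared_dir, node_name, address):
    """Write this member's address into the shared application directory.
    Sketch only: JGroups' real FILE_PING file layout differs."""
    with open(os.path.join(shared_dir, node_name + ".node"), "w") as f:
        f.write(address)

def discover(shared_dir):
    """Read back every registered member. Because Flex controls what is
    written here, it controls the cluster membership an application sees."""
    members = {}
    for name in os.listdir(shared_dir):
        if name.endswith(".node"):
            with open(os.path.join(shared_dir, name)) as f:
                members[name[:-len(".node")]] = f.read()
    return members
```

This is why no multicast is needed: discovery reduces to reads and writes against the glusterfs-backed shared directory.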


Server Isolation

Both the Flex and Express environments launch a JBossAS7 instance for each logical PaaS application. In the current implementations, the mechanism by which the JBossAS7 server instances are isolated on a given PaaS environment EC2 server instance differs between Flex and Express.


Isolation by Interface

The Express environment assigns each application a unique IP address on the loopback interface. It is a property of Fedora/RHEL that one can bind unique loopback addresses without having to configure a specific alias. When the AS7 cartridge is called to configure an application, it locates the next unused loopback IP address and sets the {ip} placeholder in the following standalone.xml fragment to that address:


<server xmlns="urn:jboss:domain:1.0">
    ...
    <interfaces>
        <interface name="management">
            <loopback-address value="{ip}"/>
        </interface>
        <interface name="public">
            <loopback-address value="{ip}"/>
        </interface>
    </interfaces>
    ...
</server>



Isolation by Ports

In the Flex environment, there is currently no support for assigning the cartridge a unique loopback address, so the Flex cartridge instead determines a unique port offset based on the availability of a free http port. It starts at the default value of 8080 and advances by 100 until a free port is found. The offset is then used as the port-offset value in the socket-binding-group configuration. The offset also has to be applied separately to each of the management-interfaces ports due to issue: ( ). Note that this configuration fragment also shows the use of the eth0 interface by the JGroups socket bindings shown in the JGroups fragment above. The eth0 interface is the usable interface between the instances in the Flex cluster.


<server xmlns="urn:jboss:domain:1.0">
    ...
    <management-interfaces>
        <native-interface interface="management" port="9999"/>
        <http-interface interface="management" port="9990"/>
    </management-interfaces>
    ...
    <interfaces>
        <interface name="management">
            <inet-address value=""/>
        </interface>
        <interface name="public">
            <inet-address value=""/>
        </interface>
        <interface name="eth0">
            <nic name="eth0"/>
        </interface>
    </interfaces>
    <socket-binding-group name="standard-sockets" default-interface="public" port-offset="0">
        ...
        <socket-binding name="jgroups-tcp" port="7600" interface="eth0"/>
        <socket-binding name="jgroups-tcp-fd" port="57600" interface="eth0"/>
        ...
    </socket-binding-group>
    ...
</server>
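The port-offset search the Flex cartridge performs can be sketched as follows (a simplification of the actual bash logic; the function and parameter names are mine):

```python
import socket

def find_port_offset(base=8080, step=100, tries=10):
    """Probe for a free http port starting at base and advancing by step.
    Returns the offset to plug into the socket-binding-group's port-offset
    attribute. Sketch of the cartridge's startup logic, not its real code."""
    for i in range(tries):
        port = base + i * step
        probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            probe.bind(("", port))   # succeeds only if nothing holds the port
            return port - base       # e.g. 0, 100, 200, ...
        except OSError:
            continue                 # in use by another instance; try next slot
        finally:
            probe.close()
    raise RuntimeError("no free port found in %d tries" % tries)
```

As noted above, the resulting offset must also be applied by hand to the management-interfaces ports (9999 and 9990), since the port-offset attribute does not cover them.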




In this post I'm drilling into some of the details of OpenShift from the perspective of the JBossAS7 cartridges that were created for Express and Flex. The basic notion for providing a PaaS container is the cartridge. A cartridge plugs functionality into the PaaS environment and is responsible for handling cartridge callouts known as hooks. The hooks handle the container-specific details of installing/starting/stopping/removing PaaS applications that rely on a given container type. A PaaS application may use more than one cartridge. One example, for the JavaEE applications the JBossAS7 cartridge supports, would be a MySQL cartridge that provides a MySQL database for use by the application.
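Conceptually, a cartridge is a named set of hook implementations that the PaaS framework calls out to at lifecycle points. A minimal sketch of that dispatch model (illustrative only; names and signatures are mine, not OpenShift's actual cartridge API):

```python
HOOKS = {}

def hook(name):
    """Register a function as the implementation of a named cartridge hook."""
    def register(fn):
        HOOKS[name] = fn
        return fn
    return register

@hook("configure")
def configure(app):
    # real cartridges do the heavy lifting here: lay down the server,
    # create the git repository, wire up proxying, etc.
    return "configured %s" % app

@hook("start")
def start(app):
    return "started %s" % app

def run_hook(name, app):
    # a hook a cartridge does not implement is simply unavailable
    if name not in HOOKS:
        raise KeyError("hook %r not implemented by this cartridge" % name)
    return HOOKS[name](app)
```

The framework defines the full set of hook names; a given cartridge only fills in the ones that make sense for its container type, as the Express list below shows.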


Let's look at what the JBossAS7 cartridge does in the two environments.


The Express environment is oriented toward a developer getting their application running. It runs with a more limited JBossAS7 server as I described in JBossAS7 Configuration in OpenShift Express. The focus is on a single node, git repository development model where you update your application in either source or binary form and push it out to the OpenShift environment to have an application running quickly. There is little you can configure in the environment in terms of the server the app runs on, and the only access to the server you have is through the command line tools and the application git repository.


The Express cartridge framework supports the following cartridge hooks. The hooks with descriptions below are the only ones the JBossAS7 cartridge provides an implementation for:


  • add-module
  • configure
    • This is where most of the work is done. It is a bash script which creates an application-local JBossAS7 instance with its standalone/deployments directory mapped to the user's git repository deployments content. It creates the git repository with git hooks to build and restart the server if a source development model is in effect, sets up a control shell script which handles the real work for the start/stop/restart/status hooks, links the log files to where the Express framework picks them up, updates standalone.xml with the loopback address assigned to the application, and installs an httpd configuration to proxy the external application url to the JBossWeb container. This hook also starts the JBossAS7 server.
  • deconfigure
    • Removal of the application and its setup
  • info
  • post-install
  • post-remove
  • pre-install
    • Simply checks that the java-1.6.0-openjdk and httpd rpm packages are installed
  • reload
  • remove-module
  • restart
  • start
    • This is a simple bash script which calls out to the control shell script to start the server. This ends up calling the application's JBossAS7 bin/ to launch the server.
  • status
    • Checks if the server is running and if so, returns the tail of the server.log. If the server is not running, reports that as the status.
  • stop
    • This is a simple bash script which calls out to the control shell script to stop the server.
  • update_namespace


The git repository for the application contains some configuration and scripts that can be updated to control your application deployment on the server. I'll talk about those in a separate blog entry.


The Flex framework is, not surprisingly, much more flexible with respect to what you can control in the PaaS environment. You have control over cluster definitions and other IaaS aspects in addition to your PaaS containers.


  • configure
    • This is a Python class where the initial setup of the application-specific JBossAS7 instance is done. It lays down the JBossAS7 structure. There is also integration with the Flex console configuration wizard, which displays the MySQL datasource fields.
  • deconfigure
    • Removal of the application and its setup
  • post-install
    • Integrates the JBossWeb instrumentation module that allows tracking of web requests by Flex
  • start
    • This is a bash script that finishes some configuration details, like determining which port offset to use. As I described in the post Differences Between the Express and Flex JBossAS7 Configurations, Flex and Express differ in how they isolate the JBossAS7 instances. In Flex, a port offset is determined at startup time based on the existence of other http listening ports. The start script also links the standalone/deployments directory to the application git repository as well as the log file location the Flex console looks to, and installs an httpd configuration to proxy the external application url to the JBossWeb container.
  • stop
    • This calls out to the bin/ script to shutdown the server, using the port offset information to determine how to connect to the server.


Future (Codename TBD)

Right now the Express/Flex environments are based on very different internal infrastructures, and even though they build on the concept of a cartridge, the implementations are different. This is not a good thing for many reasons, not the least of which is that it complicates opening up development to a wider community. To address this, the OpenShift architecture is moving to a public, open source development mode that will be hosted on github under the following Organization:


The new project is called Codename TBD (not really, but it is still being discussed), and the goal is to develop a common cartridge SPI/API and infrastructure to address the current duplication of effort and limitations. To that end, I invite you to browse the existing code and docs in the github organization repositories, as well as the OpenShift Community pages.


We are looking for feedback from both end-user PaaS developers and PaaS container providers. My involvement will be from the perspective of what PaaS notions can be pushed as standards for consideration in JavaEE 7 and 8.

The JBossAS7 OpenShift Express cartridge runs in a constrained environment that restricts which ports can be used as well as how much memory and how many processes the user can run. The current limits set the Java memory at 128MB of max heap and 83MB of permgen, so your applications need to fit within those constraints. Also, the Express user running the application is limited to about 100 processes, which translates to a max of 80 or so Java threads, so excessive thread creation can exhaust the available processes and begin to cause java.lang.OutOfMemoryError failures with an "unable to create new native thread" cause.

The configuration of the JBossAS7 server used by the OpenShift Express JBoss cartridge is a simple modification of the jboss-as-web-7.0.0.Final release, which may be obtained from


The contents of the server are then overwritten by the attached standalone.xml, standalone.conf and mysql.tar archive for the mysql jdbc driver module. The exact steps would be:

  1. wget
  2. unzip
  3. Download this document's attachments
  4. unzip to get standalone.xml
  5. cp standalone.xml  jboss-as-web-7.0.0.Final/standalone/configuration
  6. unzip to get standalone.conf
  7. cp standalone.conf  jboss-as-web-7.0.0.Final/bin
  8. tar -xvf mysql.tar -C jboss-as-web-7.0.0.Final


At this point, starting the server using the jboss-as-web-7.0.0.Final/bin/ command gives a server configured the same as the one run by the Express JBossAS7 cartridge. The services or subsystems available in jboss-as-web-7.0.0.Final that have been removed from the server as used by Express are:

  • JMS
  • Management interfaces/console
  • Webservices
  • OSGi
  • JMX connector
  • Remote EJB access



For reference, the exact difference between the express standalone.xml and the jboss-as-web-7.0.0.Final version is:



[650](ironmaiden:tmp) > diff express-standalone.xml jboss-as-web-7.0.0.Final/standalone/configuration/standalone.xml 
>         <extension module=""/>
>         <extension module=""/>
>     <management>
>          <security-realms>
>               <security-realm name="PropertiesMgmtSecurityRealm">
>                    <authentication>
>                         <properties path="" relative-to="jboss.server.config.dir" />
>                    </authentication>
>               </security-realm>
>          </security-realms>
>         <management-interfaces>
>            <native-interface interface="management" port="9999" />
>            <http-interface interface="management" port="9990"/>
>         </management-interfaces>
>     </management>
<                     <connection-url>jdbc:h2:${}/test;DB_CLOSE_DELAY=-1</connection-url>
>                     <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url>
>         <subsystem xmlns="urn:jboss:domain:jmx:1.0">
>             <jmx-connector registry-binding="jmx-connector-registry" server-binding="jmx-connector-server" />
>         </subsystem>
>         <subsystem xmlns="urn:jboss:domain:osgi:1.0" activation="lazy">
>             <configuration pid="org.apache.felix.webconsole.internal.servlet.OsgiManager">
>                 <property name="manager.root">jboss-osgi</property>
>             </configuration>
>             <properties>
>                 <!--
>                     A comma seperated list of module identifiers. Each system module
>                     is added as a dependency to the OSGi framework module. The packages
>                     from these system modules can be made visible as framework system packages.
>                 -->
>                 <property name="org.jboss.osgi.system.modules">
>                 org.apache.commons.logging,
>                 org.apache.log4j,
>       ,
>                 org.slf4j,
>                 </property>
>                 <!--
>                     Framework environment property identifying extra packages which the system bundle
>                     must export from the current execution environment
>                 -->
>                 <property name="org.osgi.framework.system.packages.extra">
>                 org.apache.commons.logging;version=1.1.1,
>                 org.apache.log4j;version=1.2,
>       ;version=7.0,
>                 org.jboss.osgi.deployment.interceptor;version=1.0,
>                 org.jboss.osgi.spi.capability;version=1.0,
>                 org.jboss.osgi.spi.util;version=1.0,
>                 org.jboss.osgi.testing;version=1.0,
>                 org.jboss.osgi.vfs;version=1.0,
>                 org.slf4j;version=1.5.10,
>                 </property>
>                 <!-- Specifies the beginning start level of the framework -->
>                 <property name="org.osgi.framework.startlevel.beginning">1</property>
>             </properties>
>             <modules>
>                 <!-- modules registered with the OSGi layer on startup -->
>                 <module identifier="javaee.api"/>
>                 <module identifier="org.jboss.logging"/>
>                 <!-- bundles installed on startup -->
>                 <module identifier="org.apache.aries.util"/>
>                 <module identifier="org.jboss.osgi.webconsole"/>
>                 <module identifier="org.osgi.compendium"/>
>                 <!-- bundles started in startlevel 1 -->
>                 <module identifier="org.apache.felix.log" startlevel="1"/>
>                 <module identifier="org.jboss.osgi.logging" startlevel="1"/>
>                 <module identifier="org.apache.felix.configadmin" startlevel="1"/>
>                 <module identifier="" startlevel="1"/>
>                 <!-- bundles started in startlevel 2 -->
>                 <module identifier="org.apache.aries.jmx" startlevel="2"/>
>                 <module identifier="org.apache.felix.eventadmin" startlevel="2"/>
>                 <module identifier="org.apache.felix.metatype" startlevel="2"/>
>                 <module identifier="org.apache.felix.scr" startlevel="2"/>
>                 <module identifier="org.apache.felix.webconsole" startlevel="2"/>
>                 <module identifier="org.jboss.osgi.jmx" startlevel="2"/>
>                 <module identifier="org.jboss.osgi.http" startlevel="2"/>
>                 <!-- bundles started in startlevel 3 -->
>                 <module identifier="org.jboss.osgi.blueprint" startlevel="3"/>
>                 <module identifier="org.jboss.osgi.webapp" startlevel="3"/>
>                 <module identifier="org.jboss.osgi.xerces" startlevel="3"/>
>             </modules>
>         </subsystem>
>         <subsystem xmlns="urn:jboss:domain:remoting:1.0"/>
<             <virtual-server name="default-host" enable-welcome-root="false">
>             <virtual-server name="default-host" enable-welcome-root="true">
>                <alias name="" />
<             <loopback-address value=""/>
>             <inet-address value=""/>
<             <loopback-address value=""/>
>            <inet-address value=""/>
