
One of the major features introduced in Keycloak 1.1.0.Final is improved clustering capabilities. By clustering Keycloak you can provide high availability and scalability, which is obviously important if you rely on Keycloak to log in to critical applications.


We could have chosen to build this clustering capability from the ground up, but we’re in the business of developing an identity and access management solution, not a clustering solution. A second option could have been the excellent JGroups, but that’s still fairly low-level stuff. Luckily, at JBoss we have no shortage of middleware and we obviously have a solution for this. Unless you’ve been living under a stone for the last few years you should have heard about it: it’s called Infinispan!

Infinispan is a distributed in-memory key/value data grid and cache. It’s a mature project and loaded with features, which fits our needs perfectly.


In Keycloak we have 3 different types of data:

  • Realm and application meta-data
  • Users, credentials and role-mappings
  • User sessions


Each has different needs when it comes to clustering.


Realm and application meta-data is frequently read but, unless your sysadmin really loves reconfiguring things, not so frequently changed. There are also only so many realms and applications in an organization, so the size is limited. This means we can store it all in a database and cache it in memory as it’s accessed. If a change is made it’s written directly to the database. To make sure all nodes retrieve the update we use an invalidation cache. An invalidation cache simply sends a message to the cluster that invalidates an entry in the cache. The next time the data is requested it won’t be in the cache, so the updated version is loaded from the database. Compared to a replicated cache this is beneficial for us, as it doesn’t send sensitive data throughout the cluster and also reduces network traffic.


Users are by default handled the same way as realm and app meta-data. It’s good practice to change your password once in a while, but certainly not every time you log in! So users are also frequently read, but not so frequently changed. There can be a lot more users than realms though, so we set a maximum number of users that are cached. This results in active users being held in memory, while inactive users are purged from memory.


User sessions are very different, as they are frequently updated. Every time a user logs in a new user session is created. We also have a mechanism to expire idle sessions, which means that every time a user session is accessed it’s also updated. If we stored user sessions in a database and used an invalidation cache, performance and scalability wouldn’t be very good. Also, user sessions are not critical data, so they don’t have to be persisted in a database; in the worst-case scenario, if a user session is lost the user simply has to log back in. For user sessions we use a distributed cache instead. This provides good performance, as sessions are not persisted, and good scalability, as sessions are split into segments where each node only holds a subset of the sessions in memory. Finally, if you really need higher availability for user sessions, we recommend configuring each segment to be replicated to more than one node rather than persisting the sessions.
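Put together, the three cache types might be declared along these lines in an Infinispan cache-container. This is only a hedged sketch: the cache names, eviction size and owner count below are illustrative, not Keycloak's actual shipped configuration.

```xml
<cache-container name="keycloak">
    <!-- Invalidation cache: only invalidation messages cross the wire,
         never the realm/application data itself -->
    <invalidation-cache name="realms" mode="SYNC"/>
    <!-- Bounded invalidation cache for users: inactive entries are evicted
         so only active users stay in memory -->
    <invalidation-cache name="users" mode="SYNC">
        <eviction strategy="LRU" max-entries="10000"/>
    </invalidation-cache>
    <!-- Distributed cache for sessions: entries are segmented across nodes;
         owners > 1 keeps a replica of each segment on another node -->
    <distributed-cache name="sessions" mode="SYNC" owners="2"/>
</cache-container>
```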

Everyone wants to do a screencast these days and I've found a pretty good way to do it on Linux, which I thought I'd share.


You'll need to install two packages, RecordMyDesktop and wmctrl. In the world of Linux, installing packages is nice and simple. On Fedora simply run:


su -
yum install gtk-recordmydesktop wmctrl


Or if you're using Ubuntu run:


sudo apt-get install gtk-recordmydesktop wmctrl


The next step is to stop all applications you're not going to use as part of the screencast.


If you're going to use Chrome or Firefox as part of your screencast I'd recommend creating a new blank profile for that purpose. This helps reduce the amount of crud, such as bookmarks, extensions, etc. For Chrome this can be done by running:


google-chrome --user-data-dir=/tmp/chrome-screencast


For Firefox it's a bit more awkward. First start Firefox with:


firefox -P


Then click on Create Profile and click Next a few times...


If I'm using the terminal I also like to remove the hostname and such from the prompt. This can be done by running the following in the terminal:


export PS1=' $ '


Now, start all the applications you're going to use as part of the screencast. This includes Chrome/Firefox, your favourite IDE and quite likely (since you're using a proper operating system) a terminal.


Then you want to use wmctrl to resize all the applications to the same size. This is done by running:


wmctrl -r 'title' -e 0,100,200,1280,800


Replace 'title' with part of the title of your window; for example 'Chrome' works for Chrome. The numbers 100,200,1280,800 are the left, top, width and height. I've found 1280x800 is a good size for videos. Replace 100,200 with values that make sure your applications don't overlap any menus and that place the applications centrally on your screen. The important bit is to use the same setting for all applications.
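If you have several windows to resize, a small loop saves repetition. A sketch, with echo in front so you can check the generated commands before running them for real (the window titles below are examples; substitute parts of your own window titles):

```shell
# Resize every listed window to the same geometry.
# Titles are examples; replace them with fragments of your own window titles.
geometry="0,100,200,1280,800"
for title in "Chrome" "Terminal" "IntelliJ"; do
    echo wmctrl -r "$title" -e "$geometry"   # drop 'echo' to actually resize
done
```

Keeping the geometry in one variable guarantees every window ends up the same size, which is the whole point of the exercise.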


You'll also want to make sure you have a neutral background before you start recording. I tend not to want to include pictures of my children in screencasts.


You're now ready to start recording your screencast. Open RecordMyDesktop. The default settings have worked well for me, so I've just left everything untouched (I personally love things I don't have to tweak). Click on 'Select Window' and choose any of the windows you previously resized. RecordMyDesktop doesn't actually record that specific window; it records that area of the screen.


Now click Record and start doing your thing!


To stop the recording click on the RecordMyDesktop stop icon; in Fedora this is found in the Message Tray, while on Ubuntu it's in the application indicators. After you stop the recording, RecordMyDesktop will encode your screencast. This takes a while, so it's a perfect time for a cup of coffee (or a beer on a Friday).


If you need to trim the video, avconv is nice and simple. The following example will skip the first 10 seconds of input.ogv, keep the following 60 seconds, and save the result as output.ogv:


avconv -i input.ogv -ss 10 -t 60 -vcodec copy -acodec copy output.ogv


Personally I find it's quicker and easier to just redo the screencast, but if you need to do some more editing OpenShot is a great app.


For an example of the results, have a look at:

At the time of writing this post, a WildFly cartridge is not available on OpenShift. However, it's relatively simple to get it running using the DIY cartridge.

If you don't already have an OpenShift account go to and create one now.

Create a new application

The first step is to create a new application on OpenShift using the DIY cartridge. This can be done either through the web console or using the rhc command line tool.

To create the application using the web interface, open and select Add Application. Select the Do-It-Yourself cartridge, insert a name for the application and click on Create Application. On the next page, follow the instructions to clone the Git repository for the application.

Install WildFly

Download WildFly from and extract it into the directory where you cloned the application's Git repository.

To make WildFly run on OpenShift you'll need to do a few minor configuration changes. Open the wildfly-8.0.0.Alpha1/standalone/configuration/standalone.xml in your favourite text editor.

The first thing to do is to set the interfaces to use loopback addresses. Search for <interfaces> and replace inet-address with loopback-address. The result should be:


   <interface name="management">
       <loopback-address value="${}"/>
   </interface>
   <interface name="public">
       <loopback-address value="${jboss.bind.address:}"/>
   </interface>
   <interface name="unsecure">
       <loopback-address value="${jboss.bind.address.unsecure:}"/>
   </interface>
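If you prefer, this edit can be scripted instead of done by hand. A sketch using sed on a sample fragment (the /tmp file path is just for illustration; point sed at your real standalone.xml):

```shell
# Create a sample fragment like the one found in standalone.xml.
cat > /tmp/interfaces.xml <<'EOF'
<interface name="public">
    <inet-address value="${jboss.bind.address:}"/>
</interface>
EOF
# Rewrite every occurrence of the element name in place.
sed -i 's/inet-address/loopback-address/g' /tmp/interfaces.xml
cat /tmp/interfaces.xml
```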


Next, if your application is using the example datasource you may want to configure it to be persisted. Search for <datasources> and, for the ExampleDS datasource entry, replace the value of connection-url with:




Finally, the timeout for a deployment is 60 seconds by default, and if you're using the free gear this may cause deployments to fail. To increase the deployment timeout, search for deployment-scanner and set the deployment-timeout attribute. For a 300 second timeout it would be:


<deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000" deployment-timeout="300"/>
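This edit too can be scripted. A sketch that appends the attribute with sed, assuming the scanner element still has its default contents and no existing deployment-timeout (again, the /tmp path stands in for your real standalone.xml):

```shell
# Sample scanner element as shipped by default.
cat > /tmp/scanner.xml <<'EOF'
<deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000"/>
EOF
# Insert deployment-timeout="300" before the element's closing "/>".
sed -i 's|scan-interval="5000"/>|scan-interval="5000" deployment-timeout="300"/>|' /tmp/scanner.xml
cat /tmp/scanner.xml
```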


Edit start and stop scripts

Open .openshift/action_hooks/start in a text editor and replace the contents with:



ln -s $OPENSHIFT_DATA_DIR $OPENSHIFT_REPO_DIR/wildfly-8.0.0.Alpha1/standalone/data

cd $OPENSHIFT_REPO_DIR/wildfly-8.0.0.Alpha1
nohup bin/ -b $OPENSHIFT_DIY_IP -bmanagement=$OPENSHIFT_DIY_IP > $OPENSHIFT_DIY_DIR/logs/server.log 2>&1 &


Then open .openshift/action_hooks/stop and replace the contents with:



jps | grep jboss-modules.jar | cut -d ' ' -f 1 | xargs kill

exit 0
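To see what the stop hook actually does, here is the same pipeline run against sample jps output: jps prints one "pid name" line per running JVM, grep keeps the WildFly process (whose main jar is jboss-modules.jar), and cut extracts the pid that the real hook hands to kill.

```shell
# Sample jps output: pid followed by the main class or jar name.
sample='8842 jboss-modules.jar
1234 Jps'
# Same filter as the stop hook, minus the final kill.
pid=$(printf '%s\n' "$sample" | grep jboss-modules.jar | cut -d ' ' -f 1)
echo "$pid"   # prints 8842
```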


Push changes to OpenShift

Now it's time to commit and push the changes to OpenShift. Run:


git commit -m "Added WildFly" -a
git push


Once the data has been sent to OpenShift, the application will be restarted and you should have a running instance of WildFly on OpenShift. To deploy your applications to the WildFly instance, simply copy them to standalone/deployments and use git commit/push to upload them to OpenShift.
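The deploy step end to end, sketched in a throwaway repository so it can be tried safely (myapp.war is a placeholder for your real archive, and the final push is commented out since it needs the OpenShift remote configured by the clone step above):

```shell
set -e
# Throwaway repo standing in for your cloned OpenShift application repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p wildfly-8.0.0.Alpha1/standalone/deployments
touch myapp.war   # placeholder for your built application archive
# Copy the archive into the scanner's deployments directory and commit it.
cp myapp.war wildfly-8.0.0.Alpha1/standalone/deployments/
git add -A
git -c user.email=you@example.com -c user.name=you commit -qm "Deploy myapp"
# git push   # uploads to OpenShift and triggers a restart and deployment
```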